Sample records for "provide optimized sensitivity"

  1. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two objective values of winding products, a mechanical performance (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. A verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing.

  2. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two objective values of winding products, a mechanical performance (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. A verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing. PMID:29385048

  3. CORSSTOL: Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity

    NASA Technical Reports Server (NTRS)

    Finckenor, J.; Bevill, M.

    1995-01-01

    Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity (CORSSTOL) is a design optimization program incorporating a method to examine the effects of user-provided manufacturing tolerances on weight and failure. CORSSTOL gives designers a tool to determine tolerances based on need, and a systematic way to choose the best design among several manufacturing methods with differing capabilities and costs. CORSSTOL first optimizes a stringer-stiffened cylinder for weight without tolerances: the skin and stringer geometry are varied, subject to stress and buckling constraints. The same analysis and optimization routines are then used to minimize the maximum-material-condition weight subject to the least favorable combination of tolerances. The adjusted optimum dimensions are provided along with the weight and constraint sensitivities of each design variable, so the designer can immediately identify critical tolerances. The safety of parts made out of tolerance can also be determined. During design and development of weight-critical systems, design/analysis tools that provide product-oriented results are of vital significance. The development of this program and methodology provides designers with an effective cost- and weight-saving design tool. The tolerance sensitivity method can be applied to any system defined by a set of deterministic equations.
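The maximum-material-condition check described in the abstract can be sketched as a scan over tolerance corners of a deterministic weight/stress model. The geometry, load, and allowable below are hypothetical placeholders, not CORSSTOL's actual equations:

```python
import itertools

# Hypothetical stiffened-cylinder sizing check (illustrative numbers only).
SIGMA_ALLOW = 300.0   # allowable stress, MPa
LOAD = 600.0          # axial line load, N/mm
RHO = 2.7e-6          # density, kg/mm^3

def weight(skin_t, stringer_a, length=1000.0, circumference=3000.0, n_stringers=40):
    """Mass of skin plus stringers for the given dimensions (mm, mm^2)."""
    return RHO * (skin_t * circumference * length + n_stringers * stringer_a * length)

def stress(skin_t, stringer_a, circumference=3000.0, n_stringers=40):
    """Smeared axial stress: line load divided by effective thickness."""
    t_eff = skin_t + n_stringers * stringer_a / circumference
    return LOAD / t_eff

def worst_case(skin_t, stringer_a, tol_t, tol_a):
    """Scan every tolerance corner: the maximum weight occurs at maximum
    material condition, the maximum stress at the thinnest combination."""
    w_max, s_max = 0.0, 0.0
    for dt, da in itertools.product((-tol_t, tol_t), (-tol_a, tol_a)):
        w_max = max(w_max, weight(skin_t + dt, stringer_a + da))
        s_max = max(s_max, stress(skin_t + dt, stringer_a + da))
    return w_max, s_max

nominal_w = weight(2.0, 60.0)
nominal_s = stress(2.0, 60.0)
w_mmc, s_worst = worst_case(2.0, 60.0, tol_t=0.1, tol_a=2.0)
```

Comparing `w_mmc` and `s_worst` against the nominal values shows how much margin the chosen tolerances consume, which is the quantity a designer would trade against manufacturing cost.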

  4. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measure of the safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
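The MPP search the abstract refers to can be illustrated with the classic Hasofer-Lind style iteration in standard normal space; the limit-state function below is an illustrative stand-in, and the FORM estimate Phi(-beta) follows from the minimum distance beta:

```python
import math

def grad(g, u, h=1e-6):
    """Central-difference gradient of the limit-state function g at u."""
    out = []
    for i in range(len(u)):
        up = list(u); up[i] += h
        dn = list(u); dn[i] -= h
        out.append((g(up) - g(dn)) / (2.0 * h))
    return out

def mpp_search(g, n, iters=50, tol=1e-10):
    """Iteratively project onto the linearized limit state g(u) = 0 to find
    the point closest to the origin in standard normal space (the MPP)."""
    u = [0.0] * n
    for _ in range(iters):
        gv = g(u)
        dg = grad(g, u)
        norm2 = sum(d * d for d in dg)
        coef = (sum(d * x for d, x in zip(dg, u)) - gv) / norm2
        u_new = [coef * d for d in dg]
        if sum((a - b) ** 2 for a, b in zip(u_new, u)) < tol:
            return u_new
        u = u_new
    return u

# Illustrative limit state g(u) = 3 - u1 - u2: exact beta is 3/sqrt(2).
g = lambda u: 3.0 - u[0] - u[1]
u_star = mpp_search(g, 2)
beta = math.sqrt(sum(x * x for x in u_star))
pf_form = 0.5 * math.erfc(beta / math.sqrt(2))   # FORM estimate Phi(-beta)
```

The derivatives of the optimal solution `u_star` with respect to distribution or design parameters are what the paper's reliability sensitivity equations formalize.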

  5. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  6. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is also studied using test data. A fuel cell model is developed for generating the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, the performance of the PEM fuel cell can be predicted with good accuracy.
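A minimal sketch of the exhaustive brute-force selection mentioned above, assuming a made-up sensitivity matrix and using the smallest singular value of each sensor subset as the selection score (the paper's actual scoring may differ):

```python
import itertools
import numpy as np

def select_sensors(S, k):
    """Exhaustively search all k-sensor subsets of the sensitivity matrix S
    (rows = sensors, columns = health parameters) and keep the subset whose
    submatrix has the largest smallest singular value, i.e. the best
    worst-case observability of the parameters."""
    best_set, best_score = None, -1.0
    for rows in itertools.combinations(range(S.shape[0]), k):
        score = np.linalg.svd(S[list(rows), :], compute_uv=False).min()
        if score > best_score:
            best_set, best_score = rows, score
    return best_set, best_score

# Hypothetical 4-sensor, 2-parameter sensitivity matrix.
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.1, 0.1]])
best, score = select_sensors(S, 2)
```

Here sensors 0 and 1 win because their rows span the parameter space best; sensor 3, with small sensitivities, would be discarded, which is the intuition behind the noise-resistance criterion as well.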

  7. Optimal replenishment and credit policy in supply chain inventory model under two levels of trade credit with time- and credit-sensitive demand involving default risk

    NASA Astrophysics Data System (ADS)

    Mahata, Puspita; Mahata, Gour Chandra; Kumar De, Sujit

    2018-03-01

    Traditional supply chain inventory models with trade credit usually assume only that the upstream supplier offers the downstream retailer a fixed credit period. In practice, however, retailers will also provide a credit period to customers to promote market competition. In this paper, we formulate an optimal supply chain inventory model under a two-level trade credit policy with default risk consideration. Here, the demand is assumed to be credit-sensitive and an increasing function of time. The major objective is to determine the retailer's optimal credit period and cycle time such that the total profit per unit time is maximized. The existence and uniqueness of the optimal solution to the presented model are examined, and a simple method is shown for finding the optimal inventory policies of the considered problem. Finally, numerical examples and a sensitivity analysis are presented to illustrate the developed model and to provide some managerial insights.
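A rough numerical sketch of the two-variable optimization (credit period m, cycle time T). The profit function below is a stylized stand-in with hypothetical coefficients, chosen only so the grid search has a known interior optimum; it is not the paper's model:

```python
import math

# Stylized stand-ins (all functional forms and numbers hypothetical):
# revenue grows with the credit period m but default losses grow faster,
# holding cost grows with cycle time T, ordering cost falls with T.
P, D0, A, LAM, H, K = 10.0, 100.0, 2.0, 5.0, 2.0, 50.0

def profit_per_unit_time(m, T):
    revenue = P * D0 * math.exp(A * m - LAM * m * m)  # net of default risk
    holding = H * D0 * T / 2.0
    ordering = K / T
    return revenue - holding - ordering

# For this separable toy profit the optima are m* = A/(2*LAM) = 0.2
# and T* = sqrt(2*K/(H*D0)); a grid search recovers both.
best_profit, m_star, T_star = -1e30, 0.0, 0.0
for i in range(101):            # credit period m in [0, 1]
    for j in range(281):        # cycle time T in [0.1, 1.5]
        m, T = i * 0.01, 0.1 + j * 0.005
        p = profit_per_unit_time(m, T)
        if p > best_profit:
            best_profit, m_star, T_star = p, m, T
```

In the paper the analogous step is done analytically (existence and uniqueness of the maximizer), with the grid search replaced by closed-form first-order conditions.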

  8. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS's Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  9. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, and techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
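The surrogate-based workflow (sample an expensive model, fit a cheap approximation, optimize the approximation) can be sketched as follows; the "expensive" model and the quadratic surrogate are illustrative stand-ins:

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly high-fidelity model (hypothetical)."""
    return (x - 2.0) ** 2 + 1.0

# Design of experiments: a handful of expensive samples.
x_train = np.array([0.0, 1.0, 3.0, 4.0, 5.0])
y_train = np.array([expensive_simulation(x) for x in x_train])

# Fit a cheap quadratic surrogate to the samples (least squares).
coeffs = np.polyfit(x_train, y_train, 2)
surrogate = np.poly1d(coeffs)

# Optimize the surrogate instead of the expensive model: thousands of
# surrogate evaluations cost less than one more high-fidelity run.
x_grid = np.linspace(0.0, 5.0, 5001)
x_opt = x_grid[np.argmin(surrogate(x_grid))]
```

In practice the loss function, regularization, and surrogate family (polynomials, kriging, radial basis functions) are exactly the choices the paper discusses.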

  10. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
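A scalar analogue of the sensitivity equation method: differentiating the governing equation with respect to the design parameter gives an auxiliary equation for the sensitivity, which is integrated with the same scheme as the forward problem. The ODE below is a toy stand-in for the flow equations:

```python
import math

# Forward problem: du/dt = -a*u, u(0) = 1.
# Sensitivity equation (differentiate in a): ds/dt = -a*s - u, s(0) = 0,
# where s = du/da. Both are advanced with the same (explicit Euler) scheme,
# mirroring the reuse of the CFD solver described in the abstract.
a, dt, n = 1.0, 1e-4, 10000
u, s = 1.0, 0.0
for _ in range(n):
    u_new = u + dt * (-a * u)
    s_new = s + dt * (-a * s - u)
    u, s = u_new, s_new

# Analytic check: u(t) = exp(-a*t), so du/da at t = 1 is -t*exp(-a*t).
exact_s = -1.0 * math.exp(-a * 1.0)
```

No mesh sensitivity appears anywhere: the sensitivity is obtained from its own equation, which is the advantage the paper claims over differentiating the discretized solver.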

  11. Optimal Platform Strategies in the Smartphone Market

    NASA Astrophysics Data System (ADS)

    Unno, Masaru; Xu, Hua

    In the smartphone market, smartphone makers encourage smartphone application providers (APs) to create more popular smartphone applications by making revenue-sharing contracts with APs and providing application-purchasing support to end users. In this paper, we study the revenue-sharing and application-purchasing support problem between a risk-averse smartphone maker and a smartphone application provider. The problem is formulated as the smartphone maker's risk-sensitive stochastic control problem. Sufficient conditions are obtained for the existence of the optimal revenue-sharing strategy, the optimal application-purchasing support strategy and the incentive-compatible effort recommended to the AP. The effects of the smartphone maker's risk sensitivity on the optimal strategies are also discussed. A numerical example is solved to show the computational aspects of the problem.

  12. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  13. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert opinion-informed model, while sensitivity analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not provide the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
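The uninformed (simulated annealing) calibration named above can be sketched generically; the single-parameter "model" and error function here are hypothetical stand-ins, not InVEST:

```python
import math
import random

def simulated_annealing(error_fn, x0, sigma=0.5, t0=1.0, cooling=0.995,
                        iters=3000, seed=1):
    """Generic simulated-annealing calibration of one model parameter:
    accept downhill moves always, uphill moves with probability
    exp(-delta/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, e = x0, error_fn(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(iters):
        cand = x + rng.gauss(0.0, sigma)
        ec = error_fn(cand)
        if ec < e or rng.random() < math.exp(-(ec - e) / max(t, 1e-12)):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

# Hypothetical calibration target: observations generated with parameter 3.0,
# so the squared-error objective is minimized at p = 3.
error = lambda p: (p - 3.0) ** 2
p_hat, err = simulated_annealing(error, x0=0.0)
```

Informed optimization differs only in how the starting point and search ranges are chosen (here from a prior sensitivity analysis rather than at random), which is why it outperformed the uninformed run in the study above.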

  14. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    A review of the relevant literature makes it apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most existing studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications therefore appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  15. Holistic Context-Sensitivity for Run-Time Optimization of Flexible Manufacturing Systems.

    PubMed

    Scholze, Sebastian; Barata, Jose; Stokic, Dragan

    2017-02-24

    Highly flexible manufacturing systems require continuous run-time (self-)optimization of processes with respect to diverse parameters, e.g., efficiency, availability, energy consumption, etc. A promising approach for achieving (self-)optimization in manufacturing systems is context sensitivity based on data streamed from a large number of sensors and other data sources. Cyber-physical systems play an important role as sources of information for achieving context sensitivity: they can be seen as complex intelligent sensors providing the data needed to identify the current context under which the manufacturing system is operating. In this paper, it is demonstrated how context sensitivity can be used to realize a holistic solution for (self-)optimization of discrete flexible manufacturing systems, by making use of cyber-physical systems integrated in manufacturing systems/processes. A generic approach for context sensitivity, based on self-learning algorithms, is proposed, aimed at a variety of manufacturing systems. The new solution encompasses a run-time context extractor and an optimizer. Through the self-learning module, both the context extractor and the optimizer continuously learn and improve their performance. The solution follows Service-Oriented Architecture principles. The generic solution is developed and then applied to two very different manufacturing processes.

  16. Holistic Context-Sensitivity for Run-Time Optimization of Flexible Manufacturing Systems

    PubMed Central

    Scholze, Sebastian; Barata, Jose; Stokic, Dragan

    2017-01-01

    Highly flexible manufacturing systems require continuous run-time (self-)optimization of processes with respect to diverse parameters, e.g., efficiency, availability, energy consumption, etc. A promising approach for achieving (self-)optimization in manufacturing systems is context sensitivity based on data streamed from a large number of sensors and other data sources. Cyber-physical systems play an important role as sources of information for achieving context sensitivity: they can be seen as complex intelligent sensors providing the data needed to identify the current context under which the manufacturing system is operating. In this paper, it is demonstrated how context sensitivity can be used to realize a holistic solution for (self-)optimization of discrete flexible manufacturing systems, by making use of cyber-physical systems integrated in manufacturing systems/processes. A generic approach for context sensitivity, based on self-learning algorithms, is proposed, aimed at a variety of manufacturing systems. The new solution encompasses a run-time context extractor and an optimizer. Through the self-learning module, both the context extractor and the optimizer continuously learn and improve their performance. The solution follows Service-Oriented Architecture principles. The generic solution is developed and then applied to two very different manufacturing processes. PMID:28245564

  17. [Research on optimization of mathematical model of flow injection-hydride generation-atomic fluorescence spectrometry].

    PubMed

    Cui, Jian; Zhao, Xue-Hong; Wang, Yan; Xiao, Ya-Bing; Jiang, Xue-Hui; Dai, Li

    2014-01-01

    Flow injection-hydride generation-atomic fluorescence spectrometry is widely used in the health, environmental, geological and metallurgical fields owing to its high sensitivity, wide measurement range and fast analytical speed. However, optimization of this method is difficult because many parameters affect the sensitivity and broadening; generally, the optimal conditions are sought through repeated experiments. The present paper proposes a mathematical model relating the parameters to the sensitivity and broadening coefficients, derived from the law of conservation of mass according to the characteristics of the hydride chemical reaction and the composition of the system. The model proved accurate when theoretical simulations were compared with experimental results from tests of an arsanilic acid standard solution. Finally, the paper presents a relation map between the parameters and the sensitivity/broadening coefficients, and concludes that the GLS volume, the carrier solution flow rate and the sample loop volume are the main factors affecting the sensitivity and broadening coefficients. Optimizing these three factors with the relation map improved the relative sensitivity by a factor of 2.9 and reduced the relative broadening to 0.76 of its original value. This model can provide theoretical guidance for the optimization of the experimental conditions.

  18. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that will result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This guides the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem: finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
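The forward propagation step of such a methodology can be sketched with plain Monte Carlo sampling; the two-input "thermal model" and its distributions below are hypothetical:

```python
import math
import random

def propagate(n=20000, seed=7):
    """Monte Carlo propagation of input uncertainty through a hypothetical
    thermal model q = h * dT, with both inputs normally distributed."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        h = rng.gauss(2.0, 0.1)    # heat transfer coefficient (uncertain)
        dT = rng.gauss(3.0, 0.1)   # temperature difference (uncertain)
        samples.append(h * dT)
    mean = sum(samples) / n
    var = sum((y - mean) ** 2 for y in samples) / (n - 1)
    return mean, math.sqrt(var)

mean_q, std_q = propagate()
```

Repeating this propagation inside an optimization loop, and differentiating the output statistics with respect to the design variables, yields the local and global sensitivities the abstract describes.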

  19. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
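The contrast between a quasi-analytical gradient and brute-force finite differencing can be shown on a toy implicit problem (a scalar residual standing in for the flow equations; everything here is illustrative):

```python
# "Flow solve": find the state q satisfying the residual R(q, x) = q^3 + q - x = 0;
# the objective is J = q^2. All of this is a toy analogue, not the paper's CFD.

def solve_state(x, q0=1.0, iters=50):
    """Newton solve of R(q, x) = 0 for the state q."""
    q = q0
    for _ in range(iters):
        r = q ** 3 + q - x
        q -= r / (3.0 * q ** 2 + 1.0)
    return q

def dJ_dx_quasi_analytical(x):
    """Chain rule with dq/dx from implicit differentiation of R = 0:
    dR/dq * dq/dx + dR/dx = 0  =>  dq/dx = 1 / (3*q**2 + 1),
    so dJ/dx = 2*q * dq/dx. One state solve, no extra solves."""
    q = solve_state(x)
    return 2.0 * q / (3.0 * q ** 2 + 1.0)

def dJ_dx_finite_difference(x, h=1e-5):
    """Brute-force alternative: rerun the full solve at perturbed x."""
    jp = solve_state(x + h) ** 2
    jm = solve_state(x - h) ** 2
    return (jp - jm) / (2.0 * h)

g_qa = dJ_dx_quasi_analytical(2.0)   # q = 1 here, so the exact value is 0.5
g_fd = dJ_dx_finite_difference(2.0)
```

The quasi-analytical route needs one state solve per gradient, while finite differencing needs two extra solves per design variable, which is the efficiency gain the abstract reports.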

  20. Shape optimization of three-dimensional stamped and solid automotive components

    NASA Technical Reports Server (NTRS)

    Botkin, M. E.; Yang, R.-J.; Bennett, J. A.

    1987-01-01

    The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process (modeling, mesh generation, finite element and sensitivity analysis, and optimization) is stressed. Stamped components and solid components are treated separately. For stamped parts a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement are used to provide an automated analysis capability which requires only boundary data and takes into account the sensitivity of the solution accuracy to boundary shape. For solid components a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided using a generic modeling concept based upon isoparametric mapping patches which also serves as the mesh generator. Emphasis is placed upon the coupling of optimization with a commercially available finite element program. To do this it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach so that only boundary solution data is needed.

  1. Improved Sensitivity Relations in State Constrained Optimal Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bettiol, Piernicola, E-mail: piernicola.bettiol@univ-brest.fr; Frankowska, Hélène, E-mail: frankowska@math.jussieu.fr; Vinter, Richard B., E-mail: r.vinter@imperial.ac.uk

    2015-04-15

    Sensitivity relations in optimal control provide an interpretation of the costate trajectory and the Hamiltonian, evaluated along an optimal trajectory, in terms of gradients of the value function. While sensitivity relations are a straightforward consequence of standard transversality conditions for state constraint free optimal control problems formulated in terms of control-dependent differential equations with smooth data, their verification for problems with either pathwise state constraints, nonsmooth data, or for problems where the dynamic constraint takes the form of a differential inclusion, requires careful analysis. In this paper we establish validity of both ‘full’ and ‘partial’ sensitivity relations for an adjoint state of the maximum principle, for optimal control problems with pathwise state constraints, where the underlying control system is described by a differential inclusion. The partial sensitivity relation interprets the costate in terms of partial Clarke subgradients of the value function with respect to the state variable, while the full sensitivity relation interprets the couple, comprising the costate and Hamiltonian, as the Clarke subgradient of the value function with respect to both time and state variables. These relations are distinct because, for nonsmooth data, the partial Clarke subdifferential does not coincide with the projection of the (full) Clarke subdifferential on the relevant coordinate space. We show for the first time (even for problems without state constraints) that a costate trajectory can be chosen to satisfy the partial and full sensitivity relations simultaneously. The partial sensitivity relation in this paper is new for state constraint problems, while the full sensitivity relation improves on earlier results in the literature (for optimal control problems formulated in terms of Lipschitz continuous multifunctions), because a less restrictive inward pointing hypothesis is invoked in the proof, and because it is validated for a stronger set of necessary conditions.
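In the smooth, state-constraint-free case the two relations reduce to classical identities, which may help fix ideas (a sketch only; sign conventions depend on the formulation used):

```latex
% Smooth case: costate and Hamiltonian as derivatives of the value function V,
% evaluated along the optimal trajectory \bar{x}:
p(t) = \nabla_x V\bigl(t, \bar{x}(t)\bigr), \qquad
H\bigl(t, \bar{x}(t), p(t)\bigr) = -\,\partial_t V\bigl(t, \bar{x}(t)\bigr).

% Nonsmooth versions established in the paper (Clarke subgradients):
p(t) \in \partial_x V\bigl(t, \bar{x}(t)\bigr) \quad \text{(partial)}, \qquad
\bigl(-H(t),\, p(t)\bigr) \in \partial_{t,x} V\bigl(t, \bar{x}(t)\bigr) \quad \text{(full)}.
```

The paper's point is that, for nonsmooth data, the first inclusion does not follow from projecting the second onto the state coordinates, so satisfying both with a single costate trajectory is a genuinely stronger statement.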

  2. DNP enhanced NMR with flip-back recovery

    NASA Astrophysics Data System (ADS)

    Björgvinsdóttir, Snædís; Walder, Brennan J.; Pinon, Arthur C.; Yarava, Jayasubba Reddy; Emsley, Lyndon

    2018-03-01

    DNP methods can provide significant sensitivity enhancements in magic angle spinning solid-state NMR, but in systems with long polarization build-up times, long recycling periods are required to optimize sensitivity. We show how the sensitivity of such experiments can be improved by the classic flip-back method, which recovers bulk proton magnetization following continuous-wave proton heteronuclear decoupling. Experiments were performed on formulations with characteristic build-up times spanning two orders of magnitude: a bulk BDPA-radical-doped o-terphenyl glass and microcrystalline samples of theophylline, L-histidine monohydrochloride monohydrate, and salicylic acid impregnated by incipient wetness. For these systems, adding flip-back is simple, improves the sensitivity beyond that provided by modern heteronuclear decoupling methods such as SPINAL-64, and provides optimal sensitivity at shorter recycle delays. We show how to acquire DNP enhanced 2D refocused CP-INADEQUATE spectra with flip-back recovery, and demonstrate that the flip-back recovery method is particularly useful in rapid recycling regimes. We also report Overhauser effect DNP enhancements of over 70 at 592.6 GHz/900 MHz.
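The trade-off behind "optimal sensitivity at shorter recycle delays" can be made concrete under the simplest assumption of an exponential polarization build-up: sensitivity per unit time then peaks near 1.26 times the build-up time constant, so anything that shortens the effective build-up (like flip-back recovery) shortens the optimal recycle delay. The numbers below are illustrative:

```python
import math

def sensitivity_per_unit_time(tau, t_build):
    """Signal grows as 1 - exp(-tau/T_B) with recycle delay tau; averaging
    sqrt(n_scans) noise over a fixed total time divides by sqrt(tau).
    Assumes a simple mono-exponential build-up (illustrative only)."""
    x = tau / t_build
    return (1.0 - math.exp(-x)) / math.sqrt(tau)

T_BUILD = 10.0   # hypothetical build-up time constant, seconds

# Scan recycle delays from 0.1 s to ~100 s and pick the best one.
taus = [0.1 + 0.001 * i for i in range(100000)]
tau_opt = max(taus, key=lambda t: sensitivity_per_unit_time(t, T_BUILD))
```

Setting the derivative to zero gives the condition (2x + 1) exp(-x) = 1 with x = tau/T_B, whose root is x ≈ 1.256; the grid search recovers the same value.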

  3. Topology optimization based design of unilateral NMR for generating a remote homogeneous field.

    PubMed

    Wang, Qi; Gao, Renjing; Liu, Shutian

    2017-06-01

    This paper presents a topology optimization based design method for unilateral nuclear magnetic resonance (NMR), with which a remote homogeneous field can be obtained. The topology optimization is actualized by seeking the optimal layout of ferromagnetic materials within a given design domain. The design objective is defined as generating a sensitive magnetic field with optimal homogeneity and maximal field strength within a required region of interest (ROI). The sensitivity of the objective function with respect to the design variables is derived and the method for solving the optimization problem is presented. A design example is provided to illustrate the utility of the design method, specifically the ability to improve the quality of the magnetic field over the required ROI by determining the optimal structural topology for the ferromagnetic poles. In both simulations and experiments, the sensitive region of the magnetic field is about 2 times larger than that of the reference design, validating the feasibility of the design method. Copyright © 2017. Published by Elsevier Inc.
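
    The stated objective combines two competing goals, field strength and homogeneity over the ROI. One plausible scalarization (a sketch for illustration only; the paper's actual objective function and weight are not given here) rewards the mean field and penalizes relative inhomogeneity:

```python
import numpy as np

def field_objective(B_roi, weight=10.0):
    # Hypothetical scalarized objective: reward mean field strength over
    # the ROI, penalize relative inhomogeneity (std / mean).
    strength = B_roi.mean()
    homogeneity_penalty = B_roi.std() / strength
    return strength - weight * homogeneity_penalty

uniform = np.array([0.10, 0.10, 0.10, 0.10])          # tesla, hypothetical
stronger_but_uneven = np.array([0.14, 0.10, 0.12, 0.08])
print(field_objective(uniform) > field_objective(stronger_but_uneven))  # True
```

    With this kind of objective, a slightly weaker but far more uniform field can score higher, which is why the optimizer reshapes the pole topology rather than simply maximizing field strength.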

  4. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example].

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are chosen for them has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many previous studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. An objective function was constructed by using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a temporal-spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity.
In addition, the temporal heterogeneity of the optimal values of the model's sensitive parameters showed a significant linear correlation with the spatial heterogeneity under all three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model could be classified so that different parameter strategies can be adopted in practical applications. These conclusions help to deepen understanding of the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.
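
    The calibration step the abstract describes, minimizing a data-mismatch objective with simulated annealing, can be sketched generically. This is a minimal single-parameter version with a hypothetical quadratic cost standing in for the model-flux mismatch, not the authors' actual objective:

```python
import math, random

def simulated_annealing(cost, x0, lo, hi, steps=5000, T0=1.0, seed=1):
    # Generic simulated-annealing minimizer for one bounded parameter.
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        T = T0 * (1.0 - k / steps) + 1e-9            # linear cooling schedule
        cand = min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo))))
        fc = cost(cand)
        # Accept improvements always; accept worse moves with probability
        # exp(-delta / T), which shrinks as the temperature cools.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x

# Hypothetical objective: squared mismatch between model output and data.
true_param = 0.37
cost = lambda p: (p - true_param) ** 2
est = simulated_annealing(cost, x0=0.9, lo=0.0, hi=1.0)
print(abs(est - true_param) < 0.05)
```

    In the paper's setting the cost function would instead run BIOME-BGC and compare simulated fluxes to eddy-covariance data, and the search would run per parameter, per site, and per month.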

  5. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  6. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite-element-based engineering practice.

  7. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions.
The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, sonic boom and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.

  8. Piezoresistive Composite Silicon Dioxide Nanocantilever Surface Stress Sensor: Design and Optimization.

    PubMed

    Mathew, Ribu; Sankar, A Ravi

    2018-05-01

    In this paper, we present the design and optimization of a rectangular piezoresistive composite silicon dioxide nanocantilever sensor. Unlike the conventional design approach, we perform the sensor optimization by not only considering its electro-mechanical response but also incorporating the impact of self-heating induced thermal drift in its terminal characteristics. Through extensive simulations, we first comprehend and quantify the inaccuracies due to the self-heating effect induced by the geometrical and intrinsic parameters of the piezoresistor. Then, by optimizing the ratio of electrical sensitivity to thermal sensitivity, defined as the sensitivity ratio (υ), we improve the sensor performance and measurement reliability. Results show that to ensure υ ≥ 1, shorter and wider piezoresistors are better. In addition, it is observed that, contrary to the general belief that a high piezoresistor doping concentration reduces thermal sensitivity in piezoresistive sensors, to ensure υ ≥ 1 the doping concentration (p) should be in the range 1E18 cm-3 ≤ p ≤ 1E19 cm-3. Finally, we provide a set of design guidelines that will help NEMS engineers to optimize the performance of such sensors for chemical and biological sensing applications.

  9. Lactation Consultant

    MedlinePlus

    ... about infant and child feeding; utilize a pragmatic problem-solving approach, sensitive to the learner’s culture, questions and concerns; provide anticipatory guidance to promote optimal breastfeeding practices ... or complications; provide positive feedback and emotional support ...

  10. Optimization of PET instrumentation for brain activation studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlbom, M.; Cherry, S.R.; Hoffman, E.J.

    By performing cerebral blood flow studies with positron emission tomography (PET) and comparing blood flow images of different states of activation, functional mapping of the brain is possible. The ability of current commercial instruments to perform such studies is investigated in this work, based on a comparison of noise equivalent count (NEC) rates. Differences in the NEC performance of the different scanners, in conjunction with scanner design parameters, provide insights into the importance of block design (size, dead time, crystal thickness) and overall scanner design (sensitivity and scatter fraction) for optimizing data from activation studies. The newer scanners with removable septa, operating with 3-D acquisition, have much higher sensitivity, but require new methodology for optimized operation. Only by administering multiple low doses (fractionation) of the flow tracer can the high sensitivity be utilized.
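
    The NEC figure of merit used for the comparison has a standard form, NEC = T^2 / (T + S + kR), where T, S and R are the true, scattered and random coincidence rates. A small sketch with hypothetical count rates (the 2-D/3-D numbers are illustrative, not the paper's data):

```python
def noise_equivalent_counts(trues, scatter, randoms, k=2):
    # NEC = T^2 / (T + S + kR): true coincidences penalized by scatter (S)
    # and randoms (R); k = 2 for delayed-window randoms subtraction,
    # k = 1 for a noiseless randoms estimate.
    return trues ** 2 / (trues + scatter + k * randoms)

# Hypothetical count rates (kcps): 3-D acquisition collects far more
# trues than 2-D, but also more scatter and randoms.
nec_2d = noise_equivalent_counts(trues=40.0, scatter=5.0, randoms=10.0)
nec_3d = noise_equivalent_counts(trues=160.0, scatter=60.0, randoms=120.0)
print(round(nec_2d, 1), round(nec_3d, 1))  # 24.6 55.7
```

    Because randoms grow roughly quadratically with activity, the 3-D advantage holds only at low count rates, which is the rationale for the fractionated low-dose protocol the abstract recommends.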

  11. Design of clinical trials involving multiple hypothesis tests with a common control.

    PubMed

    Schou, I Manjula; Marschner, Ian C

    2017-07-01

    Randomized clinical trials comparing several treatments to a common control are often reported in the medical literature. For example, multiple experimental treatments may be compared with placebo, or in combination therapy trials, a combination therapy may be compared with each of its constituent monotherapies. Such trials are typically designed using a balanced approach in which equal numbers of individuals are randomized to each arm; however, this can result in an inefficient use of resources. We provide a unified framework and new theoretical results for optimal design of such single-control multiple-comparator studies. We consider variance optimal designs based on D-, A-, and E-optimality criteria, using a general model that allows for heteroscedasticity and a range of effect measures that include both continuous and binary outcomes. We demonstrate the sensitivity of these designs to the type of optimality criterion by showing that the optimal allocation ratios are systematically ordered according to the optimality criterion. Given this sensitivity to the optimality criterion, we argue that power optimality is a more suitable approach when designing clinical trials where testing is the objective. Weighted variance optimal designs are also discussed, which, like power optimal designs, allow the treatment difference to play a major role in determining allocation ratios. We illustrate our methods using two real clinical trial examples taken from the medical literature. Some recommendations on the use of optimal designs in single-control multiple-comparator trials are also provided. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
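
    A classic instance of unbalanced allocation being more efficient, useful background for this abstract though not the paper's own derivation, is the square-root rule: with k experimental arms sharing one control and equal outcome variances, putting sqrt(k) subjects on control per subject on each arm minimizes the average variance of the k treatment-control contrasts:

```python
import math

def sqrt_allocation(k):
    # Square-root rule for k experimental arms vs. one shared control:
    # allocate sqrt(k) : 1 : ... : 1 (control first). Returns the
    # control fraction and the per-arm fraction of the total sample.
    total = math.sqrt(k) + k
    control = math.sqrt(k) / total
    per_arm = 1.0 / total
    return control, per_arm

c, a = sqrt_allocation(4)
print(round(c, 3), round(a, 3))  # 0.333 0.167: control gets 1/3, each arm 1/6
```

    The paper's contribution is to show how such ratios shift under different optimality criteria (D-, A-, E-, and power optimality), heteroscedasticity, and binary outcomes.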

  12. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  13. The anatomy of choice: dopamine and decision-making

    PubMed Central

    Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J.

    2014-01-01

    This paper considers goal-directed decision-making in terms of embodied or active inference. We associate bounded rationality with approximate Bayesian inference that optimizes a free energy bound on model evidence. Several constructs such as expected utility, exploration or novelty bonuses, softmax choice rules and optimism bias emerge as natural consequences of free energy minimization. Previous accounts of active inference have focused on predictive coding. In this paper, we consider variational Bayes as a scheme that the brain might use for approximate Bayesian inference. This scheme provides formal constraints on the computational anatomy of inference and action, which appear to be remarkably consistent with neuroanatomy. Active inference contextualizes optimal decision theory within embodied inference, where goals become prior beliefs. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (associated with softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution. Crucially, this sensitivity corresponds to the precision of beliefs about behaviour. The changes in precision during variational updates are remarkably reminiscent of empirical dopaminergic responses—and they may provide a new perspective on the role of dopamine in assimilating reward prediction errors to optimize decision-making. PMID:25267823
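
    The softmax choice rule with an inverse-temperature (precision) parameter, which the abstract links to dopaminergic encoding of belief precision, is easy to state concretely. A minimal sketch with hypothetical utilities:

```python
import math

def softmax(values, precision):
    # Softmax choice rule; `precision` is the inverse temperature that
    # the paper associates with the precision of beliefs about behaviour.
    exps = [math.exp(precision * v) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

utilities = [1.0, 0.5]
print([round(p, 3) for p in softmax(utilities, precision=1.0)])  # [0.622, 0.378]
print([round(p, 3) for p in softmax(utilities, precision=8.0)])  # [0.982, 0.018]
```

    Higher precision makes choice nearly deterministic; in the active-inference account this parameter is not free but has a Bayes-optimal value that is updated during inference.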

  14. The anatomy of choice: dopamine and decision-making.

    PubMed

    Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J

    2014-11-05

    This paper considers goal-directed decision-making in terms of embodied or active inference. We associate bounded rationality with approximate Bayesian inference that optimizes a free energy bound on model evidence. Several constructs such as expected utility, exploration or novelty bonuses, softmax choice rules and optimism bias emerge as natural consequences of free energy minimization. Previous accounts of active inference have focused on predictive coding. In this paper, we consider variational Bayes as a scheme that the brain might use for approximate Bayesian inference. This scheme provides formal constraints on the computational anatomy of inference and action, which appear to be remarkably consistent with neuroanatomy. Active inference contextualizes optimal decision theory within embodied inference, where goals become prior beliefs. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (associated with softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution. Crucially, this sensitivity corresponds to the precision of beliefs about behaviour. The changes in precision during variational updates are remarkably reminiscent of empirical dopaminergic responses-and they may provide a new perspective on the role of dopamine in assimilating reward prediction errors to optimize decision-making.

  15. Frequency dependence of sensitivities in second-order RC active filters

    NASA Astrophysics Data System (ADS)

    Kunieda, T.; Hiramatsu, Y.; Fukui, A.

    1980-02-01

    This paper shows that the gain and phase sensitivities to some element in biquadratic filters approximately constitute a circle on the complex sensitivity plane, provided that the quality factor Q of the circuit is appreciably larger than unity. Moreover, the group delay sensitivity is represented by the imaginary part of a cardioid. Using these results, bounds on the maximum values of the gain, phase, and group delay sensitivities are obtained. Further, it is proved that the maximum values of these sensitivities can be simultaneously minimized by minimizing the absolute value of the transfer function sensitivity at the center frequency, provided that the ω0-sensitivities are constant and do not contain design parameters. Next, a statistical variability measure for optimal filter design is proposed. Finally, the relation between several previously proposed variability measures is clarified.
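
    The sensitivities in question are classical relative (Bode) sensitivities, S_x^H = (x / H) dH/dx. A minimal numerical sketch (for filters H is complex-valued; a real H keeps the example short):

```python
def relative_sensitivity(H, x, dx=1e-6):
    # Classical relative (Bode) sensitivity S_x^H = (x / H) * dH/dx,
    # approximated with a central difference.
    return x * (H(x + dx) - H(x - dx)) / (2 * dx * H(x))

# For H(x) = x^2 the relative sensitivity is exactly 2 at any x.
print(round(relative_sensitivity(lambda x: x * x, 3.0), 6))  # 2.0
```

    Gain and phase sensitivities are the real and imaginary parts of this quantity for complex H, which is why they trace curves in the complex sensitivity plane as frequency varies.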

  16. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and at the same time efficient and accurate. It is implemented in two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared at the website www.ptomethod.org. PMID:26678849
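
    The core idea of PTO, assigning material to elements in proportion to a response quantity such as stress instead of computing sensitivities, can be sketched on a toy fixed stress field. This is an illustrative sketch, not the authors' MATLAB code; a real implementation would re-run the finite element analysis to update the stresses on every pass:

```python
import numpy as np

def pto_distribute(stress, target_volume, iters=50, q=1.0, move=0.2):
    # Core PTO loop (sketch): each element receives a material share
    # proportional to stress**q, then the new densities are blended with
    # the previous ones for stability.
    n = stress.size
    x = np.full(n, target_volume)
    for _ in range(iters):
        share = stress ** q / np.sum(stress ** q)
        x_new = np.clip(share * target_volume * n, 0.0, 1.0)
        x = (1 - move) * x + move * x_new    # history blending
    return x

stress = np.array([4.0, 1.0, 1.0, 2.0])     # hypothetical element stresses
x = pto_distribute(stress, target_volume=0.5)
print(round(x.sum() / x.size, 3))  # 0.5: the volume fraction target is held
```

    Highly stressed elements end up denser while the overall volume constraint is respected, which is the non-sensitivity mechanism the abstract contrasts with optimality-criteria updates.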

  17. Optimization of silver-dielectric-silver nanoshell for sensing applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirzaditabar, Farzad; Saliminasab, Maryam

    2013-08-15

    In this paper, resonance light scattering (RLS) properties of a silver-dielectric-silver nanoshell, based on a quasi-static approach and plasmon hybridization theory, are investigated. The scattering spectrum of the silver-dielectric-silver nanoshell has two intense and clearly separated RLS peaks and provides a potential for biosensing based on surface plasmon resonance and surface-enhanced Raman scattering. The two RLS peaks in the silver-dielectric-silver nanoshell are optimized by tuning the geometrical dimensions. In addition, the optimal geometry is discussed to obtain high sensitivity of the silver-dielectric-silver nanoshell. As the silver core radius increases, the sensitivity of the silver-dielectric-silver nanoshell decreases, whereas increasing the middle dielectric thickness increases the sensitivity of the silver-dielectric-silver nanoshell.

  18. Porphyrin-sensitized solar cells: systematic molecular optimization, coadsorption and cosensitization.

    PubMed

    Song, Heli; Liu, Qingyun; Xie, Yongshu

    2018-02-15

    As a promising low-cost solar energy conversion technique, dye-sensitized solar cells have undergone spectacular development since 1991. For practical applications, improvement of the power conversion efficiency has always been one of the major research topics. Porphyrins are outstanding sensitizers endowed with strong sunlight harvesting ability in the visible region and multiple reaction sites available for functionalization. However, judicious molecular design in consideration of light harvesting, energy levels, operational dynamics, adsorption geometry and suppression of back reactions is specifically required for achieving excellent photovoltaic performance. This feature article highlights some of the recently developed porphyrin sensitizers, especially focusing on the systematic dye structure optimization approach in combination with coadsorption and cosensitization methods in pursuing higher efficiencies. Herein, we expect to provide more insights into the structure-performance correlation and molecular engineering strategies in a stepwise manner.

  19. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that policy for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
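
    The multivariate idea, sample the uncertain parameters, re-solve the MDP, and tally how often each policy is optimal, can be sketched on a toy two-state model. Everything below (states, rewards, the uncertain survival probability) is hypothetical, not the paper's case study:

```python
import random

def best_policy(gamma, r_wait, r_treat, stay_wait, stay_treat):
    # Toy MDP: state 0 = ill, state 1 = absorbing with zero reward.
    # With one decision state, each stationary policy's value has the
    # closed form r / (1 - gamma * p_stay), so policies compare directly.
    v_wait = r_wait / (1 - gamma * stay_wait)
    v_treat = r_treat / (1 - gamma * stay_treat)
    return "treat" if v_treat > v_wait else "wait"

# Probabilistic sensitivity analysis: draw the uncertain parameter,
# re-solve, and tally how often "treat" comes out optimal.
rng = random.Random(0)
N = 2000
treat_wins = sum(
    best_policy(gamma=0.95, r_wait=0.5, r_treat=0.3, stay_wait=0.8,
                stay_treat=min(0.99, max(0.0, rng.gauss(0.90, 0.05)))) == "treat"
    for _ in range(N)
)
acceptability = treat_wins / N   # confidence in "treat" under uncertainty
print(0.3 < acceptability < 0.7)  # near a toss-up for these toy numbers: True
```

    An acceptability near 0.5, as here, is exactly the situation the authors argue deterministic base-case analyses hide: the recommended policy may be optimal in barely half of the plausible parameter draws.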

  20. Structural damage detection-oriented multi-type sensor placement with multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong

    2018-05-01

    A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm II (NSGA-II) is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. The restriction on the number of each type of sensor in the optimization reduces the search space and makes the proposed method more effective. Moreover, how to select the most suitable sensor placement from the Pareto solutions via the utility function and the knee point method is demonstrated in the case study.

  1. An efficient multilevel optimization method for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.

    1988-01-01

    An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system level and subsystem level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.

  2. Sensitivity of Space Station alpha joint robust controller to structural modal parameter variations

    NASA Technical Reports Server (NTRS)

    Kumar, Renjith R.; Cooper, Paul A.; Lim, Tae W.

    1991-01-01

    The photovoltaic array sun tracking control system of Space Station Freedom is described. A synthesis procedure for determining optimized values of the design variables of the control system is developed using a constrained optimization technique. The synthesis is performed to provide a given level of stability margin, to achieve the most responsive tracking performance, and to meet other design requirements. Performance of the baseline design, which is synthesized using predicted structural characteristics, is discussed and the sensitivity of the stability margin is examined for variations of the frequencies, mode shapes and damping ratios of dominant structural modes. The design provides enough robustness to tolerate a sizeable error in the predicted modal parameters. A study was made of the sensitivity of performance indicators as the modal parameters of the dominant modes vary. The design variables are resynthesized for varying modal parameters in order to achieve the most responsive tracking performance while satisfying the design requirements. This procedure of reoptimizing design parameters would be useful in improving the control system performance if accurate model data are provided.

  3. Behavior and sensitivity of an optimal tree diameter growth model under data uncertainty

    Treesearch

    Don C. Bragg

    2005-01-01

    Using loblolly pine, shortleaf pine, white oak, and northern red oak as examples, this paper considers the behavior of potential relative increment (PRI) models of optimal tree diameter growth under data uncertainty. Recommendations on initial sample size and the PRI iterative curve fitting process are provided. Combining different state inventories prior to PRI model...

  4. Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points

    PubMed Central

    Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.

    2015-01-01

    Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s−2)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10−3 m·s−2. Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
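
    The underlying classification step is simple thresholding of per-minute activity counts, and the study's question is how total sedentary time shifts as the cut-point moves. A sketch on synthetic data (the lognormal count distribution is an assumption; only the 1660 cut-point comes from the abstract):

```python
import random

def sedentary_minutes(counts, cut_point):
    # A minute is classified as sedentary when its activity count falls
    # below the cut-point; total sedentary time is the count of such minutes.
    return sum(1 for c in counts if c < cut_point)

# Hypothetical working day of per-minute activity counts (sensor units,
# x 10^-3 m s^-2), drawn from an assumed lognormal distribution.
rng = random.Random(42)
counts = [rng.lognormvariate(7.0, 1.0) for _ in range(8 * 60)]

base = 1660.0  # the optimal cut-point reported in the abstract
for scale in (0.9, 1.0, 1.1):          # vary the cut-point by +/-10%
    print(round(scale, 1), sedentary_minutes(counts, base * scale))
```

    Running this kind of sweep over real data is how the authors establish that total sedentary time is stable within ±10% of the optimal cut-point.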

  5. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China lie far from the mainland; reefs account for more than 95% of the South Sea, and most are scattered over disputed, politically sensitive areas. Accurate methods for obtaining reef bathymetry are therefore urgently needed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distances, large areas, and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, by exploiting the relationship between spectral information and water depth. Targeting the water conditions of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depths. First, a semi-analytical optimization model of the theoretical interpretation models is studied, with a genetic algorithm used to optimize the model. Meanwhile, OpenMP parallel computing is introduced to greatly increase the speed of the semi-analytical optimization model. One island in the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the genetic-algorithm-based semi-analytical optimization model performs well in the study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The model thus solves the problem of bathymetry estimation without water depth measurements and, more generally, provides a new bathymetry estimation method for sensitive reefs far from the mainland.
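
    The genetic-algorithm inversion step can be sketched in miniature. The misfit function and parameter ranges below are illustrative stand-ins, not the paper's semi-analytical reflectance model: the sketch only shows a GA searching model parameters (a hypothetical depth and bottom albedo) that minimize a misfit.

```python
import random

# Minimal genetic-algorithm sketch of the model-inversion idea. The misfit
# and its known optimum (depth = 7.0, albedo = 0.3) are invented for
# illustration; the real method inverts a semi-analytical reflectance model.

def ga_minimize(f, lo, hi, pop_size=30, gens=60, seed=3):
    rng = random.Random(seed)
    dim = len(lo)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                         # rank by misfit
        parents = pop[:pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # averaging crossover
            d = rng.randrange(dim)                          # mutate one gene
            child[d] = min(hi[d], max(lo[d],
                           child[d] + rng.gauss(0.0, 0.1 * (hi[d] - lo[d]))))
            children.append(child)
        pop = parents + children
    return min(pop, key=f)

misfit = lambda x: (x[0] - 7.0) ** 2 + 10.0 * (x[1] - 0.3) ** 2
best = ga_minimize(misfit, lo=[0.0, 0.0], hi=[20.0, 1.0])
print(best)
```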

  6. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination

    NASA Astrophysics Data System (ADS)

    Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.

    2005-05-01

    A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
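
    The statistically optimal rule that the study tests against is the inverse-variance (minimum-variance) weighted average. A minimal sketch with illustrative cue variances (not the experiment's measured reliabilities):

```python
# Inverse-variance cue weighting: each cue's weight is proportional to its
# reliability (1/variance), and the optimal combination has variance
# 1 / sum(1/sigma_i^2). Variances below are illustrative.

def optimal_weights(variances):
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    return [r / total for r in reliabilities]

def combined_variance(variances):
    # Variance of the optimal weighted average
    return 1.0 / sum(1.0 / v for v in variances)

# e.g. a noisy texture cue (sigma^2 = 4) vs a reliable haptic cue (sigma^2 = 1)
w_texture, w_haptic = optimal_weights([4.0, 1.0])
print(w_texture, w_haptic)            # 0.2, 0.8
print(combined_variance([4.0, 1.0]))  # 0.8, below either single-cue variance
```

    The study's finding is that observers' weights move in this direction with reliability but do not reach this optimal combination.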

  7. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
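
    The approximate first-order statistical moment method described above can be sketched on a toy output function (a stand-in, not a CFD code): the output mean is the function at the input means, and the output variance is propagated through the first derivatives.

```python
# First-order moment propagation: E[f] ~= f(mu), Var[f] ~= sum (df/dx_i * s_i)^2.
# The output function f is an invented linear stand-in for a CFD output.

def first_order_moments(f, grad_f, means, sigmas):
    mean_out = f(means)
    var_out = sum((g * s) ** 2 for g, s in zip(grad_f(means), sigmas))
    return mean_out, var_out

def f(x):          # toy output, linear in two uncertain flow parameters
    return 2.0 * x[0] + 3.0 * x[1]

def grad_f(x):     # its sensitivity derivatives
    return [2.0, 3.0]

mean, var = first_order_moments(f, grad_f, [0.8, 2.0], [0.01, 0.1])
print(mean, var)   # 7.6 and (0.02)^2 + (0.3)^2 = 0.0904
```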

  8. Robust Optimization and Sensitivity Analysis with Multi-Objective Genetic Algorithms: Single- and Multi-Disciplinary Applications

    DTIC Science & Technology

    2007-01-01

    multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis is usually used when an optimization model has...formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in...2.5. 2.4 SENSITIVITY ANALYSIS In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis . Inner

  9. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
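
    The tradeoff highlighted above, analytic derivatives versus finite-difference approximation, can be illustrated on a scalar stand-in (not a PyCycle model): the central-difference error depends strongly on the step size, while the analytic derivative has no such tuning parameter.

```python
import math

# Compare an analytic derivative with central finite differences on a toy
# function; f is a stand-in, not an engine cycle model.

def f(x):
    return math.exp(x) * math.sin(x)

def df_analytic(x):
    return math.exp(x) * (math.sin(x) + math.cos(x))

def df_fd(x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.3
err_fine = abs(df_analytic(x) - df_fd(x, 1e-6))    # small step
err_coarse = abs(df_analytic(x) - df_fd(x, 1e-2))  # larger truncation error
print(err_fine, err_coarse)
```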

  10. Defense Small Business Innovation Research Program (SBIR). Volume 3. Air Force Abstracts of Phase 1 Awards

    DTIC Science & Technology

    1990-01-01

    THERE WILL BE A CONTINUING NEED FOR A SENSITIVE, RAPID, AND ECONOMICAL TESTING PROCEDURE CAPABLE OF DETECTING DEFECTS AND PROVIDING FEEDBACK FOR QUALITY...SOLUTIONS. THE DKF METHOD PROVIDES OPTIMAL OR NEAR-OPTIMAL ACCURACY, REDUCE PROCESSING BURDEN, AND IMPROVE FAULT TOLERANCE. THE DKF/MMAE ( DMAE ) TECHNIQUES...DEVICES FOR B-SiC IS TO BE ABLE TO CONSISTENTLY PRODUCE INTRINSIC FILMS WITH VERY LOW DEFECTS AND TO DEVELOP SCHOTTKY AND OHMIC CONTACT MATERIALS THAT WILL

  11. Optimal digital dynamical decoupling for general decoherence via Walsh modulation

    NASA Astrophysics Data System (ADS)

    Qi, Haoyu; Dowling, Jonathan P.; Viola, Lorenza

    2017-11-01

    We provide a general framework for constructing digital dynamical decoupling sequences based on Walsh modulation—applicable to arbitrary qubit decoherence scenarios. By establishing equivalence between decoupling design based on Walsh functions and on concatenated projections, we identify a family of optimal Walsh sequences, which can be exponentially more efficient, in terms of the required total pulse number, for fixed cancellation order, than known digital sequences based on concatenated design. Optimal sequences for a given cancellation order are highly non-unique—their performance depending sensitively on the control path. We provide an analytic upper bound to the achievable decoupling error and show how sequences within the optimal Walsh family can substantially outperform concatenated decoupling in principle, while respecting realistic timing constraints.

  12. A new approach to optimal selection of services in health care organizations.

    PubMed

    Adolphson, D L; Baird, M L; Lawrence, K D

    1991-01-01

    A new reimbursement policy adopted by Medicare in 1983 caused financial difficulties for many hospitals and health care organizations. Several organizations responded to these difficulties by developing systems to carefully measure their costs of providing services. The purpose of such systems was to provide relevant information about the profitability of hospital services. This paper presents a new method of making hospital service selection decisions: it is based on an optimization model that avoids arbitrary cost allocations as a basis for computing the costs of offering a given service. The new method provides more reliable information about which services are profitable or unprofitable, and it provides an accurate measure of the degree to which a service is profitable or unprofitable. The new method also provides useful information about the sensitivity of the optimal decision to changes in costs and revenues. Specialized algorithms for the optimization model lead to very efficient implementation of the method, even for the largest health care organizations.

  13. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
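
    The core idea, casting model correlation as a search for the parameter set that minimizes the model-data discrepancy, can be sketched on a one-parameter toy cooling model (an invented stand-in, not a JWST thermal model):

```python
import math

# Toy version of model correlation as optimization: tune a single time
# constant tau of an exponential cooling model to minimize the sum-of-squares
# discrepancy with "test data". All values are illustrative.

def model(t, tau, T0=300.0, Tenv=40.0):
    return Tenv + (T0 - Tenv) * math.exp(-t / tau)

times = [0.0, 1.0, 2.0, 4.0, 8.0]
data = [model(t, tau=3.0) for t in times]   # synthetic test data, true tau = 3

def discrepancy(tau):
    return sum((model(t, tau) - d) ** 2 for t, d in zip(times, data))

# A simple parameter scan stands in for the global search
best_tau = min((0.5 + 0.01 * i for i in range(1000)), key=discrepancy)
print(best_tau)   # close to 3.0
```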

  14. Computerized optimization of radioimmunoassays for hCG and estradiol: an experimental evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanagishita, M.; Rodbard, D.

    1978-07-15

    The mathematical and statistical theory of radioimmunoassays (RIAs) has been used to develop a series of computer programs to optimize sensitivity or precision at any desired dose level for either equilibrium or nonequilibrium assays. These computer programs provide for the calculation of the equilibrium constants of association and binding capacities for antisera (parameters of Scatchard plots), the association and dissociation rate constants, and prediction of optimum concentration of labeled ligand and antibody and optimum incubation times for the assay. This paper presents an experimental evaluation of the use of these computer programs applied to RIAs for human chorionic gonadotropin (hCG) and estradiol. The experimental results are in reasonable semiquantitative agreement with the predictions of the computer simulations (usually within a factor of two) and thus partially validate the use of computer techniques to optimize RIAs that are reasonably well behaved, as in the case of the hCG and estradiol RIAs. Further, these programs can provide insights into the nature of the RIA system, e.g., the general nature of the sensitivity and precision surfaces. This facilitates empirical optimization of conditions.
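
    The mass-action equilibrium underlying the Scatchard analysis mentioned above can be sketched directly. For a single binding site, bound ligand B satisfies B/F = K(Bmax - B) with free ligand F = L - B, which is a quadratic in B. The parameter values below are illustrative, not the paper's fitted constants.

```python
import math

# Solve the single-site mass-action equilibrium B/F = K*(Bmax - B), F = L - B,
# i.e. K*B^2 - (K*(Bmax + L) + 1)*B + K*Bmax*L = 0 (illustrative parameters).

def bound_at_equilibrium(K, Bmax, L):
    a = K
    b = -(K * (Bmax + L) + 1.0)
    c = K * Bmax * L
    disc = b * b - 4.0 * a * c
    return (-b - math.sqrt(disc)) / (2.0 * a)   # physical (smaller) root

B = bound_at_equilibrium(K=1e9, Bmax=1e-9, L=1e-9)   # K in M^-1, capacities in M
F = 1e-9 - B
print(B / F, 1e9 * (1e-9 - B))   # the two sides of the Scatchard relation agree
```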

  15. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  16. Modification of insulin sensitivity and glycemic control by activity and exercise.

    PubMed

    Roberts, Christian K; Little, Jonathan P; Thyfault, John P

    2013-10-01

    Type 2 diabetes has progressed into a major contributor to preventable death, and developing optimal therapeutic strategies to prevent future type 2 diabetes and its primary clinical manifestation of cardiovascular disease is a major public health challenge. This article will provide a brief overview of the role of activity and exercise in modulating insulin sensitivity and will outline the effect of physical activity, high-intensity interval training, and resistance training on insulin sensitivity and glycemic control.

  17. A sensitivity equation approach to shape optimization in fluid flows

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1994-01-01

    A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on a problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.

  18. A Survey of Shape Parameterization Techniques

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.

  19. Sensitivity of Optimal Solutions to Control Problems for Second Order Evolution Subdifferential Inclusions.

    PubMed

    Bartosz, Krzysztof; Denkowski, Zdzisław; Kalita, Piotr

    In this paper the sensitivity of optimal solutions to control problems described by second order evolution subdifferential inclusions under perturbations of state relations and of cost functionals is investigated. First we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential Γ-convergence, we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of solution sets for the state relations and some complementary Γ-convergence of the cost functionals. Then these two properties are implemented in the considered case.

  20. Optimal cure cycle design for autoclave processing of thick composites laminates: A feasibility study

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.

    1985-01-01

    The thermal analysis and the calculation of thermal sensitivity of a cure cycle in autoclave processing of thick composite laminates were studied. A finite element program for the thermal analysis and for calculating design derivatives of the temperature distribution and the degree of cure was developed and verified. It was found that direct differentiation was the best approach for the thermal design sensitivity analysis. In addition, the direct differentiation approach provided time histories of design derivatives, which are of great value to cure cycle designers. The direct differentiation approach is to be used for further study, i.e., optimal cure cycle design.

  1. Optimal Congestion Management in Electricity Market Using Particle Swarm Optimization with Time Varying Acceleration Coefficients

    NASA Astrophysics Data System (ADS)

    Boonyaritdachochai, Panida; Boonchuay, Chanwit; Ongsakul, Weerakorn

    2010-06-01

    This paper proposes an optimal power redispatching approach for congestion management in deregulated electricity market. Generator sensitivity is considered to indicate the redispatched generators. It can reduce the number of participating generators. The power adjustment cost and total redispatched power are minimized by particle swarm optimization with time varying acceleration coefficients (PSO-TVAC). The IEEE 30-bus and IEEE 118-bus systems are used to illustrate the proposed approach. Test results show that the proposed optimization scheme provides the lowest adjustment cost and redispatched power compared to the other schemes. The proposed approach is useful for the system operator to manage the transmission congestion.
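
    A minimal sketch of PSO with time-varying acceleration coefficients, the optimizer named above, is given below on a toy quadratic objective. The generator-redispatch objective and constraints are not modeled; the coefficient schedules (cognitive c1 decaying from 2.5 to 0.5, social c2 growing from 0.5 to 2.5) follow the common TVAC convention and are an assumption, not the paper's exact settings.

```python
import random

# Minimal PSO-TVAC sketch on a toy sphere function (not the congestion
# management problem). c1 decays and c2 grows linearly over the run.

def pso_tvac(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        frac = t / iters
        c1 = 2.5 - 2.0 * frac        # cognitive coefficient: 2.5 -> 0.5
        c2 = 0.5 + 2.0 * frac        # social coefficient:    0.5 -> 2.5
        w = 0.9 - 0.5 * frac         # inertia weight
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso_tvac(lambda x: sum(v * v for v in x), dim=3)
print(best_f)   # near zero
```

    Early emphasis on each particle's own best encourages exploration; late emphasis on the swarm best encourages convergence, which is the rationale for the time-varying coefficients.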

  2. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter was to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
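
    The validation idea in this abstract (predict the new optimum for a perturbed parameter from an analytical sensitivity, then recompute the optimum directly and compare) can be illustrated on a scalar stand-in. The objective f(x; p) = (x - p²)² below is invented purely for illustration, not an LQG problem.

```python
# First-order prediction of an optimizer shift vs. direct recomputation.
# For f(x; p) = (x - p**2)**2 the optimizer x*(p) = p**2 is known in closed
# form, so its sensitivity dx*/dp = 2p is exact (toy stand-in, not LQG).

def x_star(p):
    return p ** 2

def dx_star_dp(p):
    return 2.0 * p

p0, dp = 1.5, 0.1
predicted = x_star(p0) + dx_star_dp(p0) * dp   # sensitivity-based prediction
actual = x_star(p0 + dp)                       # recomputed optimum
print(predicted, actual)   # 2.55 vs 2.56: first-order prediction is close
```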

  3. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model based control techniques to enhance the safe operation of lithium ion batteries. An overview of the contributions to address the challenges that arise is provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. 
Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. 
A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.

  4. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  5. Investigation on the use of optimization techniques for helicopter airframe vibrations design studies

    NASA Technical Reports Server (NTRS)

    Sreekanta Murthy, T.

    1992-01-01

    Results of the investigation of formal nonlinear programming-based numerical optimization techniques of helicopter airframe vibration reduction are summarized. The objective and constraint function and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.

  6. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. 
The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.

  7. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  8. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in automatic calibration of hydrologic models is applying the sensitivity analysis prior to the global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve the optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good quality solutions in fewer solution evaluations. This improvement can be achieved by increasing the focus of the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with the sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in a significantly smaller number of solution evaluations.
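
    A sketch of the idea is given below. The decaying inclusion probability is standard DDS; the sensitivity weighting of that probability is an assumed illustration of how such a modification might look, since the abstract does not give the exact scheme, and the test function and sensitivity scores are invented.

```python
import math
import random

# DDS-style greedy search on a toy sphere function. Each variable's chance of
# being perturbed decays over the search (standard DDS) and is additionally
# weighted by an assumed per-variable sensitivity score (our illustration,
# not the paper's exact scheme).

def dds_sensitivity(f, lo, hi, sensitivity, evals=500, r=0.2, seed=7):
    rng = random.Random(seed)
    dim = len(lo)
    best = [rng.uniform(lo[d], hi[d]) for d in range(dim)]
    best_f = f(best)
    s_total = float(sum(sensitivity))
    for k in range(1, evals):
        p = 1.0 - math.log(k) / math.log(evals)   # inclusion prob. decays
        cand = best[:]
        chosen = [d for d in range(dim)
                  if rng.random() < p * sensitivity[d] * dim / s_total]
        if not chosen:                            # always perturb >= 1 variable
            chosen = [rng.randrange(dim)]
        for d in chosen:
            cand[d] = min(hi[d], max(lo[d],
                          cand[d] + rng.gauss(0.0, r * (hi[d] - lo[d]))))
        fc = f(cand)
        if fc <= best_f:                          # greedy acceptance
            best, best_f = cand, fc
    return best, best_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = dds_sensitivity(sphere, [-5.0] * 4, [5.0] * 4,
                               sensitivity=[1.0, 1.0, 2.0, 4.0])
print(best_f)
```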

  9. Verification and Optimal Control of Context-Sensitive Probabilistic Boolean Networks Using Model Checking and Polynomial Optimization

    PubMed Central

    Hiraishi, Kunihiko

    2014-01-01

    One of the significant topics in systems biology is to develop control theory of gene regulatory networks (GRNs). In typical control of GRNs, expression of some genes is inhibited (activated) by manipulating external stimuli and expression of other genes. It is expected that control theory of GRNs will be applied to gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, and gene expression is expressed by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), which is one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide useful tools for control theory of GRNs. PMID:24587766

  10. Stiffness optimization of non-linear elastic structures

    DOE PAGES

    Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel

    2017-11-13

    Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance (secant stiffness) designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function, it is shown that although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz type filter. The numerical examples provided show that for low load levels, the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
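
    The adjoint sensitivity idea can be illustrated on the simplest case, linear-elastic compliance, where the problem is self-adjoint and dc/ds_i = -u^T (dK/ds_i) u, so no extra adjoint solve is needed. The two-spring system below is an assumed toy example, not the paper's nonlinear tangent-stiffness formulation; a finite-difference check confirms the adjoint gradient.

```python
import numpy as np

def K(s):
    """Stiffness matrix of a two-spring chain with stiffnesses s1, s2."""
    s1, s2 = s
    return np.array([[s1 + s2, -s2],
                     [-s2,      s2]])

def compliance(s, f):
    u = np.linalg.solve(K(s), f)
    return f @ u, u

def adjoint_grad(s, f):
    """dc/ds_i = -u^T (dK/ds_i) u: self-adjoint, so one solve suffices."""
    _, u = compliance(s, f)
    dK1 = np.array([[1.0, 0.0], [0.0, 0.0]])    # dK/ds1
    dK2 = np.array([[1.0, -1.0], [-1.0, 1.0]])  # dK/ds2
    return np.array([-u @ dK1 @ u, -u @ dK2 @ u])

s, f = np.array([2.0, 1.0]), np.array([0.0, 1.0])
g_adj = adjoint_grad(s, f)

# central finite-difference check of the adjoint gradient
h = 1e-6
g_fd = np.zeros(2)
for i in range(2):
    e = np.zeros(2)
    e[i] = h
    g_fd[i] = (compliance(s + e, f)[0] - compliance(s - e, f)[0]) / (2 * h)
```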

  12. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.

  13. Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers

    NASA Technical Reports Server (NTRS)

    Branner, G. R.; Chan, S.-P.

    1975-01-01

    This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.

  14. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing it with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for computing the equivalent sensitivity information.
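
    The analytical-sensitivity idea (differentiating the optimality equations rather than finite-differencing the whole design loop) can be sketched for the LQR core of an LQG design: differentiating the Riccati equation with respect to a plant parameter yields a Lyapunov equation for dP, from which dK = R^{-1} B^T dP. The plant and parameter below are assumptions for illustration, not the paper's aeroservoelastic model; a finite-difference check validates the analytical result.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

def lqr(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return P, K

def dK_analytic(A, B, Q, R, dA):
    """Sensitivity of the LQR gain to a plant parameter entering via dA.
    Differentiating the ARE gives: A_cl^T dP + dP A_cl + dA^T P + P dA = 0."""
    P, Kg = lqr(A, B, Q, R)
    Acl = A - B @ Kg
    dP = solve_continuous_lyapunov(Acl.T, -(dA.T @ P + P @ dA))
    return np.linalg.solve(R, B.T @ dP)

a = -2.0                                   # assumed plant stiffness parameter
A = np.array([[0.0, 1.0], [a, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
dA = np.array([[0.0, 0.0], [1.0, 0.0]])    # dA/da

dK = dK_analytic(A, B, Q, R, dA)

# central finite-difference validation, as the abstract describes
h = 1e-6
Kp = lqr(np.array([[0.0, 1.0], [a + h, 0.0]]), B, Q, R)[1]
Km = lqr(np.array([[0.0, 1.0], [a - h, 0.0]]), B, Q, R)[1]
dK_fd = (Kp - Km) / (2 * h)
```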

  15. An efficient method of reducing glass dispersion tolerance sensitivity

    NASA Astrophysics Data System (ADS)

    Sparrold, Scott W.; Shepard, R. Hamilton

    2014-12-01

    Constraining the Seidel aberrations of optical surfaces is a common technique for relaxing tolerance sensitivities in the optimization process. We offer an observation that a lens's Abbe number tolerance is directly related to the magnitude by which its longitudinal and transverse color are permitted to vary in production. Based on this observation, we propose a computationally efficient and easy-to-use merit function constraint for relaxing dispersion tolerance sensitivity. Using the relationship between an element's chromatic aberration and dispersion sensitivity, we derive a fundamental limit for lens scale and power that is capable of achieving high production yield for a given performance specification, which provides insight on the point at which lens splitting or melt fitting becomes necessary. The theory is validated by comparing its predictions to a formal tolerance analysis of a Cooke Triplet, and then applied to the design of a 1.5x visible linescan lens to illustrate optimization for reduced dispersion sensitivity. A selection of lenses in high volume production is then used to corroborate the proposed method of dispersion tolerance allocation.

  16. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
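
    Sobol-based screening of decision variables can be sketched with the classic A/B-matrix (Saltelli) estimator of first-order indices; variables whose indices fall below a threshold are dropped before optimization. The test function is an assumed stand-in for a reservoir-operation simulator, and the 0.05 screening threshold is illustrative.

```python
import numpy as np

def sobol_first_order(f, d, n=2**14, seed=0):
    """First-order Sobol indices via the A/B-matrix (Saltelli) estimator.
    `f` maps an (n, d) sample array to n outputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # A with column i taken from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# toy simulator: output driven by x0, weakly by x1, not at all by x2
f = lambda X: 10 * X[:, 0] + X[:, 1] + 0 * X[:, 2]
S = sobol_first_order(f, d=3)
keep = np.where(S > 0.05)[0]                 # screened-in decision variables
```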

  17. Application of characteristic time concepts for hydraulic fracture configuration design, control, and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Advani, S.H.; Lee, T.S.; Moon, H.

    1992-10-01

    The analysis of pertinent energy components or affiliated characteristic times for hydraulic stimulation processes serves as an effective tool for fracture configuration design, optimization, and control. This evaluation, in conjunction with parametric sensitivity studies, provides a rational base for quantifying dominant process mechanisms and the roles of specified reservoir properties relative to controllable hydraulic fracture variables for a wide spectrum of treatment scenarios. Results are detailed for the following multi-task effort: (a) application of characteristic time concepts and parametric sensitivity studies for specialized fracture geometries (rectangular, penny-shaped, elliptical) and three-layered elliptic crack models (in situ stress, elastic moduli, and fracture toughness contrasts); (b) incorporation of leak-off effects for the models investigated in (a); (c) simulation of generalized hydraulic fracture models and investigation of the role of controllable variables and uncontrollable system properties; and (d) development of guidelines for hydraulic fracture design and optimization.

  19. Interferometric sensitivity and entanglement by scanning through quantum phase transitions in spinor Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Feldmann, P.; Gessner, M.; Gabbrielli, M.; Klempt, C.; Santos, L.; Pezzè, L.; Smerzi, A.

    2018-03-01

    Recent experiments demonstrated the generation of entanglement by quasiadiabatically driving through quantum phase transitions of a ferromagnetic spin-1 Bose-Einstein condensate in the presence of a tunable quadratic Zeeman shift. We analyze, in terms of the Fisher information, the interferometric value of the entanglement accessible by this approach. In addition to the Twin-Fock phase studied experimentally, we unveil a second regime, in the broken axisymmetry phase, which provides Heisenberg scaling of the quantum Fisher information and can be reached on shorter time scales. We identify optimal unitary transformations and an experimentally feasible optimal measurement prescription that maximize the interferometric sensitivity. We further ascertain that the Fisher information is robust with respect to nonadiabaticity and measurement noise. Finally, we show that the quasiadiabatic entanglement preparation schemes admit higher sensitivities than dynamical methods based on fast quenches.

  20. Developments in Sensitivity Methodologies and the Validation of Reactor Physics Calculations

    DOE PAGES

    Palmiotti, Giuseppe; Salvatores, Massimo

    2012-01-01

    Sensitivity methodologies have a remarkable record of success in the reactor physics field. Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. A review of the methods used is provided, and several examples illustrate the success of the methodology in reactor physics. A new application, the improvement of nuclear basic parameters using integral experiments, is also described.

  1. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.

    1992-01-01

    The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.

  2. Importance of optimizing chromatographic conditions and mass spectrometric parameters for supercritical fluid chromatography/mass spectrometry.

    PubMed

    Fujito, Yuka; Hayakawa, Yoshihiro; Izumi, Yoshihiro; Bamba, Takeshi

    2017-07-28

    Supercritical fluid chromatography/mass spectrometry (SFC/MS) has great potential for high-throughput and simultaneous analysis of a wide variety of compounds, and it has been widely used in recent years. The use of MS for detection provides the advantages of high sensitivity and high selectivity. However, the sensitivity of MS detection depends on the chromatographic conditions and MS parameters. Thus, optimization of the MS parameters corresponding to the SFC conditions is mandatory for maximizing performance when connecting SFC to MS. The aim of this study was to reveal a way to decide the optimum composition of the mobile phase and the flow rate of the make-up solvent for MS detection over a wide range of compounds. Additionally, we also showed the basic concept for determining the optimum values of the MS parameters, focusing on MS detection sensitivity in SFC/MS analysis. To verify the versatility of these findings, a total of 441 pesticides with a wide range of polarity (log Pow from -4.21 to 7.70) and pKa (acidic, neutral, and basic) were analyzed. In this study, a new SFC-MS interface was used, which can transfer the entire volume of eluate into the MS by directly coupling the SFC with the MS. This enabled us to compare the sensitivity and optimum MS parameters between LC/MS and SFC/MS for the same sample volume introduced into the MS. As a result, it was found that the optimum values of some MS parameters were completely different from those of LC/MS, and that SFC/MS-specific optimization of the analytical conditions is required. Lastly, we evaluated the sensitivity of SFC/MS using fully optimized analytical conditions. As a result, we confirmed that SFC/MS showed much higher sensitivity than LC/MS when the analytical conditions were fully optimized for SFC/MS; the high sensitivity also increases the number of compounds that can be detected with good repeatability in real sample analysis.
This result indicates that SFC/MS has potential for practical use in the multiresidue analysis of a wide range of compounds that requires high sensitivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Optimization and Calibration of Slat Position for a SPECT With Slit-Slat Collimator and Pixelated Detector Crystals

    NASA Astrophysics Data System (ADS)

    Deng, Xiao; Ma, Tianyu; Lecomte, Roger; Yao, Rutao

    2011-10-01

    To expand the availability of SPECT for biomedical research, we developed a SPECT imaging system on an existing animal PET detector by adding a slit-slat collimator. As the detector crystals are pixelated, the relative slat-to-crystal position (SCP) in the axial direction affects the photon flux distribution onto the crystals. Accurate knowledge of the SCP is important to the axial resolution and sensitivity of the system. This work presents a method for optimizing SCP in system design and for determining SCP in system geometrical calibration. The optimization was achieved by finding the SCP that provides higher spatial resolution in terms of the average root-mean-square (RMS) width of the axial point spread function (PSF) without loss of sensitivity. The calibration was based on the least-square-error method that minimizes the difference between the measured and modeled axial point spread projections. The uniqueness and accuracy of the calibration results were validated through a singular value decomposition (SVD) based approach. Both the optimization and calibration techniques were evaluated with Monte Carlo (MC) simulated data. We showed that the RMS width was improved by about 15% with the optimal SCP as compared to the least-optimal SCP, and that system sensitivity was not affected by SCP. The SCP error achieved by the proposed calibration method was less than 0.04 mm. The calibrated SCP value was used in MC simulation to generate the system matrix, which was used for image reconstruction. The images of simulated phantoms showed the expected resolution performance and were artifact free. We conclude that the proposed optimization and calibration method is effective for slit-slat collimator based SPECT systems.

  4. Three-dimensional optimization and sensitivity analysis of dental implant thread parameters using finite element analysis.

    PubMed

    Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi

    2018-04-01

    This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution by means of a response surface optimization method available in finite element (FE) software. The sensitivity of simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implants. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, micro-threads had 0.25 to 0.3 mm depth and 0.27 to 0.33 mm pitch, and V-shaped threads had 0.405 to 0.495 mm depth and 0.66 to 0.8 mm pitch. The optimized depth and pitch were 0.307 and 0.286 mm for micro-threads and 0.405 and 0.808 mm for V-shaped threads, respectively. In this design, the most effective parameters on stress distribution were the depth and pitch of the micro-threads based on sensitivity analysis results. Based on the results of this study, the optimal implant design has micro-threads with 0.307 and 0.286 mm depth and pitch, respectively, in the upper area and V-shaped threads with 0.405 and 0.808 mm depth and pitch in the rest of the body. These results indicate that micro-thread parameters have a greater effect on stress and strain values.
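
    The response-surface optimization step can be sketched as follows: sample the expensive simulation on a small design grid over the thread-parameter ranges, fit a quadratic surface by least squares, and optimize the cheap surface within bounds. The closed-form stand-in for the FE stress model and its minimizer are assumptions for illustration; only the micro-thread parameter ranges come from the abstract.

```python
import numpy as np
from scipy.optimize import minimize

def fe_stress(depth, pitch):
    """Hypothetical stand-in for the FE peak-stress model (assumed)."""
    return (depth - 0.30)**2 + 0.5 * (pitch - 0.29)**2 + 1.0

# design of experiments over the micro-thread ranges from the abstract
depths = np.linspace(0.25, 0.30, 5)
pitches = np.linspace(0.27, 0.33, 5)
D, P = np.meshgrid(depths, pitches)
d, p = D.ravel(), P.ravel()
y = fe_stress(d, p)                          # "simulation" responses

# least-squares fit of a full quadratic response surface
X = np.column_stack([np.ones_like(d), d, p, d * d, d * p, p * p])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def surf(x):
    """Evaluate the fitted response surface at (depth, pitch)."""
    return beta @ np.array([1.0, x[0], x[1], x[0]**2, x[0] * x[1], x[1]**2])

# optimize the cheap surface instead of the costly simulation
res = minimize(surf, x0=[0.27, 0.30],
               bounds=[(0.25, 0.30), (0.27, 0.33)])
```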

  5. Gyroscopic sensing in the wings of the hawkmoth Manduca sexta: the role of sensor location and directional sensitivity.

    PubMed

    Hinson, Brian T; Morgansen, Kristi A

    2015-10-06

    The wings of the hawkmoth Manduca sexta are lined with mechanoreceptors called campaniform sensilla that encode wing deformations. During flight, the wings deform in response to a variety of stimuli, including inertial-elastic loads due to the wing flapping motion, aerodynamic loads, and exogenous inertial loads transmitted by disturbances. Because the wings are actuated, flexible structures, the strain-sensitive campaniform sensilla are capable of detecting inertial rotations and accelerations, allowing the wings to serve not only as a primary actuator, but also as a gyroscopic sensor for flight control. We study the gyroscopic sensing of the hawkmoth wings from a control theoretic perspective. Through the development of a low-order model of flexible wing flapping dynamics, and the use of nonlinear observability analysis, we show that the rotational acceleration inherent in wing flapping enables the wings to serve as gyroscopic sensors. We compute a measure of sensor fitness as a function of sensor location and directional sensitivity by using the simulation-based empirical observability Gramian. Our results indicate that gyroscopic information is encoded primarily through shear strain due to wing twisting, where inertial rotations cause detectable changes in pronation and supination timing and magnitude. We solve an observability-based optimal sensor placement problem to find the optimal configuration of strain sensor locations and directional sensitivities for detecting inertial rotations. The optimal sensor configuration shows parallels to the campaniform sensilla found on hawkmoth wings, with clusters of sensors near the wing root and wing tip. The optimal spatial distribution of strain directional sensitivity provides a hypothesis for how heterogeneity of campaniform sensilla may be distributed.
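
    The empirical observability Gramian used to score sensor configurations can be sketched on a toy system: perturb each initial state by ±eps, simulate the sensor outputs, and correlate the output differences over time. The damped oscillator and single position sensor below are assumptions standing in for the flexible-wing model; a well-conditioned Gramian indicates the states are observable from that sensor.

```python
import numpy as np

def simulate(x0, dt=0.01, T=5.0):
    """Forward-Euler simulation of a damped oscillator, returning the
    output history of a single position sensor (assumed toy system)."""
    A = np.array([[0.0, 1.0], [-4.0, -0.4]])    # oscillator dynamics
    C = np.array([[1.0, 0.0]])                  # measure position only
    n = int(T / dt)
    x = np.array(x0, float)
    ys = np.empty((n, 1))
    for k in range(n):
        ys[k] = C @ x
        x = x + dt * (A @ x)
    return ys

def empirical_gramian(eps=1e-3, dt=0.01):
    """W[i, j] = dt * sum_t dy_i(t) dy_j(t) / (4 eps^2), where dy_i is the
    output difference for a +/- eps perturbation of initial state i."""
    nx = 2
    dys = []
    for i in range(nx):
        e = np.zeros(nx)
        e[i] = eps
        dys.append(simulate(e, dt=dt) - simulate(-e, dt=dt))
    W = np.zeros((nx, nx))
    for i in range(nx):
        for j in range(nx):
            W[i, j] = dt * np.sum(dys[i] * dys[j]) / (4 * eps**2)
    return W

W = empirical_gramian()
cond = np.linalg.cond(W)   # sensor-fitness proxy: smaller is better
```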

  6. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
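
    Of the methods compared, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC) is compact enough to sketch directly: rank-transform the samples and output, regress out the other parameters, and correlate the residuals. The three-parameter toy model below is an assumed stand-in for a transmission model's output, not one of the paper's cholera or schistosomiasis models.

```python
import numpy as np

def lhs(n, d, rng):
    """Latin hypercube sample on [0, 1]^d: one point per stratum per column."""
    return (rng.random((n, d)) + np.argsort(rng.random((n, d)), axis=0)) / n

def prcc(X, y):
    """Partial rank correlation of each column of X with y, controlling
    for the remaining columns (rank-transform, regress, correlate residuals)."""
    n, d = X.shape
    M = np.column_stack([X, y])
    R = np.argsort(np.argsort(M, axis=0), axis=0).astype(float)  # ranks
    out = np.empty(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        Z = np.column_stack([np.ones(n), R[:, others]])
        ri = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
        ry = R[:, d] - Z @ np.linalg.lstsq(Z, R[:, d], rcond=None)[0]
        out[i] = np.corrcoef(ri, ry)[0, 1]
    return out

rng = np.random.default_rng(1)
X = lhs(1000, 3, rng)
# toy "model": strong positive effect of x0, negative of x1, x2 inert
y = 5 * X[:, 0] - 3 * X[:, 1] + 0.1 * rng.standard_normal(1000)
rho = prcc(X, y)
```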

  7. Optimizing desalinated sea water blending with other sources to meet magnesium requirements for potable and irrigation waters.

    PubMed

    Avni, Noa; Eben-Chaime, Moshe; Oron, Gideon

    2013-05-01

    Sea water desalination provides fresh water that typically lacks minerals essential to human health and to agricultural productivity. Thus the rising proportion of desalinated sea water consumed by both the domestic and agricultural sectors constitutes a public health risk. Research on low-magnesium water irrigation showed that crops developed magnesium deficiency symptoms that could lead to plant death, and tomato yields were reduced by 10-15%. The World Health Organization (WHO) reported a relationship between sudden cardiac death rates and magnesium intake deficits. An optimization model, developed and tested to provide recommendations for Water Distribution System (WDS) quality control in terms of meeting optimal water quality requirements, was run in computational experiments based on an actual regional WDS. The expected magnesium deficit due to the operation of a large Sea Water Desalination Plant (SWDP) was simulated, and an optimal operation policy, in which remineralization at the SWDP was combined with blending desalinated and natural water to achieve the required quality, was generated. The effects of remineralization costs and WDS physical layout on the optimal policy were examined by sensitivity analysis, which showed that blending natural and desalinated water near the treatment plants is feasible at costs of up to 16.2 US cents/m3, considering all expenses. Additional chemical injection was used to meet quality criteria when blending was not feasible. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
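
    In its simplest form, the blending decision reduces to a linear program: meet demand and a magnesium target at least cost from sources with different concentrations, prices, and capacities. The numbers below are illustrative assumptions (the remineralization surcharge is loosely based on the 16.2 US cents/m3 figure in the abstract; everything else is invented for the sketch).

```python
import numpy as np
from scipy.optimize import linprog

demand = 100.0          # m^3 to deliver (assumed)
mg_target = 10.0        # required blend Mg concentration, mg/L (assumed)
# sources: [plain desalinated, remineralized desalinated, natural]
mg = np.array([0.0, 25.0, 25.0])      # Mg concentrations, mg/L (assumed)
cost = np.array([0.55, 0.71, 0.40])   # $/m^3; remin = desal + 16.2 cents

res = linprog(
    c=cost,
    A_ub=[-mg],                       # -(Mg mass) <= -(target mass)
    b_ub=[-mg_target * demand],
    A_eq=[[1.0, 1.0, 1.0]],           # volumes must meet demand exactly
    b_eq=[demand],
    bounds=[(0, None), (0, None), (0, 60.0)],  # natural source capacity cap
)
blend = res.x                         # optimal volume from each source
```

With these numbers the cheap, capacity-limited natural source is used in full (60 m^3), which already satisfies the magnesium target, and plain desalinated water fills the remaining demand.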

  8. Optimizing Experimental Design for Comparing Models of Brain Function

    PubMed Central

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-01-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  9. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance and radiative and air temperature data observed during 11 months at a tropical sub-urban site in Singapore. Overall, the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux.
The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative of Sobol's method. Optimization as well as the sensitivity experiments for the three periods (dry, wet and mixed), show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in model formulation. Existence of a significant proportion of less sensitive parameters might be indicating an over-parametrized model. Borg MOEA showed great promise in optimizing the input parameters set. The optimized model modified using the site specific values for thermal roughness length parametrization shows an improvement in the performances of outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.
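    The Morris screening used above as a cheap complement to Sobol indices can be sketched on a toy model. The two-parameter function below is purely illustrative (it is not a SURFEX parametrization), and the sample count, step size, and bounds are assumptions:

```python
import random

def morris_screening(f, bounds, r=30, delta=0.5, seed=7):
    """One-at-a-time Morris elementary-effects screening.

    Returns one (mu_star, sigma) pair per input factor: mu_star ranks the
    overall influence of a factor, sigma flags nonlinearity/interactions.
    Factors are sampled on the unit cube and rescaled to `bounds`.
    """
    random.seed(seed)
    k = len(bounds)

    def scale(u):
        return [lo + ui * (hi - lo) for ui, (lo, hi) in zip(u, bounds)]

    effects = [[] for _ in range(k)]
    for _ in range(r):
        u = [random.uniform(0.0, 1.0 - delta) for _ in range(k)]
        y0 = f(scale(u))
        for i in range(k):
            up = list(u)
            up[i] += delta                 # perturb one factor at a time
            effects[i].append((f(scale(up)) - y0) / delta)

    stats = []
    for e in effects:
        mu_star = sum(abs(v) for v in e) / r
        mean = sum(e) / r
        sigma = (sum((v - mean) ** 2 for v in e) / r) ** 0.5
        stats.append((mu_star, sigma))
    return stats

# Toy "flux" model: the first parameter dominates, the second is nearly inert.
toy_model = lambda p: 5.0 * p[0] + 0.1 * p[1]
stats = morris_screening(toy_model, bounds=[(0.0, 1.0), (0.0, 1.0)])
```

Because only r·(k+1) model runs are needed, this kind of screening is far cheaper than a full Sobol analysis, which is why it is attractive when each model evaluation is expensive.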

  10. Understanding and mimicking the dual optimality of the fly ear

    NASA Astrophysics Data System (ADS)

    Liu, Haijun; Currano, Luke; Gee, Danny; Helms, Tristan; Yu, Miao

    2013-08-01

    The fly Ormia ochracea has the remarkable ability, given an eardrum separation of only 520 μm, to pinpoint the 5 kHz chirp of its cricket host. Previous research showed that the two eardrums are mechanically coupled, which amplifies the directional cues. We have now performed a mechanics and optimization analysis which reveals that the right coupling strength is key: it results in simultaneously optimized directional sensitivity and directional cue linearity at 5 kHz. We next demonstrated that this dual optimality is replicable in a synthetic device and can be tailored for a desired frequency. Finally, we demonstrated a miniature sensor endowed with this dual optimality at 8 kHz with unparalleled sound localization. This work provides a quantitative and mechanistic explanation of the fly's sound-localization ability from a new perspective, and it provides a framework for the development of fly-ear-inspired sensors to overcome a previously insurmountable size constraint in engineered sound-localization systems.

  11. Monte Carlo Optimization of Crystal Configuration for Pixelated Molecular SPECT Scanners

    NASA Astrophysics Data System (ADS)

    Mahani, Hojjat; Raisali, Gholamreza; Kamali-Asl, Alireza; Ay, Mohammad Reza

    2017-02-01

    Resolution-sensitivity-PDA tradeoff is the most challenging problem in the design and optimization of pixelated preclinical SPECT scanners. In this work, we addressed this challenge from a crystal point of view by looking for an optimal pixelated scintillator using GATE Monte Carlo simulation. Various crystal configurations were investigated, and the influence of different pixel sizes, pixel gaps, and three scintillators on the tomographic resolution, sensitivity, and PDA of the camera was evaluated. The crystal configuration was then optimized using two objective functions: the weighted-sum and the figure-of-merit methods. CsI(Na) reveals the highest sensitivity, of the order of 43.47 cps/MBq, in comparison to NaI(Tl) and YAP(Ce), for a 1.5×1.5 mm² pixel size and 0.1 mm gap. The results show that the spatial resolution, in terms of FWHM, improves from 3.38 to 2.21 mm while the sensitivity simultaneously deteriorates from 42.39 cps/MBq to 27.81 cps/MBq when the pixel size varies from 2×2 mm² to 0.5×0.5 mm² for a 0.2 mm gap. The PDA worsens from 0.91 to 0.42 when the pixel size decreases from 1×1 mm² to 0.5×0.5 mm² for a 0.2 mm gap at a 15° incident angle. The two objective functions agree that the 1.5×1.5 mm² pixel size, 0.1 mm epoxy gap CsI(Na) configuration provides the best compromise for small-animal imaging using the HiReSPECT scanner. Our study highlights that the crystal configuration can significantly affect the performance of the camera, and therefore Monte Carlo optimization of pixelated detectors is mandatory in order to achieve an optimal-quality tomogram.
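    A weighted-sum ranking of the kind used to trade off resolution, sensitivity, and PDA can be sketched as follows. The candidate values and equal weights are illustrative stand-ins (only some of them echo figures quoted in the abstract), not the paper's actual figures of merit:

```python
# Hypothetical candidates: (FWHM mm, sensitivity cps/MBq, PDA).
candidates = {
    "0.5 mm pixel": (2.21, 27.81, 0.42),
    "1.5 mm pixel": (2.90, 43.47, 0.85),
    "2.0 mm pixel": (3.38, 42.39, 0.91),
}

def weighted_sum(cands, weights=(1/3, 1/3, 1/3)):
    """Min-max normalise each figure of merit to [0, 1] so that higher is
    better (FWHM is inverted, since smaller is better), then rank the
    candidates by their weighted sum."""
    fwhm, sens, pda = [sorted(v[i] for v in cands.values()) for i in range(3)]

    def norm(v, lo_hi, invert=False):
        lo, hi = lo_hi[0], lo_hi[-1]
        s = (v - lo) / (hi - lo)
        return 1.0 - s if invert else s

    scores = {}
    for name, (f, s, p) in cands.items():
        scores[name] = (weights[0] * norm(f, fwhm, invert=True)
                        + weights[1] * norm(s, sens)
                        + weights[2] * norm(p, pda))
    return max(scores, key=scores.get), scores

best, scores = weighted_sum(candidates)
```

With these stand-in numbers the middle pixel size wins the compromise, mirroring the qualitative conclusion of the abstract; changing the weights shifts the winner, which is exactly the sensitivity a weighted-sum formulation exposes.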

  12. Influence of optimized leading-edge deflection and geometric anhedral on the low-speed aerodynamic characteristics of a low-aspect-ratio highly swept arrow-wing configuration. [Langley 7- by 10-foot tunnel]

    NASA Technical Reports Server (NTRS)

    Coe, P. L., Jr.; Huffman, J. K.

    1979-01-01

    An investigation was conducted in the Langley 7- by 10-foot tunnel to determine the influence of an optimized leading-edge deflection on the low-speed aerodynamic performance of a configuration with a low-aspect-ratio, highly swept wing. The sensitivity of the lateral stability derivative to geometric anhedral was also studied. The optimized leading-edge deflection was developed by aligning the leading edge with the incoming flow along the entire span. Owing to the spanwise variation of upwash, the resulting optimized leading edge was a smooth, continuously warped surface for which the deflection varied from 16 deg at the side of the body to 50 deg at the wing tip. For the particular configuration studied, levels of leading-edge suction on the order of 90 percent were achieved. The results of tests conducted to determine the sensitivity of the lateral stability derivative to geometric anhedral indicate values in reasonable agreement with estimates provided by simple vortex-lattice theories.

  13. Optimizing the interpretation of CT for appendicitis: modeling health utilities for clinical practice.

    PubMed

    Blackmore, C Craig; Terasawa, Teruhiko

    2006-02-01

    Error in radiology can be reduced by standardizing the interpretation of imaging studies to the optimal sensitivity and specificity. In this report, the authors demonstrate how the optimal interpretation of appendiceal computed tomography (CT) can be determined and how it varies across clinical scenarios. Utility analysis and receiver operating characteristic (ROC) curve modeling were used to weigh the trade-off between false-positive and false-negative test results and thereby determine the optimal operating point on the ROC curve for the interpretation of appendicitis CT. Modeling was based on a previous meta-analysis of the accuracy of CT and on literature estimates of the utilities of various health states. The posttest probability of appendicitis was derived using Bayes' theorem. At a low prevalence of disease (screening), appendicitis CT should be interpreted at high specificity (97.7%), even at the expense of lower sensitivity (75%). Conversely, at a high probability of disease, high sensitivity (97.4%) is preferred (specificity 77.8%). When the clinical diagnosis of appendicitis is equivocal, CT interpretation should emphasize both sensitivity and specificity (sensitivity 92.3%, specificity 91.5%). Radiologists can potentially decrease medical error and improve patient health by varying the interpretation of appendiceal CT on the basis of the clinical probability of appendicitis. This report is an example of how utility analysis can be used to guide radiologists in the interpretation of imaging studies and provide guidance on appropriate targets for the standardization of interpretation.
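    The Bayesian update behind these prevalence-dependent operating points can be sketched in a few lines. The sensitivity and specificity are the equivocal-case values quoted in the abstract; the 50% pretest probability is an assumed illustration, not a figure from the paper:

```python
def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Update the disease probability after a test result via Bayes' theorem,
    expressed with likelihood ratios on the odds scale."""
    lr = (sensitivity / (1.0 - specificity) if positive
          else (1.0 - sensitivity) / specificity)
    odds = pretest / (1.0 - pretest) * lr
    return odds / (1.0 + odds)

# Equivocal operating point (sensitivity 92.3%, specificity 91.5%) applied
# to an assumed 50% pretest probability.
p_pos = posttest_probability(0.50, 0.923, 0.915, positive=True)   # CT positive
p_neg = posttest_probability(0.50, 0.923, 0.915, positive=False)  # CT negative
```

A positive scan pushes the probability above 90% while a negative scan drops it below 8%, illustrating why the preferred operating point shifts as the pretest probability changes.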

  14. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  15. Applications of Sharp Interface Method for Flow Dynamics, Scattering and Control Problems

    DTIC Science & Technology

    2012-07-30

    Reynolds number, Advances in Applied Mathematics and Mechanics, to appear. 17. K. Ito and K. Kunisch, Optimal Control of Parabolic Variational ... provides more precise and detailed sensitivity of the solution and describes the dynamical change due to the variation in the Reynolds number. The immersed ... Inequalities, Journal de Math. Pures et Appl., 93 (2010), no. 4, 329-360. 18. K. Ito and K. Kunisch, Semi-smooth Newton Methods for Time-Optimal Control for a

  16. Assessment of regional management strategies for controlling seawater intrusion

    USGS Publications Warehouse

    Reichard, E.G.; Johnson, T.A.

    2005-01-01

    Simulation-optimization methods, applied with adequate sensitivity tests, can provide useful quantitative guidance for controlling seawater intrusion. This is demonstrated in an application to the West Coast Basin of coastal Los Angeles that considers two management options for improving hydraulic control of seawater intrusion: increased injection into barrier wells and in-lieu delivery of surface water to replace current pumpage. For the base-case optimization analysis, assuming constant groundwater demand, in-lieu delivery was determined to be most cost effective. Reduced-cost information from the optimization provided guidance for prioritizing locations for in-lieu delivery. Model sensitivity to a suite of hydrologic, economic, and policy factors was tested. Raising the imposed average water-level constraint at the hydraulic-control locations resulted in nonlinear increases in cost. Systematically varying the relative costs of injection and in-lieu water yielded a trade-off curve between relative costs and injection/in-lieu amounts. Changing the assumed future scenario to one of increasing pumpage in the adjacent Central Basin caused a small increase in the computed costs of seawater intrusion control. Changing the assumed boundary condition representing interaction with an adjacent basin did not affect the optimization results. Reducing the assumed hydraulic conductivity of the main productive aquifer resulted in a large increase in the model-computed cost. Journal of Water Resources Planning and Management © ASCE.

  17. Loss-resistant unambiguous phase measurement

    NASA Astrophysics Data System (ADS)

    Dinani, Hossein T.; Berry, Dominic W.

    2014-08-01

    Entangled multiphoton states have the potential to provide improved measurement accuracy, but are sensitive to photon loss. It is possible to calculate ideal loss-resistant states that maximize the Fisher information, but it is unclear how these could be experimentally generated. Here we propose a set of states that can be obtained by processing the output from parametric down-conversion. Although these states are not optimal, they provide performance very close to that of optimal states for a range of parameters. Moreover, we show how to use sequences of such states in order to obtain an unambiguous phase measurement that beats the standard quantum limit. We consider the optimization of parameters in order to minimize the final phase variance, and find that the optimum parameters are different from those that maximize the Fisher information.
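    As background (these are standard quantum-metrology results, not derived in the abstract), the benchmark being beaten is the standard quantum limit for a phase estimate with $N$ photons, with the Heisenberg limit as the ultimate bound and the Fisher information $F$ setting the attainable variance through the Cramér-Rao inequality over $\nu$ repetitions:

```latex
\Delta\phi_{\mathrm{SQL}} = \frac{1}{\sqrt{N}}, \qquad
\Delta\phi_{\mathrm{HL}} = \frac{1}{N}, \qquad
(\Delta\phi)^2 \ \ge\ \frac{1}{\nu F}.
```

This is why the states in the paper are designed around maximizing $F$: a larger Fisher information directly lowers the achievable phase variance.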

  18. Mechanical design optimization of a single-axis MOEMS accelerometer based on a grating interferometry cavity for ultrahigh sensitivity

    NASA Astrophysics Data System (ADS)

    Lu, Qianbo; Bai, Jian; Wang, Kaiwei; Lou, Shuqi; Jiao, Xufen; Han, Dandan; Yang, Guoguang

    2016-08-01

    The ultrahigh static displacement-acceleration sensitivity of a mechanical sensing chip is essential for an ultrasensitive accelerometer. In this paper, an optimal design for a single-axis MOEMS accelerometer consisting of a grating interferometry cavity and a micromachined sensing chip is presented. The micromachined sensing chip is composed of a proof mass along with its mechanical cantilever suspension and substrate. The dimensional parameters of the sensing chip, including the length, width, thickness and position of the cantilevers, are evaluated and optimized both analytically and by finite-element-method (FEM) simulation to yield an unprecedented acceleration-displacement sensitivity. Compared with one of the most sensitive single-axis MOEMS accelerometers reported in the literature, the optimal mechanical design yields a profound sensitivity improvement with an equal footprint area; specifically, a 200% improvement in displacement-acceleration sensitivity with moderate resonant frequency and dynamic range. The modified design was microfabricated, packaged with the grating interferometry cavity and tested. The experimental results demonstrate that the MOEMS accelerometer with the modified design can achieve an acceleration-displacement sensitivity of about 150 μm/g and an acceleration sensitivity of greater than 1500 V/g, which validates the effectiveness of the optimal design.

  19. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.

  20. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  1. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. This work addresses the estimation of that low-dimensional information, proposing optimization-based and data-driven approaches. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the searching algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted-error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than those of the data-driven approaches. Position estimation mean squared errors were more than twice as large for the optimization-based approaches as for the data-driven ones, under both simulation and experimental conditions.
The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.
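    The optimization-based route, a squared-error cost minimized by a derivative-free search, can be sketched with a toy forward model. The point-anomaly response, the 16-electrode ring, and the compass-search parameters below are all assumptions for illustration, not the authors' actual EIT model:

```python
import math

# 16 electrodes on a unit ring, as in a 16-electrode tomograph setup.
ELECTRODES = [(math.cos(2 * math.pi * k / 16), math.sin(2 * math.pi * k / 16))
              for k in range(16)]

def forward(pos):
    """Toy EIT response: signal decays with distance from each electrode."""
    return [1.0 / (0.1 + (ex - pos[0]) ** 2 + (ey - pos[1]) ** 2)
            for ex, ey in ELECTRODES]

def cost(pos, measured):
    """Squared-error mismatch between measured and modelled responses."""
    return sum((m - f) ** 2 for m, f in zip(measured, forward(pos)))

def pattern_search(measured, x0=(0.0, 0.0), step=0.5, tol=1e-6):
    """Derivative-free compass search: try +/- step on each coordinate and
    halve the step whenever no move improves the cost."""
    x = list(x0)
    best = cost(x, measured)
    while step > tol:
        improved = False
        for i in (0, 1):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                c = cost(trial, measured)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step *= 0.5
    return tuple(x)

true_pos = (0.3, -0.2)                       # hidden anomaly position
estimate = pattern_search(forward(true_pos)) # noise-free "measurements"
```

With noise-free data the search recovers the anomaly position to well below the electrode spacing; adding measurement noise or changing the number of electrodes makes the sensitivity trade-offs discussed above visible.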

  2. Optimization for minimum sensitivity to uncertain parameters

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw

    1994-01-01

    A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
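    The optimum sensitivity derivatives mentioned above follow from a standard constrained-optimization result (stated here in general form, not taken from the paper): at a constrained optimum, the sensitivity of the optimal objective $f^*$ to a fixed parameter $p$ equals the partial derivative of the Lagrangian evaluated at the optimal design $x^*$ and multipliers $\lambda^*$,

```latex
\frac{\mathrm{d} f^*}{\mathrm{d} p}
  = \left.\frac{\partial L}{\partial p}\right|_{x^*,\,\lambda^*}
  = \frac{\partial f}{\partial p}
  + \sum_j \lambda_j^* \,\frac{\partial g_j}{\partial p},
\qquad
L(x,\lambda,p) = f(x,p) + \sum_j \lambda_j\, g_j(x,p),
```

which is why the multipliers supplied by the SQP optimizer are sufficient: no re-optimization is needed for each parameter perturbation.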

  3. On simple aerodynamic sensitivity derivatives for use in interdisciplinary optimization

    NASA Technical Reports Server (NTRS)

    Doggett, Robert V., Jr.

    1991-01-01

    Low-aspect-ratio and piston aerodynamic theories are reviewed as to their use in developing aerodynamic sensitivity derivatives for use in multidisciplinary optimization applications. The basic equations relating surface pressure (or lift and moment) to normal wash are given and discussed briefly for each theory. The general means for determining selected sensitivity derivatives are pointed out. In addition, some suggestions in very general terms are included as to sample problems for use in studying the process of using aerodynamic sensitivity derivatives in optimization studies.

  4. Metroplex Optimization Model Expansion and Analysis: The Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM)

    NASA Technical Reports Server (NTRS)

    Sherry, Lance; Ferguson, John; Hoffman, Karla; Donohue, George; Beradino, Frank

    2012-01-01

    This report describes the Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM), which is designed to provide insights into airline decision-making with regard to markets served, the schedule of flights in these markets, the type of aircraft assigned to each scheduled flight, load factors, airfares, and airline profits. The main inputs to the model are hedged fuel prices, airport capacity limits, and candidate markets. Embedded in the model are aircraft performance and associated cost factors, and willingness-to-pay (i.e., demand vs. airfare) curves. Case studies demonstrate the application of the model to analysis of the effects of increased capacity and changes in operating costs (e.g., fuel prices). Although there are differences between airports (due to differences in the magnitude of travel demand and sensitivity to airfare), the system is more sensitive to changes in fuel prices than to capacity. Further, the benefits of modernization in the form of increased capacity could be undermined by increases in hedged fuel prices.

  5. Development of a highly sensitive three-dimensional gel electrophoresis method for characterization of monoclonal protein heterogeneity.

    PubMed

    Nakano, Keiichi; Tamura, Shogo; Otuka, Kohei; Niizeki, Noriyasu; Shigemura, Masahiko; Shimizu, Chikara; Matsuno, Kazuhiko; Kobayashi, Seiichi; Moriyama, Takanori

    2013-07-15

    Three-dimensional gel electrophoresis (3-DE), which combines agarose gel electrophoresis and isoelectric focusing/SDS-PAGE, was developed to characterize monoclonal proteins (M-proteins). However, the original 3-DE method has not been optimized and its specificity has not been demonstrated. The main goal of this study was to optimize the 3-DE procedure and then compare it with 2-DE. We developed a highly sensitive 3-DE method in which M-proteins are extracted from a first-dimension agarose gel by diffusion into 150 mM NaCl, with an M-protein recovery of 90.6%. To validate the utility of the highly sensitive 3-DE, we compared it with the original 3-DE method. We found that highly sensitive 3-DE provided greater M-protein recovery and was more effective in terms of detecting spots on SDS-PAGE gels than the original 3-DE. Moreover, highly sensitive 3-DE separates residual normal IgG from M-proteins, which could not be done by 2-DE. Applying the highly sensitive 3-DE to clinical samples, we found that the characteristics of M-proteins vary tremendously between individuals. We believe that our highly sensitive 3-DE method described here will prove useful in further studies of the heterogeneity of M-proteins. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Optimizing signal recycling for detecting a stochastic gravitational-wave background

    NASA Astrophysics Data System (ADS)

    Tao, Duo; Christensen, Nelson

    2018-06-01

    Signal recycling is applied in laser interferometers such as the Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) to increase their sensitivity to gravitational waves. In this study, signal recycling configurations for detecting a stochastic gravitational-wave background are optimized based on aLIGO parameters. The optimal transmission of the signal recycling mirror (SRM) and detuning phase of the signal recycling cavity under a fixed laser power and low-frequency cutoff are calculated. Based on the optimal configurations, compatibility with a binary neutron star (BNS) search is discussed. Then, different laser powers and low-frequency cutoffs are considered. Two models for the dimensionless energy density of gravitational waves, a flat model and a power-law model, are studied. For a stochastic background search, it is found that an interferometer using signal recycling has better sensitivity than one without it. The optimal stochastic search configurations are typically found when both the SRM transmission and the signal recycling detuning phase are low. In this region, the BNS range mostly lies between 160 and 180 Mpc. When a lower laser power is used, the optimal signal recycling detuning phase increases, the optimal SRM transmission increases and the optimal sensitivity improves. A reduced low-frequency cutoff gives a better sensitivity limit. For both models, a typical optimal sensitivity limit of the order of 10^-10 is achieved at the reference frequency.

  7. Optimizing Ionic Electrolytes for Dye-Sensitized Solar Cells

    NASA Astrophysics Data System (ADS)

    Fan, Xiaojuan; Hall, Sarah

    2009-03-01

    Dye-sensitized solar cells (DSSCs) are next-generation, low-cost, easily fabricated photovoltaic devices based on organic sensitizing molecules, polymer gel electrolytes, and metal oxide semiconductors. One of the key components is the solvent-free ionic liquid electrolyte, which has low volatility and high stability. We report a rapid and low-cost method to fabricate the ionic polymer electrolyte used in DSSCs. Poly(ethylene oxide) (PEO) is blended with an imidazolinium salt without any chemical solvent to form a gel electrolyte. Uniform and crack-free porous TiO2 thin films are sensitized by porphyrin dye and covered by the synthesized gel electrolyte. The fabricated DSSCs are more stable and potentially increase the photoelectric conversion efficiency.

  8. Pressure- and Temperature-Sensitive Paint at 0.3-m Transonic Cryogenic Tunnel

    NASA Technical Reports Server (NTRS)

    Watkins, A. Neal; Leighty, Bradley D.; Lipford, William E.; Goodman, Kyle Z.

    2015-01-01

    Recently both Pressure- and Temperature-Sensitive Paint experiments were conducted at cryogenic conditions in the 0.3-m Transonic Cryogenic Tunnel at NASA Langley Research Center. This represented a re-introduction of the techniques to the facility after more than a decade, and provided a means to upgrade the measurements using newer technology as well as demonstrate that the techniques were still viable in the facility. Temperature-Sensitive Paint was employed on a laminar airfoil for transition detection and Pressure-Sensitive Paint was employed on a supercritical airfoil. This report will detail the techniques and their unique challenges that need to be overcome in cryogenic environments. In addition, several optimization strategies will also be discussed.

  9. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

    We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's-function-based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, in which a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for the initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the Samoa earthquake tsunami of 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet, associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and yields a source model that shows both sub-events, associated with normal and thrust faulting.

  10. Development of the Advanced Energetic Pair Telescope (AdEPT) for Medium-Energy Gamma-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; Bloser, Peter F.; Dion, Michael P.; McConnell, Mark L.; deNolfo, Georgia A.; Son, Seunghee; Ryan, James M.; Stecker, Floyd W.

    2011-01-01

    Progress in high-energy gamma-ray science has been dramatic since the launch of INTEGRAL, AGILE and FERMI. These instruments, however, are not optimized for observations in the medium-energy regime (approximately 0.3 MeV < E_gamma < approximately 200 MeV), where many astrophysical objects exhibit unique, transitory behavior, such as spectral breaks, bursts, and flares. We outline some of the major science goals of a medium-energy mission. These science goals are best achieved with a combination of two telescopes, a Compton telescope and a pair telescope, optimized to provide significant improvements in angular resolution and sensitivity. In this paper we describe the design of the Advanced Energetic Pair Telescope (AdEPT), based on the Three-Dimensional Track Imager (3-DTI) detector. This technology achieves excellent medium-energy sensitivity, angular resolution near the kinematic limit, and gamma-ray polarization sensitivity through high-resolution 3-D electron tracking. We describe the performance of a 30×30×30 cm³ prototype of the AdEPT instrument.

  11. A framework for sensitivity analysis of decision trees.

    PubMed

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
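    The kind of stability check described above can be sketched on a two-action toy tree. The payoffs, the nominal success probability, and the perturbation size below are all made up for illustration; they are not from the paper's model:

```python
def expected_value(p, win, lose):
    """Expected payoff of a chance node with success probability p."""
    return p * win + (1.0 - p) * lose

def robust_choice(p, eps, safe_payoff=50.0, win=120.0, lose=0.0):
    """Compare a risky action against a certain payoff under nominal,
    pessimistic (p - eps) and optimistic (p + eps) success probabilities,
    mimicking the perturbation-based sensitivity analysis described above."""
    verdicts = {}
    for label, q in (("nominal", p),
                     ("pessimistic", max(0.0, p - eps)),
                     ("optimistic", min(1.0, p + eps))):
        ev = expected_value(q, win, lose)
        verdicts[label] = "risky" if ev > safe_payoff else "safe"
    return verdicts

verdicts = robust_choice(p=0.45, eps=0.10)
```

Here the expected-value-maximizing choice flips under the pessimistic perturbation, so the nominal strategy is not robust: exactly the kind of instability the framework is designed to surface.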

  12. Sensitive Adsorptive Voltammetric Method for Determination of Bisphenol A by Gold Nanoparticle/Polyvinylpyrrolidone-Modified Pencil Graphite Electrode

    PubMed Central

    Yaman, Yesim Tugce; Abaci, Serdar

    2016-01-01

    A novel electrochemical sensor, a gold nanoparticle (AuNP)/polyvinylpyrrolidone (PVP)-modified pencil graphite electrode (PGE), was developed for the ultrasensitive determination of Bisphenol A (BPA). The gold nanoparticles were electrodeposited by constant-potential electrolysis, and PVP was attached by passive adsorption onto the electrode surface. The electrode surfaces were characterized by electrochemical impedance spectroscopy (EIS) and scanning electron microscopy (SEM). The parameters affecting the experimental conditions were investigated and optimized. The AuNP/PVP/PGE sensor provided high sensitivity and selectivity for BPA recognition using square wave adsorptive stripping voltammetry (SWAdSV). Under optimized conditions, the detection limit was found to be 1.0 nM. This new sensor system offered the advantages of simple fabrication, which aided expeditious replication, as well as low cost, fast response, high sensitivity, and low background current for BPA. It was successfully tested for the detection of BPA in bottled drinking water with high reliability. PMID:27231912
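
    A detection limit like the 1.0 nM figure quoted above is typically derived from a linear calibration curve. The sketch below shows the standard 3σ/slope estimate on made-up data; the concentrations, peak currents, and blank noise are invented for illustration and are not the paper's measurements.

```python
# Illustrative LOD calculation from a linear voltammetric calibration:
# LOD = 3 * (std. dev. of blank response) / (calibration slope).

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: peak current (uA) vs BPA concentration (nM).
conc = [5, 10, 20, 40, 80]
current = [0.52, 1.01, 2.05, 3.98, 8.03]
slope, intercept = linear_fit(conc, current)

sd_blank = 0.033          # assumed standard deviation of blank runs (uA)
lod = 3 * sd_blank / slope  # detection limit in nM
```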

  13. Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.

    PubMed

    Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue

    2015-08-20

    Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.

  14. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means of obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
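
    The efficiency claim, that one extra solve yields sensitivities for hundreds of design variables, can be seen on a toy discrete system. The matrices below are arbitrary stand-ins for a discretized flow solver, not NASA's code: the state satisfies A u = b(d), the objective is J = cᵀu, and a single adjoint solve Aᵀλ = c gives the whole gradient dJ/dd = λᵀ(db/dd).

```python
import numpy as np

# Toy discrete-adjoint demonstration on a 2-state, 3-design-variable system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # stand-in "flow" operator
c = np.array([1.0, 2.0])                  # objective weights: J = c @ u

def b_of_d(d):
    """Assumed linear source term b = B d (B is made up)."""
    B = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
    return B @ d, B

def objective(d):
    b, _ = b_of_d(d)
    u = np.linalg.solve(A, b)             # one "forward" solve
    return c @ u

def adjoint_gradient(d):
    _, B = b_of_d(d)
    lam = np.linalg.solve(A.T, c)         # ONE adjoint solve...
    return B.T @ lam                      # ...gives dJ/dd for every variable

d0 = np.array([1.0, 2.0, 0.5])
grad = adjoint_gradient(d0)

# Cross-check against one-sided finite differences.
eps = 1e-6
fd = np.array([(objective(d0 + eps * np.eye(3)[i]) - objective(d0)) / eps
               for i in range(3)])
```

    The adjoint solve count is independent of the number of design variables, which is the property that makes hundreds of variables tractable.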

  15. The anatomy of choice: active inference and agency.

    PubMed

    Friston, Karl; Schwartenbeck, Philipp; Fitzgerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J

    2013-01-01

    This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback-Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action, constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control.
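
    The role of the inverse temperature in the softmax (quantal response) choice rule mentioned above is easy to see numerically. The utilities below are invented for illustration; in the paper the precision itself has a Bayes-optimal value rather than being hand-set as here.

```python
import math

# Softmax choice rule: P(action) proportional to exp(beta * utility),
# where beta is the inverse temperature / precision.

def softmax_choice(utilities, beta):
    """Return choice probabilities under inverse temperature beta."""
    exps = [math.exp(beta * u) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

u = [1.0, 0.5, 0.0]                    # made-up expected utilities
p_low = softmax_choice(u, beta=0.1)    # low precision: near-uniform choice
p_high = softmax_choice(u, beta=10.0)  # high precision: near-deterministic
```

    Low precision spreads probability almost evenly across actions; high precision concentrates it on the best action, which is how the formulation recovers expected utility maximization as a limiting case.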

  16. Two retailer-supplier supply chain models with default risk under trade credit policy.

    PubMed

    Wu, Chengfeng; Zhao, Qiuhong

    2016-01-01

    The purpose of this paper is to formulate two uncooperative replenishment models in which demand and default risk are functions of the trade credit period, namely a Nash equilibrium model and a supplier-Stackelberg model. Firstly, we present the optimal results of decentralized and centralized decisions without trade credit. Secondly, we derive the existence and uniqueness conditions of the optimal solutions under the two games, respectively. Moreover, we present a set of theorems and a corollary to determine the optimal solutions. Finally, we provide an example and sensitivity analysis to illustrate the proposed strategy and optimal solutions. The sensitivity analysis reveals that the total profits of the supply chain under both games are better than the results under the centralized decision, provided the optimal trade credit period is not too short. It also reveals that the size of the trade credit period, demand, retailer's profit, and supplier's profit have a strong relationship with the increasing demand coefficient, wholesale price, default risk coefficient, and production cost. The major contribution of the paper is a comprehensive comparison between the results of decentralized and centralized decisions without trade credit and the Nash equilibrium and supplier-Stackelberg models with trade credit, yielding some interesting managerial insights and practical implications.

  17. Sensitivity analysis and optimization method for the fabrication of one-dimensional beam-splitting phase gratings

    PubMed Central

    Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang

    2015-01-01

    A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1×9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviation of the 9 outgoing beam energies in the optimized gratings was 2.3 and 3.4 times lower than in the optimal-efficiency grating. PMID:25969268
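
    The robustness metric described above can be sketched under standard scalar-diffraction assumptions: for a periodic phase profile φ(x), the power in diffraction order m is |c_m|², where c_m are the Fourier coefficients of exp(iφ(x)). The fabrication-error model below (a uniform phase-depth scaling φ → (1+ε)φ) is our assumption for illustration; the paper's error model may differ.

```python
import numpy as np

# Sketch of a robust-design cost for a 1x9 beam-splitting phase grating:
# mean (over fabrication errors eps) of the variance of the 9 central
# diffraction-order energies.

N = 256
x = np.arange(N) / N  # one grating period, normalized

def order_energies(phi, n_orders=9):
    """Energies of the central n_orders diffraction orders of exp(i*phi)."""
    c = np.fft.fft(np.exp(1j * phi)) / N          # Fourier coefficients
    half = n_orders // 2
    return np.array([abs(c[m % N]) ** 2 for m in range(-half, half + 1)])

def robust_cost(phi, eps_range=np.linspace(-0.05, 0.05, 11)):
    """Integrated (here: averaged) variance of beam energies over errors."""
    return np.mean([np.var(order_energies((1 + e) * phi)) for e in eps_range])

# Example: compare a plain binary grating with a smoothed sinusoidal profile.
binary = np.pi * (np.sin(2 * np.pi * x) > 0)
smooth = (np.pi / 2) * (1 + np.sin(2 * np.pi * x))
cost_binary = robust_cost(binary)
cost_smooth = robust_cost(smooth)
```

    An optimizer would adjust the phase samples to drive this cost down, trading a little nominal uniformity for insensitivity to etch-depth errors.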

  18. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.

    PubMed

    Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi

    2014-12-01

    In this study, two experimental data sets, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia) as well as an equally weighted combination of the five. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating the methane experimental results, it predicted other intermediary outputs less accurately. The multi-objective optimization, on the other hand, provided better overall results than methane-only optimization, even where it did not fully capture every intermediary output. The results of the parameter optimization were validated by their independent application to the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.
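
    The equally weighted multi-objective error used for such calibration usually takes a generic form like the sketch below: each indicator's error is normalized by the observation scale so that no single output (e.g., methane in large units vs. pH in small units) dominates. This is an illustration with invented numbers, not ADM1 output.

```python
# Equally weighted combination of normalized errors across indicators,
# as commonly used for multi-objective model calibration.

def normalized_rmse(simulated, observed):
    """RMSE normalized by the observed range (falls back to 1 if flat)."""
    n = len(observed)
    rmse = (sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n) ** 0.5
    scale = (max(observed) - min(observed)) or 1.0
    return rmse / scale

def combined_objective(sim_by_indicator, obs_by_indicator):
    """Equally weighted mean of normalized RMSEs over all indicators."""
    errs = [normalized_rmse(sim_by_indicator[k], obs_by_indicator[k])
            for k in obs_by_indicator]
    return sum(errs) / len(errs)

# Invented observations and simulations for two of the five indicators.
obs = {"methane": [10.0, 12.0, 15.0], "pH": [7.1, 7.0, 6.9]}
sim = {"methane": [11.0, 12.0, 14.0], "pH": [7.0, 7.0, 7.0]}
score = combined_objective(sim, obs)
```

    Minimizing `score` instead of the methane error alone is what pulls the calibrated parameters toward matching all the intermediary outputs at once.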

  19. Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.

    PubMed

    Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric

    2018-03-01

    Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method, multi-level particle swarm optimization (MLPSO), to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that the proposed algorithm yields a more suitable modeling framework, computational time savings, and better optimization performance than those reported in the literature on this subject.
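
    For readers unfamiliar with the building block, a plain single-level PSO (not the paper's MLPSO, and with a toy cost surface standing in for the logistic cost) looks like this:

```python
import random

# Minimal particle swarm optimization: particles track personal and global
# bests and update velocities with inertia, cognitive, and social terms.

def pso(cost, dim, n_particles=20, iters=200, seed=1,
        w=0.7, c1=1.5, c2=1.5, bound=5.0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy stand-in for a logistic cost surface: a shifted sphere function.
best, best_cost = pso(lambda p: sum((x - 1.0) ** 2 for x in p), dim=3)
```

    The multi-level variant layers such swarms so that location decisions at one level constrain routing and planning decisions at the next.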

  20. Self-tuning bistable parametric feedback oscillator: Near-optimal amplitude maximization without model information

    NASA Astrophysics Data System (ADS)

    Braun, David J.; Sutas, Andrius; Vijayakumar, Sethu

    2017-01-01

    Theory predicts that parametrically excited oscillators, tuned to operate under resonant conditions, are capable of the large-amplitude oscillation useful in diverse applications, such as signal amplification, communication, and analog computation. However, due to amplitude saturation caused by nonlinearity, lack of robustness to model uncertainty, and limited sensitivity to parameter modulation, these oscillators require fine-tuning and strong modulation to generate robust large-amplitude oscillation. Here we present a principle of self-tuning parametric feedback excitation that alleviates the above-mentioned limitations. This is achieved using a minimalistic control implementation that performs (i) self-tuning (slow parameter adaptation) and (ii) feedback pumping (fast parameter modulation), without sophisticated signal processing of past observations. The proposed approach provides near-optimal amplitude maximization without requiring model-based control computation, previously perceived as inevitable for implementing optimal control principles in practical applications. Experimental implementation of the theory shows that the oscillator tunes itself near the onset of dynamic bifurcation to achieve extreme sensitivity to small resonant parametric perturbations. As a result, it achieves large-amplitude oscillations by capitalizing on the effect of nonlinearity, despite substantial model uncertainties and strong unforeseen external perturbations. We envision the present finding to provide an effective and robust approach to parametric excitation in real-world applications.

  1. Increased bioassay sensitivity of bioactive molecule discovery using metal-enhanced bioluminescence

    NASA Astrophysics Data System (ADS)

    Golberg, Karina; Elbaz, Amit; McNeil, Ronald; Kushmaro, Ariel; Geddes, Chris D.; Marks, Robert S.

    2014-12-01

    We report the use of bioluminescence signal enhancement via proximity to deposited silver nanoparticles for bioactive compound discovery. This approach employs a whole-cell bioreporter harboring a plasmid-borne fusion of a specific promoter with a bioluminescence reporter gene. The silver deposition process was first optimized with fluorescein to determine the reaction time yielding the optimal nanoparticle size. Silver deposition of 350 nm particles doubled the bioluminescent signal amplitude of the bacterial bioreporter compared to an untreated, non-silver-deposited microtiter plate surface. This recording is carried out at the less optimal but necessary far-field distance. SEM micrographs provided a visualization of the proximity of the bioreporter to the silver nanoparticles. The electromagnetic field distributions around the nanoparticles were simulated using the Finite Difference Time Domain method, further suggesting a re-excitation of non-chemically excited bioluminescence in addition to metal-enhanced bioluminescence. The possibility of an antiseptic silver effect caused by such close proximity was ruled out by the dynamic growth curves of the bioreporter strains, as confirmed by viability staining. As a highly attractive biotechnology tool, this silver deposition technique, coupled with whole-cell sensing, enables increased bioluminescence sensitivity, making it especially useful for cases in which reporter luminescence signals are very weak.

  2. Instructional versus schedule control of humans' choices in situations of diminishing returns

    PubMed Central

    Hackenberg, Timothy D.; Joker, Veronica R.

    1994-01-01

    Four adult humans chose repeatedly between a fixed-time schedule (of points later exchangeable for money) and a progressive-time schedule that began at 0 s and increased by a fixed number of seconds with each point delivered by that schedule. Each point delivered by the fixed-time schedule reset the requirements of the progressive-time schedule to its minimum value. Subjects were provided with instructions that specified a particular sequence of choices. Under the initial conditions, the instructions accurately specified the optimal choice sequence. Thus, control by instructions and optimal control by the programmed contingencies both supported the same performance. To distinguish the effects of instructions from schedule sensitivity, the correspondence between the instructed and optimal choice patterns was gradually altered across conditions by varying the step size of the progressive-time schedule while maintaining the same instructions. Step size was manipulated, typically in 1-s units, first in an ascending and then in a descending sequence of conditions. Instructions quickly established control in all 4 subjects but, by narrowing the range of choice patterns, they reduced subsequent sensitivity to schedule changes. Instructional control was maintained across the ascending sequence of progressive-time values for each subject, but eventually diminished, giving way to more schedule-appropriate patterns. The transition from instruction-appropriate to schedule-appropriate behavior was characterized by an increase in the variability of choice patterns and local increases in point density. On the descending sequence of progressive-time values, behavior appeared to be schedule sensitive, sometimes even optimally sensitive, but it did not always change systematically with the contingencies, suggesting the involvement of other factors. PMID:16812747

  3. Near-infrared voltage-sensitive fluorescent dyes optimized for optical mapping in blood-perfused myocardium.

    PubMed

    Matiukas, Arvydas; Mitrea, Bogdan G; Qin, Maochun; Pertsov, Arkady M; Shvedko, Alexander G; Warren, Mark D; Zaitsev, Alexey V; Wuskell, Joseph P; Wei, Mei-de; Watras, James; Loew, Leslie M

    2007-11-01

    Styryl voltage-sensitive dyes (e.g., di-4-ANEPPS) have been used successfully for optical mapping in cardiac cells and tissues. However, their utility for probing electrical activity deep inside the myocardial wall and in blood-perfused myocardium has been limited because of light scattering and high absorption by endogenous chromophores and hemoglobin at blue-green excitation wavelengths. The purpose of this study was to characterize two new styryl dyes, di-4-ANBDQPQ (JPW-6003) and di-4-ANBDQBS (JPW-6033), optimized for blood-perfused tissue and intramural optical mapping. Voltage-dependent spectra were recorded in a model lipid bilayer. Optical mapping experiments were conducted in four species (mouse, rat, guinea pig, and pig). Hearts were Langendorff perfused using Tyrode's solution and blood (pig). Dyes were loaded via bolus injection into the perfusate. Transillumination experiments were conducted in isolated coronary-perfused pig right ventricular wall preparations. The optimal excitation wavelength in cardiac tissues (650 nm) was >70 nm beyond the absorption maximum of hemoglobin. Voltage sensitivity of both dyes was approximately 10% to 20%. Signal decay half-life due to dye internalization was 80 to 210 minutes, which is 5 to 7 times slower than for di-4-ANEPPS. In transillumination mode, ΔF/F was as high as 20%. In blood-perfused tissues, ΔF/F reached 5.5% (1.8 times higher than for di-4-ANEPPS). We have synthesized and characterized two new near-infrared dyes with excitation/emission wavelengths shifted >100 nm to the red. They provide both high voltage sensitivity and a 5 to 7 times slower internalization rate compared to conventional dyes. The dyes are optimized for deeper tissue probing and optical mapping of blood-perfused tissue, but they can also be used for conventional applications.

  4. The value of compressed air energy storage in energy and reserve markets

    DOE PAGES

    Drury, Easan; Denholm, Paul; Sioshansi, Ramteen

    2011-06-28

    Storage devices can provide several grid services; however, it is challenging to quantify the value of providing several services and to optimally allocate storage resources to maximize value. We develop a co-optimized Compressed Air Energy Storage (CAES) dispatch model to characterize the value of providing operating reserves in addition to energy arbitrage in several U.S. markets. We use the model to: (1) quantify the added value of providing operating reserves in addition to energy arbitrage; (2) evaluate the dynamic nature of optimally allocating storage resources into energy and reserve markets; and (3) quantify the sensitivity of CAES net revenues to several design and performance parameters. We find that conventional CAES systems could earn an additional $23 ± 10/kW-yr by providing operating reserves, and adiabatic CAES systems could earn an additional $28 ± 13/kW-yr. We find that arbitrage-only revenues are unlikely to support a CAES investment in most market locations, but the addition of reserve revenues could support a conventional CAES investment in several markets. Adiabatic CAES revenues are not likely to support an investment in most regions studied. As a result, modifying CAES design and performance parameters primarily impacts arbitrage revenues, and optimizing CAES design will be nearly independent of dispatch strategy.
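
    As a back-of-envelope illustration of the arbitrage component only (not the paper's co-optimized dispatch model, which also bids reserves and models CAES fuel use), a storage device buys in the cheapest hours and sells recovered energy in the priciest ones, profitably only when the price spread beats the round-trip efficiency. Prices and efficiency below are invented.

```python
# Simple daily arbitrage sketch for a generic storage device.

def arbitrage_value(prices, round_trip_eff=0.7, hours_each=4):
    """Revenue from buying 1 MWh in each of the cheapest hours and selling
    the recovered energy across the most expensive hours."""
    ordered = sorted(prices)
    buy = ordered[:hours_each]            # cheapest hours: charge
    sell = ordered[-hours_each:]          # priciest hours: discharge
    cost = sum(buy)                       # energy purchased to charge
    revenue = round_trip_eff * sum(sell)  # energy recovered and sold
    return revenue - cost

# Hypothetical day-ahead prices ($/MWh): morning trough, evening peak.
prices = [22, 20, 18, 17, 19, 25, 35, 45, 50, 48, 44, 40,
          38, 36, 37, 42, 55, 70, 80, 75, 60, 45, 30, 25]
profit = arbitrage_value(prices)
```

    With a flat price profile the same schedule loses money (the efficiency penalty has nothing to offset it), which is the intuition behind arbitrage-only revenues failing to support the investment in many markets.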

  5. Anionic pH-Sensitive Lipoplexes.

    PubMed

    Mignet, Nathalie; Scherman, Daniel

    2017-01-01

    To provide long-circulating nanoparticles able to carry a gene to tumors, we have designed anionic pegylated lipoplexes which are pH sensitive. The anionic pegylated lipoplexes were prepared from the combined formulation of cationic lipoplexes and pegylated anionic liposomes. The neutralization of the particle surface charge as a function of pH was monitored by light scattering in order to determine the ratio between anionic and cationic lipids that would give pH-sensitive complexes. This ratio was optimized to form particles sensitive to pH changes in the range 5.5-6.5. Compaction of DNA into the newly formed anionic complexes was checked by DNA accessibility to PicoGreen. The transfection efficiency and pH-sensitive properties of these formulations have been shown in vitro using bafilomycin, a vacuolar H(+)-ATPase inhibitor.

  6. CALIBRATION, OPTIMIZATION, AND SENSITIVITY AND UNCERTAINTY ALGORITHMS APPLICATION PROGRAMMING INTERFACE (COSU-API)

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...

  7. Optimal policy for profit maximising in an EOQ model under non-linear holding cost and stock-dependent demand rate

    NASA Astrophysics Data System (ADS)

    Pando, V.; García-Laguna, J.; San-José, L. A.

    2012-11-01

    In this article, we integrate a non-linear holding cost with a stock-dependent demand rate in a model maximising profit per unit time, extending several inventory models studied by other authors. After giving the mathematical formulation of the inventory system, we prove the existence and uniqueness of the optimal policy. Relying on this result, we can obtain the optimal solution using different numerical algorithms. Moreover, we provide a necessary and sufficient condition to determine whether a system is profitable, and we establish a rule to check whether a given order quantity is the optimal lot size of the inventory model. The results are illustrated through numerical examples, and the sensitivity of the optimal solution with respect to changes in some parameter values is assessed.
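
    A numerical illustration in the spirit of this model (the functional forms and all parameter values below are our assumptions, not the authors'): take a stock-dependent demand rate D(I) = a·I^b and a nonlinear holding cost h0·I^g per unit held at level I. Integrating the inventory dynamics over a cycle gives a profit-per-unit-time function of the lot size q, which, being unimodal here, can be maximized by a simple ternary search.

```python
# Illustrative profit-per-unit-time maximization for an EOQ-type model
# with stock-dependent demand and nonlinear holding cost (assumed forms).

a, b = 4.0, 0.3      # demand rate D(I) = a * I**b, 0 < b < 1
h0, g = 0.4, 1.2     # holding cost of a unit at stock level I: h0 * I**g
K = 50.0             # fixed ordering cost per cycle
margin = 6.0         # selling price minus unit purchase cost

def profit_rate(q):
    """Average profit per unit time for lot size q (closed-form cycle
    integrals for the assumed power-law demand and holding cost)."""
    cycle_time = q ** (1 - b) / (a * (1 - b))
    holding = h0 * q ** (g - b + 1) / (a * (g - b + 1))
    return (margin * q - K - holding) / cycle_time

def ternary_max(f, lo, hi, tol=1e-6):
    """Maximize a unimodal function on [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

q_star = ternary_max(profit_rate, 1.0, 500.0)
```

    The paper's contribution is proving that such a maximizer exists and is unique, which is precisely what licenses handing the problem to a numerical routine like this.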

  8. Sonic Boom Mitigation Through Aircraft Design and Adjoint Methodology

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.; Diskin, Boris; Nielsen, Eric J.

    2012-01-01

    This paper presents a novel approach to design of the supersonic aircraft outer mold line (OML) by optimizing the A-weighted loudness of sonic boom signature predicted on the ground. The optimization process uses the sensitivity information obtained by coupling the discrete adjoint formulations for the augmented Burgers Equation and Computational Fluid Dynamics (CFD) equations. This coupled formulation links the loudness of the ground boom signature to the aircraft geometry thus allowing efficient shape optimization for the purpose of minimizing the impact of loudness. The accuracy of the adjoint-based sensitivities is verified against sensitivities obtained using an independent complex-variable approach. The adjoint based optimization methodology is applied to a configuration previously optimized using alternative state of the art optimization methods and produces additional loudness reduction. The results of the optimizations are reported and discussed.

  9. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    PubMed

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.
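
    The Pareto-filtering step at the heart of this selection process can be sketched in a few lines (our illustration, not TreePOD's code): from a pool of candidate trees summarized as (accuracy, size), keep only those not dominated by any other candidate. The candidate values below are invented.

```python
# Pareto filtering of candidate decision trees on an accuracy/size trade-off.

def pareto_front(candidates):
    """Candidates are (accuracy, n_leaves) pairs; higher accuracy and
    fewer leaves are both preferred.  Returns the non-dominated subset."""
    front = []
    for c in candidates:
        dominated = any(o[0] >= c[0] and o[1] <= c[1] and o != c
                        for o in candidates)
        if not dominated:
            front.append(c)
    return front

# Hypothetical candidates sampled from tree-construction parameters.
trees = [(0.91, 40), (0.88, 12), (0.90, 25), (0.85, 6), (0.87, 30)]
front = pareto_front(trees)
```

    Only the frontier candidates need to be inspected visually, which is what makes the guided selection efficient even when thousands of trees are sampled.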

  10. Optimal cutoff points for HOMA-IR and QUICKI in the diagnosis of metabolic syndrome and non-alcoholic fatty liver disease: A population based study.

    PubMed

    Motamed, Nima; Miresmail, Seyed Javad Haji; Rabiee, Behnam; Keyvani, Hossein; Farahani, Behzad; Maadi, Mansooreh; Zamani, Farhad

    2016-03-01

    The present study was carried out to determine the optimal cutoff points for the homeostatic model assessment of insulin resistance (HOMA-IR) and the quantitative insulin sensitivity check index (QUICKI) in the diagnosis of metabolic syndrome (MetS) and non-alcoholic fatty liver disease (NAFLD). The baseline data of 5511 subjects aged ≥18 years from a cohort study in northern Iran were utilized for the analysis. Receiver operating characteristic (ROC) analysis was conducted to determine the discriminatory capability of HOMA-IR and QUICKI in the diagnosis of MetS and NAFLD, and the Youden index was utilized to determine the optimal cutoff points. The optimal cutoff points for HOMA-IR in the diagnosis of MetS and NAFLD were 2.0 [sensitivity=64.4%, specificity=66.8%] and 1.79 [sensitivity=66.2%, specificity=62.2%] in men, and 2.5 [sensitivity=57.6%, specificity=67.9%] and 1.95 [sensitivity=65.1%, specificity=54.7%] in women, respectively. Furthermore, the optimal cutoff points for QUICKI in the diagnosis of MetS and NAFLD were 0.343 [sensitivity=63.7%, specificity=67.8%] and 0.347 [sensitivity=62.9%, specificity=65.0%] in men, and 0.331 [sensitivity=55.7%, specificity=70.7%] and 0.333 [sensitivity=53.2%, specificity=67.7%] in women, respectively. Not only were the optimal cutoff points of HOMA-IR and QUICKI different for MetS and NAFLD, but different cutoff points were also obtained for men and women for each of these two conditions. Copyright © 2016 Elsevier Inc. All rights reserved.
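
    The cutoff-selection procedure used above, maximizing Youden's J = sensitivity + specificity − 1 along the ROC curve, can be sketched directly. The ten HOMA-IR values and labels below are invented for the example (they are not the cohort's data); HOMA-IR itself is fasting insulin (µU/mL) × fasting glucose (mg/dL) / 405.

```python
# Optimal cutoff by the Youden index: scan each observed score as a
# candidate threshold (positive if score >= cutoff) and keep the one
# maximizing J = sensitivity + specificity - 1.

def youden_optimal_cutoff(scores, labels):
    best_cut, best_j = None, -1.0
    positives = sum(labels)
    negatives = len(labels) - positives
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 0)
        sens = tp / positives
        spec = tn / negatives
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical HOMA-IR values with MetS labels (1 = MetS present).
homa = [1.1, 1.4, 1.6, 1.8, 2.1, 2.3, 2.6, 3.0, 3.4, 4.0]
mets = [0,   0,   0,   1,   0,   1,   1,   1,   0,   1]
cutoff, j = youden_optimal_cutoff(homa, mets)
```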

  11. Optimization of flow-sensitive alternating inversion recovery (FAIR) for perfusion functional MRI of rodent brain.

    PubMed

    Nasrallah, Fatima A; Lee, Eugene L Q; Chuang, Kai-Hsiang

    2012-11-01

    Arterial spin labeling (ASL) MRI provides a noninvasive method to image perfusion, and has been applied to map neural activation in the brain. Although pulsed labeling methods have been widely used in humans, continuous ASL with a dedicated neck labeling coil is still the preferred method in rodent brain functional MRI (fMRI) to maximize the sensitivity and allow multislice acquisition. However, the additional hardware is not readily available and hence its application is limited. In this study, flow-sensitive alternating inversion recovery (FAIR) pulsed ASL was optimized for fMRI of rat brain. A practical challenge of FAIR is the suboptimal global inversion by the transmit coil of limited dimensions, which results in low effective labeling. By using a large volume transmit coil and proper positioning to optimize the body coverage, the perfusion signal was increased by 38.3% compared with positioning the brain at the isocenter. An additional 53.3% gain in signal was achieved using optimized repetition and inversion times compared with a long TR. Under electrical stimulation to the forepaws, a perfusion activation signal change of 63.7 ± 6.3% can be reliably detected in the primary somatosensory cortices using single slice or multislice echo planar imaging at 9.4 T. This demonstrates the potential of using pulsed ASL for multislice perfusion fMRI in functional and pharmacological applications in rat brain. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Field-based optimal-design of an electric motor: a new sensitivity formulation

    NASA Astrophysics Data System (ADS)

    Barba, Paolo Di; Mognaschi, Maria Evelina; Lowther, David Alister; Wiak, Sławomir

    2017-12-01

    In this paper, a new approach to robust optimal design is proposed. The idea is to consider the sensitivity by means of two auxiliary criteria A and D, related to the magnitude and isotropy of the sensitivity, respectively. The optimal design of a switched-reluctance motor is considered as a case study: since the case study exhibits two design criteria, the relevant Pareto front is approximated by means of evolutionary computing.

  13. Neuro-genetic system for optimization of GMI samples sensitivity.

    PubMed

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and the DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes sample sensitivity is usually empirical and very time-consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.
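
    The search stage of such a neuro-genetic system can be sketched with a minimal genetic algorithm. Everything below is illustrative: in the paper the fitness comes from a trained MLP surrogate of the impedance phase, which we replace with a made-up smooth stand-in function, and the parameter names and bounds are assumptions.

```python
import random

# Minimal GA maximizing a stand-in "sensitivity" surface over two
# hypothetical conditioning parameters (excitation frequency, DC level).

def surrogate_sensitivity(freq_mhz, dc_ma):
    """Stand-in fitness peaking near freq = 0.6 MHz, dc = 80 mA."""
    return 1.0 / (1.0 + (freq_mhz - 0.6) ** 2 + ((dc_ma - 80.0) / 40.0) ** 2)

def genetic_search(fitness, bounds, pop_size=30, gens=60, seed=7,
                   mut_sigma=0.05, elite=2):
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    def mutate(ind):
        # Gaussian mutation, clamped to the parameter bounds.
        return [min(hi, max(lo, x + rng.gauss(0, mut_sigma * (hi - lo))))
                for x, (lo, hi) in zip(ind, bounds)]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda i: fitness(*i), reverse=True)
        nxt = pop[:elite]                               # elitism
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)  # select from top half
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # averaging crossover
            nxt.append(mutate(child))
        pop = nxt
    return max(pop, key=lambda i: fitness(*i))

best = genetic_search(surrogate_sensitivity,
                      bounds=[(0.1, 5.0), (0.0, 200.0)])
```

    Swapping the stand-in fitness for a trained surrogate model is what turns this generic loop into the paper's approach: the GA never queries the physical samples directly, only the learned model.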

  14. Sensory Optimization by Stochastic Tuning

    PubMed Central

    Jurica, Peter; Gepshtein, Sergei; Tyukin, Ivan; van Leeuwen, Cees

    2013-01-01

    Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system’s preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit, and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: the higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics, and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation. PMID:24219849

  15. Design and Analysis of Optimal Ascent Trajectories for Stratospheric Airships

    NASA Astrophysics Data System (ADS)

    Mueller, Joseph Bernard

    Stratospheric airships are lighter-than-air vehicles that have the potential to provide a long-duration airborne presence at altitudes of 18-22 km. Designed to operate on solar power in the calm portion of the lower stratosphere and above all regulated air traffic and cloud cover, these vehicles represent an emerging platform that resides between conventional aircraft and satellites. A particular challenge for airship operation is the planning of ascent trajectories, as the slow moving vehicle must traverse the high wind region of the jet stream. Due to large changes in wind speed and direction across altitude and the susceptibility of airship motion to wind, the trajectory must be carefully planned, preferably optimized, in order to ensure that the desired station be reached within acceptable performance bounds of flight time and energy consumption. This thesis develops optimal ascent trajectories for stratospheric airships, examines the structure and sensitivity of these solutions, and presents a strategy for onboard guidance. Optimal ascent trajectories are developed that utilize wind energy to achieve minimum-time and minimum-energy flights. The airship is represented by a three-dimensional point mass model, and the equations of motion include aerodynamic lift and drag, vectored thrust, added mass effects, and accelerations due to mass flow rate, wind rates, and Earth rotation. A representative wind profile is developed based on historical meteorological data and measurements. Trajectory optimization is performed by first defining an optimal control problem with both terminal and path constraints, then using direct transcription to develop an approximate nonlinear parameter optimization problem of finite dimension. Optimal ascent trajectories are determined using SNOPT for a variety of upwind, downwind, and crosswind launch locations. 
Results of extensive optimization solutions illustrate definitive patterns in the ascent path for minimum-time flights across varying launch locations, and show that significant energy savings can be realized with minimum-energy flights, compared to minimum-time flights, given small increases in flight time. The performance of the optimal trajectories is then studied with respect to solar energy production during ascent, as well as the sensitivity of the solutions to small changes in drag coefficient and wind model parameters. Results of solar power model simulations indicate that solar energy is sufficient to power ascent flights, but that significant energy loss can occur for certain types of trajectories. Sensitivity to the drag and wind model is approximated through numerical simulations, showing that optimal solutions change gradually with respect to changing wind and drag parameters and providing deeper insight into the characteristics of optimal airship flights. Finally, alternative methods are developed to generate near-optimal ascent trajectories in a manner suitable for onboard implementation. The structures and characteristics of previously developed minimum-time and minimum-energy ascent trajectories are used to construct simplified trajectory models, which are efficiently solved in a smaller numerical optimization problem. Comparison of these alternative solutions to the original SNOPT solutions shows excellent agreement, suggesting the alternate formulations are an effective means to develop near-optimal solutions in an onboard setting.
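
Direct transcription as described here (discretize states and controls, impose the dynamics as algebraic defect constraints, and hand the result to a nonlinear programming solver) can be illustrated on a toy problem. The sketch below transcribes a rest-to-rest double integrator rather than the airship model, and uses SciPy's SLSQP in place of SNOPT; for minimum control energy the analytic optimum is u(t) = 6 − 12t with cost 12:

```python
import numpy as np
from scipy.optimize import minimize

# Toy direct transcription: minimize control energy for a double
# integrator moving from rest at x=0 to rest at x=1 in unit time.
N, T = 20, 1.0
h = T / N
n = N + 1                                   # grid nodes for x, v, u

def unpack(z):
    return z[:n], z[n:2 * n], z[2 * n:]

def objective(z):                           # trapezoidal integral of u^2
    _, _, u = unpack(z)
    return h * (0.5 * u[0] ** 2 + np.sum(u[1:-1] ** 2) + 0.5 * u[-1] ** 2)

def defects(z):                             # trapezoidal collocation of x'=v, v'=u
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[:-1] + v[1:])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[:-1] + u[1:])
    return np.concatenate([dx, dv])

def boundary(z):                            # rest-to-rest, unit displacement
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

res = minimize(objective, np.zeros(3 * n), method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}],
               options={"maxiter": 500})
x_opt, v_opt, u_opt = unpack(res.x)
```

The thesis problem has the same shape, only with three-dimensional point-mass dynamics, wind terms, and path constraints in place of the double integrator.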

  16. Sensitivity study and parameter optimization of OCD tool for 14nm finFET process

    NASA Astrophysics Data System (ADS)

    Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping

    2016-03-01

    Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes at the 90 nm technology node and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology relies heavily on the optical hardware configuration, the spectral types, and the inherent interactions between the incident light and various materials with various topological structures; therefore, sensitivity analysis and parameter optimization are critical in OCD applications. This paper presents a method for seeking the most sensitive measurement configuration to enhance metrology precision and reduce the noise impact to the greatest extent. In this work, the sensitivity of different types of spectra was investigated over a series of hardware configurations of incidence angles and azimuth angles, from which the optimum hardware measurement configuration and spectrum parameters can be identified. FinFET structures at the 14 nm technology node were constructed to validate the algorithm. This method provides guidance for estimating measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.

  17. Highly anisotropic black phosphorus-graphene hybrid architecture for ultrasensitive plasmonic biosensing: Theoretical insight

    NASA Astrophysics Data System (ADS)

    Yuan, Yufeng; Yu, Xiantong; Ouyang, Qingling; Shao, Yonghong; Song, Jun; Qu, Junle; Yong, Ken-Tye

    2018-04-01

    This study proposes a novel, highly anisotropic surface plasmon resonance (SPR) biosensor employing emerging 2D black phosphorus (BP) and graphene atomic layers. Light absorption and energy loss were balanced by optimizing the gold film thickness and the number of BP layers to generate the strongest SPR excitation. The proposed SPR biosensor was designed using the phase-modulation approach and is more sensitive to biomolecule binding, providing 3 orders of magnitude higher sensitivity than the red shift in SPR angle. Our results show that the optimized configuration was a 48 nm Au film coated with a 4-layer BP crystal, producing the sharpest phase variation (up to 89.8975°) and the lowest minimum reflectivity (1.9119 × 10⁻⁷). A detection sensitivity of up to 7.4914 × 10⁴ degree/refractive index unit is almost 4.5 times that of monolayer graphene-based SPR sensors with a 48 nm Au film. The anisotropic BP layers act as a polarizer, so the proposed SPR biosensor would exhibit optically tunable detection sensitivity, making it a promising candidate for exploring highly anisotropic platforms in biosensing.

  18. Biased and less sensitive: A gamified approach to delay discounting in heroin addiction.

    PubMed

    Scherbaum, Stefan; Haber, Paul; Morley, Kirsten; Underhill, Dylan; Moustafa, Ahmed A

    2018-03-01

    People with addiction continue to use drugs despite adverse long-term consequences. We hypothesized (a) that this deficit persists during substitution treatment, and (b) that this deficit might be related not only to a desire for immediate gratification, but also to a lower sensitivity for optimal decision making. We investigated how individuals with a history of heroin addiction perform (compared to healthy controls) in a virtual reality delay discounting task. This novel task adds to established measures of delay discounting an assessment of the optimality of decisions, especially the extent to which decisions are influenced by a general choice bias and/or a reduced sensitivity to the relative value of the two alternative rewards. We used this measure of optimality to apply diffusion model analysis to the behavioral data and to analyze the interaction between decision optimality and reaction time. The addiction group consisted of 25 patients with a history of heroin dependency currently participating in a methadone maintenance program; the control group consisted of 25 healthy participants with no history of substance abuse, who were recruited from the Western Sydney community. The patient group demonstrated greater levels of delay discounting compared to the control group, which is broadly in line with previous observations. Diffusion model analysis yielded a reduced sensitivity for the optimality of a decision in the patient group compared to the control group. This reduced sensitivity was reflected in lower rates of information accumulation and higher decision criteria. Increased discounting in individuals with heroin addiction is related not only to a generally increased bias toward immediate gratification, but also to reduced sensitivity for the optimality of a decision. This finding is in line with other findings about the sensitivity of addicts in distinguishing optimal from nonoptimal choice options.

  19. An ICA-based method for the identification of optimal FMRI features and components using combined group-discriminative techniques

    PubMed Central

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.

    2013-01-01

    Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease is an attractive goal. In this paper, we introduce a novel feature-based framework, which is sensitive and accurate in detecting group differences (e.g. controls vs. patients), by proposing three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Second, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks which may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398

  20. Local lymph node assay (LLNA) for detection of sensitization capacity of chemicals.

    PubMed

    Gerberick, G Frank; Ryan, Cindy A; Dearman, Rebecca J; Kimber, Ian

    2007-01-01

    The local lymph node assay (LLNA) is a murine model developed to evaluate the skin sensitization potential of chemicals. The LLNA is an alternative approach to traditional guinea pig methods and in comparison provides important animal welfare benefits. The assay relies on measurement of events induced during the induction phase of skin sensitization, specifically lymphocyte proliferation in the draining lymph nodes, which is a hallmark of a skin sensitization response. Since its introduction the LLNA has been the subject of extensive evaluation on a national and international scale, and has been successfully validated and incorporated worldwide into regulatory guidelines. Experience gained in recent years has demonstrated that adherence to published procedures and guidelines for the LLNA (e.g., with respect to dose and vehicle selection) is critical for the successful conduct and eventual interpretation of the data. In addition to providing a robust method for skin sensitization hazard identification, the LLNA has proven very useful in assessing the skin sensitizing potency of test chemicals, and this has provided invaluable information to risk assessors. The primary method to make comparisons of the relative potency of chemical sensitizers is to use linear interpolation to estimate the concentration of chemical required to induce a stimulation index of three relative to concurrent vehicle-treated controls (EC3). In certain situations where less-than-optimal dose-response data are available, a log-linear extrapolation method can be used to estimate an EC3 value, which can significantly reduce the need for repeat testing of chemicals. The LLNA, when conducted according to published guidelines, provides a robust method for skin sensitization testing that provides not only reliable hazard identification information but also data necessary for effective risk assessment and risk management.
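
The EC3 interpolation described above uses the two dose-response points bracketing a stimulation index of 3. A minimal sketch (the dose series and SI values below are hypothetical, not data for any real chemical):

```python
def ec3(doses, si):
    """Estimate EC3 (the concentration giving stimulation index 3) by
    linear interpolation between the two dose-response points that
    bracket SI = 3. doses and si are parallel lists sorted by
    increasing dose."""
    for (c, d), (a, b) in zip(zip(doses, si), zip(doses[1:], si[1:])):
        if d < 3.0 <= b:
            # EC3 = c + (3 - d)/(b - d) * (a - c), with (a, b) the point
            # just above SI = 3 and (c, d) the point just below.
            return c + (3.0 - d) / (b - d) * (a - c)
    raise ValueError("SI = 3 not bracketed; a log-linear extrapolation "
                     "would be needed instead")

# Hypothetical LLNA dose series (% w/v) and stimulation indices:
val = ec3([2.5, 5.0, 10.0], [1.6, 2.2, 4.4])   # ≈ 6.82% w/v
```

The log-linear extrapolation mentioned in the abstract applies the same idea on log-transformed doses when no pair of points brackets SI = 3.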

  1. Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors

    NASA Astrophysics Data System (ADS)

    Tun, Min Thaw; Sakaguchi, Daisaku

    2016-06-01

    A high pressure ratio and wide operating range are highly desirable for a turbocharger in diesel engines. A recirculation-flow-type casing treatment is effective for flow-range enhancement of centrifugal compressors. Two ring grooves on a suction pipe and a shroud casing wall are connected by means of an annular passage, and at small flow rates a stable recirculation flow forms from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation-flow-type casing is modified and optimized using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields an optimized casing design that improves adiabatic efficiency over a wide operating flow-rate range. A sensitivity analysis of efficiency with respect to the design parameters has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, in which the increment of entropy rise is minimized at the grooves and passages of the rotating impeller.

  2. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1991-01-01

    Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.

  3. Instrument for Real-Time Digital Nucleic Acid Amplification on Custom Microfluidic Devices

    PubMed Central

    Selck, David A.

    2016-01-01

    Nucleic acid amplification tests that are coupled with a digital readout enable the absolute quantification of single molecules, even at ultralow concentrations. Digital methods are robust, versatile and compatible with many amplification chemistries including isothermal amplification, making them particularly invaluable to assays that require sensitive detection, such as the quantification of viral load in occult infections or detection of sparse amounts of DNA from forensic samples. A number of microfluidic platforms are being developed for carrying out digital amplification. However, the mechanistic investigation and optimization of digital assays has been limited by the lack of real-time kinetic information about which factors affect the digital efficiency and analytical sensitivity of a reaction. Commercially available instruments that are capable of tracking digital reactions in real-time are restricted to only a small number of device types and sample-preparation strategies. Thus, most researchers who wish to develop, study, or optimize digital assays rely on the rate of the amplification reaction when performed in a bulk experiment, which is now recognized as an unreliable predictor of digital efficiency. To expand our ability to study how digital reactions proceed in real-time and enable us to optimize both the digital efficiency and analytical sensitivity of digital assays, we built a custom large-format digital real-time amplification instrument that can accommodate a wide variety of devices, amplification chemistries and sample-handling conditions. Herein, we validate this instrument, we provide detailed schematics that will enable others to build their own custom instruments, and we include a complete custom software suite to collect and analyze the data retrieved from the instrument. 
We believe assay optimizations enabled by this instrument will improve the current limits of nucleic acid detection and quantification, improving our fundamental understanding of single-molecule reactions and providing advancements in practical applications such as medical diagnostics, forensics and environmental sampling. PMID:27760148
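
The absolute quantification that digital readouts enable rests on Poisson statistics: because a positive partition may contain more than one molecule, the mean occupancy is recovered from the fraction of positive partitions p as λ = −ln(1 − p). A minimal sketch (the partition count and volume below are illustrative, not tied to any particular device):

```python
import math

def copies_per_microliter(positives, partitions, partition_volume_nl):
    """Poisson-corrected concentration from a digital amplification
    readout: lambda = -ln(1 - p) molecules per partition on average."""
    p = positives / partitions
    lam = -math.log1p(-p)                      # mean molecules/partition
    return lam / (partition_volume_nl * 1e-3)  # convert nL to uL

# Hypothetical run: 500 of 1000 one-nanoliter partitions amplified.
conc = copies_per_microliter(500, 1000, 1.0)   # ≈ 693 copies/uL
```

The "digital efficiency" the abstract discusses is precisely how faithfully the observed positive fraction reflects this ideal Poisson picture, which is why per-partition real-time kinetics matter for assay optimization.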

  4. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. 
The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
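
The screening stage works as described: perturb one parameter at a time from a baseline point and rank parameters by the relative change in the response. The sketch below shows plain one-at-a-time (OAT) screening on a hypothetical response function; LH-OAT additionally repeats this from many Latin-hypercube baseline points, and the MARS/Sobol' and SCE-UA stages are not shown:

```python
import numpy as np

def oat_screening(model, x0, deltas):
    """One-at-a-time sensitivity: relative response change per relative
    parameter change, evaluated around a single baseline x0. A minimal
    stand-in for the screening stage of LH-OAT."""
    y0 = model(x0)
    sens = []
    for i, d in enumerate(deltas):
        x = x0.copy()
        x[i] += d
        rel_dy = abs((model(x) - y0) / y0)
        rel_dx = abs(d / x0[i])
        sens.append(rel_dy / rel_dx)
    return np.array(sens)

# Hypothetical response: strongly driven by x[0], weakly by x[2].
model = lambda x: x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2]
s = oat_screening(model, np.array([2.0, 2.0, 2.0]), [0.1, 0.1, 0.1])
# Parameters with small s would be fixed at defaults before calibration.
```

Screening out the weak parameters before calibration is what reduced the dimensionality of the calibration problem from twelve to seven in the study.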

  5. Parameter extraction and transistor models

    NASA Technical Reports Server (NTRS)

    Rykken, Charles; Meiser, Verena; Turner, Greg; Wang, Qi

    1985-01-01

    Using specified mathematical models of the MOSFET device, the optimal values of the model-dependent parameters were extracted from data provided by the Jet Propulsion Laboratory (JPL). Three MOSFET models, all one-dimensional, were used. One of the models took into account diffusion (as well as convection) currents. The sensitivity of the models was assessed for variations of the parameters from their optimal values. Lines of future inquiry are suggested on the basis of the behavior of the devices, the limitations of the proposed models, and the complexity of the required numerical investigations.
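
Parameter extraction of this kind is a nonlinear least-squares fit of a device model to measured I-V data. A minimal sketch using the textbook square-law saturation model rather than the report's one-dimensional models, with synthetic noise-free data (all values invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, vgs, i_d):
    """Misfit of the square-law saturation model Id = 0.5*k*(Vgs - Vt)^2
    against measured drain currents (illustrative, not the JPL models)."""
    k, vt = theta
    return 0.5 * k * np.clip(vgs - vt, 0.0, None) ** 2 - i_d

# Synthetic measurements generated with k = 2e-4 A/V^2, Vt = 0.7 V.
vgs = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
i_d = 0.5 * 2e-4 * (vgs - 0.7) ** 2

fit = least_squares(residuals, x0=[1e-4, 0.5], args=(vgs, i_d))
k_hat, vt_hat = fit.x        # extracted parameters
```

The sensitivity assessment the report mentions then amounts to re-evaluating the model misfit as each extracted parameter is moved away from its optimum.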

  6. Optimization of Training Sets For Neural-Net Processing of Characteristic Patterns From Vibrating Solids

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J. (Inventor)

    2006-01-01

    An artificial neural network is disclosed that processes holography-generated characteristic patterns of vibrating structures along with finite-element models. The present invention provides a folding operation for conditioning training sets to optimally train feed-forward neural networks to process characteristic fringe patterns. The folding operation increases the sensitivity of the feed-forward network for detecting changes in the characteristic pattern. The folding routine manipulates input pixels so that they are scaled according to their location in an intensity range rather than their position in the characteristic pattern.

  7. Optimization of on-line hydrogen stable isotope ratio measurements of halogen- and sulfur-bearing organic compounds using elemental analyzer–chromium/high-temperature conversion isotope ratio mass spectrometry (EA-Cr/HTC-IRMS)

    USGS Publications Warehouse

    Gehre, Matthias; Renpenning, Julian; Geilmann, Heike; Qi, Haiping; Coplen, Tyler B.; Kümmel, Steffen; Ivdra, Natalija; Brand, Willi A.; Schimmelmann, Arndt

    2017-01-01

    Conclusions: The optimized EA-Cr/HTC reactor design can be implemented in existing analytical equipment using commercially available material and is universally applicable for both heteroelement-bearing and heteroelement-free organic-compound classes. The sensitivity and simplicity of the on-line EA-Cr/HTC-IRMS technique provide a much needed tool for routine hydrogen-isotope source tracing of organic contaminants in the environment. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Order of Magnitude Sensitivity Increase in X-ray Fluorescence Computed Tomography (XFCT) Imaging With an Optimized Spectro-Spatial Detector Configuration: Theory and Simulation

    PubMed Central

    Ahmad, Moiz; Bazalova, Magdalena; Xiang, Liangzhong

    2014-01-01

    The purpose of this study was to increase the sensitivity of XFCT imaging by optimizing the data acquisition geometry for reduced scatter X-rays. The placement of detectors and detector energy window were chosen to minimize scatter X-rays. We performed both theoretical calculations and Monte Carlo simulations of this optimized detector configuration on a mouse-sized phantom containing various gold concentrations. The sensitivity limits were determined for three different X-ray spectra: a monoenergetic source, a Gaussian source, and a conventional X-ray tube source. Scatter X-rays were minimized using a backscatter detector orientation (scatter direction > 110° to the primary X-ray beam). The optimized configuration simultaneously reduced the number of detectors and improved the image signal-to-noise ratio. The sensitivity of the optimized configuration was 10 µg/mL (10 pM) at 2 mGy dose with the mono-energetic source, which is an order of magnitude improvement over the unoptimized configuration (10² pM without the optimization). Similar improvements were seen with the Gaussian spectrum source and conventional X-ray tube source. The optimization improvements were predicted in the theoretical model and also demonstrated in simulations. The sensitivity of XFCT imaging can be enhanced by an order of magnitude with the data acquisition optimization, greatly enhancing the potential of this modality for future use in clinical molecular imaging. PMID:24770916

  9. Order of magnitude sensitivity increase in X-ray Fluorescence Computed Tomography (XFCT) imaging with an optimized spectro-spatial detector configuration: theory and simulation.

    PubMed

    Ahmad, Moiz; Bazalova, Magdalena; Xiang, Liangzhong; Xing, Lei

    2014-05-01

    The purpose of this study was to increase the sensitivity of XFCT imaging by optimizing the data acquisition geometry for reduced scatter X-rays. The placement of detectors and detector energy window were chosen to minimize scatter X-rays. We performed both theoretical calculations and Monte Carlo simulations of this optimized detector configuration on a mouse-sized phantom containing various gold concentrations. The sensitivity limits were determined for three different X-ray spectra: a monoenergetic source, a Gaussian source, and a conventional X-ray tube source. Scatter X-rays were minimized using a backscatter detector orientation (scatter direction > 110° to the primary X-ray beam). The optimized configuration simultaneously reduced the number of detectors and improved the image signal-to-noise ratio. The sensitivity of the optimized configuration was 10 μg/mL (10 pM) at 2 mGy dose with the mono-energetic source, which is an order of magnitude improvement over the unoptimized configuration (10² pM without the optimization). Similar improvements were seen with the Gaussian spectrum source and conventional X-ray tube source. The optimization improvements were predicted in the theoretical model and also demonstrated in simulations. The sensitivity of XFCT imaging can be enhanced by an order of magnitude with the data acquisition optimization, greatly enhancing the potential of this modality for future use in clinical molecular imaging.
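
The preference for a backscatter geometry follows from Compton kinematics: the scattered-photon energy decreases monotonically with scattering angle, so detectors beyond 110° see scatter shifted furthest below the beam energy and away from the fluorescence energy window. A quick check with the standard Compton formula (the 90 keV beam energy here is illustrative, not a value from the paper):

```python
import math

def compton_energy(e_kev, theta_deg):
    """Energy of a photon of e_kev keV after Compton scattering through
    theta_deg, using the electron rest energy 511 keV."""
    theta = math.radians(theta_deg)
    return e_kev / (1.0 + (e_kev / 511.0) * (1.0 - math.cos(theta)))

# Scatter energy falls steadily with angle, separating it from the
# primary beam (and the fluorescence window) at backscatter angles.
energies = [compton_energy(90.0, t) for t in (30, 70, 110, 150)]
```

Placing the detector energy window between the down-shifted scatter and the fluorescence line is what yields the reported order-of-magnitude sensitivity gain.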

  10. Study on Web-Based Tool for Regional Agriculture Industry Structure Optimization Using Ajax

    NASA Astrophysics Data System (ADS)

    Huang, Xiaodong; Zhu, Yeping

    According to the research status of regional agriculture industry structure adjustment information systems and the current development of information technology, this paper takes a web-based regional agriculture industry structure optimization tool as its research target. The paper introduces Ajax technology and related application frameworks to build an auxiliary toolkit of a decision support system for agricultural policy makers and economy researchers. The toolkit includes a “one page”-style component for regional agriculture industry structure optimization, which provides an agile argument-setting method supporting sensitivity analysis and the use of data and comparative-advantage analysis results, and a component that can solve the linear programming model and its dual problem by the simplex method.
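
The linear-programming component can be reproduced in outline with any LP solver. The sketch below uses SciPy's `linprog` (HiGHS) rather than a hand-written simplex, on a toy two-crop allocation problem with invented coefficients, and solves the dual by constructing it explicitly:

```python
import numpy as np
from scipy.optimize import linprog

# Toy allocation LP (coefficients invented for illustration):
# maximize gross margin 3*x1 + 5*x2 subject to resource limits A x <= b.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so negate the objective for the primal.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Dual problem, built explicitly: min b^T y  s.t.  A^T y >= c,  y >= 0.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs")
```

By strong duality the two optimal values coincide (36 here), and the dual solution prices the binding resource constraints, which is the economic information such a decision-support tool would surface.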

  11. Sensitivity and Specificity of the Coma Recovery Scale--Revised Total Score in Detection of Conscious Awareness.

    PubMed

    Bodien, Yelena G; Carlowicz, Cecilia A; Chatelle, Camille; Giacino, Joseph T

    2016-03-01

    To describe the sensitivity and specificity of Coma Recovery Scale-Revised (CRS-R) total scores in detecting conscious awareness. Data were retrospectively extracted from the medical records of patients enrolled in a specialized disorders of consciousness (DOC) program. Sensitivity and specificity analyses were completed using CRS-R-derived diagnoses of minimally conscious state (MCS) or emerged from minimally conscious state (EMCS) as the reference standard for conscious awareness and the total CRS-R score as the test criterion. A receiver operating characteristic curve was constructed to demonstrate the optimal CRS-R total cutoff score for maximizing sensitivity and specificity. Specialized DOC program. Patients enrolled in the DOC program (N=252, 157 men; mean age, 49y; mean time from injury, 48d; traumatic etiology, n=127; nontraumatic etiology, n=125; diagnosis of coma or vegetative state, n=70; diagnosis of MCS or EMCS, n=182). Not applicable. Sensitivity and specificity of CRS-R total scores in detecting conscious awareness. A CRS-R total score of 10 or higher yielded a sensitivity of .78 for correct identification of patients in MCS or EMCS, and a specificity of 1.00 for correct identification of patients who did not meet criteria for either of these diagnoses (ie, were diagnosed with vegetative state or coma). The area under the curve in the receiver operating characteristic curve analysis is .98. A total CRS-R score of 10 or higher provides strong evidence of conscious awareness but resulted in a false-negative diagnostic error in 22% of patients who demonstrated conscious awareness based on CRS-R diagnostic criteria. A cutoff score of 8 provides the best balance between sensitivity and specificity, accurately classifying 93% of cases. The optimal total score cutoff will vary depending on the user's objective. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
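
The cutoff analysis reduces to tabulating sensitivity and specificity for each candidate total-score threshold and, for a balanced choice, maximizing Youden's J = sensitivity + specificity − 1. A sketch on synthetic scores (not the study's patient data; group sizes and score ranges are invented):

```python
import numpy as np

def cutoff_table(scores, conscious):
    """Sensitivity and specificity of the rule 'total score >= cutoff'
    for every candidate cutoff. conscious holds the boolean reference
    diagnoses."""
    scores = np.asarray(scores)
    conscious = np.asarray(conscious, dtype=bool)
    rows = []
    for cut in np.unique(scores):
        pred = scores >= cut
        sens = np.mean(pred[conscious])      # true positives / conscious
        spec = np.mean(~pred[~conscious])    # true negatives / not conscious
        rows.append((cut, sens, spec))
    return rows

# Synthetic illustration: higher totals in the conscious group.
rng = np.random.default_rng(1)
scores = np.r_[rng.integers(0, 9, 60), rng.integers(6, 24, 140)]
labels = np.r_[np.zeros(60, dtype=bool), np.ones(140, dtype=bool)]

# Youden's J picks the cutoff balancing sensitivity and specificity.
best = max(cutoff_table(scores, labels), key=lambda r: r[1] + r[2] - 1)
```

As the abstract notes, a higher cutoff trades sensitivity for specificity, so the "optimal" threshold depends on whether false negatives or false positives are costlier for the user's objective.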

  12. Stochastic optimization of intensity modulated radiotherapy to account for uncertainties in patient sensitivity

    NASA Astrophysics Data System (ADS)

    Kåver, Gereon; Lind, Bengt K.; Löf, Johan; Liander, Anders; Brahme, Anders

    1999-12-01

    The aim of the present work is to better account for the known uncertainties in radiobiological response parameters when optimizing radiation therapy. The radiation sensitivity of a specific patient is usually unknown beyond the expectation value and possibly the standard deviation that may be derived from studies on groups of patients. Instead of trying to find the treatment with the highest possible probability of a desirable outcome for a patient of average sensitivity, it is more desirable to maximize the expectation value of the probability for the desirable outcome over the possible range of variation of the radiation sensitivity of the patient. Such a stochastic optimization will also have to consider the distribution function of the radiation sensitivity and the larger steepness of the response for the individual patient. The results of stochastic optimization are also compared with simpler methods such as using biological response 'margins' to account for the range of sensitivity variation. By using stochastic optimization, the absolute gain will typically be of the order of a few per cent and the relative improvement compared with non-stochastic optimization is generally less than about 10 per cent. The extent of this gain varies with the level of interpatient variability as well as with the difficulty and complexity of the case studied. Although the dose changes are rather small (<5 Gy) there is a strong desire to make treatment plans more robust, and tolerant of the likely range of variation of the radiation sensitivity of each individual patient. When more accurate predictive assays of the radiation sensitivity for each patient become available, the need to consider the range of variations can be reduced considerably.
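
The core idea, replacing the response of the average-sensitivity patient with the expectation of the response over the sensitivity distribution, can be sketched with a Poisson tumour-control model. The model form, parameter values, and Gaussian sensitivity spread below are illustrative assumptions, not the paper's:

```python
import numpy as np

def tcp(dose, alpha, n0=1e7):
    """Poisson tumour-control probability for surviving fraction
    exp(-alpha*dose) applied to n0 clonogenic cells (illustrative)."""
    return np.exp(-n0 * np.exp(-alpha * dose))

def expected_tcp(dose, alpha_mean, alpha_sd, n=400):
    """Expectation of TCP over a Gaussian spread of radiosensitivity
    alpha, by simple numerical quadrature over +/- 4 sigma."""
    a = np.linspace(alpha_mean - 4 * alpha_sd, alpha_mean + 4 * alpha_sd, n)
    w = np.exp(-0.5 * ((a - alpha_mean) / alpha_sd) ** 2)
    w /= w.sum()
    return float(np.sum(w * tcp(dose, a)))

dose = 60.0
naive = tcp(dose, 0.3)                  # patient of average sensitivity
robust = expected_tcp(dose, 0.3, 0.06)  # expectation over the spread
```

For this dose the expected response over the population spread falls below the response at the mean sensitivity, which is exactly the shallower, shifted population dose-response curve that drives the stochastic optimum away from the naive one.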

  13. High-quality substrate for fluorescence enhancement using agarose-coated silica opal film.

    PubMed

    Xu, Ming; Li, Juan; Sun, Liguo; Zhao, Yuanjin; Xie, Zhuoying; Lv, Linli; Zhao, Xiangwei; Xiao, Pengfeng; Hu, Jing; Lv, Mei; Gu, Zhongze

    2010-08-01

    To improve the sensitivity of fluorescence detection in biochips, a new kind of substrate was developed by coating agarose on silica opal film. In this study, silica opal film was fabricated on a glass substrate using the vertical deposition technique; it can provide stronger fluorescence signals and thus improve the detection sensitivity. After coating with agarose, the hybrid film could provide a 3D support for immobilizing samples. Compared with the agarose-coated glass substrate, the agarose-coated opal substrates could selectively enhance particular fluorescence signals with high sensitivity when the stop band of the silica opal film overlapped the fluorescence emission wavelength. A DNA hybridization experiment demonstrated that the fluorescence intensity of a particular type of agarose-coated opal substrate was about four times that of the agarose-coated glass substrate. These results indicate that the optimized agarose-coated opal substrate can be used to improve the sensitivity of fluorescence detection with high quality and selectivity.

  14. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stability and instability ranges of each parameter are recognized. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
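The single-parameter sensitivity idea can be sketched numerically. The quadratic response surface below is purely illustrative (the coefficients are invented, not the paper's fitted model); it computes the dimensionless relative sensitivity of interlaminar shear strength to each process parameter at a mid-range operating point:

```python
import numpy as np

def shear_strength(x):
    """Hypothetical quadratic response-surface model y(x); coefficients
    are illustrative only.
    x = [temperature (deg C), tension (N), pressure (N), velocity (m/s)]"""
    t, f, p, v = x
    return (40.0 + 0.30 * t - 0.0012 * t**2 + 0.05 * f - 6e-5 * f**2
            + 0.01 * p - 4e-6 * p**2 + 20.0 * v - 30.0 * v**2)

def relative_sensitivity(model, x0, i, h=1e-4):
    """Dimensionless relative sensitivity S_i = (dy/dx_i) * x_i / y,
    with dy/dx_i from a central finite difference."""
    xp, xm = np.array(x0, float), np.array(x0, float)
    step = h * max(abs(x0[i]), 1.0)
    xp[i] += step
    xm[i] -= step
    dydx = (model(xp) - model(xm)) / (2.0 * step)
    return dydx * x0[i] / model(x0)

x0 = [125.0, 330.0, 1150.0, 0.3]  # mid-range operating point
S = [relative_sensitivity(shear_strength, x0, i) for i in range(4)]
print([round(s, 3) for s in S])
```

A parameter whose relative sensitivity stays near zero across its range would fall into the "stability" interval in the paper's terminology; large swings in S_i mark the "instability" range.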

  15. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

The viewgraph presentation examines optimal design for precision, large-aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments, including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and processes, and optical mode generation.

  16. Visible light photoreduction of CO2 using heterostructured catalysts

    DOEpatents

    Matranga, Christopher; Thompson, Robert L; Wang, Congjun

    2015-03-24

The method provides for the use of a sensitized photocatalyst for the photocatalytic reduction of CO2 under visible light illumination. The photosensitized catalyst is comprised of a wide band gap semiconductor material, a transition metal co-catalyst, and a semiconductor sensitizer. The semiconductor sensitizer is photoexcited by visible light and forms a Type II band alignment with the wide band gap semiconductor material. The wide band gap semiconductor material and the semiconductor sensitizer may be a plurality of particles, and the particle diameters may be selected to accomplish desired band widths and optimize charge injection under visible light illumination by utilizing quantum size effects. In a particular embodiment, CO2 is reduced under visible light illumination using a CdSe/Pt/TiO2 sensitized photocatalyst with H2O as a hydrogen source.

  17. The anatomy of choice: active inference and agency

    PubMed Central

    Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J.

    2013-01-01

    This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback–Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action—constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution—that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control. PMID:24093015
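The softmax choice rule with an inverse-temperature (precision) parameter mentioned above can be sketched in a few lines; higher precision makes selection of the highest-utility action more deterministic, which is the sense in which precision encodes confidence about behavior:

```python
import numpy as np

def softmax(utilities, precision):
    """Softmax choice rule: precision (inverse temperature) scales how
    deterministically the highest-utility option is chosen."""
    z = precision * np.asarray(utilities, float)
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

u = [1.0, 0.5, 0.0]                   # expected utilities of 3 actions
for gamma in (0.5, 2.0, 8.0):         # low -> high precision
    print(gamma, np.round(softmax(u, gamma), 3))
```

At low precision the choice probabilities approach uniform; at high precision they concentrate on the best action, recovering expected utility maximization as a limiting case.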

  18. Sensitivity optimization of Bell-Bloom magnetometers by manipulation of atomic spin synchronization

    NASA Astrophysics Data System (ADS)

    Ranjbaran, M.; Tehranchi, M. M.; Hamidi, S. M.; Khalkhali, S. M. H.

    2018-05-01

Many efforts have been devoted to the development of atomic magnetometers to achieve the high sensitivity required in biomagnetic applications. To reach high sensitivity, many types of atomic magnetometers have been introduced to optimize the creation and relaxation rates of atomic spin polarization. In this paper, with regard to sensitivity optimization techniques in the Mx configuration, we propose a novel approach to synchronization of the spin precession in Bell-Bloom magnetometers. We utilized the phenomenological Bloch equations to simulate the spin dynamics when modulation of the pumping light and a radio-frequency magnetic field were both used for atomic spin synchronization. Our results showed that the synchronization process improved the magnetometer sensitivity with respect to the classical configurations.

  19. Light collection optimization for composite photoanode in dye-sensitized solar cells: Towards higher efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, X. Z.; Shen, W. Z., E-mail: wzshen@sjtu.edu.cn; Laboratory of Condensed Matter Spectroscopy and Opto-Electronic Physics, and Key Laboratory of Artificial Structures and Quantum Control

    2015-06-14

Composite photoanodes comprising nanoparticles and a one-dimensional (1D) nanostructure are a promising alternative to conventional photoanodes for dye-sensitized solar cells (DSCs). Besides providing fast electron transport channels, the 1D nanostructure also acts as a light scattering center. Here, we theoretically investigate the light scattering properties of capsule-shaped 1D nanostructures and their influence on the light collection of DSCs. It is found that the far-field light scattering of a single capsule depends on its volume, shape, and orientation: capsules with larger equivalent spherical diameter, smaller aspect ratio, and horizontal orientation demonstrate stronger light scattering, especially at large scattering angles. Using a Monte Carlo approach, we simulated and optimized the light harvesting efficiency of the cell. Two multilayer composite photoanodes containing orderly or randomly oriented capsules are proposed. DSCs composed of these two photoanodes are promising for higher efficiencies because of their efficient light collection and superior electron collection. These results will provide practical guidance for the design and optimization of photoanodes for DSCs.

  20. Airfoil Design Using a Coupled Euler and Integral Boundary Layer Method with Adjoint Based Sensitivities

    NASA Technical Reports Server (NTRS)

    Edwards, S.; Reuther, J.; Chattot, J. J.

    1997-01-01

The objective of this paper is to present a control theory approach for the design of airfoils in the presence of viscous compressible flows. A coupled system of the integral boundary layer and the Euler equations is solved to provide rapid flow simulations. An adjoint approach consistent with the complete coupled state equations is employed to obtain the sensitivities needed to drive a numerical optimization algorithm. Design to a target pressure distribution is demonstrated on an RAE 2822 airfoil at transonic speed.

  1. Molecular Diagnostic Testing for Aspergillus

    PubMed Central

    Powers-Fletcher, Margaret V.

    2016-01-01

    The direct detection of Aspergillus nucleic acid in clinical specimens has the potential to improve the diagnosis of aspergillosis by offering more rapid and sensitive identification of invasive infections than is possible with traditional techniques, such as culture or histopathology. Molecular tests for Aspergillus have been limited historically by lack of standardization and variable sensitivities and specificities. Recent efforts have been directed at addressing these limitations and optimizing assay performance using a variety of specimen types. This review provides a summary of standardization efforts and outlines the complexities of molecular testing for Aspergillus in clinical mycology. PMID:27487954

  2. Sensitivity Analysis of Cf-252 (sf) Neutron and Gamma Observables in CGMF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Austin Lewis; Talou, Patrick; Stetcu, Ionel

CGMF is a Monte Carlo code that simulates the decay of primary fission fragments by emission of neutrons and gamma rays, according to the Hauser-Feshbach equations. As the CGMF code was recently integrated into the MCNP6.2 transport code, great emphasis has been placed on providing optimal parameters to CGMF such that many different observables are accurately represented. Of these observables, the prompt neutron spectrum, prompt neutron multiplicity, prompt gamma spectrum, and prompt gamma multiplicity are crucial for accurate transport simulations of criticality and nonproliferation applications. This contribution to the ongoing efforts to improve CGMF presents a study of the sensitivity of various neutron and gamma observables to several input parameters for Californium-252 spontaneous fission. Among the most influential parameters are those that affect the input yield distributions in fragment mass and total kinetic energy (TKE). A new scheme for representing Y(A,TKE) was implemented in CGMF using three fission modes, S1, S2 and SL. The sensitivity profiles were calculated for 17 total parameters, which show that the neutron multiplicity distribution is strongly affected by the TKE distribution of the fragments. The total excitation energy (TXE) of the fragments is shared according to a parameter RT, which is defined as the ratio of the light to heavy initial temperatures. The sensitivity profile of the neutron multiplicity shows a second-order effect of RT on the mean neutron multiplicity. A final sensitivity profile was produced for the parameter alpha, which affects the spin of the fragments. Higher values of alpha lead to higher fragment spins, which inhibit the emission of neutrons. Understanding the sensitivity of the prompt neutron and gamma observables to the many CGMF input parameters provides a platform for the optimization of these parameters.

  3. Sensory optimization by stochastic tuning.

    PubMed

    Jurica, Peter; Gepshtein, Sergei; Tyukin, Ivan; van Leeuwen, Cees

    2013-10-01

    Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system's preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: The higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  4. Lyapunov exponents, covariant vectors and shadowing sensitivity analysis of 3D wakes: from laminar to chaotic regimes

    NASA Astrophysics Data System (ADS)

    Wang, Qiqi; Rigas, Georgios; Esclapez, Lucas; Magri, Luca; Blonigan, Patrick

    2016-11-01

    Bluff body flows are of fundamental importance to many engineering applications involving massive flow separation and in particular the transport industry. Coherent flow structures emanating in the wake of three-dimensional bluff bodies, such as cars, trucks and lorries, are directly linked to increased aerodynamic drag, noise and structural fatigue. For low Reynolds laminar and transitional regimes, hydrodynamic stability theory has aided the understanding and prediction of the unstable dynamics. In the same framework, sensitivity analysis provides the means for efficient and optimal control, provided the unstable modes can be accurately predicted. However, these methodologies are limited to laminar regimes where only a few unstable modes manifest. Here we extend the stability analysis to low-dimensional chaotic regimes by computing the Lyapunov covariant vectors and their associated Lyapunov exponents. We compare them to eigenvectors and eigenvalues computed in traditional hydrodynamic stability analysis. Computing Lyapunov covariant vectors and Lyapunov exponents also enables the extension of sensitivity analysis to chaotic flows via the shadowing method. We compare the computed shadowing sensitivities to traditional sensitivity analysis. These Lyapunov based methodologies do not rely on mean flow assumptions, and are mathematically rigorous for calculating sensitivities of fully unsteady flow simulations.
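The Lyapunov exponents mentioned above can be estimated with the classic two-trajectory (Benettin) method. The sketch below applies it to the Lorenz system as a stand-in for a low-dimensional chaotic flow; it uses simple Euler integration and is illustrative only, not the covariant-vector or shadowing machinery of the abstract:

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system, a standard chaotic flow."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def max_lyapunov(x0, dt=0.001, steps=150000, renorm=10):
    """Largest Lyapunov exponent by tracking a nearby trajectory and
    periodically renormalizing the separation (Benettin's method)."""
    d0 = 1e-8
    x = np.array(x0, float)
    y = x + np.array([d0, 0.0, 0.0])
    log_sum, count = 0.0, 0
    for i in range(1, steps + 1):
        x = x + dt * lorenz(x)   # forward Euler, small step
        y = y + dt * lorenz(y)
        if i % renorm == 0:
            d = np.linalg.norm(y - x)
            log_sum += np.log(d / d0)
            count += 1
            y = x + (y - x) * (d0 / d)   # rescale separation to d0
    return log_sum / (count * renorm * dt)

lam = max_lyapunov([1.0, 1.0, 20.0])
print(round(lam, 2))   # positive exponent: the attractor is chaotic
```

A positive largest exponent is the signature of chaos that invalidates conventional adjoint sensitivities and motivates shadowing-based methods.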

  5. [Optimized application of nested PCR method for detection of malaria].

    PubMed

    Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C

    2017-04-28

Objective To optimize the application of the nested PCR method for the detection of malaria in light of working practice, so as to improve the efficiency of malaria detection. Methods A PCR premix, internal primers for further amplification, and newly designed primers targeting two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions and the specific primers for P. ovale on the basis of routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and examination samples of malaria were detected by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity could reach the pg to fg level. When the two methods were used to detect the same positive malarial blood samples simultaneously, the results indicated that the PCR products of the two methods had no significant difference, but non-specific amplification was obviously reduced, the detection rate of P. ovale subspecies improved, and the overall specificity also increased with the optimized method. The actual detection results of 111 cases of malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the optimized method were both 93.48%; there was no statistically significant difference between the two methods in sensitivity (P > 0.05), but there was a statistically significant difference in specificity (P < 0.05). Conclusion The optimized PCR can improve the specificity without reducing the sensitivity on the basis of the routine nested PCR; it can also reduce costs and increase the efficiency of malaria detection by requiring fewer experimental steps.

  6. Village power options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lilienthal, P.

    1997-12-01

This paper describes three different computer codes which have been written to model village power applications. The reasons which have driven the development of these codes include: the existence of limited field data; diverse applications can be modeled; models allow cost and performance comparisons; and simulations generate insights into cost structures. The models discussed are: Hybrid2, a public code which provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER - the hybrid optimization model for electric renewables - which provides economic screening for sensitivity analyses; and ViPOR - the village power model - which is a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.

  7. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solutions. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one-third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
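The general idea of replacing exact gradients with cheap numerical approximations can be sketched with a forward-difference gradient checked against an analytic one. This is a generic illustration, not the specific approximation proposed in the paper:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference approximation to the gradient of f, costing
    one extra function evaluation per design variable."""
    x = np.asarray(x, float)
    f0 = f(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

# A hypothetical behavior constraint in two design variables.
def g_con(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1] - 1.0

x = np.array([1.0, 2.0])
approx = fd_gradient(g_con, x)
exact = np.array([2 * x[0] + 3 * x[1], 3 * x[0]])  # analytic gradient
print(approx, exact)
```

An optimizer such as the method of feasible directions only needs gradients accurate enough to pick a good search direction, which is why approximate sensitivities can still converge to the correct optimum.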

  8. Searches for millisecond pulsations in low-mass X-ray binaries

    NASA Technical Reports Server (NTRS)

    Wood, K. S.; Hertz, P.; Norris, J. P.; Vaughan, B. A.; Michelson, P. F.; Mitsuda, K.; Lewin, W. H. G.; Van Paradijs, J.; Penninx, W.; Van Der Klis, M.

    1991-01-01

High-sensitivity search techniques for millisecond periods are presented and applied to data from the Japanese satellite Ginga and HEAO 1. The search is optimized for pulsed signals whose period, drift rate, and amplitude conform with what is expected for low-mass X-ray binary (LMXB) sources. Consideration is given to how the current understanding of LMXBs guides the search strategy and sets these parameter limits. An optimized one-parameter coherence recovery technique (CRT) developed for recovery of phase coherence is presented. This technique provides a large increase in sensitivity over the method of incoherent summation of Fourier power spectra. The range of spin periods expected from LMXB phenomenology is discussed, the necessary constraints on the application of CRT are described in terms of integration time and orbital parameters, and the residual power unrecovered by the quadratic approximation for realistic cases is estimated.
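The baseline that the coherence recovery technique improves upon, incoherent summation of Fourier power spectra, can be sketched on synthetic photon-count data. All rates, frequencies and pulsed fractions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated photon time series: Poisson counts with a weak 400 Hz
# (2.5 ms) pulsation on top of a 5000 counts/s mean rate.
dt = 1e-4                      # 0.1 ms bins
n = 2 ** 18                    # total number of bins
t = np.arange(n) * dt
rate = 5000.0 * dt * (1.0 + 0.1 * np.sin(2 * np.pi * 400.0 * t))
counts = rng.poisson(rate)

# Incoherent summation: average Leahy-normalized power over segments.
seg = 2 ** 14                  # bins per segment
nseg = n // seg
power = np.zeros(seg // 2 + 1)
for k in range(nseg):
    c = counts[k * seg:(k + 1) * seg]
    ft = np.fft.rfft(c - c.mean())
    power += 2.0 * np.abs(ft) ** 2 / c.sum()   # Leahy normalization
power /= nseg
freqs = np.fft.rfftfreq(seg, dt)

f_peak = freqs[1:][np.argmax(power[1:])]   # skip the DC bin
print(f_peak)
```

Averaging powers discards the Fourier phases, so sensitivity grows only as the fourth root of the number of segments; recovering phase coherence across segments (as CRT does) is what yields the large sensitivity gain quoted in the abstract.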

  9. Computational design of a pH-sensitive IgG binding protein.

    PubMed

    Strauch, Eva-Maria; Fleishman, Sarel J; Baker, David

    2014-01-14

    Computational design provides the opportunity to program protein-protein interactions for desired applications. We used de novo protein interface design to generate a pH-dependent Fc domain binding protein that buries immunoglobulin G (IgG) His-433. Using next-generation sequencing of naïve and selected pools of a library of design variants, we generated a molecular footprint of the designed binding surface, confirming the binding mode and guiding further optimization of the balance between affinity and pH sensitivity. In biolayer interferometry experiments, the optimized design binds IgG with a Kd of ∼ 4 nM at pH 8.2, and approximately 500-fold more weakly at pH 5.5. The protein is extremely stable, heat-resistant and highly expressed in bacteria, and allows pH-based control of binding for IgG affinity purification and diagnostic devices.

  10. Towards an atrio-ventricular delay optimization assessed by a computer model for cardiac resynchronization therapy

    NASA Astrophysics Data System (ADS)

    Ojeda, David; Le Rolle, Virginie; Tse Ve Koon, Kevin; Thebault, Christophe; Donal, Erwan; Hernández, Alfredo I.

    2013-11-01

In this paper, lumped-parameter models of the cardiovascular system, the cardiac electrical conduction system and a pacemaker are coupled to generate mitral flow profiles for different atrio-ventricular delay (AVD) configurations, in the context of cardiac resynchronization therapy (CRT). First, we perform a local sensitivity analysis of left ventricular and left atrial parameters on mitral flow characteristics, namely E and A wave amplitude, mitral flow duration, and mitral flow time integral. Additionally, a global sensitivity analysis over all model parameters is presented to screen for the most relevant parameters that affect the same mitral flow characteristics. Results provide insight into the influence of the left ventricle and atrium on mitral flow profiles. This information will be useful for future parameter estimation of the model so that it can reproduce the mitral flow profiles and cardiovascular hemodynamics of patients undergoing AVD optimization during CRT.

  11. Aeroelastic tailoring and integrated wing design

    NASA Technical Reports Server (NTRS)

    Love, Mike H.; Bohlmann, Jon

    1989-01-01

    Much has been learned from the TSO optimization code over the years in determining aeroelastic tailoring's place in the integrated design process. Indeed, it has become apparent that aeroelastic tailoring is and should be deeply embedded in design. Aeroelastic tailoring can have tremendous effects on the design loads, and design loads affect every aspect of the design process. While optimization enables the evaluation of design sensitivities, valid computational simulations are required to make these sensitivities valid. Aircraft maneuvers simulated must adequately cover the plane's intended flight envelope, realistic design criteria must be included, and models among the various disciplines must be calibrated among themselves and with any hard-core (e.g., wind tunnel) data available. The information gained and benefits derived from aeroelastic tailoring provide a focal point for the various disciplines to become involved and communicate with one another to reach the best design possible.

  12. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    PubMed

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and deliver accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates ordinary differential equation and stochastic differential equation models to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies them to the optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
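The correspondence between an agent-based model and a differential-equation surrogate can be sketched with a toy birth-death process whose mean-field limit is the logistic ODE. This is an invented example, not the paper's tumour-cell model:

```python
import numpy as np

rng = np.random.default_rng(2)

r, K = 0.5, 1000.0   # growth rate and carrying capacity (illustrative)

def agent_based(n0, t_end, dt=0.01):
    """Individual-level birth-death process with logistic crowding:
    per-capita birth rate r, per-capita death rate r*n/K."""
    n, t, traj = n0, 0.0, [n0]
    while t < t_end:
        births = rng.binomial(n, r * dt)
        deaths = rng.binomial(n, r * (n / K) * dt)
        n = max(n + births - deaths, 0)
        traj.append(n)
        t += dt
    return np.array(traj)

def ode_model(n0, t_end, dt=0.01):
    """Deterministic logistic ODE surrogate dn/dt = r*n*(1 - n/K),
    integrated with forward Euler."""
    n, traj = float(n0), [float(n0)]
    for _ in range(int(t_end / dt)):
        n += dt * r * n * (1.0 - n / K)
        traj.append(n)
    return np.array(traj)

abm = np.mean([agent_based(50, 20.0)[-1] for _ in range(20)])
ode = ode_model(50, 20.0)[-1]
print(round(abm, 1), round(ode, 1))  # both settle near K
```

The ODE reproduces the mean behaviour at a fraction of the cost, while the agent-based runs retain the fluctuations; this is the division of labour the paper explores.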

  13. A competitive chemiluminescence enzyme immunoassay for rapid and sensitive determination of enrofloxacin

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Wu, Yongjun; Yu, Songcheng; Zhang, Huili; Zhang, Hongquan; Qu, Lingbo; Harrington, Peter de B.

With an alkaline phosphatase (ALP)-AMPPD system as the chemiluminescence (CL) detection system, a highly sensitive, specific and simple competitive chemiluminescence enzyme immunoassay (CLEIA) was developed for the measurement of enrofloxacin (ENR). The physicochemical parameters, such as the chemiluminescent assay medium, the dilution buffer of the ENR monoclonal antibody (ENR-McAb), the volume of dilution buffer, the monoclonal antibody concentration, the incubation time, and other relevant variables of the immunoassay, were optimized. Under the optimal conditions, the proposed method provided a linear detection range of 350-1000 pg/mL and a detection limit of 0.24 ng/mL. The relative standard deviations were less than 15% for both intra- and inter-assay precision. The method was successfully applied to determine ENR in spiked samples with recoveries of 96%-103%. This shows that CLEIA is a promising method for the analysis of residues of veterinary drugs after treatment of related diseases.

  14. Enhanced detection of type C botulinum neurotoxin by the Endopep-MS assay through optimization of peptide substrates

    PubMed Central

    Wang, Dongxia; Krilich, Joan; Baudys, Jakub; Barr, John R.; Kalb, Suzanne R.

    2015-01-01

It is essential to have a simple, quick and sensitive method for the detection and quantification of botulinum neurotoxins (BoNTs), the most toxic substances known and the causative agents of botulism. Type C botulinum neurotoxin (BoNT/C) represents one of the seven distinctive BoNT serotypes (A to G) and causes botulism in animals and birds. Here we report the development of optimized peptide substrates for improving the detection of BoNT/C and /CD mosaic toxins using the Endopep-MS assay, a mass spectrometry-based method that is able to rapidly and sensitively detect and differentiate all types of BoNTs by extracting the toxin with specific antibodies and detecting the unique cleavage products of peptide substrates. Based on the sequence of a short SNAP-25 peptide, we conducted optimization through a comprehensive process including length determination, terminal modification, single and multiple amino acid residue substitution, and incorporation of unnatural amino acid residues. Our data demonstrate that an optimal peptide provides a more than 200-fold improvement over the substrate currently used in the Endopep-MS assay for the detection of BoNT/C1 and /CD mosaic. Using the new substrate in a four-hour cleavage reaction, the limit of detection for the BoNT/C1 complex spiked in buffer, serum and milk samples was determined to be 0.5, 0.5 and 1 mouse LD50/mL, respectively, representing a similar or higher sensitivity than that obtained by the traditional mouse bioassay.

  15. Digital radiography: optimization of image quality and dose using multi-frequency software.

    PubMed

    Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D

    2012-09-01

    New developments in processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children as they are believed to be more sensitive to ionizing radiation than adults. To examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were imaged on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provide an objective image-quality assessment. Optimal image-quality was maintained at a dose reduction of 61% with MLT(S) optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. Software impact on image quality was found significant for dose (mAs), dynamic range dark region and frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.

  16. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  17. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.

  18. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.

  19. Theoretical study on second-harmonic generation of focused vortex beams

    NASA Astrophysics Data System (ADS)

    Tang, Daolong; Wang, Jing; Ma, Jingui; Zhou, Bingjie; Yuan, Peng; Xie, Guoqiang; Zhu, Heyuan; Qian, Liejia

    2018-03-01

    Second-harmonic generation (SHG) provides a promising route for generating vortex beams of both short wavelength and large topological charge. Here we theoretically investigate the efficiency optimization and beam characteristics of focused vortex-beam SHG. Owing to the increasing beam divergence, vortex beams have distinct features in SHG optimization compared with a Gaussian beam. We show that, under the noncritical phase-matching condition, the Boyd and Kleinman prediction of the optimal focusing parameter for Gaussian-beam SHG remains valid for vortex-beam SHG. However, under the critical phase-matching condition, which is sensitive to the beam divergence, the Boyd and Kleinman prediction is no longer valid. In contrast, the optimal focusing parameter for maximizing the SHG efficiency strongly depends on the vortex order. We also investigate the effects of focusing and phase-matching conditions on the second-harmonic beam characteristics.

  20. Redundant interferometric calibration as a complex optimization problem

    NASA Astrophysics Data System (ADS)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
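    The Levenberg-Marquardt step used for this kind of nonlinear least-squares calibration solves a damped normal-equation system, (JᵀJ + λI)Δx = -Jᵀr, raising λ when a step fails and lowering it when it succeeds. The two-parameter sketch below is purely illustrative (a toy exponential fit, not the `redundant STEFCAL' implementation or its approximate matrix inverse):

```python
import math

def levenberg_marquardt(residuals, jacobian, x0, lam=1e-3, iters=100):
    """Minimise sum(r_i^2) for a 2-parameter model via Levenberg-Marquardt.

    Solves (J^T J + lam*I) dx = -J^T r each step; lam is raised when a
    step fails and lowered when it succeeds (the usual damping schedule).
    """
    x = list(x0)
    cost = sum(ri * ri for ri in residuals(x))
    for _ in range(iters):
        r = residuals(x)
        J = jacobian(x)
        # Damped 2x2 normal equations, solved in closed form.
        a11 = sum(j[0] * j[0] for j in J) + lam
        a22 = sum(j[1] * j[1] for j in J) + lam
        a12 = sum(j[0] * j[1] for j in J)
        g1 = -sum(j[0] * ri for j, ri in zip(J, r))
        g2 = -sum(j[1] * ri for j, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        dx = [(a22 * g1 - a12 * g2) / det, (a11 * g2 - a12 * g1) / det]
        trial = [x[0] + dx[0], x[1] + dx[1]]
        new_cost = sum(ri * ri for ri in residuals(trial))
        if new_cost < cost:          # accept step, relax damping
            x, cost, lam = trial, new_cost, lam * 0.5
        else:                        # reject step, increase damping
            lam *= 10.0
    return x

# Recover (a, b) of y = a*exp(b*t) from noise-free samples.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * t) for t in ts]
res = lambda p: [p[0] * math.exp(p[1] * t) - y for t, y in zip(ts, ys)]
jac = lambda p: [[math.exp(p[1] * t), p[0] * t * math.exp(p[1] * t)] for t in ts]
a, b = levenberg_marquardt(res, jac, [1.5, 0.5])
```

    In the calibration setting of the paper, the residuals are the differences between measured and modelled visibilities and the unknowns are complex gains and unique sky visibilities; the damping logic is the same.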

  1. Optimization of vascular-targeting drugs in a computational model of tumor growth

    NASA Astrophysics Data System (ADS)

    Gevertz, Jana

    2012-04-01

    A biophysical tool is introduced that seeks to provide a theoretical basis for helping drug design teams assess the most promising drug targets and design optimal treatment strategies. The tool is grounded in a previously validated computational model of the feedback that occurs between a growing tumor and the evolving vasculature. In this paper, the model is particularly used to explore the therapeutic effectiveness of two drugs that target the tumor vasculature: angiogenesis inhibitors (AIs) and vascular disrupting agents (VDAs). Using sensitivity analyses, the impact of VDA dosing parameters is explored, as are the effects of administering a VDA with an AI. Further, a stochastic optimization scheme is utilized to identify an optimal dosing schedule for treatment with an AI and a chemotherapeutic. The treatment regimen identified can successfully halt simulated tumor growth, even after the cessation of therapy.

  2. iTOUGH2: A multiphysics simulation-optimization framework for analyzing subsurface systems

    NASA Astrophysics Data System (ADS)

    Finsterle, S.; Commer, M.; Edmiston, J. K.; Jung, Y.; Kowalsky, M. B.; Pau, G. S. H.; Wainwright, H. M.; Zhang, Y.

    2017-11-01

    iTOUGH2 is a simulation-optimization framework for the TOUGH suite of nonisothermal multiphase flow models and related simulators of geophysical, geochemical, and geomechanical processes. After appropriate parameterization of subsurface structures and their properties, iTOUGH2 runs simulations for multiple parameter sets and analyzes the resulting output for parameter estimation through automatic model calibration, local and global sensitivity analyses, data-worth analyses, and uncertainty propagation analyses. Development of iTOUGH2 is driven by scientific challenges and user needs, with new capabilities continually added to both the forward simulator and the optimization framework. This review article provides a summary description of methods and features implemented in iTOUGH2, and discusses the usefulness and limitations of an integrated simulation-optimization workflow in support of the characterization and analysis of complex multiphysics subsurface systems.

  3. Shot noise-limited Cramér-Rao bound and algorithmic sensitivity for wavelength shifting interferometry

    NASA Astrophysics Data System (ADS)

    Chen, Shichao; Zhu, Yizheng

    2017-02-01

    Sensitivity is a critical index to measure the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, which is a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
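    For shot-noise-limited (Poisson) data, the Cramér-Rao bound follows from the Fisher information I(φ) = Σ_k (∂λ_k/∂φ)² / λ_k, with the bound on the phase standard deviation equal to 1/√I(φ). The sketch below assumes a generic K-frame intensity model λ_k = (N/K)(1 + V cos(φ + θ_k)); the model and the numbers are illustrative assumptions, not the paper's derivation:

```python
import math

def phase_crb(n_photons, visibility, phi, shifts):
    """Shot-noise-limited Cramer-Rao bound on a phase estimate.

    Intensity model per frame: lam_k = (N/K)*(1 + V*cos(phi + theta_k)),
    Poisson-distributed.  Fisher information for Poisson data is
    I(phi) = sum_k (d lam_k / d phi)^2 / lam_k, and the CRB is 1/sqrt(I).
    """
    K = len(shifts)
    info = 0.0
    for theta in shifts:
        lam = (n_photons / K) * (1.0 + visibility * math.cos(phi + theta))
        dlam = -(n_photons / K) * visibility * math.sin(phi + theta)
        info += dlam * dlam / lam
    return 1.0 / math.sqrt(info)

# Four equally spaced phase shifts; more photons -> smaller bound.
shifts = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
bound_lo = phase_crb(1e4, 0.9, 0.3, shifts)
bound_hi = phase_crb(1e6, 0.9, 0.3, shifts)
```

    Because the Fisher information scales linearly with photon number here, the bound falls as 1/√N, the familiar shot-noise scaling.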

  4. O2 Plasma Etching and Antistatic Gun Surface Modifications for CNT Yarn Microelectrode Improve Sensitivity and Antifouling Properties.

    PubMed

    Yang, Cheng; Wang, Ying; Jacobs, Christopher B; Ivanov, Ilia N; Venton, B Jill

    2017-05-16

    Carbon nanotube (CNT) based microelectrodes exhibit rapid and selective detection of neurotransmitters. While different fabrication strategies and geometries of CNT microelectrodes have been characterized, relatively little research has investigated ways to selectively enhance their electrochemical properties. In this work, we introduce two simple, reproducible, low-cost, and efficient surface modification methods for carbon nanotube yarn microelectrodes (CNTYMEs): O2 plasma etching and antistatic gun treatment. O2 plasma etching was performed with a microwave plasma system under oxygen gas flow, and the optimized treatment time was 1 min. The antistatic gun treatment flows ions past the electrode surface; two triggers of the antistatic gun were found to be the optimal number for the CNTYME surface. Current for dopamine at CNTYMEs increased 3-fold after O2 plasma etching and 4-fold after antistatic gun treatment. When the two treatments were combined, the current increased 12-fold, showing the two effects are due to independent mechanisms that tune the surface properties. O2 plasma etching increased the sensitivity due to increased surface oxygen content but did not affect surface roughness, while the antistatic gun treatment increased surface roughness but not oxygen content. The effect of tissue fouling on CNT yarns was studied for the first time, and the relatively hydrophilic surface after O2 plasma etching provided better resistance to fouling than unmodified or antistatic gun treated CNTYMEs. Overall, O2 plasma etching and antistatic gun treatment improve the sensitivity of CNTYMEs by different mechanisms, providing the possibility of tuning the CNTYME surface and enhancing sensitivity.

  5. An optimal policy for deteriorating items with time-proportional deterioration rate and constant and time-dependent linear demand rate

    NASA Astrophysics Data System (ADS)

    Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu

    2017-12-01

    In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) the demand rate is deterministic and two-staged, i.e., it is constant in the first part of the cycle and a linear function of time in the second part; (ii) the deterioration rate is time-proportional; (iii) shortages are not allowed to occur. The optimal cycle time and the optimal order quantity have been derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and a sensitivity analysis of various parameters as illustrations of the theoretical results.
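    The paper's exact cost function (two-stage demand, time-proportional deterioration) is not reproduced in the abstract, but the solution procedure it describes, minimizing total average cost over the cycle time, can be sketched on the classic constant-demand special case, where the numeric optimum can be checked against the closed form T* = sqrt(2K/(hD)). All parameter values below are hypothetical:

```python
import math

def total_average_cost(T, K, D, h):
    """Average cost per unit time: ordering cost K/T plus holding h*D*T/2."""
    return K / T + h * D * T / 2.0

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimiser of a unimodal f on [a, b]."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2.0

K, D, h = 100.0, 500.0, 2.0          # hypothetical ordering cost, demand, holding cost
T_star = golden_section_min(lambda T: total_average_cost(T, K, D, h), 0.01, 5.0)
# Closed-form optimum for this simple case: T* = sqrt(2K/(h*D))
```

    A deterioration term would simply add to the cost integrand; the one-dimensional minimization over the cycle time proceeds the same way.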

  6. Enhancing interferometer phase estimation, sensing sensitivity, and resolution using robust entangled states

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-11-01

    With the goal of designing interferometers and interferometer sensors, e.g., LADARs with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed-form expressions are provided for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase estimate error. The optimal resolution for the maximum visibility and minimum phase error is found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result, LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMSs and 29.7 times smaller than that of N00N states.

  7. Imaging of high-energy x-ray emission from cryogenic thermonuclear fuel implosions on the NIF.

    PubMed

    Ma, T; Izumi, N; Tommasini, R; Bradley, D K; Bell, P; Cerjan, C J; Dixit, S; Döppner, T; Jones, O; Kline, J L; Kyrala, G; Landen, O L; LePape, S; Mackinnon, A J; Park, H-S; Patel, P K; Prasad, R R; Ralph, J; Regan, S P; Smalyuk, V A; Springer, P T; Suter, L; Town, R P J; Weber, S V; Glenzer, S H

    2012-10-01

    Accurately assessing and optimizing the implosion performance of inertial confinement fusion capsules is a crucial step to achieving ignition on the NIF. We have applied differential filtering (matched Ross filter pairs) to provide broadband time-integrated absolute x-ray self-emission images of the imploded core of cryogenic layered implosions. This diagnostic measures the temperature- and density-sensitive bremsstrahlung emission and provides estimates of hot spot mass, mix mass, and pressure.

  8. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
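    The first-order moment method linearizes the output about the input means: μ_f ≈ f(μ) and σ_f² ≈ Σ_i (∂f/∂x_i)² σ_i² for independent inputs. The sketch below applies this to a toy response function (not the Euler CFD code) and checks it against Monte Carlo, mirroring the validation strategy described above:

```python
import math
import random

def f(x1, x2):
    """Toy CFD-like output: a smooth nonlinear response of two inputs."""
    return x1 ** 2 + 3.0 * x1 * x2 + math.sin(x2)

# First-order moment method: mean at input means, variance from gradients.
mu1, mu2, s1, s2 = 1.0, 0.5, 0.05, 0.05   # independent normal inputs
df_dx1 = 2.0 * mu1 + 3.0 * mu2            # analytic partials at the mean
df_dx2 = 3.0 * mu1 + math.cos(mu2)
approx_mean = f(mu1, mu2)
approx_std = math.sqrt((df_dx1 * s1) ** 2 + (df_dx2 * s2) ** 2)

# Monte Carlo check of the approximation.
random.seed(0)
samples = [f(random.gauss(mu1, s1), random.gauss(mu2, s2)) for _ in range(200000)]
mc_mean = sum(samples) / len(samples)
mc_std = math.sqrt(sum((s - mc_mean) ** 2 for s in samples) / len(samples))
```

    For small input standard deviations the linearization tracks the sampled moments closely; second-order terms matter as the input uncertainty grows, which is why the paper also computes second-order derivatives.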

  9. Risk-Sensitivity in Sensorimotor Control

    PubMed Central

    Braun, Daniel A.; Nagengast, Arne J.; Wolpert, Daniel M.

    2011-01-01

    Recent advances in theoretical neuroscience suggest that motor control can be considered as a continuous decision-making process in which uncertainty plays a key role. Decision-makers can be risk-sensitive with respect to this uncertainty in that they may not only consider the average payoff of an outcome, but also consider the variability of the payoffs. Although such risk-sensitivity is a well-established phenomenon in psychology and economics, it has been much less studied in motor control. In fact, leading theories of motor control, such as optimal feedback control, assume that motor behaviors can be explained as the optimization of a given expected payoff or cost. Here we review evidence that humans exhibit risk-sensitivity in their motor behaviors, thereby demonstrating sensitivity to the variability of “motor costs.” Furthermore, we discuss how risk-sensitivity can be incorporated into optimal feedback control models of motor control. We conclude that risk-sensitivity is an important concept in understanding individual motor behavior under uncertainty. PMID:21283556
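    A standard way to formalize risk-sensitivity is a mean-variance trade-off on the payoff, V = E[payoff] − θ·Var[payoff], where θ > 0 penalizes variability (risk-averse), θ < 0 rewards it (risk-seeking), and θ = 0 recovers the risk-neutral expected-payoff criterion assumed by standard optimal feedback control. A toy sketch with hypothetical payoffs:

```python
def mean_variance_value(payoffs, theta):
    """Risk-sensitive value: E[payoff] - theta * Var[payoff].

    theta > 0 is risk-averse (penalises variability), theta < 0 is
    risk-seeking, and theta = 0 is the risk-neutral criterion.
    """
    m = sum(payoffs) / len(payoffs)
    v = sum((p - m) ** 2 for p in payoffs) / len(payoffs)
    return m - theta * v

safe = [10.0] * 6                          # certain outcome
risky = [0.0, 20.0, 0.0, 20.0, 0.0, 20.0]  # same mean, high variance
```

    A risk-neutral controller is indifferent between the two options; a risk-averse one (θ > 0) prefers the certain outcome, which is the behavioral signature the review describes.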

  10. Optimizing lay counsellor services for chronic care in South Africa: a qualitative systematic review.

    PubMed

    Petersen, Inge; Fairall, Lara; Egbe, Catherine O; Bhana, Arvin

    2014-05-01

    To conduct a qualitative systematic review on the use of lay counsellors in South Africa to provide lessons on optimizing their use for psychological and behavioural change counselling for chronic long-term care in scarce-resource contexts. A qualitative systematic review of the literature on lay counsellor services in South Africa. Twenty-nine studies met the inclusion criteria. Five randomized controlled trials and two cohort studies reported that lay counsellors can provide behaviour change counselling with good outcomes. One multi-centre cohort study provided promising evidence of improved anti-retroviral treatment adherence and one non-randomized controlled study provided promising results for counselling for depression. Six studies found low fidelity of lay counsellor-delivered interventions in routine care. Reasons for low fidelity include poor role definition, inconsistent remuneration, lack of standardized training, and poor supervision and logistical support. Within resource-constrained settings, adjunct behaviour change and psychological services provided by lay counsellors can be harnessed to promote chronic care at the primary health care level. Optimizing lay counsellor services requires interventions at an organizational level that provide a clear role definition and scope of practice; in-service training and formal supervision; and sensitization of health managers to the importance and logistical requirements of counselling. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  11. Modelling and optimization of a wellhead gas flowmeter using concentric pipes

    NASA Astrophysics Data System (ADS)

    Nec, Yana; Huculak, Greg

    2017-09-01

    A novel configuration of a landfill wellhead was analysed to measure the flow rate of gas extracted from sanitary landfills. The device provides access points for pressure measurement integral to flow rate computation similarly to orifice and Venturi meters, and has the advantage of eliminating the problem of water condensation often impairing the accuracy thereof. It is proved that the proposed configuration entails comparable computational complexity and negligible sensitivity to geometric parameters. Calibration for the new device was attained using a custom optimization procedure, operating on a quadri-dimensional parameter surface evincing discontinuity and non-smoothness.
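    The underlying principle shared with the orifice and Venturi meters mentioned above is the differential-pressure relation Q = C_d·A·√(2Δp/ρ). The sketch below shows only that generic relation; the paper's concentric-pipe geometry and its custom calibration procedure are not reproduced, and all numbers are hypothetical:

```python
import math

def volumetric_flow(dp, area, rho, cd=0.61):
    """Generic differential-pressure flow relation: Q = Cd*A*sqrt(2*dp/rho).

    dp: pressure drop [Pa]; area: throat area [m^2]; rho: gas density
    [kg/m^3]; cd: discharge coefficient (0.61 is a typical sharp-edged
    orifice value; a real meter is calibrated, as the paper does for its
    novel geometry).
    """
    return cd * area * math.sqrt(2.0 * dp / rho)

q = volumetric_flow(dp=250.0, area=2.0e-3, rho=1.3)
```

    Calibration amounts to replacing the nominal discharge coefficient with one fitted to the specific geometry, which is the role of the custom optimization procedure described in the abstract.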

  12. Design enhancement tools in MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Wallerstein, D. V.

    1984-01-01

    Design sensitivity is the calculation of derivatives of constraint functions with respect to design variables. While a knowledge of these derivatives is useful in its own right, the derivatives are required in many efficient optimization methods. Constraint derivatives are also required in some reanalysis methods. It is shown where the sensitivity coefficients fit into the scheme of a basic organization of an optimization procedure. The analyzer is taken to be MSC/NASTRAN. The terminator program monitors the termination criteria and ends the optimization procedure when the criteria are satisfied. This program can reside in several places: in the optimizer itself, in user-written code, or as part of MSC/EOS (Engineering Operating System), currently under development. Since several excellent optimization codes exist and require very specialized technical knowledge, the optimizer under the new MSC/EOS is considered to be selected and supplied by the user to meet his specific needs and preferences. The one exception to this is a fully stressed design (FSD) based on simple scaling. The gradients are currently supplied by various design sensitivity options now existing in MSC/NASTRAN's design sensitivity analysis (DSA).

  13. An ultra-sensitive Au nanoparticles functionalized DNA biosensor for electrochemical sensing of mercury ions.

    PubMed

    Zhang, Yanyan; Zhang, Cong; Ma, Rui; Du, Xin; Dong, Wenhao; Chen, Yuan; Chen, Qiang

    2017-06-01

    The present work describes an effective strategy to fabricate a highly sensitive and selective DNA biosensor for the determination of mercury ions (Hg2+). DNA 1 was modified onto the surface of an Au electrode through the interaction between its sulfhydryl group and the Au electrode. The DNA probe is complementary to DNA 1. In the presence of Hg2+, the electrochemical signal increases because Hg2+-mediated thymine base pairing induces the conformation of the DNA probe to change from linear to hairpin, so fewer DNA probes adsorb onto DNA 1. Taking advantage of its reduction property, methylene blue is used as the signal-indicating molecule. To improve the sensitivity of the biosensor, Au nanoparticle (Au NP)-modified reporter DNA 3 is used to adsorb DNA 1. Electrochemical behaviors of the biosensor were evaluated by electrochemical impedance spectroscopy and cyclic voltammetry. Several important parameters that could affect the performance of the biosensor were studied and optimized. Under the optimal conditions, the biosensor exhibits a wide linear range, high sensitivity and a low detection limit. Besides, it displays superior selectivity and excellent stability. The biosensor was also applied to water sample detection with satisfactory results. This novel strategy provides a potential platform for fabricating a variety of metal ion biosensors. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan

    This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully automated tuning to semi-automated development and to manual programmable control.

  15. An efficient framework for optimization and parameter sensitivity analysis in arterial growth and remodeling computations

    PubMed Central

    Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.

    2013-01-01

    Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. 
We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380

  16. Optimal design of green and grey stormwater infrastructure for small urban catchment based on life-cycle cost-effectiveness analysis

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Chui, T. F. M.

    2016-12-01

    Green infrastructure (GI) offers sustainable and environmentally friendly alternatives to conventional grey stormwater infrastructure. Commonly used GI (e.g. green roofs, bioretention, porous pavement) can provide multifunctional benefits, e.g. mitigation of urban heat island effects and improvements in air quality. Therefore, to optimize the design of GI and grey drainage infrastructure, it is essential to account for their benefits together with the costs. In this study, a comprehensive simulation-optimization modelling framework that considers the economic and hydro-environmental aspects of GI and grey infrastructure for small urban catchment applications is developed. Several modelling tools (i.e., the EPA SWMM model and the WERF BMP and LID Whole Life Cycle Cost Modelling Tools) and optimization solvers are coupled together to assess the life-cycle cost-effectiveness of GI and grey infrastructure, and to further develop optimal stormwater drainage solutions. A typical residential lot in New York City is examined as a case study. The life-cycle cost-effectiveness of various GI and grey infrastructure options is first examined at different investment levels. The results, together with the catchment parameters, are then provided to the optimization solvers to derive the optimal investment and contributing area of each type of stormwater control. The relationship between the investment and the optimized environmental benefit is found to be nonlinear. The optimized drainage solutions demonstrate that grey infrastructure is preferred at low total investments while more GI should be adopted at high investments. The sensitivity of the optimized solutions to the prices of the stormwater controls is evaluated and is found to be highly associated with their utilization in the base optimization case. The overall simulation-optimization framework can be easily applied to other sites worldwide and further developed into powerful decision support systems.

  17. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    NASA Astrophysics Data System (ADS)

    Deufel, Christopher L.; Furutani, Keith M.

    2014-02-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.
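    A minimal least-squares analogue of such a "simple optimization" check can be sketched as follows: given a (hypothetical) dose-rate matrix A, with A[i][j] the dose at calculation point i per unit dwell time at position j, solve the normal equations for dwell times that best match the prescription. This illustrates the idea of a transparent reference solver, not the paper's exact variance-based algebraic method:

```python
def solve_dwell_times(A, d):
    """Least-squares dwell times for two source positions.

    A[i][j]: dose to calculation point i per unit dwell time at position j;
    d[i]: prescribed dose at point i.  Solves the 2x2 normal equations
    (A^T A) t = A^T d exactly, then clips negative times to zero.
    """
    a11 = sum(row[0] * row[0] for row in A)
    a12 = sum(row[0] * row[1] for row in A)
    a22 = sum(row[1] * row[1] for row in A)
    b1 = sum(row[0] * di for row, di in zip(A, d))
    b2 = sum(row[1] * di for row, di in zip(A, d))
    det = a11 * a22 - a12 * a12
    t1 = (a22 * b1 - a12 * b2) / det
    t2 = (a11 * b2 - a12 * b1) / det
    return [max(t1, 0.0), max(t2, 0.0)]

# Hypothetical dose-rate matrix for 3 points and 2 dwell positions.
A = [[2.0, 0.5], [1.0, 1.0], [0.5, 2.0]]
d = [5.0, 4.0, 5.0]                       # prescription at each point
times = solve_dwell_times(A, d)
doses = [sum(aij * tj for aij, tj in zip(row, times)) for row in A]
```

    Comparing dose metrics from such a transparent solver against the commercial optimizer's output is the quality-assurance pattern the paper proposes.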

  18. A parameter optimization tool for evaluating the physical consistency of the plot-scale water budget of the integrated eco-hydrological model GEOtop in complex terrain

    NASA Astrophysics Data System (ADS)

    Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg

    2017-04-01

    In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of different abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically-based eco-hydrological models are used in mountain areas, a large number of parameters, topographic settings, and boundary conditions need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameter identification are essential not only to improve model performance, but also to identify the most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and three stations (one for each land cover) provide eddy covariance fluxes. The aims of this work are: (I) to present an approach for improving the calibration of plot-scale soil moisture and evapotranspiration (ET); (II) to identify the most sensitive parameters and the relevant factors controlling temporal and spatial differences among sites; (III) to identify possible model structural deficiencies or uncertainties in boundary conditions.
Simulations have been performed with the GEOtop 2.0 model, which is a physically based, fully distributed integrated eco-hydrological model that has been specifically designed for mountain regions, since it considers the effect of topography on radiation and water fluxes and integrates a snow module. A new automatic sensitivity and optimization tool based on Particle Swarm Optimization has been developed, available as an R package at https://github.com/EURAC-Ecohydro/geotopOptim2. The model, once calibrated for soil and vegetation parameters, predicts the plot-scale temporal dynamics of soil moisture content (SMC) and ET with an RMSE of about 0.05 m3/m3 and 40 W/m2, respectively. However, the model tends to underestimate ET during summer months over apple orchards. Results show that the most sensitive parameters are both soil and canopy structural properties. However, the ranking is affected by the choice of the target function and local topographic conditions. In particular, local slope/aspect influences results in stations located on hillslopes, but with marked seasonal differences. Results for locations on the valley floor are strongly controlled by the choice of the bottom water flux boundary condition. The poorer model performance in simulating ET over apple orchards could be explained by a model structural deficiency in representing the stomatal control on vapor pressure deficit for this particular type of vegetation. The results of this sensitivity analysis could be extended to other physically based distributed models, and also provide valuable insights for optimizing new experimental designs.
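    The geotopOptim2 package itself wraps full GEOtop runs, but the core particle swarm idea can be sketched in a few lines. The snippet below is a minimal, self-contained PSO loop applied to a toy one-parameter calibration; the "model", "observations", and bounds are invented for illustration and have nothing to do with GEOtop's actual parameters:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a scalar objective f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp to the bounds so candidate parameters stay physical.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy calibration: recover a hydraulic scaling parameter that minimizes RMSE
# against synthetic "observed" soil moisture (entirely made up for illustration).
obs = [0.20, 0.25, 0.30]
model = lambda k: [k * x for x in (1.0, 1.25, 1.5)]   # hypothetical forward model
rmse = lambda p: (sum((m - o) ** 2 for m, o in zip(model(p[0]), obs)) / 3) ** 0.5
best, err = pso_minimize(rmse, bounds=[(0.0, 1.0)])
```

    Each particle's velocity blends inertia with attraction toward its personal best and the swarm best; this needs no gradients of the objective, which is why it suits calibration against an expensive model run.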

  19. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  20. Quantifying the bending of bilayer temperature-sensitive hydrogels

    NASA Astrophysics Data System (ADS)

    Dong, Chenling; Chen, Bin

    2017-04-01

    Stimuli-responsive hydrogels can serve as manipulators, such as grippers and sensors, whose structures can undergo significant bending. Here, a finite-deformation theory is developed to quantify the evolution of the curvature of bilayer temperature-sensitive hydrogels when subjected to a temperature change. Analysis of the theory indicates that there is an optimal thickness ratio that yields the largest curvature in the bilayer, and also suggests that the sign or the magnitude of the curvature can be significantly affected by pre-stretches or small pores in the bilayer. This study may provide important guidelines for fabricating temperature-responsive bilayers with desirable mechanical performance.
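    The paper's finite-deformation theory is not reproduced in the abstract, but the existence of an optimal thickness ratio can already be seen in the classical small-strain Timoshenko bilayer formula, used here as a hedged stand-in (the mismatch strain plays the role of the temperature-induced swelling difference between the layers):

```python
def bilayer_curvature(m, n, h=1.0, eps=0.01):
    """Classical Timoshenko curvature of a bilayer strip.
    m: thickness ratio t1/t2, n: modulus ratio E1/E2,
    h: total thickness, eps: mismatch strain between the layers."""
    return 6.0 * eps * (1.0 + m) ** 2 / (
        h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n))))

# Scan the thickness ratio for the largest curvature (equal moduli, n = 1):
ms = [0.2 + 0.01 * i for i in range(481)]          # m from 0.2 to 5.0
best_m = max(ms, key=lambda m: bilayer_curvature(m, n=1.0))
```

    For equal moduli the curvature is maximized at equal layer thicknesses (m = 1), the small-strain analogue of the optimal thickness ratio the paper derives for finite deformations.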

  1. ACTPol: On-Sky Performance and Characterization

    NASA Technical Reports Server (NTRS)

    Grace, E.; Beall, J.; Bond, J. R.; Cho, H. M.; Datta, R.; Devlin, M. J.; Dunner, R.; Fox, A. E.; Gallardo, P.; Hasselfield, M.; hide

    2014-01-01

    ACTPol is the polarization-sensitive receiver on the Atacama Cosmology Telescope. ACTPol enables sensitive millimeter wavelength measurements of the temperature and polarization anisotropies of the Cosmic Microwave Background (CMB) at arcminute angular scales. These measurements are designed to explore the process of cosmic structure formation, constrain or determine the sum of the neutrino masses, probe dark energy, and provide a foundation for a host of other cosmological tests. We present an overview of the first season of ACTPol observations focusing on the optimization and calibration of the first detector array as well as detailing the on-sky performance.

  2. Modeling and Error Analysis of a Superconducting Gravity Gradiometer.

    DTIC Science & Technology

    1979-08-01

    fundamental limit to instrument sensitivity is the thermal noise of the sensor. For the gradiometer design outlined above, the best sensitivity...Mapoles at Stanford. Chapter IV determines the relation between dynamic range, the sensor Q, and the thermal noise of the cryogenic accelerometer. An...C.1 Accelerometer Optimization (1) Development and optimization of the loaded diaphragm sensor. (2) Determination of the optimal values of the

  3. Ecohydrological optimality in the Northeast China Transect

    NASA Astrophysics Data System (ADS)

    Cong, Zhentao; Li, Qinshu; Mo, Kangle; Zhang, Lexin; Shen, Hong

    2017-05-01

    The Northeast China Transect (NECT) is one of the International Geosphere-Biosphere Program (IGBP) terrestrial transects, where there is a significant precipitation gradient from east to west, as well as a vegetation transition of forest-grassland-desert. It is important to understand vegetation distribution and dynamics under climate change in this transect. We take canopy cover (M), derived from Normalized Difference Vegetation Index (NDVI), as an index to describe the properties of vegetation distribution and dynamics in the NECT. In Eagleson's ecohydrological optimality theory, the optimal canopy cover (M*) is determined by the trade-off between water supply depending on water balance and water demand depending on canopy transpiration. We apply Eagleson's ecohydrological optimality method in the NECT based on data from 2000 to 2013 to get M*, which is compared with M from NDVI to further discuss the sensitivity of M* to vegetation properties and climate factors. The results indicate that the average M* fits the actual M well (for forest, M* = 0.822 while M = 0.826; for grassland, M* = 0.353 while M = 0.352; the correlation coefficient between M and M* is 0.81). Results of water balance also match the field-measured data in the references. The sensitivity analyses show that M* decreases with the increase of leaf area index (LAI), stem fraction and temperature, while it increases with the increase of leaf angle and precipitation amount. Eagleson's ecohydrological optimality method offers a quantitative way to understand the impacts of climate change on canopy cover and provides guidelines for ecorestoration projects.
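    Eagleson's optimality condition balances water supply against canopy demand. As a sketch only, with invented linear supply/demand curves (the real theory uses full water-balance and transpiration models), the optimal cover M* is simply the root of supply(M) − demand(M):

```python
def bisect_root(f, lo, hi, tol=1e-6):
    """Find M in [lo, hi] with f(M) = 0 by bisection (f must change sign)."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Hypothetical shapes: supply falls with cover (more interception loss),
# demand rises with cover (more transpiring leaf area). Units are arbitrary.
P = 400.0                                  # annual precipitation, mm (illustrative)
supply = lambda M: P * (1.0 - 0.5 * M)     # water available to the canopy
demand = lambda M: 600.0 * M               # transpiration demand
M_star = bisect_root(lambda M: supply(M) - demand(M), 0.0, 1.0)
```

    With these toy curves M* = P/(600 + 0.5P), so the optimal cover rises with precipitation, consistent in direction with the abstract's sensitivity result.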

  4. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization.

    PubMed

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-04-17

    This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in the ambient temperature may cause dramatic voltage drifts in the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer will be examined by the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature at the sensing element, decreasing as the temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors.
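    The GA used for the PCB layout is not specified in detail in the abstract, so the following is a generic real-coded GA sketch on an invented one-dimensional "thermal cost": two heat sources on a 10 mm board should be driven away from a gas sensing element at the centre. None of the numbers correspond to the paper's board.

```python
import random

def ga_minimize(cost, bounds, pop_size=30, gens=80, mut_sigma=0.5, seed=1):
    """Toy real-coded genetic algorithm: tournament selection,
    blend crossover, Gaussian mutation with clipping to bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x, d: min(max(x, bounds[d][0]), bounds[d][1])
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=cost)
        elite = scored[:2]                       # keep the two best layouts
        children = list(elite)
        while len(children) < pop_size:
            a = min(rng.sample(scored, 3), key=cost)   # tournament selection
            b = min(rng.sample(scored, 3), key=cost)
            w = rng.random()
            child = [clip(w * a[d] + (1 - w) * b[d] + rng.gauss(0, mut_sigma), d)
                     for d in range(dim)]
            children.append(child)
        pop = children
    return min(pop, key=cost)

# Hypothetical thermal cost: two heat sources on a 10 mm board; the gas
# sensing element sits at x = 5 mm, and cost grows as heaters approach it.
sensor = 5.0
cost = lambda xs: sum(1.0 / (abs(x - sensor) + 0.1) for x in xs)
layout = ga_minimize(cost, bounds=[(0.0, 10.0), (0.0, 10.0)])
```

    Elitism guarantees the best layout never gets worse between generations, while mutation keeps exploring toward the board edges where the cost is lowest.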

  5. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization

    PubMed Central

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-01-01

    This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in the ambient temperature may cause dramatic voltage drifts in the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer will be examined by the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature at the sensing element, decreasing as the temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors. PMID:25897500

  6. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal database. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high-accuracy, low-order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are imposed through equality constraints, which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  7. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal database. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high-accuracy, low-order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are imposed through equality constraints, which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  8. A diffusion-based approach to stochastic individual growth and energy budget, with consequences to life-history optimization and population dynamics.

    PubMed

    Filin, I

    2009-06-01

    Using diffusion processes, I model stochastic individual growth, given exogenous hazards and starvation risk. By maximizing survival to final size, optimal life histories (e.g. switching size for habitat/dietary shift) are determined by two ratios: mean growth rate over growth variance (diffusion coefficient) and mortality rate over mean growth rate; all are size dependent. For example, switching size decreases with either ratio, if both are positive. I provide examples and compare with previous work on risk-sensitive foraging and the energy-predation trade-off. I then decompose individual size into reversibly and irreversibly growing components, e.g. reserves and structure. I provide a general expression for optimal structural growth, when reserves grow stochastically. I conclude that increased growth variance of reserves delays structural growth (raises threshold size for its commencement) but may eventually lead to larger structures. The effect depends on whether the structural trait is related to foraging or defence. Implications for population dynamics are discussed.
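    The role of the two ratios (mean growth rate over growth variance, mortality over mean growth) can be illustrated with a direct simulation. The sketch below uses Euler-Maruyama with constant coefficients (the paper's coefficients are size dependent) and checks survival to a final size against the standard scale-function formula for a drifted Brownian motion:

```python
import math, random

def survival_prob(mu, sigma, a, b, n_paths=2000, dt=0.01, seed=2):
    """Monte Carlo probability that a diffusion dX = mu dt + sigma dW,
    started at size a, reaches the final size b before starving at 0."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_paths):
        x = a
        while 0.0 < x < b:
            x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        wins += x >= b
    return wins / n_paths

# Closed form for constant coefficients (ratio of scale functions):
analytic = lambda mu, sig, a, b: ((1 - math.exp(-2 * mu * a / sig ** 2))
                                  / (1 - math.exp(-2 * mu * b / sig ** 2)))

p_lo = survival_prob(mu=0.5, sigma=0.5, a=1.0, b=2.0)   # low growth variance
p_hi = survival_prob(mu=0.5, sigma=1.5, a=1.0, b=2.0)   # high growth variance
```

    Here p_lo > p_hi: for the same mean growth rate, larger growth variance lowers the chance of reaching the final size before starvation, which is the trade-off, via the ratio of mean growth rate to growth variance, that drives the optimal switching sizes in the paper.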

  9. Methodologies for optimal resource allocation to the national space program and new space utilizations. Volume 1: Technical description

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The optimal allocation of resources to the national space program over an extended time period requires the solution of a large combinatorial problem in which the program elements are interdependent. The computer model uses an accelerated search technique to solve this problem. The model contains a large number of options selectable by the user to provide flexible input and a broad range of output for use in sensitivity analyses of all entering elements. Examples of these options are budget smoothing under varied appropriation levels, entry of inflation and discount effects, and probabilistic output which provides quantified degrees of certainty that program costs will remain within planned budget. Criteria and related analytic procedures were established for identifying potential new space program directions. Used in combination with the optimal resource allocation model, new space applications can be analyzed in realistic perspective, including the advantage gain from existing space program plant and on-going programs such as the space transportation system.

  10. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
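    The paper's method targets nonlinear mixed effects designs, but the MCMC machinery can be sketched with a one-dimensional stand-in: a random-walk Metropolis sampler draws times from a hypothetical design-utility density peaked at the optimal sampling time, and a central 90% interval of the draws serves as the sampling window. All densities and numbers here are invented for illustration.

```python
import math, random

def metropolis(logp, x0, n=20000, step=0.5, burn=2000, seed=3):
    """Random-walk Metropolis sampler for a 1-D log-density logp."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    out = []
    for i in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logp(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        if i >= burn:
            out.append(x)
    return out

# Hypothetical "design utility" density over sampling times (hours),
# peaked at an optimal time t* = 2 h with spread 0.25 h:
logp = lambda t: -0.5 * ((t - 2.0) / 0.25) ** 2 if 0.0 <= t <= 8.0 else -math.inf
times = sorted(metropolis(logp, x0=2.0))
window = (times[int(0.05 * len(times))], times[int(0.95 * len(times))])
```

    The window brackets the optimal time while quantifying how far a blood draw can slip before the design becomes inefficient; in the paper this idea is applied to the much higher-dimensional posterior of a population pharmacokinetic design.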

  11. A Scatter-Based Prototype Framework and Multi-Class Extension of Support Vector Machines

    PubMed Central

    Jenssen, Robert; Kloft, Marius; Zien, Alexander; Sonnenburg, Sören; Müller, Klaus-Robert

    2012-01-01

    We provide a novel interpretation of the dual of support vector machines (SVMs) in terms of scatter with respect to class prototypes and their mean. As a key contribution, we extend this framework to multiple classes, providing a new joint Scatter SVM algorithm, at the level of its binary counterpart in the number of optimization variables. This enables us to implement computationally efficient solvers based on sequential minimal and chunking optimization. As a further contribution, the primal problem formulation is developed in terms of regularized risk minimization and the hinge loss, revealing the score function to be used in the actual classification of test patterns. We investigate Scatter SVM properties related to generalization ability, computational efficiency, sparsity and sensitivity maps, and report promising results. PMID:23118845

  12. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method

    USGS Publications Warehouse

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-01-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least-squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant density, height, and to a certain degree, diameter. Wave dissipation is mostly dependent on the variation in plant density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance for future observational and modeling work to optimize efforts and reduce exploration of parameter space.
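    Effectively subsampled quadratures reduce the cost of estimating Sobol' indices; the indices themselves can also be estimated with a plain Saltelli-style Monte Carlo scheme, sketched here on an invented additive response (for an additive model the first-order indices sum to one, which gives an easy sanity check):

```python
import random

def sobol_first_order(f, dim, n=20000, seed=4):
    """Monte Carlo (Saltelli-style) estimate of first-order Sobol' indices
    for f with independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / (n - 1)
    S = []
    for i in range(dim):
        # A_B^i: rows of A with column i replaced by the column from B.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(yb * (yabi - ya)
                     for yb, yabi, ya in zip(fB, fABi, fA)) / (n * var))
    return S

# Toy "vegetation response": the first parameter matters twice as much
# in amplitude, hence four times as much in variance, as the second.
f = lambda x: 2.0 * x[0] + 1.0 * x[1]     # illustrative linear response
S = sobol_first_order(f, dim=2)            # expect S close to [0.8, 0.2]
```

    For f = 2x1 + x2 with independent U(0,1) inputs the exact indices are 0.8 and 0.2; the Effective Quadratures approach used in the paper reaches comparable estimates with far fewer model evaluations than this brute-force sampler.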

  13. An automated approach to magnetic divertor configuration design

    NASA Astrophysics Data System (ADS)

    Blommaert, M.; Dekeyser, W.; Baelmans, M.; Gauger, N. R.; Reiter, D.

    2015-01-01

    Automated methods based on optimization can greatly assist computational engineering design in many areas. In this paper an optimization approach to the magnetic design of a nuclear fusion reactor divertor is proposed and applied to a tokamak edge magnetic configuration in a first feasibility study. The approach is based on reduced models for magnetic field and plasma edge, which are integrated with a grid generator into one sensitivity code. The design objective chosen here for demonstrative purposes is to spread the divertor target heat load as much as possible over the entire target area. Constraints on the separatrix position are introduced to eliminate physically irrelevant magnetic field configurations during the optimization cycle. A gradient projection method is used to ensure stable cost function evaluations during optimization. The concept is applied to a configuration with typical Joint European Torus (JET) parameters and it automatically provides plausible configurations with reduced heat load.

  14. Optimizing water purchases for an Environmental Water Account

    NASA Astrophysics Data System (ADS)

    Lund, J. R.; Hollinshead, S. P.

    2005-12-01

    State and federal agencies in California have established an Environmental Water Account (EWA) to buy water to protect endangered fish in the San Francisco Bay/ Sacramento-San Joaquin Delta Estuary. This paper presents a three-stage probabilistic optimization model that identifies least-cost strategies for purchasing water for the EWA given hydrologic, operational, and biological uncertainties. This approach minimizes the expected cost of long-term, spot, and option water purchases to meet uncertain flow dedications for fish. The model prescribes the location, timing, and type of optimal water purchases and can illustrate how least-cost strategies change with hydrologic, operational, biological, and cost inputs. Details of the optimization model's application to California's EWA are provided with a discussion of its utility for strategic planning and policy purposes. Limitations in and sensitivity analysis of the model's representation of EWA operations are discussed, as are operational and research recommendations.

  15. Optimization of Water Resources and Agricultural Activities for Economic Benefit in Colorado

    NASA Astrophysics Data System (ADS)

    LIM, J.; Lall, U.

    2017-12-01

    The limited water resources available for irrigation are a key constraint for the important agricultural sector of Colorado's economy. As climate change and groundwater depletion reshape these resources, it is essential to understand the economic potential of water resources under different agricultural production practices. This study uses linear programming optimization at the county spatial scale and annual temporal scale to study the optimal allocation of water withdrawal and crop choices. The model, AWASH, reflects streamflow constraints between different extraction points, six field crops, and a distinct irrigation decision for maize and wheat. The optimized decision variables, under different environmental, social, economic, and physical constraints, provide long-term solutions for ground and surface water distribution and for land use decisions so that the state can generate the maximum net revenue. Colorado, one of the largest agricultural producers, is tested as a case study, and the sensitivity to water price and to climate variability is explored.
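    A county-scale allocation of this kind reduces to a linear program. The toy instance below (two crops, one water and one land constraint, invented coefficients) is solved by brute-force vertex enumeration rather than a production LP solver, which is enough at this scale:

```python
from itertools import combinations

def lp_max_2d(c, A, b):
    """Solve max c.x subject to A x <= b, x >= 0 for two variables by
    enumerating vertices of the feasible polygon (fine at this scale)."""
    # Treat x >= 0 as two extra constraints so every vertex is an intersection.
    rows = A + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = b + [0.0, 0.0]
    best, best_val = None, float("-inf")
    for (a1, b1), (a2, b2) in combinations(zip(rows, rhs), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                      # parallel constraints, no vertex
        x = [(b1 * a2[1] - a1[1] * b2) / det,
             (a1[0] * b2 - b1 * a2[0]) / det]
        if all(sum(r[j] * x[j] for j in range(2)) <= rr + 1e-9
               for r, rr in zip(rows, rhs)):
            val = c[0] * x[0] + c[1] * x[1]
            if val > best_val:
                best, best_val = x, val
    return best, best_val

# Hypothetical county-scale problem: acres of maize and wheat, limited by
# water (acre-feet) and land (acres); all coefficients are illustrative.
c = [300.0, 200.0]                        # net revenue per acre
A = [[3.0, 1.0],                          # water use per acre
     [1.0, 1.0]]                          # land use per acre
b = [120.0, 60.0]                         # water and land available
plan, revenue = lp_max_2d(c, A, b)
```

    The optimum here lies at the intersection of the water and land constraints (30 acres of each crop, revenue 15,000), the typical LP situation where both resources are binding; AWASH solves the same kind of problem with many more crops, counties, and streamflow-linked constraints.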

  16. A Hierarchical Mechanism of RIG-I Ubiquitination Provides Sensitivity, Robustness and Synergy in Antiviral Immune Responses.

    PubMed

    Sun, Xiaoqiang; Xian, Huifang; Tian, Shuo; Sun, Tingzhe; Qin, Yunfei; Zhang, Shoutao; Cui, Jun

    2016-07-08

    RIG-I is an essential receptor in the initiation of the type I interferon (IFN) signaling pathway upon viral infection. Although K63-linked ubiquitination plays an important role in RIG-I activation, the optimal modulation of conjugated and unanchored ubiquitination of RIG-I as well as its functional implications remains unclear. In this study, we determined that, in contrast to the RIG-I CARD domain, full-length RIG-I must undergo K63-linked ubiquitination at multiple sites to reach full activity. A systems biology approach was designed based on experiments using full-length RIG-I. Model selection for 7 candidate mechanisms of RIG-I ubiquitination inferred a hierarchical architecture of the RIG-I ubiquitination mode, which was then experimentally validated. Compared with other mechanisms, the selected hierarchical mechanism exhibited superior sensitivity and robustness in RIG-I-induced type I IFN activation. Furthermore, our model analysis and experimental data revealed that TRIM4 and TRIM25 exhibited dose-dependent synergism. These results demonstrated that the hierarchical mechanism of multi-site/type ubiquitination of RIG-I provides an efficient, robust and optimal synergistic regulatory module in antiviral immune responses.

  17. A Hierarchical Mechanism of RIG-I Ubiquitination Provides Sensitivity, Robustness and Synergy in Antiviral Immune Responses

    PubMed Central

    Sun, Xiaoqiang; Xian, Huifang; Tian, Shuo; Sun, Tingzhe; Qin, Yunfei; Zhang, Shoutao; Cui, Jun

    2016-01-01

    RIG-I is an essential receptor in the initiation of the type I interferon (IFN) signaling pathway upon viral infection. Although K63-linked ubiquitination plays an important role in RIG-I activation, the optimal modulation of conjugated and unanchored ubiquitination of RIG-I as well as its functional implications remains unclear. In this study, we determined that, in contrast to the RIG-I CARD domain, full-length RIG-I must undergo K63-linked ubiquitination at multiple sites to reach full activity. A systems biology approach was designed based on experiments using full-length RIG-I. Model selection for 7 candidate mechanisms of RIG-I ubiquitination inferred a hierarchical architecture of the RIG-I ubiquitination mode, which was then experimentally validated. Compared with other mechanisms, the selected hierarchical mechanism exhibited superior sensitivity and robustness in RIG-I-induced type I IFN activation. Furthermore, our model analysis and experimental data revealed that TRIM4 and TRIM25 exhibited dose-dependent synergism. These results demonstrated that the hierarchical mechanism of multi-site/type ubiquitination of RIG-I provides an efficient, robust and optimal synergistic regulatory module in antiviral immune responses. PMID:27387525

  18. A Hierarchical Mechanism of RIG-I Ubiquitination Provides Sensitivity, Robustness and Synergy in Antiviral Immune Responses

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Xian, Huifang; Tian, Shuo; Sun, Tingzhe; Qin, Yunfei; Zhang, Shoutao; Cui, Jun

    2016-07-01

    RIG-I is an essential receptor in the initiation of the type I interferon (IFN) signaling pathway upon viral infection. Although K63-linked ubiquitination plays an important role in RIG-I activation, the optimal modulation of conjugated and unanchored ubiquitination of RIG-I as well as its functional implications remains unclear. In this study, we determined that, in contrast to the RIG-I CARD domain, full-length RIG-I must undergo K63-linked ubiquitination at multiple sites to reach full activity. A systems biology approach was designed based on experiments using full-length RIG-I. Model selection for 7 candidate mechanisms of RIG-I ubiquitination inferred a hierarchical architecture of the RIG-I ubiquitination mode, which was then experimentally validated. Compared with other mechanisms, the selected hierarchical mechanism exhibited superior sensitivity and robustness in RIG-I-induced type I IFN activation. Furthermore, our model analysis and experimental data revealed that TRIM4 and TRIM25 exhibited dose-dependent synergism. These results demonstrated that the hierarchical mechanism of multi-site/type ubiquitination of RIG-I provides an efficient, robust and optimal synergistic regulatory module in antiviral immune responses.

  19. Intraoperative Detection of Cell Injury and Cell Death with an 800 nm Near-Infrared Fluorescent Annexin V Derivative

    PubMed Central

    Ohnishi, Shunsuke; Vanderheyden, Jean-Luc; Tanaka, Eiichi; Patel, Bhavesh; De Grand, Alec; Laurence, Rita G.; Yamashita, Kenichiro; Frangioni, John V.

    2008-01-01

    The intraoperative detection of cell injury and cell death is fundamental to human surgeries such as organ transplantation and resection. Because of low autofluorescence background and relatively high tissue penetration, invisible light in the 800 nm region provides sensitive detection of disease pathology without changing the appearance of the surgical field. In order to provide surgeons with real-time intraoperative detection of cell injury and death after ischemia/reperfusion (I/R), we have developed a bioactive derivative of human annexin V (annexin800), which fluoresces at 800 nm. Total fluorescence yield, as a function of bioactivity, was optimized in vitro, and final performance was assessed in vivo. In liver, intestine and heart animal models of I/R, an optimal signal to background ratio was obtained 30 min after intravenous injection of annexin800, and histology confirmed concordance between planar reflectance images and actual deep tissue injury. In summary, annexin800 permits sensitive, real-time detection of cell injury and cell death after I/R in the intraoperative setting, and can be used during a variety of surgeries for rapid assessment of tissue and organ status. PMID:16869796

  20. Automated diagnosis of coronary artery disease based on data mining and fuzzy modeling.

    PubMed

    Tsipouras, Markos G; Exarchos, Themis P; Fotiadis, Dimitrios I; Kotsia, Anna P; Vakalis, Konstantinos V; Naka, Katerina K; Michalis, Lampros K

    2008-07-01

    A fuzzy rule-based decision support system (DSS) is presented for the diagnosis of coronary artery disease (CAD). The system is automatically generated from an initial annotated dataset, using a four stage methodology: 1) induction of a decision tree from the data; 2) extraction of a set of rules from the decision tree, in disjunctive normal form and formulation of a crisp model; 3) transformation of the crisp set of rules into a fuzzy model; and 4) optimization of the parameters of the fuzzy model. The dataset used for the DSS generation and evaluation consists of 199 subjects, each one characterized by 19 features, including demographic and history data, as well as laboratory examinations. Tenfold cross validation is employed, and the average sensitivity and specificity obtained is 62% and 54%, respectively, using the set of rules extracted from the decision tree (first and second stages), while the average sensitivity and specificity increase to 80% and 65%, respectively, when the fuzzification and optimization stages are used. The system offers several advantages since it is automatically generated, it provides CAD diagnosis based on easily and noninvasively acquired features, and is able to provide interpretation for the decisions made.
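    Stages 2-3 of the methodology, crisp rule extraction followed by fuzzification, can be illustrated with a single hypothetical rule; the feature, threshold, and sigmoid steepness below are invented, not taken from the 19-feature dataset:

```python
import math

# Stage 2 (crisp): a rule of the kind extracted from a decision tree, e.g.
# "IF cholesterol > 240 THEN CAD". Feature and threshold are illustrative.
crisp = lambda chol: 1.0 if chol > 240.0 else 0.0

# Stage 3 (fuzzy): replace the hard threshold with a sigmoid membership;
# the steepness s is the kind of parameter tuned in stage 4.
fuzzy = lambda chol, s=0.1: 1.0 / (1.0 + math.exp(-s * (chol - 240.0)))

# Near the threshold the crisp rule flips abruptly, while the fuzzy rule
# grades the evidence, which is what stage-4 optimization can exploit:
print(crisp(239.0), crisp(241.0))          # 0.0 1.0
print(round(fuzzy(239.0), 3), round(fuzzy(241.0), 3))
```

    Grading the rule output near the threshold is precisely what lets the optimized fuzzy model lift sensitivity and specificity above the crisp decision-tree rules reported in the abstract.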

  1. Reference intervals for plasma free metanephrines with an age adjustment for normetanephrine for optimized laboratory testing of phaeochromocytoma.

    PubMed

    Eisenhofer, Graeme; Lattke, Peter; Herberg, Maria; Siegert, Gabriele; Qin, Nan; Därr, Roland; Hoyer, Jana; Villringer, Arno; Prejbisz, Aleksander; Januszewicz, Andrzej; Remaley, Alan; Martucci, Victoria; Pacak, Karel; Ross, H Alec; Sweep, Fred C G J; Lenders, Jacques W M

    2013-01-01

    Measurements of plasma normetanephrine and metanephrine provide a useful diagnostic test for phaeochromocytoma, but this depends on appropriate reference intervals. Upper cut-offs set too high compromise diagnostic sensitivity, whereas set too low, false-positives are a problem. This study aimed to establish optimal reference intervals for plasma normetanephrine and metanephrine. Blood samples were collected in the supine position from 1226 subjects, aged 5-84 y, including 116 children, 575 normotensive and hypertensive adults and 535 patients in whom phaeochromocytoma was ruled out. Reference intervals were examined according to age and gender. Various models were examined to optimize upper cut-offs according to estimates of diagnostic sensitivity and specificity in a separate validation group of 3888 patients tested for phaeochromocytoma, including 558 with confirmed disease. Plasma metanephrine, but not normetanephrine, was higher (P < 0.001) in men than in women, but reference intervals did not differ. Age showed a positive relationship (P < 0.0001) with plasma normetanephrine and a weaker relationship (P = 0.021) with metanephrine. Upper cut-offs of reference intervals for normetanephrine increased from 0.47 nmol/L in children to 1.05 nmol/L in subjects over 60 y. A curvilinear model for age-adjusted compared with fixed upper cut-offs for normetanephrine, together with a higher cut-off for metanephrine (0.45 versus 0.32 nmol/L), resulted in a substantial gain in diagnostic specificity from 88.3% to 96.0% with minimal loss in diagnostic sensitivity from 93.9% to 93.6%. These data establish age-adjusted cut-offs of reference intervals for plasma normetanephrine and optimized cut-offs for metanephrine useful for minimizing false-positive results.
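    The abstract gives the endpoints of the age-adjusted normetanephrine cut-off (0.47 nmol/L in children rising to 1.05 nmol/L over 60 y) but not the published curvilinear model, so the sketch below interpolates with a hypothetical quadratic purely to show how an age-adjusted test rule behaves:

```python
def nmn_cutoff(age, lo=0.47, hi=1.05, age_lo=10.0, age_hi=60.0):
    """Illustrative age-adjusted upper cut-off for plasma normetanephrine
    (nmol/L). The endpoints come from the abstract; the curvilinear shape
    between them is a hypothetical quadratic, not the published model."""
    t = min(max((age - age_lo) / (age_hi - age_lo), 0.0), 1.0)
    return lo + (hi - lo) * t ** 2

def flag(nmn, mn, age, mn_cutoff=0.45):
    """Positive test if either metabolite exceeds its cut-off
    (metanephrine cut-off 0.45 nmol/L, per the abstract)."""
    return nmn > nmn_cutoff(age) or mn > mn_cutoff

# The same normetanephrine level can be positive in a child but within
# the age-adjusted reference interval for an older adult:
child_pos = flag(nmn=0.60, mn=0.20, age=8)
adult_neg = flag(nmn=0.60, mn=0.20, age=70)
```

    This is how an age-adjusted cut-off trades away false positives in older patients (whose normetanephrine rises physiologically) while keeping the lower, more sensitive threshold for children.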

  2. Reference intervals for plasma free metanephrines with an age adjustment for normetanephrine for optimized laboratory testing of phaeochromocytoma

    PubMed Central

    Eisenhofer, Graeme; Lattke, Peter; Herberg, Maria; Siegert, Gabriele; Qin, Nan; Därr, Roland; Hoyer, Jana; Villringer, Arno; Prejbisz, Aleksander; Januszewicz, Andrzej; Remaley, Alan; Martucci, Victoria; Pacak, Karel; Ross, H Alec; Sweep, Fred C G J; Lenders, Jacques W M

    2016-01-01

    Background Measurements of plasma normetanephrine and metanephrine provide a useful diagnostic test for phaeochromocytoma, but this depends on appropriate reference intervals. Upper cut-offs set too high compromise diagnostic sensitivity, whereas cut-offs set too low produce false-positive results. This study aimed to establish optimal reference intervals for plasma normetanephrine and metanephrine. Methods Blood samples were collected in the supine position from 1226 subjects, aged 5–84 y, including 116 children, 575 normotensive and hypertensive adults and 535 patients in whom phaeochromocytoma was ruled out. Reference intervals were examined according to age and gender. Various models were examined to optimize upper cut-offs according to estimates of diagnostic sensitivity and specificity in a separate validation group of 3888 patients tested for phaeochromocytoma, including 558 with confirmed disease. Results Plasma metanephrine, but not normetanephrine, was higher (P < 0.001) in men than in women, but reference intervals did not differ. Age showed a positive relationship (P < 0.0001) with plasma normetanephrine and a weaker relationship (P = 0.021) with metanephrine. Upper cut-offs of reference intervals for normetanephrine increased from 0.47 nmol/L in children to 1.05 nmol/L in subjects over 60 y. A curvilinear model with age-adjusted upper cut-offs for normetanephrine, compared with fixed cut-offs, together with a higher cut-off for metanephrine (0.45 versus 0.32 nmol/L), resulted in a substantial gain in diagnostic specificity from 88.3% to 96.0% with minimal loss in diagnostic sensitivity from 93.9% to 93.6%. Conclusions These data establish age-adjusted cut-offs of reference intervals for plasma normetanephrine and optimized cut-offs for metanephrine useful for minimizing false-positive results. PMID:23065528

  3. Performance optimization of dye-sensitized solar cells by multilayer gradient scattering architecture of TiO2 microspheres.

    PubMed

    Li, Mingyue; Li, Meiya; Liu, Xiaolian; Bai, Lihua; Luoshan, Mengdai; Lei, Wen; Wang, Zhen; Zhu, Yongdan; Zhao, Xingzhong

    2017-01-20

    TiO2 microspheres (TMSs) with a unique hierarchical structure and unusually high specific surface area are synthesized and incorporated into photoanodes in various TMS multilayer gradient architectures to form novel photoanodes and dye-sensitized solar cells (DSSCs). These architectures significantly influence the photoelectric properties of the DSSCs. The DSSC with the optimal TMS gradient-ascent architecture, M036, has the largest amount of dye absorption, strongest light absorption, longest electron lifetime and lowest electron recombination, and thus exhibits the maximum short-circuit current density (Jsc) of 16.49 mA cm⁻² and photoelectric conversion efficiency (η) of 7.01%, notably higher than those of conventional DSSCs by 21% and 22%, respectively. These notable improvements in the properties of DSSCs can be attributed to the M036 gradient-ascent architecture, which most effectively increases dye absorption and localizes incident light within the photoanode through light scattering by the TMSs, thereby utilizing the incident light thoroughly. This study provides an optimized and universal configuration for scattering microspheres incorporated in a hybrid photoanode, which can significantly improve the performance of DSSCs.

  4. Sensitivity analysis, approximate analysis, and design optimization for internal and external viscous flows

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.

    1991-01-01

    A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.

  5. Indirect enzyme-linked immunosorbent assay method based on Streptococcus agalactiae rSip-Pgk-FbsA fusion protein for detection of bovine mastitis.

    PubMed

    Bu, Ri-E; Wang, Jin-Liang; Wu, Jin-Hua; Xilin, Gao-Wa; Chen, Jin-Long; Wang, Hua

    2017-03-01

    The aim of this study was to establish a rapid and accurate method for the detection of the Streptococcus agalactiae antibody (SA-Ab) to determine the presence of the bovine mastitis (BM)-causative pathogen. The multi-subunit fusion protein rSip-Pgk-FbsA was prokaryotically expressed and purified. The triple activities of the membrane surface-associated proteins Sip, phosphoglycerate kinase (Pgk), and fibronectin (FbsA) were used as the diagnostic antigens to establish an indirect enzyme-linked immunosorbent assay (ELISA) method for the detection of SA-Ab in BM. The optimal antigen coating concentration was 2 μg/mL, the optimal serum dilution was 1:160, and the optimal dilution of the enzyme-labeled secondary antibody was 1:6000. The sensitivity, specificity, and repeatability tests showed that the method established in this study had no cross-reaction with antibodies to Streptococcus pyogenes, Escherichia coli, Staphylococcus aureus, and Staphylococcus epidermidis in the sera. The results of the sensitivity test showed that a positive result could be obtained even if the serum dilution reached 1:12,800, indicating the high sensitivity and good repeatability of the method. The positive coincidence rate of this method was 98.6%, higher than that of previous tests established with the Sip or Pgk mono-antigen fusion proteins, demonstrating the relatively higher sensitivity of this newly established method. The detection rate for 389 clinical samples was 46.53%. The indirect ELISA method established in this study could provide a more accurate and reliable serological method for the rapid detection of S. agalactiae in cases of BM.

  6. SU-F-J-06: Optimized Patient Inclusion for NaF PET Response-Based Biopsies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, A; Harmon, S; Perk, T

    Purpose: A method to guide mid-treatment biopsies using quantitative [F-18]NaF PET/CT response is being investigated in a clinical trial. This study aims to develop methodology to identify patients amenable to mid-treatment biopsy based on pre-treatment imaging characteristics. Methods: 35 metastatic prostate cancer patients had NaF PET/CT scans taken prior to the start of treatment and 9–12 weeks into treatment. For mid-treatment biopsy targeting, lesions must be at least 1.5 cm³ and located in a clinically feasible region (lumbar/sacral spine, pelvis, humerus, or femur). Three methods were developed based on the number of lesions present prior to treatment: a feasibility-restricted method, a location-restricted method, and an unrestricted method. The feasibility-restricted method only utilizes information from lesions meeting biopsy requirements in the pre-treatment scan. The unrestricted method accounts for all lesions present in the pre-treatment scan. For each method, optimized classification cutoffs for candidate patients were determined. Results: 13 of the 35 patients had enough lesions at mid-treatment for biopsy candidacy. Of 1749 lesions identified in all 35 patients at mid-treatment, only 9.8% were amenable to biopsy. Optimizing the feasibility-restricted method required 4 lesions at pre-treatment meeting volume and region requirements for biopsy, resulting in a patient identification sensitivity of 0.8 and specificity of 0.7. Of the 6 false-positive patients, only one lacked lesions for biopsy. Restricting for location alone showed poor results (sensitivity 0.2 and specificity 0.3). The optimized unrestricted method required patients to have at least 37 lesions in the pre-treatment scan, resulting in a sensitivity of 0.8 and specificity of 0.8. There were 5 false positives, of which only one lacked lesions for biopsy. Conclusion: Incorporating the overall pre-treatment number of NaF PET/CT-identified lesions provided the best prediction for identifying candidate patients for mid-treatment biopsy. This study provides validity for prediction-based inclusion criteria that can be extended to various clinical trial scenarios. Funded by Prostate Cancer Foundation.

  7. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. 
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
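
    The abstract's central claim, that a coarse fixed-step explicit scheme deforms the calibration objective and biases the inferred parameters, can be illustrated on a toy linear reservoir (an assumed stand-in, not one of the paper's six hydrological models):

```python
# Toy illustration (not the paper's hydrological models): a linear
# reservoir dS/dt = P - k*S integrated with a coarse fixed-step explicit
# Euler scheme versus a much finer step. Evaluating a sum-of-squares
# objective over a grid of k shows how truncation error from the coarse
# scheme distorts the objective surface that calibration would search.

def simulate(k, dt, t_end=10.0, P=1.0, S0=0.0):
    """Explicit Euler integration of dS/dt = P - k*S; returns final storage."""
    S = S0
    for _ in range(int(round(t_end / dt))):
        S += dt * (P - k * S)
    return S

# "Observed" data from a fine-step run with the true parameter k = 1.9
k_true = 1.9
obs = simulate(k_true, dt=0.001)

def objective(k, dt):
    """Squared error of the final state against the fine-step observation."""
    return (simulate(k, dt) - obs) ** 2

ks = [0.5 + 0.1 * i for i in range(26)]           # k in [0.5, 3.0]
coarse = [objective(k, dt=1.0) for k in ks]       # unreliable coarse scheme
fine = [objective(k, dt=0.001) for k in ks]       # reliable reference

best_coarse = ks[coarse.index(min(coarse))]
best_fine = ks[fine.index(min(fine))]
print(best_coarse, best_fine)  # the coarse optimum is biased away from k_true
```

    Here the coarse scheme's best-fit k lands on a neighbouring grid point rather than the true value, the grid-scale analogue of the biased inference described in conclusion (4).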

  8. Optimizing decentralized production-distribution planning problem in a multi-period supply chain network under uncertainty

    NASA Astrophysics Data System (ADS)

    Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi

    2017-09-01

    Decentralized supply chain management is significantly relevant in today's competitive markets. Production and distribution planning is posed as an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model under uncertainty. The imprecision associated with uncertain parameters, such as demand and the price of the final product, is represented by stochastic and fuzzy numbers. We provide a mathematical formulation of the problem as a bi-level mixed-integer linear programming model. Owing to the problem's complexity, a solution framework is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach, and a chance-constraint approach. Ultimately, a numerical example is constructed and worked through to demonstrate the applicability of the optimization model. A sensitivity analysis is also performed.

  9. Optimal design of an electro-hydraulic valve for heavy-duty vehicle clutch actuator with certain constraints

    NASA Astrophysics Data System (ADS)

    Meng, Fei; Shi, Peng; Karimi, Hamid Reza; Zhang, Hui

    2016-02-01

    The main objective of this paper is to investigate the sensitivity analysis and optimal design of a proportional solenoid valve (PSV) operated pressure reducing valve (PRV) for heavy-duty automatic transmission clutch actuators. The nonlinear electro-hydraulic valve model is developed based on fluid dynamics. In order to implement the sensitivity analysis and optimization for the PRV, the PSV model is validated by comparing the results with data obtained from a real test-bench. The sensitivity of the PSV pressure response with regard to the structural parameters is investigated by using Sobol's method. Finally, simulations and experimental investigations are performed on the optimized prototype and the results reveal that the dynamical characteristics of the valve have been improved in comparison with the original valve.
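
    Sobol's method, as cited above, attributes output variance to individual inputs. A minimal sketch on an assumed additive test function (not the PSV model) estimates first-order indices with the standard Saltelli A/B/AB-matrix scheme:

```python
import numpy as np

# Illustrative variance-based (Sobol') sensitivity sketch, not the paper's
# valve model: first-order indices S_i estimated with the Saltelli A/B/AB
# matrix scheme for a simple additive test function y = 4*x0 + 2*x1 + x2,
# inputs uniform on [0, 1]. For this function the exact first-order
# indices are proportional to the squared coefficients, 16 : 4 : 1.

def model(x):
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.random((n, d))
B = rng.random((n, d))

yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # replace column i of A with that of B
    yABi = model(ABi)
    # Saltelli-style first-order estimator: mean(yB * (yAB_i - yA)) / Var(y)
    S.append(np.mean(yB * (yABi - yA)) / var_y)

print([round(float(s), 2) for s in S])  # each close to 16/21, 4/21, 1/21
```

    Ranking structural parameters by such indices is what justifies restricting the subsequent optimization to the few most influential ones.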

  10. Basic design of MRM assays for peptide quantification.

    PubMed

    James, Andrew; Jorgensen, Claus

    2010-01-01

    With the recent availability and accessibility of mass spectrometry for basic and clinical research, the requirement for stable, sensitive, and reproducible assays to specifically detect proteins of interest has increased. Multiple reaction monitoring (MRM) or selective reaction monitoring (SRM) is a highly selective, sensitive, and robust assay to monitor the presence and amount of biomolecules. Until recently, MRM was typically used for the detection of drugs and other biomolecules from body fluids. With increased focus on biomarkers and systems biology approaches, researchers in the proteomics field have taken advantage of this approach. In this chapter, we will introduce the reader to the basic principle of designing and optimizing an MRM workflow. We provide examples of MRM workflows for standard proteomic samples and provide suggestions for the reader who is interested in using MRM for quantification.

  11. A stochastic optimal feedforward and feedback control methodology for superagility

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Direskeneli, Haldun; Taylor, Deborah B.

    1992-01-01

    A new control design methodology is developed: Stochastic Optimal Feedforward and Feedback Technology (SOFFT). Traditional design techniques optimize a single cost function (which expresses the design objectives) to obtain both the feedforward and feedback control laws. This approach places conflicting demands on the control law, such as fast tracking versus noise attenuation/disturbance rejection. In the SOFFT approach, two cost functions are defined: the feedforward control law is designed to optimize one cost function, and the feedback law optimizes the other. By separating the design objectives and decoupling the feedforward and feedback design processes, both objectives can be achieved fully. A new measure of command tracking performance, Z-plots, is also developed. By analyzing these plots at off-nominal conditions, the sensitivity or robustness of the system in tracking commands can be predicted. Z-plots provide an important tool for designing robust control systems. The Variable-Gain SOFFT methodology was used to design a flight control system for the F/A-18 aircraft. It is shown that SOFFT can be used to expand the operating regime and provide greater performance (flying/handling qualities) throughout the extended flight regime. This work was performed under the NASA SBIR program. ICS plans to market the software developed as a new module in its commercial CACSD software package: ACET.

  12. Heuristic use of perceptual evidence leads to dissociation between performance and metacognitive sensitivity.

    PubMed

    Maniscalco, Brian; Peters, Megan A K; Lau, Hakwan

    2016-04-01

    Zylberberg, Barttfeld, and Sigman (Frontiers in Integrative Neuroscience, 6:79, 2012) found that confidence decisions, but not perceptual decisions, are insensitive to evidence against a selected perceptual choice. We present a signal detection theoretic model to formalize this insight, which gave rise to a counter-intuitive empirical prediction: that depending on the observer's perceptual choice, increasing task performance can be associated with decreasing metacognitive sensitivity (i.e., the trial-by-trial correspondence between confidence and accuracy). The model also provides an explanation as to why metacognitive sensitivity tends to be less than optimal in actual subjects. These predictions were confirmed robustly in a psychophysics experiment. In a second experiment we found that, in at least some subjects, the effects were replicated even under performance feedback designed to encourage optimal behavior. However, some subjects did show improvement under feedback, suggesting the tendency to ignore evidence against a selected perceptual choice may be a heuristic adopted by the perceptual decision-making system, rather than reflecting inherent biological limitations. We present a Bayesian modeling framework that explains why this heuristic strategy may be advantageous in real-world contexts.
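
    The heuristic at issue can be simulated in a few lines. The sketch below is an illustrative signal-detection toy, not the authors' exact model: confidence computed from the full evidence balance is compared with confidence read only from the chosen option's evidence channel.

```python
import random

# Minimal simulation of the confidence heuristic described above (an
# illustrative sketch, not the authors' exact model): two evidence
# channels carry noisy signals for options A and B; the choice takes
# the larger channel. "Optimal" confidence uses the difference between
# channels, while the heuristic ignores evidence against the chosen
# option and uses only the chosen channel. Metacognitive sensitivity is
# summarized as the mean confidence gap between correct and error trials.

random.seed(1)

def trial():
    correct_is_a = random.random() < 0.5
    mu_a, mu_b = (1.0, 0.0) if correct_is_a else (0.0, 1.0)
    ea = random.gauss(mu_a, 1.0)
    eb = random.gauss(mu_b, 1.0)
    choose_a = ea > eb
    accurate = choose_a == correct_is_a
    conf_optimal = abs(ea - eb)              # evidence balance
    conf_heuristic = ea if choose_a else eb  # chosen channel only
    return accurate, conf_optimal, conf_heuristic

trials = [trial() for _ in range(50_000)]

def conf_gap(index):
    """Mean confidence on correct trials minus mean confidence on errors."""
    hit = [t[index] for t in trials if t[0]]
    miss = [t[index] for t in trials if not t[0]]
    return sum(hit) / len(hit) - sum(miss) / len(miss)

print(conf_gap(1) > conf_gap(2))  # the balanced read-out separates better
```

    The heuristic read-out is inflated on error trials (the wrong channel was chosen precisely because it ran high), which is what degrades the confidence-accuracy correspondence.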

  13. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    NASA Astrophysics Data System (ADS)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet, and a car engine cooling axial fan.

  14. Proteomic biomarkers apolipoprotein A1, truncated transthyretin and connective tissue activating protein III enhance the sensitivity of CA125 for detecting early stage epithelial ovarian cancer.

    PubMed

    Clarke, Charlotte H; Yip, Christine; Badgwell, Donna; Fung, Eric T; Coombes, Kevin R; Zhang, Zhen; Lu, Karen H; Bast, Robert C

    2011-09-01

    The low prevalence of ovarian cancer demands both high sensitivity (>75%) and specificity (99.6%) to achieve a positive predictive value of 10% for successful early detection. Utilizing a two stage strategy where serum marker(s) prompt the performance of transvaginal sonography (TVS) in a limited number (2%) of women could reduce the requisite specificity for serum markers to 98%. We have attempted to improve sensitivity by combining CA125 with proteomic markers. Sera from 41 patients with early stage (I/II) and 51 with late stage (III/IV) epithelial ovarian cancer, 40 with benign disease and 99 healthy individuals, were analyzed to measure 7 proteins [Apolipoprotein A1 (Apo-A1), truncated transthyretin (TT), transferrin, hepcidin, β-2-microglobulin (β2M), Connective Tissue Activating Protein III (CTAPIII), and Inter-alpha-trypsin inhibitor heavy chain 4 (ITIH4)]. Statistical models were fit by logistic regression, followed by optimization of factors retained in the models determined by optimizing the Akaike Information Criterion. A validation set included 136 stage I ovarian cancers, 140 benign pelvic masses and 174 healthy controls. In a training set analysis, the 3 most effective biomarkers (Apo-A1, TT and CTAPIII) exhibited 54% sensitivity at 98% specificity, CA125 alone produced 68% sensitivity and the combination increased sensitivity to 88%. In a validation set, the marker panel plus CA125 produced a sensitivity of 84% at 98% specificity (P = 0.015, McNemar's test). Combining a panel of proteomic markers with CA125 could provide a first step in a sequential two-stage strategy with TVS for early detection of ovarian cancer. Copyright © 2011. Published by Elsevier Inc.
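
    The operating-point logic (fix specificity at 98%, read off sensitivity) is easy to reproduce on synthetic scores; the two-Gaussian score distributions below are assumptions for illustration, not the study's CA125/proteomic data.

```python
import numpy as np

# Sketch of the operating-point logic in the abstract: fix specificity at
# 98% on controls, then read off sensitivity on cases. The scores below
# are synthetic (a combined-marker score simulated as two Gaussians), not
# the study's CA125/proteomic measurements.

rng = np.random.default_rng(42)
controls = rng.normal(0.0, 1.0, 5000)   # healthy/benign combined score
cases = rng.normal(2.5, 1.0, 1000)      # ovarian-cancer combined score

spec_target = 0.98
threshold = np.quantile(controls, spec_target)  # 98% of controls fall below
specificity = float(np.mean(controls <= threshold))
sensitivity = float(np.mean(cases > threshold))

print(round(float(threshold), 2), round(sensitivity, 2))
```

    In the two-stage strategy, everyone above the threshold (2% of controls plus the detected cases) would be referred to TVS.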

  15. Proteomic Biomarkers Apolipoprotein A1, Truncated Transthyretin and Connective Tissue Activating Protein III Enhance the Sensitivity of CA125 for Detecting Early Stage Epithelial Ovarian Cancer

    PubMed Central

    Clarke, Charlotte H.; Yip, Christine; Badgwell, Donna; Fung, Eric T.; Coombes, Kevin R.; Zhang, Zhen; Lu, Karen H.; Bast, Robert C.

    2011-01-01

    Objective The low prevalence of ovarian cancer demands both high sensitivity (>75%) and specificity (99.6%) to achieve a positive predictive value of 10% for successful early detection. Utilizing a two stage strategy where serum marker(s) prompt the performance of transvaginal sonography (TVS) in a limited number (2%) of women could reduce the requisite specificity for serum markers to 98%. We have attempted to improve sensitivity by combining CA125 with proteomic markers. Methods Sera from 41 patients with early stage (I/II) and 51 with late stage (III/IV) epithelial ovarian cancer, 40 with benign disease and 99 healthy individuals, were analyzed to measure 7 proteins [Apolipoprotein A1 (Apo-A1), truncated transthyretin (TT), transferrin, hepcidin, β-2-microglobulin (β2M), Connective Tissue Activating Protein III (CTAPIII), and Inter-alpha-trypsin inhibitor heavy chain 4 (ITIH4)]. Statistical models were fit by logistic regression, followed by optimization of factors retained in the models determined by optimizing the Akaike Information Criterion. A validation set included 136 stage I ovarian cancers, 140 benign pelvic masses and 174 healthy controls. Results In a training set analysis, the 3 most effective biomarkers (Apo-A1, TT and CTAPIII) exhibited 54% sensitivity at 98% specificity, CA125 alone produced 68% sensitivity and the combination increased sensitivity to 88%. In a validation set, the marker panel plus CA125 produced a sensitivity of 84% at 98% specificity (P = 0.015, McNemar's test). Conclusion Combining a panel of proteomic markers with CA125 could provide a first step in a sequential two-stage strategy with TVS for early detection of ovarian cancer. PMID:21708402

  16. The Model Optimization, Uncertainty, and SEnsitivity analysis (MOUSE) toolbox: overview and application

    USDA-ARS?s Scientific Manuscript database

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  17. Visible-light sensitization of TiO2 photocatalysts via wet chemical N-doping for the degradation of dissolved organic compounds in wastewater treatment: a review

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Jia, Baoping; Wang, Qiuze; Dionysiou, Dionysios

    2015-05-01

    Increased pollution of ground and surface water and emerging new micropollutants from a wide variety of industrial, municipal, and agricultural sources have increased demand for innovative new technologies and materials to address the challenges associated with the provision of safe potable water. Heterogeneous photocatalysis using visible-light sensitized TiO2 photocatalysts has attracted much attention, as it can effectively remove dissolved organic compounds from water without generating harmful by-products. On this note, recent progress on visible-light sensitive TiO2 synthesis via wet chemical N-doping methods is reviewed. In a typical visible-light sensitive TiO2 preparation via wet chemical methods, the chemical properties (e.g., N-doping content and states) and morphological properties (e.g., particle size, surface area, and crystal phase) of the as-prepared TiO2 depend sensitively on many experimental variables during synthesis. This has made it very difficult to provide universal guidance at this stage with certainty for each variable of the N-doping preparation. Instead of one-factor-at-a-time investigations, a statistically valid parameter optimization study for general optima of photocatalytic activity would certainly be useful. Optimization of the preparation technique is envisaged to benefit many environmental applications, e.g., removal of dissolved organic compounds in wastewater treatment.

  18. Pressure sensitive microparticle adhesion through biomimicry of the pollen-stigma interaction.

    PubMed

    Lin, Haisheng; Qu, Zihao; Meredith, J Carson

    2016-03-21

    Many soft biomimetic synthetic adhesives, optimized to support macroscopic masses (∼kg), have been inspired by geckos, insects and other animals. Far less work has investigated bioinspired adhesion that is tuned to micro- and nano-scale sizes and forces. However, such adhesive forces are extremely important in the adhesion of micro- and nanoparticles to surfaces, relevant to a wide range of industrial and biological systems. Pollens, whose adhesion is critical to plant reproduction, are an evolutionary-optimized system for biomimicry to engineer tunable adhesion between particles and micro-patterned soft matter surfaces. In addition, the adhesion of pollen particles is relevant to topics as varied as pollinator ecology, transport of allergens, and atmospheric phenomena. We report the first observation of structurally-derived pressure-sensitive adhesion of a microparticle by using the sunflower pollen and stigma surfaces as a model. This strong, pressure-sensitive adhesion results from interlocking between the pollen's conical spines and the stigma's receptive papillae. Inspired by this behavior, we fabricated synthetic polymeric patterned surfaces that mimic the stigma surface's receptivity to pollen. These soft mimics allow the magnitude of the pressure-sensitive response to be tuned by adjusting the size and spacing of surface features. These results provide an important new insight for soft material adhesion based on bio-inspired principles, namely that ornamented microparticles and micro-patterned surfaces can be designed with complementarity that enable a tunable, pressure-sensitive adhesion on the microparticle size and length scale.

  19. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
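
    The approximate statistical moment method named above can be demonstrated on a toy function in place of the CFD code: a second-order correction to the output mean and a first-order estimate of the output variance, checked against Monte Carlo. The function and input statistics are assumptions for illustration.

```python
import random

# Sketch of the approximate statistical moment method described above,
# applied to a toy function rather than a CFD code: with independent
# normal inputs, the output mean gets a second-order correction from the
# Hessian diagonal and the output variance comes from first-order
# sensitivity derivatives. Monte Carlo provides the check.

def f(x, y):
    return x * x + 2.0 * x * y

mu_x, sig_x = 1.0, 0.1
mu_y, sig_y = 2.0, 0.2

# Analytic sensitivity derivatives of f at the input means
dfdx = 2.0 * mu_x + 2.0 * mu_y      # df/dx
dfdy = 2.0 * mu_x                   # df/dy
d2fdx2 = 2.0                        # d2f/dx2 (d2f/dy2 = 0)

mean_approx = f(mu_x, mu_y) + 0.5 * d2fdx2 * sig_x ** 2   # 2nd-order mean
var_approx = (dfdx * sig_x) ** 2 + (dfdy * sig_y) ** 2    # 1st-order variance

random.seed(0)
samples = [f(random.gauss(mu_x, sig_x), random.gauss(mu_y, sig_y))
           for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / len(samples)

print(round(mean_approx, 3), round(mc_mean, 3))
print(round(var_approx, 4), round(mc_var, 4))
```

    For this mildly nonlinear function the moment-matched values agree closely with Monte Carlo, which mirrors the paper's finding that the approximations hold near the input means.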

  20. The VBB SEIS experiment of InSight

    NASA Astrophysics Data System (ADS)

    De Raucourt, Sebastien; Gabsi, Taoufik; Tanguy, Nebut; Mimoun, David; Lognonne, Philippe; Gagnepain-Beyneix, Jeannine; Banerdt, William; Tillier, Sylvain; Hurst, Kenneth

    2012-07-01

    SEIS is the core payload of InSight, one of the three missions selected for competitive Phase A under the 2010 Discovery AO. It aims to provide unique observations of the interior structure of Mars and to monitor its seismic activity, yielding the first seismic model of a planet other than Earth. SEIS is a hybrid seismometer composed of 3 SP and 3 VBB axes providing ground-motion measurement from DC to 50 Hz. A leveling system will ensure the coupling between the ground and the sensors as well as the horizontality of the VBB sphere. This assembly will be deployed on the ground of Mars and shielded by strong thermal insulation and a wind shield. The 24-bit low-noise acquisition electronics will remain in the warm electronics box of the lander together with the sensor feedback and leveling-system electronics. The VBB sphere encloses three single-axis sensors, each based on an inverted leaf-spring pendulum that converts ground acceleration into displacement of a mobile mass. A capacitive displacement sensor monitors this mass displacement to provide a measurement, and a force feedback allows tuning of the transfer function and sensitivity. The VBB sensor has a very strong heritage from previous projects and benefits from recent work to improve its performance. Both the mechanical design and the displacement sensors have been optimized to improve performance while reducing technological risk and keeping a high TRL. From these developments, a self-noise well below 10⁻⁹ m s⁻²/√Hz is expected. The environmental sensitivity of SEIS has been minimized by the design of a very efficient wind and thermal shield, and the remaining noise is expected to be very close to the VBB self-noise. Associated sources and budgets will be discussed. If InSight is selected to fly in 2016, this experiment will provide very high quality seismic measurements with a wider bandwidth, higher sensitivity, and lower noise than previous Mars seismometers (Viking and Optimism/Mars 96).

  1. Ultra high performance supercritical fluid chromatography coupled with tandem mass spectrometry for screening of doping agents. I: Investigation of mobile phase and MS conditions.

    PubMed

    Nováková, Lucie; Grand-Guillaume Perrenoud, Alexandre; Nicoli, Raul; Saugy, Martial; Veuthey, Jean-Luc; Guillarme, Davy

    2015-01-01

    The conditions for the analysis of selected doping substances by UHPSFC-MS/MS were optimized to ensure suitable peak shapes and maximized MS responses. A representative mixture of 31 acidic and basic doping agents was analyzed, in both ESI+ and ESI- modes. The best compromise for all compounds in terms of MS sensitivity and chromatographic performance was obtained when adding 2% water and 10 mM ammonium formate in the CO2/MeOH mobile phase. Besides the mobile phase, the nature of the make-up solvent added for interfacing UHPSFC with MS was also evaluated. Ethanol was found to be the best candidate, as it was able to compensate for the negative effect of 2% water addition in ESI- mode and provided a suitable MS response for all doping agents. Sensitivity of the optimized UHPSFC-MS/MS method was finally assessed and compared to the results obtained in conventional UHPLC-MS/MS. Sensitivity was improved by 5- to 100-fold in UHPSFC-MS/MS vs. UHPLC-MS/MS for 56% of compounds, while only one compound (bumetanide) offered a significantly higher MS response (4-fold) under UHPLC-MS/MS conditions. In the second paper of this series, the optimal conditions for UHPSFC-MS/MS analysis will be employed to screen >100 doping agents in urine matrix and results will be compared to those obtained by conventional UHPLC-MS/MS. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy based on the finite element method and an elastic membrane representation of the computational domain is successfully tested; it circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having significantly improved performance in the aerodynamic response of interest.

  3. An enzyme-linked immunosorbent assay for detection of botulinum toxin-antibodies.

    PubMed

    Dressler, Dirk; Gessler, Frank; Tacik, Pawel; Bigalke, Hans

    2014-09-01

    Antibodies against botulinum neurotoxin (BNT-AB) can be detected by the mouse protection assay (MPA), the hemidiaphragm assay (HDA), and by enzyme-linked immunosorbent assays (ELISA). Both MPA and HDA require sacrifice of experimental animals, and they are technically delicate and labor intensive. We introduce a specially developed ELISA for detection of BNT-A-AB and evaluate it against the HDA. Thirty serum samples were tested by HDA and by the new ELISA. Results were compared, and receiver operating characteristic analyses were used to optimize the ELISA parameter constellation to obtain either maximal overall accuracy, maximal test sensitivity, or maximal test specificity. When the ELISA is optimized for sensitivity, a sensitivity of 100% and a specificity of 55% can be reached. When it is optimized for specificity, a specificity of 100% and a sensitivity of 90% can be obtained. We present an ELISA for BNT-AB detection that can, for the first time, be customized for special purposes. Adjusted for optimal sensitivity, it reaches the best sensitivity of all available BNT-AB tests. Using the new ELISA together with the HDA as a confirmatory test allows testing for BNT-AB in large numbers of patients receiving BT drugs in an economical, fast, and more animal-friendly way. © 2014 International Parkinson and Movement Disorder Society.
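    The cutoff tuning described above can be sketched directly. The titers and reference labels below are hypothetical, purely to illustrate how a single assay can be "customized" for maximal sensitivity or maximal specificity by moving the cutoff:

```python
# Hypothetical ELISA titers with HDA reference labels (1 = antibody-positive),
# illustrating cutoff selection for maximal sensitivity vs. specificity.

def roc_metrics(scores, labels, cutoff):
    """Sensitivity, specificity, accuracy of 'positive if score >= cutoff'."""
    tp = sum(1 for s, l in zip(scores, labels) if s >= cutoff and l == 1)
    fn = sum(1 for s, l in zip(scores, labels) if s < cutoff and l == 1)
    tn = sum(1 for s, l in zip(scores, labels) if s < cutoff and l == 0)
    fp = sum(1 for s, l in zip(scores, labels) if s >= cutoff and l == 0)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(labels)

def best_cutoff(scores, labels, criterion):
    """Scan candidate cutoffs; criterion ranks (sens, spec, acc) tuples."""
    return max(set(scores), key=lambda c: criterion(*roc_metrics(scores, labels, c)))

scores = [0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9, 1.2]
labels = [0,   0,   0,   1,    0,   1,   1,   1,   1,   1]

# Optimize for sensitivity first (ties broken by specificity), and vice versa
c_sens = best_cutoff(scores, labels, lambda se, sp, ac: (se, sp))
c_spec = best_cutoff(scores, labels, lambda se, sp, ac: (sp, se))
```

    The same scan with an accuracy criterion reproduces the "maximal overall accuracy" setting mentioned in the abstract.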

  4. External quality assessment studies for laboratory performance of molecular and serological diagnosis of Chikungunya virus infection.

    PubMed

    Jacobsen, Sonja; Patel, Pranav; Schmidt-Chanasit, Jonas; Leparc-Goffart, Isabelle; Teichmann, Anette; Zeller, Herve; Niedrig, Matthias

    2016-03-01

    Since the re-emergence of Chikungunya virus (CHIKV) on Reunion in 2005 and the recent outbreak in the Caribbean islands, with expansion to the Americas, CHIKV diagnostics have become very important. We evaluated the performance of laboratories worldwide in the molecular and serological diagnosis of CHIKV. A panel of 12 samples for molecular testing and 13 samples for serology was provided to 60 laboratories in 40 countries to evaluate the sensitivity and specificity of molecular and serological testing. The panel for molecular diagnostic testing was analysed by 56 laboratories returning 60 data sets of results, whereas 56 and 60 data sets were returned for IgG and IgM diagnostics, respectively, by the participating laboratories. Of the 60 molecular data sets, 23 performed optimally, 7 were acceptable, and 30 require improvement. Of 50 IgM data sets, only one laboratory showed optimal performance, followed by 9 acceptable data sets, with the rest needing improvement. Of 46 IgG serology data sets, 20 were optimal, 2 acceptable, and 24 require improvement. The evaluation of some of the diagnostic performances allows linking the quality of results to the in-house methods or commercial assays used. This external quality assessment for CHIKV diagnostics provides a good overview of laboratory performance regarding the sensitivity and specificity of the molecular and serological diagnostics required for quick and reliable analysis of suspected CHIKV patients. Nearly half of the laboratories need to improve their diagnostic profile to achieve better performance. Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  5. Using linear programming to minimize the cost of nurse personnel.

    PubMed

    Matthews, Charles H

    2005-01-01

    Nursing personnel costs make up a major portion of most hospital budgets. This report evaluates and optimizes the utility of the nurse personnel at the Internal Medicine Outpatient Clinic of Wake Forest University Baptist Medical Center. Linear programming (LP) was employed to determine the effective combination of nurses that would allow all weekly clinic tasks to be covered at the lowest possible cost to the department. Linear programming can be carried out with standard spreadsheet software: the operator establishes the variables to be optimized and then enters a series of constraints, each of which affects the ultimate outcome. The application is therefore able to quantify and stratify the nurses necessary to execute the tasks. With the report, a sensitivity analysis can be performed to assess how sensitive the outcome is to adding a nurse to, or deleting a nurse from, the payroll. The nurse employee cost structure in this study consisted of five certified nurse assistants (CNA), three licensed practical nurses (LPN), and five registered nurses (RN). The LP revealed that the outpatient clinic should staff four RNs, three LPNs, and four CNAs with 95 percent confidence of covering nurse demand on the floor. This combination of nurses would enable the clinic to: 1. Reduce annual staffing costs by 16 percent; 2. Make each level of nurse optimally productive by focusing on tasks specific to their expertise; 3. Assign accountability more efficiently as the nurses adhere to their specific duties; and 4. Ultimately provide a competitive advantage to the clinic as it relates to nurse employee and patient satisfaction. Linear programming can be used to solve capacity problems for just about any staffing situation, provided the model is indeed linear.
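    A staffing model of this kind can be posed with an off-the-shelf LP solver in a few lines. The costs, hours-per-nurse, and demand figures below are invented for illustration (only the roster bounds of 5 CNAs, 3 LPNs, and 5 RNs mirror the study's staff pool), and scipy.optimize.linprog stands in for the spreadsheet solver:

```python
# Hypothetical sketch of a nurse-staffing LP: minimize weekly payroll subject
# to coverage constraints. All numbers are illustrative, not from the clinic.
from scipy.optimize import linprog

# Decision variables x = [CNAs, LPNs, RNs]; weekly cost per nurse type
costs = [600.0, 900.0, 1400.0]

# linprog uses A_ub @ x <= b_ub, so each ">= demand" row is negated:
A_ub = [
    [-30.0, -30.0, -30.0],   # total floor hours: 30 h/nurse, demand >= 330 h
    [  0.0, -10.0, -25.0],   # skilled-task hours: demand >= 120 h
]
b_ub = [-330.0, -120.0]

# Roster bounds: up to 5 CNAs, 3 LPNs, 5 RNs
res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 5), (0, 3), (0, 5)])
# res.x is the continuous LP relaxation; a real roster would round up
# or use integer programming to get whole nurses.
```

    Re-solving with one nurse added to or removed from a bound is exactly the "stress" sensitivity analysis the report describes.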

  6. Beside the Geriatric Depression Scale: the WHO-Five Well-being Index as a valid screening tool for depression in nursing homes.

    PubMed

    Allgaier, Antje-Kathrin; Kramer, Dietmar; Saravo, Barbara; Mergl, Roland; Fejtkova, Sabina; Hegerl, Ulrich

    2013-11-01

    The aim of the study was to compare criterion validities of the WHO-Five Well-being Index (WHO-5) and the Geriatric Depression Scale 15-item version (GDS-15) and 4-item version (GDS-4) as screening instruments for depression in nursing home residents. Data from 92 residents aged 65-97 years without severe cognitive impairment (Mini Mental State Examination ≥15) were analysed. Criterion validities of the WHO-5, the GDS-15 and the GDS-4 were assessed against diagnoses of major and minor depression provided by the Structured Clinical Interview for DSM-IV. Subanalyses were performed for major and minor depression. Areas under the receiver operating characteristic curve (AUCs) as well as sensitivities and specificities at optimal cut-off points were computed. Prevalence of depressive disorder was 28.3%. The AUC value of the WHO-5 (0.90) was similar to that of the GDS-15 (0.82). Sensitivity of the WHO-5 (0.92) at its optimal cut-off of ≤12 was significantly higher than that of the GDS-15 (0.69) at its optimal cut-off of ≥7. The WHO-5 was equally sensitive for the subgroups of major and minor depression (0.92), whereas the GDS-15 was sensitive only for major depression (0.85), but not for minor depression (0.54). For specificity, there was no significant difference between the WHO-5 (0.79) and the GDS-15 (0.88), but both instruments outperformed the GDS-4 (0.53). The WHO-5 demonstrated high sensitivity for major and minor depression. Being shorter than the GDS-15 and superior to the GDS-4, the WHO-5 is a promising screening tool that could help physicians improve low recognition rates of depression in nursing home residents. Copyright © 2013 John Wiley & Sons, Ltd.

  7. Optimization of a stand-alone Solar PV-Wind-DG Hybrid System for Distributed Power Generation at Sagar Island

    NASA Astrophysics Data System (ADS)

    Roy, P. C.; Majumder, A.; Chakraborty, N.

    2010-10-01

    An estimate of a stand-alone solar PV and wind hybrid system for distributed power generation has been made based on the resources available at Sagar Island, a remote area distant from grid operation. Optimization and sensitivity analyses have been performed to evaluate the feasibility and size of the power generation unit. The different modes of the hybrid system have been compared. It is estimated that the Solar PV-Wind-DG hybrid system provides a lower per-unit electricity cost. Capital investment is observed to be lower when the system runs with Wind-DG compared with Solar PV-DG.

  8. Concept design of a disaster response unmanned aerial vehicle for India

    NASA Astrophysics Data System (ADS)

    Vashi, Y.; Jai, U.; Atluri, R.; Sunjii, M.; Kashyap, Y.; Ashok, V.; Khilari, S.; Jain, K.; Aravind Raj, S.

    2017-12-01

    The Indian sub-continent experiences frequent flooding, earthquakes and landslides. In times of peril, live surveillance of the disaster zone helps disaster agencies locate and aid the affected people. For this reason, development of a micro unmanned aerial vehicle (UAV) can be an optimal solution. This article provides a conceptualization of a UAV model that meets the needs of the country. A comparison of different aircraft components and their optimization and sensitivity analyses are presented. Finally, this research produces a preliminary design of a UAV that can accomplish surveillance and payload-dropping missions in disaster-affected areas.

  9. Overview and application of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) toolbox

    USDA-ARS?s Scientific Manuscript database

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  10. [Multiplex real-time PCR method for rapid detection of Marburg virus and Ebola virus].

    PubMed

    Yang, Yu; Bai, Lin; Hu, Kong-Xin; Yang, Zhi-Hong; Hu, Jian-Ping; Wang, Jing

    2012-08-01

    Marburg virus and Ebola virus cause acute infections with high case fatality rates. A rapid, sensitive method was established to detect Marburg virus and Ebola virus by multiplex real-time fluorescence quantitative PCR. Primers and TaqMan probes were designed from highly conserved sequences of Marburg virus and Ebola virus identified through whole-genome sequence alignment, with the probes labeled with FAM and Texas Red, and the sensitivity of the multiplex real-time quantitative PCR assay was optimized by evaluating different concentrations of primers and probes. The developed real-time PCR method has a sensitivity of 30.5 copies/microl for the Marburg virus positive plasmid and 28.6 copies/microl for the Ebola virus positive plasmid; Japanese encephalitis virus, Yellow fever virus, and Dengue virus were used to examine the specificity. The multiplex real-time PCR assay provides a sensitive, reliable and efficient method to detect Marburg virus and Ebola virus simultaneously.
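    Copy-number sensitivities like those quoted above are typically read off a plasmid standard curve of Ct versus log10(copies). A minimal sketch with hypothetical Ct values shows how the curve's slope, the amplification efficiency, and a copy-number estimate are derived:

```python
# Hypothetical 10-fold plasmid dilution series for a qPCR standard curve.
import math

log10_copies = [6, 5, 4, 3, 2]
ct_values    = [15.1, 18.4, 21.8, 25.1, 28.5]   # ideal slope is about -3.32

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = fit_line(log10_copies, ct_values)

# Amplification efficiency: 1.0 means perfect doubling each cycle
efficiency = 10 ** (-1.0 / slope) - 1.0

def copies_from_ct(ct):
    """Invert the standard curve to estimate template copies."""
    return 10 ** ((ct - intercept) / slope)
```

    The detection limit is then the lowest dilution still amplified reliably across replicates.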

  11. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
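    The sensitivity-gated update can be sketched in a few lines. The map keys, gain, and threshold values below are hypothetical, not taken from the patent text:

```python
# Minimal sketch of the gating logic: nudge a map cell toward a performance
# target only when the measured sensitivity exceeds a threshold.

def adjust_map(engine_map, cell, perf, target, sensitivity, threshold, gain=0.5):
    """Return an updated copy of one engine map look-up table.

    The stored control parameter in `cell` is corrected so the detected
    performance variable `perf` approaches `target`, but only when the
    sensitivity d(perf)/d(parameter) exceeds `threshold`.
    """
    updated = dict(engine_map)
    if abs(sensitivity) > threshold:
        updated[cell] += gain * (target - perf) / sensitivity
    return updated

# Hypothetical spark-timing map keyed by (engine speed, load) bins
spark_map = {(2000, 0.5): 12.0}
tuned = adjust_map(spark_map, (2000, 0.5), perf=0.30, target=0.34,
                   sensitivity=0.02, threshold=0.01)
ignored = adjust_map(spark_map, (2000, 0.5), perf=0.30, target=0.34,
                     sensitivity=0.005, threshold=0.01)  # below threshold
```

    With two maps and two sensitivities, running this gate per map reproduces the claim's behavior: only the table whose parameter actually moves the performance variable gets rewritten.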

  12. [Primary cervical cancer screening].

    PubMed

    Vargas-Hernández, Víctor Manuel; Vargas-Aguilar, Víctor Manuel; Tovar-Rodríguez, José María

    2015-01-01

    Cervico-uterine cancer screening with cytology decreases incidence by more than 50%. The cause of this cancer is high-risk human papillomavirus, and screening requires a test that provides sufficient sensitivity and specificity for early detection and allows a longer screening interval when results are negative. The high-risk human papillomavirus test is effective and safe because of its excellent sensitivity, negative predictive value and optimal reproducibility, especially when combined with liquid-based cytology or biomarkers of viral load, which increase sensitivity and specificity by reducing false positives in the detection of cervical intraepithelial neoplasia grade 2 or greater. With excellent clinical benefits for cervical cancer screening and for diseases related to human papillomavirus infection, it is currently the best test for early detection of human papillomavirus infection and of the risk of carcinogenesis. Copyright © 2015 Academia Mexicana de Cirugía A.C. Published by Masson Doyma México S.A. All rights reserved.

  13. Biophysics of Euglena phototaxis

    NASA Astrophysics Data System (ADS)

    Tsang, Alan Cheng Hou; Riedel-Kruse, Ingmar H.

    Phototactic microorganisms usually respond to light stimuli via phototaxis to optimize photosynthesis and avoid photodamage from excessive light. A unicellular phototactic microorganism such as Euglena gracilis possesses only a single photoreceptor, which greatly limits its ability to sample light in a three-dimensional world. Experiments nevertheless demonstrate that Euglena responds to light stimuli sensitively and exhibits phototaxis quickly, and it is not well understood how it performs so efficiently. We propose a mathematical model of Euglena phototaxis that couples the dynamics of the cell and its phototactic response. The model shows that Euglena follows a wobbling path under weak ambient light, consistent with experimental observation. We show that this wobbling motion can enhance the sensitivity of the photoreceptor to signals of small light intensity and provide an efficient mechanism for Euglena to sample light in different directions. We further investigate the optimization of Euglena's phototaxis using different performance metrics, including reorientation time, energy consumption, and swimming efficiency, and we characterize the tradeoffs among these metrics and the best strategy for phototaxis.

  14. Evaluating the effectiveness of various biochars as porous media for biodiesel synthesis via pseudo-catalytic transesterification.

    PubMed

    Lee, Jechan; Jung, Jong-Min; Oh, Jeong-Ik; Ok, Yong Sik; Lee, Sang-Ryong; Kwon, Eilhann E

    2017-05-01

    This study focuses on investigating the optimized chemical composition of biochar used as a porous material for biodiesel synthesis via pseudo-catalytic transesterification. To this end, six biochars from different sources were prepared, and the biodiesel yield obtained from pseudo-catalytic transesterification of waste cooking oil using each biochar was measured. Biodiesel yield and the optimal reaction temperature for pseudo-catalytic transesterification were strongly dependent on the raw material of the biochar. For example, biochar generated from maize residue exhibited the best performance, with a yield reaching ∼90% at 300°C, whereas the maximum biodiesel yield with pine cone biochar was 43% at 380°C. The maximum achievable yield of biodiesel was sensitive to the lignin content of the biomass source of the biochar but not to its cellulose and hemicellulose content. This study provides insight into screening the most effective biochar as a pseudo-catalytic porous material, thereby helping to develop a more sustainable and economically viable biodiesel synthesis process. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Using of multi-walled carbon nanotubes electrode for adsorptive stripping voltammetric determination of ultratrace levels of RDX explosive in the environmental samples.

    PubMed

    Rezaei, Behzad; Damiri, Sajjad

    2010-11-15

    A study of the electrochemical behavior and determination of RDX, a high explosive, on a multi-walled carbon nanotube (MWCNT)-modified glassy carbon electrode (GCE) is described, using adsorptive stripping voltammetry and electrochemical impedance spectroscopy (EIS). The results indicated that the MWCNT electrode remarkably enhances the sensitivity of the voltammetric method and allows measurement of this explosive down to the sub-mg/l level over a wide pH range. The operational parameters were optimized, and a sensitive, simple and time-saving cyclic voltammetric procedure was developed for the analysis of RDX in ground and tap water samples. Under optimized conditions, the reduction peak has two linear dynamic ranges, 0.6-20.0 and 8.0-200.0 mM, with a detection limit of 25.0 nM and a precision of <4% (RSD for 8 analyses). Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Impact of uncertain head tissue conductivity in the optimization of transcranial direct current stimulation for an auditory target

    NASA Astrophysics Data System (ADS)

    Schmidt, Christian; Wagner, Sven; Burger, Martin; van Rienen, Ursula; Wolters, Carsten H.

    2015-08-01

    Objective. Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique to modify neural excitability. Using multi-array tDCS, we investigate the influence of inter-individually varying head tissue conductivity profiles on optimal electrode configurations for an auditory cortex stimulation. Approach. In order to quantify the uncertainty of the optimal electrode configurations, multi-variate generalized polynomial chaos expansions of the model solutions are used based on uncertain conductivity profiles of the compartments skin, skull, gray matter, and white matter. Stochastic measures, probability density functions, and sensitivity of the quantities of interest are investigated for each electrode and the current density at the target with the resulting stimulation protocols visualized on the head surface. Main results. We demonstrate that the optimized stimulation protocols are only comprised of a few active electrodes, with tolerable deviations in the stimulation amplitude of the anode. However, large deviations in the order of the uncertainty in the conductivity profiles could be noted in the stimulation protocol of the compensating cathodes. Regarding these main stimulation electrodes, the stimulation protocol was most sensitive to uncertainty in skull conductivity. Finally, the probability that the current density amplitude in the auditory cortex target region is supra-threshold was below 50%. Significance. The results suggest that an uncertain conductivity profile in computational models of tDCS can have a substantial influence on the prediction of optimal stimulation protocols for stimulation of the auditory cortex. The investigations carried out in this study present a possibility to predict the probability of providing a therapeutic effect with an optimized electrode system for future auditory clinical and experimental procedures of tDCS applications.

  17. Parameter sensitivity analysis and optimization for a satellite-based evapotranspiration model across multiple sites using Moderate Resolution Imaging Spectroradiometer and flux data

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li

    2017-01-01

    Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale using 44 flux towers. These towers are distributed across a wide range of ecosystems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically based formulations and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that it consistently overestimates evapotranspiration in arid regions, likely because of the misrepresentation of water limitation and energy partitioning in the model. By analyzing the physiological processes and determining the sensitive parameters, we identified a series of parameter sets that increase model performance. The model with optimized parameters performed better (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14-0.85). The optimization indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of the parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value. Also, the optimized parameter Topt correlated well with the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model provides an efficient way to improve model performance.
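    The Nash-Sutcliffe efficiency used above to score the calibration is simple to compute; here is a minimal sketch with made-up daily ET series (mm/day), not flux-tower data:

```python
# NSE = 1 means a perfect match; NSE < 0 means the model is worse than
# simply predicting the observed mean (cf. the original model's -12.14).

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed series."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs      = [1.2, 2.5, 3.1, 4.0, 2.2, 1.8]
sim_good = [1.4, 2.4, 3.0, 3.7, 2.5, 1.6]   # close tracking
sim_bias = [3.2, 4.5, 5.1, 6.0, 4.2, 3.8]   # systematic overestimation
```

    R2 can be computed analogously from the squared correlation; the contrast between `sim_good` and `sim_bias` mirrors the arid-region overestimation the study reports.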

  18. Sensitive and Selective NH₃ Monitoring at Room Temperature Using ZnO Ceramic Nanofibers Decorated with Poly(styrene sulfonate).

    PubMed

    Andre, Rafaela S; Kwak, Dongwook; Dong, Qiuchen; Zhong, Wei; Correa, Daniel S; Mattoso, Luiz H C; Lei, Yu

    2018-04-01

    Ammonia (NH₃) gas is a prominent air pollutant that is frequently found in industrial and livestock production environments. Due to its importance in controlling pollution and protecting public health, the development of new platforms for sensing NH₃ at room temperature has attracted great attention. In this study, a sensitive NH₃ gas sensor with enhanced selectivity is developed based on zinc oxide nanofibers (ZnO NFs) decorated with poly(styrene sulfonate) (PSS) and operated at room temperature. ZnO NFs were prepared by electrospinning followed by calcination at 500 °C for 3 h. The electrospun ZnO NFs are characterized to evaluate the properties of the as-prepared sensing materials. The loading of PSS used to prepare the ZnO NFs/PSS composite is also optimized based on the best sensing performance. Under the optimal composition, ZnO NFs/PSS displays a rapid, reversible, and sensitive response upon NH₃ exposure at room temperature. The device shows a dynamic linear range up to 100 ppm, a limit of detection of 3.22 ppm, and enhanced selectivity toward NH₃ in synthetic air, against NO₂ and CO, compared to pure ZnO NFs. Additionally, a sensing mechanism is proposed to explain the sensing performance of the ZnO NFs/PSS composite. This study therefore provides a simple methodology for designing a sensitive platform for NH₃ monitoring at room temperature.

  19. A practical and highly sensitive C3N4-TYR fluorescent probe for convenient detection of dopamine

    NASA Astrophysics Data System (ADS)

    Li, Hao; Yang, Manman; Liu, Juan; Zhang, Yalin; Yang, Yanmei; Huang, Hui; Liu, Yang; Kang, Zhenhui

    2015-07-01

    The C3N4-tyrosinase (TYR) hybrid is a highly accurate, sensitive and simple fluorescent probe for the detection of dopamine (DOPA). Under optimized conditions, the relative fluorescence intensity of C3N4-TYR is proportional to the DOPA concentration in the range from 1 × 10-3 to 3 × 10-8 mol L-1 with a correlation coefficient of 0.995. In the present system, the detection limit achieved is as low as 3 × 10-8 mol L-1. Notably, these quantitative detection results for clinical samples are comparable to those of high performance liquid chromatography. Moreover, the enzyme-encapsulated C3N4 sensing arrays on both glass slide and test paper were evaluated, which revealed sensitive detection and excellent stability. The results reported here provide a new approach for the design of a multifunctional nanosensor for the detection of bio-molecules.
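    As a hedged aside, detection limits of this kind are commonly estimated from a calibration fit as three times the blank standard deviation divided by the slope. The concentrations and readings below are invented for illustration, not the paper's data:

```python
# Toy calibration: fluorescence change vs. DOPA concentration (hypothetical),
# with the detection limit taken as 3 * sigma_blank / slope.
conc   = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3]      # mol/L
signal = [0.012, 0.105, 1.02, 10.1, 99.5]    # relative intensity change

# Ordinary least-squares slope through the calibration points
n = len(conc)
mx, my = sum(conc) / n, sum(signal) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
         / sum((x - mx) ** 2 for x in conc))

blank_sd = 1.0e-3        # std dev of repeated blank readings (assumed)
lod = 3 * blank_sd / slope   # mol/L
```

    With these invented numbers the estimate lands near 3 × 10⁻⁸ mol/L, the same order as the limit the abstract reports.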

  20. A Computational Framework for Quantifying and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Cioaca, Alexandru

    A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided.
The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
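    A toy scalar analogue of the variational blending at the heart of 4D-Var (all numbers hypothetical, with no dynamics or time window) illustrates how the analysis weighs background against observation, and why a notion of observation impact falls out of the same calculation:

```python
# Minimizing J(x) = (x - xb)^2 / (2 sb^2) + (y - x)^2 / (2 so^2)
# gives the precision-weighted mean of background xb and observation y.

def analysis(xb, sigma_b, y, sigma_o):
    """Analytic minimizer of the scalar variational cost, plus a crude
    'observation impact': the fraction the analysis moves toward y."""
    wb, wo = 1.0 / sigma_b ** 2, 1.0 / sigma_o ** 2
    xa = (wb * xb + wo * y) / (wb + wo)
    impact = (xa - xb) / (y - xb) if y != xb else 0.0
    return xa, impact

xa, impact = analysis(xb=10.0, sigma_b=2.0, y=14.0, sigma_o=2.0)
# equal error variances: the analysis sits halfway (xa = 12.0, impact = 0.5)
```

    In full 4D-Var the same weighting is carried out implicitly over model trajectories, and the adjoint-based sensitivity equations generalize this scalar "impact" to thousands of observations at once.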

  1. Sensitivity regularization of the Cramér-Rao lower bound to minimize B1 nonuniformity effects in quantitative magnetization transfer imaging.

    PubMed

    Boudreau, Mathieu; Pike, G Bruce

    2018-05-07

    To develop and validate a regularization approach for optimizing the B1 insensitivity of the quantitative magnetization transfer (qMT) pool-size ratio (F). An expression describing the impact of B1 inaccuracies on qMT fitting parameters was derived using a sensitivity analysis. To simultaneously optimize for robustness against noise and B1 inaccuracies, the optimization condition was defined as the Cramér-Rao lower bound (CRLB) regularized by the B1-sensitivity expression for the parameter of interest (F). The qMT protocols were iteratively optimized from an initial search space, with and without B1 regularization. Three 10-point qMT protocols (Uniform, CRLB, CRLB+B1 regularization) were compared using Monte Carlo simulations for a wide range of conditions (e.g., SNR, B1 inaccuracies, tissues). The B1-regularized CRLB optimization protocol resulted in the best robustness of F against B1 errors, for a wide range of SNRs and for both white matter and gray matter tissues. For SNR = 100, this protocol resulted in errors of less than 1% in mean F values for B1 errors ranging between -10 and 20%, the range of B1 values typically observed in vivo in the human head at field strengths of 3 T and less. Both CRLB-optimized protocols resulted in the lowest σF values for all SNRs, and these did not increase in the presence of B1 inaccuracies. This work demonstrates a regularized optimization approach for reducing the sensitivity of qMT parameters, particularly the pool-size ratio (F), to auxiliary measurements (e.g., B1). Because protocols optimized with this method are predicted to have substantially less B1 sensitivity, B1 mapping could even be omitted for qMT studies primarily interested in F. © 2018 International Society for Magnetic Resonance in Medicine.
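    The CRLB machinery itself is easy to illustrate on a toy problem. The sketch below uses a generic mono-exponential decay, not the actual qMT signal equations: the variance of any unbiased estimate of a model parameter is bounded by the inverse Fisher information of the chosen sampling scheme, which is why the sampling points can be optimized against it:

```python
# Toy CRLB: bound on the variance of the decay rate theta in
# y(t) = exp(-theta * t) + Gaussian noise of std sigma.
import math

def crlb(times, theta, sigma, eps=1e-6):
    """Inverse Fisher information for theta under the sampling scheme."""
    fisher = 0.0
    for t in times:
        # central-difference sensitivity of the model to theta
        d = (math.exp(-(theta + eps) * t)
             - math.exp(-(theta - eps) * t)) / (2 * eps)
        fisher += d * d / sigma ** 2
    return 1.0 / fisher

# Sampling near t ~ 1/theta is informative; sampling deep in the decay is not
good_design = crlb([0.5, 1.0, 1.5, 2.0], theta=1.0, sigma=0.05)
poor_design = crlb([8.0, 9.0, 10.0, 11.0], theta=1.0, sigma=0.05)
```

    In the paper's setting the same bound is evaluated for F over candidate qMT protocols, with an added penalty term for B1 sensitivity.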

  2. Optimization benefits analysis in production process of fabrication components

    NASA Astrophysics Data System (ADS)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal combination of products is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is optimizing the combination of fabrication component products (known as liner plates), which influences the profit obtained by the company. A liner plate is a fabrication component that serves as a protector of the core structure of heavy-duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. The graph of liner plate sales from January to December 2016 fluctuated, and no direct conclusion could be drawn about the optimization of production of these fabrication components. The optimal product combination can be achieved by calculating and plotting the amounts of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal mix of fabrication components. At the optimal combination, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with production of a combined 71 units per variant per month.
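    The primal/dual sensitivity idea the study applies can be sketched with a made-up two-product mix (not PT. UTPE's actual liner-plate data): solve the profit-maximizing primal, then estimate a constraint's shadow price, the dual value, by re-solving with one extra unit of that resource:

```python
# Hypothetical product-mix LP: maximize 30*x1 + 50*x2 subject to machine-
# and welding-hour limits; the shadow price of machine hours is the profit
# gained per additional hour.
from scipy.optimize import linprog

def max_profit(machine_hours):
    c = [-30.0, -50.0]            # negate: linprog minimizes
    A_ub = [[1.0, 2.0],           # machine hours per unit of each product
            [3.0, 1.0]]           # welding hours per unit
    b_ub = [machine_hours, 90.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    return -res.fun

base = max_profit(40.0)
shadow_price = max_profit(41.0) - base   # marginal value of a machine hour
```

    A sensitivity analysis then reports over what range of machine hours this shadow price stays valid, which is the kind of output QM for Windows produces.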

  3. High-sensitivity operation of single-beam optically pumped magnetometer in a kHz frequency range

    DOE PAGES

    Savukov, Igor Mykhaylovich; Kim, Y. J.; Shah, V.; ...

    2017-02-02

    Here, optically pumped magnetometers (OPMs) can be used in various applications, from magnetoencephalography to magnetic resonance imaging and nuclear quadrupole resonance (NQR). OPMs provide high sensitivity and have the significant advantage of non-cryogenic operation. To date, many magnetometers have been demonstrated with sensitivity close to 1 fT, but most devices are not commercialized. Most recently, QuSpin developed a low-cost, high-sensitivity, user-friendly OPM that operates in a single-beam configuration. Here we developed a theory of single-beam (or parallel two-beam) magnetometers and showed that it is possible to achieve good sensitivity beyond their usual frequency range by tuning the magnetic field. Experimentally, we tested and optimized a QuSpin OPM for operation in the frequency range from DC to 1.7 kHz and found that the performance was only slightly inferior, despite the decrease expected from deviation from the spin-exchange relaxation-free regime.

  4. High-sensitivity operation of single-beam optically pumped magnetometer in a kHz frequency range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savukov, Igor Mykhaylovich; Kim, Y. J.; Shah, V.

    Here, optically pumped magnetometers (OPMs) can be used in various applications, from magnetoencephalography to magnetic resonance imaging and nuclear quadrupole resonance (NQR). OPMs provide high sensitivity and have the significant advantage of non-cryogenic operation. To date, many magnetometers have been demonstrated with sensitivity close to 1 fT, but most devices are not commercialized. Most recently, QuSpin developed a low-cost, high-sensitivity, user-friendly OPM that operates in a single-beam configuration. Here we developed a theory of single-beam (or parallel two-beam) magnetometers and showed that it is possible to achieve good sensitivity beyond their usual frequency range by tuning the magnetic field. Experimentally, we tested and optimized a QuSpin OPM for operation in the frequency range from DC to 1.7 kHz and found that the performance was only slightly inferior, despite the decrease expected from deviation from the spin-exchange relaxation-free regime.

  5. Efficient Simulation of Wing Modal Response: Application of 2nd Order Shape Sensitivities and Neural Networks

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Liu, Youhua

    2000-01-01

    At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to 2nd order, and the direct application of neural networks, are explored. The example problem is to determine the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach, provided an appropriate step size is used. The use of second-order sensitivities is shown to yield much better results than using first-order sensitivities alone. When neural networks are trained to relate the wing natural frequencies to the shape variables, a negligible computational effort is needed to accurately determine the natural frequencies of a new design.
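    The first- and second-order finite-difference sensitivities and the resulting Taylor approximations can be sketched as follows; the scalar response function stands in for a wing natural frequency and is purely illustrative.

```python
import numpy as np

# Toy response (NOT the wing model): approximate a response f near a baseline
# design v0 using first- and second-order sensitivities from central finite
# differences, then compare 1st- and 2nd-order Taylor estimates at v0 + dv.
def f(v):
    # stand-in "natural frequency" as a function of one shape variable
    return np.sqrt(1.0 + 0.5 * v + 0.1 * v**2)

def fd_sensitivities(f, v0, h=1e-3):
    df = (f(v0 + h) - f(v0 - h)) / (2 * h)            # first-order sensitivity
    d2f = (f(v0 + h) - 2 * f(v0) + f(v0 - h)) / h**2  # second-order sensitivity
    return df, d2f

v0, dv = 1.0, 0.3
df, d2f = fd_sensitivities(f, v0)
taylor1 = f(v0) + df * dv                  # 1st-order estimate
taylor2 = taylor1 + 0.5 * d2f * dv**2      # 2nd-order estimate
```

    For this smooth response the second-order estimate tracks the true value noticeably better than the first-order one, mirroring the paper's finding.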

  6. Optimization of the Photoanode of CdS Quantum Dot-Sensitized Solar Cells Using Light-Scattering TiO2 Hollow Spheres

    NASA Astrophysics Data System (ADS)

    Marandi, Maziar; Rahmani, Elham; Ahangarani Farahani, Farzaneh

    2017-12-01

    CdS quantum dot-sensitized solar cells (QDSCs) have been fabricated and their photoanode optimized by altering the thickness of the photoelectrode and CdS deposition conditions and applying a ZnS electron-blocking layer and TiO2 hollow spheres. Hydrothermally grown TiO2 nanocrystals (NCs) with dominant size of 20 nm were deposited as a sublayer in the photoanode with thickness in the range from 5 μm to 10 μm using a successive ionic layer adsorption and reaction (SILAR) method. The number of deposition cycles was altered over a wide range to obtain optimized sensitization. Photoanode thickness and number of CdS sensitization cycles around the optimum values were selected and used for ZnS deposition. ZnS overlayers were also deposited on the surface of the photoanodes using different numbers of cycles of the SILAR process. The best QDSC with the optimized photoelectrode demonstrated a 153% increase in efficiency compared with a similar cell with ZnS-free photoanode. Such bilayer photoelectrodes were also fabricated with different thicknesses of TiO2 sublayers and one overlayer of TiO2 hollow spheres (HSs) with external diameter of 500 nm fabricated by liquid-phase deposition with carbon spheres as template. The optimization was performed by changing the photoanode thickness using a wide range of CdS sensitizing cycles. The maximum energy conversion efficiency was increased by about 77% compared with a similar cell with HS-free photoelectrode. The reason was considered to be the longer path length of the incident light inside the photoanode and greater light absorption. A ZnS blocking layer was overcoated on the surface of the bilayer photoanode with optimized thickness. The number of CdS sensitization cycles was also changed around the optimized value to obtain the best QDSC performance. The number of ZnS deposition cycles was also altered in a wide range for optimization of the photovoltaic performance. 
It was shown that the maximum efficiency was increased by about 55% compared with a similar QDSC with ZnS-free bilayer photoanode. The final improvement was carried out by applying methanol-based Cd precursor solution in the SILAR deposition process. The best photoanodes from the previous stages were selected and used in this sensitizing process. Besides, nanocrystalline TiO2 sublayers with different thicknesses were applied for further optimization. The results revealed that maximum power conversion efficiency of 3.7% was achieved as a result of such improvement, for a QDSC with optimized double-layer photoanode including TiO2 HSs and NCs and ZnS blocking layer.

  7. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  8. Multidisciplinary optimization of controlled space structures with global sensitivity equations

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.

    1991-01-01

    A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.

  9. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
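    The cost argument, that one extra adjoint solve yields sensitivities with respect to all design parameters, can be demonstrated on a toy linear system (not a CFD solver):

```python
import numpy as np

# Minimal adjoint sketch: output g(p) = c.T @ u with state equation K @ u = f(p).
# One adjoint solve K.T @ lam = c gives ALL sensitivities dg/dp_i = lam @ df/dp_i,
# versus one extra state solve per parameter for finite differences.
rng = np.random.default_rng(0)
n, m = 5, 3                                    # state size, design parameters
K = rng.normal(size=(n, n)) + 5 * np.eye(n)    # well-conditioned "stiffness"
B = rng.normal(size=(n, m))                    # f(p) = B @ p, so df/dp = B
c = rng.normal(size=n)

def g(p):
    return c @ np.linalg.solve(K, B @ p)

p0 = rng.normal(size=m)
lam = np.linalg.solve(K.T, c)     # the single adjoint solve
grad_adjoint = B.T @ lam          # all m sensitivities at once

# Check against central finite differences (costs 2*m state solves).
h = 1e-6
grad_fd = np.array([(g(p0 + h * e) - g(p0 - h * e)) / (2 * h)
                    for e in np.eye(m)])
```

    The adjoint cost is independent of `m`, which is exactly why the approach scales to CFD design problems with thousands of shape parameters.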

  10. Optimal dynamic pricing for deteriorating items with reference-price effects

    NASA Astrophysics Data System (ADS)

    Xue, Musen; Tang, Wansheng; Zhang, Jianxiong

    2016-07-01

    In this paper, a dynamic pricing problem for deteriorating items with the consumers' reference-price effect is studied. An optimal control model is established to maximise the total profit, where the demand not only depends on the current price but is also sensitive to the historical price. The continuous-time dynamic optimal pricing strategy with the reference-price effect is obtained by solving the optimal control model on the basis of Pontryagin's maximum principle. In addition, numerical simulations and a sensitivity analysis are carried out. Finally, some managerial suggestions that a firm may adopt to formulate its pricing policy are proposed.

  11. Assessing the weighted multi-objective adaptive surrogate model optimization to derive large-scale reservoir operating rules with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao

    2017-01-01

    The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules with the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search process with WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII and WMO-ASMO is conducted on the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance; and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, with computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method proves more efficient and provides a better Pareto frontier.

  12. Pricing policy for declining demand using item preservation technology.

    PubMed

    Khedlekar, Uttam Kumar; Shukla, Diwakar; Namdeo, Anubhav

    2016-01-01

    We have designed an inventory model for seasonal products in which deterioration can be controlled by investment in item preservation technology. Demand for the product is considered price-sensitive and decreases linearly. This study has shown that the profit is a concave function of the selling price, replenishment time, and preservation cost parameter. We simultaneously determined the optimal selling price of the product, the replenishment cycle, and the cost of item preservation technology. Additionally, this study has shown that there exist an optimal selling price and an optimal preservation investment that maximize the profit for every business set-up. Finally, the model is illustrated by numerical examples and a sensitivity analysis of the optimal solution with respect to the major parameters.

  13. General shape optimization capability

    NASA Technical Reports Server (NTRS)

    Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson

    1991-01-01

    A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.

  14. Optimization of proximity ligation assay (PLA) for detection of protein interactions and fusion proteins in non-adherent cells: application to pre-B lymphocytes.

    PubMed

    Debaize, Lydie; Jakobczyk, Hélène; Rio, Anne-Gaëlle; Gandemer, Virginie; Troadec, Marie-Bérengère

    2017-01-01

    Genetic abnormalities, including chromosomal translocations, are described for many hematological malignancies. From the clinical perspective, detection of chromosomal abnormalities is relevant not only for diagnostic and treatment purposes but also for prognostic risk assessment. From the translational research perspective, the identification of fusion proteins and protein interactions has allowed crucial breakthroughs in understanding the pathogenesis of malignancies and, consequently, major achievements in targeted therapy. We describe the optimization of the Proximity Ligation Assay (PLA) to ascertain the presence of fusion proteins and protein interactions in non-adherent pre-B cells. PLA is an innovative method for detecting protein-protein colocalization that combines the advantages of microscopy with the precision of molecular biology, enabling detection of protein proximity theoretically ranging from 0 to 40 nm. We propose an optimized PLA procedure. We overcome the issue of maintaining non-adherent hematological cells by traditional cytocentrifugation and optimized buffers, by changing incubation times, and by modifying washing steps. Further, we provide convincing negative and positive controls, and demonstrate that the optimized PLA procedure is sensitive to total protein level. The optimized PLA procedure allows the detection of fusion proteins and protein interactions in non-adherent cells, and can be readily applied to various non-adherent hematological cells, from cell lines to patients' cells. The optimized PLA protocol enables detection of fusion proteins and their subcellular expression, and protein interactions in non-adherent cells. Therefore, it provides a new tool that can be adopted in a wide range of applications in the biological field.

  15. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs

    PubMed Central

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Cai, Zhiping; Wang, Tian

    2018-01-01

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs as a way to improve reliability over error-prone and unreliable wireless communication links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss- and delay-sensitive WSNs. The main contribution of the COOR scheme is to make full use of the energy remaining in the network to increase the transmission power of most nodes, which provides higher communication reliability or a longer transmission distance. Two optimization strategies of the COOR scheme, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When increasing the transmission power, the COOR(R) strategy chooses a next-hop candidate node that has higher communication reliability at the same distance than under traditional opportunistic routing. Since the reliability of data transmission is improved, the delay for data to reach the sink is reduced by shortening the communication time between candidate nodes. The COOR(P) strategy, on the other hand, prefers a node that has the same communication reliability at a longer distance. 
As a result, network performance can be improved for the following reasons: (a) the delay is reduced, since fewer hops are needed for a packet to reach the sink when transmission distances are longer; (b) the reliability can be improved, since it is the product of the reliability of every hop along the routing path, and the hop count is reduced while the reliability of each hop stays the same as in the traditional method. After analyzing the energy consumption of the network in detail, the optimized transmission power values for different areas are given. On the basis of extensive experimental and theoretical analysis, the results show that the COOR scheme increases communication reliability by 36.62-87.77%, decreases delay by 21.09-52.48%, and balances the energy consumption of 86.97% of the nodes in the WSN. PMID:29751589
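The reliability argument in (b), end-to-end reliability as the product of per-hop reliabilities, reduces to a one-line calculation; the numbers below are illustrative, not from the paper:

```python
# Illustrative numbers (not from the paper): end-to-end path reliability is the
# product of per-hop reliabilities, so at equal per-hop reliability a route with
# fewer, longer hops is more reliable, which is the COOR(P) rationale.
def path_reliability(per_hop, hops):
    return per_hop ** hops

many_short_hops = path_reliability(0.95, 8)   # traditional OR: more hops
fewer_long_hops = path_reliability(0.95, 5)   # COOR(P): longer transmissions
```

Fewer hops also means fewer candidate-coordination rounds, which is the delay argument in (a).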

  16. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs.

    PubMed

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Liu, Anfeng; Xiong, Neal N; Cai, Zhiping; Wang, Tian

    2018-05-03

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs as a way to improve reliability over error-prone and unreliable wireless communication links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss- and delay-sensitive WSNs. The main contribution of the COOR scheme is to make full use of the energy remaining in the network to increase the transmission power of most nodes, which provides higher communication reliability or a longer transmission distance. Two optimization strategies of the COOR scheme, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When increasing the transmission power, the COOR(R) strategy chooses a next-hop candidate node that has higher communication reliability at the same distance than under traditional opportunistic routing. Since the reliability of data transmission is improved, the delay for data to reach the sink is reduced by shortening the communication time between candidate nodes. The COOR(P) strategy, on the other hand, prefers a node that has the same communication reliability at a longer distance. 
As a result, network performance can be improved for the following reasons: (a) the delay is reduced, since fewer hops are needed for a packet to reach the sink when transmission distances are longer; (b) the reliability can be improved, since it is the product of the reliability of every hop along the routing path, and the hop count is reduced while the reliability of each hop stays the same as in the traditional method. After analyzing the energy consumption of the network in detail, the optimized transmission power values for different areas are given. On the basis of extensive experimental and theoretical analysis, the results show that the COOR scheme increases communication reliability by 36.62-87.77%, decreases delay by 21.09-52.48%, and balances the energy consumption of 86.97% of the nodes in the WSN.

  17. Sensitivity analysis and multidisciplinary optimization for aircraft design: Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and the rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. A review of the salient points of these techniques is given and illustrated by examples from aircraft design, a process that combines the best of human intellect and computer power to manipulate data.

  18. Quantitative real-time in vivo detection of magnetic nanoparticles by their nonlinear magnetization

    NASA Astrophysics Data System (ADS)

    Nikitin, M. P.; Torno, M.; Chen, H.; Rosengart, A.; Nikitin, P. I.

    2008-04-01

    A novel method for highly sensitive quantitative detection of magnetic nanoparticles (MP) in biological tissues and the blood system has been realized and tested in real-time in vivo experiments. The detection method is based on the nonlinear magnetic properties of MP, and the related device can record a very small relative variation of nonlinear magnetic susceptibility, down to 10^-8 at room temperature, providing sensitivity to several nanograms of MP in a 0.1 ml volume. Real-time quantitative in vivo measurements of the dynamics of MP concentration in blood flow have been performed. A catheter that carried the blood flow of a rat passed through the measuring device. After an MP injection, the quantity of MP in the circulating blood was continuously recorded. The method has also been used to evaluate the MP distribution among a rat's organs. Its sensitivity was compared with detection of radioactive MP based on the Fe-59 isotope. The comparison of magnetic and radioactive signals in the rat's blood and organ samples demonstrated similar sensitivity for both methods. However, the proposed magnetic method is much more convenient, as it is safe, less expensive, and provides real-time measurements in vivo. Moreover, the sensitivity of the method can be further improved by optimizing the device geometry.

  19. Optimization of the imaging response of scanning microwave microscopy measurements

    NASA Astrophysics Data System (ADS)

    Sardi, G. M.; Lucibello, A.; Kasper, M.; Gramse, G.; Proietti, E.; Kienberger, F.; Marcelli, R.

    2015-07-01

    In this work, we present analytical modeling and preliminary experimental results for the choice of the optimal frequencies when performing amplitude and phase measurements with a scanning microwave microscope. In particular, the analysis is related to the reflection-mode operation of the instrument, i.e., the acquisition of the complex reflection coefficient data, usually referred to as S11. The studied configuration is composed of an atomic force microscope with a microwave-matched nanometric cantilever probe tip, connected by a λ/2 coaxial cable resonator to a vector network analyzer. The set-up is provided by Keysight Technologies. A peculiar result is that the optimal frequencies, where the maximum sensitivity is achieved, differ for the amplitude and phase signals. The analysis is focused on measurements of dielectric samples, such as semiconductor devices, textile pieces, and biological specimens.

  20. Optimization of Aerospace Structure Subject to Damage Tolerance Criteria

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.

    1999-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements, including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. The cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers. It is desirable to optimize the topology of an aerospace structure subject to a large number of damage scenarios so that a damage-tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages. A common method for topology optimization is compliance minimization, which has not been used for damage-tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. 
Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically, without actually solving the modified system. Sherman-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this. SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
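A minimal sketch of the SMW update described above, assuming a generic low-rank modification K + U V^T rather than an actual damaged stiffness matrix:

```python
import numpy as np

# SMW sketch: solve (K + U @ V.T) x = b by reusing baseline solves with K,
# as the report does for locally damaged stiffness matrices. The formula is
# (K + U V^T)^-1 = K^-1 - K^-1 U (I + V^T K^-1 U)^-1 V^T K^-1.
rng = np.random.default_rng(1)
n, k = 6, 2                                   # system size, rank of the change
K = rng.normal(size=(n, n)) + 6 * np.eye(n)   # baseline "stiffness" matrix
U = rng.normal(size=(n, k))                   # local (low-rank) modification
V = rng.normal(size=(n, k))
b = rng.normal(size=n)

Kinv_b = np.linalg.solve(K, b)        # baseline solve (factorization reusable)
Kinv_U = np.linalg.solve(K, U)        # k extra baseline solves
S = np.eye(k) + V.T @ Kinv_U          # small k-by-k capacitance matrix
x_smw = Kinv_b - Kinv_U @ np.linalg.solve(S, V.T @ Kinv_b)

x_direct = np.linalg.solve(K + U @ V.T, b)    # full re-solve, for comparison
```

Only the small k-by-k system is new work per damage scenario, which is the economy the report exploits across many scenarios.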

  1. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    PubMed

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insight into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and the forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. 
Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
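    A local sensitivity analysis of the kind described in this record can be sketched with normalized coefficients S_i = (p_i / y) (dy/dp_i); the toy expression model below is an assumption, not the paper's thermodynamic model.

```python
# Toy "expression" output (NOT the paper's thermodynamic model): one parameter
# the output strongly constrains (q) and one it barely constrains (c).
def expression(params):
    q, c = params           # e.g. a repressor efficiency q and a cooperativity c
    return q / (q + 10.0) * (1.0 + 0.01 * c)

def normalized_sensitivities(f, p0, h=1e-6):
    # S_i = (p_i / y) * dy/dp_i via a relative forward perturbation of each
    # parameter; dimensionless, so coefficients are directly comparable.
    y0 = f(p0)
    out = []
    for i, pi in enumerate(p0):
        p = list(p0)
        p[i] = pi + h * pi
        out.append((f(p) - y0) / (h * pi) * pi / y0)
    return out

s_q, s_c = normalized_sensitivities(expression, [2.0, 1.0])
```

    Here `s_q` is roughly 80 times `s_c`, so a fitted value of `c` would carry little information, which is the paper's warning about interpreting low-sensitivity parameters.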

  2. A study of remote sensing as applied to regional and small watersheds. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.

    1974-01-01

    The accuracy with which remotely sensed measurements can provide inputs to hydrologic models of watersheds is studied. A series of sensitivity analyses on continuous simulation models of three watersheds determined: (1) optimal values and permissible tolerances of inputs to achieve accurate simulation of streamflow from the watersheds; (2) which model inputs can be quantified from remote sensing, directly, indirectly, or by inference; and (3) how accurate remotely sensed measurements (from spacecraft or aircraft) must be to provide a basis for quantifying model inputs within permissible tolerances.

  3. Selecting the Parameters of the Orientation Engine for a Technological Spacecraft

    NASA Astrophysics Data System (ADS)

    Belousov, A. I.; Sedelnikov, A. V.

    2018-01-01

    This work provides a solution to the issues of providing favorable conditions for carrying out gravitationally sensitive technological processes on board a spacecraft. It is noted that an important role is played by the optimal choice of the spacecraft's orientation system and of the main parameters of the propulsion system, the most important executive organ of the system for orientation and control of the orbital motion of the spacecraft. The advantages and disadvantages of two different orientation systems are considered. One assumes the periodic impulsive firing of low-thrust liquid rocket engines; the other is based on the continuous operation of the executive elements. A conclusion is drawn on the need to take into account the composition of gravitationally sensitive processes when choosing the orientation system of the spacecraft.

  4. Highly broad-specific and sensitive enzyme-linked immunosorbent assay for screening sulfonamides: Assay optimization and application to milk samples

    USDA-ARS?s Scientific Manuscript database

    A broad-specific and sensitive immunoassay for the detection of sulfonamides was developed by optimizing the conditions of an enzyme-linked immunosorbent assay (ELISA) in regard to different monoclonal antibodies (MAbs), assay format, immunoreagents, and several physicochemical factors (pH, salt, de...

  5. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization seeks an acceptable probability of passing the demonstration (PPD) and an acceptable probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The point estimate method is used by NASA for qualifying special NDE procedures and models detection outcomes with the binomial distribution. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability of detection at 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that if probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
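
    The 90/95 character of a 29-flaw zero-miss demonstration follows directly from the binomial model mentioned above: if all n flaws are detected, the one-sided lower confidence bound on POD at confidence C is the p solving p^n = 1 − C. A minimal sketch of this standard calculation (not NASA's specific qualification procedure):

```python
# Zero-miss binomial demonstration: if all n flaws are detected, the
# one-sided lower confidence bound p_lo on POD satisfies p_lo**n = 1 - C,
# since a true POD below p_lo would make n straight detections too unlikely.
def lower_bound(n, confidence=0.95):
    return (1.0 - confidence) ** (1.0 / n)

for n in (29, 45, 59):
    print(n, round(lower_bound(n), 4))
```

    For n = 29 this gives roughly 0.90 at 95% confidence, which is why the 29-flaw set supports the α90/95 claim; 59 straight detections would support 95% POD at the same confidence.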

  6. Tools for groundwater protection planning: An example from McHenry County, Illinois, USA

    USGS Publications Warehouse

    Berg, R.C.; Curry, B. Brandon; Olshansky, R.

    1999-01-01

    This paper presents an approach for producing aquifer sensitivity maps from three-dimensional geologic maps, called stack-unit maps. Stack-unit maps depict the succession of geologic materials to a given depth, and aquifer sensitivity maps interpret the successions according to their ability to transmit potential contaminants. Using McHenry County, Illinois, as a case study, stack-unit maps and an aquifer sensitivity assessment were made to help land-use planners, public health officials, consultants, developers, and the public make informed decisions regarding land use. A map of aquifer sensitivity is important for planning because the county is one of the fastest growing counties in the nation, and highly vulnerable sand and gravel aquifers occur within 6 m of ground surface over 75% of its area. The aquifer sensitivity map can provide guidance to regulators seeking optimal protection of groundwater resources where these resources are particularly vulnerable. In addition, the map can be used to help officials direct waste-disposal and industrial facilities and other sensitive land-use practices to areas where the least damage is likely to occur, thereby reducing potential future liabilities.

  7. Application of environmental sensitivity theories in personalized prevention for youth substance abuse: a transdisciplinary translational perspective.

    PubMed

    Thibodeau, Eric L; August, Gerald J; Cicchetti, Dante; Symons, Frank J

    2016-03-01

    Preventive interventions that target high-risk youth, via one-size-fits-all approaches, have demonstrated modest effects in reducing rates of substance use. Recently, substance use researchers have recommended personalized intervention strategies. Central to these approaches is matching preventatives to characteristics of an individual that have been shown to predict outcomes. One compelling body of literature on person × environment interactions is that of environmental sensitivity theories, including differential susceptibility theory and vantage sensitivity. Recent experimental evidence has demonstrated that environmental sensitivity (ES) factors moderate substance abuse outcomes. We propose that ES factors may augment current personalization strategies such as matching based on risk factors/severity of problem behaviors (risk severity (RS)). Specifically, individuals most sensitive to environmental influence may be those most responsive to intervention in general and thus need only a brief-type or lower-intensity program to show gains, while those least sensitive may require more comprehensive or intensive programming for optimal responsiveness. We provide an example from ongoing research to illustrate how ES factors can be incorporated into prevention trials aimed at high-risk adolescents.

  8. Geometrical optimization of a local ballistic magnetic sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanda, Yuhsuke; Hara, Masahiro; Nomura, Tatsuya

    2014-04-07

    We have developed a highly sensitive local magnetic sensor by using a ballistic transport property in a two-dimensional conductor. A semiclassical simulation reveals that the sensitivity increases when the geometry of the sensor and the spatial distribution of the local field are optimized. We have also experimentally demonstrated a clear observation of a magnetization process in a permalloy dot whose size is much smaller than the size of an optimized ballistic magnetic sensor fabricated from a GaAs/AlGaAs two-dimensional electron gas.

  9. Rapid, sensitive and direct analysis of exopolysaccharides from biofilm on aluminum surfaces exposed to sea water using MALDI-TOF MS.

    PubMed

    Hasan, Nazim; Gopal, Judy; Wu, Hui-Fen

    2011-11-01

    Biofilm studies have extensive significance since their results can provide insights into the behavior of bacteria on material surfaces exposed to natural water. This is the first attempt to use matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) to detect the polysaccharides formed in a complex biofilm consisting of a mixed consortium of marine microbes. MALDI-MS has been applied to directly analyze exopolysaccharides (EPS) in the biofilm formed on aluminum surfaces exposed to seawater, and the optimal conditions for MALDI-MS analysis of biofilm EPS are described. In addition, microbiologically influenced corrosion of aluminum exposed to sea water by a marine fungus was observed, and the fungus was identified using MALDI-MS analysis of EPS. Rapid, sensitive, and direct MALDI-MS analysis would dramatically speed up biofilm studies and provide new insights, owing to its simplicity, high sensitivity, high selectivity, and high speed. This study introduces a novel, fast, sensitive, and selective platform for studying biofilms from natural water without the need for tedious culturing steps or complicated sample pretreatment procedures.

  10. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method (ESQM v5.2)

    NASA Astrophysics Data System (ADS)

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-12-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.
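
    First-order Sobol' indices of the kind evaluated above can be estimated with a generic pick-freeze Monte Carlo scheme. The toy response below is a hypothetical stand-in (stem density dominating, diameter nearly inert, echoing the abstract's ranking), not the COAWST vegetation module, and the estimator shown is the classic Sobol'/Saltelli form rather than the Effective Quadratures method:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Hypothetical linear response: density dominates, diameter barely matters.
    An illustrative stand-in, not the COAWST vegetation implementation."""
    density, height, diameter = x[:, 0], x[:, 1], x[:, 2]
    return 3.0 * density + 1.5 * height + 0.2 * diameter

n, d = 100_000, 3
A = rng.uniform(size=(n, d))   # two independent input sample matrices
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)

# Pick-freeze estimator of the first-order Sobol' index S_i:
# freeze column i from A, redraw the remaining columns from B.
S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S.append((np.mean(fA * model(ABi)) - fA.mean() * fB.mean()) / fA.var())
print([round(s, 2) for s in S])
```

    For this linear toy model the indices can be checked analytically (S_i = a_i²/Σa_j² for independent uniform inputs), so density ≈ 0.80, height ≈ 0.20, diameter ≈ 0.004; methods such as Effective Quadratures reach comparable estimates with far fewer model runs.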

  11. The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held on 24-26 Sept. 1990. Sessions were on the following topics: dynamics and controls; multilevel optimization; sensitivity analysis; aerodynamic design software systems; optimization theory; analysis and design; shape optimization; vehicle components; structural optimization; aeroelasticity; artificial intelligence; multidisciplinary optimization; and composites.

  12. Evaluation of microplate immunocapture method for detection of Vibrio cholerae, Salmonella Typhi and Shigella flexneri from food.

    PubMed

    Fakruddin, Md; Hossain, Md Nur; Ahmed, Monzur Morshed

    2017-08-29

    Improved methods with better separation and concentration ability for detection of foodborne pathogens are in constant need. The aim of this study was to evaluate a microplate immunocapture (IC) method for detection of Salmonella Typhi, Shigella flexneri and Vibrio cholerae from food samples, to provide a better alternative to conventional culture-based methods. The IC method was optimized for incubation time, bacterial concentration, and capture efficiency; a 6 h incubation and a cell concentration of log 6 CFU/ml provided optimal results. The method was shown to be highly specific for the pathogens concerned. Capture efficiency (CE) was around 100% for the target pathogens, whereas CE was zero or very low for non-target pathogens. The IC method also showed better pathogen detection at different cell concentrations in artificially contaminated food samples compared with culture-based methods. Performance parameters of the method (detection limit: 25 CFU/25 g; sensitivity: 100%; specificity: 96.8%; accuracy: 96.7%) were comparable to, and even better than, those of culture-based methods (detection limit: 125 CFU/25 g; sensitivity: 95.9%; specificity: 97%; accuracy: 96.2%). The IC method has the potential to be used as a method of choice for detection of foodborne pathogens in routine laboratory practice after proper validation.
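
    The performance figures quoted above follow the standard confusion-matrix definitions. A sketch with hypothetical counts (chosen only to show the arithmetic, not the study's actual data):

```python
# Standard confusion-matrix definitions behind sensitivity/specificity/accuracy.
# The counts passed below are hypothetical, for arithmetic illustration only.
def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                # true positive rate
    specificity = tn / (tn + fp)                # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall agreement
    return sensitivity, specificity, accuracy

sens, spec, acc = metrics(tp=30, fp=1, tn=30, fn=0)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```

    Zero false negatives yield the 100% sensitivity seen in the abstract, while a single false positive among 31 true negatives drops specificity to about 96.8%.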

  13. Dual-Modality PET/Ultrasound imaging of the Prostate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Jennifer S.; Moses, William W.; Pouliot, Jean

    2005-11-11

    Functional imaging with positron emission tomography (PET) will detect malignant tumors in the prostate and/or prostate bed, as well as possibly help determine tumor "aggressiveness". However, the relative uptake in a prostate tumor can be so great that few other anatomical landmarks are visible in a PET image. Ultrasound imaging with a transrectal probe provides anatomical detail in the prostate region that can be co-registered with the sensitive functional information from the PET imaging. Imaging the prostate with both PET and transrectal ultrasound (TRUS) will help determine the location of any cancer within the prostate region. This dual-modality imaging should help provide better detection and treatment of prostate cancer. LBNL has built a high performance positron emission tomograph optimized to image the prostate. Compared to a standard whole-body PET camera, our prostate-optimized PET camera has the same sensitivity and resolution, less background and lower cost. We plan to develop the hardware and software tools needed for a validated dual PET/TRUS prostate imaging system. We also plan to develop dual prostate imaging with PET and external transabdominal ultrasound, in case the TRUS system is too uncomfortable for some patients. We present the design and intended clinical uses for these dual imaging systems.

  14. Upper limb strength estimation of physically impaired persons using a musculoskeletal model: A sensitivity analysis.

    PubMed

    Carmichael, Marc G; Liu, Dikai

    2015-01-01

    Sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies. Muscle pennation angle was relatively insensitive. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that utilizing the weakened model significantly increased the sensitivity of the calculated strength at the hand, with parameters previously insensitive becoming highly sensitive. This increased sensitivity presents a significant challenge in applications utilizing musculoskeletal models to represent impaired individuals.

  15. A hybrid inventory management system responding to regular demand and surge demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammad S. Roni; Mingzhou Jin; Sandra D. Eksioglu

    2014-06-01

    This paper proposes a hybrid policy for a stochastic inventory system facing regular demand and surge demand. The combination of two different demand patterns can be observed in many areas, such as healthcare inventory and humanitarian supply chain management. The surge demand has a lower arrival rate but higher demand volume per arrival. The solution approach proposed in this paper incorporates the level crossing method and mixed integer programming to optimize the hybrid inventory policy with both regular orders and emergency orders. The level crossing method is applied to obtain the equilibrium distributions of inventory levels under a given policy. The model is further transformed into a mixed integer program to identify an optimal hybrid policy. A sensitivity analysis is conducted to investigate the impact of parameters on the optimal inventory policy and minimum cost. Numerical results clearly show the benefit of using the proposed hybrid inventory model. The model and solution approach could help healthcare providers or humanitarian logistics providers manage their emergency supplies in responding to surge demands.

  16. The economics of motion perception and invariants of visual sensitivity.

    PubMed

    Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael

    2007-06-21

    Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.

  17. Enhanced Sensitivity of a Surface Acoustic Wave Gyroscope

    NASA Astrophysics Data System (ADS)

    Zhang, Yanhua; Wang, Wen

    2009-10-01

    In this paper, we present an optimal design and performance evaluation of a surface acoustic wave (SAW) gyroscope consisting of a two-port SAW resonator (SAWR) and a SAW sensor (SAWS) structured as a delay line. The SAW resonator provides a stable reference vibration and creates a standing wave; the vibrating metallic dot array at the antinodes of the standing wave induces a secondary SAW in the normal direction via the Coriolis force; and the SAW sensor detects this secondary SAW. Using coupling-of-modes (COM) theory, the SAW resonator was simulated and the effects of the design parameters on the frequency response of the device were investigated. A theoretical analysis was also performed to investigate the effect of the metallic dots on the frequency response of the SAW device. The measured frequency response S21 of the fabricated 80 MHz two-port SAW resonator agrees well with the simulated result: a low insertion loss (~5 dB) and a single steep resonance peak were observed. In gyroscopic experiments using a rate table, the optimal metallic dot thickness was determined to be ~350 nm, and the sensitivity of the fabricated SAW gyroscope at that thickness was 3.2 µV deg⁻¹ s⁻¹.

  18. Computational study for optimization of a plasmon FET as a molecular biosensor

    NASA Astrophysics Data System (ADS)

    Ciappesoni, Mark; Cho, Seongman; Tian, Jieyuan; Kim, Sung Jin

    2018-02-01

    Surface plasmon resonance (SPR) is widely studied because it exhibits optical properties that are sensitive to changes in the refractive index of the surrounding medium. As novel SPR devices develop rapidly, there is a need for models and simulation environments that allow continued development and optimization of these devices. A biological sensing device of interest is the Plasmon FET, which has been shown experimentally to have a limit of detection (LOD) of 20 pg/ml while being immune to the absorption of the medium. The Plasmon FET is a metal-semiconductor-metal detector that employs functionalized gold nanostructures on a semiconducting layer. This direct approach has the advantage of not requiring readout optics, reducing size and allowing point-of-care measurements. Using Lumerical FDTD and DEVICE numerical solvers, we report an advanced simulation environment illustrating several key sensor specifications, including LOD, resolution, sensitivity, and dynamic range, for a variety of biological markers, providing a comprehensive analysis of a direct plasmon-to-electric conversion device designed to function with colored media (e.g., whole blood). This model allows the simulation and optimization of a plasmonic sensor that already offers advantages in size, operability, and multiplexing capability, with real-time monitoring.

  19. Capillary Electrophoresis Analysis of Organic Amines and Amino Acids in Saline and Acidic Samples Using the Mars Organic Analyzer

    NASA Astrophysics Data System (ADS)

    Stockton, Amanda M.; Chiesl, Thomas N.; Lowenstein, Tim K.; Amashukeli, Xenia; Grunthaner, Frank; Mathies, Richard A.

    2009-11-01

    The Mars Organic Analyzer (MOA) has enabled the sensitive detection of amino acid and amine biomarkers in laboratory standards and in a variety of field sample tests. However, the MOA is challenged when samples are extremely acidic and saline or contain polyvalent cations. Here, we have optimized the MOA analysis, sample labeling, and sample dilution buffers to handle such challenging samples more robustly. Higher ionic strength buffer systems with pKa values near pH 9 were developed to provide better buffering capacity and salt tolerance. The addition of ethylenediaminetetraacetic acid (EDTA) ameliorates the negative effects of multivalent cations. The optimized protocol utilizes a 75 mM borate buffer (pH 9.5) for Pacific Blue labeling of amines and amino acids. After labeling, 50 mM (final concentration) EDTA is added to samples containing divalent cations to ameliorate their effects. This optimized protocol was used to successfully analyze amino acids in a saturated brine sample from Saline Valley, California, and a subcritical water extract of a highly acidic sample from the Río Tinto, Spain. This work expands the analytical capabilities of the MOA and increases its sensitivity and robustness for samples from extraterrestrial environments that may exhibit pH and salt extremes as well as metal ions.

  20. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.

  1. Capillary electrophoresis analysis of organic amines and amino acids in saline and acidic samples using the Mars organic analyzer.

    PubMed

    Stockton, Amanda M; Chiesl, Thomas N; Lowenstein, Tim K; Amashukeli, Xenia; Grunthaner, Frank; Mathies, Richard A

    2009-11-01

    The Mars Organic Analyzer (MOA) has enabled the sensitive detection of amino acid and amine biomarkers in laboratory standards and in a variety of field sample tests. However, the MOA is challenged when samples are extremely acidic and saline or contain polyvalent cations. Here, we have optimized the MOA analysis, sample labeling, and sample dilution buffers to handle such challenging samples more robustly. Higher ionic strength buffer systems with pKa values near pH 9 were developed to provide better buffering capacity and salt tolerance. The addition of ethylenediaminetetraacetic acid (EDTA) ameliorates the negative effects of multivalent cations. The optimized protocol utilizes a 75 mM borate buffer (pH 9.5) for Pacific Blue labeling of amines and amino acids. After labeling, 50 mM (final concentration) EDTA is added to samples containing divalent cations to ameliorate their effects. This optimized protocol was used to successfully analyze amino acids in a saturated brine sample from Saline Valley, California, and a subcritical water extract of a highly acidic sample from the Río Tinto, Spain. This work expands the analytical capabilities of the MOA and increases its sensitivity and robustness for samples from extraterrestrial environments that may exhibit pH and salt extremes as well as metal ions.

  2. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.

    PubMed

    Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul

    2013-10-01

    Mathematical models can be used to study chemotherapy of tumor cells. In 1979, Goldie and Coldman proposed the first mathematical model relating the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance, and its original idea has been extended and further investigated in massive follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model to explain, with a simulation approach, why an alternating non-cross-resistant chemotherapy is optimal. Subsequently, in 1983, Goldie and Coldman proposed an extended stochastic model and provided a rigorous mathematical proof of their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments focused mainly on a process with symmetrical parameter settings, and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetrical assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to analyze some optimal policies of the model analytically, and Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof of Goldie and Coldman's result. In addition to the theoretical derivation, numerical results are included to verify the correctness of our work.

  3. Optimal error functional for parameter identification in anisotropic finite strain elasto-plasticity

    NASA Astrophysics Data System (ADS)

    Shutov, A. V.; Kaygorodtseva, A. A.; Dranishnikov, N. S.

    2017-10-01

    A problem of parameter identification for a model of finite strain elasto-plasticity is discussed. The utilized phenomenological material model accounts for nonlinear isotropic and kinematic hardening; the model kinematics is described by a nested multiplicative split of the deformation gradient. A hierarchy of optimization problems is considered. First, following the standard procedure, the material parameters are identified through minimization of a certain least square error functional. Next, the focus is placed on finding optimal weighting coefficients which enter the error functional. Toward that end, stochastic noise with systematic and non-systematic components is introduced into the available measurement results; a superordinate optimization problem seeks to minimize the sensitivity of the resulting material parameters to the introduced noise. The advantage of this approach is that no additional experiments are required; it also provides an insight into the robustness of the identification procedure. As an example, experimental data for the steel 42CrMo4 are considered and a set of weighting coefficients is found, which is optimal in a certain class.

  4. Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation

    NASA Astrophysics Data System (ADS)

    Choi, J.; Raguin, L. G.

    2010-10-01

    Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.

  5. Flexible modulation of risk attitude during decision-making under quota.

    PubMed

    Fujimoto, Atsushi; Takahashi, Hidehiko

    2016-10-01

    Risk attitude is often regarded as an intrinsic parameter of individual personality. However, ethological studies have reported state-dependent strategy optimization irrespective of individual preference. To synthesize these two contrasting literatures, we developed a novel gambling task that dynamically manipulated the quota severity (the required outcome to clear the task) over a course of choice trials and conducted a task-fMRI study in human participants. The participants showed their individual risk preference when they had no quota constraint ('individual-preference mode'), while they adopted a state-dependent optimal strategy when they needed to achieve a quota ('strategy-optimization mode'). fMRI analyses showed that the interplay among prefrontal areas and salience-network areas reflected the quota severity and the utilization of the optimal strategy, shedding light on the neural substrates of quota-dependent risk attitude. Our results demonstrate the complex nature of risk-sensitive decision-making and may provide a new perspective for understanding problematic risky behaviors in humans.

  6. Optimal ordering quantities for substitutable deteriorating items under joint replenishment with cost of substitution

    NASA Astrophysics Data System (ADS)

    Mishra, Vinod Kumar

    2017-09-01

    In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this model the inventory level of both items is depleted by demand and deterioration; when an item is out of stock, its demand is partially fulfilled by the other item and all unsatisfied demand is lost. Each substituted item incurs a cost of substitution, and demand and deterioration are considered deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over that without substitution.

  7. A lexicographic weighted Tchebycheff approach for multi-constrained multi-objective optimization of the surface grinding process

    NASA Astrophysics Data System (ADS)

    Khalilpourazari, Soheyl; Khalilpourazary, Saman

    2017-05-01

    In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finish grinding conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff formulation of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between the conflicting objective functions, which helps the decision maker select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in the grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed on each objective function value.
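
    The weighted Tchebycheff scalarization at the heart of this approach minimizes the maximum weighted deviation from an ideal point, which yields Pareto-optimal compromises between conflicting objectives. A minimal sketch follows; the two quadratic objectives, the weights, and the reference point are invented stand-ins, not the grinding model:

```python
import numpy as np

# Weighted Tchebycheff scalarization: minimize the maximum weighted deviation
# from an ideal (reference) point z*. The objectives here are toy stand-ins
# for two conflicting criteria such as "total time" and "cost".
def f1(x):
    return (x - 1.0) ** 2   # minimized at x = 1

def f2(x):
    return (x + 1.0) ** 2   # minimized at x = -1, conflicting with f1

z_star = np.array([0.0, 0.0])   # ideal point (each objective's own minimum)
w = np.array([0.5, 0.5])        # decision-maker weights

xs = np.linspace(-2.0, 2.0, 4001)
F = np.stack([f1(xs), f2(xs)], axis=1)
tcheby = np.max(w * np.abs(F - z_star), axis=1)
x_opt = xs[np.argmin(tcheby)]
print(x_opt)  # a compromise between the two individual minima
```

    With equal weights the compromise lands midway between the two single-objective optima; varying w traces out different Pareto-optimal solutions, which is how the trade-off front is explored.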

  8. Optimization of a vacuum chamber for vibration measurements.

    PubMed

    Danyluk, Mike; Dhingra, Anoop

    2011-10-01

    A 200 °C high vacuum chamber has been built to improve vibration measurement sensitivity. The optimized design addresses two significant issues: (i) vibration measurements under high vacuum conditions and (ii) use of design optimization tools to reduce operating costs. A test rig consisting of a cylindrical vessel with one access port has been constructed with a welded-bellows assembly used to seal the vessel and enable vibration measurements in high vacuum that are comparable with measurements in air. The welded-bellows assembly provides a force transmissibility of 0.1 or better at 15 Hz excitation under high vacuum conditions. Numerical results based on design optimization of a larger diameter chamber are presented. The general constraints on the new design include material yield stress, chamber first natural frequency, vibration isolation performance, and forced convection heat transfer capabilities over the exterior of the vessel access ports. Operating costs of the new chamber are reduced by 50% compared to a preexisting chamber of similar size and function.

  9. Targeted drug delivery and enhanced intracellular release using functionalized liposomes

    NASA Astrophysics Data System (ADS)

    Garg, Ashish

    The ability to target cancer cells using an appropriate drug delivery system can significantly reduce the side effects associated with cancer therapies and can help improve the overall quality of life after cancer survival. Integrin alpha5beta1 is expressed on several types of cancer cells, including colon cancer, and plays an important role in tumor growth and metastasis. Thus, the ability to target integrin alpha5beta1 using an appropriate drug delivery nano-vector can significantly help in inhibiting tumor growth and reducing tumor metastasis. The work in this thesis focuses on designing and optimizing functionalized stealth liposomes (liposomes covered with polyethylene glycol (PEG)) that specifically target integrin alpha5beta1. The PEG provides a steric barrier allowing the liposomes to circulate in the blood for a longer duration, and the functionalizing moiety, the PR_b peptide, specifically recognizes and binds to integrin alpha5beta1-expressing cells. The work demonstrates that by optimizing the amount of PEG and PR_b on the liposomal interface, nano-vectors can be engineered that bind to CT26.WT colon cancer cells in a specific manner and internalize through alpha5beta1-mediated endocytosis. To further improve the efficacy of the system, PR_b-functionalized pH-sensitive stealth liposomes were designed that exhibit triggered release under the mildly acidic conditions present in endocytotic vesicles. The study showed that PR_b-functionalized pH-sensitive stealth liposomes undergo destabilization under mildly acidic conditions, and that incorporation of the PR_b peptide does not significantly affect the pH-sensitivity of the liposomes. PR_b-functionalized pH-sensitive stealth liposomes bind to CT26.WT colon carcinoma cells that express integrin alpha5beta1, undergo cellular internalization, and release their load intracellularly more rapidly than other formulations. PR_b-targeted pH-sensitive stealth liposomes encapsulating 5-fluorouracil (5-FU) show significantly higher cytotoxicity than PR_b-targeted inert stealth liposomes and non-targeted stealth liposomes (both pH-sensitive and inert). The studies demonstrated that optimized PR_b-functionalized pH-sensitive liposomes have the potential to deliver a payload, such as chemotherapeutic agents, directly to colon cancer cells in an efficient and specific manner.

  10. Optimization and application of atmospheric pressure chemical and photoionization hydrogen-deuterium exchange mass spectrometry for speciation of oxygen-containing compounds.

    PubMed

    Acter, Thamina; Kim, Donghwi; Ahmed, Arif; Jin, Jang Mi; Yim, Un Hyuk; Shim, Won Joon; Kim, Young Hwan; Kim, Sunghwan

    2016-05-01

    This paper presents a detailed investigation of the feasibility of optimized positive and negative atmospheric pressure chemical ionization (APCI) mass spectrometry (MS) and atmospheric pressure photoionization (APPI) MS coupled to hydrogen-deuterium exchange (HDX) for structural assignment of diverse oxygen-containing compounds. The important parameters for optimization of HDX MS were characterized. The optimized techniques employed in the positive and negative modes showed satisfactory HDX product ions for the model compounds when dichloromethane and toluene were employed as co-solvents in APCI- and APPI-HDX, respectively. The evaluation of the mass spectra obtained from 38 oxygen-containing compounds demonstrated that the extent of HDX of the ions was structure-dependent. The combination of information provided by different ionization techniques could be used for better speciation of oxygen-containing compounds. For example, (+) APPI-HDX is sensitive to compounds with alcohol, ketone, or aldehyde substituents, while (-) APPI-HDX is sensitive to compounds with carboxylic functional groups. In addition, compounds with alcohol groups can be distinguished from other compounds by the presence of exchanged peaks. The combined information was applied to study the chemical compositions of degraded oils. The HDX pattern, double bond equivalent (DBE) distribution, and previously reported oxidation products were combined to predict structures of the compounds produced from oxidation of oil. Overall, this study shows that APCI- and APPI-HDX MS are useful experimental techniques that can be applied for the structural analysis of oxygen-containing compounds.

  11. HLA Mismatching Strategies for Solid Organ Transplantation – A Balancing Act

    PubMed Central

    Zachary, Andrea A.; Leffell, Mary S.

    2016-01-01

    HLA matching provides numerous benefits in organ transplantation including better graft function, fewer rejection episodes, longer graft survival, and the possibility of reduced immunosuppression. Mismatches are attended by more frequent rejection episodes that require increased immunosuppression that, in turn, can increase the risk of infection and malignancy. HLA mismatches also incur the risk of sensitization, which can reduce the opportunity and increase waiting time for a subsequent transplant. However, other factors such as donor age, donor type, and immunosuppression protocol, can affect the benefit derived from matching. Furthermore, finding a well-matched donor may not be possible for all patients and usually prolongs waiting time. Strategies to optimize transplantation for patients without a well-matched donor should take into account the immunologic barrier represented by different mismatches: what are the least immunogenic mismatches considering the patient’s HLA phenotype; should repeated mismatches be avoided; is the patient sensitized to HLA and, if so, what are the strengths of the patient’s antibodies? This information can then be used to define the HLA type of an immunologically optimal donor and the probability of such a donor occurring. A probability that is considered to be too low may require expanding the donor population through paired donation or modifying what is acceptable, which may require employing treatment to overcome immunologic barriers such as increased immunosuppression or desensitization. Thus, transplantation must strike a balance between the risk associated with waiting for the optimal donor and the risk associated with a less than optimal donor. PMID:28003816

  12. Optimal cut-off of homeostasis model assessment of insulin resistance (HOMA-IR) for the diagnosis of metabolic syndrome: third national surveillance of risk factors of non-communicable diseases in Iran (SuRFNCD-2007).

    PubMed

    Esteghamati, Alireza; Ashraf, Haleh; Khalilzadeh, Omid; Zandieh, Ali; Nakhjavani, Manouchehr; Rashidi, Armin; Haghazali, Mehrdad; Asgari, Fereshteh

    2010-04-07

    We have recently determined the optimal cut-off of the homeostatic model assessment of insulin resistance for the diagnosis of insulin resistance (IR) and metabolic syndrome (MetS) in non-diabetic residents of Tehran, the capital of Iran. The aim of the present study is to establish the optimal cut-off at the national level in the Iranian population with and without diabetes. Data from the third National Surveillance of Risk Factors of Non-Communicable Diseases, available for 3,071 adult Iranian individuals aged 25-64 years, were analyzed. MetS was defined according to the Adult Treatment Panel III (ATPIII) and International Diabetes Federation (IDF) criteria. HOMA-IR cut-offs from the 50th to the 95th percentile were calculated, and sensitivity, specificity, and positive likelihood ratio for MetS diagnosis were determined. The receiver operating characteristic (ROC) curves of HOMA-IR for MetS diagnosis were plotted, and the optimal cut-offs were determined by two different methods: the Youden index, and the shortest distance from the top left corner of the curve. The area under the curve (AUC) (95%CI) was 0.650 (0.631-0.670) for IDF-defined MetS and 0.683 (0.664-0.703) with the ATPIII definition. The optimal HOMA-IR cut-off for the diagnosis of IDF- and ATPIII-defined MetS in non-diabetic individuals was 1.775 (sensitivity: 57.3%, specificity: 65.3%, with ATPIII; sensitivity: 55.9%, specificity: 64.7%, with IDF). The optimal cut-offs in diabetic individuals were 3.875 (sensitivity: 49.7%, specificity: 69.6%) and 4.325 (sensitivity: 45.4%, specificity: 69.0%) for ATPIII- and IDF-defined MetS, respectively. We determined the optimal HOMA-IR cut-off points for the diagnosis of MetS in the Iranian population with and without diabetes.
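
    The Youden-index cut-off selection used in such ROC analyses can be sketched in a few lines. The marker values and labels below are simulated stand-ins, not the SuRFNCD-2007 data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated marker: cases (label 1) tend to have higher values than controls.
controls = rng.normal(1.5, 0.6, 500)
cases = rng.normal(2.8, 0.9, 500)
values = np.concatenate([controls, cases])
labels = np.concatenate([np.zeros(500), np.ones(500)])

# Scan candidate cut-offs; at each one compute sensitivity and specificity.
cutoffs = np.unique(values)
sens = np.array([(values[labels == 1] >= c).mean() for c in cutoffs])
spec = np.array([(values[labels == 0] < c).mean() for c in cutoffs])

# Youden index J = sensitivity + specificity - 1; pick the cut-off maximizing J.
J = sens + spec - 1.0
best = np.argmax(J)
print(cutoffs[best], sens[best], spec[best])
```

    The paper's second criterion, the shortest distance to the top-left corner, replaces J with sqrt((1 - sens)^2 + (1 - spec)^2) and takes the argmin instead; the two rules can pick slightly different cut-offs.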

  13. Optimizing the loss of one-dimensional photonic crystal towards high-sensitivity Bloch-surface-wave sensors under intensity interrogation scheme

    NASA Astrophysics Data System (ADS)

    Kong, Weijing; Wan, Yuhang; Du, Kun; Zhao, Wenhui; Wang, Shuang; Zheng, Zheng

    2016-11-01

    The reflected intensity change of the Bloch-surface-wave (BSW) resonance influenced by the loss of a truncated one-dimensional photonic crystal structure is numerically analyzed and studied in order to enhance the sensitivity of Bloch-surface-wave-based sensors. The finite truncated one-dimensional photonic crystal structure is designed to excite the BSW mode for water (n=1.33) as the external medium and for p-polarized plane wave incident light. The intensity interrogation scheme, which can be operated on a typical Kretschmann prism-coupling configuration by measuring the reflected intensity change of the resonance dip, is investigated to optimize the sensitivity. A figure of merit (FOM) is introduced to measure the performance of the one-dimensional photonic crystal multilayer structure under the scheme. The detection sensitivities are calculated under different device parameters with a refractive index change corresponding to different solutions of glycerol in de-ionized (DI) water. The results show that the intensity sensitivity curve varies similarly to the FOM curve and that the sensitivity of the Bloch-surface-wave sensor is greatly affected by the device loss, for which an optimized loss value can be obtained. For low-loss BSW devices, the intensity interrogation sensing sensitivity may drop sharply from the optimal value. On the other hand, the performance of the detection scheme is less affected by higher device loss. This observation is in accordance with BSW experimental sensing demonstrations as well. The results obtained could be useful for improving the performance of Bloch-surface-wave sensors for the investigated sensing scheme.

  14. Comparison of four methods to assess colostral IgG concentration in dairy cows.

    PubMed

    Chigerwe, Munashe; Tyler, Jeff W; Middleton, John R; Spain, James N; Dill, Jeffrey S; Steevens, Barry J

    2008-09-01

    To determine sensitivity and specificity of 4 methods to assess colostral IgG concentration in dairy cows and determine the optimal cutpoint for each method. Cross-sectional study. 160 Holstein dairy cows. 171 composite colostrum samples collected within 2 hours after parturition were used in the study. Test methods used to estimate colostral IgG concentration consisted of weight of the first milking, 2 hydrometers, and an electronic refractometer. Results of the test methods were compared with colostral IgG concentration determined by means of radial immunodiffusion. For each method, sensitivity and specificity for detecting colostral IgG concentration < 50 g/L were calculated across a range of potential cutpoints, and the optimal cutpoint for each test was selected to maximize sensitivity and specificity. At the optimal cutpoint for each method, sensitivity for weight of the first milking (0.42) was significantly lower than sensitivity for each of the other 3 methods (hydrometer 1, 0.75; hydrometer 2, 0.76; refractometer, 0.75), but no significant differences were identified among the other 3 methods with regard to sensitivity. Specificities at the optimal cutpoint were similar for all 4 methods. Results suggested that use of either hydrometer or the electronic refractometer was an acceptable method of screening colostrum for low IgG concentration; however, the manufacturer-defined scale for both hydrometers overestimated colostral IgG concentration. Use of weight of the first milking as a screening test to identify bovine colostrum with inadequate IgG concentration could not be justified because of the low sensitivity.

  15. Evaluation of sensitivity and specificity of a standardized procedure using different reagents for the detection of lupus anticoagulants. The Working Group on Hemostasis of the Société Française de Biologie Clinique and for the Groupe d'Etudes sur I'Hémostase et la Thrombose.

    PubMed

    Goudemand, J; Caron, C; De Prost, D; Derlon, A; Borg, J Y; Sampol, J; Sié, P

    1997-02-01

    This study was designed to test the sensitivity and specificity of a combination of 3 phospholipid-dependent assays performed with various reagents, for the detection of lupus anticoagulant (LA). Plasmas containing an LA (n = 56) or displaying various confounding pathologies [58 intrinsic pathway factor deficiencies, 9 factor VIII inhibitors, 28 plasmas from patients treated with an oral anticoagulant (OAC)] were selected. In a first step, the efficiency of each assay and reagent was assessed using the Receiver Operating Characteristic (ROC) method. Optimal cut-offs providing both sensitivity and specificity ≥ 80% were determined. The APTT assay and most of the phospholipid neutralization assays failed to discriminate factor VIII inhibitors from LA. In a second step, using the optimal cut-offs determined above, the results of all the possible combinations of the 3 assays performed with 4 different reagents were analyzed. Thirteen combinations of reagents allowed ≥ 80% of plasmas of each category (LA, factor deficiency or OAC) to be correctly classified (3/3 positive test results in LA-containing plasmas and 0/3 positive results in LA-negative samples).

  16. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten

    2016-06-08

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.

  17. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
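
    The Elementary Effect screening mentioned above can be sketched in a few lines of one-at-a-time perturbation. The model function, parameter ranges, and step size below are illustrative assumptions; production studies typically use a dedicated library such as SALib for Morris screening and Sobol decomposition:

```python
import numpy as np

# Morris-style elementary effects: one-at-a-time perturbations of each input,
# averaged over random base points, rank parameter influence cheaply.
def model(x):
    # Toy stand-in for an expensive simulation: x2 matters most, x0 least.
    return 0.1 * x[0] + 2.0 * x[1] + 10.0 * x[2] ** 2

rng = np.random.default_rng(1)
k, r, delta = 3, 50, 0.1          # inputs, trajectories, perturbation step
effects = np.zeros((r, k))
for t in range(r):
    base = rng.uniform(0.0, 1.0 - delta, k)
    y0 = model(base)
    for i in range(k):
        stepped = base.copy()
        stepped[i] += delta
        effects[t, i] = (model(stepped) - y0) / delta

mu_star = np.abs(effects).mean(axis=0)  # mean absolute elementary effect
print(mu_star)  # larger value = more influential input
```

    Screening with mu_star identifies which parameters deserve the more expensive variance-based (Sobol) analysis; here the quadratic input dominates, matching its larger local derivatives.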

  18. A review of optimization and quantification techniques for chemical exchange saturation transfer (CEST) MRI toward sensitive in vivo imaging

    PubMed Central

    Guo, Yingkun; Zheng, Hairong; Sun, Phillip Zhe

    2015-01-01

    Chemical exchange saturation transfer (CEST) MRI is a versatile imaging method that probes the chemical exchange between bulk water and exchangeable protons. CEST imaging indirectly detects dilute labile protons via bulk water signal changes following selective saturation of exchangeable protons, which offers substantial sensitivity enhancement and has sparked numerous biomedical applications. Over the past decade, CEST imaging techniques have rapidly evolved due to contributions from multiple domains, including the development of CEST mathematical models, innovative contrast agent designs, sensitive data acquisition schemes, efficient field inhomogeneity correction algorithms, and quantitative CEST (qCEST) analysis. The CEST system that underlies the apparent CEST-weighted effect, however, is complex. The experimentally measurable CEST effect depends not only on parameters such as CEST agent concentration, pH and temperature, but also on relaxation rate, magnetic field strength and more importantly, experimental parameters including repetition time, RF irradiation amplitude and scheme, and image readout. Thorough understanding of the underlying CEST system using qCEST analysis may augment the diagnostic capability of conventional imaging. In this review, we provide a concise explanation of CEST acquisition methods and processing algorithms, including their advantages and limitations, for optimization and quantification of CEST MRI experiments. PMID:25641791

  19. Optimization of pH sensing using silicon nanowire field effect transistors with HfO2 as the sensing surface.

    PubMed

    Zafar, Sufi; D'Emic, Christopher; Afzali, Ali; Fletcher, Benjamin; Zhu, Y; Ning, Tak

    2011-10-07

    Silicon nanowire field effect transistor sensors with SiO2/HfO2 as the gate dielectric sensing surface are fabricated using a top down approach. These sensors are optimized for pH sensing with two key characteristics. First, the pH sensitivity is shown to be independent of buffer concentration. Second, the observed pH sensitivity is enhanced and is equal to the Nernst maximum sensitivity limit of 59 mV/pH with a corresponding subthreshold drain current change of ∼650%/pH. These two enhanced pH sensing characteristics are attributed to the use of HfO2 as the sensing surface and an optimized fabrication process compatible with silicon processing technology.

  20. Optimal Magnetic Sensor Vests for Cardiac Source Imaging

    PubMed Central

    Lau, Stephan; Petković, Bojana; Haueisen, Jens

    2016-01-01

    Magnetocardiography (MCG) non-invasively provides functional information about the heart. New room-temperature magnetic field sensors, specifically magnetoresistive and optically pumped magnetometers, have reached sensitivities in the ultra-low range of cardiac fields while allowing for free placement around the human torso. Our aim is to optimize positions and orientations of such magnetic sensors in a vest-like arrangement for robust reconstruction of the electric current distributions in the heart. We optimized a set of 32 sensors on the surface of a torso model with respect to a 13-dipole cardiac source model under noise-free conditions. The reconstruction robustness was estimated by the condition of the lead field matrix. Optimization improved the condition of the lead field matrix by approximately two orders of magnitude compared to a regular array at the front of the torso. Optimized setups exhibited distributions of sensors over the whole torso with denser sampling above the heart at the front and back of the torso. Sensors close to the heart were arranged predominantly tangential to the body surface. The optimized sensor setup could facilitate the definition of a standard for sensor placement in MCG and the development of a wearable MCG vest for clinical diagnostics. PMID:27231910
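
    The robustness criterion used above, the condition of the lead field matrix, can be sketched as follows. The lead field entries here are random stand-ins rather than a physical forward model; only the comparison of condition numbers is the point:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_dipoles = 32, 13  # as in the abstract: 32 sensors, 13-dipole model

# Stand-in lead field matrices mapping dipole moments to sensor readings.
# Badly scaled columns mimic a sensor layout that barely "sees" some sources.
L_regular = rng.normal(size=(n_sensors, n_dipoles)) @ np.diag(
    np.logspace(0, -3, n_dipoles))                  # ill-conditioned
L_optimized = rng.normal(size=(n_sensors, n_dipoles))  # balanced sensitivity

cond_regular = np.linalg.cond(L_regular)
cond_optimized = np.linalg.cond(L_optimized)
print(cond_regular, cond_optimized)  # lower condition = more robust inversion
```

    The two-orders-of-magnitude improvement reported in the abstract is exactly a reduction of this condition number; a better-conditioned lead field matrix amplifies measurement noise less when the current distribution is reconstructed.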

  1. Simulations towards optimization of a neutron/anti-neutron oscillation experiment at the European Spallation Source

    NASA Astrophysics Data System (ADS)

    Frost, Matthew; Kamyshkov, Yuri; Castellanos, Luis; Klinkby, Esben; US NNbar Collaboration

    2015-04-01

    The observation of neutron/anti-neutron oscillation would prove the existence of Baryon Number Violation (BNV), and thus an explanation for the dominance of matter over anti-matter in the universe. The latest experiments have shown the oscillation time to be greater than 8.6 × 10^7 seconds, whereas current theoretical predictions suggest times on the order of 10^8 to 10^9 seconds. A neutron oscillation experiment proposed at the European Spallation Source (ESS) would provide sensitivity more than 1000 times that of previous experiments, thus providing a result well-suited to confirm or deny current theory. A conceptual design of the proposed experiment will be presented, as well as the optimization of key experiment components using Monte-Carlo simulation methods, including the McStas neutron ray-trace simulation package. This work is supported by the Organized Research Units Program funded by The University of Tennessee, Knoxville Office of Research and Engagement.

  2. A nanocluster-based fluorescent sensor for sensitive hemoglobin detection.

    PubMed

    Yang, Dongqin; Meng, Huijie; Tu, Yifeng; Yan, Jilin

    2017-08-01

    In this report, a fluorescence sensor for sensitive detection of hemoglobin was developed. Gold nanoclusters were first synthesized with bovine serum albumin. It was found that both hydrogen peroxide and hemoglobin could weakly quench the fluorescence of the gold nanoclusters, but when the two were applied to the nanoclusters simultaneously, much stronger quenching resulted. This enhancing effect was shown to come from the catalytic generation of hydroxyl radicals by hemoglobin. Under optimized conditions, the quenching was linearly related to the concentration of hemoglobin in the range of 1-250 nM, and a limit of detection as low as 0.36 nM could be obtained. This provided a sensitive means for the quantification of Hb. The sensor was then successfully applied to blood analyses with simple sample pretreatment. Copyright © 2017 Elsevier B.V. All rights reserved.
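
    A linear-calibration treatment of the kind described (quenching vs. concentration over 1-250 nM, detection limit from the common 3σ/slope convention) can be sketched with synthetic data. All numbers below are invented for illustration, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic calibration: fluorescence quenching grows linearly with [Hb] (nM).
conc = np.linspace(1.0, 250.0, 20)
quench = 0.8 * conc + rng.normal(0.0, 2.0, conc.size)  # toy linear response

slope, intercept = np.polyfit(conc, quench, 1)

# Blank replicates estimate baseline noise; LOD = 3 * sigma_blank / slope.
blanks = rng.normal(0.0, 2.0, 10)
lod = 3.0 * blanks.std(ddof=1) / slope
print(slope, lod)  # LOD in nM under these assumed noise levels
```

    The quality of the calibration (residual scatter around the fitted line) and the blank noise jointly set the achievable LOD, which is why quenching enhancement, a larger slope, directly improves detectability.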

  3. Optimal observables for multiparameter seismic tomography

    NASA Astrophysics Data System (ADS)

    Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner

    2014-08-01

    We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potentials and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. 
While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameter classes, as well as to 3-D heterogeneous earth models.
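
    The core linear-combination idea, choosing weights that cancel sensitivity to nuisance parameter classes while keeping sensitivity to the target, can be sketched with a toy sensitivity matrix. All kernel values below are invented for illustration; the paper determines its weights with a global search over real waveform measurements:

```python
import numpy as np

# Rows: fundamental observables (e.g. traveltimes in 4 frequency bands).
# Columns: parameter classes (density, S velocity, P velocity). Toy values.
S = np.array([
    [0.9, 0.6, 0.3],
    [0.7, 0.8, 0.2],
    [0.5, 0.9, 0.4],
    [0.3, 0.7, 0.6],
])

target, nuisance = 0, [2]  # keep density sensitivity, eliminate P velocity

# Weights w must satisfy w @ S[:, nuisance] = 0: pick them in the null space
# of the nuisance columns, then scale for unit sensitivity to the target.
_, _, Vt = np.linalg.svd(S[:, nuisance].T)
null_basis = Vt[len(nuisance):]             # basis of the feasible weight space
# Project the target column onto the feasible space to get the best direction.
w = null_basis.T @ (null_basis @ S[:, target])
w /= w @ S[:, target]                        # normalize target sensitivity to 1

combined = w @ S
print(combined)  # sensitivity of the optimal observable to each class
```

    The combined observable retains unit sensitivity to the target class while its sensitivity to the eliminated class vanishes exactly; the remaining (here S velocity) sensitivity is what trades off against noise amplification in practice.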

  4. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. The results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.

  5. Prediction of Skin Sensitization with a Particle Swarm Optimized Support Vector Machine

    PubMed Central

    Yuan, Hua; Huang, Jianping; Cao, Chenzhong

    2009-01-01

    Skin sensitization is the most commonly reported occupational illness, causing much suffering to a wide range of people. Identification and labeling of environmental allergens is urgently required to protect people from skin sensitization. The guinea pig maximization test (GPMT) and murine local lymph node assay (LLNA) are the two most important in vivo models for identification of skin sensitizers. In order to reduce the number of animal tests, quantitative structure-activity relationships (QSARs) are strongly encouraged in the assessment of skin sensitization of chemicals. This paper has investigated the skin sensitization potential of 162 compounds with LLNA results and 92 compounds with GPMT results using a support vector machine. A particle swarm optimization algorithm was implemented for feature selection from a large number of molecular descriptors calculated by Dragon. For the LLNA data set, the classification accuracies are 95.37% and 88.89% for the training and the test sets, respectively. For the GPMT data set, the classification accuracies are 91.80% and 90.32% for the training and the test sets, respectively. The classification performances were greatly improved compared to those reported in the literature, indicating that the support vector machine optimized by particle swarm in this paper is competent for the identification of skin sensitizers. PMID:19742136
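
    A binary particle swarm search for feature subsets, of the kind described, can be sketched with a stand-in fitness function. The real study scored subsets by SVM accuracy on Dragon descriptors; here, to keep the sketch self-contained, the fitness simply rewards recovering a known "informative" subset and penalizes subset size:

```python
import numpy as np

rng = np.random.default_rng(4)
n_features, n_particles, n_iter = 20, 30, 60
informative = np.zeros(n_features)
informative[[2, 5, 11]] = 1.0  # assumed useful descriptors for the toy fitness

def fitness(mask):
    # Stand-in for SVM cross-validated accuracy: reward selecting informative
    # features, lightly penalize subset size (fewer descriptors preferred).
    return (mask * informative).sum() - 0.05 * mask.sum()

# Binary PSO: velocities are real-valued; 0/1 feature masks are sampled
# through a sigmoid of the velocity.
vel = np.zeros((n_particles, n_features))
x = (rng.random((n_particles, n_features)) < 0.5).astype(float)
pbest = x.copy()
pbest_fit = np.array([fitness(m) for m in x])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(n_iter):
    r1 = rng.random(vel.shape)
    r2 = rng.random(vel.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    fits = np.array([fitness(m) for m in x])
    improved = fits > pbest_fit
    pbest[improved] = x[improved]
    pbest_fit[improved] = fits[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print(np.flatnonzero(gbest))  # indices of the selected features
```

    Swapping the toy fitness for a cross-validated classifier score turns this into the feature-selection wrapper the abstract describes; the swarm then searches descriptor subsets that maximize predictive accuracy.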

  6. Graphene-bimetal plasmonic platform for ultra-sensitive biosensing

    NASA Astrophysics Data System (ADS)

    Tong, Jinguang; Jiang, Li; Chen, Huifang; Wang, Yiqin; Yong, Ken-Tye; Forsberg, Erik; He, Sailing

    2018-03-01

    A graphene-bimetal plasmonic platform for surface plasmon resonance biosensing with ultra-high sensitivity was proposed and optimized. In this hybrid configuration, graphene nanosheets were employed to effectively absorb the excitation light and serve as biomolecular recognition elements for increased adsorption of analytes. Coating with an additional Au film prevents oxidation of the Ag substrate during the manufacturing process and enhances the sensitivity at the same time. Thus, a bimetal Au-Ag substrate enables improved sensing performance and promotes the stability of this plasmonic sensor. In this work we optimized the number of graphene layers as well as the thicknesses of the Au film and the Ag substrate based on the phase-interrogation sensitivity. We found an optimized configuration consisting of 6 layers of graphene coated on a bimetal surface consisting of a 5 nm Au film and a 30 nm Ag film. The calculation results showed the configuration could achieve a phase sensitivity as high as 1.71 × 10^6 deg/RIU, which is more than 2 orders of magnitude higher than that of the bimetal structure and the graphene-silver structure. Due to this enhanced sensing performance, the graphene-bimetal plasmonic platform proposed in this paper has potential for ultra-sensitive plasmonic sensing.

  7. Sensitivity optimization in whispering gallery mode optical cylindrical biosensors

    NASA Astrophysics Data System (ADS)

    Khozeymeh, F.; Razaghi, M.

    2018-01-01

Whispering-gallery-mode (WGM) resonances propagated in cylindrical resonators have angular and radial mode orders l and i. In this work, the higher radial order whispering-gallery-mode resonances (i = 1-4) at a fixed l are examined. The sensitivity of these resonances is analysed as a function of the structural parameters of the cylindrical resonator, such as the radius and the refractive index of the resonator material. A practical application in which cylindrical resonators are used for the measurement of glucose concentration in water is presented as a biosensor demonstrator. We calculate the wavelength shifts of WGM1-4 in several glucose/water solutions, with concentrations spanning from 0.0% to 9.0% (weight/weight). Improved sensitivity can be achieved using multi-WGM cylindrical resonators with a radius of R = 100 μm composed of MgF2 with a refractive index of nc = 1.38. The effect of polarization on sensitivity is also considered for all four WGMs. The best sensitivity, 83.07 nm/RIU, is reported for the fourth WGM with transverse magnetic polarization. These results provide optimized parameters aimed at the rapid design of cylindrical resonators as optical biosensors, where both the sensitivity and the geometry can be optimized.
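The order of magnitude of such bulk sensitivities can be rationalized with a first-order perturbation estimate, Δλ ≈ λ · η · Δn / n_eff, where η is the fraction of modal energy overlapping the analyte. A sketch with assumed numbers (probe wavelength, overlap fraction, and glucose dn/dc are all illustrative, not the paper's values):

```python
def wgm_shift(lam0, n_eff, delta_n_ambient, overlap=0.05):
    """First-order WGM resonance shift: dlam ~ lam0 * overlap * dn / n_eff.
    `overlap` is the assumed fraction of modal energy in the analyte."""
    return lam0 * overlap * delta_n_ambient / n_eff

# Hypothetical numbers: 1550 nm probe, MgF2 resonator (n_c = 1.38),
# glucose dn/dc assumed ~1.4e-3 RIU per 1% w/w.
dn_per_percent = 1.4e-3
shifts_nm = [wgm_shift(1550e-9, 1.38, c * dn_per_percent) * 1e9
             for c in (0.0, 3.0, 6.0, 9.0)]
```

With these assumptions the implied sensitivity, lam0 * overlap / n_eff ≈ 56 nm/RIU, sits in the same ballpark as the 83.07 nm/RIU reported; higher radial orders have larger evanescent overlap and hence higher sensitivity.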

  8. Optimization under Uncertainty of a Biomass-integrated Renewable Energy Microgrid with Energy Storage

    NASA Astrophysics Data System (ADS)

    Zheng, Yingying

Growing energy demand and the need to reduce carbon emissions are drawing increasing attention to the development of renewable energy technologies and management strategies. Microgrids have been developed around the world as a means to address the high penetration level of renewable generation and reduce greenhouse gas emissions while attempting to address supply-demand balancing at a more local level. This dissertation presents a model developed to optimize the design of a biomass-integrated renewable energy microgrid employing combined heat and power with energy storage. A receding horizon optimization with Monte Carlo simulation was used to evaluate optimal microgrid design and dispatch under uncertainties in the renewable energy and utility grid energy supplies, the energy demands, and the economic assumptions, so as to generate a probability density function for the cost of energy. Case studies were examined for a conceptual utility grid-connected microgrid application in Davis, California. The results provide the most cost-effective design based on the assumed energy load profile, local climate data, utility tariff structure, and technical and financial performance of the various components of the microgrid. Sensitivity and uncertainty analyses are carried out to illuminate the key parameters that influence the energy costs. The model application provides a means to determine major risk factors associated with alternative design integration and operating strategies.
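The Monte Carlo treatment of cost-of-energy uncertainty can be sketched as follows; every distribution and cost figure below is an invented placeholder, not a value from the dissertation:

```python
import random
import statistics

def simulate_coe(n_trials=5000, seed=1):
    """Toy Monte Carlo sketch of cost-of-energy uncertainty: sample fuel price,
    annual demand, and operating cost, and return the distribution of COE.
    All distributions and cost figures are illustrative placeholders."""
    rng = random.Random(seed)
    coes = []
    for _ in range(n_trials):
        capex_annualized = 120_000                   # $/yr, fixed assumption
        fuel_price = rng.gauss(8.0, 1.5)             # $/GJ, assumed
        demand_mwh = rng.gauss(2_000, 200)           # MWh/yr, assumed
        fuel_gj = demand_mwh * 3.6 * 0.4             # assume 40% served by biomass CHP
        opex = fuel_price * fuel_gj + 30_000
        coes.append((capex_annualized + opex) / demand_mwh)   # $/MWh
    return coes

coe = simulate_coe()
p50 = statistics.median(coe)        # median cost of energy over the ensemble
```

From the resulting sample one can form the probability density function of the cost of energy and read off risk quantiles, which is the role the Monte Carlo layer plays around the dispatch optimization.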

  9. Effects of loss on the phase sensitivity with parity detection in an SU(1,1) interferometer

    NASA Astrophysics Data System (ADS)

    Li, Dong; Yuan, Chun-Hua; Yao, Yao; Jiang, Wei; Li, Mo; Zhang, Weiping

    2018-05-01

We theoretically study the effects of loss on the phase sensitivity of an SU(1,1) interferometer with parity detection for various input states. We show that although the sensitivity of phase estimation decreases in the presence of loss, it can still beat the shot-noise limit for small loss. To examine the performance of parity detection, a comparison is performed among homodyne detection, intensity detection, and parity detection. Compared with homodyne detection and intensity detection, parity detection offers a slightly better optimal phase sensitivity in the absence of loss, but a worse optimal phase sensitivity under a significant amount of loss for either a coherent-state or a coherent ⊗ squeezed-state input.

  10. Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2008-01-01

We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 2371]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field-of-view for the fewest number of read-out channels. In this contribution we extend the theoretical correlated energy position optimal filter (CEPOF) algorithm (originally developed for 2-TES continuous-absorber PoSTs) to investigate practical implementation on multi-pixel single-TES PoSTs, or Hydras. We use numerically simulated data for a nine-absorber device, including realistic detector noise, to demonstrate an iterative scheme that enables convergence on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of the random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that full-width-at-half-maximum energy resolution of < 8 eV, coupled with position sensitivity down to a few hundred eV, should be achievable for a fully optimized device.

  11. Optimization of Planet Finder Observing Strategy

    NASA Astrophysics Data System (ADS)

    Sinukoff, E.

    2014-03-01

We evaluate radial velocity observing strategies to be considered for future planet-hunting surveys with the Automated Planet Finder, a new 2.4-m telescope at Lick Observatory. Observing strategies can be optimized to mitigate stellar noise, which can mask and imitate the weak Doppler signals of low-mass planets. We estimate and compare the sensitivities of five different observing strategies to planets around G2-M2 dwarfs, constructing RV noise models for each stellar spectral type that account for acoustic, granulation, and magnetic activity modes. The strategies differ in exposure time, nightly and monthly cadence, and number of years. Synthetic RV time series are produced by injecting a planet signal onto the stellar noise, sampled according to each observing strategy. For each star and each observing strategy, thousands of planet injection-recovery trials are conducted to determine the detection efficiency as a function of orbital period, minimum mass, and eccentricity. We find that 4-year observing strategies of 10 nights per month are sensitive to planets ~25-40% lower in mass than the corresponding 1-year strategies of 30 nights per month. Three 5-minute exposures spaced evenly throughout each night provide a 10% gain in sensitivity over the corresponding single 15-minute exposure strategies. All strategies are most sensitive to planets of lowest mass around the modeled K7 dwarf. This study indicates that APF surveys adopting the 4-year strategies should detect Earth-mass planets on < 10-day orbits around quiet late-K dwarfs, as well as > 1.6 Earth-mass planets in their habitable zones.
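An injection-recovery trial of the kind described can be sketched with a least-squares sinusoid fit at the injected period; here white noise stands in for the study's acoustic/granulation/activity noise models, and the sampling and detection threshold are illustrative:

```python
import math
import random

def recover(period_d, k_amp, noise_rms, n_obs=120, seed=0, snr_thresh=5.0):
    """One injection-recovery trial: inject a circular-orbit RV signal into
    white noise, fit a sinusoid at the known period by least squares, and
    declare a detection if the fitted semi-amplitude exceeds snr_thresh
    times its approximate uncertainty."""
    rng = random.Random(seed)
    t = [rng.uniform(0, 4 * 365) for _ in range(n_obs)]      # 4-yr baseline, days
    w = 2 * math.pi / period_d
    rv = [k_amp * math.sin(w * ti) + rng.gauss(0, noise_rms) for ti in t]
    # least-squares fit of a*sin(wt) + b*cos(wt) at the injected period
    s = [math.sin(w * ti) for ti in t]
    c = [math.cos(w * ti) for ti in t]
    Sss = sum(x * x for x in s); Scc = sum(x * x for x in c)
    Ssc = sum(a * b for a, b in zip(s, c))
    Ssy = sum(a * b for a, b in zip(s, rv)); Scy = sum(a * b for a, b in zip(c, rv))
    det = Sss * Scc - Ssc ** 2
    a = (Scc * Ssy - Ssc * Scy) / det
    b = (Sss * Scy - Ssc * Ssy) / det
    k_fit = math.hypot(a, b)                                 # fitted semi-amplitude
    sigma_k = noise_rms * math.sqrt(2.0 / n_obs)             # approximate error
    return k_fit / sigma_k > snr_thresh

# detection efficiency for a weak signal (K = 1 m/s in 2 m/s noise)
rate = sum(recover(7.0, 1.0, 2.0, seed=s) for s in range(200)) / 200.0
```

A full study repeats such trials over grids of period, minimum mass, and eccentricity for each cadence, which is how the ~25-40% mass-sensitivity gains quoted above are measured.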

  12. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods yield a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite-difference sensitivity analysis.

  13. Design of a residential microgrid in Lagos del Cacique, Bucaramanga, Colombia

    NASA Astrophysics Data System (ADS)

    Bellon, D.; González Estrada, O. A.; Martínez, A.

    2017-12-01

This paper presents a model that analyses options for supplying energy to a grid-connected house in Lagos del Cacique, Bucaramanga, Colombia. Three power supplies were considered: a photovoltaic array, a 1 kW wind turbine, and a 2.6 kW gasoline generator, together with a battery for energy storage. The variables considered for the sensitivity analysis are the price of gasoline and the variation in loads, specified in order to evaluate the effect of uncertainty. The simulation, performed with the HOMER software, suggests an optimal microgrid configuration combining the generator, photovoltaic panels, and battery.

  14. Dynamic nuclear polarization and optimal control spatial-selective 13C MRI and MRS

    NASA Astrophysics Data System (ADS)

    Vinding, Mads S.; Laustsen, Christoffer; Maximov, Ivan I.; Søgaard, Lise Vejby; Ardenkjær-Larsen, Jan H.; Nielsen, Niels Chr.

    2013-02-01

Aimed at 13C metabolic magnetic resonance imaging (MRI) and spectroscopy (MRS) applications, we demonstrate that dynamic nuclear polarization (DNP) may be combined with optimal control 2D spatial selection to simultaneously obtain high sensitivity and well-defined spatial restriction. This is achieved through the development of spatial-selective single-shot spiral-readout MRI and MRS experiments combined with dynamic nuclear polarization hyperpolarized [1-13C]pyruvate on a 4.7 T pre-clinical MR scanner. The method stands out from related techniques by facilitating anatomically shaped region-of-interest (ROI) single-metabolite signals, available for higher image resolution or single-peak spectra. The 2D spatial-selective rf pulses were designed using a novel Krotov-based optimal control approach capable of rapidly and iteratively providing successful pulse sequences in the absence of qualified initial guesses. The technique may be important for early detection of abnormal metabolism, monitoring disease progression, and drug research.

  15. Optimal ordering and production policy for a recoverable item inventory system with learning effect

    NASA Astrophysics Data System (ADS)

    Tsai, Deng-Maw

    2012-02-01

    This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
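The flavor of the cost minimization can be illustrated with a toy total-cost function and a grid search; the cost structure and learning term below are invented stand-ins for the article's derived cost functions and search procedures:

```python
def total_cost(q_new, q_rec, demand, setup_new, setup_rec, hold, learn_b):
    """Illustrative annual cost for a recoverable-item system: setup/ordering
    costs for new and recovered lots plus holding cost, with a crude learning
    factor shrinking the recovery lot's holding exposure as lot size grows.
    Not the paper's exact cost functions, which are derived in the article."""
    setup = setup_new * demand / q_new + setup_rec * demand / q_rec
    learn = q_rec ** (-learn_b)                 # learning-curve style reduction
    holding = hold * (q_new + q_rec * learn) / 2.0
    return setup + holding

# simple grid search over both lot sizes, standing in for the paper's
# search procedures (all parameter values are illustrative)
best = min((total_cost(qn, qr, 1200, 80.0, 50.0, 2.0, 0.15), qn, qr)
           for qn in range(50, 1001, 10) for qr in range(50, 1001, 10))
best_cost, best_q_new, best_q_rec = best
```

A sensitivity analysis like the article's would rerun this search while perturbing the demand, setup, holding, and learning parameters one at a time.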

  16. Performance evaluation and optimization of multiband phase-modulated radio over IsOWC link with balanced coherent homodyne detection

    NASA Astrophysics Data System (ADS)

    Zong, Kang; Zhu, Jiang

    2018-04-01

In this paper, we present a multiband phase-modulated (PM) radio over intersatellite optical wireless communication (IsOWC) link with balanced coherent homodyne detection. The proposed system can provide transparent transport of multiband radio frequency (RF) signals with higher linearity and better receiver sensitivity than an intensity-modulation direct-detection (IM/DD) system. Expressions for the RF gain, noise figure (NF), and third-order spurious-free dynamic range (SFDR) are derived, accounting for the third-order intermodulation product and amplified spontaneous emission (ASE) noise. The optimal power of the local oscillator (LO) optical signal is also derived theoretically. Numerical results for the RF gain, NF, and third-order SFDR are given for demonstration. The results indicate that the gain of the optical preamplifier and the power of the LO optical signal should be optimized to obtain satisfactory performance.
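The third-order SFDR quoted in such link analyses follows the standard two-tone relation SFDR = (2/3)(OIP3 − noise floor), with OIP3 in dBm and the noise floor in dBm/Hz. A one-line sketch with illustrative link values (not taken from the paper):

```python
def sfdr_db(oip3_dbm, noise_floor_dbm_hz):
    """Third-order spurious-free dynamic range in dB·Hz^(2/3):
    SFDR = (2/3) * (OIP3 - noise floor)."""
    return (2.0 / 3.0) * (oip3_dbm - noise_floor_dbm_hz)

# hypothetical link: OIP3 of +30 dBm, noise floor of -160 dBm/Hz
sfdr = sfdr_db(30.0, -160.0)
```

Raising the LO power or preamplifier gain shifts both OIP3 and the noise floor, which is why the paper treats their joint optimization rather than maximizing either alone.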

  17. Optimal reconstruction of historical water supply to a distribution system: A. Methodology.

    PubMed

    Aral, M M; Guan, J; Maslia, M L; Sautner, J B; Gillig, R E; Reyes, J J; Williams, R C

    2004-09-01

The New Jersey Department of Health and Senior Services (NJDHSS), with support from the Agency for Toxic Substances and Disease Registry (ATSDR), conducted an epidemiological study of childhood leukaemia and nervous system cancers that occurred in the period 1979 through 1996 in Dover Township, Ocean County, New Jersey. The epidemiological study explored a wide variety of possible risk factors, including environmental exposures. ATSDR and NJDHSS determined that completed human exposure pathways to groundwater contaminants occurred in the past through private and community water supplies (i.e. the water distribution system serving the area). To investigate this exposure, a model of the water distribution system was developed and calibrated through an extensive field investigation. The components of this water distribution system, such as the number of pipes, tanks, and supply wells in the network, changed significantly over a 35-year period (1962-1996), the time frame established for the epidemiological study. Data on the historical management of this system were limited. Thus, it was necessary to investigate alternative ways to reconstruct the operation of the system and test the sensitivity of the system to various alternative operations. Manual reconstruction of the historical water supply to the system in order to provide this sensitivity analysis was time-consuming and labour-intensive, given the complexity of the system and the time constraints imposed on the study. To address these issues, the problem was formulated as an optimization problem, in which it was assumed that the water distribution system was operated in an optimum manner at all times to satisfy the constraints in the system. The solution to the optimization problem provided the historical water supply strategy in a consistent manner for each month of the study period.
The non-uniqueness of the selected historical water supply strategy was addressed by the formulation of a second model, which was based on the first solution. Numerous other sensitivity analyses were also conducted using these two models. Both models are solved using a two-stage progressive optimality algorithm along with genetic algorithms (GAs) and the EPANET2 water distribution network solver. This process reduced the required solution time and generated a historically consistent water supply strategy for the water distribution system.

  18. Design sensitivity analysis of boundary element substructures

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Saigal, Sunil; Gallagher, Richard H.

    1989-01-01

The ability to exactly reduce, or condense, a three-dimensional model and then iterate on this reduced-size model, representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.

  19. Issues and Strategies in Solving Multidisciplinary Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya

    2013-01-01

Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft, and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adopted for their resolution are discussed. This is followed by a discussion of analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during the design of an engine component. The limitation was overcome by augmenting the optimization with animation. Optimum solutions obtained were infeasible for the aircraft and airbreathing propulsion engine problems. Alleviating this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape; engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues; cited references provide detailed discussion of the topics. A structural design can also be generated by the traditional method and by the stochastic design concept. The merits and limitations of the three methods (the traditional method, the optimization method, and the stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated.
In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest. Some variation was noticed in the designs calculated by the methods; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traces out an inverted-S-shaped graph whose center corresponds to the mean-valued design. A heavy design with weight approaching infinity would be required for a near-zero rate of failure, while weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remains a challenge.

  20. Wavefront-guided versus wavefront-optimized laser in situ keratomileusis: contralateral comparative study.

    PubMed

    Padmanabhan, Prema; Mrochen, Michael; Basuthkar, Subam; Viswanathan, Deepa; Joseph, Roy

    2008-03-01

To compare the outcomes of wavefront-guided and wavefront-optimized treatment in fellow eyes of patients having laser in situ keratomileusis (LASIK) for myopia. Medical and Vision Research Foundation, Tamil Nadu, India. This prospective comparative study comprised 27 patients who had wavefront-guided LASIK in 1 eye and wavefront-optimized LASIK in the fellow eye. The Hansatome (Bausch & Lomb) was used to create a superior-hinged flap, and the Allegretto laser (WaveLight Laser Technologie AG) was used for photoablation. The Allegretto wave analyzer was used to measure ocular wavefront aberrations, and the Functional Acuity Contrast Test chart to measure contrast sensitivity, before and 1 month after LASIK. The refractive and visual outcomes and the changes in aberrations and contrast sensitivity were compared between the 2 treatment modalities. One month postoperatively, 92% of eyes in the wavefront-guided group and 85% in the wavefront-optimized group had uncorrected visual acuity of 20/20 or better; 93% and 89%, respectively, had a postoperative spherical equivalent refraction of ±0.50 diopter. The differences between groups were not statistically significant. Wavefront-guided LASIK induced less change in 18 of 22 higher-order Zernike terms than wavefront-optimized LASIK, with the change in positive spherical aberration the only statistically significant one (P = .01). Contrast sensitivity improved at the low and middle spatial frequencies (not statistically significant) and worsened significantly at high spatial frequencies after wavefront-guided LASIK; there was a statistically significant worsening at all spatial frequencies after wavefront-optimized LASIK. Although both wavefront-guided and wavefront-optimized LASIK gave excellent refractive correction results, the former induced fewer higher-order aberrations and was associated with better contrast sensitivity.

  1. Automated Sensitivity Analysis of Interplanetary Trajectories for Optimal Mission Design

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno

    2017-01-01

    This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software, simplify the process of performing sensitivity analysis, and was ultimately found to out-perform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.

  2. Tip/tilt optimizations for polynomial apodized vortex coronagraphs on obscured telescope pupils

    NASA Astrophysics Data System (ADS)

    Fogarty, Kevin; Pueyo, Laurent; Mazoyer, Johan; N'Diaye, Mamadou

    2017-09-01

Obstructions due to large secondary mirrors, primary mirror segmentation, and secondary mirror support struts all introduce diffraction artifacts that limit the performance offered by coronagraphs. However, just as vortex coronagraphs provide theoretically ideal cancellation of on-axis starlight for clear apertures, the Polynomial Apodized Vortex Coronagraph (PAVC) completely blocks on-axis light for apertures with central obscurations and delivers off-axis throughput that improves as the topological charge of the vortex increases. We examine the sensitivity of PAVC designs to tip/tilt aberrations and stellar angular size, and discuss methods for mitigating these effects. By imposing additional constraints on the pupil-plane apodization, we decrease the sensitivity of the PAVC to the small positional shifts of the on-axis source induced by either tip/tilt or stellar angular size, providing a route to overcoming an important hurdle facing the performance of vortex coronagraphs on telescopes with complicated pupils.

  3. Standoff detection: distinction of bacteria by hyperspectral laser induced fluorescence

    NASA Astrophysics Data System (ADS)

    Walter, Arne; Duschek, Frank; Fellner, Lea; Grünewald, Karin M.; Hausmann, Anita; Julich, Sandra; Pargmann, Carsten; Tomaso, Herbert; Handke, Jürgen

    2016-05-01

Rapid detection and identification of hazardous bioorganic material with high sensitivity and specificity are essential for defense and security. A single method can hardly cover these requirements. While point sensors allow a highly specific identification, they provide only localized information and are comparatively slow. Laser-based standoff systems allow almost real-time detection and classification of potentially hazardous material over a wide area and can provide information on how the aerosol may spread. The coupling of both methods may be a promising solution for optimizing the acquisition and identification of hazardous substances. The capability of the outdoor LIF system at the DLR Lampoldshausen test facility as an online classification tool has already been demonstrated. Here, we present promising data for further differentiation among bacteria. Bacterial species can exhibit unique fluorescence spectra after excitation at 280 nm and 355 nm. Upon deactivation, the spectral features change depending on the deactivation method.

  4. The electrophotonic silicon biosensor

    NASA Astrophysics Data System (ADS)

    Juan-Colás, José; Parkin, Alison; Dunn, Katherine E.; Scullion, Mark G.; Krauss, Thomas F.; Johnson, Steven D.

    2016-09-01

    The emergence of personalized and stratified medicine requires label-free, low-cost diagnostic technology capable of monitoring multiple disease biomarkers in parallel. Silicon photonic biosensors combine high-sensitivity analysis with scalable, low-cost manufacturing, but they tend to measure only a single biomarker and provide no information about their (bio)chemical activity. Here we introduce an electrochemical silicon photonic sensor capable of highly sensitive and multiparameter profiling of biomarkers. Our electrophotonic technology consists of microring resonators optimally n-doped to support high Q resonances alongside electrochemical processes in situ. The inclusion of electrochemical control enables site-selective immobilization of different biomolecules on individual microrings within a sensor array. The combination of photonic and electrochemical characterization also provides additional quantitative information and unique insight into chemical reactivity that is unavailable with photonic detection alone. By exploiting both the photonic and the electrical properties of silicon, the sensor opens new modalities for sensing on the microscale.

  5. Investigating the Group-Level Impact of Advanced Dual-Echo fMRI Combinations

    PubMed Central

    Kettinger, Ádám; Hill, Christopher; Vidnyánszky, Zoltán; Windischberger, Christian; Nagy, Zoltán

    2016-01-01

    Multi-echo fMRI data acquisition has been widely investigated and suggested to optimize sensitivity for detecting the BOLD signal. Several methods have also been proposed for the combination of data with different echo times. The aim of the present study was to investigate whether these advanced echo combination methods provide advantages over the simple averaging of echoes when state-of-the-art group-level random-effect analyses are performed. Both resting-state and task-based dual-echo fMRI data were collected from 27 healthy adult individuals (14 male, mean age = 25.75 years) using standard echo-planar acquisition methods at 3T. Both resting-state and task-based data were subjected to a standard image pre-processing pipeline. Subsequently the two echoes were combined as a weighted average, using four different strategies for calculating the weights: (1) simple arithmetic averaging, (2) BOLD sensitivity weighting, (3) temporal-signal-to-noise ratio weighting and (4) temporal BOLD sensitivity weighting. Our results clearly show that the simple averaging of data with the different echoes is sufficient. Advanced echo combination methods may provide advantages on a single-subject level but when considering random-effects group level statistics they provide no benefit regarding sensitivity (i.e., group-level t-values) compared to the simple echo-averaging approach. One possible reason for the lack of clear advantages may be that apart from increasing the average BOLD sensitivity at the single-subject level, the advanced weighted averaging methods also inflate the inter-subject variance. As the echo combination methods provide very similar results, the recommendation is to choose between them depending on the availability of time for collecting additional resting-state data or whether subject-level or group-level analyses are planned. PMID:28018165
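The four weighting strategies compared in the study can be sketched as weighted averages over the echo time series; the synthetic voxel data and echo times below are illustrative, and the weight definitions are common forms rather than the study's exact implementation:

```python
import numpy as np

def combine_echoes(S, TE, method="tSNR"):
    """Weighted combination of multi-echo fMRI time series.
    S: array (n_echoes, n_timepoints); TE: echo times in seconds.
    Weights: simple average, BOLD-sensitivity weights w_i ~ TE_i * mean(S_i),
    or tSNR weights w_i ~ mean(S_i) / std(S_i)."""
    S = np.asarray(S, dtype=float)
    TE = np.asarray(TE, dtype=float)
    if method == "average":
        w = np.ones(S.shape[0])
    elif method == "BOLD":
        w = TE * S.mean(axis=1)
    elif method == "tSNR":
        w = S.mean(axis=1) / S.std(axis=1)
    else:
        raise ValueError(method)
    w = w / w.sum()                      # normalize weights to sum to one
    return w @ S                         # combined time series

# synthetic dual-echo voxel time series (arbitrary units)
rng = np.random.default_rng(0)
echo1 = 1000 + rng.normal(0, 5, 200)     # short TE: bright, low noise
echo2 = 400 + rng.normal(0, 8, 200)      # long TE: dimmer, noisier
combined = combine_echoes([echo1, echo2], TE=[0.012, 0.035])
```

The study's finding is that at the group level these weighted variants and the plain `method="average"` combination yield essentially the same sensitivity, so the simplest option suffices.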

  6. Optimizing Filter-Probe Diffusion Weighting in the Rat Spinal Cord for Human Translation

    PubMed Central

    Budde, Matthew D.; Skinner, Nathan P.; Muftuler, L. Tugan; Schmit, Brian D.; Kurpad, Shekar N.

    2017-01-01

Diffusion tensor imaging (DTI) is a promising biomarker of spinal cord injury (SCI). In the acute aftermath, DTI in SCI animal models consistently demonstrates high sensitivity and prognostic performance, yet translation of DTI to acute human SCI has been limited. In addition to technical challenges, interpretation of the resulting metrics is ambiguous, with contributions in the acute setting from both axonal injury and edema. Novel diffusion MRI acquisition strategies such as double diffusion encoding (DDE) have recently enabled detection of features not available with DTI or similar methods. In this work, we perform a systematic optimization of DDE using simulations and an in vivo rat model of SCI and subsequently apply the protocol to the healthy human spinal cord. First, two complementary DDE approaches were evaluated, using either an orientationally invariant or a filter-probe diffusion encoding approach. While the two methods were similar in their ability to detect acute SCI, the filter-probe DDE approach had greater predictive power for functional outcomes. Next, the filter-probe DDE was compared to an analogous single diffusion encoding (SDE) approach, with the results indicating that in the spinal cord, SDE provides similar contrast with an improved signal-to-noise ratio. In the SCI rat model, the filter-probe SDE scheme was coupled with a reduced field of view (rFOV) excitation, and the results demonstrate high-quality maps of the spinal cord without contamination from edema and cerebrospinal fluid, thereby providing high sensitivity to injury severity. The optimized protocol was demonstrated in the healthy human spinal cord using the commercially available diffusion MRI sequence with modifications only to the diffusion encoding directions. Maps of axial diffusivity devoid of CSF partial-volume effects were obtained in a clinically feasible imaging time, with a straightforward analysis and variability comparable to axial diffusivity derived from DTI.
Overall, the results and optimizations describe a protocol that mitigates several difficulties with DTI of the spinal cord. Detection of acute axonal damage in the injured or diseased spinal cord will benefit from the optimized filter-probe diffusion MRI protocol outlined here. PMID:29311786

  7. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
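
    As a concrete illustration of the statistic being tuned, the following is a minimal sample entropy implementation, assuming the standard template-matching definition of SampEn(m, r); it is a sketch, not the DASim code, and the two series below are synthetic stand-ins for regular and irregular schedules:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates
    within Chebyshev distance r of each other and A does the same for
    length m+1 (self-matches excluded). Lower values = more regular series."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def matches(length):
        templ = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        count = 0
        for i in range(len(templ) - 1):
            dist = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

regular = np.sin(np.linspace(0, 20 * np.pi, 500))      # highly regular series
noise = np.random.default_rng(0).standard_normal(500)  # irregular series
assert sample_entropy(regular) < sample_entropy(noise)
```

    A regular activity scores lower SampEn than an irregular one, which is the property the tuning procedure adjusts.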

  8. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080

  9. Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.

    2000-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995) which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. 
Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically without actually solving the modified system. Sherman-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this (Akgun et al., 1998b). SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
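
    The SMW update can be sketched numerically. The sketch below uses generic matrices (not the EAL runstream): a rank-k "damage" modification K = K0 + U C V is solved by reusing only solves against the baseline K0, and checked against a full re-solve:

```python
import numpy as np

# Woodbury identity: (K0 + U C V)^{-1} b
#   = K0^{-1} b - K0^{-1} U (C^{-1} + V K0^{-1} U)^{-1} V K0^{-1} b
rng = np.random.default_rng(1)
n, k = 50, 3                                        # n dofs, rank-k local damage
K0 = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned baseline
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))
C = np.eye(k)
b = rng.standard_normal(n)

K0_inv_b = np.linalg.solve(K0, b)                   # reuses baseline "factorization"
K0_inv_U = np.linalg.solve(K0, U)
small = np.linalg.solve(np.linalg.inv(C) + V @ K0_inv_U, V @ K0_inv_b)
x_smw = K0_inv_b - K0_inv_U @ small                 # damaged solution, cheaply

x_direct = np.linalg.solve(K0 + U @ C @ V, b)       # full re-solve, for comparison
assert np.allclose(x_smw, x_direct)
```

    Only a k-by-k system is factored per damage scenario, which is the economy the abstract refers to.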

  10. Collagen gel droplet-embedded culture drug sensitivity testing in squamous cell carcinoma cell lines derived from human oral cancers: Optimal contact concentrations of cisplatin and fluorouracil.

    PubMed

    Sakuma, Kaname; Tanaka, Akira; Mataga, Izumi

    2016-12-01

    The collagen gel droplet-embedded culture drug sensitivity test (CD-DST) is an anticancer drug sensitivity test that uses a method of three-dimensional culture of extremely small samples, and it is suited to primary cultures of human cancer cells. It is a useful method for oral squamous cell carcinoma (OSCC), in which the cancer tissues available for testing are limited. However, since the optimal contact concentrations of anticancer drugs have yet to be established in OSCC, CD-DST for detecting drug sensitivities of OSCC is currently performed by applying the optimal contact concentrations for stomach cancer. In the present study, squamous carcinoma cell lines from human oral cancer were used to investigate the optimal contact concentrations of cisplatin (CDDP) and fluorouracil (5-FU) during CD-DST for OSCC. CD-DST was performed in 7 squamous cell carcinoma cell lines derived from human oral cancers (Ca9-22, HSC-3, HSC-4, HO-1-N-1, KON, OSC-19 and SAS) using CDDP (0.15, 0.3, 1.25, 2.5, 5.0 and 10.0 µg/ml) and 5-FU (0.4, 0.9, 1.8, 3.8, 7.5, 15.0 and 30.0 µg/ml), and the optimal contact concentrations were calculated from the clinical response rate of OSCC to single-drug treatment and the in vitro efficacy rate curve. The optimal concentrations were 0.5 µg/ml for CDDP and 0.7 µg/ml for 5-FU. The antitumor efficacy of CDDP at this optimal contact concentration in CD-DST was compared to the antitumor efficacy in the nude mouse method. The T/C values, which were calculated as the ratio of the colony volume of the treatment group and the colony volume of the control group, at the optimal contact concentration of CDDP and of the nude mouse method were almost in agreement (P<0.05) and predicted clinical efficacy, indicating that the calculated optimal contact concentration is valid. 
Therefore, chemotherapy for OSCC based on anticancer drug sensitivity tests offers patients a greater freedom of choice and is likely to assume a greater importance in the selection of treatment from the perspectives of function preservation and quality of life, as well as representing a treatment option for unresectable, intractable or recurrent cases.

  11. Population level differences in thermal sensitivity of energy assimilation in terrestrial salamanders.

    PubMed

    Clay, Timothy A; Gifford, Matthew E

    2017-02-01

    Thermal adaptation predicts that thermal sensitivity of physiological traits should be optimized to thermal conditions most frequently experienced. Furthermore, thermodynamic constraints predict that species with higher thermal optima should have higher performance maxima and narrower performance breadths. We tested these predictions by examining the thermal sensitivity of energy assimilation between populations within two species of terrestrial, lungless salamanders, Plethodon albagula and P. montanus. Within P. albagula, we examined populations that were latitudinally separated by >450 km. Within P. montanus, we examined populations that were elevationally separated by >900 m. Thermal sensitivity of energy assimilation varied substantially between populations of P. albagula separated latitudinally, but did not vary between populations of P. montanus separated elevationally. Specifically, in P. albagula, the lower latitude population had a higher thermal optimum, higher maximal performance, and narrower performance breadth compared to the higher latitude population. Furthermore, across all individuals as thermal optima increased, performance maxima also increased, providing support for the theory that "hotter is better". Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. A Wearable and Highly Sensitive Graphene Strain Sensor for Precise Home-Based Pulse Wave Monitoring.

    PubMed

    Yang, Tingting; Jiang, Xin; Zhong, Yujia; Zhao, Xuanliang; Lin, Shuyuan; Li, Jing; Li, Xinming; Xu, Jianlong; Li, Zhihong; Zhu, Hongwei

    2017-07-28

    Profuse medical information about cardiovascular properties can be gathered from pulse waveforms. Therefore, it is desirable to design a smart pulse monitoring device to achieve noninvasive and real-time acquisition of cardiovascular parameters. Most current pulse sensors, however, are bulky or insufficiently sensitive. In this work, a graphene-based skin-like sensor is explored for pulse wave sensing with features of easy use and wearing comfort. Moreover, adjusting the substrate stiffness and interfacial bonding achieves the optimal balance between sensor linearity and signal sensitivity, as well as measurement of the beat-to-beat radial arterial pulse. Compared with existing bulky and nonportable clinical instruments, this highly sensitive and soft sensing patch not only provides a primary sensor interface to human skin, but can also objectively and accurately detect subtle pulse signal variations in real time, such as pulse waveforms at different ages and pre- and post-exercise, thus presenting a promising solution to home-based pulse monitoring.

  13. Decision Tree based Prediction and Rule Induction for Groundwater Trichloroethene (TCE) Pollution Vulnerability

    NASA Astrophysics Data System (ADS)

    Park, J.; Yoo, K.

    2013-12-01

    For groundwater resource conservation, it is important to accurately assess groundwater pollution sensitivity or vulnerability. In this work, we attempted to use a data-mining approach to assess groundwater pollution vulnerability in a TCE (trichloroethylene)-contaminated Korean industrial site. The conventional DRASTIC method failed to describe the TCE sensitivity data, showing poor correlation with hydrogeological properties. Among the data-mining methods tested, Artificial Neural Network (ANN), Multiple Logistic Regression (MLR), Case Base Reasoning (CBR), and Decision Tree (DT), the accuracy and consistency of the Decision Tree were the best. According to subsequent tree analyses with the optimal DT model, the failure of the conventional DRASTIC method to fit the TCE sensitivity data may be due to the use of inaccurate weight values of hydrogeological parameters for the study site. These findings provide a proof of concept that a DT-based data-mining approach can be used to predict groundwater TCE sensitivity, and to induce rules for it, without pre-existing information on the weights of hydrogeological properties.
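
    A minimal sketch of the DT prediction-and-rule-induction idea, on synthetic data with hypothetical hydrogeological feature names (the site data and the study's actual features are not reproduced here):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in: three scaled attributes and a binary "TCE detected"
# label driven by two of them; the fitted tree both predicts vulnerability
# and yields human-readable if-then rules (the "rule induction").
rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 3))                         # depth, conductivity, recharge
y = ((X[:, 0] < 0.4) & (X[:, 2] > 0.5)).astype(int)    # shallow + high recharge -> vulnerable

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=["depth", "conductivity", "recharge"])
print(rules)                                           # induced decision rules
assert tree.score(X, y) > 0.95                         # tree recovers the rule
```

    The printed rules make the fitted weights/thresholds inspectable, in contrast to a fixed-weight index such as DRASTIC.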

  14. Enhanced Sensitivity of Wireless Chemical Sensor Based on Love Wave Mode

    NASA Astrophysics Data System (ADS)

    Wang, Wen; Oh, Haekwan; Lee, Keekeun; Yang, Sangsik

    2008-09-01

    A 440 MHz wireless and passive Love-wave-based chemical sensor was developed for CO2 detection. The developed device was composed of a reflective delay line patterned on 41° YX LiNbO3 piezoelectric substrate, a poly(methyl methacrylate) (PMMA) waveguide layer, and Teflon AF 2400 sensitive film. A theoretical model is presented to describe wave propagation in Love wave devices with large piezoelectricity and to allow the design of an optimized structure. In wireless device testing using a network analyzer, infusion of CO2 into the testing chamber induced large phase shifts of the reflection peaks owing to the interaction between the sensing film and the test gas (CO2). Good linearity and repeatability were observed at CO2 concentrations of 0-350 ppm. The sensitivity obtained from the Love wave device was approximately 7.07° ppm⁻¹. The gas response properties of the fabricated Love-wave sensor in terms of linearity and sensitivity were presented, and a comparison with surface acoustic wave devices was also discussed.

  15. Sensitivity analysis of reactive ecological dynamics.

    PubMed

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
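
    Reactivity has a compact matrix characterization: for a linearized system dx/dt = Ax it is the largest eigenvalue of the Hermitian part (A + Aᵀ)/2. A minimal numerical sketch (the matrix is illustrative, not one of the paper's ecological models):

```python
import numpy as np

def reactivity(A):
    """Maximum instantaneous growth rate of ||x|| for dx/dt = A x:
    the largest eigenvalue of the Hermitian part of A."""
    H = (A + A.T) / 2
    return np.linalg.eigvalsh(H).max()

# An asymptotically stable matrix (eigenvalues -1, -2) that is
# nevertheless reactive: perturbations are transiently amplified
# before the eventual decay back to equilibrium.
A = np.array([[-1.0, 5.0],
              [0.0, -2.0]])
assert np.all(np.linalg.eigvals(A).real < 0)   # stable
assert reactivity(A) > 0                       # yet reactive
```

    The sensitivity analyses in the paper then differentiate such indices with respect to model parameters.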

  16. All-fiber Mach-Zehnder interferometer for tunable two quasi-continuous points' temperature sensing in seawater.

    PubMed

    Liu, Tianqi; Wang, Jing; Liao, Yipeng; Wang, Xin; Wang, Shanshan

    2018-04-30

    An all-fiber Mach-Zehnder interferometer (MZI) for two quasi-continuous points' temperature sensing in seawater is proposed. Based on the beam propagation theory, the transmission spectrum is designed to present two sets of clear and independent interferences. Following this design, the MZI is fabricated and two-point temperature sensing in seawater is demonstrated with sensitivities of 42.69 pm/°C and 39.17 pm/°C, respectively. By further optimization, a sensitivity of 80.91 pm/°C can be obtained, which is 3-10 times higher than that of fiber Bragg gratings and microfiber resonators, and higher than that of almost all similar MZI-based temperature sensors. In addition, factors affecting the sensitivities are also discussed and verified in experiment. The two-point temperature sensing demonstrated here shows the advantages of simple and compact construction, robust structure, easy fabrication, high sensitivity, immunity to salinity, and a tunable distance of 1-20 centimeters between the two points, which may provide a reference for macroscopic oceanic research and other sensing applications based on MZIs.

  17. Point-of-care test for cervical cancer in LMICs.

    PubMed

    Mohammed, Sulma I; Ren, Wen; Flowers, Lisa; Rajwa, Bartek; Chibwesha, Carla J; Parham, Groesbeck P; Irudayaraj, Joseph M K

    2016-04-05

    Cervical cancer screening using Papanicolaou's smear test has been highly effective in reducing death from this disease. However, this test is unaffordable in low- and middle-income countries, and its complexity has limited wide-scale uptake. Alternative tests, such as visual inspection with acetic acid or Lugol's iodine and human papillomavirus DNA, are sub-optimal in terms of specificity and sensitivity, thus sensitive and affordable tests with high specificity for on-site reporting are needed. Using proteomics and bioinformatics, we have identified valosin-containing protein (VCP) as differentially expressed between normal specimens and those with cervical intra-epithelial neoplasia grade 2/3 (CIN2/CIN3+) or worse. VCP-specific immunohistochemical staining (validated by a point-of-care technology) provided sensitive (93%) and specific (88%) identification of CIN2/CIN3+ and may serve as a critical biomarker for cervical-cancer screening. Future efforts will focus on further refinements to enhance analytic sensitivity and specificity of our proposed test, as well as on prototype development.
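
    For reference, the reported operating characteristics follow directly from a 2x2 confusion matrix; the counts below are hypothetical, chosen only to reproduce the quoted 93% sensitivity and 88% specificity:

```python
# Hypothetical screening counts (not the study's actual specimen numbers).
tp, fn = 93, 7      # CIN2/CIN3+ cases: detected vs missed
tn, fp = 88, 12     # normal specimens: correctly cleared vs flagged

sensitivity = tp / (tp + fn)   # fraction of true disease detected
specificity = tn / (tn + fp)   # fraction of normals correctly cleared
assert round(sensitivity, 2) == 0.93
assert round(specificity, 2) == 0.88
```

    Raising one of these without lowering the other is the refinement goal stated in the abstract's closing sentence.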

  18. Synthesis of Trigeneration Systems: Sensitivity Analyses and Resilience

    PubMed Central

    Carvalho, Monica; Lozano, Miguel A.; Ramos, José; Serra, Luis M.

    2013-01-01

    This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs. PMID:24453881

  19. Color matrix display simulation based upon luminance and chromatic contrast sensitivity of early vision

    NASA Technical Reports Server (NTRS)

    Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.

    1992-01-01

    This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.
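
    The pyramid stage can be sketched: one level of a 2D Haar decomposition splits an image into a low-pass band plus horizontal, vertical, and diagonal detail channels. This is a generic sketch of the transform only; the model's contrast-sensitivity weights and Mathematica implementation are not reproduced:

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar decomposition: returns the low-low band
    (passed to the next pyramid level) and three orientation-specific
    detail bands, each half the resolution of the input."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2      # low-low (coarse image)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2      # diagonal detail
    return ll, lh, hl, hh

# A constant image carries energy only in the low-low band.
ll, lh, hl, hh = haar2d_level(np.ones((4, 4)))
assert np.allclose(ll, 1.0)
assert np.allclose(lh, 0) and np.allclose(hl, 0) and np.allclose(hh, 0)
```

    In the model, each such channel is weighted by a human contrast-sensitivity measurement at the matching frequency and orientation before detectability is computed.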

  20. Analysis of pesticides in soy milk combining solid-phase extraction and capillary electrophoresis-mass spectrometry.

    PubMed

    Hernández-Borges, Javier; Rodriguez-Delgado, Miguel Angel; García-Montelongo, Francisco J; Cifuentes, Alejandro

    2005-06-01

    In this work, the determination of a group of triazolopyrimidine sulfoanilide herbicides (cloransulam-methyl, metosulam, flumetsulam, florasulam, and diclosulam) in soy milk by capillary electrophoresis-mass spectrometry (CE-MS) is presented. The main electrospray interface (ESI) parameters (nebulizer pressure, dry gas flow rate, dry gas temperature, and composition of the sheath liquid) are optimized using a central composite design. To increase the sensitivity of the CE-MS method, an off-line sample preconcentration procedure based on solid-phase extraction (SPE) is combined with an on-line stacking procedure (i.e., normal stacking mode, NSM). Samples could be injected for up to 100 s, providing limits of detection (LODs) down to 74 µg/L, i.e., at the low-ppb level, with relative standard deviation (RSD, %) values between 3.8% and 6.4% for peak areas on the same day, and between 6.5% and 8.1% on three different days. The usefulness of the optimized SPE-NSM-CE-MS procedure is demonstrated through the sensitive quantification of the selected pesticides in soy milk samples.

  1. Synthesis of trigeneration systems: sensitivity analyses and resilience.

    PubMed

    Carvalho, Monica; Lozano, Miguel A; Ramos, José; Serra, Luis M

    2013-01-01

    This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs.
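
    The model class (integer linear programming for equipment selection) can be sketched with SciPy. All costs, capacities, and demands below are illustrative placeholders, not the hospital data or the paper's full model:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint

# Toy synthesis problem: choose integer numbers of cogeneration modules
# and boilers to cover peak electricity and heat demand at minimum
# annual cost (all figures hypothetical).
cost = np.array([120.0, 40.0])         # annual cost per cogen module, per boiler
A = np.array([[1.0, 0.0],              # electricity supplied per unit (MW)
              [1.2, 2.0]])             # heat supplied per unit (MW)
demand = np.array([3.0, 7.0])          # peak electricity, peak heat (MW)

res = milp(c=cost,
           constraints=LinearConstraint(A, lb=demand, ub=np.inf),
           integrality=np.ones(2))     # both decision variables integer
n_cogen, n_boiler = res.x.round().astype(int)
print(n_cogen, n_boiler)               # optimal configuration
```

    Sensitivity and resilience analyses of the kind described then re-solve such a model while perturbing prices, demands, and tariffs.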

  2. Optimization of the coplanar interdigital capacitive sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yunzhi; Zhan, Zheng; Bowler, Nicola

    2017-02-01

    Interdigital capacitive sensors are applied in nondestructive testing and material property characterization of low-conductivity materials. The sensor performance is typically described based on the penetration depth of the electric field into the sample material, the sensor signal strength and its sensitivity. These factors all depend on the geometry and material properties of the sensor and sample. In this paper, a detailed analysis is provided, through finite element simulations, of the ways in which the sensor's geometrical parameters affect its performance. The geometrical parameters include the number of digits forming the interdigital electrodes and the ratio of digit width to their separation. In addition, the influence of the presence or absence of a metal backplane on the sample is analyzed. Further, the effects of sensor substrate thickness and material on signal strength are studied. The results of the analysis show that it is necessary to take into account a trade-off between the desired sensitivity and penetration depth when designing the sensor. Parametric equations are presented to assist the sensor designer or nondestructive evaluation specialist in optimizing the design of a capacitive sensor.

  3. Fermentation of Saccharomyces cerevisiae - Combining kinetic modeling and optimization techniques points out avenues to effective process design.

    PubMed

    Scheiblauer, Johannes; Scheiner, Stefan; Joksch, Martin; Kavsek, Barbara

    2018-09-14

    A combined experimental/theoretical approach is presented, for improving the predictability of Saccharomyces cerevisiae fermentations. In particular, a mathematical model was developed explicitly taking into account the main mechanisms of the fermentation process, allowing for continuous computation of key process variables, including the biomass concentration and the respiratory quotient (RQ). For model calibration and experimental validation, batch and fed-batch fermentations were carried out. Comparison of the model-predicted biomass concentrations and RQ developments with the corresponding experimentally recorded values shows a remarkably good agreement for both batch and fed-batch processes, confirming the adequacy of the model. Furthermore, sensitivity studies were performed, in order to identify model parameters whose variations have significant effects on the model predictions: our model responds with significant sensitivity to the variations of only six parameters. These studies provide a valuable basis for model reduction, as also demonstrated in this paper. Finally, optimization-based parametric studies demonstrate how our model can be utilized for improving the efficiency of Saccharomyces cerevisiae fermentations. Copyright © 2018 Elsevier Ltd. All rights reserved.
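
    A minimal sketch of the model class, assuming generic batch Monod kinetics with illustrative parameter values (the paper's calibrated model additionally tracks ethanol, the respiratory quotient, and fed-batch feeding):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Batch growth: dX/dt = mu(S) X, dS/dt = -mu(S) X / Yxs,
# with Monod kinetics mu(S) = mu_max * S / (Ks + S).
mu_max, Ks, Yxs = 0.4, 0.5, 0.5    # 1/h, g/L, g biomass per g glucose (illustrative)

def batch(t, y):
    X, S = y
    mu = mu_max * S / (Ks + S)
    return [mu * X, -mu * X / Yxs]

sol = solve_ivp(batch, (0.0, 30.0), [0.1, 20.0])   # X0 = 0.1 g/L, S0 = 20 g/L
X_end, S_end = sol.y[:, -1]
assert X_end > 9.5      # biomass approaches X0 + Yxs * S0 ≈ 10.1 g/L
assert S_end < 0.1      # glucose essentially exhausted
```

    Sensitivity studies of the kind described amount to re-running such a simulation while perturbing mu_max, Ks, Yxs, and the other parameters, and ranking their effects on the predicted trajectories.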

  4. Trade-offs in sensitivity and sampling depth in bimodal atomic force microscopy and comparison to the trimodal case

    PubMed Central

    Eslami, Babak; Ebeling, Daniel

    2014-01-01

    This paper presents experiments on Nafion® proton exchange membranes and numerical simulations illustrating the trade-offs between the optimization of compositional contrast and the modulation of tip indentation depth in bimodal atomic force microscopy (AFM). We focus on the original bimodal AFM method, which uses amplitude modulation to acquire the topography through the first cantilever eigenmode, and drives a higher eigenmode in open-loop to perform compositional mapping. This method is attractive due to its relative simplicity, robustness and commercial availability. We show that this technique offers the capability to modulate tip indentation depth, in addition to providing sample topography and material property contrast, although there are important competing effects between the optimization of sensitivity and the control of indentation depth, both of which strongly influence the contrast quality. Furthermore, we demonstrate that the two eigenmodes can be highly coupled in practice, especially when highly repulsive imaging conditions are used. Finally, we also offer a comparison with a previously reported trimodal AFM method, where the above competing effects are minimized. PMID:25161847

  5. Solvothermal Synthesis of Hierarchical TiO2 Microstructures with High Crystallinity and Superior Light Scattering for High-Performance Dye-Sensitized Solar Cells.

    PubMed

    Li, Zhao-Qian; Mo, Li-E; Chen, Wang-Chao; Shi, Xiao-Qiang; Wang, Ning; Hu, Lin-Hua; Hayat, Tasawar; Alsaedi, Ahmed; Dai, Song-Yuan

    2017-09-20

    In this article, hierarchical TiO2 microstructures (HM-TiO2) were synthesized by a simple solvothermal method adopting tetra-n-butyl titanate as the titanium source in a mixed solvent composed of N,N-dimethylformamide and acetic acid. Due to their high crystallinity and superior light-scattering ability, the resultant HM-TiO2 are advantageous as photoanodes for dye-sensitized solar cells. When assembled into the entire photovoltaic device with C101 dye as a sensitizer, the pure HM-TiO2-based solar cells showed an ultrahigh photovoltage of up to 0.853 V. Finally, by employing the as-obtained HM-TiO2 as the scattering layer and optimizing the architecture of the dye-sensitized solar cells, both a higher photovoltage and a higher incident photon-to-electron conversion efficiency were harvested with respect to TiO2-nanoparticle-based dye-sensitized solar cells, resulting in a high power conversion efficiency of 9.79%. This work provides a promising strategy for developing photoanode materials with outstanding photoelectric conversion performance.

  6. Efficient Gradient-Based Shape Optimization Methodology Using Inviscid/Viscous CFD

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1997-01-01

    The formerly developed preconditioned-biconjugate-gradient (PBCG) solvers for the analysis and the sensitivity equations had resulted in very large error reductions per iteration; quadratic convergence was achieved whenever the solution entered the domain of attraction to the root. Its memory requirement was also lower as compared to a direct inversion solver. However, this memory requirement was high enough to preclude the realistic, high grid-density design of a practical 3D geometry. This limitation served as the impetus to the first-year activity (March 9, 1995 to March 8, 1996). Therefore, the major activity for this period was the development of the low-memory methodology for the discrete-sensitivity-based shape optimization. This was accomplished by solving all the resulting sets of equations using an alternating-direction-implicit (ADI) approach. The results indicated that shape optimization problems which required large numbers of grid points could be resolved with a gradient-based approach. Therefore, to better utilize the computational resources, it was recommended that a number of coarse grid cases, using the PBCG method, should initially be conducted to better define the optimization problem and the design space, and obtain an improved initial shape. Subsequently, a fine grid shape optimization, which necessitates using the ADI method, should be conducted to accurately obtain the final optimized shape. The other activity during this period was the interaction with the members of the Aerodynamic and Aeroacoustic Methods Branch of Langley Research Center during one stage of their investigation to develop an adjoint-variable sensitivity method using the viscous flow equations. This method had algorithmic similarities to the variational sensitivity methods and the control-theory approach. However, unlike the prior studies, it was considered for the three-dimensional, viscous flow equations. 
The major accomplishment in the second period of this project (March 9, 1996 to March 8, 1997) was the extension of the shape optimization methodology to the Thin-Layer Navier-Stokes (TLNS) equations. Both the Euler-based and the TLNS-based analyses compared well with the analyses obtained using the CFL3D code. The sensitivities, again from both levels of the flow equations, also compared very well with the finite-differenced sensitivities. A fairly large set of shape optimization cases was conducted to study a number of issues previously not well understood. The testbed for these cases was the shaping of an arrow wing in Mach 2.4 flow. All the final shapes, obtained from either a coarse-grid-based or a fine-grid-based optimization, using either an Euler-based or a TLNS-based analysis, were re-analyzed using a fine-grid TLNS solution for their function evaluations. This allowed a fairer comparison of their relative merits. From the aerodynamic performance standpoint, the fine-grid TLNS-based optimization produced the best shape, and the fine-grid Euler-based optimization produced the lowest cruise efficiency.

  7. Immune suppression with supraoptimal doses of antigen in contact sensitivity. I. Demonstration of suppressor cells and their sensitivity to cyclophosphamide.

    PubMed

    Sy, M S; Miller, S D; Claman, H N

    1977-07-01

    Immunologic suppression was induced in a mouse model of contact sensitization to DNFB by using supraoptimal doses of antigen. In these studies, in vivo measurement of ear swelling as an indication of immunologic responsiveness correlated well with measurement of in vitro antigen-induced cell proliferation. This unresponsiveness was specific, since supraoptimal doses of DNFB did not interfere with the development of contact sensitivity to another contactant, oxazolone. The decrease in responsiveness is a form of active suppression, as lymphoid cells from supraoptimally sensitized donors transferred suppression to normal recipients. Furthermore, pretreatment with cyclophosphamide (Cy) reversed the suppression seen in supraoptimally sensitized animals but had no effect on the optimal sensitization regimen. These results indicate that supraoptimal doses of contactants can activate suppressor cells and that precursors of these cells are sensitive to Cy. Such suppressors regenerate within 7 to 14 days after Cy treatment. The ability of Cy pretreatment to affect supraoptimal sensitization without affecting optimal sensitization confirms other reports indicating that the observed results of Cy treatment depend critically upon the dose of antigen used.

  8. Lipid-anthropometric index optimization for insulin sensitivity estimation

    NASA Astrophysics Data System (ADS)

    Velásquez, J.; Wong, S.; Encalada, L.; Herrera, H.; Severeyn, E.

    2015-12-01

Insulin sensitivity (IS) is the ability of cells to react to the presence of insulin; when this ability is diminished, low insulin sensitivity or insulin resistance (IR) is considered. IR has been related to other metabolic disorders such as metabolic syndrome (MS), obesity, dyslipidemia, and diabetes. IS can be determined using direct or indirect methods. The indirect methods are less accurate and less invasive than the direct ones, and they use glucose and insulin values from the oral glucose tolerance test (OGTT). Accuracy is established by comparison between direct and indirect methods using the Spearman rank correlation coefficient. This paper proposes a lipid-anthropometric index that offers acceptable correlation with an insulin sensitivity index for different populations (DB1 = subjects with MS, DB2 = sedentary subjects without MS, and DB3 = marathon runners) without using OGTT glucose and insulin values. The proposed method is parametrically optimized through random cross-validation, using the Spearman rank correlation against the CAUMO method as the comparator. CAUMO is an indirect method derived from a simplification of the minimal-model intravenous glucose tolerance test direct method (MINMOD-IGTT), with acceptable correlation (0.89). The results show that the optimized method achieved better correlation with CAUMO in all populations than the non-optimized one. It was also observed that the optimized method correlates better with CAUMO in the DB2 and DB3 groups than the HOMA-IR method, which is the most widely used for diagnosing insulin resistance. The optimized method could detect incipient insulin resistance, since it classifies as insulin-resistant those subjects who present impaired postprandial insulin and glucose values.
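The optimization loop described above hinges on ranking agreement between a candidate index and a reference IS method. A minimal sketch of that comparison step, with a hand-rolled Spearman coefficient and entirely hypothetical data (the inverse relationship and all values are assumptions, not the paper's):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (No tie handling, which is adequate for continuous measurements.)"""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical data: a reference IS index (e.g., CAUMO-style values) and a
# candidate lipid-anthropometric index expected to vary inversely with it.
rng = np.random.default_rng(0)
reference = rng.uniform(1.0, 10.0, 50)
candidate = 1.0 / reference + rng.normal(0.0, 0.001, 50)

rho = spearman_rho(candidate, reference)  # strongly negative for a good inverse index
```

In a cross-validation loop, the index parameters maximizing |rho| against the reference method on held-out subjects would be retained.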

  9. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
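The Monte Carlo side of the reliability analysis can be sketched as follows: sample the random variables, evaluate a strength surrogate, count failures, and convert the failure probability to a reliability index. The distributions and the response-surface form below are illustrative stand-ins, not the report's data:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical random variables (assumed means/spreads for illustration)
E1   = rng.normal(18.5e6, 0.74e6, n)   # fiber-direction modulus, psi
t    = rng.normal(0.04, 0.001, n)      # wall thickness, in
load = rng.normal(3.0e4, 3.0e3, n)     # applied axial load, lb

# Toy response-surface surrogate for axial buckling strength (assumed form)
strength = 4.5e-2 * E1 * t

pf = float(np.mean(strength <= load))   # Monte Carlo probability of buckling failure
beta = -NormalDist().inv_cdf(pf)        # reliability index: beta = -Phi^{-1}(pf)
```

Sensitivities of beta to each variable's mean and standard deviation would then be estimated by perturbing those parameters and re-running the simulation, which is exactly why the report replaces the expensive buckling analysis with a response-surface model.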

  10. Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry

    NASA Astrophysics Data System (ADS)

    Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun

    2015-10-01

Beam pointing angle (BPA) is one of the key parameters affecting the performance of a laser Doppler velocimetry (LDV) system. By considering velocity sensitivity and echo power, the optimized BPA of a vehicle LDV is analyzed for the first time. Assuming the mounting error is within ±1.0 deg and that reflectivity and roughness vary across scenarios, the optimized BPA lies in the range of 29 to 43 deg. The velocity sensitivity is then in the range of 1.25 to 1.76 MHz/(m/s), and the normalized echo power at the optimized BPA is greater than 53.49% of that at 0 deg. Laboratory experiments on a rotating table were performed at BPAs of 10, 35, and 66 deg, and the results agree with the theoretical analysis. Furthermore, a vehicle experiment at the optimized BPA of 35 deg was conducted against a microwave radar (accuracy of ±0.5% full-scale output). The root-mean-square error of the LDV results was smaller than that of the Microstar II (0.0202 versus 0.1495 m/s), and the mean velocity discrepancy was 0.032 m/s. This proves that with the optimized BPA, both high velocity sensitivity and acceptable echo power can be guaranteed simultaneously.
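The angle dependence of velocity sensitivity can be sketched with the common single-beam LDV relation f_D = 2·v·cos(θ)/λ, where θ is the angle between the beam and the velocity vector. The wavelength below is an assumption (the abstract does not state it), so the numbers are illustrative only, not a reproduction of the paper's 1.25 to 1.76 MHz/(m/s) figures:

```python
import math

WAVELENGTH = 1.55e-6  # m; assumed laser wavelength, not given in the abstract

def velocity_sensitivity_hz_per_mps(theta_deg):
    """Doppler sensitivity f_D / v = 2*cos(theta)/lambda for a beam tilted by
    theta from the velocity vector (common single-beam LDV relation)."""
    return 2.0 * math.cos(math.radians(theta_deg)) / WAVELENGTH

# Sensitivity in MHz/(m/s) at the three experimental pointing angles
sens = {theta: velocity_sensitivity_hz_per_mps(theta) / 1e6 for theta in (10, 35, 66)}
```

The trade-off the paper optimizes is visible here: sensitivity falls as the beam tilts away from the velocity vector, while echo power (a function of surface reflectivity and geometry) pulls the optimum in the other direction.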

  11. Geometry Modeling and Grid Generation for Design and Optimization

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1998-01-01

    Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.

  12. Application of Adjoint Methodology in Various Aspects of Sonic Boom Design

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2014-01-01

    One of the advances in computational design has been the development of adjoint methods allowing efficient calculation of sensitivities in gradient-based shape optimization. This paper discusses two new applications of adjoint methodology that have been developed to aid in sonic boom mitigation exercises. In the first, equivalent area targets are generated using adjoint sensitivities of selected boom metrics. These targets may then be used to drive the vehicle shape during optimization. The second application is the computation of adjoint sensitivities of boom metrics on the ground with respect to parameters such as flight conditions, propagation sampling rate, and selected inputs to the propagation algorithms. These sensitivities enable the designer to make more informed selections of flight conditions at which the chosen cost functionals are less sensitive.

  13. Evolutionary Design of Controlled Structures

    NASA Technical Reports Server (NTRS)

    Masters, Brett P.; Crawley, Edward F.

    1997-01-01

Basic physical concepts of structural delay and transmissibility are provided for simple rod and beam structures. Investigations show the sensitivity of these concepts to differing controlled-structures variables and to rational system modeling effects. An evolutionary controls/structures design method is developed. The basis of the method is an accurate model formulation for dynamic compensator optimization and Genetic Algorithm based updating of sensor/actuator placement and structural attributes. One- and three-dimensional examples from the literature are used to validate the method. Frequency-domain interpretation of these controlled-structure systems provides physical insight as to how the objective is optimized and, consequently, what is important in the objective. Several disturbance-rejection-type controls-structures systems are optimized for a stellar interferometer spacecraft application. The interferometric designs include closed-loop tracking optics. Designs are generated for differing structural aspect ratios, differing disturbance attributes, and differing sensor selections. Physical limitations in achieving performance are given in terms of average system transfer function gains and system phase loss. A spacecraft-like optical interferometry system is investigated experimentally over several different optimized controlled-structures configurations. Configurations represent common and not-so-common approaches to mitigating pathlength errors induced by disturbances of two different spectra. Results show that an optimized controlled structure for low-frequency broadband disturbances achieves modest performance gains over a mass-equivalent regular structure, while an optimized structure for high-frequency narrowband disturbances is four times better in terms of root-mean-square pathlength. These results are predictable given the nature of the physical system and the optimization design variables.
Fundamental limits on controlled performance are discussed based on the measured and fit average system transfer function gains and system phase loss.

  14. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy of parallel mechanisms, but controlling errors and improving accuracy at the design and manufacturing stage still requires further effort. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis, and tolerance allocation are investigated. Based on inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector-chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources affecting end-effector accuracy are separated. Sensitivity analysis is then performed on the uncompensatable error sources: a probabilistic sensitivity model is established, and a global sensitivity index is proposed to analyze the influence of these error sources on end-effector accuracy. The results show that orientation error sources have a greater effect on end-effector accuracy. Based on the sensitivity analysis results, the tolerance design is cast as a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. Using a genetic algorithm, the tolerance allocated to each component is finally determined, yielding tolerance ranges for ten kinds of geometric error sources. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
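The tolerance-allocation step can be illustrated with a toy version of the same formulation: minimize a reciprocal manufacturing-cost model subject to an accuracy budget, using a bare-bones genetic algorithm. All coefficients, bounds, and the cost model below are assumptions for illustration, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical problem data (illustrative only)
k = np.array([4.0, 2.5, 3.0, 1.5])   # cost coefficients per error source
s = np.array([0.8, 1.2, 0.5, 1.0])   # sensitivity of end-effector accuracy to each tolerance
E = 0.05                             # allowed end-effector error budget, mm
LO, HI = 0.002, 0.05                 # tolerance bounds per source, mm

def cost(T):
    """Reciprocal cost model (tighter tolerances are more expensive) plus a
    heavy penalty for exceeding the accuracy budget."""
    c = np.sum(k / T, axis=-1)
    violation = np.maximum(np.sum(s * T, axis=-1) - E, 0.0)
    return c + 1e8 * violation

# Minimal genetic algorithm: elitist selection + blend crossover + mutation
pop = rng.uniform(LO, HI, (60, 4))
for _ in range(200):
    idx = np.argsort(cost(pop))
    elite = pop[idx[:20]]                            # keep the best third
    pa = elite[rng.integers(0, 20, 40)]
    pb = elite[rng.integers(0, 20, 40)]
    w = rng.uniform(0.0, 1.0, (40, 1))
    children = w * pa + (1.0 - w) * pb               # blend crossover
    children += rng.normal(0.0, 0.001, children.shape)  # mutation
    pop = np.clip(np.vstack([elite, children]), LO, HI)

best = pop[np.argmin(cost(pop))]   # allocated tolerances for the four sources
```

The real problem has ten geometric error sources and a sensitivity model derived from the Jacobian analysis, but the structure, cost objective, accuracy constraint, bounded tolerances, and GA search, is the same.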

  15. Research and Infrastructure Development Center for Nanomaterials Research

    DTIC Science & Technology

    2009-05-01

scale, this technique may prove highly valuable for optimizing the distance-dependent energy transfer effects for maximum sensitivity to target... Pulsed laser deposition of carbon films on quartz and silicon simply did not work due to their poor conductivity. We found that pyrolized photoresist

  16. Aerodynamic Shape Sensitivity Analysis and Design Optimization of Complex Configurations Using Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.

    1997-01-01

A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and by a Gauss-Seidel algorithm for the three-dimensional one; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, sensitivity analysis and shape optimization have been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.
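The incremental iterative form can be sketched on a toy system: the sensitivity equation [∂R/∂Q]·(dQ/db) = −∂R/∂b is solved by repeated Gauss-Seidel-style corrections rather than by direct inversion, so only triangular solves are needed. The matrix and right-hand side here are random diagonally dominant stand-ins, not flow Jacobians:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy diagonally dominant Jacobian dR/dQ (stand-in for the flow Jacobian)
n = 50
A = rng.normal(0.0, 0.1, (n, n))
A += np.diag(np.abs(A).sum(axis=1) + 1.0)   # make the iteration convergent

dRdb = rng.normal(0.0, 1.0, n)   # residual sensitivity to a shape parameter b
rhs = -dRdb

# Incremental iterative form: repeatedly solve M * dx = (rhs - A @ x)
# with M = the lower-triangular part of A (Gauss-Seidel preconditioner)
x = np.zeros(n)
M = np.tril(A)
for _ in range(100):
    dx = np.linalg.solve(M, rhs - A @ x)    # incremental correction
    x += dx

residual = float(np.linalg.norm(rhs - A @ x))
```

Replacing the triangular solve with a Krylov step (GMRES) gives the two-dimensional variant mentioned in the abstract; either way, only Jacobian matrix-vector products are needed, which is what keeps the memory footprint manageable at large scale.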

  17. Optimization of a Low Cost and Broadly Sensitive Genotyping Assay for HIV-1 Drug Resistance Surveillance and Monitoring in Resource-Limited Settings

    PubMed Central

    Zhou, Zhiyong; Wagar, Nick; DeVos, Joshua R.; Rottinghaus, Erin; Diallo, Karidia; Nguyen, Duc B.; Bassey, Orji; Ugbena, Richard; Wadonda-Kabondo, Nellie; McConnell, Michelle S.; Zulu, Isaac; Chilima, Benson; Nkengasong, John; Yang, Chunfu

    2011-01-01

Commercially available HIV-1 drug resistance (HIVDR) genotyping assays are expensive and have limitations in detecting non-B subtypes and circulating recombinant forms that are co-circulating in resource-limited settings (RLS). This study aimed to optimize a low-cost and broadly sensitive in-house assay for detecting HIVDR mutations in the protease (PR) and reverse transcriptase (RT) regions of the pol gene. The overall plasma genotyping sensitivity was 95.8% (N = 96). Compared to the original in-house assay and two commercially available genotyping systems, TRUGENE® and ViroSeq®, the optimized in-house assay showed a nucleotide sequence concordance of 99.3%, 99.6% and 99.1%, respectively. The optimized in-house assay was more sensitive in detecting mixture bases than the original in-house assay (N = 87, P<0.001) and the TRUGENE® and ViroSeq® assays. When the optimized in-house assay was applied to genotype samples collected for HIVDR surveys (N = 230), all 72 (100%) plasma samples and 69 (95.8%) of the matched dried blood spots (DBS) in the Vietnam transmitted-HIVDR survey were genotyped, with a nucleotide sequence concordance of 98.8%. Testing of treatment-experienced patient plasmas with viral load (VL) ≥ and <3 log10 copies/ml from the Nigeria and Malawi surveys yielded 100% (N = 46) and 78.6% (N = 14) genotyping rates, respectively. Furthermore, all 18 matched DBS stored at room temperature from the Nigeria survey were genotyped. Phylogenetic analysis of the 236 sequences revealed that 43.6% were CRF01_AE, 25.9% subtype C, 13.1% CRF02_AG, 5.1% subtype G, 4.2% subtype B, 2.5% subtype A, 2.1% each subtype F and unclassifiable, and 0.4% each CRF06_CPX, CRF07_BC and CRF09_CPX. Conclusions: The optimized in-house assay is broadly sensitive in genotyping HIV-1 group M viral strains and more sensitive than the original in-house, TRUGENE® and ViroSeq® assays in detecting mixed viral populations.
The broad sensitivity and substantial reagent cost saving make this assay more accessible for RLS where HIVDR surveillance is recommended to minimize the development and transmission of HIVDR. PMID:22132237

  18. Optimal Cooling of High Purity Germanium Spectrometers for Missions to Planets and Moons

    NASA Astrophysics Data System (ADS)

    Chernenko, A.; Kostenko, V.; Konev, S.; Rybkin, B.; Paschin, A.; Prokopenko, I.

    2004-04-01

Gamma-ray spectrometers based on high-purity germanium (HPGe) detectors are ultimately sensitive instruments for composition studies of the surfaces of planets and moons. However, they require deep cooling well below 120 K for the entire duration of a space mission, which challenges the feasibility of such instruments in the era of small, cost-efficient missions. In this paper we summarise our experience in the theoretical and experimental study of optimal cryogenic cooling of HPGe-based gamma-ray spectrometers, in order to find out how efficient, light and compact these instruments could be, provided technologies such as cryogenic heat pipe diodes (HPDs), efficient thermal insulation and efficient miniature cryocoolers are used.

  19. Optimization of SABRE for polarization of the tuberculosis drugs pyrazinamide and isoniazid

    NASA Astrophysics Data System (ADS)

    Zeng, Haifeng; Xu, Jiadi; Gillen, Joseph; McMahon, Michael T.; Artemov, Dmitri; Tyburn, Jean-Max; Lohman, Joost A. B.; Mewis, Ryan E.; Atkinson, Kevin D.; Green, Gary G. R.; Duckett, Simon B.; van Zijl, Peter C. M.

    2013-12-01

    Hyperpolarization produces nuclear spin polarization that is several orders of magnitude larger than that achieved at thermal equilibrium thus providing extraordinary contrast and sensitivity. As a parahydrogen induced polarization (PHIP) technique that does not require chemical modification of the substrate to polarize, Signal Amplification by Reversible Exchange (SABRE) has attracted a lot of attention. Using a prototype parahydrogen polarizer, we polarize two drugs used in the treatment of tuberculosis, namely pyrazinamide and isoniazid. We examine this approach in four solvents, methanol-d4, methanol, ethanol and DMSO and optimize the polarization transfer magnetic field strength, the temperature as well as intensity and duration of hydrogen bubbling to achieve the best overall signal enhancement and hence hyperpolarization level.

  20. Optimization of SABRE for polarization of the tuberculosis drugs pyrazinamide and isoniazid

    PubMed Central

    Zeng, Haifeng; Xu, Jiadi; Gillen, Joseph; McMahon, Michael T.; Artemov, Dmitri; Tyburn, Jean-Max; Lohman, Joost A.B.; Mewis, Ryan E.; Atkinson, Kevin D.; Green, Gary G.R.; Duckett, Simon B.; van Zijl, Peter C.M.

    2013-01-01

    Hyperpolarization produces nuclear spin polarization that is several orders of magnitude larger than that achieved at thermal equilibrium thus providing extraordinary contrast and sensitivity. As a parahydrogen induced polarization (PHIP) technique that does not require chemical modification of the substrate to polarize, Signal Amplification by Reversible Exchange (SABRE) has attracted a lot of attention. Using a prototype parahydrogen polarizer, we polarize two drugs used in the treatment of tuberculosis, namely pyrazinamide and isoniazid. We examine this approach in four solvents, methanol-d4, methanol, ethanol and DMSO and optimize the polarization transfer magnetic field strength, the temperature as well as intensity and duration of hydrogen bubbling to achieve the best overall signal enhancement and hence hyperpolarization level. PMID:24140625

  1. Hollow Waveguide Gas Sensor for Mid-Infrared Trace Gas Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S; Young, C; Chan, J

    2007-07-12

A hollow-waveguide mid-infrared gas sensor operating from 1000 cm⁻¹ to 4000 cm⁻¹ has been developed, optimized, and characterized by combining an FT-IR spectrometer with Ag/Ag-halide hollow-core optical fibers. The hollow-core waveguide simultaneously serves as a light guide and a miniature gas cell. CH₄ was used as the test analyte during exponential dilution experiments for accurate determination of the achievable limit of detection (LOD). It is shown that the optimized integration of an optical gas-sensor module with FT-IR spectroscopy provides trace sensitivity in the few-hundred parts-per-billion concentration range (ppb, v/v) for CH₄.
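In an exponential dilution experiment, the cell is filled with analyte and continuously purged, so the concentration decays as C(t) = C₀·exp(−F·t/V); the LOD is the concentration at which the signal drops to a chosen multiple of the noise floor (3σ here). All parameter values below are assumed for illustration, not the record's:

```python
import math

# Assumed experimental parameters (illustrative only)
C0 = 100.0        # starting CH4 concentration, ppm
F  = 0.05         # purge-gas flow rate, L/min
V  = 0.25         # dilution-cell volume, L
NOISE_SD = 2e-5   # absorbance noise (1 sigma), arbitrary units
K = 2e-4          # absorbance per ppm (linear Beer-Lambert regime, assumed)

def concentration(t_min):
    """Exponential dilution: C(t) = C0 * exp(-F*t/V)."""
    return C0 * math.exp(-F * t_min / V)

# LOD: the concentration whose signal equals 3x the noise floor
lod_ppm = 3 * NOISE_SD / K           # 0.3 ppm = 300 ppb with these assumptions
# Time at which the diluted concentration crosses the LOD
t_lod = (V / F) * math.log(C0 / lod_ppm)
```

Tracking the measured absorbance against the known exponential decay is what makes the method accurate: the LOD is read off at the time the signal disappears into the noise, rather than extrapolated from high-concentration calibration points.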

  2. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions as well as Monte Carlo analysis capability are included to enable statistical performance evaluations.

  3. Bifocal Fresnel Lens Based on the Polarization-Sensitive Metasurface

    NASA Astrophysics Data System (ADS)

    Markovich, Hen; Filonov, Dmitrii; Shishkin, Ivan; Ginzburg, Pavel

    2018-05-01

Thin structured surfaces allow flexible control over the propagation of electromagnetic waves. Focusing and polarization-state analysis are among the functions required for effective manipulation of radiation. Here a polarization-sensitive Fresnel zone plate lens is proposed and experimentally demonstrated for the GHz spectral range. Two spatially separated focal spots for orthogonal polarizations are obtained by designing a metasurface pattern made of overlapping, tightly packed cross- and rod-shaped antennas with strong polarization selectivity. The optimized subwavelength pattern allows multiplexing two different lenses with low polarization crosstalk on the same substrate and provides control over the focal spots of the lens solely by changing the polarization state of the incident wave. More than a wavelength of separation between the focal spots was demonstrated over a broad spectral range, covering half a decade in frequency. The proposed concept could be straightforwardly extended to the THz and visible spectra, where polarization-sensitive elements utilize the localized plasmon resonance phenomenon.
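A Fresnel zone plate's geometry follows from the textbook zone-boundary condition r_n = sqrt(n·λ·f + (n·λ/2)²). The operating frequency and focal length below are assumptions (the abstract only says "GHz range"), so the radii are purely illustrative:

```python
import math

C = 299_792_458.0          # speed of light, m/s
freq = 10e9                # assumed design frequency: 10 GHz
lam = C / freq             # wavelength, ~30 mm
f = 5 * lam                # assumed focal length

def zone_radius(n):
    """Radius of the n-th Fresnel zone boundary:
    r_n = sqrt(n*lam*f + (n*lam/2)**2)."""
    return math.sqrt(n * lam * f + (n * lam / 2) ** 2)

radii = [zone_radius(n) for n in range(1, 6)]
```

A bifocal, polarization-multiplexed plate as described in the record would lay out two such zone patterns (with laterally offset centers) on the same substrate, one addressed by each orthogonal polarization of the antenna elements.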

  4. Tailored Algorithm for Sensitivity Enhancement of Gas Concentration Sensors Based on Tunable Laser Absorption Spectroscopy.

    PubMed

    Vargas-Rodriguez, Everardo; Guzman-Chavez, Ana Dinora; Baeza-Serrato, Roberto

    2018-06-04

In this work, a novel tailored algorithm to enhance the overall sensitivity of gas concentration sensors based on the Direct Absorption Tunable Laser Absorption Spectroscopy (DA-ATLAS) method is presented. By using this algorithm, the sensor sensitivity can be custom-designed to be quasi-constant over a much larger dynamic range than that obtained by typical methods based on a single statistical feature of the sensor signal output (peak amplitude, area under the curve, mean, or RMS). Additionally, it is shown that with this algorithm an optimal function can be tailored to obtain a quasi-linear relationship between the concentration and some specific statistical features over a wider dynamic range. To test the viability of the algorithm, a basic C2H2 sensor based on DA-ATLAS was implemented, and its experimental measurements support the simulated results provided by the algorithm.
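The core idea, combining several statistical features of the signal rather than relying on one, can be sketched as a least-squares fit of a feature combination to concentration. The simulated line shape and feature set here are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 401)

def line_shape(conc):
    """Simulated direct-absorption signal: Beer-Lambert with a Lorentzian
    absorption coefficient, so the peak saturates as concentration grows."""
    return 1.0 - np.exp(-0.1 * conc / (1.0 + x**2))

def features(signal):
    """Candidate statistical features of one recorded signal."""
    return np.array([signal.max(),                    # peak amplitude
                     signal.sum(),                    # area under the curve
                     np.sqrt(np.mean(signal**2))])    # RMS

concs = np.linspace(0.1, 10.0, 40)
F = np.vstack([features(line_shape(c)) for c in concs])
A = np.column_stack([F, np.ones(len(concs))])

# "Tailor" a linear combination of features so the output tracks concentration
w, *_ = np.linalg.lstsq(A, concs, rcond=None)
pred = A @ w
r = float(np.corrcoef(pred, concs)[0, 1])
```

Because each single feature saturates differently with concentration, a fitted combination can stay quasi-linear (and the sensitivity quasi-constant) over a wider dynamic range than any one feature alone, which is the effect the paper exploits.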

  5. Approach of technical decision-making by element flow analysis and Monte-Carlo simulation of municipal solid waste stream.

    PubMed

    Tian, Bao-Guo; Si, Ji-Tao; Zhao, Yan; Wang, Hong-Tao; Hao, Ji-Ming

    2007-01-01

This paper deals with the procedure and methodology that can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study of solid waste management status and development trends in China and abroad. Focusing on various treatment and disposal technologies and processes for MSW, this study established a Monte-Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream analysis (for elements such as C, H, O, N and S) in combination with economic stream analysis of MSW was developed. By following the streams of different treatment processes, consisting of various techniques for the generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte-Carlo method was then conducted for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with landfill-gas power generation was economically the optimal technology at the present stage, provided more than 58% of the C, H, O, N and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If the landfilling cost increases, MSW separation treatment is recommended: screening first, followed by partial incineration and partial composting, with residue landfilling. The possibility of selecting incineration as the optimal technology was affected by city scale. For big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities the effectiveness of incinerating waste decreases.
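The Monte-Carlo comparison of treatment options can be sketched as follows: sample uncertain cost components, compare technologies across the samples, and use correlations as a crude sensitivity measure. All distributions and cost figures below are invented for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50_000

# Hypothetical per-tonne cost components (triangular: low, mode, high)
landfill_base    = rng.triangular(40, 60, 90, n)     # landfill operating cost
gas_power_credit = rng.triangular(5, 15, 30, n)      # revenue from landfill-gas power
incineration     = rng.triangular(90, 120, 180, n)   # incineration cost
power_credit     = rng.triangular(20, 50, 90, n)     # revenue from incineration power

cost_landfill = landfill_base - gas_power_credit
cost_incin    = incineration - power_credit

# Probability that landfill-with-gas-power is the cheaper option
p_landfill_optimal = float(np.mean(cost_landfill < cost_incin))

# Crude sensitivity: correlation of each uncertain input with the cost difference
diff = cost_landfill - cost_incin
sens = {name: float(np.corrcoef(v, diff)[0, 1])
        for name, v in [("landfill_base", landfill_base),
                        ("gas_power_credit", gas_power_credit),
                        ("incineration", incineration),
                        ("power_credit", power_credit)]}
```

The paper's model additionally propagates element streams (C, H, O, N, S) through each process chain, so the sampled costs are tied to the physical mass balance rather than drawn independently as here.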

  6. Acute toxicity prediction to threatened and endangered ...

    EPA Pesticide Factsheets

    Evaluating contaminant sensitivity of threatened and endangered (listed) species and protectiveness of chemical regulations often depends on toxicity data for commonly tested surrogate species. The U.S. EPA’s Internet application Web-ICE is a suite of Interspecies Correlation Estimation (ICE) models that can extrapolate species sensitivity to listed taxa using least-squares regressions of the sensitivity of a surrogate species and a predicted taxon (species, genus, or family). Web-ICE was expanded with new models that can predict toxicity to over 250 listed species. A case study was used to assess protectiveness of genus and family model estimates derived from either geometric mean or minimum taxa toxicity values for listed species. Models developed from the most sensitive value for each chemical were generally protective of the most sensitive species within predicted taxa, including listed species, and were more protective than geometric means models. ICE model estimates were compared to HC5 values derived from Species Sensitivity Distributions for the case study chemicals to assess protectiveness of the two approaches. ICE models provide robust toxicity predictions and can generate protective toxicity estimates for assessing contaminant risk to listed species. Reporting on the development and optimization of ICE models for listed species toxicity estimation
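An ICE model is, at its core, a least-squares regression of log-transformed toxicity values for a predicted taxon against a surrogate species. A minimal sketch with hypothetical LC50 pairs (the species, chemicals, and values are invented for illustration):

```python
import numpy as np

# Hypothetical acute-toxicity pairs (LC50, mg/L) for a surrogate species and a
# predicted taxon across six chemicals; ICE models regress the log10 values.
surrogate = np.array([0.5, 1.2, 4.0, 9.5, 22.0, 60.0])
predicted = np.array([0.3, 0.9, 2.5, 7.0, 15.0, 40.0])

logx, logy = np.log10(surrogate), np.log10(predicted)
slope, intercept = np.polyfit(logx, logy, 1)

def ice_estimate(surrogate_lc50):
    """Least-squares ICE prediction of the taxon's LC50 from the surrogate's."""
    return 10 ** (intercept + slope * np.log10(surrogate_lc50))
```

Genus- and family-level models as discussed in the record differ only in how the response values are chosen, e.g., the geometric mean versus the minimum toxicity value across the species in the predicted taxon, which is exactly the protectiveness comparison the case study makes.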

  7. Efficient PbS/CdS co-sensitized solar cells based on TiO2 nanorod arrays

    PubMed Central

    2013-01-01

Narrow-bandgap PbS nanoparticles, which may expand the light absorption range to the near-infrared region, were deposited on TiO2 nanorod arrays by the successive ionic layer adsorption and reaction method to make a photoanode for quantum dot-sensitized solar cells (QDSCs). The thickness of the PbS nanoparticle layer was optimized to enhance the photovoltaic performance of PbS QDSCs. A uniform CdS layer was directly coated on the previously grown PbS-TiO2 photoanode to protect the PbS from chemical attack by the polysulfide electrolyte. A remarkable short-circuit photocurrent density (approximately 10.4 mA/cm2) was recorded for the PbS/CdS co-sensitized solar cell, while the photocurrent density of the PbS-only sensitized solar cell was lower than 3 mA/cm2. The power conversion efficiency of the PbS/CdS co-sensitized solar cell reached 1.3%, which was beyond the arithmetic addition of the efficiencies of the single constituents (PbS and CdS). These results indicate that the synergistic combination of PbS with CdS may provide a stable and effective sensitizer for practical solar cell applications. PMID:23394609

  8. Leuco-crystal-violet micelle gel dosimeters: Component effects on dose-rate dependence

    NASA Astrophysics Data System (ADS)

    Xie, J. C.; Katz, E. A. B.; Alexander, K. M.; Schreiner, L. J.; McAuley, K. B.

    2017-05-01

    Designed experiments were performed to produce empirical models for the dose sensitivity, initial absorbance, and dose-rate dependence, respectively, of leuco-crystal-violet (LCV) micelle gel dosimeters containing cetyltrimethylammonium bromide (CTAB) and 2,2,2-trichloroethanol (TCE). Previous gels of this type showed dose-rate-dependent behaviour, producing a ~18% increase in dose sensitivity between dose rates of 100 and 600 cGy min-1. Our models predict that the dose-rate dependence can be reduced by increasing the concentrations of TCE, CTAB, and LCV. Increasing the concentrations of LCV and CTAB produces a significant increase in dose sensitivity with a corresponding increase in initial absorbance. An optimization procedure was used to determine a nearly dose-rate-independent gel that maintained high sensitivity and low initial absorbance. This gel, which contains 33 mM CTAB, 1.25 mM LCV, and 96 mM TCE in 25 mM trichloroacetic acid and 4 wt% gelatin, showed an increase in dose sensitivity of only 4% between dose rates of 100 and 600 cGy min-1, and provides an 80% greater dose sensitivity compared to Jordan’s standard gels with similar initial absorbance.

  9. Algorithms for optimization of the transport system in living and artificial cells.

    PubMed

    Melkikh, A V; Sutormina, M I

    2011-06-01

    The optimization of the transport system in a cell is considered from the viewpoint of operations research. Algorithms are proposed for optimizing the transport system of a cell both for efficiency and for weak sensitivity of the cell to environmental changes. The switching between various transport systems is considered as the mechanism underlying this weak sensitivity. As an example, the use of the algorithms for the optimization of a cardiac cell is considered. We show theoretically that, in a cardiac muscle cell, an increase of the potassium concentration in the environment causes a switching of the transport systems for this ion; this conclusion agrees qualitatively with experiments. The problem of synthesizing an optimal transport system in an artificial cell is also stated.

  10. Malaria diagnosis and treatment under the strategy of the integrated management of childhood illness (IMCI): relevance of laboratory support from the rapid immunochromatographic tests of ICT Malaria P.f/P.v and OptiMal.

    PubMed

    Tarimo, D S; Minjas, J N; Bygbjerg, I C

    2001-07-01

    The algorithm developed for the integrated management of childhood illness (IMCI) provides guidelines for the treatment of paediatric malaria. In areas where malaria is endemic, for example, the IMCI strategy may indicate that children who present with fever, a recent history of fever and/or pallor should receive antimalarial chemotherapy. In many holo-endemic areas, it is unclear whether laboratory tests to confirm that such signs are the result of malaria would be very relevant or useful. Children from a holo-endemic region of Tanzania were therefore checked for malarial parasites by microscopy and by using two rapid immunochromatographic tests (RIT) for the diagnosis of malaria (ICT Malaria P.f/P.v and OptiMal). At the time they were tested, each of these children had been targeted for antimalarial treatment (following the IMCI strategy) because of fever and/or pallor. Only 70% of the 395 children classified to receive antimalarial drugs by the IMCI algorithm had malarial parasitaemias (68.4% had Plasmodium falciparum trophozoites, 1.3% only P. falciparum gametocytes, 0.3% P. ovale and 0.3% P. malariae). As indicators of P. falciparum trophozoites in the peripheral blood, fever had a sensitivity of 93.0% and a specificity of 15.5%, whereas pallor had a sensitivity of 72.2% and a specificity of 50.8%. Both RIT had very high sensitivities (100.0% for the ICT and 94.0% for OptiMal), but the specificity of the ICT (74.0%) was significantly lower than that for OptiMal (100.0%). Fever and pallor were significantly associated with the P. falciparum asexual parasitaemias that equalled or exceeded the threshold intensity (2000/microl) that has the optimum sensitivity and specificity for the definition of a malarial episode. Diagnostic likelihood ratios (DLR) showed that a positive result in the OptiMal test (DLR = infinity) was a better indication of malaria than a positive result in the ICT (DLR = 3.85). 
In fact, OptiMal had diagnostic reliability (0.93) which approached that of an ideal test and, since it only detects live parasites, OptiMal is superior to the ICT in monitoring therapeutic responses. Although the RIT may seem attractive for use in primary health facilities because relatively inexperienced staff can perform them, the high cost of these tests is prohibitive. In holo-endemic areas, use of RIT or microscopical examination of bloodsmears may only be relevant when malaria needs to be excluded as a cause of illness (e.g. prior to treatment with toxic or expensive drugs, or during malaria epidemics). Wherever the effective drugs for the first-line treatment of malaria are cheap (e.g. chloroquine and Fansidar), treatment based on clinical diagnosis alone should prove cost-saving in health facilities without microscopy.
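    The diagnostic quantities used above follow directly from a 2x2 contingency table; a positive-test likelihood ratio of infinity simply reflects a specificity of 100%. A minimal sketch with illustrative counts (not the study's data):

```python
# Sensitivity, specificity, and positive diagnostic likelihood ratio (DLR+)
# from true/false positive and negative counts.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # DLR+ = sensitivity / (1 - specificity); infinite when there are
    # no false positives (specificity of 100%).
    dlr_positive = float("inf") if fp == 0 else sensitivity / (1 - specificity)
    return sensitivity, specificity, dlr_positive

# A perfectly specific test (like OptiMal here) yields an infinite DLR+.
print(diagnostic_metrics(94, 0, 6, 100))
```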

  11. Sensitivity analysis of pars-tensa Young's modulus estimation using inverse finite-element modeling

    NASA Astrophysics Data System (ADS)

    Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.

    2018-05-01

    Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique based on optimizing a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires choosing modeling assumptions such as PT thickness and boundary conditions. These assumptions are reported with a wide range of variation in the literature, which affects the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. Therefore, the objective of this study is to identify the modeling assumption most influential on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa. The deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A baseline FE model of the unpressurized middle ear was created. The EPT was estimated using the golden-section optimization method, which minimizes a cost function comparing the deformed FE model shape to the measured shape after pressurization. The effect of varying the modeling assumptions on EPT estimates was investigated, including changes in PT thickness, changes in pars flaccida Young's modulus, and possible FTP measurement error. The most influential parameter in EPT estimation was PT thickness, and the least influential was pars flaccida Young's modulus. The results of this study provide insight into how different parameters affect the results of EPT optimization and which parameters' uncertainties require further investigation to develop robust estimation techniques.
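    Golden-section search, the optimizer named here, brackets the minimum of a one-dimensional unimodal cost function without derivatives. A minimal sketch, with a toy quadratic standing in for the FE shape-mismatch cost (the 2.3 target is arbitrary, not a modulus estimate from the study):

```python
import math

def golden_section_minimize(cost, lo, hi, tol=1e-6):
    """Minimize a unimodal cost function on [lo, hi] by golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if cost(c) < cost(d):
            # Minimum lies in [a, d]; old c becomes the new d.
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            # Minimum lies in [c, b]; old d becomes the new c.
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Toy stand-in for the shape-mismatch cost: minimum at E = 2.3 (arbitrary units).
best_E = golden_section_minimize(lambda E: (E - 2.3) ** 2, 0.0, 10.0)
print(round(best_E, 3))
```

Each iteration shrinks the bracket by the golden ratio while reusing one interior point, which keeps the number of cost evaluations (here, FE solves) low.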

  12. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

    Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient- and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, which typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high-dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on the forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS finds history-match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF, and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
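    The GP proxy at the heart of such workflows replaces expensive simulator runs with a cheap statistical interpolant. A minimal sketch of a zero-mean GP posterior mean with an RBF kernel, on hypothetical misfit values (the actual GP-VARS training, VARS sensitivity analysis, and Bayesian-optimization layers are omitted):

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0):
    # Squared-exponential covariance between two 1-D point sets.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(x_train, y_train, x_query, noise=1e-6):
    """Posterior mean of a zero-mean GP with an RBF kernel (proxy prediction)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_query, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Toy "misfit surface": the proxy interpolates a handful of simulator runs
# so candidate parameter values can be screened without new simulations.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([4.0, 1.0, 0.1, 1.5])   # hypothetical misfit values
print(gp_posterior_mean(x, y, np.array([2.0])))
```

With near-zero noise, the proxy reproduces the training misfits exactly and smoothly interpolates between them, which is what makes Bayesian optimization on the proxy so much cheaper than optimizing the simulator directly.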

  13. CT dose minimization using personalized protocol optimization and aggressive bowtie

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Yin, Zhye; Jin, Yannan; Wu, Mingye; Yao, Yangyang; Tao, Kun; Kalra, Mannudeep K.; De Man, Bruno

    2016-03-01

    In this study, we propose to use patient-specific x-ray fluence control to reduce the radiation dose to sensitive organs while still achieving the desired image quality (IQ) in the region of interest (ROI). The mA modulation profile is optimized view by view, based on the sensitive organs and the ROI, which are obtained from an ultra-low-dose volumetric CT scout scan [1]. We use a clinical chest CT scan to demonstrate the feasibility of the proposed concept: the breast region is selected as the sensitive organ region while the cardiac region is selected as the IQ ROI. Two groups of simulations are performed based on the clinical CT dataset: (1) a constant mA scan adjusted based on the patient attenuation (120 kVp, 300 mA), which serves as baseline; (2) an optimized scan with an aggressive bowtie and ROI centering combined with patient-specific mA modulation. The results show that the combination of the aggressive bowtie and the optimized mA modulation can result in a 40% dose reduction in the breast region, while the IQ in the cardiac region is maintained. More generally, this paper demonstrates the general concept of using a 3D scout scan for optimal scan planning.

  14. Monte Carlo design of optimal wire mesh collimator for breast tumor imaging process

    NASA Astrophysics Data System (ADS)

    Saad, W. H. M.; Roslan, R. E.; Mahdi, M. A.; Choong, W.-S.; Saion, E.; Saripan, M. I.

    2011-08-01

    This paper presents the modeling of the breast tumor imaging process using a wire mesh collimator gamma camera. Previous studies showed that the wire mesh collimator has the potential to improve the sensitivity of tumor detection. In this paper, we extend our research significantly to find an optimal configuration of the wire mesh collimator specifically for semi-compressed breast tumor detection, by looking into four major factors: weight, sensitivity, spatial resolution, and tumor contrast. The number of layers in the wire mesh collimator is varied to optimize the collimator design. The statistical variations of the results are studied by simulating multiple realizations for each experiment using different starting random numbers. All the simulation environments are modeled using the Monte Carlo N-Particle Code (MCNP). The quality of the detection is measured directly by comparing the sensitivity, spatial resolution, and tumor contrast of the images produced by the wire mesh collimator, benchmarked against a standard multihole collimator. The optimal configuration is obtained by selecting the number of layers for which the tumor contrast is comparable to that of the multihole collimator when tested with a uniformly semi-compressed breast phantom. The wire mesh collimator showed higher sensitivity because of its loose arrangement, while its spatial resolution does not differ much from that of the multihole collimator. With relatively good tumor contrast and spatial resolution, and increased sensitivity, the proposed wire mesh collimator gives a significant improvement in wire mesh collimator design for the breast cancer imaging process. The proposed configuration also reduces the weight to 44.09% of the total multihole collimator weight.

  15. Optimal trajectories for hypersonic launch vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas

    1994-01-01

    In this paper, we derive a near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. Because liquid hydrogen fueled hypersonic aircraft are volume sensitive, as well as weight sensitive, the cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize gross take-off weight for a given payload mass and volume in orbit.

  16. Emergency Coagulation Assessment During Treatment With Direct Oral Anticoagulants: Limitations and Solutions.

    PubMed

    Ebner, Matthias; Birschmann, Ingvild; Peter, Andreas; Härtig, Florian; Spencer, Charlotte; Kuhn, Joachim; Blumenstock, Gunnar; Zuern, Christine S; Ziemann, Ulf; Poli, Sven

    2017-09-01

    In patients receiving direct oral anticoagulants (DOACs), emergency treatment like thrombolysis for acute ischemic stroke is complicated by insufficient availability of DOAC-specific coagulation tests. Conflicting recommendations have been published concerning the use of global coagulation assays for ruling out relevant DOAC-induced anticoagulation. Four hundred eighty-one samples from 96 DOAC-treated patients were tested using prothrombin time (PT), activated partial thromboplastin time (aPTT) and thrombin time (TT), DOAC-specific assays (anti-Xa activity, diluted TT), and liquid chromatography-tandem mass spectrometry. Sensitivity and specificity of test results to identify DOAC concentrations <30 ng/mL were calculated. Receiver operating characteristic analyses were used to define reagent-specific cutoff values. Normal PT and aPTT provide insufficient specificity to safely identify DOAC concentrations <30 ng/mL (rivaroxaban/PT: specificity, 77%/sensitivity, 94%; apixaban/PT: specificity, 13%/sensitivity, 94%; dabigatran/aPTT: specificity, 49%/sensitivity, 91%). Normal TT was 100% specific for dabigatran, but sensitivity was 26%. In contrast, reagent-specific PT and aPTT cutoffs provided >95% specificity, and a specific TT cutoff enhanced sensitivity for dabigatran to 84%. For apixaban, no cutoffs could be established. Even if highly DOAC-reactive reagents are used, normal results of global coagulation tests are not suited to guide emergency treatment: whereas normal PT and aPTT lack specificity to rule out DOAC-induced anticoagulation, the low sensitivity of normal TT excludes the majority of eligible patients from treatment. However, reagent-specific cutoffs for global coagulation tests ensure high specificity and optimize sensitivity for safe emergency decision making in rivaroxaban- and dabigatran-treated patients. URL: http://www.clinicaltrials.gov. Unique identifiers: NCT02371044 and NCT02371070. © 2017 American Heart Association, Inc.
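    Reagent-specific cutoffs of this kind are typically chosen from a receiver operating characteristic analysis, e.g. by maximizing Youden's J = sensitivity + specificity - 1. A minimal sketch with hypothetical clotting-time readings (not the study's data):

```python
def best_cutoff(values, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.

    `labels` are True for samples that should test positive (here, relevant
    anticoagulation); a sample is called positive when its measured clotting
    time meets or exceeds the candidate cutoff.
    """
    best = (None, -1.0)
    for cut in sorted(set(values)):
        tp = sum(1 for v, pos in zip(values, labels) if pos and v >= cut)
        fn = sum(1 for v, pos in zip(values, labels) if pos and v < cut)
        tn = sum(1 for v, pos in zip(values, labels) if not pos and v < cut)
        fp = sum(1 for v, pos in zip(values, labels) if not pos and v >= cut)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best[1]:
            best = (cut, j)
    return best

# Hypothetical aPTT-like readings (s): anticoagulated samples read higher.
values = [28, 30, 31, 33, 36, 40, 44, 50]
labels = [False, False, False, False, True, True, True, True]
print(best_cutoff(values, labels))
```

In practice the cutoff is chosen per reagent and then shifted toward specificity or sensitivity depending on which misclassification is more dangerous for the clinical decision.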

  17. Polarimetry noise in fiber-based optical coherence tomography instrumentation

    PubMed Central

    Zhang, Ellen Ziyi; Vakoc, Benjamin J.

    2011-01-01

    High noise levels in fiber-based polarization-sensitive optical coherence tomography (PS-OCT) have broadly limited its clinical utility. In this study, we investigate the contribution of polarization mode dispersion (PMD) to the polarimetry noise. We develop numerical models of the PS-OCT system including PMD and validate these models with empirical data. Using these models, we provide a framework for predicting noise levels, for processing signals to reduce noise, and for designing an optimized system. PMID:21935044

  18. Determination of the optimal case definition for the diagnosis of end-stage renal disease from administrative claims data in Manitoba, Canada.

    PubMed

    Komenda, Paul; Yu, Nancy; Leung, Stella; Bernstein, Keevin; Blanchard, James; Sood, Manish; Rigatto, Claudio; Tangri, Navdeep

    2015-01-01

    End-stage renal disease (ESRD) is a major public health problem with increasing prevalence and costs. An understanding of the long-term trends in dialysis rates and outcomes can help inform health policy. We determined the optimal case definition for the diagnosis of ESRD using administrative claims data in the province of Manitoba over a 7-year period. We determined the sensitivity, specificity, predictive value and overall accuracy of 4 administrative case definitions for the diagnosis of ESRD requiring chronic dialysis over different time horizons from Jan. 1, 2004, to Mar. 31, 2011. The Manitoba Renal Program Database served as the gold standard for confirming dialysis status. During the study period, 2562 patients were registered as recipients of chronic dialysis in the Manitoba Renal Program Database. Over a 1-year period (2010), the optimal case definition was any 2 claims for outpatient dialysis, and it was 74.6% sensitive (95% confidence interval [CI] 72.3%-76.9%) and 94.4% specific (95% CI 93.6%-95.2%) for the diagnosis of ESRD. In contrast, a case definition of at least 2 claims for dialysis treatment more than 90 days apart was 64.8% sensitive (95% CI 62.2%-67.3%) and 97.1% specific (95% CI 96.5%-97.7%). Extending the period to 5 years greatly improved sensitivity for all case definitions, with minimal change to specificity; for example, for the optimal case definition of any 2 claims for dialysis treatment, sensitivity increased to 86.0% (95% CI 84.7%-87.4%) at 5 years. Accurate case definitions for the diagnosis of ESRD requiring dialysis can be derived from administrative claims data. The optimal definition required any 2 claims for outpatient dialysis. Extending the claims period to 5 years greatly improved sensitivity with minimal effects on specificity for all case definitions.

  19. Comparison of two laryngeal tissue fiber constitutive models

    NASA Astrophysics Data System (ADS)

    Hunter, Eric J.; Palaparthi, Anil Kumar Reddy; Siegmund, Thomas; Chan, Roger W.

    2014-02-01

    Biological tissues are complex time-dependent materials, and the best choice of the appropriate time-dependent constitutive description is not evident. This report reviews two constitutive models (a modified Kelvin model and a two-network Ogden-Boyce model) in the characterization of the passive stress-strain properties of laryngeal tissue under tensile deformation. The two models are compared, as are the automated methods for parameterization of tissue stress-strain data (a brute-force vs. a common optimization method). Sensitivities (error curves) of the parameters from both models and the optimized parameter sets are calculated and contrasted by optimizing to the same tissue stress-strain data. Both models adequately characterized empirical stress-strain datasets and could be used to recreate a good likeness of the data. Nevertheless, parameters in both models were sensitive to measurement errors or uncertainties in stress-strain, which would greatly hinder the confidence in those parameters. The modified Kelvin model emerges as a potentially better choice for phonation models which use a tissue model as one component, or for general comparisons of the mechanical properties of one type of tissue to another (e.g., axial stress nonlinearity). In contrast, the Ogden-Boyce model would be more appropriate to provide a basic understanding of the tissue's mechanical response with better insights into the tissue's physical characteristics in terms of standard engineering metrics such as shear modulus and viscosity.

  20. Incorporating BIRD-based homodecoupling in the dual-optimized, inverted 1JCC 1,n-ADEQUATE experiment.

    PubMed

    Saurí, Josep; Bermel, Wolfgang; Parella, Teodor; Thomas Williamson, R; Martin, Gary E

    2018-03-13

    1,n-ADEQUATE is a powerful NMR technique for elucidating the structure of proton-deficient small molecules that can help establish the carbon skeleton of a given molecule by providing long-range three-bond 13C-13C correlations. Care must be taken when using the experiment to identify the simultaneous presence of one-bond 13C-13C correlations that are not filtered out, unlike the HMBC experiment, which has a low-pass J-filter to remove 1JCH responses. Dual-optimized, inverted 1JCC 1,n-ADEQUATE is an improved variant of the experiment that affords broadband inversion of direct responses, obviating the need to take additional steps to identify these correlations. Even though ADEQUATE experiments can now be acquired in a reasonable amount of experimental time if a cryogenic probe is available, low sensitivity is still the main impediment limiting the application of this elegant experiment. Here, we wish to report a further refinement that incorporates real-time bilinear rotation decoupling (BIRD) based homodecoupling methodology into the dual-optimized, inverted 1JCC 1,n-ADEQUATE pulse sequence. Improved sensitivity and resolution are achieved by collapsing homonuclear proton-proton couplings from the observed multiplets for most spin systems. The application of the method is illustrated with several model compounds. Copyright © 2018 John Wiley & Sons, Ltd.

  1. Selection criteria for wear resistant powder coatings under extreme erosive wear conditions

    NASA Astrophysics Data System (ADS)

    Kulu, P.; Pihl, T.

    2002-12-01

    Wear-resistant thermal spray coatings for sliding wear, such as carbide- and oxide-based coatings, are hard but brittle, which makes them useless under impact loading conditions and sensitive to fatigue. Under extreme conditions of erosive wear (impact loading, high hardness of abrasives, and high velocity of abradant particles), composite coatings ensure an optimal combination of hardness and toughness. The article describes tungsten carbide-cobalt (WC-Co) systems and self-fluxing alloys containing tungsten carbide based hardmetal particles [NiCrSiB-(WC-Co)] deposited by the detonation gun, continuous detonation spraying, and spray fusion processes. Different powder compositions and processes were studied, and the effects of the coating structure and wear parameters on the wear resistance of coatings are evaluated. The dependence of the wear resistance of sprayed and fused coatings on their hardness is discussed, and hardness criteria for coating selection are proposed. The so-called “double cemented” structure of WC-Co based hardmetal or metal matrix composite coatings, as compared with a simple cobalt matrix containing particles of WC, was found optimal. Structural criteria for coating selection are provided. To assist the end user in selecting an optimal deposition method and materials, coating selection diagrams of wear resistance versus hardness are given. This paper also discusses the cost-effectiveness of coatings in application areas that are more sensitive to cost, and composite coatings based on recycled materials are offered.

  2. Efficient amplitude-modulated pulses for triple- to single-quantum coherence conversion in MQMAS NMR.

    PubMed

    Colaux, Henri; Dawson, Daniel M; Ashbrook, Sharon E

    2014-08-07

    The conversion between multiple- and single-quantum coherences is integral to many nuclear magnetic resonance (NMR) experiments of quadrupolar nuclei. This conversion is relatively inefficient when effected by a single pulse, and many composite pulse schemes have been developed to improve this efficiency. To provide the maximum improvement, such schemes typically require time-consuming experimental optimization. Here, we demonstrate an approach for generating amplitude-modulated pulses to enhance the efficiency of the triple- to single-quantum conversion. The optimization is performed using the SIMPSON and MATLAB packages and results in efficient pulses that can be used without experimental reoptimisation. Most significant signal enhancements are obtained when good estimates of the inherent radio-frequency nutation rate and the magnitude of the quadrupolar coupling are used as input to the optimization, but the pulses appear robust to reasonable variations in either parameter, producing significant enhancements compared to a single-pulse conversion, and also comparable or improved efficiency over other commonly used approaches. In all cases, the ease of implementation of our method is advantageous, particularly for cases with low sensitivity, where the improvement is most needed (e.g., low gyromagnetic ratio or high quadrupolar coupling). Our approach offers the potential to routinely improve the sensitivity of high-resolution NMR spectra of nuclei and systems that would, perhaps, otherwise be deemed "too challenging".

  3. Efficient Amplitude-Modulated Pulses for Triple- to Single-Quantum Coherence Conversion in MQMAS NMR

    PubMed Central

    2014-01-01

    The conversion between multiple- and single-quantum coherences is integral to many nuclear magnetic resonance (NMR) experiments of quadrupolar nuclei. This conversion is relatively inefficient when effected by a single pulse, and many composite pulse schemes have been developed to improve this efficiency. To provide the maximum improvement, such schemes typically require time-consuming experimental optimization. Here, we demonstrate an approach for generating amplitude-modulated pulses to enhance the efficiency of the triple- to single-quantum conversion. The optimization is performed using the SIMPSON and MATLAB packages and results in efficient pulses that can be used without experimental reoptimisation. Most significant signal enhancements are obtained when good estimates of the inherent radio-frequency nutation rate and the magnitude of the quadrupolar coupling are used as input to the optimization, but the pulses appear robust to reasonable variations in either parameter, producing significant enhancements compared to a single-pulse conversion, and also comparable or improved efficiency over other commonly used approaches. In all cases, the ease of implementation of our method is advantageous, particularly for cases with low sensitivity, where the improvement is most needed (e.g., low gyromagnetic ratio or high quadrupolar coupling). Our approach offers the potential to routinely improve the sensitivity of high-resolution NMR spectra of nuclei and systems that would, perhaps, otherwise be deemed “too challenging”. PMID:25047226

  4. Cardiac surgery-associated acute kidney injury

    PubMed Central

    Ortega-Loubon, Christian; Fernández-Molina, Manuel; Carrascal-Hinojal, Yolanda; Fulquet-Carreras, Enrique

    2016-01-01

    Cardiac surgery-associated acute kidney injury (CSA-AKI) is a well-recognized complication associated with higher morbidity and mortality after cardiac surgery. In its most severe form, it increases the odds of operative mortality 3–8-fold and increases the length of stay in the Intensive Care Unit and hospital and the costs of care. Early diagnosis is critical for optimal treatment of this complication. Along with the identification and correction of preoperative risk factors, the use of prophylactic measures during and after surgery to optimize renal function is essential to improve the postoperative morbidity and mortality of these patients. Cardiopulmonary bypass produces an increase in tubular damage markers. Their measurement may be the most sensitive means of early detection of AKI because serum creatinine changes occur 48 h to 7 days after the original insult. Tissue inhibitor of metalloproteinase-2 and insulin-like growth factor-binding protein 7 are the most promising as early diagnostic tools. However, the ideal noninvasive, specific, sensitive, reproducible biomarker for the detection of AKI within 24 h has still not been found. This article provides a review of the different perspectives of CSA-AKI, including pathogenesis, risk factors, diagnosis, biomarkers, classification, postoperative management, and treatment. We searched the electronic databases MEDLINE, PubMed, and EMBASE using relevant search terms covering these perspectives in order to provide an exhaustive review. PMID:27716701

  5. Comparing efficacy of reduced-toxicity allogeneic hematopoietic cell transplantation with conventional chemo-(immuno) therapy in patients with relapsed or refractory CLL: a Markov decision analysis.

    PubMed

    Kharfan-Dabaja, M A; Pidala, J; Kumar, A; Terasawa, T; Djulbegovic, B

    2012-09-01

    Despite therapeutic advances, relapsed/refractory CLL, particularly after fludarabine-based regimens, remains a major challenge for which optimal therapy is undefined. No randomized comparative data exist to suggest the superiority of reduced-toxicity allogeneic hematopoietic cell transplantation (RT-allo-HCT) over conventional chemo-(immuno)therapy (CCIT). Using estimates from a systematic review and meta-analysis of the available published evidence, we constructed a Markov decision model to examine these competing modalities. Cohort analysis demonstrated a superior outcome for RT-allo-HCT, with a 10-month overall life expectancy (and 6-month quality-adjusted life expectancy (QALE)) advantage over CCIT. Although the model was sensitive to changes in base-case assumptions and transition probabilities, RT-allo-HCT provided superior overall life expectancy through the range of values supported by the meta-analysis. QALE was superior for RT-allo-HCT compared with CCIT. This conclusion was sensitive to changes in the anticipated state utility associated with the post-allogeneic HCT state; however, RT-allo-HCT remained the optimal strategy for values supported by the existing literature. This analysis provides a quantitative comparison of outcomes between RT-allo-HCT and CCIT for relapsed/refractory CLL in the absence of randomized comparative trials. Confirmation of these findings requires a prospective randomized trial comparing the most effective RT-allo-HCT and CCIT regimens for relapsed/refractory CLL.
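    A Markov decision model of this kind propagates a patient cohort through health states cycle by cycle, accumulating survival time and quality-adjusted survival time. A minimal cohort sketch with illustrative transition probabilities and utilities, not the values estimated in this study:

```python
import numpy as np

# Minimal Markov cohort sketch (monthly cycles) with illustrative, not
# study-derived, transition probabilities and state utilities.
transition = np.array([
    [0.96, 0.03, 0.01],   # from remission: stay, relapse, die
    [0.00, 0.92, 0.08],   # from relapse: stay, die
    [0.00, 0.00, 1.00],   # dead is absorbing
])
utility = np.array([0.8, 0.5, 0.0])  # quality weights per state

def life_expectancy(start, cycles=600):
    """Return (life expectancy, QALE) in months for a starting distribution."""
    dist = np.asarray(start, dtype=float)
    le = qale = 0.0
    for _ in range(cycles):
        le += dist[:2].sum()          # fraction of cohort alive this cycle
        qale += dist @ utility        # quality-adjusted months accrued
        dist = dist @ transition      # advance the cohort one cycle
    return le, qale

le, qale = life_expectancy([1.0, 0.0, 0.0])
print(round(le, 1), round(qale, 1))
```

Comparing two strategies then amounts to running the same trace with each strategy's transition probabilities and utilities and differencing the (QA)LE totals, exactly the quantity reported above as a 10-month LE / 6-month QALE advantage.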

  6. Label-free detection of biomolecules with Ta2O5-based field effect devices

    NASA Astrophysics Data System (ADS)

    Branquinho, Rita Maria Mourao Salazar

    Field-effect-based devices (FEDs) are becoming a basic structural element in a new generation of micro biosensors. Their numerous advantages, such as small size, label-free response and versatility, together with the possibility of on-chip integration of biosensor arrays with a future prospect of low-cost mass production, make their development highly desirable. The present thesis focuses on the study and optimization of tantalum pentoxide (Ta2O5) thin films deposited by rf magnetron sputtering at room temperature, and their application as the sensitive layer in biosensors based on field effect devices (BioFEDs). As such, the influence of several deposition parameters, post-processing annealing temperature and surface plasma treatment on the films' properties was investigated. Electrolyte-insulator-semiconductor (EIS) field-effect-based sensors comprising the optimized Ta2O5 sensitive layer were applied to the development of BioFEDs. Enzyme-functionalized sensors (EnFEDs) were produced for penicillin detection. These sensors were also applied to the label-free detection of DNA and the monitoring of its amplification via polymerase chain reaction (PCR), real-time PCR (RT-PCR) and loop-mediated isothermal amplification (LAMP). Ion-sensitive field effect transistors (ISFETs) based on semiconductor oxides comprising the optimized Ta2O5 sensitive layer were also fabricated. EIS sensors comprising Ta2O5 films produced with optimized conditions demonstrated near-Nernstian pH sensitivity, 58+/-0.3 mV/pH. These sensors were successfully applied to the label-free detection of penicillin and DNA. Penicillinase-functionalized sensors showed a 29+/-7 mV/mM sensitivity towards penicillin detection up to a 4 mM penicillin concentration. DNA detection was achieved with 30 mV/μM sensitivity, and DNA amplification monitoring with these sensors showed results comparable to those obtained with standard fluorescence-based methods. 
Semiconductor oxide-based ISFETs with a Ta2O5 sensitive layer were also produced. Finally, the high quality and sensitivity demonstrated by Ta2O5 thin films produced at low temperature by rf magnetron sputtering allow for their application as sensitive layers in field-effect sensors.

  7. Simplified and Efficient Quantification of Low-abundance Proteins at Very High Multiplex via Targeted Mass Spectrometry*

    PubMed Central

    Burgess, Michael W.; Keshishian, Hasmik; Mani, D. R.; Gillette, Michael A.; Carr, Steven A.

    2014-01-01

Liquid chromatography–multiple reaction monitoring mass spectrometry (LC-MRM-MS) of plasma that has been depleted of abundant proteins and fractionated at the peptide level into six to eight fractions is a proven method for quantifying proteins present at low nanogram-per-milliliter levels. A drawback of fraction-MRM is the increased analysis time due to the generation of multiple fractions per biological sample. We now report that the use of heated, long, fused-silica columns (>30 cm) packed with 1.9 μm packing material can reduce or eliminate the need for fractionation prior to LC-MRM-MS without a significant loss of sensitivity or precision relative to fraction-MRM. We empirically determined the optimal column length, temperature, gradient duration, and sample load for such assays and used these conditions to study detection sensitivity and assay precision. In addition to increased peak capacity, longer columns packed with smaller beads tolerated a 4- to 6-fold increase in analyte load without a loss of robustness or reproducibility. The longer columns also provided a 4-fold improvement in median limit-of-quantitation values with increased assay precision relative to the standard 12 cm columns packed with 3 μm material. Overall, the optimized chromatography provided an approximately 3-fold increase in analysis throughput with excellent robustness and less than a 2-fold reduction in quantitative sensitivity relative to fraction-MRM. The value of the system for increased multiplexing was demonstrated by the ability to configure an 800-plex MRM-MS assay, run in a single analysis, comprising 2400 transitions with retention-time scheduling to monitor 400 unlabeled and heavy-labeled peptide pairs. PMID:24522978

  8. Convergence Estimates for Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Arian, Eyal

    1997-01-01

A quantitative analysis of coupling between systems of equations is introduced. This analysis is then applied to problems in multidisciplinary analysis, sensitivity, and optimization. For the sensitivity and optimization problems, both multidisciplinary and single-discipline feasibility schemes are considered. In all these cases a "convergence factor" is estimated in terms of the Jacobians and Hessians of the system, so it can also be approximated by existing disciplinary analysis and optimization codes. The convergence factor is identified with the measure of the "coupling" between the disciplines in the system. Applications to algorithm development are discussed. Demonstrations of the convergence estimates and numerical results are given for a system composed of two non-linear algebraic equations, and for a system composed of two PDEs modeling aeroelasticity.
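
    The "convergence factor" idea can be sketched numerically. The toy coupled system and block Gauss-Seidel sweep below are illustrative, not taken from the paper: a scalar analogue of the convergence factor is built from the cross-Jacobian terms, and the empirical error decay of the sweep confirms the estimate.

```python
import numpy as np

# Toy coupled system (illustrative):
#   f1(x, y) = x - 0.3*cos(y) = 0
#   f2(x, y) = y - 0.4*sin(x) = 0
# Block Gauss-Seidel: solve f1 for x with y fixed, then f2 for y.
# A scalar analogue of the convergence factor uses the cross-Jacobians:
#   rho ~= |(df1/dy / df1/dx) * (df2/dx / df2/dy)|  at the solution.

# Locate the solution by iterating the sweep to convergence
x, y = 0.0, 0.0
for _ in range(50):
    x = 0.3 * np.cos(y)
    y = 0.4 * np.sin(x)

# df1/dx = 1, df1/dy = 0.3*sin(y); df2/dx = -0.4*cos(x), df2/dy = 1
rho = abs(0.3 * np.sin(y) * -0.4 * np.cos(x))
print(f"solution x={x:.6f}, y={y:.6f}; estimated convergence factor {rho:.4f}")

# Empirical check: the sweep's error shrinks by roughly rho per iteration
errs = []
xi, yi = 1.0, 1.0
for _ in range(6):
    xi = 0.3 * np.cos(yi)
    yi = 0.4 * np.sin(xi)
    errs.append(abs(xi - x) + abs(yi - y))
print("errors:", [f"{e:.2e}" for e in errs])
```

    Because the cross-derivatives are small here, rho is far below one and the coupled iteration converges rapidly; strong coupling would push rho toward one and slow the multidisciplinary iteration, which is the quantity the paper estimates.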

  9. Highly Sensitive Refractive Index Sensors with Plasmonic Nanoantennas-Utilization of Optimal Spectral Detuning of Fano Resonances.

    PubMed

    Mesch, Martin; Weiss, Thomas; Schäferling, Martin; Hentschel, Mario; Hegde, Ravi S; Giessen, Harald

    2018-05-25

    We analyze and optimize the performance of coupled plasmonic nanoantennas for refractive index sensing. The investigated structure supports a sub- and super-radiant mode that originates from the weak coupling of a dipolar and quadrupolar mode, resulting in a Fano-type spectral line shape. In our study, we vary the near-field coupling of the two modes and particularly examine the influence of the spectral detuning between them on the sensing performance. Surprisingly, the case of matched resonance frequencies does not provide the best sensor. Instead, we find that the right amount of coupling strength and spectral detuning allows for achieving the ideal combination of narrow line width and sufficient excitation strength of the subradiant mode, and therefore results in optimized sensor performance. Our findings are confirmed by experimental results and first-order perturbation theory. The latter is based on the resonant state expansion and provides direct access to resonance frequency shifts and line width changes as well as the excitation strength of the modes. Based on these parameters, we define a figure of merit that can be easily calculated for different sensing geometries and agrees well with the numerical and experimental results.

  10. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    PubMed

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social-desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies, based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
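
    The optimal-allocation idea can be illustrated with a classical Neyman-type rule under stratified sampling. The stratum sizes and standard deviations below are hypothetical, and the sketch ignores the finite-population correction and the IST scrambling details:

```python
import numpy as np

# Hypothetical strata: population sizes N_h, per-stratum standard deviations
# S_h of the (scrambled) responses, and a total sample budget n.
N = np.array([5000, 3000, 2000])
S = np.array([12.0, 20.0, 8.0])
n = 600

# Neyman-type allocation: n_h proportional to N_h * S_h minimizes the
# variance of the stratified mean estimator for a fixed total sample size.
weights = N * S
n_h = np.round(n * weights / weights.sum()).astype(int)

# Variance of the stratified mean estimator (no fpc, for simplicity)
W = N / N.sum()
var_opt = np.sum((W**2) * (S**2) / n_h)
var_prop = np.sum((W**2) * (S**2) / (n * W))  # proportional allocation, for comparison
print("allocation:", n_h, f"var_opt={var_opt:.4f}", f"var_prop={var_prop:.4f}")
```

    Strata with larger spread get proportionally more sample; the resulting variance is never worse than proportional allocation, which is the efficiency gain the article quantifies for IST surveys.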

  11. Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.

    PubMed

    Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W

    2016-01-01

To determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and to explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality-adjusted life-years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the vaccine cost becomes approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
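
    A minimal sketch of the optimization described above, using a deterministic SIR model with up-front vaccination and a grid search over coverage. All parameter values and costs are illustrative assumptions, not the paper's inputs:

```python
import numpy as np

def total_cost(p, beta=0.3, gamma=0.1, c_vax=10.0, c_disease=500.0,
               N=1.0, days=1000, dt=0.1):
    """Vaccinate a fraction p up front, run a deterministic SIR epidemic
    by Euler stepping, and return vaccine cost + disease cost."""
    S = max(N * (1.0 - p) - 1e-4, 0.0)   # vaccinated individuals start immune
    I, cum_inf = 1e-4, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N * dt
        S -= new_inf
        I += new_inf - gamma * I * dt
        cum_inf += new_inf
    return c_vax * p * N + c_disease * cum_inf

# Grid search for the cost-minimizing coverage (R0 = beta/gamma = 3 here,
# so the herd-immunity threshold is about 1 - 1/3 ~ 0.67)
coverages = np.linspace(0.0, 1.0, 101)
costs = np.array([total_cost(p) for p in coverages])
p_opt = coverages[costs.argmin()]
print(f"optimal coverage ~ {p_opt:.2f}, minimum cost {costs.min():.2f}")
```

    The cost curve reproduces the asymmetry the abstract reports: below the optimum the steep disease cost dominates, while above it only the cheaper marginal vaccine cost accrues, so "missing right" is the safer error.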

  12. Fuel optimization for low-thrust Earth-Moon transfer via indirect optimal control

    NASA Astrophysics Data System (ADS)

    Pérez-Palau, Daniel; Epenoy, Richard

    2018-02-01

The problem of designing low-energy transfers between the Earth and the Moon has recently attracted major interest from the scientific community. In this paper, an indirect optimal control approach is used to determine minimum-fuel low-thrust transfers between a low Earth orbit and a lunar orbit in the Sun-Earth-Moon Bicircular Restricted Four-Body Problem. First, the optimal control problem is formulated and its necessary optimality conditions are derived from Pontryagin's Maximum Principle. Then, two different solution methods are proposed to overcome the numerical difficulties arising from the high sensitivity of the problem's state and costate equations. The first one consists in the use of continuation techniques. The second one is based on a massive exploration of the set of unknown variables appearing in the optimality conditions. The dimension of the search space is reduced by considering adapted variables, leading to a reduction of the computational time. The trajectories found are classified in several families according to their shape, transfer duration and fuel expenditure. Finally, an analysis based on the dynamical structure provided by the invariant manifolds of the two underlying Circular Restricted Three-Body Problems (Earth-Moon and Sun-Earth) is presented, leading to a physical interpretation of the different families of trajectories.

  13. Setting the magic angle for fast magic-angle spinning probes.

    PubMed

    Penzel, Susanne; Smith, Albert A; Ernst, Matthias; Meier, Beat H

    2018-06-15

Fast magic-angle spinning, coupled with 1H detection, is a powerful method to improve spectral resolution and signal-to-noise in solid-state NMR spectra. Commercial probes now provide spinning frequencies in excess of 100 kHz. One then has sufficient resolution in the 1H dimension to directly detect protons, which have a gyromagnetic ratio approximately four times larger than 13C spins. However, the gains in sensitivity can quickly be lost if the rotation angle is not set precisely. The most common method of magic-angle calibration is to optimize the number of rotary echoes, or sideband intensity, observed on a sample of KBr. However, this typically uses relatively low spinning frequencies, where the spinning of fast-MAS probes is often unstable, and detection on the 13C channel, for which fast-MAS probes are typically not optimized. Therefore, we compare the KBr-based optimization of the magic angle with two alternative approaches: optimization of the splitting observed on the carbonyl of 13C-labeled glycine ethyl ester due to the Cα-C' J-coupling, or optimization of the H-N J-coupling spin echo in the protein sample itself. The latter method has the particular advantage that no separate sample is necessary for the magic-angle optimization. Copyright © 2018. Published by Elsevier Inc.

  14. Objective comparison of lesion detectability in low and medium-energy collimator iodine-123 mIBG images using a channelized Hotelling observer

    NASA Astrophysics Data System (ADS)

    Gregory, Rebecca A.; Murray, Iain; Gear, Jonathan; Aldridge, Matthew D.; Levine, Daniel; Fowkes, Lucy; Waddington, Wendy A.; Chua, Sue; Flux, Glenn

    2017-01-01

Iodine-123 mIBG imaging is widely regarded as a gold standard for diagnostic studies of neuroblastoma and adult neuroendocrine cancer, although the optimal collimator for tumour imaging remains undetermined. Low-energy (LE) high-resolution (HR) collimators provide superior spatial resolution. However, due to septal penetration of high-energy photons, these provide poorer contrast than medium-energy (ME) general-purpose (GP) collimators. LEGP collimators improve count sensitivity. The aim of this study was to objectively compare the lesion detection efficiency of each collimator to determine the optimal collimator for diagnostic imaging. The septal penetration and sensitivity of each collimator were assessed. Planar images of the patient abdomen were simulated with static scans of a Liqui-Phil™ anthropomorphic phantom with lesion-shaped inserts, acquired with LE and ME collimators on 3 different manufacturers' gamma camera systems (Skylight (Philips), Intevo (Siemens) and Discovery (GE)). Two hundred normal and 200 single-lesion abnormal images were created for each collimator. A channelized Hotelling observer (CHO) was developed and validated to score the images for the likelihood of an abnormality. The areas under the receiver operating characteristic (ROC) curves, Az, computed from the scores were used to quantify lesion detectability. The CHO ROC curves for the LEHR collimators were inferior to the GP curves for all cameras. The LEHR collimators resulted in statistically significantly smaller Az values (p < 0.05), averaging 0.891 ± 0.004, than the MEGP collimators, which averaged 0.933 ± 0.004. In conclusion, the reduced background provided by MEGP collimators improved 123I mIBG image lesion detectability over LEHR collimators, which provided better spatial resolution.
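
    A CHO of the kind described can be sketched as follows. The toy images, lesion model, and radial channels below are illustrative stand-ins for the phantom data and the paper's channel set; the template is the Hotelling template in channel space, and Az is computed from the scores via the Mann-Whitney statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 32x32 images: flat background plus Gaussian noise; "abnormal" images
# add a small Gaussian blob (the lesion).
npix = 32
yy, xx = np.mgrid[:npix, :npix]
r = np.hypot(xx - npix / 2, yy - npix / 2)
lesion = 0.8 * np.exp(-r**2 / (2 * 2.0**2))

def make_images(n, with_lesion):
    imgs = rng.normal(10.0, 1.0, size=(n, npix, npix))
    if with_lesion:
        imgs += lesion
    return imgs.reshape(n, -1)

# Simple radial frequency-band channels (illustrative choice)
channels = np.stack([np.exp(-(r - c)**2 / (2 * 1.5**2)).ravel()
                     for c in (0, 4, 8, 12)])

normal = make_images(200, False) @ channels.T     # (200, 4) channel outputs
abnormal = make_images(200, True) @ channels.T

# Hotelling template in channel space: w = S^-1 (mean difference)
S = 0.5 * (np.cov(normal.T) + np.cov(abnormal.T))
w = np.linalg.solve(S, abnormal.mean(0) - normal.mean(0))

scores_n, scores_a = normal @ w, abnormal @ w
# Az as the Mann-Whitney statistic (area under the ROC curve)
az = (scores_a[:, None] > scores_n[None, :]).mean()
print(f"Az = {az:.3f}")
```

    Channelization compresses each image to a handful of numbers, so the Hotelling covariance stays well conditioned even with a few hundred training images per class, which is why the CHO is practical for studies like this one.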

  15. Photonic Jets for Strained-Layer Superlattice Infrared Photodetector Enhancement

    DTIC Science & Technology

    2014-06-25

top of a 40 µm photodetector fixed into position using silicone rubber. As illustrated in Fig. 2, the spectral response was characterized before and ... midwave-infrared spectral band (3-5 µm). We optimized the design of these structures and experimentally demonstrated the increased sensitivity compared to ...

  16. X-ray Polarimetry with a Micro-Pattern Gas Detector

    NASA Technical Reports Server (NTRS)

    Hill, Joe

    2005-01-01

    Topics covered include: Science drivers for X-ray polarimetry; Previous X-ray polarimetry designs; The photoelectric effect and imaging tracks; Micro-pattern gas polarimeter design concept. Further work includes: Verify results against simulator; Optimize pressure and characterize different gases for a given energy band; Optimize voltages for resolution and sensitivity; Test meshes with 80 micron pitch; Characterize ASIC operation; and Quantify quantum efficiency for optimum polarization sensitivity.

  17. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
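
    The coupling of a quadratic response surface with Monte Carlo simulation can be sketched as follows; the surface coefficients, input distributions, and variable meanings are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical quadratic response surface for regression rate r(x1, x2) with
# a two-factor interaction (coded units; coefficients are illustrative):
#   r = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
b = dict(b0=1.2, b1=0.30, b2=0.15, b11=-0.20, b22=-0.10, b12=0.05)

def response(x1, x2):
    return (b["b0"] + b["b1"] * x1 + b["b2"] * x2
            + b["b11"] * x1**2 + b["b22"] * x2**2 + b["b12"] * x1 * x2)

# Monte Carlo simulation: propagate input uncertainty (normally distributed
# mixture/operating variables, hypothetical spreads) through the surface to
# obtain a dispersed regression-rate population at a nominal design point.
x1 = rng.normal(0.5, 0.05, 100_000)   # e.g. oxidizer fraction, coded units
x2 = rng.normal(0.2, 0.10, 100_000)   # e.g. operating condition, coded units
r = response(x1, x2)

print(f"mean {r.mean():.4f}, std {r.std():.4f}, "
      f"95% interval [{np.percentile(r, 2.5):.4f}, {np.percentile(r, 97.5):.4f}]")
```

    Repeating this propagation across a grid of nominal design points yields the dispersed regression-rate maps (and their uncertainty bounds) that the study interrogates during optimization.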

  18. Large scale nonlinear programming for the optimization of spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    Arrieta-Camacho, Juan Jose

Despite the availability of high-fidelity mathematical models, the computation of accurate optimal spacecraft trajectories has never been an easy task. While simplified models of spacecraft motion can provide useful estimates on energy requirements, sizing, and cost, the actual launch window and maneuver scheduling must rely on more accurate representations. We propose an alternative for the computation of optimal transfers that uses an accurate representation of the spacecraft dynamics. Like other methodologies for trajectory optimization, this alternative is able to consider all major disturbances. In contrast, it can handle equality and inequality constraints explicitly throughout the trajectory; it requires neither the derivation of costate equations nor the identification of the constrained arcs. The alternative consists of two steps: (1) discretization of the dynamic model using high-order collocation at Radau points, which displays numerical advantages, and (2) solution of the resulting Nonlinear Programming (NLP) problem using an interior-point method, which does not suffer from the performance bottleneck associated with identifying the active set, as required by sequential quadratic programming methods; in this way the methodology exploits the availability of sound numerical methods and next-generation NLP solvers. In practice the methodology is versatile; it can be applied to a variety of aerospace problems such as homing, guidance, and aircraft collision avoidance; the methodology is particularly well suited for low-thrust spacecraft trajectory optimization. Examples are presented which consider the optimization of a low-thrust orbit transfer subject to the main disturbances due to Earth's gravity field together with lunar and solar attraction. Another example considers the optimization of a multiple-asteroid rendezvous problem. In both cases, the ability of the proposed methodology to consider non-standard objective functions and constraints is illustrated. 
Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuit. The collocation scheme and nonlinear programming algorithm presented in this work, complement other existing methodologies by providing reliable and efficient numerical methods able to handle large scale, nonlinear dynamic models.

  19. A photonic crystal fiber glucose sensor filled with silver nanowires

    NASA Astrophysics Data System (ADS)

    Yang, X. C.; Lu, Y.; Wang, M. T.; Yao, J. Q.

    2016-01-01

We report a photonic crystal fiber glucose sensor filled with silver nanowires in this paper. The proposed sensor is analyzed with COMSOL Multiphysics software and demonstrated by experiments. An extremely high average spectral sensitivity of 19009.17 nm/RIU is obtained in experimental measurement, equivalent to 44.25 mg/dL of glucose in water, below the 70 mg/dL level required for efficient detection of hypoglycemia episodes. The silver nanowire diameter, which may affect the sensor's spectral sensitivity, is also discussed, and an optimal diameter range of 90-120 nm is obtained. We expect that the sensor can provide an effective platform for glucose sensing, potentially leading to further development towards minimally invasive glucose measurement.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juffmann, Thomas; Koppell, Stewart A.; Klopfer, Brannon B.

Feynman once asked physicists to build better electron microscopes to be able to watch biology at work. While electron microscopes can now provide atomic resolution, electron-beam-induced specimen damage precludes high-resolution imaging of sensitive materials, such as single proteins or polymers. Here, we use simulations to show that an electron microscope based on a multi-pass measurement protocol enables imaging of single proteins, without averaging structures over multiple images. While we demonstrate the method for particular imaging targets, the approach is broadly applicable and is expected to improve resolution and sensitivity for a range of electron microscopy imaging modalities, including, for example, scanning and spectroscopic techniques. The approach implements a quantum mechanically optimal strategy which under idealized conditions can be considered interaction-free.

  1. Computational Modelling and Optimal Control of Ebola Virus Disease with non-Linear Incidence Rate

    NASA Astrophysics Data System (ADS)

    Takaidza, I.; Makinde, O. D.; Okosun, O. K.

    2017-03-01

The 2014 Ebola outbreak in West Africa has exposed the need to connect modellers and those with relevant data as pivotal to a better understanding of how the disease spreads and to quantifying the effects of possible interventions. In this paper, we model and analyse the Ebola virus disease with a non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R0 to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time-dependent control interventions (education, vaccination, quarantine and safe handling) and provide the cost-effective combination of the interventions.
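
    An uncertainty analysis of R0 of the kind described can be sketched with simple Monte Carlo sampling and rank correlations; the reduced-form R0 expression and parameter ranges below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Illustrative uniform parameter ranges for a crude Monte Carlo screening.
# For an SIR-type model with disease-induced death, a common reduced form is
# R0 = beta / (gamma + mu): transmission over the effective removal rate.
beta = rng.uniform(0.1, 0.4, n)     # transmission rate (1/day)
gamma = rng.uniform(0.05, 0.2, n)   # recovery rate (1/day)
mu = rng.uniform(0.01, 0.1, n)      # disease-induced death rate (1/day)
r0 = beta / (gamma + mu)

def rank(a):
    # integer ranks, for a Spearman-style rank correlation
    return a.argsort().argsort().astype(float)

rhos = {name: np.corrcoef(rank(p), rank(r0))[0, 1]
        for name, p in [("beta", beta), ("gamma", gamma), ("mu", mu)]}
for name, rho in rhos.items():
    print(f"{name}: rank correlation {rho:+.3f}")
```

    The sign and magnitude of each rank correlation give the sensitivity ranking: transmission pushes R0 up, while the removal-rate parameters pull it down, with the wider-ranged parameter dominating.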

  2. Modelling of resonant MEMS magnetic field sensor with electromagnetic induction sensing

    NASA Astrophysics Data System (ADS)

    Liu, Song; Xu, Huaying; Xu, Dehui; Xiong, Bin

    2017-06-01

This paper presents an analytical model of a resonant MEMS magnetic field sensor with electromagnetic induction sensing. The resonant structure vibrates in the square extensional (SE) mode. By analyzing the vibration amplitude and quality factor of the resonant structure, the magnetic field sensitivity as a function of device structure parameters and encapsulation pressure is established. The developed analytical model has been verified by comparing calculated results with experimental results; the deviation between them is only 10.25%, which shows the feasibility of the proposed device model. The model can provide theoretical guidance for further design optimization of the sensor. Moreover, a quantitative study of the magnetic field sensitivity is conducted with respect to the structure parameters and encapsulation pressure based on the proposed model.

  3. Decentralized control of large-scale systems: Fixed modes, sensitivity and parametric robustness. Ph.D. Thesis - Universite Paul Sabatier, 1985

    NASA Technical Reports Server (NTRS)

    Tarras, A.

    1987-01-01

The problem of stabilization/pole placement under structural constraints of large-scale linear systems is discussed. The existence of a solution to this problem is expressed in terms of fixed modes. The aim is to provide a bibliographic survey of the available results concerning fixed modes (characterization, elimination, control structure selection to avoid them, control design in their absence) and to present the author's contribution to this problem, which can be summarized as: the use of the mode-sensitivity concept to detect or avoid fixed modes, the use of vibrational control to stabilize them, and the addition of parametric-robustness considerations to design an optimal decentralized robust control.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yongxi

We propose an integrated modeling framework to optimally locate wireless charging facilities along a highway corridor to provide sufficient in-motion charging. The integrated model consists of a master Infrastructure Planning Model that determines the best locations, with two integrated sub-models that explicitly capture energy consumption and charging and the interactions between electric vehicle and wireless charging technologies, the geometrics of highway corridors, speed, and auxiliary systems. The model is implemented in an illustrative case study of a highway corridor of Interstate 5 in Oregon. We found that the cost of establishing the charging lane is sensitive to, and increases with, the speed to be achieved. Through sensitivity analyses, we gain a better understanding of the extent of the impacts of highway geometric characteristics and battery capacity on the charging lane design.

  5. A rapid electrochemical monitoring platform for sensitive determination of thiamethoxam based on β-cyclodextrin-graphene composite.

    PubMed

    Zhai, XingChen; Zhang, Hua; Zhang, Min; Yang, Xin; Gu, Cheng; Zhou, GuoPeng; Zhao, HaiTian; Wang, ZhenYu; Dong, AiJun; Wang, Jing

    2017-08-01

A rapid monitoring platform for sensitive voltammetric detection of thiamethoxam residues is reported in the present study. A β-cyclodextrin-reduced graphene oxide composite was used as a reinforcing material in the electrochemical determination of thiamethoxam. Compared with bare glassy carbon electrodes, the reduction peak currents of thiamethoxam at the reduced graphene oxide/glassy carbon electrode and the β-cyclodextrin-reduced graphene oxide/glassy carbon electrode were increased by 70- and 124-fold, respectively. The experimental conditions influencing the voltammetric determination of thiamethoxam, such as the amount of β-cyclodextrin-reduced graphene oxide, solution pH, temperature, and accumulation time, were optimized. The reduction mechanism and binding affinity of this material are also discussed. Under optimal conditions, the reduction peak currents increased linearly with thiamethoxam concentration between 0.5 µM and 16 µM. The limit of detection was 0.27 µM on the basis of a signal-to-noise ratio of 3. When the proposed method was applied to brown rice in a recovery test, the recoveries were between 92.20% and 113.75%. The results were in good concordance with the high-performance liquid chromatography method. The proposed method therefore provides a promising and effective platform for sensitive and rapid determination of thiamethoxam. Environ Toxicol Chem 2017;36:1991-1997. © 2017 SETAC.
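
    The quoted limit of detection follows the common S/N = 3 convention: LOD = 3 × (baseline noise standard deviation) / (calibration slope). The calibration points and noise level below are illustrative numbers chosen only to show the arithmetic, not the paper's raw data:

```python
import numpy as np

# Hypothetical calibration data: peak current vs. thiamethoxam concentration
conc = np.array([0.5, 2.0, 4.0, 8.0, 16.0])        # µM
peak = np.array([0.42, 1.65, 3.30, 6.70, 13.4])    # µA, reduction peak currents
noise_sd = 0.075                                   # µA, baseline noise std (assumed)

# Least-squares calibration slope, then LOD at signal-to-noise ratio 3
slope, intercept = np.polyfit(conc, peak, 1)
lod = 3 * noise_sd / slope
print(f"slope = {slope:.3f} uA/uM, LOD = {lod:.2f} uM")
```

    With these illustrative inputs the rule lands near the reported 0.27 µM; the same two quantities (blank noise and calibration slope) are all that the S/N = 3 definition requires.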

  6. Determination of trace amino acids in human serum by a selective and sensitive pre-column derivatization method using HPLC-FLD-MS/MS and derivatization optimization by response surface methodology.

    PubMed

    Li, Guoliang; Cui, Yanyan; You, Jinmao; Zhao, Xianen; Sun, Zhiwei; Xia, Lian; Suo, Yourui; Wang, Xiao

    2011-04-01

Analysis of trace amino acids (AA) in physiological fluids has received increasing attention, because the analysis of these compounds can provide fundamental and important information for medical, biological, and clinical research. A more accurate method for the determination of these compounds is highly desirable and valuable. In the present study, we developed a selective and sensitive method for trace AA determination in biological samples using 2-[2-(7H-dibenzo[a,g]carbazol-7-yl)-ethoxy]ethyl chloroformate (DBCEC) as the labeling reagent with HPLC-FLD-MS/MS. Response surface methodology (RSM) was first employed to optimize the derivatization reaction between DBCEC and AA. Compared with a traditional single-factor design, RSM reduced labor, time, and reagent consumption. Complete derivatization can be achieved within 6.3 min at room temperature. In conjunction with gradient elution, baseline resolution of 20 AA, comprising acidic, neutral, and basic AA, was achieved on a reversed-phase Hypersil BDS C(18) column. This method showed excellent reproducibility and correlation coefficients, and offered detection limits of 0.19-1.17 fmol/μL. The developed method was successfully applied to determine AA in human serum. The sensitivity and prognostic value of serum AA for liver diseases have also been discussed.

  7. Nano-biosensor for highly sensitive detection of HER2 positive breast cancer.

    PubMed

    Salahandish, Razieh; Ghaffarinejad, Ali; Naghib, Seyed Morteza; Majidzadeh-A, Keivan; Zargartalebi, Hossein; Sanati-Nezhad, Amir

    2018-05-25

Nanocomposite materials have provided a wide range of conductivity, sensitivity, selectivity and linear response for electrochemical biosensors. However, the detection of rare cells at the single-cell level requires a new class of nanocomposite-coated electrodes with exceptional sensitivity and specificity. We recently developed a construct of gold nanoparticle-grafted functionalized graphene and nanostructured polyaniline (PANI) for high-performance biosensing with a very wide linear response and selective performance. Further, replacing the expensive gold nanoparticles with low-cost silver nanoparticles, as well as optimizing the nanocomposite synthesis and functionalization protocols on the electrode surface, enabled us in this work to develop ultrasensitive nanocomposites for label-free detection of breast cancer cells. The sensor presented a fast response time of 30 min within a dynamic range of 10 to 5 × 10^6 cells mL^-1 and a detection limit of 2 cells mL^-1 for the detection of SK-BR3 breast cancer cells. The nano-biosensor, for the first time, demonstrated a high efficiency of >90% for the label-free detection of cancer cells in whole blood samples without any need for sample preparation or cell staining. The results demonstrated that the optimized nanocomposite developed in this work is a promising nanomaterial for electrochemical biosensing, with potential applications in electrocatalysis and supercapacitance. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Optimal management of colorectal liver metastases in older patients: a decision analysis

    PubMed Central

    Yang, Simon; Alibhai, Shabbir MH; Kennedy, Erin D; El-Sedfy, Abraham; Dixon, Matthew; Coburn, Natalie; Kiss, Alex; Law, Calvin HL

    2014-01-01

Background: Comparative trials evaluating management strategies for colorectal cancer liver metastases (CLM) are lacking, especially for older patients. This study developed a decision-analytic model to quantify outcomes associated with treatment strategies for CLM in older patients. Methods: A Markov decision model was built to examine the effect on life expectancy (LE) and quality-adjusted life expectancy (QALE) of best supportive care (BSC), systemic chemotherapy (SC), radiofrequency ablation (RFA) and hepatic resection (HR). The baseline patient cohort assumptions included healthy 70-year-old CLM patients after a primary cancer resection. Event and transition probabilities and utilities were derived from a literature review. Deterministic and probabilistic sensitivity analyses were performed on all study parameters. Results: In the base case analysis, BSC, SC, RFA and HR yielded LEs of 11.9, 23.1, 34.8 and 37.0 months, and QALEs of 7.8, 13.2, 22.0 and 25.0 months, respectively. Model results were sensitive to age, comorbidity, length of model simulation and utility after HR. Probabilistic sensitivity analysis showed increasing preference for RFA over HR with increasing patient age. Conclusions: HR may be optimal for healthy 70-year-old patients with CLM. In older patients with comorbidities, RFA may provide better LE and QALE. Treatment decisions in older cancer patients should account for patient age, comorbidities, local expertise and individual values. PMID:24961482
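The Markov cohort mechanics behind a decision model of this kind can be sketched in a few lines. The states, monthly transition probabilities, and utility weights below are invented for illustration only, not parameters from the study:

```python
import numpy as np

# Toy three-state Markov cohort model: stable, progressed, dead.
# All probabilities and utilities are illustrative monthly values.
P = np.array([
    [0.95, 0.03, 0.02],   # stable     -> stable / progressed / dead
    [0.00, 0.90, 0.10],   # progressed -> progressed / dead
    [0.00, 0.00, 1.00],   # dead is absorbing
])
utility = np.array([0.8, 0.5, 0.0])   # quality weight of each state (assumed)

state = np.array([1.0, 0.0, 0.0])     # whole cohort starts in the stable state
life_months = qale_months = 0.0
for _ in range(600):                  # 50-year horizon, monthly cycles
    life_months += state[0] + state[1]   # expected months alive this cycle
    qale_months += state @ utility       # quality-adjusted months
    state = state @ P
```

Running separate transition matrices for each strategy (BSC, SC, RFA, HR) and comparing the accumulated LE and QALE is the essence of the base-case comparison; sensitivity analysis then perturbs the probabilities and utilities.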

  9. Solving iTOUGH2 simulation and optimization problems using the PEST protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.; Zhang, Y.

    2011-02-01

The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.

  10. Saturated fatty acid determination method using paired ion electrospray ionization mass spectrometry coupled with capillary electrophoresis.

    PubMed

    Lee, Ji-Hyun; Kim, Su-Jin; Lee, Sul; Rhee, Jin-Kyu; Lee, Soo Young; Na, Yun-Cheol

    2017-09-01

A sensitive and selective capillary electrophoresis-mass spectrometry (CE-MS) method for determination of saturated fatty acids (FAs) was developed by using dicationic ion-pairing reagents forming singly charged complexes with anionic FAs. For negative ESI detection, 21 anionic FAs at pH 10 were separated using ammonium formate buffer containing 40% acetonitrile modifier in normal polarity mode in CE by optimizing various parameters. This method showed good separation efficiency, but the sensitivity of the method to short-chain fatty acids was quite low, causing acetic and propionic acids to be undetectable even at 100 mg L⁻¹ in negative ESI-MS detection. Out of the four dicationic ion-pairing reagents tested, N,N'-dibutyl 1,1'-pentylenedipyrrolidium infused through a sheath-liquid ion source during CE separation was the best reagent regarding improved sensitivity and favorably complexed with anionic FAs for detection in positive ion ESI-MS. The monovalent complex showed improved ionization efficiency, providing limits of detection (LODs) for 15 FAs ranging from 0.13 to 2.88 μg/mL and good linearity (R² > 0.99) up to 150 μg/mL. Compared to the negative detection results, the effect was remarkable for the detection of short- and medium-chain fatty acids. The optimized CE-paired ion electrospray (PIESI)-MS method was utilized for the determination of FAs in cheese and coffee with simple pretreatment. This method may be extended for sensitive analysis of unsaturated fatty acids. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. iTOUGH2 Universal Optimization Using the PEST Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.

    2010-07-01

iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge.
To make iTOUGH2 capabilities accessible to many application models, the PEST protocol [Doherty, 2007] has been implemented into iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) Input is provided on one or more ASCII text input files; (2) Output is returned to one or more ASCII text output files; (3) The model is run using a system command (executable or script/batch file); and (4) The model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that is borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.
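The file-based protocol in steps (1)–(4) can be illustrated with a minimal sketch. The @name@ marker syntax and the "name = value" output format below are simplifications of my own; real PEST template and instruction files declare their own delimiters and parsing rules:

```python
import re

# Sketch of the PEST-style file protocol: parameters are written into an ASCII
# input file by replacing markers, and observable values are read back from an
# ASCII output file after the model runs. Marker/output formats are invented.

def fill_template(template: str, params: dict) -> str:
    """Overwrite parameter markers in an ASCII input file with current values."""
    return re.sub(r"@(\w+)@", lambda m: f"{params[m.group(1)]:.6e}", template)

def read_observations(output: str, names: list) -> dict:
    """Extract model-calculated values for the named observable variables."""
    obs = {}
    for line in output.splitlines():
        for name in names:
            if line.strip().startswith(name + " "):
                obs[name] = float(line.split("=")[1])
    return obs

# one forward run: write the input, (the model would run here), read the output
template = "PERMEABILITY  @perm@\nPOROSITY  @phi@\n"
model_input = fill_template(template, {"perm": 1.2e-14, "phi": 0.35})

model_output = "p_obs1 = 1.013e5\np_obs2 = 9.870e4\n"
obs = read_observations(model_output, ["p_obs1", "p_obs2"])
```

The optimizer never sees the model internals: it only proposes parameter values, triggers the system command, and consumes the extracted observations, which is what makes the approach model-independent.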

  12. Optimal Reservoir Operation using Stochastic Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Sahu, R.; McLaughlin, D.

    2016-12-01

Hydropower operations are typically designed to fulfill contracts negotiated with consumers who need reliable energy supplies, despite uncertainties in reservoir inflows. In addition to providing reliable power, the reservoir operator needs to take into account environmental factors such as downstream flooding or compliance with minimum flow requirements. From a dynamical systems perspective, the reservoir operating strategy must cope with conflicting objectives in the presence of random disturbances. In order to achieve optimal performance, the reservoir system needs to continually adapt to disturbances in real time. Model Predictive Control (MPC) is a real-time control technique that adapts by deriving the reservoir release at each decision time from the current state of the system. Here an ensemble-based version of MPC (SMPC) is applied to a generic reservoir to determine both the optimal power contract, considering future inflow uncertainty, and a real-time operating strategy that attempts to satisfy the contract. Contract selection and real-time operation are coupled in an optimization framework that also defines a Pareto trade-off between the revenue generated from energy production and the environmental damage resulting from uncontrolled reservoir spills. Further insight is provided by a sensitivity analysis of key parameters specified in the SMPC technique. The results demonstrate that SMPC is suitable for multi-objective planning and associated real-time operation of a wide range of hydropower reservoir systems.
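The ensemble-based MPC idea, choosing each release by scoring candidate actions against an ensemble of possible inflows, can be sketched in one decision step. All quantities (capacity, prices, penalty weights, inflow distribution) are illustrative assumptions, not values from the study:

```python
import numpy as np

# One-step sketch of ensemble-based MPC (SMPC) for a single reservoir.
rng = np.random.default_rng(0)
CAPACITY, CONTRACT = 100.0, 20.0                  # storage capacity, contracted release
PRICE, SPILL_PENALTY, SHORT_PENALTY = 5.0, 50.0, 10.0  # assumed weights

def smpc_release(storage, inflow_ensemble, candidates):
    """Score each candidate release against the inflow ensemble; keep the best."""
    best_u, best_score = candidates[0], -np.inf
    for u in candidates:
        next_storage = storage - u + inflow_ensemble      # ensemble of outcomes
        spill = np.maximum(next_storage - CAPACITY, 0.0)  # uncontrolled spill
        shortfall = max(CONTRACT - u, 0.0)                # contract violation
        score = (PRICE * u - SPILL_PENALTY * spill.mean()
                 - SHORT_PENALTY * shortfall)
        if score > best_score:
            best_u, best_score = u, score
    return best_u

inflows = rng.gamma(shape=4.0, scale=5.0, size=200)       # uncertain inflow forecast
u = smpc_release(storage=90.0, inflow_ensemble=inflows,
                 candidates=np.linspace(0.0, 40.0, 41))
```

In a full SMPC loop this decision is recomputed at every time step from the newly observed storage and an updated inflow ensemble; the revenue and spill terms trace out the Pareto trade-off as their weights vary.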

  13. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1993-01-01

    A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.
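For two coupled disciplines (structures s and controls c), global sensitivity equations of the kind referred to here are commonly written in the following form (my notation; the paper may partition the system differently):

```latex
\begin{bmatrix}
I & -\dfrac{\partial Y_s}{\partial Y_c} \\[6pt]
-\dfrac{\partial Y_c}{\partial Y_s} & I
\end{bmatrix}
\begin{bmatrix}
\dfrac{\mathrm{d}Y_s}{\mathrm{d}X} \\[6pt]
\dfrac{\mathrm{d}Y_c}{\mathrm{d}X}
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{\partial Y_s}{\partial X} \\[6pt]
\dfrac{\partial Y_c}{\partial X}
\end{bmatrix}
```

Here Y_s and Y_c are the structural and control discipline outputs and X the design variables. The partial derivatives are cheap single-discipline sensitivities, while solving the linear system yields the total derivatives that account for the structure-control coupling, which is what makes 150-variable problems tractable.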

  14. Near-Optimal Operation of Dual-Fuel Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.; Chou, H. C.; Bowles, J. V.

    1996-01-01

A near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, the sensitivity to perturbations in volume needs to be taken into consideration as well as weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.
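The cost functional described in the last sentence can be written, in notation of my own choosing, as

```latex
J = m_{\mathrm{fuel}} + \kappa \, V_{\mathrm{fuel}}
```

where m_fuel is the consumed fuel mass, V_fuel the consumed fuel volume, and κ the weighting factor tuned so that minimizing J minimizes vehicle empty weight for the fixed payload mass and volume delivered to orbit.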

  15. Experimental characterization of recurrent ovarian immature teratoma cells after optimal surgery.

    PubMed

    Tanaka, Tetsuji; Toujima, Saori; Utsunomiya, Tomoko; Yukawa, Kazunori; Umesaki, Naohiko

    2008-07-01

    Minimal optimal surgery without chemotherapy is often performed for patients with ovarian immature teratoma, which frequently occurs in young women who hope for future pregnancies. If tumors recur after the operation, anticancer drug chemotherapy is often administered, although few studies have highlighted differences between the recurrent and the primary tumor cells. Therefore, we have established experimental animal models of recurrent ovarian immature teratoma cells after optimal surgery and characterized the anticancer drug sensitivity and antigenicity of the recurrent tumors. Surgically-excised tumor cells of a grade II ovarian immature teratoma were cultured in vitro and transplanted into nude mice to establish stable cell lines. Differential drug sensitivity and antigenicity of the tumor cells were compared between the primary and the nude mouse tumors. Nude mouse tumor cells showed a normal 46XX karyotype. Cultured primary cells showed a remarkably high sensitivity to paclitaxel, docetaxel, adriamycin and pirarubicin, compared to peritoneal cancer cells obtained from a patient with ovarian adenocarcinomatous peritonitis. The drug sensitivity of teratoma cells to 5-fluorouracil, bleomycin or peplomycin was also significantly higher. However, there was no significant difference in sensitivity to platinum drugs between the primary teratoma and the peritoneal adenocarcinoma cells. As for nude mouse tumor cells, sensitivity to 12 anticancer drugs was significantly lower than that of the primary tumor cells, while there was little difference in sensitivity to carboplatin or peplomycin between the primary and nude mouse tumor cells. Flow cytometry showed that the expression of smooth muscle actin (SMA) significantly decreased in nude mouse tumor cells when compared to cultured primary cells. In conclusion, ovarian immature teratomas with normal karyotypes have a malignant potential to recur after minimal surgery. 
During nude mouse transplantation, SMA-overexpressing cells appeared to be selectively excluded and nude mouse tumor cells were less sensitive to the majority of anticancer drugs than the primary tumor cells. These results indicate that after optimal surgery for ovarian immature teratoma, recurrent cells can be more resistant to anticancer drugs than the primary tumors. Therefore, it is likely that adjuvant chemotherapy lowers the risk of ovarian immature teratomas recurring after optimal surgery. BEP and PBV regimens are frequently given to teratoma patients. However, paclitaxel/carboplatin or docetaxel/carboplatin, which are the most effective chemotherapy treatments for epithelial ovarian cancer patients, are considered to be an alternative regimen, especially in the prevention of reproductive toxicity.

  16. Multiobjective robust design of the double wishbone suspension system based on particle swarm optimization.

    PubMed

    Cheng, Xianfu; Lin, Yuqun

    2014-01-01

The performance of the suspension system is one of the most important factors in vehicle design. For the double wishbone suspension system, conventional deterministic optimization does not consider any deviations of the design parameters, so design sensitivity analysis and robust design optimization are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established in the software ADAMS. Sensitivity analysis is utilized to determine the main design variables. Then, the simulation experiment is arranged and a Latin hypercube design is adopted to find the initial points. The Kriging model is employed to fit the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied, and a tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.
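A bare-bones version of the "simple PSO" loop can be shown against a stand-in robust objective: a weighted sum of the mean and standard deviation of a toy quality characteristic under assumed manufacturing noise (the study itself evaluates a Kriging surrogate fitted to ADAMS simulations):

```python
import numpy as np

# Minimal particle swarm optimization applied to an illustrative robust objective.
# The objective, noise level, and PSO coefficients are assumptions for the sketch.
rng = np.random.default_rng(1)

def robust_objective(x, w=0.5, n_mc=200, sigma=0.05):
    """Mean + weighted std of a toy response under random parameter deviations."""
    noise = rng.normal(0.0, sigma, size=(n_mc, x.size))
    responses = np.sum((x + noise - 0.3) ** 2, axis=1)  # toy quality characteristic
    return responses.mean() + w * responses.std()

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                  # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

best_x, best_f = pso(robust_objective, dim=2)
```

The swarm converges near the noise-robust optimum (x ≈ 0.3 in each dimension here) without gradient information, which is why PSO pairs well with black-box surrogates such as Kriging models.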

  17. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
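The inner loop of such an optimization, a gradient-based solver driving a modeled temperature profile toward a target, can be sketched with a hand-rolled conjugate gradient on a toy linear model. The response matrix and target profile below are invented; the paper's analysis uses finite elements and nonlinear programming:

```python
import numpy as np

# Toy inverse design: choose boundary "wall temperatures" x so that a linear
# model A @ x of the interior temperature matches a prescribed target profile.
# Solved by conjugate gradient on the normal equations (illustrative only).
rng = np.random.default_rng(4)
A = rng.random((20, 5))                     # hypothetical profile response to x
target = A @ np.array([1.0, 2.0, 3.0, 2.0, 1.0])

def cg(M, b, iters=50, tol=1e-12):
    """Conjugate gradient for symmetric positive-definite M."""
    x = np.zeros_like(b)
    r = b - M @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        if np.sqrt(rs) < tol:
            break
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x = x + alpha * p
        r = r - alpha * Mp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = cg(A.T @ A, A.T @ target)               # minimizes ||A x - target||^2
```

For a 5-variable quadratic, CG terminates in at most five iterations; the paper's comparison of conjugate gradient against quasi-Newton methods concerns the same trade-off at finite-element scale with sensitivities supplied by the solver.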

  18. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D P; Ritts, W D; Wharton, S

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models, however little is known about the relative effectiveness of FPAR products from different satellite sensors nor about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors.more » FPAR products from the MODIS and SeaWiFS sensors, and the effects of single site vs. cross-site parameter optimization were tested with the CFLUX model. The SeaWiFs FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.« less

  19. An optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system.

    PubMed

    Shen, L; Levine, S H; Catchen, G L

    1987-07-01

This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm⁻² or at any depth of interest. The doses at 7 mg·cm⁻² in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, its main advantage is that its performance is not sensitive to the specific method of calibration.
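The unfolding step can be illustrated with a toy response-matrix fit. The response values and depth-dose factors below are invented for the sketch, whereas the paper derives them from electron transport theory:

```python
import numpy as np

# Toy spectrometer unfolding: the three TLD element readings are modeled as a
# response matrix acting on a two-component incident beta distribution, and the
# component intensities are recovered by least squares. All numbers are invented.
R = np.array([
    [0.9, 0.4],    # element 1 response to (low-E, high-E) beta components
    [0.5, 0.7],    # element 2
    [0.2, 0.8],    # element 3
])

true_spectrum = np.array([3.0, 1.0])      # incident component intensities
readings = R @ true_spectrum              # ideal chip readings (no noise)

# least-squares estimate of the effective incident distribution
est, *_ = np.linalg.lstsq(R, readings, rcond=None)

# assumed depth-dose factors convert the estimate to dose at 7 mg/cm^2
dose_7mg = est @ np.array([0.6, 1.1])
```

Once the effective incident distribution is estimated, the dose at any depth follows from the corresponding depth-dose factors, which is why the method is largely insensitive to the calibration procedure.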

  20. Comparison of fan beam, slit-slat and multi-pinhole collimators for molecular breast tomosynthesis.

    PubMed

    van Roosmalen, Jarno; Beekman, Freek J; Goorden, Marlies C

    2018-05-16

    Recently, we proposed and optimized dedicated multi-pinhole molecular breast tomosynthesis (MBT) that images a lightly compressed breast. As MBT may also be performed with other types of collimators, the aim of this paper is to optimize MBT with fan beam and slit-slat collimators and to compare its performance to that of multi-pinhole MBT to arrive at a truly optimized design. Using analytical expressions, we first optimized fan beam and slit-slat collimator parameters to reach maximum sensitivity at a series of given system resolutions. Additionally, we performed full system simulations of a breast phantom containing several tumours for the optimized designs. We found that at equal system resolution the maximum achievable sensitivity increases from pinhole to slit-slat to fan beam collimation with fan beam and slit-slat MBT having on average a 48% and 20% higher sensitivity than multi-pinhole MBT. Furthermore, by inspecting simulated images and applying a tumour-to-background contrast-to-noise (TB-CNR) analysis, we found that slit-slat collimators underperform with respect to the other collimator types. The fan beam collimators obtained a similar TB-CNR as the pinhole collimators, but the optimum was reached at different system resolutions. For fan beam collimators, a 6-8 mm system resolution was optimal in terms of TB-CNR, while with pinhole collimation highest TB-CNR was reached in the 7-10 mm range.

  1. [Optimal cut-point of salivary cotinine concentration to discriminate smoking status in the adult population in Barcelona].

    PubMed

    Martínez-Sánchez, Jose M; Fu, Marcela; Ariza, Carles; López, María J; Saltó, Esteve; Pascual, José A; Schiaffino, Anna; Borràs, Josep M; Peris, Mercè; Agudo, Antonio; Nebot, Manel; Fernández, Esteve

    2009-01-01

    To assess the optimal cut-point for salivary cotinine concentration to identify smoking status in the adult population of Barcelona. We performed a cross-sectional study of a representative sample (n=1,117) of the adult population (>16 years) in Barcelona (2004-2005). This study gathered information on active and passive smoking by means of a questionnaire and a saliva sample for cotinine determination. We analyzed sensitivity and specificity according to sex, age, smoking status (daily and occasional), and exposure to second-hand smoke at home. ROC curves and the area under the curve were calculated. The prevalence of smokers (daily and occasional) was 27.8% (95% CI: 25.2-30.4%). The optimal cut-point to discriminate smoking status was 9.2 ng/ml (sensitivity=88.7% and specificity=89.0%). The area under the ROC curve was 0.952. The optimal cut-point was 12.2 ng/ml in men and 7.6 ng/ml in women. The optimal cut-point was higher at ages with a greater prevalence of smoking. Daily smokers had a higher cut-point than occasional smokers. The optimal cut-point to discriminate smoking status in the adult population is 9.2 ng/ml, with sensitivities and specificities around 90%. The cut-point was higher in men and in younger people. The cut-point increases with higher prevalence of daily smokers.
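The cut-point computation can be reproduced schematically with simulated (not study) data by maximizing Youden's J = sensitivity + specificity − 1 over candidate thresholds:

```python
import numpy as np

# Simulated cotinine-like concentrations (ng/ml); the lognormal parameters are
# assumptions for illustration, not fitted to the Barcelona sample.
rng = np.random.default_rng(2)
smokers = rng.lognormal(mean=4.5, sigma=0.8, size=300)
nonsmokers = rng.lognormal(mean=0.5, sigma=0.9, size=700)
values = np.concatenate([smokers, nonsmokers])
labels = np.concatenate([np.ones(300), np.zeros(700)])

def optimal_cutpoint(values, labels):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best_cut, best_j = None, -1.0
    for c in np.unique(values):
        pred = values >= c
        sens = np.mean(pred[labels == 1])     # smokers correctly classified
        spec = np.mean(~pred[labels == 0])    # nonsmokers correctly classified
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = c, j
    return best_cut, best_j

cut, j = optimal_cutpoint(values, labels)
```

Stratifying the same computation by sex, age group, or daily versus occasional smoking reproduces the kind of subgroup-specific cut-points reported in the abstract.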

  2. Optimization of Micro-Spec, an Ultra-Compact High-Performance Spectrometer for Far-Infrared Astronomy

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Moseley, S. H.; Wollack, E.; Hsieh, W.; Huang, W.; Stevenson, T.

    2013-06-01

Micro-Spec (µ-Spec) is a high-sensitivity direct-detection spectrometer operating in the far-infrared and submillimeter regime. When combined with a cryogenic telescope, it provides an enabling technology for studying the epoch of reionization and initial galaxy formation. As a direct-detection spectrometer, µ-Spec can provide high sensitivity under the low background conditions provided by cryogenic telescopes such as the Space Infrared Telescope for Cosmology and Astrophysics (SPICA). The µ-Spec modules use low-loss superconducting microstrip transmission lines implemented on a single 4-inch-diameter wafer. Such a dramatic size reduction is enabled by the use of silicon, a material with an index of refraction about three times that of vacuum, which thus allows the microstrip lines to be one third their vacuum length. Using a large number of modules and reducing the negative effects of stray light also contribute to the enhanced sensitivity of such an instrument. µ-Spec can be compared to a grating spectrometer, in which the phase retardation generated by the reflection from the grating grooves is instead produced by propagation through transmission lines of different length. The µ-Spec optical design is based on the stigmatization and minimization of the light path function in a two-dimensional diffractive region. The power collected through a broadband antenna is progressively divided by binary microstrip power dividers. The positions of the radiators are selected to provide zero phase errors at two stigmatic points, and a third stigmatic point is generated by introducing a differential phase shift in each radiator. To optimize the overall efficiency of the instrument, the emitters are directed to the center of the focal surface. A point design was developed for initial demonstration. Because of losses to other diffraction orders, the efficiency of the design presented is about 30%.
Design variations on this implementation are illustrated which can lead to near-unit efficiency and will be the basis of future instruments. Measurements are being conducted to validate the designs.

  3. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

Integration time and reference intensity are important factors for achieving a high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity of an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. Then, the system increases or decreases the reference intensity following the map data for optimization with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, enabled changing the integration time without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.
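The adaptive adjustment loop can be sketched as a simple feedback iteration. The intensity-versus-tilt response below is a made-up smooth function standing in for the real galvanometer and interferometer behavior:

```python
# Feedback sketch of the adaptive reference-intensity idea: the mirror tilt is
# nudged until the measured reference intensity enters a target band.
# The response curve and all numbers are hypothetical, not measured hardware.

def measured_intensity(tilt_deg):
    """Stand-in for the reference-arm intensity as the mirror is tilted."""
    return 1.0 / (1.0 + (tilt_deg / 0.5) ** 2)

def optimize_tilt(target=0.5, tol=0.02, step=0.01, tilt=0.9):
    """Walk the tilt toward the target intensity (assumes the tilt > 0 branch)."""
    for _ in range(1000):
        i = measured_intensity(tilt)
        if abs(i - target) < tol:
            break
        tilt += step if i > target else -step   # larger tilt -> dimmer reference
    return tilt, measured_intensity(tilt)

tilt, intensity = optimize_tilt()
```

In the paper's setup the target band would come from the precomputed false-color intensity map, and the loop would rerun whenever the integration time changes.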

  4. Particle swarm optimization of the sensitivity of a cryogenic gravitational wave detector

    NASA Astrophysics Data System (ADS)

    Michimura, Yuta; Komori, Kentaro; Nishizawa, Atsushi; Takeda, Hiroki; Nagano, Koji; Enomoto, Yutaro; Hayama, Kazuhiro; Somiya, Kentaro; Ando, Masaki

    2018-06-01

    Cryogenic cooling of the test masses of interferometric gravitational wave detectors is a promising way to reduce thermal noise. However, cryogenic cooling limits the incident power to the test masses, which limits the freedom of shaping the quantum noise. Cryogenic cooling also requires short and thick suspension fibers to extract heat, which could result in the worsening of thermal noise. Therefore, careful tuning of multiple parameters is necessary in designing the sensitivity of cryogenic gravitational wave detectors. Here, we propose the use of particle swarm optimization to optimize the parameters of these detectors. We apply it for designing the sensitivity of the KAGRA detector, and show that binary neutron star inspiral range can be improved by 10%, just by retuning seven parameters of existing components. We also show that the sky localization of GW170817-like binaries can be further improved by a factor of 1.6 averaged across the sky. Our results show that particle swarm optimization is useful for designing future gravitational wave detectors with higher dimensionality in the parameter space.

  5. How to COAAD Images. I. Optimal Source Detection and Photometry of Point Sources Using Ensembles of Images

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.

    2017-02-01

Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in a loss of sensitivity. We argue that our method provides an increase of between a few percent and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method, which is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
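The recipe, matched-filter each image with its own PSF and only then sum with weights (here inverse-variance weights, one reasonable choice), can be demonstrated in one dimension with synthetic data:

```python
import numpy as np

# 1-D toy coaddition: two "epochs" of the same point source observed with
# different seeing and noise. Source position, PSF widths, and noise are synthetic.
rng = np.random.default_rng(3)

def gaussian_psf(m, sigma):
    x = np.arange(m) - m // 2            # m odd -> kernel centered
    p = np.exp(-0.5 * (x / sigma) ** 2)
    return p / p.sum()

n = 64
truth = np.zeros(n)
truth[32] = 100.0                        # one point source
images, psfs, variances = [], [], []
for sigma, noise in [(1.5, 1.0), (3.0, 2.0)]:    # variable seeing conditions
    psf = gaussian_psf(31, sigma)
    img = np.convolve(truth, psf, mode="same") + rng.normal(0.0, noise, n)
    images.append(img); psfs.append(psf); variances.append(noise ** 2)

# matched-filter each image with its OWN PSF, then inverse-variance weight and sum
coadd = sum(np.correlate(img, psf, mode="same") / var
            for img, psf, var in zip(images, psfs, variances))
```

The sharp, low-noise epoch dominates the sum where it is most informative, which is the intuition behind the claimed sensitivity gain over filtering after coaddition or homogenizing PSFs first.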

  6. Spectral optimization simulation of white light based on the photopic eye-sensitivity curve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Qi, E-mail: qidai@tongji.edu.cn; Institute for Advanced Study, Tongji University, 1239 Siping Road, Shanghai 200092; Key Laboratory of Ecology and Energy-saving Study of Dense Habitat

    Spectral optimization simulation of white light is studied to boost the maximum attainable luminous efficacy of radiation at high color-rendering index (CRI) and various color temperatures. The photopic eye-sensitivity curve V(λ) is utilized as the dominant portion of the white light spectra. Emission spectra of a blue InGaN light-emitting diode (LED) and a red AlInGaP LED are added to the spectrum of V(λ) to match white color coordinates. It is demonstrated that at color temperatures from 2500 K to 6500 K and CRI above 90, such white sources can achieve a spectral efficacy of 330–390 lm/W, which is higher than the previously reported theoretical maximum values. We show that this eye-sensitivity-based approach also has advantages in component energy conversion efficiency compared with previously reported optimization solutions.
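The underlying figure of merit, the luminous efficacy of radiation LER = 683 lm/W × ∫V(λ)S(λ)dλ / ∫S(λ)dλ, is easy to evaluate numerically. The sketch below uses a common single-Gaussian approximation of V(λ) and illustrative LED line widths and mixing weights; none of the numbers are the paper's optimized values.

```python
import numpy as np

lam = np.linspace(380e-9, 780e-9, 2001)      # visible wavelength grid [m], uniform
um = lam * 1e6

# Single-Gaussian approximation of the photopic curve V(lambda) -- rough,
# but adequate for a back-of-envelope efficacy estimate
V = 1.019 * np.exp(-285.4 * (um - 0.559) ** 2)

def ler(spectrum):
    """Luminous efficacy of radiation [lm/W] = 683 * int(V*S) / int(S);
    on a uniform grid the d-lambda factors cancel in the ratio."""
    return 683.0 * np.sum(V * spectrum) / np.sum(spectrum)

def led_line(center_nm, fwhm_nm):
    sigma = fwhm_nm * 1e-9 / 2.355
    return np.exp(-0.5 * ((lam - center_nm * 1e-9) / sigma) ** 2)

# White spectrum sketched as V(lambda) plus blue and red LED lines; the
# mixing weights are illustrative, not the paper's optimized solution
S = V + 0.25 * led_line(450, 20) + 0.15 * led_line(630, 20)
```

Adding the blue and red components pulls the efficacy below that of a pure V(λ)-shaped spectrum, which is the trade-off the paper's optimization navigates against color coordinates and CRI.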

  7. Rational manipulation of digital EEG: pearls and pitfalls.

    PubMed

    Seneviratne, Udaya

    2014-12-01

    The advent of digital EEG has provided greater flexibility and more opportunities in data analysis to optimize the diagnostic yield. Changing the filter settings, sensitivity, montages, and time-base are possible rational manipulations to achieve this goal. The options to use polygraphy, video, and quantification are additional useful features. Aliasing and loss of data are potential pitfalls in the use of digital EEG. This review illustrates some common clinical scenarios where rational manipulations can enhance the diagnostic EEG yield and potential pitfalls in the process.

  8. Integrated heterodyne terahertz transceiver

    DOEpatents

    Lee, Mark [Albuquerque, NM; Wanke, Michael C [Albuquerque, NM

    2009-06-23

    A heterodyne terahertz transceiver comprises a quantum cascade laser that is integrated on-chip with a Schottky diode mixer. An antenna connected to the Schottky diode receives a terahertz signal. The quantum cascade laser couples terahertz local oscillator power to the Schottky diode to mix with the received terahertz signal to provide an intermediate frequency output signal. The fully integrated transceiver optimizes power efficiency, sensitivity, compactness, and reliability. The transceiver can be used in compact, fieldable systems covering a wide variety of deployable applications not possible with existing technology.

  9. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  10. Lumped parametric model of the human ear for sound transmission.

    PubMed

    Feng, Bin; Gan, Rong Z

    2004-09-01

    A lumped parametric model of the human auditory periphery, consisting of six masses suspended with six springs and ten dashpots, is proposed. This model provides the quantitative basis for the construction of a physical model of the human middle ear. The lumped model parameters were first identified using published anatomical data and then determined through a parameter optimization process. The transfer function of the middle ear, obtained from human temporal bone experiments with laser Doppler interferometers, was used to create the target function for the optimization. It was found that, among the 14 spring and dashpot parameters, five had pronounced effects on the dynamic behavior of the model. A detailed discussion of the sensitivity of these parameters is provided, with appropriate applications to sound transmission in the ear. We expect that the methods for characterizing the lumped model of the human ear and the model parameters will be useful for theoretical modeling of ear function and construction of a physical model of the ear.
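The kind of parameter-sensitivity screening described can be illustrated on a single mass-spring-dashpot element (the paper's model couples six such masses; the values below are illustrative, not the fitted ear parameters). Perturbing each parameter and watching the resonance peak of the transfer function is the finite-difference analogue of the paper's sensitivity study.

```python
import numpy as np

# One mass-spring-dashpot element as a stand-in for the six-mass model;
# the parameter values are illustrative, not the fitted ear parameters.
m, k, b = 25e-6, 800.0, 0.02     # mass [kg], stiffness [N/m], damping [N*s/m]

def H(freqs_hz, m=m, k=k, b=b):
    """Displacement per unit force, X/F = 1 / (k - m*w^2 + i*w*b)."""
    w = 2.0 * np.pi * np.asarray(freqs_hz)
    return 1.0 / (k - m * w**2 + 1j * w * b)

f = np.linspace(100.0, 4000.0, 20001)
base_peak = np.abs(H(f)).max()    # resonance peak, near sqrt(k/m)/(2*pi)

def peak_sensitivity(name, value, rel=1e-2):
    """Normalized finite-difference sensitivity of the resonance peak:
    (d|H|_peak / |H|_peak) per (dP / P) for parameter P."""
    hi = np.abs(H(f, **{name: value * (1 + rel)})).max()
    lo = np.abs(H(f, **{name: value * (1 - rel)})).max()
    return (hi - lo) / (2.0 * rel * base_peak)

sens = {name: peak_sensitivity(name, val)
        for name, val in [("m", m), ("k", k), ("b", b)]}
```

For this element the damping dominates the peak response (sensitivity near -1, since the peak scales roughly as 1/b), which mirrors how a handful of dashpot parameters can dominate a lumped model's behavior.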

  11. Effect of external viscous load on head movement

    NASA Technical Reports Server (NTRS)

    Nam, M.-H.; Lakshminarayanan, V.; Stark, L. W.

    1984-01-01

    Quantitative measurements of horizontal head rotation were obtained from normal human subjects intending to make 'time optimal' trajectories between targets. By mounting large, lightweight vanes on the head, viscous damping B of up to 15 times normal could be added to the usual mechanical load of the head. With the added viscosity, the head trajectory was slowed and of longer duration (as expected), since fixed and maximal (for that amplitude) muscle forces had to accelerate the added viscous load. The decreased acceleration and velocity and the longer movement duration persisted in spite of adaptive compensation; this provided evidence that quasi-'time optimal' movements do indeed employ maximal muscle forces. The adaptation to this added load was rapid, after which the 'adapted state' subjects produced changed trajectories. The adaptation depended in part on the differing detailed instructions given to the subjects. This differential adaptation provided evidence for the existence of preprogrammed controller signals, sensitive to the intended criterion, and neurologically ballistic or open loop rather than modified by feedback from proprioceptors or vision.

  12. Optimized blind gamma-ray pulsar searches at fixed computing budget

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pletsch, Holger J.; Clark, Colin J., E-mail: holger.pletsch@aei.mpg.de

    The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.

  13. A quartz-based micro catalytic methane sensor by high resolution screen printing

    NASA Astrophysics Data System (ADS)

    Lu, Wenshuai; Jing, Gaoshan; Bian, Xiaomeng; Yu, Hongyan; Cui, Tianhong

    2016-02-01

    A micro catalytic methane sensor was proposed and fabricated on a bulk fused quartz substrate using a high resolution screen printing technique for the first time, with reduced power consumption and optimized sensitivity. The sensor was designed by the finite element method, with quartz chosen as the substrate material and an alumina support of optimized dimensions. Fabrication of the sensor consisted of two MEMS processes, lift-off and high resolution screen printing, with the advantages of high yield and uniformity. As the sensor’s regional working temperature is raised from 250 °C to 470 °C, both its sensitivity and its power consumption increase. The highest sensitivity reaches 1.52 mV/% CH4. A temperature of 300 °C was chosen as the optimized working temperature, at which the sensor’s sensitivity, power consumption, nonlinearity, and response time are 0.77 mV/% CH4, 415 mW, 2.6%, and 35 s, respectively. This simple but highly uniform fabrication process and the reliable performance of this sensor may lead to wide applications in methane detection.

  14. Review of SPECT collimator selection, optimization, and fabrication for clinical and preclinical imaging

    PubMed Central

    Van Audenhaege, Karen; Van Holen, Roel; Vandenberghe, Stefaan; Vanhove, Christian; Metzler, Scott D.; Moore, Stephen C.

    2015-01-01

    In single photon emission computed tomography, the choice of the collimator has a major impact on the sensitivity and resolution of the system. Traditional parallel-hole and fan-beam collimators used in clinical practice, for example, have a relatively poor sensitivity and subcentimeter spatial resolution, while in small-animal imaging, pinhole collimators are used to obtain submillimeter resolution and multiple pinholes are often combined to increase sensitivity. This paper reviews methods for production, sensitivity maximization, and task-based optimization of collimation for both clinical and preclinical imaging applications. New opportunities for improved collimation are now arising primarily because of (i) new collimator-production techniques and (ii) detectors with improved intrinsic spatial resolution that have recently become available. These new technologies are expected to impact the design of collimators in the future. The authors also discuss concepts like septal penetration, high-resolution applications, multiplexing, sampling completeness, and adaptive systems, and the authors conclude with an example of an optimization study for a parallel-hole, fan-beam, cone-beam, and multiple-pinhole collimator for different applications. PMID:26233207

  15. Influence of robust optimization in intensity-modulated proton therapy with different dose delivery techniques

    PubMed Central

    Liu, Wei; Li, Yupeng; Li, Xiaoqiang; Cao, Wenhua; Zhang, Xiaodong

    2012-01-01

    Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique’s sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at the authors' institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets; IMPT-DET used 36 equally spaced beam angles, whereas IMPT-3D used 3 beams with angles chosen by a beam angle optimization algorithm. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans’ sensitivity to uncertainties. Results: The DET method is less robust to uncertainties than the 3D method but offers better normal tissue protection. Accounting for range and setup uncertainties, robust optimization can improve the robustness of IMPT plans; however, the extent of improvement varies. Conclusions: IMPT’s sensitivity to uncertainties can be improved by using robust optimization. They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the optimization algorithm attempts to produce a single-field uniform dose distribution while minimizing the patching field as much as possible; and (2) perturbed dose distribution, which follows the change in anatomical geometry. Multiple-instance optimization has more knowledge of the influence matrices; this greater knowledge improves IMPT plans’ ability to retain robustness despite the presence of uncertainties. PMID:22755694

  16. Sensitive and reliable multianalyte quantitation of herbal medicine in rat plasma using dynamic triggered multiple reaction monitoring.

    PubMed

    Yan, Zhixiang; Li, Tianxue; Lv, Pin; Li, Xiang; Zhou, Chen; Yang, Xinghao

    2013-06-01

    There is a growing need both clinically and experimentally to improve the determination of the blood levels of multiple chemical constituents in herbal medicines. Conventional multiple reaction monitoring (cMRM), however, is not well suited to multi-component determination and cannot provide qualitative information for identity confirmation. Here we apply a dynamic triggered MRM (DtMRM) algorithm for the quantification of 20 constituents of an herbal prescription, Bu-Zhong-Yi-Qi-Tang (BZYQT), in rat plasma. Dynamic MRM (DMRM) dramatically reduces the number of concurrent MRM transitions monitored during each MS scan. This advantage is enhanced by the addition of triggered MRM (tMRM) for simultaneous confirmation, which maximizes the dwell time in the primary MRM quantitation phase while still acquiring sufficient MRM data to create a composite product ion spectrum. By allowing optimized collision energy for each product ion and maximizing dwell times, tMRM is significantly more sensitive and reliable than conventional product ion scanning. The DtMRM approach provides much higher sensitivity and reproducibility than cMRM. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Diagnosis of uterine cervix cancer using Müller polarimetry: a comparison with histopathology

    NASA Astrophysics Data System (ADS)

    Rehbinder, Jean; Deby, Stanislas; Haddad, Huda; Teig, Benjamin; Nazac, André; Pierangelo, Angelo; Moreau, François

    2015-07-01

    Today around 275,000 women a year worldwide die of cancer of the uterine cervix, largely because of the difficulty of meeting the logistic requirements of organized screening in the developing world. Polarimetric imaging is a promising new technique with tremendous potential for applications in biomedical diagnostics: it is sensitive to slight morphological changes in tissues, can provide wide-field images for screening, and requires only simple light sources such as an LED. This work intends to characterize the polarimetric response of the uterine cervix in its healthy and pathological states. An extensive series of ex-vivo measurements is in progress at the Kremlin Bicêtre hospital near Paris using an imaging multispectral Mueller polarimeter in backscattering configuration. The goal of this study is to evaluate the performance of the polarimetric imaging technique in terms of sensitivity and specificity for the detection of healthy epithelia (healthy squamous epithelium and Malpighian metaplasia) with respect to the diagnosis provided by pathologists from histology slides as the "gold standard". We show that, at λ = 550 nm, performance as high as 62% sensitivity and 64% specificity is achieved by optimizing a simple threshold on the scalar retardance values.

  18. QSAR models of human data can enrich or replace LLNA testing for human skin sensitization

    PubMed Central

    Alves, Vinicius M.; Capuzzi, Stephen J.; Muratov, Eugene; Braga, Rodolpho C.; Thornton, Thomas; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2016-01-01

    Skin sensitization is a major environmental and occupational health hazard. Although many chemicals have been evaluated in humans, there have been no efforts to model these data to date. We have compiled, curated, analyzed, and compared the available human and LLNA data. Using these data, we have developed reliable computational models and applied them for virtual screening of chemical libraries to identify putative skin sensitizers. The overall concordance between murine LLNA and human skin sensitization responses for a set of 135 unique chemicals was low (R = 28-43%), although several chemical classes had high concordance. We succeeded in developing predictive QSAR models of all available human data, with an external correct classification rate (CCR) of 71%. A consensus model integrating concordant QSAR predictions and LLNA results afforded a higher CCR of 82%, but at the expense of reduced external dataset coverage (52%). We used the developed QSAR models for virtual screening of the CosIng database and identified 1061 putative skin sensitizers; for seventeen of these compounds, we found published evidence of their skin sensitization effects. The models reported herein provide a more accurate alternative to LLNA testing for human skin sensitization assessment across diverse chemical data. In addition, they can also be used to guide the structural optimization of toxic compounds to reduce their skin sensitization potential. PMID:28630595

  19. Airway hyperresponsiveness to methacholine in 7-year-old children: sensitivity and specificity for pediatric allergist-diagnosed asthma.

    PubMed

    Carlsten, Chris; Dimich-Ward, Helen; Ferguson, Alexander; Becker, Allan; Dybuncio, Anne; Chan-Yeung, Moira

    2011-02-01

    The operating characteristics of PC(20) values used as cut-offs to define airway hyperresponsiveness, as it informs the diagnosis of asthma in children, are poorly understood. We examine data from a unique cohort to inform this concern. Determine the sensitivity and specificity of incremental PC(20) cut-offs for allergist-diagnosed asthma. Airway reactivity at age 7 was assessed in children within a birth cohort at high risk for asthma; PC(20) for methacholine was determined by standard technique including interpolation. The diagnosis of asthma was considered by the pediatric allergist without knowledge of the methacholine challenge results. Sensitivity and specificity were calculated using a cross-tabulation of asthma diagnosis with incremental PC(20) cut-off values, from 1.0 to 8.0 mg/ml, and plotted as receiver operator characteristic (ROC) curves. The "optimal" cut-off was defined as that PC(20) conferring maximal value for sensitivity plus specificity while the "balanced" cut-off was defined as that PC(20) at which sensitivity and specificity were most equal. 70/348 children (20.1%) were diagnosed with asthma. The optimal and balanced PC(20) cut-offs, both for all children and for females alone, were respectively 3 mg/ml (sensitivity 80.0%, specificity 49.1%) and 2 mg/ml (sensitivity 63.1%, specificity 64.7%). For males alone, the "optimal" and "balanced" PC(20) cut-offs were both 2 mg/ml. For this cohort of 7-year olds at high risk for asthma, methacholine challenge testing using a cut-off value of PC(20) 3 mg/ml conferred the maximal sum of specificity plus sensitivity. For contexts in which higher sensitivity or specificity is desired, other cut-offs may be preferred. Copyright © 2011 Wiley-Liss, Inc.
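The two cut-off criteria used in this record — the "optimal" cut-off maximizing sensitivity plus specificity and the "balanced" cut-off where the two are most equal — can be computed directly from a cross-tabulation. The sketch below is generic; the synthetic PC20 values in the usage are placeholders, not the cohort data.

```python
import numpy as np

def cutoff_stats(pc20, asthma, cutoffs):
    """Sensitivity/specificity of the test 'PC20 <= cutoff' for each cutoff."""
    rows = []
    for c in cutoffs:
        pred = pc20 <= c                    # hyperresponsive -> call asthma
        sens = np.mean(pred[asthma])        # TP / (TP + FN)
        spec = np.mean(~pred[~asthma])      # TN / (TN + FP)
        rows.append((c, sens, spec))
    return rows

def pick_cutoffs(rows):
    """'Optimal' = max(sens + spec); 'balanced' = sens closest to spec."""
    optimal = max(rows, key=lambda r: r[1] + r[2])
    balanced = min(rows, key=lambda r: abs(r[1] - r[2]))
    return optimal[0], balanced[0]
```

Plotting the (1 - specificity, sensitivity) pairs from `cutoff_stats` over a cutoff grid reproduces the ROC curve from which these two operating points are read off.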

  20. Impact of test sensitivity and specificity on pig producer incentives to control Mycobacterium avium infections in finishing pigs.

    PubMed

    van Wagenberg, Coen P A; Backus, Gé B C; Wisselink, Henk J; van der Vorst, Jack G A J; Urlings, Bert A P

    2013-09-01

    In this paper we analyze the impact of the sensitivity and specificity of a Mycobacterium avium (Ma) test on pig producer incentives to control Ma in finishing pigs. A possible Ma control system which includes a serodiagnostic test and a penalty on finishing pigs in herds detected with Ma infection was modelled. Using a dynamic optimization model and a grid search of deliveries of herds from pig producers to slaughterhouse, optimal control measures for pig producers and optimal penalty values for deliveries with increased Ma risk were identified for different sensitivity and specificity values. Results showed that higher sensitivity and lower specificity induced use of more intense control measures and resulted in higher pig producer costs and lower Ma seroprevalence. The minimal penalty value needed to comply with a threshold for Ma seroprevalence in finishing pigs at slaughter was lower at higher sensitivity and lower specificity. With imperfect specificity a larger sample size decreased pig producer incentives to control Ma seroprevalence, because the higher number of false positives resulted in an increased probability of rejecting a batch of finishing pigs irrespective of whether the pig producer applied control measures. We conclude that test sensitivity and specificity must be considered in incentive system design to induce pig producers to control Ma in finishing pigs with minimum negative effects. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. SAFARI new and improved: extending the capabilities of SPICA's imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Roelfsema, Peter; Giard, Martin; Najarro, Francisco; Wafelbakker, Kees; Jellema, Willem; Jackson, Brian; Sibthorpe, Bruce; Audard, Marc; Doi, Yasuo; di Giorgio, Anna; Griffin, Matthew; Helmich, Frank; Kamp, Inga; Kerschbaum, Franz; Meyer, Michael; Naylor, David; Onaka, Takashi; Poglitch, Albrecht; Spinoglio, Luigi; van der Tak, Floris; Vandenbussche, Bart

    2014-08-01

    The Japanese SPace Infrared telescope for Cosmology and Astrophysics, SPICA, aims to provide astronomers with a truly new window on the universe. With a large (3 meter class), cold (6 K) telescope, the mission provides a unique low-background environment optimally suited for highly sensitive instruments limited only by the cosmic background itself. SAFARI, the SpicA FAR infrared Instrument, is a Fourier transform imaging spectrometer designed to fully exploit this extremely low far-infrared background environment. The SAFARI consortium, comprised of European and Canadian institutes, has established an instrument reference design based on a Mach-Zehnder interferometer stage with outputs directed to three extremely sensitive Transition Edge Sensor arrays covering the 35 to 210 μm domain. The baseline instrument provides R > 1000 spectroscopic imaging capabilities over a 2' by 2' field of view. A number of modifications to the instrument to extend its capabilities are under investigation. With the reference design, SAFARI's sensitivity for many objects is limited not only by the detector NEP but also by the level of broadband background radiation - the zodiacal light at the shorter wavelengths and satellite baffle structures at the longer wavelengths. Options to reduce this background are dedicated masks or dispersive elements which can be inserted in the optics as required. The resulting increase in sensitivity can directly enhance the prime science goals of SAFARI; with the expected enhanced sensitivity, astronomers would be in a better position to study thousands of galaxies out to redshift 3 and even many hundreds out to redshifts of 5 or 6. Possibilities to increase the wavelength resolution, at least for the shorter wavelength bands, are being investigated, as this would significantly enhance SAFARI's capabilities to study star and planet formation in our own galaxy.

  2. Thick tissue diffusion model with binding to optimize topical staining in fluorescence breast cancer margin imaging

    NASA Astrophysics Data System (ADS)

    Xu, Xiaochun; Kang, Soyoung; Navarro-Comes, Eric; Wang, Yu; Liu, Jonathan T. C.; Tichauer, Kenneth M.

    2018-03-01

    Intraoperative tumor/surgical margin assessment is required to achieve a higher tumor resection rate in breast-conserving surgery. Though current histology provides incomparable accuracy in margin assessment, thin tissue sectioning and the limited field of view of microscopy make histology too time-consuming for intraoperative applications. If thick-tissue, wide-field imaging can provide an acceptable assessment of tumor cells at the surface of resected tissues, an intraoperative protocol can be developed to guide the surgery and provide immediate feedback for surgeons. Topical staining of margins with cancer-targeted molecular imaging agents has the potential to provide the sensitivity needed to see microscopic cancer on a wide-field image; however, diffusion and nonspecific retention of imaging agents in thick tissue can significantly diminish tumor contrast with conventional methods. Here, we present a mathematical model to accurately simulate nonspecific retention, binding, and diffusion of imaging agents in thick-tissue topical staining to guide and optimize future staining and imaging protocols. To verify the accuracy and applicability of the model, diffusion profiles of cancer-targeted and untargeted (control) nanoparticles at different staining times in A431 tumor xenografts were acquired for model comparison and tuning. The initial findings suggest the existence of nonspecific retention in the tissue, especially at the tissue surface. The simulator can be used to compare the effects of nonspecific retention, receptor binding, and diffusion under various conditions (tissue type, imaging agent) and provides optimal staining and imaging protocols for targeted and control imaging agents.
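The class of model described — dye diffusing in from the stained surface while binding immobile sites — can be sketched as a one-dimensional explicit finite-difference scheme. This is a minimal sketch with nondimensional, illustrative parameters, not the paper's fitted model (which also treats nonspecific retention separately).

```python
import numpy as np

def stain_profile(D, k_on, k_off, bmax, c0, depth, t_end, nx=100):
    """Explicit (FTCS) scheme for topical staining of thick tissue:
    free dye C diffuses in from the surface and binds immobile sites,
        dC/dt  = D*d2C/dx2 - k_on*C*(Bmax - Cb) + k_off*Cb
        dCb/dt =             k_on*C*(Bmax - Cb) - k_off*Cb
    Returns the free (C) and bound (Cb) profiles at time t_end."""
    dx = depth / (nx - 1)
    dt = 0.4 * dx**2 / D                   # stability: dt <= dx^2 / (2D)
    c, cb = np.zeros(nx), np.zeros(nx)
    c[0] = c0                              # surface bathed in staining solution
    t = 0.0
    while t < t_end:
        lap = np.zeros(nx)
        lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        react = k_on * c * (bmax - cb) - k_off * cb
        c = c + dt * (D * lap - react)
        cb = cb + dt * react
        c[0], c[-1] = c0, c[-2]            # fixed surface, no-flux deep boundary
        t += dt
    return c, cb
```

Comparing the bound-dye profile `cb` between a targeted agent (large `k_on`) and a control agent (`k_on` near zero) is the paired-agent logic the abstract alludes to.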

  3. Multiband phase-modulated radio over IsOWC link with balanced coherent homodyne detection

    NASA Astrophysics Data System (ADS)

    Zong, Kang; Zhu, Jiang

    2017-11-01

    In this paper, we present a multiband phase-modulated radio over intersatellite optical wireless communication (IsOWC) link with balanced coherent homodyne detection. The proposed system can provide high linearity for transparent transport of multiband radio frequency (RF) signals and better receiver sensitivity than an intensity-modulation direct-detection (IM/DD) system. The exact analytical expression of the signal to noise and distortion ratio (SNDR) is derived considering the third-order intermodulation product and amplified spontaneous emission (ASE) noise. Numerical results of SNDR for various numbers of subchannels and modulation indices are given. Results indicate that an optimal modulation index exists that maximizes the SNDR. With the same system parameters, the value of the optimal modulation index decreases as the number of subchannels increases.
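The existence of an optimal modulation index follows from the competition between signal power (growing as m²) and third-order intermodulation (growing as m⁶). The toy model below uses an illustrative IMD3 scaling with the number of subchannels, not the paper's exact analytical SNDR expression, but it reproduces the qualitative result: the optimum shifts down as subchannels are added.

```python
import numpy as np

def sndr_db(m, n_sub, noise=1e-6, c3=0.05):
    """Toy SNDR for an N-subchannel phase-modulated link: fundamental
    power grows as m^2 while third-order intermodulation grows as m^6,
    with the number of IMD3 products scaling ~ n_sub^2 (illustrative
    scaling and constants, not the paper's derived expression)."""
    signal = m ** 2
    imd3 = c3 * n_sub ** 2 * m ** 6
    return 10.0 * np.log10(signal / (noise + imd3))

# Sweep the modulation index and locate the SNDR maximum per channel count
m = np.linspace(0.01, 1.0, 2000)
m_opt = {n: m[np.argmax(sndr_db(m, n))] for n in (4, 16, 64)}
```

For this model the optimum has the closed form m_opt = (noise / (2·c3·n²))^(1/6), so the grid search can be checked analytically.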

  4. Optimization of SABRE for polarization of the tuberculosis drugs pyrazinamide and isoniazid.

    PubMed

    Zeng, Haifeng; Xu, Jiadi; Gillen, Joseph; McMahon, Michael T; Artemov, Dmitri; Tyburn, Jean-Max; Lohman, Joost A B; Mewis, Ryan E; Atkinson, Kevin D; Green, Gary G R; Duckett, Simon B; van Zijl, Peter C M

    2013-12-01

    Hyperpolarization produces nuclear spin polarization that is several orders of magnitude larger than that achieved at thermal equilibrium, thus providing extraordinary contrast and sensitivity. As a parahydrogen-induced polarization (PHIP) technique that does not require chemical modification of the substrate to be polarized, Signal Amplification by Reversible Exchange (SABRE) has attracted considerable attention. Using a prototype parahydrogen polarizer, we polarize two drugs used in the treatment of tuberculosis, namely pyrazinamide and isoniazid. We examine this approach in four solvents, methanol-d4, methanol, ethanol, and DMSO, and optimize the polarization transfer magnetic field strength, the temperature, and the intensity and duration of hydrogen bubbling to achieve the best overall signal enhancement and hence hyperpolarization level. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. High Sensitivity Optically Pumped Quantum Magnetometer

    PubMed Central

    Tiporlini, Valentina; Alameh, Kamal

    2013-01-01

    Quantum magnetometers based on optical pumping can achieve sensitivity as high as that of SQUID-based devices. In this paper, we discuss the principle of operation and the optimal design of an optically pumped quantum magnetometer. The ultimate intrinsic sensitivity is calculated, showing that optimal performance of the magnetometer is attained with an optical pump power of 20 μW and an operating temperature of 48°C. Results show that the ultimate intrinsic sensitivity of the quantum magnetometer is 327 fT/Hz1/2 over a bandwidth of 26 Hz and that this sensitivity drops to 130 pT/Hz1/2 in the presence of environmental noise. The quantum magnetometer is shown to be capable of detecting a sinusoidal magnetic field of amplitude as low as 15 pT oscillating at 25 Hz. PMID:23766716

  6. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers.

    PubMed

    Gather, Malte C; Yun, Seok Hyun

    2014-12-08

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here, we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (-7 dB) and support strong optical amplification (gnet=22 cm(-1); 96 dB cm(-1)). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles.

  7. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers

    PubMed Central

    Gather, Malte C.; Yun, Seok Hyun

    2015-01-01

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (−7 dB) and support strong optical amplification (gnet = 22 cm−1; 96 dB cm−1). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles. PMID:25483850

  8. Optimal order policy in response to announced price increase for deteriorating items with limited special order quantity

    NASA Astrophysics Data System (ADS)

    Ouyang, Liang-Yuh; Wu, Kun-Shan; Yang, Chih-Te; Yen, Hsiu-Feng

    2016-02-01

    When a supplier announces an impending price increase due to take effect at a certain time in the future, it is important for each retailer to decide whether to purchase additional stock to take advantage of the present lower price. This study explores the possible effects of price increases on a retailer's replenishment policy when the special order quantity is limited and the rate of deterioration of the goods is assumed to be constant. The two situations discussed in this study are as follows: (1) when the special order time coincides with the retailer's replenishment time and (2) when the special order time occurs during the retailer's sales period. By analysing the total cost savings between special and regular orders during the depletion time of the special order quantity, the optimal order policy for each situation can be determined. We provide several numerical examples to illustrate the theories in practice. Additionally, we conduct a sensitivity analysis on the optimal solution with respect to the main parameters.
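
    The cost-savings comparison described above can be illustrated with a toy model (not the authors' exact formulation; all parameter names and values below are hypothetical): assume a unit-price increase, a holding-cost rate, a constant deterioration rate, and a cap W on the special order quantity, then maximize the approximate savings over the feasible range.

```python
import numpy as np

# Toy cost-savings model for a special order placed just before an announced
# price increase. Illustrative only: parameters are made up, not the paper's.
def savings(Q, dp=2.0, h=0.5, D=1000.0, theta=0.05, p_old=10.0, K=50.0):
    """Approximate savings of ordering Q units at the old price.

    dp     : announced unit-price increase
    h      : holding cost per unit per period
    D      : demand rate (units/period)
    theta  : constant deterioration rate
    p_old  : current unit price
    K      : one avoided regular-order setup cost
    """
    T = Q / D                                    # depletion time of the special order
    purchase_gain = dp * Q                       # units bought before the increase
    holding_cost = h * Q * T / 2                 # average inventory Q/2 held for time T
    deterioration_loss = p_old * theta * Q * T / 2
    return purchase_gain - holding_cost - deterioration_loss + K

def best_special_order(W=1500.0, n=3001):
    """Scan the feasible range [0, W] for the savings-maximizing quantity."""
    grid = np.linspace(0.0, W, n)
    return grid[np.argmax(savings(grid))]

print(best_special_order())
```

    With these made-up numbers the unconstrained optimum exceeds the limit W, so the best special order is capped at W, mirroring the "limited special order quantity" situation in the abstract.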

  9. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
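
    The complex-variable (complex-step) approach mentioned above can be sketched in a few lines: perturbing the input by a tiny imaginary step ih yields the first derivative from the imaginary part of the output, free of the subtractive cancellation that limits finite differences. This is a generic illustration of the technique, not the NASA solver's code.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """First derivative via the complex-step method: no subtraction occurs,
    so the step can be taken extremely small without round-off error."""
    return np.imag(f(x + 1j * h)) / h

def central_difference(f, x, h=1e-6):
    """Conventional central difference, limited by subtractive cancellation."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: np.exp(x) * np.sin(x)                      # smooth test function
exact = np.exp(1.5) * (np.sin(1.5) + np.cos(1.5))        # analytic derivative at 1.5

print(abs(complex_step_derivative(f, 1.5) - exact))      # near machine precision
print(abs(central_difference(f, 1.5) - exact))           # several orders larger
```

    The same idea, applied to the discrete residual and cost functional, gives discretely consistent adjoint operators while, as the abstract notes, drastically shortening the implementation and debugging cycle.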

  10. Shape design sensitivity analysis and optimization of three dimensional elastic solids using geometric modeling and automatic regridding. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Yao, Tse-Min; Choi, Kyung K.

    1987-01-01

    An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.

  11. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator applied to the complex field variable complicates the adjoint sensitivity, causing the originally real-valued design variable to become complex during the iterative solution procedure. The adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real-part operator has been used to extract the real part of the sensitivity and preserve the real-valued design variable. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on a self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts and substituting them back into the wave equations to derive coupled equations equivalent to the original ones, with the infinite free space truncated by perfectly matched layers. The topology optimization problems are thereby transformed into forms defined on real instead of complex functional spaces; the adjoint analysis is carried out on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived; and the phase-dependence problem is avoided in the derived structural topology. Several numerical examples demonstrate the robustness of the derived self-consistent adjoint analysis.

  12. Dynamic tuning of chemiresistor sensitivity using mechanical strain

    DOEpatents

    Martin, James E; Read, Douglas H

    2014-09-30

    The sensitivity of a chemiresistor sensor can be dynamically tuned using mechanical strain. The increase in sensitivity is a smooth, continuous function of the applied strain, and the effect can be reversible. Sensitivity tuning enables the response curve of the sensor to be dynamically optimized for sensing analytes, such as volatile organic compounds, over a wide concentration range.

  13. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived, and its application to linear programming is presented.

  14. Carbon nanohorn sensitized electrochemical immunosensor for rapid detection of microcystin-LR.

    PubMed

    Zhang, Jing; Lei, Jianping; Xu, Chuanlai; Ding, Lin; Ju, Huangxian

    2010-02-01

    A sensitive electrochemical immunosensor was proposed by functionalizing single-walled carbon nanohorns (SWNHs) with the analyte for microcystin-LR (MC-LR) detection. The functionalization of SWNHs was performed by covalently binding MC-LR to the abundant carboxylic groups on the cone-shaped tips of SWNHs in the presence of linkage reagents, and was characterized with Raman spectroscopy, X-ray photoelectron spectroscopy, scanning electron microscopy, and transmission electron microscopy. Compared with single-walled carbon nanotubes, SWNHs as immobilization matrixes showed a better sensitizing effect. Using a home-prepared horseradish peroxidase-labeled MC-LR antibody for the competitive immunoassay, under optimal conditions, the immunosensor exhibited a wide linear response to MC-LR ranging from 0.05 to 20 microg/L with a detection limit of 0.03 microg/L at a signal-to-noise ratio of 3. This method showed good accuracy, acceptable precision, and reproducibility. The assay results for MC-LR in polluted water were in good agreement with the reference values. The proposed strategy provided a biocompatible immobilization and sensitized recognition platform for small-antigen analytes and possessed promising application in food and environmental monitoring.
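
    The detection limit quoted at a signal-to-noise ratio of 3 follows the standard 3-sigma convention: LOD = 3 x (blank noise standard deviation) / (calibration slope). A generic sketch with made-up numbers, not the paper's data:

```python
import numpy as np

# Hypothetical calibration data: signal vs. analyte concentration (ug/L).
conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 20.0])
signal = 2.0 * conc + 0.1            # idealized linear response, slope 2 (invented)

slope, intercept = np.polyfit(conc, signal, 1)   # least-squares calibration line
sigma_blank = 0.02                               # std. dev. of blank signal (invented)

lod = 3 * sigma_blank / slope                    # 3-sigma detection limit
print(round(lod, 3))                             # 0.03 ug/L with these numbers
```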

  15. CIP (cleaning-in-place) stability of AlGaN/GaN pH sensors.

    PubMed

    Linkohr, St; Pletschen, W; Schwarz, S U; Anzt, J; Cimalla, V; Ambacher, O

    2013-02-20

    The CIP stability of pH-sensitive ion-sensitive field-effect transistors based on AlGaN/GaN heterostructures was investigated. For epitaxial AlGaN/GaN films with high structural quality, CIP tests did not degrade the sensor surface, and pH sensitivities of 55-58 mV/pH were achieved. Several different passivation schemes based on SiO(x), SiN(x), AlN, and nanocrystalline diamond were compared, with special attention given to compatibility with standard microelectronic device technologies as well as biocompatibility of the passivation films. The CIP stability was evaluated with a main focus on morphological stability. All stacks containing a SiO₂ or an AlN layer were etched by the NaOH solution in the CIP process. Reliable passivations withstanding the NaOH solution were provided by stacks of ICP-CVD-grown and sputtered SiN(x) as well as diamond-reinforced passivations. Drift levels of about 0.001 pH/h and stable sensitivity over several CIP cycles were achieved for optimized sensor structures. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Insight into D-A-π-A Structured Sensitizers: A Promising Route to Highly Efficient and Stable Dye-Sensitized Solar Cells.

    PubMed

    Wu, Yongzhen; Zhu, Wei-Hong; Zakeeruddin, Shaik M; Grätzel, Michael

    2015-05-13

    The dye-sensitized solar cell (DSSC) is one of the most promising photovoltaic technologies with potential for low cost, light weight, and good flexibility. The practical application of DSSCs requires further improvement in power conversion efficiency and long-term stability. Recently, significant progress has been witnessed in DSSC research owing to the novel concept of the D-A-π-A motif for the molecular engineering of organic photosensitizers. New organic and porphyrin dyes based on the D-A-π-A motif can not only enhance photovoltaic performance, but also improve durability in DSSC applications. This Spotlight on Applications highlights recent advances in D-A-π-A-based photosensitizers, specifically focusing on the mechanism of efficiency and stability enhancements. We also provide insight into the role of the additional acceptor, as well as the trade-off involved in extending the long-wavelength response. The basic principles involved in the molecular engineering of efficient D-A-π-A sensitizers are outlined, providing a clear road map for modulating the energy bands, rationally extending the response wavelength, and optimizing photovoltaic efficiency step by step.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karaulanov, Todor; Savukov, Igor; Kim, Young Jin

    We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell, and prevents the pump beam from entering the probe detection channel. By coupling the lasers in multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.

  18. Parameter optimization, sensitivity, and uncertainty analysis of an ecosystem model at a forest flux tower site in the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende

    2014-01-01

    Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, that consider carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct the parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporated a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
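
    The local (one-at-a-time) sensitivity analysis described above can be sketched generically: perturb each calibrated parameter by a small relative amount and record the change in the model cost function (sum of squared errors against observations). The toy model and parameter names below are illustrative stand-ins, not EDCM's.

```python
import numpy as np

def model(t, params):
    """Toy saturating-growth model standing in for a process-based ecosystem model."""
    a, b = params["a"], params["b"]
    return a * t / (b + t)

t_obs = np.linspace(1.0, 20.0, 25)
truth = {"a": 12.0, "b": 3.0}
y_obs = model(t_obs, truth)                       # synthetic 'measurements'

def cost(params):
    """Sum-of-squared-errors cost function against the observations."""
    return np.sum((model(t_obs, params) - y_obs) ** 2)

def local_sensitivity(params, rel_step=0.01):
    """Change in cost per 1% relative perturbation of each parameter."""
    sens = {}
    for name in params:
        perturbed = dict(params)
        perturbed[name] = params[name] * (1 + rel_step)
        sens[name] = cost(perturbed) - cost(params)
    return sens

print(local_sensitivity(truth))   # larger value = parameter the cost is more sensitive to
```

    Ranking parameters by this index is how a study identifies which parameters (like PPDF1 and PRDX here) dominate the cost function before committing to a global calibration.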

  19. Flexible Measurement of Bioluminescent Reporters Using an Automated Longitudinal Luciferase Imaging Gas- and Temperature-optimized Recorder (ALLIGATOR).

    PubMed

    Crosby, Priya; Hoyle, Nathaniel P; O'Neill, John S

    2017-12-13

    Luciferase-based reporters of cellular gene expression are in widespread use for both longitudinal and end-point assays of biological activity. In circadian rhythms research, for example, clock gene fusions with firefly luciferase give rise to robust rhythms in cellular bioluminescence that persist over many days. Technical limitations associated with photomultiplier tubes (PMTs) or conventional microscopy-based methods for bioluminescence quantification have typically demanded that cells and tissues be maintained under quite non-physiological conditions during recording, with a trade-off between sensitivity and throughput. Here, we report a refinement of prior methods that allows long-term bioluminescence imaging with high sensitivity and throughput, supports a broad range of culture conditions, including variable gas and humidity control, and accepts many different tissue culture plates and dishes. This automated longitudinal luciferase imaging gas- and temperature-optimized recorder (ALLIGATOR) also allows the observation of spatial variations in luciferase expression across a cell monolayer or tissue, which cannot readily be observed by traditional methods. We highlight how the ALLIGATOR provides vastly increased flexibility for the detection of luciferase activity when compared with existing methods.

  20. Enantiomeric separation of fluoxetine and norfluoxetine in plasma and serum samples with high detection sensitivity capillary electrophoresis.

    PubMed

    Desiderio, C; Rudaz, S; Raggi, M A; Fanali, S

    1999-11-01

    A capillary electrophoresis method was optimized for the stereoselective analysis of the antidepressant drug fluoxetine and its main demethylated metabolite norfluoxetine using a cyclodextrin-modified sodium phosphate buffer at pH 2.5. The combination of a neutral and a negatively charged cyclodextrin, dimethylated-beta and phosphated-gamma, respectively, provided baseline enantiomeric separation of the two compounds. The very low concentrations of chiral selectors employed, together with the use of a high sensitivity detection cell of special design (zeta-shaped) in a diode array UV detector, allowed us to reach limits of detection of 0.005 and 0.01 microg/mL for fluoxetine and norfluoxetine, respectively. Analysis of fluoxetine and norfluoxetine standard mixtures showed good reproducibility of migration times and peak areas, and linearity in the concentration range of 0.1-2.0 microg/mL. The optimized method was applied to the analysis of clinical serum and plasma samples of patients under depression therapy. In all the analyzed samples the enantiomeric forms of fluoxetine and norfluoxetine were easily identified. The fluoxetine and metabolite enantiomeric ratios confirmed the stereoselectivity of the metabolic process of fluoxetine, in accordance with the literature data.

  1. High efficiency and stability of quasi-solid-state dye-sensitized ZnO solar cells using graphene incorporated soluble polystyrene gel electrolytes

    NASA Astrophysics Data System (ADS)

    Bi, Shi-Qing; Meng, Fan-Li; Zheng, Yan-Zhen; Han, Xue; Tao, Xia; Chen, Jian-Feng

    2014-12-01

    We report on the preparation of highly effective composite electrolytes by combining two-dimensional graphene (Gra) and soluble polystyrene (PS) nanobeads on a Pt counter electrode for the quasi-solid-state electrolytes of ZnO-based dye-sensitized solar cells (DSCs). Under an optimized Gra/electrolyte ratio of 12 mg mL-1, the ionic conductivity (σ) of the Gra-PS electrolyte was significantly improved from 32.8 mS cm-1 to 39.8 mS cm-1, and electrochemical impedance spectroscopy (EIS) analysis showed that the ZnO-DSC with the optimized composite electrolyte possessed the lowest impedance value. As a result, the overall power conversion efficiency (PCE) of the quasi-solid-state ZnO-DSCs was significantly enhanced from an initial 4.09% to 5.08%. Moreover, long-term stability assays showed that the gel-state Gra-PS ZnO-DSC could retain over 90% of its initial PCE after 1000 h of irradiation under full sunlight outdoors. It is anticipated that this work may provide an effective way to increase cell efficiency through the introduction of Gra into a gel electrolyte, as well as great potential for practical application.

  2. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and confined to a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement of numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  3. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
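
    The maximum-expected-utility decision rule underlying the model can be sketched directly: given a utility matrix U[d, h] (the utility of deciding d when hypothesis h is true) and posterior probabilities for the three classes, choose the decision that maximizes the expected utility. Under the equal-error-utility assumption with 0/1 scoring, this reduces to picking the maximum-posterior class. A generic sketch with invented posteriors, not the authors' code:

```python
import numpy as np

def meu_decision(posteriors, utility):
    """Pick the decision d maximizing sum_h U[d, h] * P(h | data)."""
    expected = utility @ posteriors           # expected utility of each decision
    return int(np.argmax(expected))

posteriors = np.array([0.2, 0.5, 0.3])        # P(class | data), made-up values

# Equal-error-utility 0/1 matrix: correct decisions score 1, all errors score 0.
utility_01 = np.eye(3)
print(meu_decision(posteriors, utility_01))   # selects the maximum-posterior class
```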

  4. Aircraft optimization by a system approach: Achievements and trends

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.

  5. Development and application of optimum sensitivity analysis of structures

    NASA Technical Reports Server (NTRS)

    Barthelemy, J. F. M.; Hallauer, W. L., Jr.

    1984-01-01

    The research focused on developing an algorithm applying optimum sensitivity analysis for multilevel optimization. The research efforts have been devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single-level solutions was completed and tested.

  6. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
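
    The SampEn statistic used above to quantify schedule regularity can be computed as follows: count template matches of length m and of length m+1 within a tolerance r, then take the negative log of their ratio. This is a standard textbook implementation, not DASim's.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: -ln(A/B), where B counts length-m template matches
    within tolerance r (Chebyshev distance) and A counts length-(m+1)
    matches. Self-matches are excluded. Lower values = more regular signal."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1        # exclude the self-match
        return count

    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 200))   # highly regular signal
noisy = rng.standard_normal(200)                   # irregular signal

print(sample_entropy(regular))
print(sample_entropy(noisy))
```

    A periodic schedule-like signal yields a much lower SampEn than noise, which is exactly the knob the paper tunes to make synthetic activity patterns realistically regular or irregular.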

  7. Polymer nanocomposite nanomechanical cantilever sensors: material characterization, device development and application in explosive vapour detection.

    PubMed

    Seena, V; Fernandes, Avil; Pant, Prita; Mukherji, Soumyo; Rao, V Ramgopal

    2011-07-22

    This paper reports an optimized and highly sensitive piezoresistive SU-8 nanocomposite microcantilever sensor and its application to the detection of explosives in the vapour phase. The optimization focused on improving its electrical, mechanical and transduction characteristics. We have achieved a better dispersion of carbon black (CB) in the SU-8/CB nanocomposite piezoresistor and arrived at an optimal range of 8-9 vol% CB concentration by performing a systematic mechanical and electrical characterization of polymer nanocomposites. Mechanical characterization of SU-8/CB nanocomposite thin films was performed using the nanoindentation technique with an appropriate substrate effect analysis. Piezoresistive microcantilevers with the optimum carbon black concentration were fabricated using a design aimed at surface stress measurements with reduced fabrication process complexity. The optimal range of 8-9 vol% CB concentration resulted in improved sensitivity, low device variability and a low noise level. The resonant frequency and spring constant of the microcantilever were found to be 22 kHz and 0.4 N m(-1) respectively. The devices exhibited a surface stress sensitivity of 7.6 ppm (mN m(-1))(-1), and the noise characterization results support their suitability for biochemical sensing applications. This paper also reports the ability of the sensor to detect TNT vapour concentrations down to less than six parts per billion with a sensitivity of 1 mV/ppb.
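
    The quoted resonant frequency and spring constant are linked by the usual lumped single-mode relation f = (1/2π)·sqrt(k/m_eff). A quick back-of-envelope sketch (an idealized harmonic-oscillator model, not the paper's analysis) recovers the implied effective mass and the small-mass frequency responsivity Δf/Δm ≈ -f/(2·m_eff):

```python
import math

k = 0.4          # spring constant, N/m (reported device value)
f = 22e3         # resonant frequency, Hz (reported device value)

# Effective mass implied by the lumped harmonic-oscillator model f = (1/2pi)*sqrt(k/m).
m_eff = k / (2 * math.pi * f) ** 2           # kg, ~2e-11 (tens of nanograms)

# First-order frequency shift per unit added mass (small added-mass limit).
responsivity = -f / (2 * m_eff)              # Hz per kg (negative: mass lowers f)

print(f"effective mass ~ {m_eff * 1e12:.1f} ng")
print(f"responsivity ~ {responsivity:.3e} Hz/kg")
```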

  8. MMASS: an optimized array-based method for assessing CpG island methylation.

    PubMed

    Ibrahim, Ashraf E K; Thorne, Natalie P; Baird, Katie; Barbosa-Morais, Nuno L; Tavaré, Simon; Collins, V Peter; Wyllie, Andrew H; Arends, Mark J; Brenton, James D

    2006-01-01

    We describe an optimized microarray method for identifying genome-wide CpG island methylation called microarray-based methylation assessment of single samples (MMASS) which directly compares methylated to unmethylated sequences within a single sample. To improve previous methods we used bioinformatic analysis to predict an optimized combination of methylation-sensitive enzymes that had the highest utility for CpG-island probes and different methods to produce unmethylated representations of test DNA for more sensitive detection of differential methylation by hybridization. Subtraction or methylation-dependent digestion with McrBC was used with optimized (MMASS-v2) or previously described (MMASS-v1, MMASS-sub) methylation-sensitive enzyme combinations and compared with a published McrBC method. Comparison was performed using DNA from the cell line HCT116. We show that the distribution of methylation microarray data is inherently skewed and requires exogenous spiked controls for normalization and that analysis of digestion of methylated and unmethylated control sequences together with linear fit models of replicate data showed superior statistical power for the MMASS-v2 method. Comparison with previous methylation data for HCT116 and validation of CpG islands from PXMP4, SFRP2, DCC, RARB and TSEN2 confirmed the accuracy of MMASS-v2 results. The MMASS-v2 method offers improved sensitivity and statistical power for high-throughput microarray identification of differential methylation.

  9. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.

  10. Synthetic Minority Oversampling Technique and Fractal Dimension for Identifying Multiple Sclerosis

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Dong; Zhang, Yin; Phillips, Preetha; Dong, Zhengchao; Wang, Shuihua

    Multiple sclerosis (MS) is a severe brain disease. Early detection can provide timely treatment. Fractal dimension can provide a statistical index of pattern changes with scale in a given brain image. In this study, our team used the susceptibility weighted imaging technique to obtain 676 MS slices and 880 healthy slices. We used the synthetic minority oversampling technique to process the unbalanced dataset. Then, we used the Canny edge detector to extract distinguishing edges. The Minkowski-Bouligand dimension, a fractal dimension estimation method, was used to extract features from the edges. A single-hidden-layer neural network was used as the classifier. Finally, we proposed a three-segment representation biogeography-based optimization to train the classifier. Our method achieved a sensitivity of 97.78±1.29%, a specificity of 97.82±1.60% and an accuracy of 97.80±1.40%. The proposed method is superior to seven state-of-the-art methods in terms of sensitivity and accuracy.
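
    The Minkowski-Bouligand (box-counting) dimension used as the feature extractor can be sketched on a binary edge map: cover the image with boxes of decreasing size, count the occupied boxes N(s), and take the slope of log N(s) against log(1/s). As a sanity check we apply it to a straight diagonal line, whose dimension is 1; this is a generic sketch, not the authors' pipeline.

```python
import numpy as np

def box_counting_dimension(edges, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a square binary image by box counting."""
    n = edges.shape[0]
    counts = []
    for s in sizes:
        # Partition into (n/s) x (n/s) boxes; count boxes containing any edge pixel.
        boxed = edges[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    # Slope of log N(s) vs log(1/s) estimates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.zeros((256, 256), dtype=bool)
np.fill_diagonal(img, True)                  # a straight line: dimension should be ~1
print(round(box_counting_dimension(img), 2))
```

    Applied to Canny edge maps of brain slices, the same estimate becomes a scalar feature summarizing how edge complexity scales, which the classifier then consumes.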

  11. Monte Carlo Modeling-Based Digital Loop-Mediated Isothermal Amplification on a Spiral Chip for Absolute Quantification of Nucleic Acids.

    PubMed

    Xia, Yun; Yan, Shuangqian; Zhang, Xian; Ma, Peng; Du, Wei; Feng, Xiaojun; Liu, Bi-Feng

    2017-03-21

    Digital loop-mediated isothermal amplification (dLAMP) is an attractive approach for absolute quantification of nucleic acids with high sensitivity and selectivity. Theoretical and numerical analysis of dLAMP provides necessary guidance for the design and analysis of dLAMP devices. In this work, a mathematical model was proposed on the basis of the Monte Carlo method and the theories of Poisson statistics and chemometrics. To examine the established model, we fabricated a spiral chip with 1200 uniform and discrete reaction chambers (9.6 nL) for absolute quantification of pathogenic DNA samples by dLAMP. Under the optimized conditions, dLAMP analysis on the spiral chip realized quantification of nucleic acids spanning over 4 orders of magnitude in concentration with sensitivity as low as 8.7 × 10⁻² copies/μL in 40 min. The experimental results were consistent with the proposed mathematical model, which could provide useful guidelines for the future development of dLAMP devices.
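    The Poisson statistics underlying absolute quantification in digital amplification can be sketched in a few lines. The chamber count and volume below follow the abstract, but the positive-chamber count is a hypothetical input:

```python
import math

def dlamp_concentration(positive, total, chamber_volume_ul):
    """Absolute quantification from digital amplification counts.
    Assuming template molecules distribute across chambers following a
    Poisson distribution, the mean occupancy per chamber is
    lam = -ln(1 - p), where p is the fraction of positive chambers;
    dividing by the chamber volume gives copies per microliter."""
    p = positive / total
    lam = -math.log(1.0 - p)
    return lam / chamber_volume_ul

# Hypothetical readout: 300 of 1200 chambers (9.6 nL = 9.6e-3 uL each)
# turn positive.
print(dlamp_concentration(300, 1200, 9.6e-3))
```

    The logarithmic correction accounts for chambers that received more than one template molecule, which is why digital assays remain quantitative well above one copy per chamber.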

  12. Study of node and mass sensitivity of resonant mode based cantilevers with concentrated mass loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kewei, E-mail: drzkw@126.com; Chai, Yuesheng; Fu, Jiahui

    2015-12-15

    Resonant-mode based cantilevers are an important type of acoustic wave based mass-sensing devices. In this work, the governing vibration equation of a bi-layer resonant-mode based cantilever loaded with a concentrated mass is established using a modal analysis method. The effects of resonance modes and mass loading conditions on the nodes and mass sensitivity of the cantilever were theoretically studied. The results suggested that the node did not shift when the concentrated mass was loaded at a specific position. The mass sensitivity of the cantilever was linearly proportional to the square of the point displacement at the mass loading position for all resonance modes. For the first resonance mode, when the mass loading position x_c satisfied 0 < x_c < ∼0.3l (l is the cantilever beam length and 0 represents the rigid end), mass sensitivity decreased with increasing mass, while the opposite trend was obtained when the loading position satisfied ∼0.3l ≤ x_c ≤ l. Mass sensitivity did not change when the concentrated mass was loaded at the rigid end. This work can provide scientific guidance for optimizing the mass sensitivity of a resonant-mode based cantilever.
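    The reported proportionality between mass sensitivity and the squared modal displacement at the loading point can be illustrated with the standard first mode shape of a clamped-free (Euler-Bernoulli) beam. This is a generic single-layer sketch, not the paper's bi-layer model:

```python
import numpy as np

BETA_L = 1.8751  # first-mode eigenvalue of a clamped-free beam
SIGMA = (np.sinh(BETA_L) - np.sin(BETA_L)) / (np.cosh(BETA_L) + np.cos(BETA_L))

def mode_shape(xi):
    """First flexural mode of a clamped-free (cantilever) beam,
    with xi = x / l and the clamped (rigid) end at xi = 0."""
    b = BETA_L * xi
    return np.cosh(b) - np.cos(b) - SIGMA * (np.sinh(b) - np.sin(b))

def relative_mass_sensitivity(xi):
    """Sensitivity taken proportional to the squared modal displacement
    at the loading point, normalized so the free end (xi = 1) equals 1."""
    return mode_shape(xi) ** 2 / mode_shape(1.0) ** 2

print(relative_mass_sensitivity(0.0))  # clamped end: zero sensitivity
print(relative_mass_sensitivity(1.0))  # free end: maximum sensitivity
```

    Consistent with the abstract, a mass at the rigid end (where the modal displacement vanishes) produces no frequency shift, while the free end gives the largest response.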

  13. Adjoint sensitivity analysis of a tumor growth model and its application to spatiotemporal radiotherapy optimization.

    PubMed

    Fujarewicz, Krzysztof; Lakomiec, Krzysztof

    2016-12-01

    We investigate a spatial model of tumor growth and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, as in intensity modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate this approach on the tumor proliferation, invasion and response to radiotherapy (PIRT) model and compare the accuracy and computational effort of the method with those of simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity in gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.
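    The adjoint idea can be sketched on a toy time-discretized growth model: one backward pass yields the sensitivity of the final tumor burden to the dose at every time step, whereas forward finite differences need one extra simulation per dose variable. The scalar logistic model and all parameter values below are illustrative stand-ins for the PIRT model:

```python
import numpy as np

def forward(u0, d, r, a, dt):
    """Explicit-Euler logistic growth with a time-varying dose d[k]:
    u[k+1] = u[k] + dt * (r*u[k]*(1 - u[k]) - a*d[k]*u[k])."""
    u = np.empty(len(d) + 1)
    u[0] = u0
    for k in range(len(d)):
        u[k + 1] = u[k] + dt * (r * u[k] * (1 - u[k]) - a * d[k] * u[k])
    return u

def adjoint_gradient(u, d, r, a, dt):
    """One backward pass gives dJ/dd[k] for every k, where J = u[N]."""
    n = len(d)
    grad = np.empty(n)
    lam = 1.0  # dJ/du[N]
    for k in range(n - 1, -1, -1):
        grad[k] = lam * (-dt * a * u[k])                  # via du[k+1]/dd[k]
        lam *= 1 + dt * (r * (1 - 2 * u[k]) - a * d[k])   # back to dJ/du[k]
    return grad

r, a, dt = 0.3, 0.5, 0.1
d = np.full(50, 1.0)          # constant dose schedule
u = forward(0.2, d, r, a, dt)
g = adjoint_gradient(u, d, r, a, dt)

# Spot-check one component against a forward finite difference.
eps = 1e-6
dp = d.copy()
dp[10] += eps
fd = (forward(0.2, dp, r, a, dt)[-1] - u[-1]) / eps
print(g[10], fd)
```

    The gradient is negative (more dose reduces final tumor burden), and the adjoint values match the finite differences while costing a single backward sweep instead of one perturbed simulation per time step.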

  14. SU-F-19A-08: Optimal Time Release Schedule of In-Situ Drug Release During Permanent Prostate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormack, R; Ngwa, W; Makrigiorgos, G

    Purpose: Permanent prostate brachytherapy spacers can be used to deliver sustained doses of radiosensitizing drug directly to the target, in order to enhance the radiation effect. Implantable nanoplatforms for chemo-radiation therapy (INCeRTs) have a maximum drug capacity and can be engineered to control the drug release schedule. The optimal schedule for sensitization during continuous low-dose-rate irradiation is unknown. This work studies the optimal drug release schedule for both traditional sensitizers and those that work by suppressing DNA repair processes. Methods: Six brachytherapy treatment plans were used to model the anatomy and implant geometry and to calculate the spatial distribution of radiation dose and drug concentration for a range of drug diffusion parameters. Three-state partial differential equations (healthy, damaged, or dead cells) modeled the effect of continuous radiation (radiosensitivities α, β) and cellular repair (time tr) on a cell population. Radiosensitization was modeled as a concentration-dependent change in α, β, or tr, with variable release duration under the constraint of fixed total drug release. Average cell kill was used to measure effectiveness. Sensitization by means of both enhanced damage and reduced repair was studied. Results: The optimal release duration depends on the concentration of radiosensitizer relative to the saturation concentration (csat), above which additional sensitization does not occur. Long-duration drug release when enhancing α or β maximizes cell death when drug concentrations are generally over csat. Short-term release is optimal for concentrations below saturation. Sensitization by suppressing repair shows a similar though less distinct trend that is more affected by the radiation dose distribution. Conclusion: Models of sustained local radiosensitization show potential to increase the effectiveness of radiation in permanent prostate brachytherapy. INCeRTs with high drug capacity produce the greatest benefit with drug release over weeks. If in-vivo drug concentrations cannot approach the saturation concentration, a release duration of days is optimal. DOD 1R21CA16977501; A. David Mazzone Awards Program 2012PD164.

  15. A Biomarker Combining Imaging and Neuropsychological Assessment for Tracking Early Alzheimer's Disease in Clinical Trials.

    PubMed

    Verma, Nishant; Beretvas, S Natasha; Pascual, Belen; Masdeu, Joseph C; Markey, Mia K

    2018-03-14

    Combining optimized cognitive (Alzheimer's Disease Assessment Scale-Cognitive subscale, ADAS-Cog) and atrophy markers of Alzheimer's disease for tracking progression in clinical trials may provide greater sensitivity than currently used methods, which have yielded negative results in multiple recent trials. Furthermore, it is critical to clarify the relationship among the subcomponents yielded by cognitive and imaging testing, to address the symptomatic and anatomical variability of Alzheimer's disease. Using latent variable analysis, we thoroughly investigated the relationship between cognitive impairment, as assessed on the ADAS-Cog, and cerebral atrophy. A biomarker was developed for Alzheimer's clinical trials that combines cognitive and atrophy markers. Atrophy within specific brain regions was found to be closely related to impairment in the cognitive domains of memory, language, and praxis. The proposed biomarker showed significantly better sensitivity in tracking the progression of cognitive impairment than the ADAS-Cog in simulated trials and a real-world problem. The biomarker also improved the selection of mild cognitive impairment (MCI) patients (78.8±4.9% specificity at 80% sensitivity) who will evolve to Alzheimer's disease for clinical trials. The proposed biomarker boosts the efficacy of clinical trials focused on the MCI stage by significantly improving the sensitivity to detect treatment effects and the selection of MCI patients who will evolve to Alzheimer's disease.

  16. Design of coherent receiver optical front end for unamplified applications.

    PubMed

    Zhang, Bo; Malouin, Christian; Schmidt, Theodore J

    2012-01-30

    Advanced modulation schemes, together with coherent detection and digital signal processing, have enabled the next generation of high-bandwidth optical communication systems. One of the key advantages of coherent detection is its superior receiver sensitivity compared to direct detection, owing to the gain provided by the local oscillator (LO). In unamplified applications, such as metro and edge networks, the ultimate receiver sensitivity is dictated by the amount of shot noise, thermal noise, and the residual beating of the local oscillator with relative intensity noise (LO-RIN). We show that the best sensitivity is achieved when the thermal noise is balanced with the residual LO-RIN beat noise, which results in an optimum LO power. The impact of thermal noise from the transimpedance amplifier (TIA), RIN from the LO, and the common mode rejection ratio (CMRR) of a balanced photodiode are individually analyzed via analytical models and compared to numerical simulations. The analytical model results match well with those of the numerical simulations, providing a simplified method to quantify the impact of receiver design tradeoffs. For a practical 100 Gb/s integrated coherent receiver with 7% FEC overhead, we show that an optimum receiver sensitivity of -33 dBm can be achieved at the GFEC cliff of 8.55E-5 if the LO power is optimized at 11 dBm. We also discuss a potential method to monitor the imperfections of a balanced, integrated coherent receiver.
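    The trade-off described above, where the optimum LO power balances thermal noise against the residual LO-RIN beat noise, can be sketched numerically. All receiver parameter values below are assumptions chosen for illustration, not figures from the paper:

```python
import numpy as np

# Illustrative single-quadrature SNR model for a balanced coherent receiver.
q = 1.602e-19              # electron charge [C]
R = 0.8                    # photodiode responsivity [A/W]
B = 32e9                   # noise bandwidth [Hz]
i_th = 15e-12              # TIA input-referred thermal noise [A/sqrt(Hz)]
rin = 10 ** (-145 / 10)    # LO relative intensity noise [1/Hz]
cmrr = 10 ** (-20 / 20)    # residual common-mode leakage (20 dB CMRR)
P_sig = 10 ** (-33 / 10) * 1e-3   # -33 dBm received signal [W]

def snr(P_lo):
    """Signal power over the sum of shot, thermal, and residual
    LO-RIN beat noise variances."""
    signal = (2 * R * np.sqrt(P_lo * P_sig)) ** 2
    shot = 2 * q * R * P_lo * B
    thermal = i_th ** 2 * B
    rin_beat = (cmrr * R * P_lo) ** 2 * rin * B
    return signal / (shot + thermal + rin_beat)

P_lo = np.logspace(-4, -1, 400)          # sweep 0.1 mW to 100 mW
best = P_lo[np.argmax(snr(P_lo))]
print(10 * np.log10(best / 1e-3), "dBm")  # LO power maximizing SNR
```

    Differentiating this SNR shows the maximum occurs where the thermal term equals the RIN-beat term (shot noise, being linear in LO power, shifts the achievable SNR but not the location of the optimum), which matches the balancing argument in the abstract.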

  17. Meshless methods in shape optimization of linear elastic and thermoelastic solids

    NASA Astrophysics Data System (ADS)

    Bobaru, Florin

    This dissertation proposes a meshless approach to problems in shape optimization of elastic and thermoelastic solids. The Element-free Galerkin (EFG) method is used for this purpose. The ability of the EFG method to avoid the remeshing normally required in a Finite Element approach to correct highly distorted meshes is clearly demonstrated by several examples. The shape optimization example of a thermal cooling fin shows a dramatic improvement in the objective compared to a previous FEM analysis. More importantly, the new solution, displaying large shape changes relative to the initial design, was completely missed by the FEM analysis. The EFG formulation given here for shape optimization "uncovers" new solutions that are, apparently, unobtainable via a FEM approach. This is one of the main achievements of our work. The variational formulations for the analysis problem and for the sensitivity problems are obtained with a penalty method for imposing the displacement boundary conditions. The continuum formulation is general, so the 2D and 3D cases differ only minimally from one another. Transient thermoelastic problems can also use the present development at each time step, enabling shape optimization for time-dependent thermal problems. For the elasticity framework, displacement sensitivity is obtained in the EFG context, with excellent agreement with analytical solutions for several test problems. The shape optimization of a fillet is carried out in great detail, and the results show significant improvement of the EFG solution over the FEM and Boundary Element Method solutions. In our approach we avoid differentiating the complicated EFG shape functions with respect to the shape design parameters by using a particular discretization for the sensitivity calculations. Displacement and temperature sensitivities are formulated for the shape optimization of a linear thermoelastic solid. 
Two important examples considered in this work, the optimization of a thermal fin and of a uniformly loaded thermoelastic beam, reveal new characteristics of the EFG method in shape optimization applications. Among the advantages of the EFG method over traditional FEM treatments of shape optimization problems, the most important are: elimination of post-processing for stress and strain recovery, which directly gives more accurate results at critical positions (near the boundaries, for example); and node-movement flexibility, which permits new, better shapes (previously missed by an FEM analysis) to be discovered. Several new research directions that need further consideration are identified.

  18. Fair Inference on Outcomes

    PubMed Central

    Nabi, Razieh; Shpitser, Ilya

    2017-01-01

    In this paper, we consider the problem of fair statistical inference involving outcome variables. Examples include classification and regression problems, and estimating treatment effects in randomized trials or observational data. The issue of fairness arises in such problems where some covariates or treatments are “sensitive,” in the sense of having the potential to create discrimination. In this paper, we argue that the presence of discrimination can be formalized in a sensible way as the presence of an effect of a sensitive covariate on the outcome along certain causal pathways, a view that generalizes (Pearl 2009). A fair outcome model can then be learned by solving a constrained optimization problem. We discuss a number of complications that arise in classical statistical inference due to this view and provide workarounds based on recent work in causal and semi-parametric inference.

  19. Magnetic resonance imaging with an optical atomic magnetometer

    PubMed Central

    Xu, Shoujun; Yashchuk, Valeriy V.; Donaldson, Marcus H.; Rochester, Simon M.; Budker, Dmitry; Pines, Alexander

    2006-01-01

    We report an approach for the detection of magnetic resonance imaging without superconducting magnets and cryogenics: optical atomic magnetometry. This technique possesses a high sensitivity independent of the strength of the static magnetic field, extending the applicability of magnetic resonance imaging to low magnetic fields and eliminating imaging artifacts associated with high fields. By coupling with a remote-detection scheme, thereby improving the filling factor of the sample, we obtained time-resolved flow images of water with a temporal resolution of 0.1 s and spatial resolutions of 1.6 mm perpendicular to the flow and 4.5 mm along the flow. Potentially inexpensive, compact, and mobile, our technique provides a viable alternative for MRI detection with substantially enhanced sensitivity and time resolution for various situations where traditional MRI is not optimal. PMID:16885210

  20. Multi-pass transmission electron microscopy

    DOE PAGES

    Juffmann, Thomas; Koppell, Stewart A.; Klopfer, Brannon B.; ...

    2017-05-10

    Feynman once asked physicists to build better electron microscopes to be able to watch biology at work. While electron microscopes can now provide atomic resolution, electron beam induced specimen damage precludes high resolution imaging of sensitive materials, such as single proteins or polymers. Here, we use simulations to show that an electron microscope based on a multi-pass measurement protocol enables imaging of single proteins, without averaging structures over multiple images. While we demonstrate the method for particular imaging targets, the approach is broadly applicable and is expected to improve resolution and sensitivity for a range of electron microscopy imaging modalities, including, for example, scanning and spectroscopic techniques. The approach implements a quantum mechanically optimal strategy which under idealized conditions can be considered interaction-free.
