Sample records for analytical optimization study

  1. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.
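    A minimal numerical sketch of the contrast drawn above, assuming a single tension member with a normally distributed load and an allowable-stress limit (all values hypothetical, not the paper's 10-bar truss): the deterministic design applies a safety factor to the nominal load, while the reliability-based design sizes the member directly against a target failure probability.

    ```python
    # Minimal sketch (not the paper's 10-bar truss): size a single tension member
    # deterministically vs. by a reliability constraint, assuming a normally
    # distributed load and a fixed allowable stress. All numbers are illustrative.
    import numpy as np
    from scipy import stats

    sigma_allow = 250e6                 # allowable stress, Pa
    load_mean, load_std = 50e3, 8e3     # applied load, N (hypothetical)
    safety_factor = 1.5
    p_fail_target = 1e-3

    # Deterministic design: area sized for the nominal load times a safety factor.
    area_det = safety_factor * load_mean / sigma_allow

    # Reliability-based design: smallest area with P(load/area > sigma_allow) <= target.
    # For a normal load this is the (1 - p_fail_target) load quantile over sigma_allow.
    load_quantile = stats.norm.ppf(1.0 - p_fail_target, loc=load_mean, scale=load_std)
    area_rbdo = load_quantile / sigma_allow

    for name, area in [("deterministic", area_det), ("reliability-based", area_rbdo)]:
        p_fail = 1.0 - stats.norm.cdf(sigma_allow * area, loc=load_mean, scale=load_std)
        print(f"{name:18s}: area = {area*1e6:7.2f} mm^2, P(failure) = {p_fail:.2e}")
    ```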

  2. The Framework of Intervention Engine Based on Learning Analytics

    ERIC Educational Resources Information Center

    Sahin, Muhittin; Yurdugül, Halil

    2017-01-01

    Learning analytics primarily deals with the optimization of learning environments, and its ultimate goal is to improve the efficiency of learning and teaching. Studies on learning analytics have largely taken the form of adaptation engines and intervention engines. Adaptation engine studies are quite widespread, but intervention…

  3. Design and Optimization of AlN based RF MEMS Switches

    NASA Astrophysics Data System (ADS)

    Hasan Ziko, Mehadi; Koel, Ants

    2018-05-01

    Radio frequency microelectromechanical system (RF MEMS) switch technology might have the potential to replace semiconductor technology in future communication systems as well as communication satellites, wireless and mobile phones. This study explores the possibilities of RF MEMS switch design and optimization with an aluminium nitride (AlN) thin film as the piezoelectric actuation material. Achieving low actuation voltage and high contact force with an optimal geometry using the piezoelectric effect is the main motivation for this research. Analytical and numerical modelling of a single-beam RF MEMS switch is used to analyse the design parameters and optimize them for minimum actuation voltage and high contact force. An analytical model using isotropic AlN material properties is used to obtain the optimal parameters. The optimized device length, width and thickness obtained for the single-beam RF MEMS switch are 2000 µm, 500 µm and 0.6 µm, respectively. For this optimal geometry, the analytical analysis gives an actuation voltage of less than 2 V and a contact force of about 100 µN. Additionally, the single-beam RF MEMS switch design is optimized and validated by comparing the analytical and finite element modelling (FEM) analyses.

  4. Cocontraction of pairs of antagonistic muscles: analytical solution for planar static nonlinear optimization approaches.

    PubMed

    Herzog, W; Binding, P

    1993-11-01

    It has been stated in the literature that static, nonlinear optimization approaches cannot predict coactivation of pairs of antagonistic muscles; however, numerical solutions of such approaches have predicted coactivation of pairs of one-joint and multijoint antagonists. Analytical support for either finding is not available in the literature for systems containing more than one degree of freedom. The purpose of this study was to investigate analytically the possibility of cocontraction of pairs of antagonistic muscles using a static nonlinear optimization approach for a multidegree-of-freedom, two-dimensional system. Analytical solutions were found using the Karush-Kuhn-Tucker conditions, which were necessary and sufficient for optimality in this problem. The results show that cocontraction of pairs of one-joint antagonistic muscles is not possible, whereas cocontraction of pairs of multijoint antagonists is. These findings suggest that cocontraction of pairs of antagonistic muscles may be an "efficient" way to accomplish many movement tasks.
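    The one-joint result can be reproduced numerically with a small sketch of the static nonlinear optimization described above, assuming a single joint, one agonist and one antagonist, and a cubed muscle-stress cost; the moment arms and cross-sectional areas below are hypothetical. The solver drives the antagonist force to zero, in line with the analytical finding.

    ```python
    # Sketch of the static nonlinear optimization the abstract analyzes (single joint,
    # one agonist, one antagonist, cubed muscle-stress cost). Values are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    r = np.array([0.05, -0.04])      # moment arms, m (antagonist has negative arm)
    pcsa = np.array([30.0, 20.0])    # physiological cross-sectional areas, cm^2
    M_req = 10.0                     # required net joint moment, N*m

    cost = lambda F: np.sum((F / pcsa) ** 3)             # sum of cubed muscle stresses
    moment_balance = {"type": "eq", "fun": lambda F: r @ F - M_req}

    res = minimize(cost, x0=np.array([100.0, 50.0]), method="SLSQP",
                   bounds=[(0, None), (0, None)], constraints=[moment_balance])
    print("muscle forces (N):", np.round(res.x, 3))  # antagonist force -> 0
    ```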

  5. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on analytical models to describe the corresponding optimization problem and on an exact global optimization software package, named IBBA and developed by the second author, to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  6. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland, reefs make up more than 95% of the South Sea, and most reefs are scattered over a sensitive, disputed area of interest. Methods for obtaining reef bathymetry accurately are therefore urgently needed. Commonly used methods, including sonar, airborne laser and remote sensing estimation, are limited by the long distances, large areas and sensitive locations involved. Remote sensing data provide an effective, non-contact way to estimate bathymetry over large areas through the relationship between spectral information and water depth. Aimed at the water quality of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depth. First, the semi-analytical optimization form of the theoretical interpretation models is studied, with a genetic algorithm used to optimize the model. Meanwhile, an OpenMP parallel computing algorithm is introduced to greatly increase the speed of the semi-analytical optimization model. One island of the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm solves the problem of bathymetry estimation without water depth measurements. Overall, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.
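    The per-pixel inversion idea can be illustrated with a toy model; the sketch below assumes a simple exponential attenuation model with made-up band coefficients (not the paper's semi-analytical model) and uses SciPy's differential evolution as a stand-in for the genetic algorithm.

    ```python
    # Toy per-pixel inversion in the spirit of the abstract: fit depth (and a bottom
    # reflectance factor) so a simple attenuation model matches observed band
    # reflectances. Model form and coefficients are illustrative only.
    import numpy as np
    from scipy.optimize import differential_evolution

    k = np.array([0.08, 0.12, 0.35])        # diffuse attenuation per band, 1/m (hypothetical)
    r_inf = np.array([0.02, 0.03, 0.04])    # deep-water reflectance per band (hypothetical)

    def model(depth, bottom_albedo):
        return r_inf + (bottom_albedo - r_inf) * np.exp(-2.0 * k * depth)

    # Synthetic "observed" pixel generated from known values, with a little noise.
    rng = np.random.default_rng(0)
    r_obs = model(8.0, 0.30) + rng.normal(0, 5e-4, 3)

    def misfit(params):
        depth, albedo = params
        return np.sum((model(depth, albedo) - r_obs) ** 2)

    result = differential_evolution(misfit, bounds=[(0.0, 25.0), (0.05, 0.6)], seed=1)
    print("estimated depth (m), bottom albedo:", np.round(result.x, 3))
    ```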

  7. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

    Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
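    A stripped-down sketch of the multi-level least-squares scheme described above, using a random placeholder beamlet dose matrix rather than pre-calculated beamlet doses: at each level the weights are obtained by least squares and only positively weighted beamlets are carried forward.

    ```python
    # Sketch of the analytical beamlet-weight optimization described above: least-squares
    # fit of the target dose by beamlet dose vectors, keeping only positively weighted
    # beamlets between levels. Beamlet doses here are random placeholders, not real data.
    import numpy as np

    rng = np.random.default_rng(42)
    n_voxels, n_beamlets = 400, 60
    B = rng.random((n_voxels, n_beamlets))          # columns = pre-calculated beamlet doses
    d_target = rng.random(n_voxels)                 # prescribed target dose vector

    active = np.arange(n_beamlets)
    for level in range(7):                          # the study used 7 optimization levels
        w, *_ = np.linalg.lstsq(B[:, active], d_target, rcond=None)
        keep = w > 0.0
        active = active[keep]                       # only positive weights enter next level
        d_est = B[:, active] @ w[keep]
        rmse = np.sqrt(np.mean((d_est - d_target) ** 2))
        print(f"level {level + 1}: {active.size:2d} beamlets kept, RMSE = {rmse:.4f}")
    ```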

  8. Optimism and Physical Health: A Meta-analytic Review

    PubMed Central

    Rasmussen, Heather N.; Greenhouse, Joel B.

    2010-01-01

    Background Prior research links optimism to physical health, but the strength of the association has not been systematically evaluated. Purpose The purpose of this study is to conduct a meta-analytic review to determine the strength of the association between optimism and physical health. Methods The findings from 83 studies, with 108 effect sizes (ESs), were included in the analyses, using random-effects models. Results Overall, the mean ES characterizing the relationship between optimism and physical health outcomes was 0.17, p<.001. ESs were larger for studies using subjective (versus objective) measures of physical health. Subsidiary analyses were also conducted grouping studies into those that focused solely on mortality, survival, cardiovascular outcomes, physiological markers (including immune function), immune function only, cancer outcomes, outcomes related to pregnancy, physical symptoms, or pain. In each case, optimism was a significant predictor of health outcomes or markers, all p<.001. Conclusions Optimism is a significant predictor of positive physical health outcomes. PMID:19711142

  9. Optimal control for Malaria disease through vaccination

    NASA Astrophysics Data System (ADS)

    Munzir, Said; Nasir, Muhammad; Ramli, Marwan

    2018-01-01

    Malaria is a disease caused by a single-celled Plasmodium parasite, with the Anopheles mosquito serving as the carrier. This study examines the optimal control problem of malaria disease spread based on the SIR-type model of Aron and May (1982) and seeks the optimal solution for minimizing the spread of malaria through vaccination. The aim is to investigate optimal control strategies for preventing the spread of malaria by vaccination. The problem in this research is solved using an analytical approach. The analytical method uses the Pontryagin Minimum Principle, with the symbolic help of MATLAB software, to obtain the optimal control result and to analyse the spread of malaria with vaccination control.

  10. What if Learning Analytics Were Based on Learning Science?

    ERIC Educational Resources Information Center

    Marzouk, Zahia; Rakovic, Mladen; Liaqat, Amna; Vytasek, Jovita; Samadi, Donya; Stewart-Alonso, Jason; Ram, Ilana; Woloshen, Sonya; Winne, Philip H.; Nesbit, John C.

    2016-01-01

    Learning analytics are often formatted as visualisations developed from traced data collected as students study in online learning environments. Optimal analytics inform and motivate students' decisions about adaptations that improve their learning. We observe that designs for learning often neglect theories and empirical findings in learning…

  11. Understanding Business Analytics Success and Impact: A Qualitative Study

    ERIC Educational Resources Information Center

    Parks, Rachida F.; Thambusamy, Ravi

    2017-01-01

    Business analytics is believed to be a huge boon for organizations since it helps offer timely insights over the competition, helps optimize business processes, and helps generate growth and innovation opportunities. As organizations embark on their business analytics initiatives, many strategic questions, such as how to operationalize business…

  12. Gradient Optimization for Analytic conTrols - GOAT

    NASA Astrophysics Data System (ADS)

    Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank

    Quantum optimal control has become a necessary step in a number of studies in the quantum realm. Recent experimental advances showed that superconducting qubits can be controlled with impressive accuracy. However, most of the standard optimal control algorithms are not designed to manage such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: the Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise constant approximation of the control pulse used by standard algorithms. This allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g. the absence of backpropagation or the natural route to optimize the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.

  13. Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers

    PubMed Central

    Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.

    2016-01-01

    The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. With a view to dealing with this problem in real time, the ALOPEX stochastic optimization method is used to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results, which simulate a known real aquifer case, are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its ability to manage freshwater pumping in real aquifer environments. PMID:27689362
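    A compact sketch of a penalized ALOPEX-style iteration (correlating the most recent parameter change with the most recent objective change, plus exploratory noise) on a hypothetical two-well pumping objective; the sharp-interface model and the real aquifer data are not reproduced.

    ```python
    # Sketch of a penalized ALOPEX iteration applied to a toy two-well pumping problem.
    # The objective and penalty are illustrative, not the sharp-interface model.
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(q):
        # Hypothetical: reward total pumping, penalize exceeding a "safe" combined rate.
        penalty = 50.0 * max(0.0, q.sum() - 1.5) ** 2
        return q.sum() - 0.3 * np.sum(q ** 2) - penalty

    gamma, sigma = 0.05, 0.02
    q_prev = np.array([0.2, 0.2])
    q_curr = q_prev + rng.normal(0, sigma, 2)
    f_prev, f_curr = objective(q_prev), objective(q_curr)

    for _ in range(500):
        # ALOPEX step: move each rate along the correlation of its last change
        # with the last objective change, plus exploratory noise.
        step = gamma * (q_curr - q_prev) * (f_curr - f_prev) + rng.normal(0, sigma, 2)
        q_next = np.clip(q_curr + step, 0.0, 2.0)
        q_prev, q_curr = q_curr, q_next
        f_prev, f_curr = f_curr, objective(q_curr)

    print("pumping rates:", np.round(q_curr, 3), " objective:", round(float(f_curr), 3))
    ```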

  14. Aerodynamic shape optimization of wing and wing-body configurations using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. Recently, the method has been implemented for both potential flows and flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can more easily be extended to treat general configurations. Here results are presented both for the optimization of a swept wing using an analytic mapping, and for the optimization of wing and wing-body configurations using a general mesh.

  15. Exergy optimization in a steady moving bed heat exchanger.

    PubMed

    Soria-Verdugo, A; Almendros-Ibáñez, J A; Ruiz-Rivas, U; Santana, D

    2009-04-01

    This work provides an energy and exergy optimization analysis of a moving bed heat exchanger (MBHE). The exchanger is studied as a cross-flow heat exchanger where one of the phases is a moving granular medium. The optimal MBHE dimensions and the optimal particle diameter are obtained for a range of incoming fluid flow rates. The analyses are carried out over operation data of the exchanger obtained in two ways: a numerical simulation of the steady-state problem and an analytical solution of the simplified equations, neglecting the conduction terms. The numerical simulation considers, for the solid, the convection heat transfer to the fluid and the diffusion term in both directions, and for the fluid only the convection heat transfer to the solid. The results are compared with a well-known analytical solution (neglecting conduction effects) for the temperature distribution in the exchanger. Next, the analytical solution is used to derive an expression for the exergy destruction. The optimal length of the MBHE depends mainly on the flow rate and does not depend on particle diameter unless they become very small (thus increasing sharply the pressure drop). The exergy optimal length is always smaller than the thermal one, although the difference is itself small.

  16. Approximated analytical solution to an Ebola optimal control problem

    NASA Astrophysics Data System (ADS)

    Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.

    2016-11-01

    An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.

  17. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.

  18. Query Optimization in Distributed Databases.

    DTIC Science & Technology

    1982-10-01

    …the study of the analytic behavior of those heuristic algorithms. Although some analytic results of worst-case and average-case analysis are difficult to obtain, some…

  19. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    PubMed

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimum for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely-coupled sub-problems, each of which may be modularly formulated by a different department and solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.

  20. Analytic solution to variance optimization with no short positions

    NASA Astrophysics Data System (ADS)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric…
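    The replica-method analysis itself is beyond a few lines of code, but the constrained problem being solved is easy to state numerically; the sketch below minimizes portfolio variance with a budget constraint and non-negative weights for a small synthetic covariance matrix. With T not much larger than N, several weights typically end up exactly at zero.

    ```python
    # The constrained problem the abstract studies: minimize w' C w subject to
    # sum(w) = 1 and w >= 0 (no short positions). Synthetic covariance, small N.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    N, T = 10, 25
    returns = rng.normal(0, 0.01, (T, N))           # N assets, T observations
    C = np.cov(returns, rowvar=False)               # estimated covariance matrix

    variance = lambda w: w @ C @ w
    budget = {"type": "eq", "fun": lambda w: w.sum() - 1.0}

    res = minimize(variance, x0=np.full(N, 1.0 / N), method="SLSQP",
                   bounds=[(0.0, None)] * N, constraints=[budget])
    w = res.x
    print("non-zero weights:", np.count_nonzero(w > 1e-6), "of", N)
    print("portfolio variance:", float(variance(w)))
    ```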

  1. New trends in astrodynamics and applications: optimal trajectories for space guidance.

    PubMed

    Azimov, Dilmurat; Bishop, Robert

    2005-12-01

    This paper presents recent results on the development of optimal analytic solutions to the variational problem of trajectory optimization and their application in the construction of on-board guidance laws. The importance of employing the analytically integrated trajectories in a mission design is discussed. It is assumed that the spacecraft is equipped with power-limited propulsion and moving in a central Newtonian field. Satisfaction of the necessary and sufficient conditions for optimality of trajectories is analyzed. All possible thrust arcs and corresponding classes of the analytical solutions are classified based on the propulsion system parameters and performance index of the problem. The solutions are presented in a form convenient for applications in escape, capture, and interorbital transfer problems. Optimal guidance and neighboring optimal guidance problems are considered. It is shown that the analytic solutions can be used as reference trajectories in constructing the guidance algorithms for the maneuver problems mentioned above. An illustrative example of a spiral trajectory that terminates on a given elliptical parking orbit is discussed.

  2. Trends in Process Analytical Technology: Present State in Bioprocessing.

    PubMed

    Jenzsch, Marco; Bell, Christian; Buziol, Stefan; Kepert, Felix; Wegele, Harald; Hakemeyer, Christian

    2017-08-04

    Process analytical technology (PAT), the regulatory initiative for incorporating quality in pharmaceutical manufacturing, is an area of intense research and interest. If PAT is effectively applied to bioprocesses, this can increase process understanding and control, and mitigate the risk from substandard drug products to both manufacturer and patient. To optimize the benefits of PAT, the entire PAT framework must be considered and each element of PAT must be carefully selected, including sensor and analytical technology, data analysis techniques, control strategies and algorithms, and process optimization routines. This chapter discusses the current state of PAT in the biopharmaceutical industry, including several case studies demonstrating the degree of maturity of various PAT tools. Graphical Abstract: Hierarchy of QbD components.

  3. Analytically solvable chaotic oscillator based on a first-order filter.

    PubMed

    Corron, Ned J; Cooper, Roy M; Blakely, Jonathan N

    2016-02-01

    A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.

  4. Analytically solvable chaotic oscillator based on a first-order filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N.

    2016-02-15

    A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.

  5. Generalized bipartite quantum state discrimination problems with sequential measurements

    NASA Astrophysics Data System (ADS)

    Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki

    2018-02-01

    We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem with only Alice's measurement and is convex programming, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, its dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful to obtain analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.

  6. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    NASA Technical Reports Server (NTRS)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
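    The sample optimization mentioned above can be mimicked with a toy model: the sketch below maximizes a made-up concave temperature-versus-equivalence-ratio curve (not CEA output) while supplying an analytic derivative to the optimizer, which is the role the analytic derivatives play in the real tool.

    ```python
    # Toy version of the sample optimization described above: find the equivalence
    # ratio that maximizes a combustion-temperature curve, supplying an analytic
    # derivative to the optimizer instead of finite differences. The temperature
    # model is a made-up concave curve, not CEA.
    import numpy as np
    from scipy.optimize import minimize

    def temperature(phi):
        return 2200.0 - 900.0 * (phi - 1.05) ** 2   # peaks slightly rich of stoichiometric

    def neg_T(phi):
        return -temperature(phi[0])

    def neg_T_grad(phi):
        return np.array([1800.0 * (phi[0] - 1.05)])  # d(-T)/dphi, analytic

    res = minimize(neg_T, x0=[0.8], jac=neg_T_grad, method="BFGS")
    print("optimal equivalence ratio:", round(res.x[0], 4),
          " T_max (K):", round(temperature(res.x[0]), 1))
    ```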

  7. Analytical design and evaluation of an active control system for helicopter vibration reduction and gust response alleviation

    NASA Technical Reports Server (NTRS)

    Taylor, R. B.; Zwicke, P. E.; Gold, P.; Miao, W.

    1980-01-01

    An analytical study was conducted to define the basic configuration of an active control system for helicopter vibration and gust response alleviation. The study culminated in a control system design with two separate loops: a narrow band loop for vibration reduction and a wider band loop for gust response alleviation. The narrow band vibration loop utilizes the standard swashplate control configuration for control inputs; the controller for the vibration loop is based on adaptive optimal control theory and is designed to adapt to any flight condition, including maneuvers and transients. The prime characteristic of the vibration control system is its real-time capability. The gust alleviation control system studied consists of optimal sampled-data feedback gains together with an optimal one-step-ahead prediction. The prediction permits the estimation of the gust disturbance, which can then be used to minimize the gust effects on the helicopter.

  8. Analytical Model-Based Design Optimization of a Transverse Flux Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
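    A bare-bones particle swarm over the three design variables named above (stator pole length, magnet length, rotor thickness), with a placeholder torque-density surrogate standing in for the magnetic equivalent circuit evaluation; bounds and coefficients are hypothetical.

    ```python
    # Bare-bones particle swarm over the three design variables named in the abstract.
    # The torque-density "model" is a placeholder surrogate, not a magnetic equivalent circuit.
    import numpy as np

    rng = np.random.default_rng(7)
    lo = np.array([5.0, 2.0, 3.0])     # lower bounds, mm (hypothetical)
    hi = np.array([25.0, 10.0, 12.0])  # upper bounds, mm (hypothetical)

    def torque_density(x):             # placeholder for the MEC evaluation
        pole, magnet, rotor = x
        return magnet * pole / (1.0 + 0.02 * (pole - 15.0) ** 2 + 0.1 * rotor)

    n_particles, n_iter = 30, 200
    x = rng.uniform(lo, hi, (n_particles, 3))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([torque_density(xi) for xi in x])
    g_best = p_best[np.argmax(p_val)]

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 3))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([torque_density(xi) for xi in x])
        improved = val > p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[np.argmax(p_val)]

    print("best design (mm):", np.round(g_best, 2), " surrogate value:", round(p_val.max(), 3))
    ```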

  9. A Systematic Mapping on the Learning Analytics Field and Its Analysis in the Massive Open Online Courses Context

    ERIC Educational Resources Information Center

    Moissa, Barbara; Gasparini, Isabela; Kemczinski, Avanilde

    2015-01-01

    Learning Analytics (LA) is a field that aims to optimize learning through the study of dynamical processes occurring in the students' context. It covers the measurement, collection, analysis and reporting of data about students and their contexts. This study aims at surveying existing research on LA to identify approaches, topics, and needs for…

  10. Aeroelastic Optimization Study Based on the X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley W.; Pak, Chan-Gi

    2014-01-01

    One way to increase the aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise with ply stacking sequence. A hybrid and discretization optimization approach improves accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study for the fabricated flexible wing of the X-56A model since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.

  11. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    PubMed

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
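    A minimal sketch of the Derringer-style desirability aggregation discussed above, assuming two hypothetical analytical responses (maximize resolution, minimize run time); the overall desirability is the geometric mean of the individual desirabilities.

    ```python
    # Minimal desirability-function aggregation for two hypothetical responses.
    # Targets, limits and weights are illustrative.
    import numpy as np

    def d_larger_is_better(y, low, target, s=1.0):
        return np.clip((y - low) / (target - low), 0.0, 1.0) ** s

    def d_smaller_is_better(y, target, high, s=1.0):
        return np.clip((high - y) / (high - target), 0.0, 1.0) ** s

    def overall_desirability(resolution, run_time):
        d1 = d_larger_is_better(resolution, low=1.0, target=2.5)
        d2 = d_smaller_is_better(run_time, target=5.0, high=15.0)
        return (d1 * d2) ** 0.5            # geometric mean of the two desirabilities

    # Example: compare two candidate operating conditions.
    for res_val, t_val in [(2.1, 8.0), (1.6, 5.5)]:
        print(f"resolution={res_val}, time={t_val} min -> D = {overall_desirability(res_val, t_val):.3f}")
    ```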

  12. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
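    Of the multiobjective formulations named above, the Kreisselmeier-Steinhauser envelope is compact enough to sketch: it aggregates several functions into one smooth, conservative value suitable for gradient-based optimization. The values below are illustrative.

    ```python
    # The Kreisselmeier-Steinhauser (KS) envelope named in the abstract: a smooth,
    # conservative aggregate of several functions, often used to blend objectives or
    # constraints into a single quantity for gradient-based optimization.
    import numpy as np

    def ks_aggregate(values, rho=50.0):
        values = np.asarray(values, dtype=float)
        f_max = values.max()
        # Shift by the max for numerical stability before exponentiating.
        return f_max + np.log(np.sum(np.exp(rho * (values - f_max)))) / rho

    g = [0.12, -0.40, 0.08]            # e.g., three normalized constraint values
    print("max(g) =", max(g), " KS(g) =", round(ks_aggregate(g), 4))  # KS >= max
    ```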

  13. Analytical and experimental study of resonance ignition tubes

    NASA Technical Reports Server (NTRS)

    Stabinsky, L.

    1973-01-01

    The application of the gas-dynamic resonance concept was investigated in relation to ignition of rocket propulsion systems. Analytical studies were conducted to delineate the potential uses of resonance ignition in oxygen/hydrogen bipropellant and hydrazine monopropellant rocket engines. Experimental studies were made to: (1) optimize the resonance igniter configuration, and (2) evaluate the ignition characteristics when operating with low temperature oxygen and hydrogen at the inlet to the igniter.

  14. Optimizing an Immersion ESL Curriculum Using Analytic Hierarchy Process

    ERIC Educational Resources Information Center

    Tang, Hui-Wen Vivian

    2011-01-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative…

  15. Model of separation performance of bilinear gradients in scanning format counter-flow gradient electrofocusing techniques.

    PubMed

    Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L

    2015-03-01

    Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte that is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity is varying with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (such as a bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be realized by using a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical efforts, which can be reduced significantly with the help of analytical models for design optimization and guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. This model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution that predicts various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution identically match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
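    The modal-field functional itself is not reproduced here; the sketch below simply shows the kind of derivative-free Nelder-Mead search described above, applied to a noisy one-dimensional objective standing in for the core-parameter fit.

    ```python
    # Illustration of the derivative-free search described above: Nelder-Mead applied
    # to a noisy one-dimensional objective standing in for the core-parameter fit.
    # The objective is synthetic; the paper's modal-field functional is not reproduced.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(11)

    def noisy_objective(u):
        u = u[0]
        return (u - 2.3) ** 2 + 0.01 * rng.standard_normal()   # noisy, no derivatives

    res = minimize(noisy_objective, x0=[1.0], method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-4})
    print("optimized core parameter U:", round(res.x[0], 3))
    ```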

  17. Performance enhancement of Pt/TiO2/Si UV-photodetector by optimizing light trapping capability and interdigitated electrodes geometry

    NASA Astrophysics Data System (ADS)

    Bencherif, H.; Djeffal, F.; Ferhati, H.

    2016-09-01

    This paper presents a hybrid approach based on an analytical and metaheuristic investigation to study the impact of the interdigitated electrodes engineering on both speed and optical performance of an Interdigitated Metal-Semiconductor-Metal Ultraviolet Photodetector (IMSM-UV-PD). In this context, analytical models regarding the speed and optical performance have been developed and validated by experimental results, where a good agreement has been recorded. Moreover, the developed analytical models have been used as objective functions to determine the optimized design parameters, including the interdigit configuration effect, via a Multi-Objective Genetic Algorithm (MOGA). The ultimate goal of the proposed hybrid approach is to identify the optimal design parameters associated with the maximum of electrical and optical device performance. The optimized IMSM-PD not only reveals superior performance in terms of photocurrent and response time, but also illustrates higher optical reliability against the optical losses due to the active area shadowing effects. The advantages offered by the proposed design methodology suggest the possibility to overcome the most challenging problem with the communication speed and power requirements of the UV optical interconnect: high derived current and commutation speed in the UV receiver.

  18. A Requirements-Driven Optimization Method for Acoustic Liners Using Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Lopes, Leonard V.

    2017-01-01

    More than ever, there is flexibility and freedom in acoustic liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. In a previous paper on this subject, a method deriving the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground was described. A simple code-wrapping approach was used to evaluate a community noise objective function for an external optimizer. Gradients were evaluated using a finite difference formula. The subject of this paper is an application of analytic derivatives that supply precise gradients to an optimization process. Analytic derivatives improve the efficiency and accuracy of gradient-based optimization methods and allow consideration of more design variables. In addition, the benefit of variable impedance liners is explored using a multi-objective optimization.

  19. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
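    A generic illustration (not PyCycle itself) of why analytic derivatives are preferred over finite differencing in gradient-based optimization: the finite-difference error first shrinks with the step size and then grows again from round-off, whereas the analytic gradient is exact.

    ```python
    # Small illustration of why analytic derivatives matter: compare the exact
    # gradient of a smooth test function against forward finite differences at
    # several step sizes. This is a generic demonstration, not the engine cycle tool.
    import numpy as np

    def f(x):
        return np.sin(x[0]) * np.exp(x[1]) + x[0] * x[1] ** 2

    def grad_f(x):                                        # analytic gradient
        return np.array([np.cos(x[0]) * np.exp(x[1]) + x[1] ** 2,
                         np.sin(x[0]) * np.exp(x[1]) + 2 * x[0] * x[1]])

    x0 = np.array([0.7, 0.3])
    exact = grad_f(x0)
    for h in (1e-2, 1e-5, 1e-8, 1e-11):
        fd = np.array([(f(x0 + h * e) - f(x0)) / h for e in np.eye(2)])
        print(f"h = {h:.0e}: max finite-difference error = {np.abs(fd - exact).max():.2e}")
    ```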

  20. Extended Analytic Device Optimization Employing Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred

    2013-01-01

    Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.

  1. Analysis of modal behavior at frequency cross-over

    NASA Astrophysics Data System (ADS)

    Costa, Robert N., Jr.

    1994-11-01

    The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the optimization algorithm previously used is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's Method. The second structure is the Control of Flexible Structures (COFS). The COFS structure is successfully reproduced and an initial eigenanalysis completed.

  2. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
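    A brute-force counterpart of the design problem described above, not the paper's analytical parameterization: PI gains for an open-loop unstable first-order process are tuned by simulation, with soft penalties for violating limits on the process variable, the manipulated variable and its rate of change. All numbers are illustrative.

    ```python
    # Simulation-based PI tuning for an unstable first-order process rejecting a load
    # disturbance, with soft penalties on the three operational constraints named above.
    import numpy as np
    from scipy.optimize import minimize

    a, b, dt, t_end = 0.5, 1.0, 0.01, 20.0        # open-loop unstable pole at +0.5
    d = 0.5                                        # step load disturbance
    y_max, u_max, du_max = 0.5, 3.0, 2.0           # operational constraints

    def cost(params):
        kc, ki = params
        y, integ, u_prev, J = 0.0, 0.0, 0.0, 0.0
        for _ in range(int(t_end / dt)):
            e = -y                                 # regulate the output back to zero
            integ += e * dt
            u = kc * e + ki * integ
            # Soft penalties for violating the three operational constraints.
            J += 1e3 * (max(0.0, abs(u) - u_max)
                        + max(0.0, abs(u - u_prev) / dt - du_max)
                        + max(0.0, abs(y) - y_max)) * dt
            y += (a * y + b * (u + d)) * dt        # explicit Euler step of the process
            J += abs(e) * dt                       # integral absolute error
            u_prev = u
        return J

    res = minimize(cost, x0=[2.0, 1.0], method="Nelder-Mead")
    print("PI gains (Kc, Ki):", np.round(res.x, 3), " cost:", round(res.fun, 4))
    ```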

  3. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of response being awake or asleep over the night and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
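    The design idea can be shown on a much simpler stand-in than the Markov-chain mixed-effects sleep model: the sketch below ranks candidate dose sets for a fixed-effects Emax dose-response model by the log-determinant of an analytically computed Fisher information matrix; parameter values are assumed.

    ```python
    # Simple stand-in for the design idea above: rank candidate dose sets for a
    # fixed-effects Emax dose-response model by the determinant of the Fisher
    # information matrix (FIM), computed from the analytic sensitivities of the model.
    import numpy as np
    from itertools import combinations

    e0, emax, ed50, sigma = 1.0, 5.0, 4.0, 0.5    # assumed "true" parameter values

    def sensitivities(dose):
        # Partial derivatives of y = e0 + emax*d/(ed50 + d) w.r.t. (e0, emax, ed50).
        return np.array([1.0,
                         dose / (ed50 + dose),
                         -emax * dose / (ed50 + dose) ** 2])

    def log_det_fim(doses, n_per_dose=10):
        fim = sum(n_per_dose * np.outer(sensitivities(d), sensitivities(d))
                  for d in doses) / sigma ** 2
        return np.linalg.slogdet(fim)[1]

    candidates = [0.0, 0.1, 1.0, 6.0, 10.0, 20.0]
    best = max(combinations(candidates, 3), key=log_det_fim)
    print("D-optimal 3-dose design from the candidate set:", best)
    ```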

  4. Available Resources | Division of Cancer Prevention

    Cancer.gov

    Preclinical pharmacology and efficacy studies
    Identification and evaluation of intermediate biomarkers
    Formulation optimization for enhanced bioavailability and clinical usefulness
    Analytical method development for investigational agents in bulk form and in biological fluids and tissues
    PK and PK-PD modeling to optimize dosing regimen
    Scale-up non-cGMP and cGMP production of

  5. Information Based Numerical Practice.

    DTIC Science & Technology

    1987-02-01

    …characterization by comparative computational studies of various benchmark problems; see e.g. [MacNeal, Harder (1985)], [Robinson, Blackham (1981)]… The simplest example studied in detail in the literature is the problem of the optimal quadrature… …formulae and the functional analytic prerequisites for the study of optimal formulae, we refer to the large monograph (808 pp.) of [Sobolev (1974)].

  6. Predictive Analytics for Coordinated Optimization in Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui

    This talk will present NREL's work on developing predictive analytics that enables the optimal coordination of all the available resources in distribution systems to achieve the control objectives of system operators. Two projects will be presented. One focuses on developing short-term state forecasting-based optimal voltage regulation in distribution systems; and the other one focuses on actively engaging electricity consumers to benefit distribution system operations.

  7. Heparin removal by ecteola-cellulose pre-treatment enables the use of plasma samples for accurate measurement of anti-Yellow fever virus neutralizing antibodies.

    PubMed

    Campi-Azevedo, Ana Carolina; Peruhype-Magalhães, Vanessa; Coelho-Dos-Reis, Jordana Grazziela; Costa-Pereira, Christiane; Yamamura, Anna Yoshida; Lima, Sheila Maria Barbosa de; Simões, Marisol; Campos, Fernanda Magalhães Freire; de Castro Zacche Tonini, Aline; Lemos, Elenice Moreira; Brum, Ricardo Cristiano; de Noronha, Tatiana Guimarães; Freire, Marcos Silva; Maia, Maria de Lourdes Sousa; Camacho, Luiz Antônio Bastos; Rios, Maria; Chancey, Caren; Romano, Alessandro; Domingues, Carla Magda; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis

    2017-09-01

    Technological innovations in vaccinology have recently contributed to bring about novel insights for the vaccine-induced immune response. While the current protocols that use peripheral blood samples may provide abundant data, a range of distinct components of whole blood samples are required and the different anticoagulant systems employed may impair some properties of the biological sample and interfere with functional assays. Although the interference of heparin in functional assays for viral neutralizing antibodies such as the functional plaque-reduction neutralization test (PRNT), considered the gold-standard method to assess and monitor the protective immunity induced by the Yellow fever virus (YFV) vaccine, has been well characterized, the development of pre-analytical treatments is still required for the establishment of optimized protocols. The present study intended to optimize and evaluate the performance of pre-analytical treatment of heparin-collected blood samples with ecteola-cellulose (ECT) to provide accurate measurement of anti-YFV neutralizing antibodies, by PRNT. The study was designed in three steps, including: I. Problem statement; II. Pre-analytical steps; III. Analytical steps. Data confirmed the interference of heparin on PRNT reactivity in a dose-responsive fashion. Distinct sets of conditions for ECT pre-treatment were tested to optimize the heparin removal. The optimized protocol was pre-validated to determine the effectiveness of heparin plasma:ECT treatment to restore the PRNT titers as compared to serum samples. The validation and comparative performance was carried out by using a large range of serum vs heparin plasma:ECT 1:2 paired samples obtained from unvaccinated and 17DD-YFV primary vaccinated subjects. Altogether, the findings support the use of heparin plasma:ECT samples for accurate measurement of anti-YFV neutralizing antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. An Analytical Solution for Yaw Maneuver Optimization on the International Space Station and Other Orbiting Space Vehicles

    NASA Technical Reports Server (NTRS)

    Dobrinskaya, Tatiana

    2015-01-01

    This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations as well as for other activities. When maneuver optimization is used, large maneuvers that were previously performed on thrusters can be performed either using control moment gyroscopes (CMGs) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads, an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of critical elements of the vehicle structure, such as the solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the optimized case, the torques are significantly reduced. This torque reduction was compared to the existing optimization method, which relies on a computational solution. It was shown that the attitude profiles and the torque reduction match well for the two methods of optimization, and simulations using the ISS flight software showed similar propellant consumption for both. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, thus making maneuver execution automatic. An automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground; it also reduces the probability of command errors. The suggested analytical solution therefore provides a method of maneuver optimization that is less complicated, automatic, and more universal. The maneuver optimization approach presented in this paper can be used not only for the ISS but also for other orbiting space vehicles.

  9. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  10. Streamflow variability and optimal capacity of run-of-river hydropower plants

    NASA Astrophysics Data System (ADS)

    Basso, S.; Botter, G.

    2012-10-01

    The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity which maximizes the produced energy as a function of the underlying flow duration curve and of the minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity which maximizes the economic return deriving from the construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, demonstrating the potential of the method as a flexible and simple design tool for practical applications. The analytical model provides useful insight into the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
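    To make the trade-off concrete, the following is a minimal numerical sketch (not the authors' analytical derivation): it assumes a hypothetical site (head, efficiency, price, and cost figures are invented), a synthetic daily streamflow series standing in for the flow duration curve, and a turbine that operates only between a cut-in flow and its rated capacity; sweeping the capacity then locates the energy-optimal and profit-optimal designs.

```python
import numpy as np

# Minimal sketch (not the paper's formulation): pick the plant capacity that
# maximizes energy or profit from an empirical flow record. All numbers are
# hypothetical placeholders.
rho, g, head, eta = 1000.0, 9.81, 25.0, 0.85   # water density, gravity, head [m], efficiency
q_mef = 0.3                                     # minimum environmental flow [m^3/s]
cut_in_frac = 0.2                               # turbine cut-in as a fraction of capacity

rng = np.random.default_rng(0)
daily_flow = rng.lognormal(mean=0.5, sigma=0.8, size=365)   # synthetic streamflow [m^3/s]

def annual_energy(capacity):
    """Annual energy [MWh] for a given rated turbine flow capacity [m^3/s]."""
    usable = np.clip(daily_flow - q_mef, 0.0, capacity)
    usable[usable < cut_in_frac * capacity] = 0.0           # turbine off below cut-in
    power_w = eta * rho * g * head * usable                 # daily mean power [W]
    return power_w.sum() * 24.0 / 1e6                       # W-days -> MWh/yr

price, unit_cost = 60.0, 900.0        # EUR/MWh sold and EUR per kW installed (hypothetical)
capacities = np.linspace(0.1, 10.0, 200)
energy = np.array([annual_energy(c) for c in capacities])
profit = price * energy - unit_cost * (eta * rho * g * head * capacities / 1e3)

print("energy-optimal capacity [m^3/s]:", capacities[np.argmax(energy)])
print("profit-optimal capacity [m^3/s]:", capacities[np.argmax(profit)])
```

    The cut-in constraint is what creates an interior energy optimum: a larger turbine captures more water during high flows but sits idle more often during low flows.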

  11. Multidisciplinary optimization in aircraft design using analytic technology models

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1991-01-01

    An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intensive representations of each technology. To illustrate the approach, an examination of the optimization of a short-takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.

  12. Looking for new biomarkers of skin wound vitality with a cytokine-based multiplex assay: preliminary study.

    PubMed

    Peyron, Pierre-Antoine; Baccino, Éric; Nagot, Nicolas; Lehmann, Sylvain; Delaby, Constance

    2017-02-01

    Determination of skin wound vitality is an important issue in forensic practice, and no reliable biomarker currently exists. Quantification of inflammatory cytokines in injured skin with MSD® technology is an innovative and promising approach. This preliminary study aims to develop a protocol for the preparation and analysis of skin samples. Samples from ante mortem wounds, post mortem wounds, and intact skin ("control samples") were taken from corpses at autopsy. After the pre-analytical protocol had been optimized in terms of skin homogenization and protein extraction, the concentration of TNF-α was measured in each sample with the MSD® approach. Five other cytokines of interest (IL-1β, IL-6, IL-10, IL-12p70 and IFN-γ) were then simultaneously quantified with an MSD® multiplex assay. The optimal pre-analytical conditions consist of protein extraction from a 6 mm diameter skin sample in PBS buffer with 0.05% Triton. Our results show the linearity and reproducibility of the TNF-α quantification with MSD®, and an inter- and intra-individual variability of protein concentrations. The MSD® multiplex assay is likely to detect differential skin concentrations for each cytokine of interest. This preliminary study was used to develop and optimize the pre-analytical and analytical conditions of the MSD® method using injured and healthy skin samples, with the aim of identifying the cytokine, or set of cytokines, that may serve as biomarkers of skin wound vitality.

  13. Thermal-Structural Optimization of Integrated Cryogenic Propellant Tank Concepts for a Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.

    2004-01-01

    A next-generation reusable launch vehicle (RLV) will require thermally efficient and lightweight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the insulation layer thicknesses and the structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and the thermal protection system (TPS) of the tanks. The thermal conditions of ground hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature- and pressure-dependent material properties, was used for the thermal analysis. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability-based optimization techniques were the analytical tools used to size the isogrid stiffener geometry and the skin thickness. The results from the thermal sizing study indicate that a commercial FEM code can be used to size the insulation thicknesses under varying temperature and pressure conditions. The results from the structural sizing study show that combining deterministic and reliability-based optimization techniques can yield alternate and lighter designs than those obtained from deterministic optimization methods alone.

  14. Analytical and simulator study of advanced transport

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Rickard, W. W.

    1982-01-01

    An analytic methodology, based on the optimal-control pilot model, was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft on final approach. Calibration of the methodology is largely in terms of closed-loop performance requirements, rather than specific vehicle response characteristics, and is based on a combination of published criteria, pilot preferences, physical limitations, and engineering judgment. Six longitudinal-axis approach configurations were studied, covering a range of handling qualities problems, including the presence of flexible aircraft modes. The analytical procedure was used to obtain predictions of Cooper-Harper ratings, a scalar quadratic performance index, and rms excursions of important system variables.

  15. Impact of a Flexible Evaluation System on Effort and Timing of Study

    ERIC Educational Resources Information Center

    Pacharn, Parunchana; Bay, Darlene; Felton, Sandra

    2012-01-01

    This paper examines results of a flexible grading system that allows each student to influence the weight allocated to each performance measure. We construct a stylized model to determine students' optimal responses. Our analytical model predicts different optimal strategies for students with varying academic abilities: a frontloading strategy for…

  16. Integration of fuzzy analytic hierarchy process and probabilistic dynamic programming in formulating an optimal fleet management model

    NASA Astrophysics Data System (ADS)

    Teoh, Lay Eng; Khoo, Hooi Ling

    2013-09-01

    This study deals with two major aspects of airline operations, namely supply and demand management. The supply aspect focuses on the mathematical formulation of an optimal fleet management model to maximize the operational profit of the airline, while the demand aspect focuses on the incorporation of mode choice modeling as part of the developed model. The proposed methodology is outlined in two stages: fuzzy Analytic Hierarchy Process is first adopted to capture mode choice modeling in order to quantify the probability of probable phenomena (for the aircraft acquisition/leasing decision). Then, an optimization model is developed as a probabilistic dynamic programming model to determine the optimal number and types of aircraft to be acquired and/or leased in order to meet stochastic demand over the planning horizon. The findings of an illustrative case study show that the proposed methodology is viable. The results demonstrate that the incorporation of mode choice modeling can affect the operational profit and fleet management decisions of the airline to varying degrees.

  17. An analytic model for footprint dispersions and its application to mission design

    NASA Technical Reports Server (NTRS)

    Rao, J. R. Jagannatha; Chen, Yi-Chao

    1992-01-01

    This is the final report on our recent research activities that are complementary to those conducted by our colleagues, Professor Farrokh Mistree and students, in the context of the Taguchi method. We have studied the mathematical model that forms the basis of the Simulation and Optimization of Rocket Trajectories (SORT) program and developed an analytic method for determining mission reliability with a reduced number of flight simulations. This method can be incorporated in a design algorithm to mathematically optimize different performance measures of a mission, thus leading to a robust and easy-to-use methodology for mission planning and design.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canavan, G.H.

    Optimizations of missile allocation based on linearized exchange equations produce accurate allocations, but the limits of validity of the linearization are not known. These limits are explored in the context of the upload of weapons by one side to initially small, equal forces of vulnerable and survivable weapons. The analysis compares analytic and numerical optimizations and stability indices based on aggregated interactions of the two missile forces, the first and second strikes they could deliver, and the resulting costs. This note discusses the costs and stability indices induced by unilateral uploading of weapons to an initially symmetrical low-force configuration. These limits are quantified for forces with a few hundred missiles by comparing analytic and numerical optimizations of first-strike costs. For forces of 100 vulnerable and 100 survivable missiles on each side, the analytic optimization agrees closely with the numerical solution. For 200 vulnerable and 200 survivable missiles on each side, the analytic optimization agrees with the indices to within about 10%, but disagrees with the allocation of the side with more weapons by about 50%. The disagreement comes from the interaction between the possession of more weapons and the shift of allocation from missiles to value that it induces.

  19. Design of Structurally Efficient Tapered Struts

    NASA Technical Reports Server (NTRS)

    Messinger, Ross

    2010-01-01

    This report describes the analytical study of two full-scale tapered composite struts. The analytical study resulted in the design of two structurally efficient carbon/epoxy struts in accordance with NASA-specified geometries and loading conditions. Detailed stress analysis of the insert, end fitting, and strut body was performed to obtain an optimized weight with positive margins. Two demonstration struts were fabricated based on a well-established design from a previous Space Shuttle strut development program.

  20. Light distribution in diffractive multifocal optics and its optimization.

    PubMed

    Portney, Valdemar

    2011-11-01

    To expand a geometrical model of diffraction efficiency and its interpretation to the multifocal optic, and to introduce formulas for the analysis of far and near light distribution and their application to multifocal intraocular lenses (IOLs) and to diffraction efficiency optimization. Setting: medical device consulting firm, Newport Coast, California, USA. Design: experimental study. Application of a geometrical model to the kinoform (single-focus diffractive optical element) was expanded to a multifocal optic to produce analytical definitions of the light split between far and near images and of the light loss to other diffraction orders. The geometrical model gave a simple interpretation of the light split in a diffractive multifocal IOL. An analytical definition of the light split between far, near, and light loss was introduced as curve-fitting formulas. Several examples of application to common multifocal diffractive IOLs were developed, for example, to the change in light split with wavelength. The analytical definition of diffraction efficiency may assist in the optimization of multifocal diffractive optics to minimize light loss. Formulas for the analysis of the light split between different foci of multifocal diffractive IOLs are useful for interpreting the dependence of diffraction efficiency on physical characteristics, such as the blaze height of the diffractive grooves and the wavelength of light, as well as for optimizing multifocal diffractive optics.
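    For context, the standard scalar-theory result for a kinoform, which underlies light-split calculations of this kind (stated here as background, not as the paper's geometrical model), is:

```latex
% Scalar-theory efficiency of a sawtooth (kinoform) phase profile of depth
% \alpha waves into diffraction order m, with the classic half-wave bifocal case.
\[
  \eta_m(\alpha) = \mathrm{sinc}^2(\alpha - m), \qquad
  \mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x},
\]
\[
  \alpha = \tfrac{1}{2}: \quad
  \eta_0 = \eta_1 = \mathrm{sinc}^2\!\bigl(\tfrac{1}{2}\bigr) \approx 0.41,
  \qquad 1 - \eta_0 - \eta_1 \approx 0.18 \ \text{(light lost to higher orders)}.
\]
```

    With a half-wave blaze, roughly 41% of the light goes to each of the far and near foci and about 18% is lost to higher orders, which is the familiar figure for bifocal diffractive IOLs.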

  1. Determination of Ignitable Liquids in Fire Debris: Direct Analysis by Electronic Nose

    PubMed Central

    Ferreiro-González, Marta; Barbero, Gerardo F.; Palma, Miguel; Ayuso, Jesús; Álvarez, José A.; Barroso, Carmelo G.

    2016-01-01

    Arsonists usually use an accelerant to start or accelerate a fire. The most widely used analytical method to determine the presence of such accelerants consists of a pre-concentration step for the ignitable liquid residues followed by chromatographic analysis. A rapid analytical method based on headspace-mass spectrometry electronic nose (E-Nose) has been developed for the analysis of ignitable liquid residues (ILRs). The working conditions for the E-Nose analytical procedure were optimized by studying different fire debris samples. The optimized experimental variables were related to headspace generation, specifically incubation temperature and incubation time; the optimal conditions were 115 °C and 10 min for these two parameters. Chemometric tools such as hierarchical cluster analysis (HCA) and linear discriminant analysis (LDA) were applied to the MS data (45–200 m/z) to establish the most suitable spectroscopic signals for the discrimination of several ignitable liquids. The optimized method was applied to a set of fire debris samples. In order to simulate post-burn samples, several ignitable liquids (gasoline, diesel, citronella, kerosene, paraffin) were used to ignite different substrates (wood, cotton, cork, paper and paperboard). Full discrimination was obtained using discriminant analysis. The method reported here can be considered a green technique for fire debris analysis. PMID:27187407
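    The chemometric step can be sketched as follows (synthetic data; the class names follow the abstract, but the spectra, sample counts, and scores are illustrative only):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative sketch of LDA/HCA on mass-spectral fingerprints; not the paper's data.
rng = np.random.default_rng(1)
classes = ["gasoline", "diesel", "kerosene", "citronella", "paraffin"]
n_per_class, n_mz = 12, 156                      # 156 channels = m/z 45..200

X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_mz))
               for i, _ in enumerate(classes)])  # toy class-separated spectra
y = np.repeat(np.arange(len(classes)), n_per_class)

# Linear discriminant analysis with cross-validation.
lda = LinearDiscriminantAnalysis()
print("LDA CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())

# Hierarchical cluster analysis (HCA) on the same fingerprints.
Z = linkage(X, method="ward")
print("clusters found:", len(set(fcluster(Z, t=len(classes), criterion="maxclust"))))
```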

  2. Use of fractional factorial design for optimization of digestion procedures followed by multi-element determination of essential and non-essential elements in nuts using ICP-OES technique.

    PubMed

    Momen, Awad A; Zachariadis, George A; Anthemidis, Aristidis N; Stratis, John A

    2007-01-15

    Two digestion procedures have been tested on nut samples for application in the determination of essential (Cr, Cu, Fe, Mg, Mn, Zn) and non-essential (Al, Ba, Cd, Pb) elements by inductively coupled plasma-optical emission spectrometry (ICP-OES). These included wet digestions with HNO3/H2SO4 and HNO3/H2SO4/H2O2; the latter is recommended for better analyte recoveries (relative error < 11%). Two calibration procedures (aqueous standard and standard addition) were studied, and standard addition proved preferable for all analytes. Experimental designs for seven factors (HNO3, H2SO4 and H2O2 volumes, digestion time, pre-digestion time, hot-plate temperature and sample weight) were used for optimization of the sample digestion procedures. For this purpose, a Plackett-Burman fractional factorial design, which involves eight experiments, was adopted. The HNO3 and H2O2 volumes and the digestion time were found to be the most important parameters. The instrumental conditions were also optimized (using a peanut matrix rather than aqueous standard solutions), considering radio-frequency (rf) incident power, nebulizer argon gas flow rate and sample uptake flow rate. The analytical performance, such as limits of detection (LOD < 0.74 µg g-1), precision of the overall procedures (relative standard deviation between 2.0 and 8.2%) and accuracy (relative errors between 0.4 and 11%), was assessed statistically to evaluate the developed analytical procedures. The good agreement between measured and certified values for all analytes (relative error < 11%) with respect to IAEA-331 (spinach leaves) and IAEA-359 (cabbage) indicates that the developed analytical method is well suited for further studies on the fate of major elements in nuts and possibly similar matrices.
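    A Plackett-Burman design of the kind described, eight runs for seven two-level factors, can be generated from a Hadamard matrix; the sketch below is illustrative (the run order and level coding are assumptions, not the authors' worksheet):

```python
import numpy as np
from scipy.linalg import hadamard

# Sketch only: an 8-run Plackett-Burman (saturated two-level) design for the
# seven digestion factors named in the abstract, coded -1/+1.
factors = ["HNO3 volume", "H2SO4 volume", "H2O2 volume", "digestion time",
           "pre-digestion time", "hot-plate temperature", "sample weight"]

H = hadamard(8)            # 8x8 Hadamard matrix; first column is all +1
design = H[:, 1:]          # drop the constant column -> 8 runs x 7 factors

for run, row in enumerate(design, start=1):
    settings = ", ".join(f"{f}={'+' if v > 0 else '-'}" for f, v in zip(factors, row))
    print(f"run {run}: {settings}")

# Main effect of each factor = mean(response at +1) - mean(response at -1);
# with a response vector y of 8 values this is simply design.T @ y / 4.
```

    Dropping the constant column of an order-8 Hadamard matrix yields a saturated two-level design in which every main effect is estimated from all eight runs.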

  3. The analytical approach to optimization of active region structure of quantum dot laser

    NASA Astrophysics Data System (ADS)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.

    2014-10-01

    Using the analytical approach introduced in our previous papers, we analyse the possibilities for optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal length, dispersion, and number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized with respect to the cavity length is practically equal to its maximum value.

  4. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium-chemistry-based thermodynamics and provides analytic derivatives. This allows stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring finite-difference derivative approximations. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. The results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
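    The benefit claimed here, analytic derivatives versus internal finite differencing in a gradient-based optimizer, can be illustrated generically (this sketch uses SciPy and a textbook test function, not Pycycle or OpenMDAO):

```python
import numpy as np
from scipy.optimize import minimize

# Generic illustration: the same gradient-based optimizer run with an analytic
# gradient and with internal finite differencing.
def rosenbrock(x):
    return (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    # Analytic gradient, exact to machine precision.
    return np.array([-2 * (1 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
                     200.0 * (x[1] - x[0]**2)])

x0 = np.array([-1.2, 1.0])
analytic = minimize(rosenbrock, x0, jac=rosenbrock_grad, method="BFGS")
fin_diff = minimize(rosenbrock, x0, jac=None, method="BFGS")  # finite differencing

print("analytic jac:", analytic.nfev, "function evals, f* =", analytic.fun)
print("finite diff :", fin_diff.nfev, "function evals, f* =", fin_diff.fun)
```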

  5. Analytical and experimental studies of an optimum multisegment phased liner noise suppression concept

    NASA Technical Reports Server (NTRS)

    Sawdy, D. T.; Beckemeyer, R. J.; Patterson, J. D.

    1976-01-01

    Results are presented from detailed analytical studies made to define methods for obtaining improved multisegment lining performance by taking advantage of relative placement of each lining segment. Properly phased liner segments reflect and spatially redistribute the incident acoustic energy and thus provide additional attenuation. A mathematical model was developed for rectangular ducts with uniform mean flow. Segmented acoustic fields were represented by duct eigenfunction expansions, and mode-matching was used to ensure continuity of the total field. Parametric studies were performed to identify attenuation mechanisms and define preliminary liner configurations. An optimization procedure was used to determine optimum liner impedance values for a given total lining length, Mach number, and incident modal distribution. Optimal segmented liners are presented and it is shown that, provided the sound source is well-defined and flow environment is known, conventional infinite duct optimum attenuation rates can be improved. To confirm these results, an experimental program was conducted in a laboratory test facility. The measured data are presented in the form of analytical-experimental correlations. Excellent agreement between theory and experiment verifies and substantiates the analytical prediction techniques. The results indicate that phased liners may be of immediate benefit in the development of improved aircraft exhaust duct noise suppressors.

  6. Optimal starting conditions for the rendezvous maneuver: Analytical and computational approach

    NASA Astrophysics Data System (ADS)

    Ciarcia, Marco

    The three-dimensional rendezvous between two spacecraft is considered: a target spacecraft on a circular orbit around the Earth and a chaser spacecraft initially on some elliptical orbit yet to be determined. The chaser spacecraft has variable mass, limited thrust, and its trajectory is governed by three controls, one determining the thrust magnitude and two determining the thrust direction. We seek the time history of the controls in such a way that the propellant mass required to execute the rendezvous maneuver is minimized. Two cases are considered: (i) time-to-rendezvous free and (ii) time-to-rendezvous given, respectively equivalent to (i) free angular travel and (ii) fixed angular travel for the target spacecraft. The above problem has been studied by several authors under the assumption that the initial separation coordinates and the initial separation velocities are given, hence known initial conditions for the chaser spacecraft. In this paper, it is assumed that both the initial separation coordinates and initial separation velocities are free except for the requirement that the initial chaser-to-target distance is given so as to prevent the occurrence of trivial solutions. Two approaches are employed: optimal control formulation (Part A) and mathematical programming formulation (Part B). In Part A, analyses are performed with the multiple-subarc sequential gradient-restoration algorithm for optimal control problems. They show that the fuel-optimal trajectory is zero-bang, namely it is characterized by two subarcs: a long coasting zero-thrust subarc followed by a short powered max-thrust braking subarc. While the thrust direction of the powered subarc is continuously variable for the optimal trajectory, its replacement with a constant (yet optimized) thrust direction produces a very efficient guidance trajectory. Indeed, for all values of the initial distance, the fuel required by the guidance trajectory is within less than one percent of the fuel required by the optimal trajectory. For the guidance trajectory, because of the replacement of the variable thrust direction of the powered subarc with a constant thrust direction, the optimal control problem degenerates into a mathematical programming problem with a relatively small number of degrees of freedom, more precisely: three for case (i) time-to-rendezvous free and two for case (ii) time-to-rendezvous given. In particular, we consider the rendezvous between the Space Shuttle (chaser) and the International Space Station (target). Once a given initial distance SS-to-ISS is preselected, the present work supplies not only the best initial conditions for the rendezvous trajectory, but simultaneously the corresponding final conditions for the ascent trajectory. In Part B, an analytical solution of the Clohessy-Wiltshire equations is presented (i) neglecting the change of the spacecraft mass due to the fuel consumption and (ii) assuming that the thrust is finite, that is, the trajectory includes powered subarcs flown with max thrust and coasting subarcs flown with zero thrust. Then, employing the found analytical solution, we study the rendezvous problem under the assumption that the initial separation coordinates and initial separation velocities are free except for the requirement that the initial chaser-to-target distance is given. The main contribution of Part B is the development of analytical solutions for the powered subarcs, an important extension of the analytical solutions already available for the coasting subarcs.
One consequence is that the entire optimal trajectory can be described analytically. Another consequence is that the optimal control problems degenerate into mathematical programming problems. A further consequence is that, vis-a-vis the optimal control formulation, the mathematical programming formulation reduces the CPU time by a factor of order 1000. Key words. Space trajectories, rendezvous, optimization, guidance, optimal control, calculus of variations, Mayer problems, Bolza problems, transformation techniques, multiple-subarc sequential gradient-restoration algorithm.
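    For reference, the Clohessy-Wiltshire equations mentioned in Part B are, in their common radial/along-track/cross-track form (sign conventions vary with the choice of axes):

```latex
% Standard Clohessy-Wiltshire (Hill) equations for the relative motion of the
% chaser about a target on a circular orbit of mean motion n.
\[
  \ddot{x} - 3n^{2}x - 2n\dot{y} = a_x, \qquad
  \ddot{y} + 2n\dot{x} = a_y, \qquad
  \ddot{z} + n^{2}z = a_z ,
\]
% with (a_x, a_y, a_z) the thrust acceleration; setting a = 0 gives the coasting
% subarcs, whose closed-form solution is classical.
```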

  7. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.

    PubMed

    Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul

    2013-10-01

    Mathematical models can be used to study the chemotherapy of tumor cells. In 1979, Goldie and Coldman proposed the first mathematical model relating the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance, and its original idea has been extended and further investigated in numerous follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model to explain, with a simulation approach, why an alternating non-cross-resistant chemotherapy is optimal. Subsequently, in 1983, Goldie and Coldman proposed an extended stochastic model and provided a rigorous mathematical proof of their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments focused mainly on a process with symmetrical parameter settings and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetrical assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to analyze some optimal policies for the model analytically. In addition, Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof of Goldie and Coldman's result. In addition to the theoretical derivation, numerical results are included to verify the correctness of our work.

  8. Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Lermusiaux, Pierre F. J.

    2016-04-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and the corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.

  9. Characterization of classical static noise via qubit as probe

    NASA Astrophysics Data System (ADS)

    Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif

    2018-03-01

    The dynamics of the quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for the QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit on the coupling time that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for the QFI, the qubit is used as a probe to precisely estimate the disorder parameter of the environment. A relation for the optimal interaction time with the environment is obtained, and a condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values of the noise parameter in the mentioned range are estimable with equal precision. A comparison of our results with previous studies in different classical environments is made.

  10. Optimal design of damping layers in SMA/GFRP laminated hybrid composites

    NASA Astrophysics Data System (ADS)

    Haghdoust, P.; Cinquemani, S.; Lo Conte, A.; Lecis, N.

    2017-10-01

    This work describes the optimization of the shape profiles of shape memory alloy (SMA) sheets in hybrid layered composite structures, i.e. slender beams or thin plates, designed for the passive attenuation of flexural vibrations. The paper starts with a description of the material and architecture of the investigated hybrid layered composite. An analytical method for evaluating the energy dissipation inside a vibrating cantilever beam is developed. The analytical solution is then followed by a shape profile optimization of the inserts, using a genetic algorithm to minimize the SMA material usage while maintaining a target level of structural damping. The delamination problem at the SMA/glass fiber reinforced polymer interface is discussed. Finally, the proposed methodology is applied to study the hybridization of a layered wind turbine blade structure with SMA material, in order to increase its passive damping.

  11. Analytic model for ultrasound energy receivers and their optimal electric loads II: Experimental validation

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-10-01

    In this paper, we verify the two optimal electric load concepts based on the zero reflection condition and on the power maximization approach for ultrasound energy receivers. We test a high loss 1-3 composite transducer, and find that the measurements agree very well with the predictions of the analytic model for plate transducers that we have developed previously. Additionally, we also confirm that the power maximization and zero reflection loads are very different when the losses in the receiver are high. Finally, we compare the optimal load predictions by the KLM and the analytic models with frequency dependent attenuation to evaluate the influence of the viscosity.

  12. Optimality study of a gust alleviation system for light wing-loading STOL aircraft

    NASA Technical Reports Server (NTRS)

    Komoda, M.

    1976-01-01

    An analytical study was made of an optimal gust alleviation system that employs a vertical gust sensor mounted forward of an aircraft's center of gravity. Frequency domain optimization techniques were employed to synthesize the optimal filters that process the corrective signals to the flaps and elevator actuators. Special attention was given to evaluating the effectiveness of lead time, that is, the time by which relative wind sensor information should lead the actual encounter of the gust. The resulting filter is expressed as an implicit function of the prescribed control cost. A numerical example for a light wing loading STOL aircraft is included in which the optimal trade-off between performance and control cost is systematically studied.

  13. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE PAGES

    Luo, Na; Hong, Tianzhen; Li, Hui; ...

    2017-07-25

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements for improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with the proposed strategy, and the results demonstrate the superiority of the method over the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with the current operational strategy. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimized control strategy in real TES system operation.
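    The scheduling step can be sketched as follows with SciPy's SLSQP solver (an SQP implementation); the load profile, prices, tank capacity, and chiller COP below are synthetic placeholders, not the shopping-mall data:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the scheduling step only: choose hourly TES charge (+) / discharge (-)
# to minimize daily energy cost under time-of-use prices.
hours = 24
price = np.where((np.arange(hours) >= 12) & (np.arange(hours) < 20), 0.30, 0.10)  # $/kWh
cooling_load = 200 + 150 * np.sin(np.pi * (np.arange(hours) - 6) / 12).clip(0)    # kWh_th
cop, tank_cap, rate_max = 3.5, 1200.0, 300.0   # hypothetical chiller COP, kWh_th, kWh_th/h

def cost(u):                     # u[t] > 0 charges the tank, u[t] < 0 serves load from ice
    chiller_thermal = cooling_load + u          # chiller covers load plus any charging
    return np.sum(price * chiller_thermal / cop)

cons = [
    {"type": "ineq", "fun": lambda u: cooling_load + u},            # chiller output >= 0
    {"type": "ineq", "fun": lambda u: np.cumsum(u)},                # state of charge >= 0
    {"type": "ineq", "fun": lambda u: tank_cap - np.cumsum(u)},     # state of charge <= cap
    {"type": "eq",   "fun": lambda u: np.sum(u)},                   # end the day as it began
]
res = minimize(cost, np.zeros(hours), method="SLSQP",
               bounds=[(-rate_max, rate_max)] * hours, constraints=cons)
print("optimized daily cost: $%.2f  (no storage: $%.2f)" % (res.fun, cost(np.zeros(hours))))
```

    The equality constraint forces the tank to end the day at its starting state of charge, so the comparison against the no-storage cost is like-for-like.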

  14. The Quantum Approximation Optimization Algorithm for MaxCut: A Fermionic View

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2017-01-01

    Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of p steps, in each of which a classical Hamiltonian, derived from the cost function, is applied, followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MaxCut. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.

  15. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Na; Hong, Tianzhen; Li, Hui

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements for improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with the proposed strategy, and the results demonstrate the superiority of the method over the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with the current operational strategy. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimized control strategy in real TES system operation.

  16. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies that favor or suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be re-obtained analytically, and in a simpler way, from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be re-derived in a simpler and clearer analytical manner.
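    The stated criterion lends itself to a compact numerical sketch (illustrative only, not the authors' code): score a random network by ω^T L ω and greedily rewire edges to increase the score while keeping the edge count fixed and the graph connected.

```python
import numpy as np
import networkx as nx

# Minimal hill-climb rewiring sketch for the omega^T L omega criterion.
rng = np.random.default_rng(2)
n, m = 20, 40
G = nx.gnm_random_graph(n, m, seed=2)
omega = rng.normal(size=n)                       # natural frequencies

def score(graph):
    L = nx.laplacian_matrix(graph).toarray()
    return omega @ L @ omega

for _ in range(500):                             # crude hill climb over single-edge rewirings
    u, v = list(G.edges())[rng.integers(G.number_of_edges())]
    a, b = int(rng.integers(n)), int(rng.integers(n))
    if a == b or G.has_edge(a, b):
        continue
    H = G.copy()
    H.remove_edge(u, v)
    H.add_edge(a, b)
    if nx.is_connected(H) and score(H) > score(G):
        G = H

print("final omega^T L omega:", score(G))
```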

  17. Summary of Optimization Techniques That Can Be Applied to Suspension System Design

    DOT National Transportation Integrated Search

    1973-03-01

    Summaries are presented of the analytic techniques available for three levitated vehicle suspension optimization problems: optimization of passive elements for fixed configuration; optimization of a free passive configuration; optimization of a free ...

  18. Method of multi-dimensional moment analysis for the characterization of signal peaks

    DOEpatents

    Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A

    2012-10-23

    A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
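    As a rough illustration of moment-based peak characterization (not the patented procedure itself; the peak, noise level, and the Peclet-like figure of merit below are assumptions for demonstration):

```python
import numpy as np

# Characterize a signal peak by its low-order statistical moments and a
# Peclet-like sharpness figure of merit (squared centroid over variance).
t = np.linspace(0.0, 20.0, 2001)                        # drift/retention time axis
signal = 1.0 * np.exp(-0.5 * ((t - 8.0) / 0.6) ** 2)    # synthetic peak
signal += 0.02 * np.random.default_rng(3).normal(size=t.size)

baseline = np.median(signal)
y = np.clip(signal - baseline, 0.0, None)

m0 = np.trapz(y, t)                         # zeroth moment: peak area
mu = np.trapz(t * y, t) / m0                # first moment: centroid (apparent drift time)
var = np.trapz((t - mu) ** 2 * y, t) / m0   # second central moment: width squared

print(f"area={m0:.3f}, centroid={mu:.3f}, sigma={np.sqrt(var):.3f}")
print(f"Peclet-like figure of merit mu^2/var = {mu**2 / var:.1f}")
```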

  19. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. The results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  20. Analytical Optimization of the Net Residual Dispersion in SPM-Limited Dispersion-Managed Systems

    NASA Astrophysics Data System (ADS)

    Xiao, Xiaosheng; Gao, Shiming; Tian, Yu; Yang, Changxi

    2006-05-01

    Dispersion management is an effective technique for suppressing nonlinear impairment in fiber transmission systems; it involves tuning the amounts of precompensation, residual dispersion per span (RDPS), and net residual dispersion (NRD) of the system. For self-phase modulation (SPM)-limited systems, optimizing the NRD is necessary because it can greatly improve system performance. In this paper, an analytical method is presented to optimize the NRD of SPM-limited dispersion-managed systems. The method is based on the correlation between the nonlinear impairment and the output pulse broadening of SPM-limited systems; therefore, dispersion-managed systems can be optimized by minimizing the output single-pulse broadening. A set of expressions is derived to calculate the output pulse broadening of an SPM-limited dispersion-managed system, from which the analytical result for the optimal NRD is obtained. Furthermore, with the expressions for pulse broadening, the dependence of the nonlinear impairment on the amounts of precompensation and RDPS can be revealed conveniently.

  1. Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.

    PubMed

    Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina

    2016-08-25

    The increasing trend in the recent literature on coarse-grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even at a given resolution level, the force fields are very heterogeneous and are optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use and optimization of analytical potentials, optimized by targeting the statistical distributions of internal variables by means of a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions and allows handling issues related to force-field term correlations. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are under development.

  2. Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method

    NASA Astrophysics Data System (ADS)

    Fitrianingsih, E.; Armellin, R.

    2018-04-01

    One of the objectives of mission design is selecting an optimum orbital transfer, which is often translated as a transfer requiring minimum propellant consumption. In order to ensure that the selected trajectory meets this requirement, the optimality of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives the minimum value, or by evaluating the trajectory according to certain criteria of optimality. The second method is performed by analyzing the profile of the modulus of the thrust direction vector, which is known as the primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify whether the result from the direct method is truly optimal or whether the ΔV can be reduced further by implementing a correction maneuver on the reference trajectory. In addition to its capability to evaluate transfer optimality without the need to calculate the transfer ΔV, the primer vector also enables us to identify the time and position at which to apply a correction maneuver in order to optimize a non-optimal transfer. This paper presents the analytical approach to the fuel-optimal impulsive transfer using the primer vector method. The validity of the method is confirmed by comparing the results to those from the numerical method. The investigation of the optimality of direct transfers is used to give an example of the application of the method. The case under study is prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.
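    For context, the classical primer-vector necessary conditions (Lawden) that such an analysis evaluates along a candidate impulsive trajectory can be summarized as:

```latex
% Primer vector dynamics and Lawden's necessary conditions for an optimal
% impulsive transfer (standard textbook form, stated here for context):
\[
  \ddot{\mathbf{p}} = G(\mathbf{r})\,\mathbf{p}, \qquad
  G = \frac{\partial \mathbf{g}}{\partial \mathbf{r}} \ \ \text{(gravity-gradient matrix)},
\]
% (1) p and \dot{p} are continuous everywhere;
% (2) |p(t)| <= 1 along the trajectory, with |p| = 1 at every impulse;
% (3) each impulse is applied in the direction of p;
% (4) d|p|/dt = 0 at interior impulses.
```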

  3. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in an anisotropic earth formation. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider the response of a horizontally layered anisotropic earth computed with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as the horizontal and vertical resistivities and the thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds the model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity in the values of the model parameters. For this reason, the results should be checked with a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
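    A minimal particle swarm optimizer of the kind referred to can be sketched as follows (illustrative only: the misfit function, bounds, and swarm parameters are placeholders, not the authors' resistivity forward model):

```python
import numpy as np

# Minimal particle swarm optimizer. The objective below is a stand-in misfit;
# in the paper's setting it would compare observed and forward-modeled
# apparent resistivities.
def misfit(m):
    true_model = np.array([50.0, 200.0, 10.0])      # hypothetical rho_h, rho_v, thickness
    return np.sum(((m - true_model) / true_model) ** 2)

rng = np.random.default_rng(4)
n_particles, n_dim, iters = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5                            # inertia and acceleration constants

lo, hi = np.array([1.0, 1.0, 1.0]), np.array([500.0, 500.0, 100.0])
x = rng.uniform(lo, hi, size=(n_particles, n_dim))   # positions
v = np.zeros_like(x)                                 # velocities
pbest, pbest_val = x.copy(), np.array([misfit(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, n_dim)), rng.random((n_particles, n_dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    vals = np.array([misfit(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("estimated model:", np.round(gbest, 2))
```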

  4. Modeling and design optimization of adhesion between surfaces at the microscale.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sylves, Kevin T.

    2008-08-01

    This research applies design optimization techniques to structures in adhesive contact where the dominant adhesive mechanism is the van der Waals force. Interface finite elements are developed for domains discretized by beam elements, quadrilateral elements or triangular shell elements. Example analysis problems comparing finite element results to analytical solutions are presented. These examples are then optimized, where the objective is matching a force-displacement relationship and the optimization variables are the interface element energy of adhesion or the width of beam elements in the structure. Several parameter studies are conducted and discussed.

  5. An optical fusion gate for W-states

    NASA Astrophysics Data System (ADS)

    Özdemir, Ş. K.; Matsunaga, E.; Tashima, T.; Yamamoto, T.; Koashi, M.; Imoto, N.

    2011-10-01

    We introduce a simple optical gate to fuse arbitrary-size polarization-entangled W-states into larger W-states. The gate requires a polarizing beam splitter (PBS), a half-wave plate (HWP) and two photon detectors. We study, numerically and analytically, the resource consumption necessary for preparing larger W-states by fusing smaller ones with the proposed fusion gate. We show analytically that the resource requirement scales at most sub-exponentially with the increasing size of the state to be prepared. We numerically determine the resource cost for fusion without recycling, for which W-states of arbitrary size can be optimally prepared. Moreover, we introduce another strategy that is based on recycling and outperforms the optimal strategy for the non-recycling case.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.

  7. Computing the optimal path in stochastic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  8. Transaction fees and optimal rebalancing in the growth-optimal portfolio

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Medo, Matúš; Zhang, Liang; Zhang, Yi-Cheng

    2011-05-01

    The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing, which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is subsequently generalized and numerically verified for broad return distributions and for returns generated by a GARCH process. Finally, we study the case when the investment is rebalanced only partially and show that this strategy can improve the investment's long-term growth rate more than optimization of the rebalancing period.
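
    The effect of transaction fees on the rebalancing period can be illustrated with a short Monte Carlo sketch. The binary return parameters, fee level and target fraction below are illustrative assumptions, not values from the paper; the scan over rebalancing periods mirrors the trade-off the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative binary-return risky asset: each period the price is multiplied
# by u with probability p, else by d.
p, u, d = 0.55, 1.10, 0.92
fee = 0.002            # proportional transaction fee on the traded amount
f_target = 0.6         # fixed target fraction of wealth held in the risky asset
T = 100_000            # number of elementary periods simulated

def long_term_growth(rebalance_every):
    wealth, risky = 1.0, f_target          # start rebalanced
    cash = wealth - risky
    for t in range(T):
        r = u if rng.random() < p else d
        risky *= r
        wealth = cash + risky
        if (t + 1) % rebalance_every == 0:
            target = f_target * wealth
            traded = abs(target - risky)
            wealth -= fee * traded          # pay the fee, then rebalance
            risky = f_target * wealth
            cash = wealth - risky
    return np.log(wealth) / T               # per-period log growth rate

for n in (1, 2, 5, 10, 20, 50):
    print(f"rebalance every {n:3d} periods: growth {long_term_growth(n):.6f}")
```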

  9. Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    Optimal control modification has been developed to improve robustness to model-reference adaptive control. For systems with linear matched uncertainty, optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.

  10. Analytic Optimization of Near-Field Optical Chirality Enhancement

    PubMed Central

    2017-01-01

    We present an analytic derivation for the enhancement of local optical chirality in the near field of plasmonic nanostructures by tuning the far-field polarization of external light. We illustrate the results by means of simulations with an achiral and a chiral nanostructure assembly and demonstrate that local optical chirality is significantly enhanced with respect to circular polarization in free space. The optimal external far-field polarizations are different from both circular and linear. Symmetry properties of the nanostructure can be exploited to determine whether the optimal far-field polarization is circular. Furthermore, the optimal far-field polarization depends on the frequency, which results in complex-shaped laser pulses for broadband optimization. PMID:28239617

  11. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

    Trajectory optimization is an integral component of the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto-based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.

  12. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    PubMed

    Huhn, Carolin; Pyell, Ute

    2008-07-11

    It is investigated whether the relationships derived within a previously developed optimization scheme for separations in micellar electrokinetic chromatography can be used to model the effective electrophoretic mobilities of analytes that differ strongly in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated by comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients, provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.

  13. Spatiotemporal and geometric optimization of sensor arrays for detecting analytes in fluids

    DOEpatents

    Lewis, Nathan S.; Freund, Michael S.; Briglin, Shawn M.; Tokumaru, Phil; Martin, Charles R.; Mitchell, David T.

    2006-10-17

    Sensor arrays and sensor array systems for detecting analytes in fluids. Sensors configured to generate a response upon introduction of a fluid containing one or more analytes can be located on one or more surfaces relative to one or more fluid channels in an array. Fluid channels can take the form of pores or holes in a substrate material. Fluid channels can be formed between one or more substrate plates. Sensors can be fabricated with substantially optimized sensor volumes to generate a response having a substantially maximized signal to noise ratio upon introduction of a fluid containing one or more target analytes. Methods of fabricating and using such sensor arrays and systems are also disclosed.

  14. Spatiotemporal and geometric optimization of sensor arrays for detecting analytes in fluids

    DOEpatents

    Lewis, Nathan S [La Canada, CA; Freund, Michael S [Winnipeg, CA; Briglin, Shawn S [Chittenango, NY; Tokumaru, Phillip [Moorpark, CA; Martin, Charles R [Gainesville, FL; Mitchell, David [Newtown, PA

    2009-09-29

    Sensor arrays and sensor array systems for detecting analytes in fluids. Sensors configured to generate a response upon introduction of a fluid containing one or more analytes can be located on one or more surfaces relative to one or more fluid channels in an array. Fluid channels can take the form of pores or holes in a substrate material. Fluid channels can be formed between one or more substrate plates. Sensors can be fabricated with substantially optimized sensor volumes to generate a response having a substantially maximized signal to noise ratio upon introduction of a fluid containing one or more target analytes. Methods of fabricating and using such sensor arrays and systems are also disclosed.

  15. Perspectives on optimization of vaccination and immunization of Ethiopian children/women: what should and can we further do? Why and how?

    PubMed

    Gebremariam, Mulugeta Betre

    2012-04-01

    Vaccination and immunization of children and child-bearing women in particular is a uniquely important public health intervention, in Ethiopia as elsewhere. In spite of promising progress, much remains to be done toward optimization, effectiveness and protection. This analytical review aimed to flag optimization perspectives on the basis of readily available information. CONTEXT, MATERIALS AND METHODS: The study emerged from the review and capacity-enhancement workshop of experts on the Reaching Every District (RED) strategy for Eastern and Southern African countries, hosted by the WHO Afro Country Support Team for Eastern and Southern Africa in Harare, Zimbabwe, 28 February to 3 March 2012. The study is essentially a qualitative analytical review of the pertinent literature with a particular focus on Ethiopia. Both peer-reviewed, published literature and gray (unpublished) literature were solicited and reviewed systematically. The analytical discourse focused on performance progress, achievements, opportunities, gaps and shortcomings, challenges, threats and perspectives. Vaccination and immunization performance evidence consolidated by the WHO Afro Country Support Team served as the starting point for the central analytical discussion. KEY FINDINGS AND REFLECTIONS: Without underestimating the progress and successes registered thus far, many areas warrant further attention in Ethiopia in particular. Compared with other member countries, the size of the unimmunized population, reporting quality, fragility of systems, weak capacity and resource limitations in Ethiopia deserve further concerted attention. Districts with under 80% DPT3 coverage were still too many for Ethiopia by 2010/11. While the challenges were prevalent, effective and maximal use of the readily available opportunities appeared even more crucial. Further, dynamic optimization is needed more than ever, and promising, realistic recommendations are duly highlighted.

  16. Semi-analytic valuation of stock loans with finite maturity

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoping; Putri, Endah R. M.

    2015-10-01

    In this paper we study stock loans of finite maturity with different dividend distributions semi-analytically using the analytical approximation method in Zhu (2006). Stock loan partial differential equations (PDEs) are established under the Black-Scholes framework. The Laplace transform method is used to solve the PDEs. The optimal exit price and stock loan value are obtained in Laplace space. Values in the original time space are recovered by numerical Laplace inversion. To demonstrate the efficiency and accuracy of our semi-analytic method, several examples are presented, and the results are compared with those calculated using existing methods. We also present a calculation of the fair service fee charged by the lender for different loan parameters.

  17. Optimizing an immersion ESL curriculum using analytic hierarchy process.

    PubMed

    Tang, Hui-Wen Vivian

    2011-11-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative importance of course criteria for the purpose of tailoring an optimal one-week immersion English as a second language (ESL) curriculum for elementary school students in a suburban county of Taiwan. The hierarchy model and AHP analysis utilized in the present study will be useful for resolving several important multi-criteria decision-making issues in planning and evaluating ESL programs. This study also offers valuable insights and provides a basis for further research in customizing ESL curriculum models for different student populations with distinct learning needs, goals, and socioeconomic backgrounds. Copyright © 2011 Elsevier Ltd. All rights reserved.
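
    The core AHP step used in such a study, turning a pairwise comparison matrix into criterion weights and checking consistency, can be sketched as follows. The comparison matrix and criterion count are hypothetical; only the principal-eigenvector weighting and Saaty consistency ratio are standard.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for four
# illustrative curriculum criteria; a_ij = importance of criterion i over j.
A = np.array([
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
], dtype=float)

# Priority weights = principal right eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1); CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's tabulated random index
cr = ci / ri

print("criterion weights:", np.round(w, 3))
print("consistency ratio:", round(cr, 3))   # CR < 0.1 is usually acceptable
```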

  18. Solving the optimal attention allocation problem in manual control

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1976-01-01

    Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. The human modeling techniques are applied to study the hover control task for a CH-46 VTOL flight-tested by NASA.

  19. Supercritical fluid chromatography of metoprolol and analogues on aminopropyl and ethylpyridine silica without any additives.

    PubMed

    Lundgren, Johanna; Salomonsson, John; Gyllenhaal, Olle; Johansson, Erik

    2007-06-22

    Metoprolol and a number of related amino alcohols and similar analytes have been chromatographed on aminopropyl (APS) and ethylpyridine (EPS) silica columns. The mobile phase was carbon dioxide with methanol as modifier, and no amine additive was present. Optimal isocratic conditions for selectivity were evaluated using a design-of-experiments approach. A central composite circumscribed model for each column was used. Factors were column temperature, back-pressure and % (v/v) of modifier. The responses were retention and selectivity versus metoprolol. The % of modifier mainly controlled the retention on both columns, but pressure and temperature could also be important for optimizing the selectivity between the amino alcohols. With respect to selectivity, the compounds could be divided into four and five groups on the two columns. Furthermore, on the aminopropyl silica the analytes were more spread out, whereas on the ethylpyridine silica, due to its aromaticity, retention and selectivity were closer. For optimal conditions the column temperature and back-pressure should be high and the modifier concentration low. A comparison of the selectivity using optimized conditions shows a few switches of retention order between the two columns. On aminopropyl silica an aldehyde failed to be eluted owing to Schiff-base formation. Peak symmetry and column efficiency were briefly studied for some structurally close analogues. This revealed some activity from the columns that affected analytes with less protected amino groups (a methyl group instead of an isopropyl group). The tailing was more marked with the ethylpyridine column even with the more bulky alkyl substituents. Plate number N was a better measure than the asymmetry factor since some analyte peaks broadened without serious deterioration of symmetry compared to homologues.
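
    The central composite circumscribed design mentioned in the abstract, with the three factors column temperature, back-pressure and % modifier, can be generated in coded units as sketched below. The real-unit factor ranges are illustrative assumptions, not the study's settings.

```python
import itertools
import numpy as np

# Three factors, as in the abstract: column temperature, back-pressure, % modifier.
# Real-unit ranges below are illustrative placeholders, not the study's values.
factors = ["temperature_C", "backpressure_bar", "modifier_pct"]
low  = np.array([30.0, 120.0, 5.0])
high = np.array([60.0, 200.0, 20.0])

k = len(factors)
alpha = (2 ** k) ** 0.25          # rotatable axial distance for a CCC design

# Coded design points: full factorial corners, axial (star) points, center point.
corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
axial = np.concatenate([np.diag([s * alpha] * k)[i:i + 1]
                        for s in (-1.0, 1.0) for i in range(k)])
center = np.zeros((1, k))          # center point, usually replicated in practice
coded = np.vstack([corners, axial, center])

# Map coded units to real units: x_real = mid + coded * half_range.
mid, half = (high + low) / 2.0, (high - low) / 2.0
real = mid + coded * half

for row_c, row_r in zip(coded, real):
    print(np.round(row_c, 3), "->", np.round(row_r, 2))
```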

  20. Analytical approximation schemes for solving exact renormalization group equations in the local potential approximation

    NASA Astrophysics Data System (ADS)

    Bervillier, C.; Boisseau, B.; Giacomini, H.

    2008-02-01

    The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme for two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made. A similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in current studies of ERGEs (in particular for the Wilson-Polchinski case, for which the usual expansions fail).

  1. Optimizing liquid effluent monitoring at a large nuclear complex.

    PubMed

    Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M

    2003-12-01

    Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US $223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
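
    One step of such an evaluation, estimating the probability that a permit limit is exceeded from baseline effluent data, might look like the following sketch. The baseline data, distributional choice (lognormal) and permit limit are synthetic assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic baseline data for one analyte (e.g., four years of monthly samples);
# real data would come from the facility's effluent monitoring records.
baseline = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=48)   # mg/L
permit_limit = 10.0                                               # mg/L

# Fit a lognormal distribution to the baseline (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(baseline, floc=0)

# Probability that a single future sample exceeds the permit limit.
p_exceed = stats.lognorm.sf(permit_limit, shape, loc=loc, scale=scale)
print(f"estimated exceedance probability per sample: {p_exceed:.2e}")

# A very small probability supports reducing sampling frequency for this analyte.
```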

  2. Body fluid analysis: clinical utility and applicability of published studies to guide interpretation of today's laboratory testing in serous fluids.

    PubMed

    Block, Darci R; Algeciras-Schimnich, Alicia

    2013-01-01

    Requests for testing various analytes in serous fluids (e.g., pleural, peritoneal, pericardial effusions) are submitted daily to clinical laboratories. Testing of these fluids deviates from assay manufacturers' specifications, as most laboratory assays are optimized for testing blood or urine specimens. These requests add a burden to clinical laboratories, which need to validate assay performance characteristics in these fluids to exclude matrix interferences (given the different composition of body fluids) while maintaining regulatory compliance. Body fluid testing for a number of analytes has been reported in the literature; however, understanding the clinical utility of these analytes is critical because laboratories must address the analytic and clinical validation requirements, while educating clinicians on proper test utilization. In this article, we review the published data to evaluate the clinical utility of testing for numerous analytes in body fluid specimens. We also highlight the pre-analytic and analytic variables that need to be considered when reviewing published studies in body fluid testing. Finally, we provide guidance on how published studies might (or might not) guide interpretation of test results in today's clinical laboratories.

  3. Analytical solutions to optimal underactuated spacecraft formation reconfiguration

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Yan, Ye; Zhou, Yang

    2015-11-01

    Underactuated systems can generally be defined as systems with fewer control inputs than degrees of freedom to be controlled. In this paper, analytical solutions to optimal underactuated spacecraft formation reconfiguration without either the radial or the in-track control are derived. By using a linear dynamical model of underactuated spacecraft formation in circular orbits, controllability analysis is conducted for either underactuated case. Indirect optimization methods based on the minimum principle are then introduced to generate analytical solutions to optimal open-loop underactuated reconfiguration problems. Both fixed and free final conditions constraints are considered for either underactuated case, and comparisons between these two final conditions indicate that the optimal control strategies with free final conditions require less control effort than those with fixed ones. Meanwhile, closed-loop adaptive sliding mode controllers for both underactuated cases are designed to guarantee optimal trajectory tracking in the presence of unmatched external perturbations, linearization errors, and system uncertainties. The adaptation laws are designed via a Lyapunov-based method to ensure the overall stability of the closed-loop system. Explicit expressions for the terminal convergence regions of the system states have also been obtained. Numerical simulations demonstrate the validity and feasibility of the proposed open-loop and closed-loop control schemes for optimal underactuated spacecraft formation reconfiguration in circular orbits.

  4. SSME single crystal turbine blade dynamics

    NASA Technical Reports Server (NTRS)

    Moss, Larry A.; Smith, Todd E.

    1987-01-01

    A study was performed to determine the dynamic characteristics of the Space Shuttle main engine high pressure fuel turbopump (HPFTP) blades made of single crystal (SC) material. The first and second stage drive turbine blades of the HPFTP were examined. The nonrotating natural frequencies were determined experimentally and analytically. The experimental results of the SC second stage blade were used to verify the analytical procedures. The analytical study examined the SC first stage blade natural frequencies with respect to crystal orientation at typical operating conditions. The SC blade dynamic response was predicted to be less than that of the directionally solidified blade. Crystal axis orientation optimization indicated that third-mode interference will exist in any SC orientation.

  5. Analytic theory of alternate multilayer gratings operating in single-order regime.

    PubMed

    Yang, Xiaowei; Kozhevnikov, Igor V; Huang, Qiushi; Wang, Hongchang; Hand, Matthew; Sawhney, Kawal; Wang, Zhanshan

    2017-07-10

    Using the coupled wave approach (CWA), we introduce the analytical theory for an alternate multilayer grating (AMG) operating in the single-order regime, in which only one diffraction order is excited. Differing from previous studies that analogize AMGs to crystals, we conclude that a symmetrical structure, i.e., equal thicknesses of the two multilayer materials, is not the optimal design for an AMG and may result in a significant reduction in diffraction efficiency. The peculiarities of AMGs compared with other multilayer gratings are analyzed. The influence of the multilayer materials on diffraction efficiency is considered. The validity conditions of the analytical theory are also discussed.

  6. Electron Beam Melting and Refining of Metals: Computational Modeling and Optimization

    PubMed Central

    Vutova, Katia; Donchev, Veliko

    2013-01-01

    Computational modeling offers an opportunity for a better understanding and investigation of thermal transfer mechanisms. It can be used for the optimization of the electron beam melting process and for obtaining new materials with improved characteristics that have many applications in the power industry, medicine, instrument engineering, electronics, etc. A time-dependent 3D axisymmetric heat model for simulation of thermal transfer in metal ingots solidified in a water-cooled crucible during electron beam melting and refining (EBMR) is developed. The model predicts the change in the temperature field in the casting ingot during the interaction of the beam with the material. A modified Pismen-Rekford numerical scheme is developed to discretize the analytical model. The equation systems describing the thermal processes and the main characteristics of the numerical method are presented. In order to optimize the technological regimes, different criteria for better refinement and for obtaining dendrite crystal structures are proposed. Analytical problems of mathematical optimization are formulated, discretized and heuristically solved by cluster methods. Using simulation results that are important for practice, suggestions can be made for EBMR technology optimization. The proposed tool is useful for the study, control and optimization of EBMR process parameters and for improving the quality of the newly produced materials. PMID:28788351

  7. A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.

    1994-01-01

    A simple three DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. This analytical model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with the APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.

  8. Standardless quantification by parameter optimization in electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-11-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively.
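
    The underlying idea, fitting an analytical spectrum model to an experimental spectrum by least-squares parameter optimization, can be illustrated with a toy version. The Gaussian-peak model, line energies and noise level below are invented stand-ins; POEMA's actual analytical function is far more detailed.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
energy = np.linspace(0.0, 10.0, 500)          # keV, illustrative axis

# Toy analytical spectrum model: linear background + one Gaussian peak per
# element, with peak area standing in for the concentration-dependent intensity.
peak_centers = np.array([1.7, 3.7, 6.4])      # hypothetical line energies
def model(params, e):
    b0, b1 = params[:2]
    areas, widths = params[2:5], params[5:8]
    spec = b0 + b1 * e
    for a, c, w in zip(areas, peak_centers, widths):
        spec += a * np.exp(-0.5 * ((e - c) / w) ** 2)
    return spec

# Synthetic "experimental" spectrum generated from known parameters plus noise.
true = np.array([5.0, 0.2, 120.0, 60.0, 30.0, 0.08, 0.09, 0.10])
data = model(true, energy) + rng.normal(0.0, 1.0, energy.size)

# Parameter optimization: minimize the quadratic residuals, as in the abstract.
x0 = np.array([1.0, 0.0, 50.0, 50.0, 50.0, 0.1, 0.1, 0.1])
fit = least_squares(lambda p: model(p, energy) - data, x0)
print("fitted peak areas (proportional to concentrations):", np.round(fit.x[2:5], 1))
```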

  9. A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise.

    PubMed

    Jin, Qibing; Wang, Hehe; Su, Qixin; Jiang, Beiyan; Liu, Qie

    2018-01-01

    In this paper, we study the system identification of multi-input multi-output (MIMO) Hammerstein processes under typical heavy-tailed noise. To the best of our knowledge, there is no general analytical method to solve this identification problem. Motivated by this, we propose a general identification method to solve this problem based on a Gaussian-mixture distribution intelligent optimization algorithm (GMDA). The nonlinear part of the Hammerstein process is modeled by a Radial Basis Function (RBF) neural network, and the identification problem is converted to an optimization problem. To overcome the drawbacks of analytical identification methods in the presence of heavy-tailed noise, a meta-heuristic optimization algorithm, the cuckoo search (CS) algorithm, is used. The Gaussian-mixture distribution (GMD) and GMD sequences are introduced to improve the performance of the standard CS algorithm for this identification problem. Numerical simulations for different MIMO Hammerstein models are carried out, and the simulation results verify the effectiveness of the proposed GMDA. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Flow rate of transport network controls uniform metabolite supply to tissue

    PubMed Central

    Meigel, Felix J.

    2018-01-01

    Life and functioning of higher organisms depends on the continuous supply of metabolites to tissues and organs. What are the requirements on the transport network pervading a tissue to provide a uniform supply of nutrients, minerals or hormones? To theoretically answer this question, we present an analytical scaling argument and numerical simulations on how flow dynamics and network architecture control active spread and uniform supply of metabolites by studying the example of xylem vessels in plants. We identify the fluid inflow rate as the key factor for uniform supply. While at low inflow rates metabolites are already exhausted close to flow inlets, too high inflow flushes metabolites through the network and deprives tissue close to inlets of supply. In between these two regimes, there exists an optimal inflow rate that yields a uniform supply of metabolites. We determine this optimal inflow analytically in quantitative agreement with numerical results. Optimizing network architecture by reducing the supply variance over all network tubes, we identify patterns of tube dilation or contraction that compensate sub-optimal supply for the case of too low or too high inflow rate. PMID:29720455

  11. Development and optimization of an energy-regenerative suspension system under stochastic road excitation

    NASA Astrophysics Data System (ADS)

    Huang, Bo; Hsieh, Chen-Yu; Golnaraghi, Farid; Moallem, Mehrdad

    2015-11-01

    In this paper a vehicle suspension system with energy harvesting capability is developed, and an analytical methodology for the optimal design of the system is proposed. The optimization technique provides design guidelines for determining the stiffness and damping coefficients aimed at the optimal performance in terms of ride comfort and energy regeneration. The corresponding performance metrics are selected as root-mean-square (RMS) of sprung mass acceleration and expectation of generated power. The actual road roughness is considered as the stochastic excitation defined by ISO 8608:1995 standard road profiles and used in deriving the optimization method. An electronic circuit is proposed to provide variable damping in the real-time based on the optimization rule. A test-bed is utilized and the experiments under different driving conditions are conducted to verify the effectiveness of the proposed method. The test results suggest that the analytical approach is credible in determining the optimality of system performance.

  12. Parameter Optimization for Feature and Hit Generation in a General Unknown Screening Method-Proof of Concept Study Using a Design of Experiment Approach for a High Resolution Mass Spectrometry Procedure after Data Independent Acquisition.

    PubMed

    Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas

    2018-03-06

    High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim of this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
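
    The screening step (crucial vs. noncrucial parameters) can be illustrated with a two-level full factorial design and main-effect estimation. The parameter names and the hit-count response function below are hypothetical stand-ins for runs of the actual data-processing software.

```python
import itertools
import numpy as np

# Software parameters to screen (names are illustrative, not the vendor's).
params = ["mass_tolerance", "min_peak_intensity", "rt_window"]

# Stand-in for "run the screening workflow and count unnecessary hits";
# in practice this would call the data-processing software on a test set.
def n_hits(levels):
    x1, x2, x3 = levels          # coded levels, -1 or +1
    return 500 - 120 * x2 - 40 * x1 + 15 * x1 * x2 + 5 * x3

# Two-level full factorial design (2^3 = 8 runs).
design = np.array(list(itertools.product([-1, 1], repeat=len(params))))
response = np.array([n_hits(run) for run in design])

# Main effect of a parameter = mean response at +1 minus mean response at -1.
for j, name in enumerate(params):
    effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    print(f"{name:20s} main effect on hit count: {effect:+.1f}")
# Large |effect| marks a crucial parameter to carry into the optimization step.
```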

  13. Back analysis of geomechanical parameters in underground engineering using artificial bee colony.

    PubMed

    Zhu, Changxing; Zhao, Hongbo; Zhao, Ming

    2014-01-01

    Accurate geomechanical parameters are critical in tunnel excavation, design, and support. In this paper, a displacement back analysis based on the artificial bee colony (ABC) algorithm is proposed to identify geomechanical parameters from monitored displacements. ABC was used as a global optimization algorithm to search for the unknown geomechanical parameters when the problem has an analytical solution. For problems without an analytical solution, optimal back analysis is time-consuming, so a least squares support vector machine (LSSVM) was used to build the relationship between the unknown geomechanical parameters and the displacements and to improve the efficiency of the back analysis. The proposed method was applied to a tunnel with an analytical solution and to a tunnel without one; the results show that the proposed method is feasible.

  14. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives the advances in our performance-energy tradeoff analysis framework which enables our data analysis kernels algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  15. Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation

    NASA Astrophysics Data System (ADS)

    Quiroz, Gregory

    Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.

  16. Analytical Tools to Improve Optimization Procedures for Lateral Flow Assays

    PubMed Central

    Hsieh, Helen V.; Dantzler, Jeffrey L.; Weigl, Bernhard H.

    2017-01-01

    Immunochromatographic or lateral flow assays (LFAs) are inexpensive, easy to use, point-of-care medical diagnostic tests that are found in arenas ranging from a doctor’s office in Manhattan to a rural medical clinic in low resource settings. The simplicity in the LFA itself belies the complex task of optimization required to make the test sensitive, rapid and easy to use. Currently, the manufacturers develop LFAs by empirical optimization of material components (e.g., analytical membranes, conjugate pads and sample pads), biological reagents (e.g., antibodies, blocking reagents and buffers) and the design of delivery geometry. In this paper, we will review conventional optimization and then focus on the latter and outline analytical tools, such as dynamic light scattering and optical biosensors, as well as methods, such as microfluidic flow design and mechanistic models. We are applying these tools to find non-obvious optima of lateral flow assays for improved sensitivity, specificity and manufacturing robustness. PMID:28555034

  17. Implementation and application of moving average as continuous analytical quality control instrument demonstrated for 24 routine chemistry assays.

    PubMed

    Rossum, Huub H van; Kemperman, Hans

    2017-07-26

    General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed due to the lack of a simple method that allows optimization of MAs. A new method was applied to optimize the MA for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias detection simulation procedure. Optimization was graphically supported by bias detection curves. Next, all optimal MA procedures that contributed to the quality assurance were run for 100 consecutive days, and MA alarms generated during working hours were investigated. Optimized MA procedures were applied for 24 chemistry assays. During this evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours. Of these, 41 were further investigated and were caused by ion selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (significant difference with the other analyzer) (10), non-human materials analyzed (2), extreme result(s) of a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, a manageable number of MA alarms was generated, and the alarms that required follow-up proved valuable. For the management of MA alarms, several features and requirements in the MA management software will simplify the use of MA procedures.
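
    A minimal sketch of running an optimized MA procedure as continuous QC is given below: compute a moving average of consecutive patient results and raise an alarm when it leaves the control limits. The window size, limits and simulated sodium results are illustrative placeholders for the settings that a bias-detection simulation would provide.

```python
import numpy as np

def moving_average_qc(results, window=20, lower=138.0, upper=142.0):
    """Yield (index, MA value, alarm flag) once the window is filled.

    `window`, `lower` and `upper` stand in for the optimized MA settings
    (e.g., derived from a bias-detection simulation); values are illustrative,
    here for a sodium assay in mmol/L.
    """
    buf = []
    for i, x in enumerate(results):
        buf.append(x)
        if len(buf) > window:
            buf.pop(0)
        if len(buf) == window:
            ma = sum(buf) / window
            yield i, ma, not (lower <= ma <= upper)

# Synthetic patient results with a small assay bias introduced halfway through.
rng = np.random.default_rng(11)
vals = np.concatenate([rng.normal(140.0, 3.0, 300),
                       rng.normal(142.5, 3.0, 300)])   # +2.5 mmol/L bias

for i, ma, alarm in moving_average_qc(vals):
    if alarm:
        print(f"MA alarm at result {i}: MA = {ma:.2f}")
        break
```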

  18. Torque-based optimal acceleration control for electric vehicle

    NASA Astrophysics Data System (ADS)

    Lu, Dongbin; Ouyang, Minggao

    2014-03-01

    Existing research on acceleration control mainly focuses on optimization of the velocity trajectory with respect to a criterion that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming. Acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field-oriented control (FOC) method, and the electric drive system of the EV (including the PMSM, the inverter and the battery) is modeled in preference to a detailed consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP), which can solve the optimal acceleration problem with a precise time constraint but consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control obtained by the analytical algorithm and that obtained by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.

  19. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

    Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2 p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p . It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
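
    For small instances, the level-1 parameter setting discussed above can be explored by brute-force statevector simulation rather than the paper's analytical expressions. The sketch below evaluates the level-1 QAOA expectation for MaxCut on a 4-vertex ring and grid-searches the two parameters; it is a numerical illustration, not the authors' derivation.

```python
import itertools
import numpy as np

# Small MaxCut instance: the 4-vertex "ring of disagrees" (cycle graph C4).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Cut value for every computational basis state (diagonal cost "Hamiltonian").
cost = np.array([sum(1 for (i, j) in edges if (z >> i) & 1 != (z >> j) & 1)
                 for z in range(2 ** n)], dtype=float)

def qaoa_level1_expectation(gamma, beta):
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)   # |+...+>
    psi = psi * np.exp(-1j * gamma * cost)                      # phase separator
    # Mixing unitary: exp(-i beta X) applied to each qubit in turn.
    c, s = np.cos(beta), -1j * np.sin(beta)
    for q in range(n):
        psi = psi.reshape(2 ** (n - q - 1), 2, 2 ** q)          # axis 1 = qubit q
        psi = np.stack([c * psi[:, 0, :] + s * psi[:, 1, :],
                        s * psi[:, 0, :] + c * psi[:, 1, :]], axis=1)
        psi = psi.reshape(-1)
    return float(np.real(np.conj(psi) @ (cost * psi)))

# Classical outer loop: coarse grid search over the two level-1 parameters.
grid = np.linspace(0.0, np.pi, 60)
best = max(((g, b, qaoa_level1_expectation(g, b))
            for g, b in itertools.product(grid, grid)), key=lambda t: t[2])
print(f"best <C> = {best[2]:.3f} at gamma = {best[0]:.3f}, beta = {best[1]:.3f}")
```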

  20. Exact solution for an optimal impermeable parachute problem

    NASA Astrophysics Data System (ADS)

    Lupu, Mircea; Scheiber, Ernest

    2002-10-01

    In this paper, direct and inverse boundary problems are solved and analytical solutions are obtained for optimization problems involving some nonlinear integral operators. The plane potential flow of an inviscid, incompressible and unbounded fluid jet, which encounters a symmetrical, curvilinear obstacle (the deflector of maximal drag), is modeled. Singular integral equations are derived for the direct and inverse problems, and the motion in the auxiliary canonical half-plane is obtained. Next, the optimization problem is solved in an analytical manner. The design of the optimal airfoil is performed and, finally, numerical computations concerning the drag coefficient and other geometrical and aerodynamical parameters are carried out. This model corresponds to the Helmholtz impermeable parachute problem.

  1. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure / metric-based interactive framework for identification of a small (typically fewer than 10), meaningful and diverse subset of calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
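
    Stage 1 of such a strategy ends with many candidate calibrations scored on conflicting criteria; a basic building block for Stages 2 and 3 is filtering them down to the non-dominated (Pareto) set, as sketched below on synthetic scores. This is a generic illustration, not the GOMORS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for Stage 1 output: 200 candidate parameter sets, each
# scored on two conflicting calibration criteria to be minimized
# (e.g., error on peak flows vs. error on low flows).
scores = rng.random((200, 2))

def non_dominated(points):
    """Boolean mask of Pareto-optimal rows, assuming all criteria are minimized."""
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        # p is dominated if another point is <= p in every criterion and < in one.
        dominates_p = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        keep[i] = not dominates_p.any()
    return keep

mask = non_dominated(scores)
print(f"{mask.sum()} non-dominated calibration alternatives out of {len(scores)}")
```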

  2. Analytical approaches to optimizing system "Semiconductor converter-electric drive complex"

    NASA Astrophysics Data System (ADS)

    Kormilicin, N. V.; Zhuravlev, A. M.; Khayatov, E. S.

    2018-03-01

    In the electric drives of the machine-building industry, the problem of optimizing the drive in terms of mass-size indicators is acute. The article offers analytical methods that ensure the minimization of the mass of a multiphase semiconductor converter. In multiphase electric drives, the phase-current waveform that makes the best possible use of the active materials of the "semiconductor converter-electric drive complex" differs from a sinusoid. It is shown that, under certain restrictions on the phase current form, it is possible to obtain an analytical solution. In particular, if one assumes the shape of the phase current to be rectangular, the optimal shape of the control actions will depend on the width of the interpolar gap. In the general case, the proposed algorithm can be used to solve the problem under consideration by numerical methods.

  3. An Analytical Approach to Salary Evaluation for Educational Personnel

    ERIC Educational Resources Information Center

    Bruno, James Edward

    1969-01-01

    "In this study a linear programming model for determining an 'optimal' salary schedule was derived then applied to an educational salary structure. The validity of the model and the effectiveness of the approach were established. (Author)

  4. Bayesian estimation of the discrete coefficient of determination.

    PubMed

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.

  5. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization and computational requirements which change as a function of time. When such a problem must be solved on a message-passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
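
    The W(n) decision statistic described above can be illustrated with a small simulation: accumulate per-step degradation after each remap, track W(n) = (accumulated degradation + remapping cost)/n, and remap once W(n) passes its minimum. The degradation model and cost constant are synthetic assumptions, not the paper's analytic models.

```python
import numpy as np

rng = np.random.default_rng(5)
C = 50.0                 # remapping delay cost (synthetic units)

def degradation(steps_since_remap):
    # Synthetic stochastically increasing per-step degradation after a remap.
    return 0.5 * steps_since_remap + rng.normal(0.0, 0.5)

# Policy from the abstract: track W(n) = (sum of degradation so far + C) / n
# and remap when W(n) stops decreasing (i.e., its estimated minimum is reached).
def run(total_steps=10_000):
    total_cost, n, acc, prev_w = 0.0, 0, 0.0, np.inf
    for _ in range(total_steps):
        n += 1
        acc += max(degradation(n), 0.0)
        w = (acc + C) / n
        if w > prev_w:               # W(n) has passed its minimum -> remap now
            total_cost += acc + C
            n, acc, prev_w = 0, 0.0, np.inf
        else:
            prev_w = w
    return total_cost / total_steps   # approximate degradation + remap cost per step

print("average cost per step under the W(n) policy:", round(run(), 3))
```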

  6. Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance

    NASA Technical Reports Server (NTRS)

    Markopoulos, Nikos; Calise, Anthony J.

    1995-01-01

    A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to a/c unconstrained energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed 2-time-scale behavior of a/c during energy climbs is identified and explained. With regard to the constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs for such problems that result from the traditional formulations, a complete reformulation is proposed for these problems which, in contrast to the old formulation, will presumably lead to practically useful controllers that can track an inequality constraint boundary asymptotically, even in the presence of 2-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of 2-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises under the assumption of tangential thrust from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. The thrust program can be used not only with power-limited propulsion systems, but also with any propulsion system capable of generating continuous thrust of controllable magnitude, and, for propulsion types and classes of transfers for which it is sufficiently optimal, the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations that requires a minimum amount of computation and is thus uniquely suitable for real-time feedback guidance implementations.

  7. Engineering report. Part 2: NASA wheel and brake material tradeoff study for space shuttle type environmental requirements

    NASA Technical Reports Server (NTRS)

    Bok, L. D.

    1973-01-01

    The study included material selection and trade-off for the structural components of the wheel and brake optimizing weight vs cost and feasibility for the space shuttle type application. Analytical methods were used to determine section thickness for various materials, and a table was constructed showing weight vs. cost trade-off. The wheel and brake were further optimized by considering design philosophies that deviate from standard aircraft specifications, and designs that best utilize the materials being considered.

  8. Analytic model for academic research productivity having factors, interactions and implications

    PubMed Central

    2011-01-01

    Financial support is dear in academia and will tighten further. How can the research mission be accomplished within new restraints? A model is presented for evaluating source components of academic research productivity. It comprises six factors: funding; investigator quality; efficiency of the research institution; the research mix of novelty, incremental advancement, and confirmatory studies; analytic accuracy; and passion. Their interactions produce output and patterned influences between factors. Strategies for optimizing output are enabled. PMID:22130145

  9. Analytical solution of a stochastic model of risk spreading with global coupling

    NASA Astrophysics Data System (ADS)

    Morita, Satoru; Yoshimura, Jin

    2013-11-01

    We study a stochastic matrix model to understand the mechanics of risk spreading (or bet hedging) by dispersion. Up to now, this model has been mostly dealt with numerically, except for the well-mixed case. Here, we present an analytical result that shows that optimal dispersion leads to Zipf's law. Moreover, we found that the arithmetic ensemble average of the total growth rate converges to the geometric one, because the sample size is finite.

  10. The Analytic Methods of Operations Research

    DTIC Science & Technology

    1977-01-01

    stock market behavior (Fama, 1970), but few other applications ... 12. QUEUEING THEORY: The study of congestion in service ... Behavior," by J. von Neumann and O. Morgenstern, and an esoteric paperback by Charnes, Cooper, and Henderson on the optimal mixing of peanuts and ... 2nd-order conditions, then X is also globally optimal. This enables one to use local exploration to lead to the global

  11. Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model

    NASA Astrophysics Data System (ADS)

    Wiezel, Oren; Or, Yizhar

    2016-11-01

    Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions as well as next-order corrections for both net displacement and energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in direction of motion, as well as obtaining an estimate for the optimal stroke amplitude. We also find the optimal swimmer's geometry for maximizing either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". Numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.

  12. Layer-switching cost and optimality in information spreading on multiplex networks

    PubMed Central

    Min, Byungjoon; Gwak, Sang-Hwan; Lee, Nanoom; Goh, K. -I.

    2016-01-01

    We study a model of information spreading on multiplex networks, in which agents interact through multiple interaction channels (layers), say online vs. offline communication layers, subject to a layer-switching cost for transmissions across different interaction layers. The model is characterized by a layer-wise, path-dependent transmissibility over a contact that is determined dynamically by both the incoming and outgoing transmission layers. We formulate an analytical framework to deal with such path-dependent transmissibility and demonstrate the nontrivial interplay between the multiplexity and spreading dynamics, including optimality. It is shown that the epidemic threshold and prevalence respond to the layer-switching cost non-monotonically and that the optimal conditions can change in abrupt non-analytic ways, depending also on the densities of network layers and the type of seed infections. Our results elucidate the essential role of multiplexity, whose explicit consideration is crucial for realistic modeling and prediction of spreading phenomena on multiplex social networks in an era of ever-diversifying social interaction layers. PMID:26887527

  13. Determination of proline in honey: comparison between official methods, optimization and validation of the analytical methodology.

    PubMed

    Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe

    2014-05-01

    The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are pointless. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    NASA Technical Reports Server (NTRS)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
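
    The staging trade described above can be illustrated with the ideal rocket equation. The sketch below is a minimal stand-in for the paper's parametric model: the delta-v budget, specific impulses, structural fractions, and payload mass are hypothetical placeholders, and all losses are lumped into the delta-v figure.

      # Minimal two-stage staging sketch (not the paper's MAV model): sweep the
      # split of a total delta-v budget between the stages and report the split
      # that minimizes lift-off mass via the ideal rocket equation.
      # All numbers (delta-v, Isp, structural fractions, payload) are hypothetical.
      import numpy as np

      G0 = 9.80665          # m/s^2
      DV_TOTAL = 4300.0     # m/s, illustrative ascent delta-v including losses
      ISP = (285.0, 292.0)  # s, stage 1 and stage 2
      EPS = (0.12, 0.15)    # structural mass fractions of each stage
      M_PAYLOAD = 25.0      # kg

      def stage_mass_ratio(dv, isp, eps):
          """Return (stage gross mass)/(mass above the stage), or None if infeasible."""
          r = np.exp(dv / (G0 * isp))
          return None if r * eps >= 1.0 else (1.0 - eps) * r / (1.0 - eps * r)

      best = None
      for f in np.linspace(0.05, 0.95, 181):      # fraction of delta-v on stage 1
          dvs = (f * DV_TOTAL, (1.0 - f) * DV_TOTAL)
          m, ok = M_PAYLOAD, True
          # Build the vehicle from the top down: stage 2 first, then stage 1.
          for dv, isp, eps in zip(reversed(dvs), reversed(ISP), reversed(EPS)):
              ratio = stage_mass_ratio(dv, isp, eps)
              if ratio is None:
                  ok = False
                  break
              m *= ratio
          if ok and (best is None or m < best[1]):
              best = (f, m)

      print(f"best stage-1 delta-v fraction {best[0]:.2f}, lift-off mass {best[1]:.1f} kg")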

  15. Replica Analysis for Portfolio Optimization with Single-Factor Model

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
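
    The qualitative conclusion, that correlation raises the minimum attainable investment risk, can be reproduced with the ordinary closed-form minimum-variance portfolio rather than with replica analysis. In the sketch below, the single-factor covariance, the factor loadings, and the idiosyncratic variances are assumed for illustration only.

      # Illustration (not the paper's replica analysis): minimum risk of a fully
      # invested portfolio under a single-factor covariance versus independent
      # returns, using the closed-form weights w = S^-1 1 / (1' S^-1 1).
      # Loadings, factor variance, and idiosyncratic variances are hypothetical.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      beta = rng.normal(1.0, 0.3, n)         # loadings on the common factor
      sigma_f2 = 0.04                        # factor variance
      d = rng.uniform(0.01, 0.05, n)         # idiosyncratic variances

      cov_factor = sigma_f2 * np.outer(beta, beta) + np.diag(d)
      cov_indep = np.diag(d)                 # correlation switched off

      def min_variance(cov):
          ones = np.ones(cov.shape[0])
          w = np.linalg.solve(cov, ones)
          w /= ones @ w                      # enforce sum(w) = 1
          return float(w @ cov @ w)

      print("min risk, independent returns :", min_variance(cov_indep))
      print("min risk, single-factor model :", min_variance(cov_factor))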

  16. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the Storm Water Management Model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains the preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability for choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimal framework is feasible, and the optimization is fast based on the preliminary scheme. The optimized scheme is better than the preliminary scheme for reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
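
    The AHP step of the analytical module can be sketched in a few lines: derive criterion weights from a pairwise comparison judgment and rank candidate nodes by their weighted, normalized indicator scores. The comparison judgment and node data below are hypothetical and are not taken from the CZ city case study.

      # Sketch of an AHP ranking step (all numbers hypothetical): weight flood depth
      # vs. flood duration from a pairwise comparison matrix, then rank flooding
      # nodes by their weighted, normalized scores.
      import numpy as np

      # Pairwise comparison: "depth is judged 3x as important as duration".
      A = np.array([[1.0, 3.0],
                    [1.0 / 3.0, 1.0]])
      eigvals, eigvecs = np.linalg.eig(A)
      w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
      w /= w.sum()                                   # criterion weights (principal eigenvector)

      # Hypothetical per-node indicators: flood depth (m) and flood duration (h).
      nodes = np.array([[0.8, 2.0],
                        [0.3, 4.5],
                        [1.2, 1.0],
                        [0.5, 3.0]])
      scores = (nodes / nodes.sum(axis=0)) @ w       # normalize each criterion, then weight
      ranking = np.argsort(scores)[::-1]
      print("criterion weights:", np.round(w, 3))
      print("node ranking (best storage candidates first):", ranking)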

  17. Repeated applications of a transdermal patch: analytical solution and optimal control of the delivery rate.

    PubMed

    Simon, L

    2007-10-01

    The integral transform technique was implemented to solve a mathematical model developed for percutaneous drug absorption. The model included repeated application and removal of a patch from the skin. Fick's second law of diffusion was used to study the transport of a medicinal agent through the vehicle and subsequent penetration into the stratum corneum. Eigenmodes and eigenvalues were computed and introduced into an inversion formula to estimate the delivery rate and the amount of drug in the vehicle and the skin. A dynamic programming algorithm calculated the optimal doses necessary to achieve a desired transdermal flux. The analytical method predicted profiles that were in close agreement with published numerical solutions and provided an automated strategy to perform therapeutic drug monitoring and control.

  18. Overcoming Barriers to Educational Analytics: How Systems Thinking and Pragmatism Can Help

    ERIC Educational Resources Information Center

    Macfadyen, Leah P.

    2017-01-01

    Learning technologies are now commonplace in education, and generate large volumes of educational data. Scholars have argued that analytics can and should be employed to optimize learning and learning environments. This article explores what is really meant by "analytics", describes the current best-known examples of institutional…

  19. Learning Analytics: Potential for Enhancing School Library Programs

    ERIC Educational Resources Information Center

    Boulden, Danielle Cadieux

    2015-01-01

    Learning analytics has been defined as the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. The potential use of data and learning analytics in educational contexts has caught the attention of educators and…

  20. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics

    PubMed Central

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-01-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379

  1. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.

    PubMed

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-09-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale : evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility : identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.
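
    The deviation-based utility can be pictured as a distance between the normalized aggregate distribution of a view computed on the analyst's subset and the same view computed on the reference table. The sketch below is a simplified reading of that idea, not SeeDB's implementation: the column names and data are hypothetical, the L2 distance is only one possible choice, and the pruning and sharing optimizations are omitted.

      # Minimal sketch of a deviation-based view utility in the spirit of SeeDB
      # (not its implementation). A candidate view is (group-by attribute, aggregate);
      # its utility is the distance between the normalized aggregate distribution on
      # the query subset and on the reference table. Data and columns are hypothetical.
      import numpy as np
      import pandas as pd

      def view_utility(query_df, reference_df, group_col, measure_col, agg="sum"):
          q = query_df.groupby(group_col)[measure_col].agg(agg)
          r = reference_df.groupby(group_col)[measure_col].agg(agg)
          q = q.reindex(r.index, fill_value=0)
          p, s = q / q.sum(), r / r.sum()              # normalize to distributions
          return float(np.sqrt(((p - s) ** 2).sum()))  # L2 deviation (one possible choice)

      reference = pd.DataFrame({
          "region": ["N", "S", "E", "W"] * 50,
          "sales": np.random.default_rng(2).gamma(2.0, 100.0, 200),
      })
      query = reference[reference["sales"] > 150]      # the analyst's subset of interest

      print("utility of the (region, sum(sales)) view:",
            round(view_utility(query, reference, "region", "sales"), 3))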

  2. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.

  3. Swarm intelligence metaheuristics for enhanced data analysis and optimization.

    PubMed

    Hanrahan, Grady

    2011-09-21

    The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.
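
    As a concrete instance of an SI metaheuristic, the sketch below implements a canonical particle swarm optimizer on a simple test function. It is a generic textbook variant with common default constants, not a chemometrics application drawn from the review.

      # Minimal canonical particle swarm optimization (PSO) minimizing the sphere
      # test function. Inertia and acceleration constants are common defaults,
      # not values recommended by the review.
      import numpy as np

      def pso(f, dim=5, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))          # positions
          v = np.zeros_like(x)                                 # velocities
          pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
          g = pbest[np.argmin(pbest_val)].copy()               # global best
          w, c1, c2 = 0.7, 1.5, 1.5
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              val = np.apply_along_axis(f, 1, x)
              improved = val < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], val[improved]
              g = pbest[np.argmin(pbest_val)].copy()
          return g, f(g)

      best_x, best_val = pso(lambda z: float(np.sum(z ** 2)))
      print("best value found:", best_val)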

  4. Evaluation of performance of three different hybrid mesoporous solids based on silica for preconcentration purposes in analytical chemistry: From the study of sorption features to the determination of elements of group IB.

    PubMed

    Kim, Manuela Leticia; Tudino, Mabel Beatríz

    2010-08-15

    Several studies involving the physicochemical interaction of three silica-based hybrid mesoporous materials with metal ions of group IB have been performed in order to employ them for preconcentration purposes in the determination of traces of Cu(II), Ag(I) and Au(III). The three solids were obtained from mesoporous silica functionalized with 3-aminopropyl (APS), 3-mercaptopropyl (MPS) and N-[2-aminoethyl]-3-aminopropyl (NN) groups, respectively. Adsorption capacities for Au, Cu and Ag were calculated using Langmuir's isotherm model, and the optimal values for the retention of each element on each of the solids were then found. Physicochemical data obtained under thermodynamic equilibrium and under kinetic conditions - imposed by flow-through experiments - allowed the design of simple analytical methodologies where the solids were employed as fillings of microcolumns held in continuous systems coupled on-line to atomic absorption spectrometry. In order to control the interaction between the filling and the analyte at short times (flow-through conditions), and thus its effect on the analytical signal and the presence of interferences, the initial adsorption velocities were calculated using the pseudo-second-order model. All these experiments allowed the solids to be compared in terms of their analytical behaviour when determining the three elements. Under optimized conditions, mainly given by the features of the filling, the analytical methodologies developed in this work showed excellent performances with limits of detection of 0.14, 0.02 and 0.025 microg L(-1) and RSD % values of 3.4, 2.7 and 3.1 for Au, Cu and Ag, respectively. A full discussion of the main findings on the metal ion/filling interactions will be provided. The analytical results for the determination of the three metals will also be presented. Copyright 2010 Elsevier B.V. All rights reserved.
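
    The adsorption capacities mentioned above come from fitting Langmuir's isotherm, q = q_max*K*C/(1 + K*C), to equilibrium data. The sketch below shows such a fit with synthetic placeholder values rather than the study's measurements.

      # Sketch of a Langmuir isotherm fit, q = q_max*K*C / (1 + K*C), to equilibrium
      # sorption data. The concentration/uptake values are synthetic placeholders,
      # not measurements from the study.
      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(c, q_max, k):
          return q_max * k * c / (1.0 + k * c)

      c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])      # mg L-1 (synthetic)
      q_eq = np.array([8.0, 14.0, 22.0, 33.0, 39.0, 43.0])   # mg g-1 (synthetic)

      (q_max, k), _ = curve_fit(langmuir, c_eq, q_eq, p0=(40.0, 0.5))
      print(f"fitted q_max = {q_max:.1f} mg g-1, K = {k:.2f} L mg-1")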

  5. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1991-01-01

    Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.

  6. Structural Design Optimization of Doubly-Fed Induction Generators Using GeneratorSE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Fingersh, Lee J; Dykes, Katherine L

    2017-11-13

    A wind turbine with a larger rotor swept area can generate more electricity; however, this increases costs disproportionately for manufacturing, transportation, and installation. This poster presents analytical models for optimizing doubly-fed induction generators (DFIGs), with the objective of reducing the costs and mass of wind turbine drivetrains. The structural design for the induction machine includes models for the casing, stator, rotor, and high-speed shaft developed within the DFIG module in the National Renewable Energy Laboratory's wind turbine sizing tool, GeneratorSE. The mechanical integrity of the machine is verified by examining stresses, structural deflections, and modal properties. The optimization results are then validated using finite element analysis (FEA). The results suggest that our analytical model correlates with the FEA in some areas, such as radial deflection, differing by less than 20 percent. But the analytical model requires further development for axial deflections, torsional deflections, and stress calculations.

  7. Hydraulic containment: analytical and semi-analytical models for capture zone curve delineation

    NASA Astrophysics Data System (ADS)

    Christ, John A.; Goltz, Mark N.

    2002-05-01

    We present an efficient semi-analytical algorithm that uses complex potential theory and superposition to delineate the capture zone curves of extraction wells. This algorithm is more flexible than previously published techniques and allows the user to determine the capture zone for a number of arbitrarily positioned extraction wells pumping at different rates. The algorithm is applied to determine the capture zones and optimal well spacing of two wells pumping at different flow rates and positioned at various orientations to the direction of regional groundwater flow. The algorithm is also applied to determine capture zones for non-colinear three-well configurations as well as to determine optimal well spacing for up to six wells pumping at the same rate. We show that the optimal well spacing is found by minimizing the difference in the stream function evaluated at the stagnation points.
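
    The classical single-well-in-uniform-flow case, a special case of the multi-well treatment above, already exhibits the key quantities: the stagnation point and the far-upstream capture width. The sketch below evaluates the standard formulas with illustrative parameter values; the pumping rate, Darcy flux, and aquifer thickness are assumptions.

      # Sketch of the classical single extraction well in uniform regional flow
      # (a special case of the paper's multi-well treatment). Q is the pumping rate,
      # U the regional Darcy flux (taken in the +x direction), b the aquifer
      # thickness; all values are illustrative.
      import numpy as np

      Q = 500.0      # m^3/day
      U = 0.05       # m/day
      b = 20.0       # m

      x_stag = Q / (2.0 * np.pi * b * U)     # stagnation point, downgradient of the well
      width_far = Q / (b * U)                # total capture width far upgradient

      # Dividing streamline (capture-zone curve): x = y / tan(2*pi*b*U*y / Q),
      # opening toward -x (the upgradient side) in this sign convention.
      y = np.linspace(-0.499 * width_far, 0.499 * width_far, 401)
      y = y[np.abs(y) > 1e-9]                # avoid the axis, where tan() -> 0
      x = y / np.tan(2.0 * np.pi * b * U * y / Q)

      print(f"stagnation point {x_stag:.1f} m downgradient, far-upstream width {width_far:.1f} m")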

  8. Structural Model Tuning Capability in an Object-Oriented Multidisciplinary Design, Analysis, and Optimization Tool

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2008-01-01

    Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data as well as the mass matrix being orthogonalized.

  9. Structural Model Tuning Capability in an Object-Oriented Multidisciplinary Design, Analysis, and Optimization Tool

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2008-01-01

    Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization [MDAO] tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data as well as the mass matrix being orthogonalized.

  10. A Novel Platform for Evaluating the Environmental Impacts on Bacterial Cellulose Production.

    PubMed

    Basu, Anindya; Vadanan, Sundaravadanam Vishnu; Lim, Sierin

    2018-04-10

    Bacterial cellulose (BC) is a biocompatible material with versatile applications. However, its large-scale production is challenged by the limited biological knowledge of the bacteria. The advent of synthetic biology has led the way to the development of BC-producing microbes as a novel chassis. Hence, investigation of optimal growth conditions for BC production and understanding of the fundamental biological processes are imperative. In this study, we report a novel analytical platform that can be used for studying the biology and optimizing growth conditions of cellulose-producing bacteria. The platform is based on the surface growth pattern of the organism and allows us to confirm that cellulose fibrils produced by the bacteria play a pivotal role in their chemotaxis. The platform efficiently determines the impacts of different growth conditions on cellulose production and is translatable to static culture conditions. The analytical platform provides a means for fundamental biological studies of bacterial chemotaxis as well as a systematic approach to the rational design and development of scalable bioprocessing strategies for industrial production of bacterial cellulose.

  11. Next Generation Offline Approaches to Trace Gas-Phase Organic Compound Speciation: Sample Collection and Analysis

    NASA Astrophysics Data System (ADS)

    Sheu, R.; Marcotte, A.; Khare, P.; Ditto, J.; Charan, S.; Gentner, D. R.

    2017-12-01

    Intermediate-volatility and semi-volatile organic compounds (I/SVOCs) are major precursors to secondary organic aerosol, and contribute to tropospheric ozone formation. Their wide volatility range, chemical complexity, behavior in analytical systems, and trace concentrations present numerous hurdles to characterization. We present an integrated sampling-to-analysis system for the collection and offline analysis of trace gas-phase organic compounds with the goal of preserving and recovering analytes throughout sample collection, transport, storage, and thermal desorption for accurate analysis. Custom multi-bed adsorbent tubes are used to collect samples for offline analysis by advanced analytical detectors. The analytical instrumentation comprises an automated thermal desorption system that introduces analytes from the adsorbent tubes into a gas chromatograph, which is coupled with an electron ionization mass spectrometer (GC-EIMS) and other detectors. In order to optimize the collection and recovery for a wide range of analyte volatility and functionalization, we evaluated a variety of commercially-available materials, including Res-Sil beads, quartz wool, glass beads, Tenax TA, and silica gel. Key properties for optimization include inertness, versatile chemical capture, minimal affinity for water, and minimal artifacts or degradation byproducts; these properties were assessed with a diverse mix of traditionally-measured and functionalized analytes. Along with a focus on material selection, we provide recommendations spanning the entire sampling-and-analysis process to improve the accuracy of future comprehensive I/SVOC measurements, including oxygenated and other functionalized I/SVOCs. We demonstrate the performance of our system by providing results on speciated VOCs-SVOCs from indoor, outdoor, and chamber studies that establish the utility of our protocols and pave the way for precise laboratory characterization via a mix of detection methods.

  12. Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Köcher, S. S.; Institute of Energy and Climate Research; Heydenreich, T.

    Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.

  13. Analytically optimal parameters of dynamic vibration absorber with negative stiffness

    NASA Astrophysics Data System (ADS)

    Shen, Yongjun; Peng, Haibo; Li, Xianghong; Yang, Shaopu

    2017-02-01

    In this paper the optimal parameters of a dynamic vibration absorber (DVA) with negative stiffness are studied analytically. The analytical solution is obtained by the Laplace transform method when the primary system is subjected to harmonic excitation. The research shows that there are still two fixed points, independent of the absorber damping, in the amplitude-frequency curve of the primary system when the system contains negative stiffness. The optimum frequency ratio and optimum damping ratio are then obtained based on fixed-point theory. A new strategy is proposed to obtain the optimum negative stiffness ratio while keeping the system stable. Finally, the control performance of the presented DVA is compared with those of three existing typical DVAs, presented by Den Hartog, Ren and Sims, respectively. The comparison results under harmonic and random excitation show that the DVA presented in this paper not only reduces the peak value of the amplitude-frequency curve of the primary system significantly, but also broadens the efficient frequency range of vibration mitigation.
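
    For reference, the Den Hartog baseline against which the new absorber is compared has a well-known fixed-point optimum for the classical DVA without negative stiffness. The sketch below evaluates those textbook formulas for a few mass ratios; the damping-ratio convention is stated in the comments, and nothing here reproduces the paper's negative-stiffness results.

      # Reference sketch: Den Hartog's classical fixed-point optimum for a DVA
      # without negative stiffness (the comparison baseline mentioned above).
      # Conventions assumed: mu = absorber mass / primary mass, tuning ratio
      # f = absorber natural frequency / primary natural frequency, and the
      # absorber damping ratio defined with the absorber mass and the primary
      # natural frequency (the usual form of the optimum damping result).
      import numpy as np

      def den_hartog_optimum(mu):
          f_opt = 1.0 / (1.0 + mu)
          zeta_opt = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
          peak_amplification = np.sqrt(1.0 + 2.0 / mu)   # |X/X_static| at the fixed points
          return f_opt, zeta_opt, peak_amplification

      for mu in (0.05, 0.10, 0.20):
          f, z, h = den_hartog_optimum(mu)
          print(f"mu={mu:.2f}: f_opt={f:.3f}, zeta_opt={z:.3f}, peak |X/Xst|={h:.2f}")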

  14. Optimization of dual energy contrast enhanced breast tomosynthesis for improved mammographic lesion detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Saunders, R.; Samei, E.; Badea, C.; Yuan, H.; Ghaghada, K.; Qi, Y.; Hedlund, L. W.; Mukundan, S.

    2008-03-01

    Dual-energy contrast-enhanced breast tomosynthesis has been proposed as a technique to improve the detection of early-stage cancer in young, high-risk women. This study focused on optimizing this technique using computer simulations. The computer simulation used analytical calculations to optimize the signal difference to noise ratio (SdNR) of resulting images from such a technique at constant dose. The optimization included the optimal radiographic technique, optimal distribution of dose between the two single-energy projection images, and the optimal weighting factor for the dual energy subtraction. Importantly, the SdNR included both anatomical and quantum noise sources, as dual energy imaging reduces anatomical noise at the expense of increases in quantum noise. Assuming a tungsten anode, the maximum SdNR at constant dose was achieved for a high energy beam at 49 kVp with 92.5 μm copper filtration and a low energy beam at 49 kVp with 95 μm tin filtration. These analytical calculations were followed by Monte Carlo simulations that included the effects of scattered radiation and detector properties. Finally, the feasibility of this technique was tested in a small animal imaging experiment using a novel iodinated liposomal contrast agent. The results illustrated the utility of dual energy imaging and determined the optimal acquisition parameters for this technique. This work was supported in part by grants from the Komen Foundation (PDF55806), the Cancer Research and Prevention Foundation, and the NIH (NCI R21 CA124584-01). CIVM is a NCRR/NCI National Resource under P41-05959/U24-CA092656.

  15. Diffractive variable beam splitter: optimal design.

    PubMed

    Borghi, R; Cincotti, G; Santarsiero, M

    2000-01-01

    The analytical expression of the phase profile of the optimum diffractive beam splitter with an arbitrary power ratio between the two output beams is derived. The phase function is obtained by an analytical optimization procedure such that the diffraction efficiency of the resulting optical element is the highest for an actual device. Comparisons are presented with the efficiency of a diffractive beam splitter specified by a sawtooth phase function and with the pertinent theoretical upper bound for this type of element.

  16. Heat Transfer Analysis of Thermal Protection Structures for Hypersonic Vehicles

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Wang, Zhijin; Hou, Tianjiao

    2017-11-01

    This research aims to develop an analytical approach to study the heat transfer problem of thermal protection systems (TPS) for hypersonic vehicles. Laplace transform and integral method are used to describe the temperature distribution through the TPS subject to aerodynamic heating during flight. Time-dependent incident heat flux is also taken into account. Two different cases with heat flux and radiation boundary conditions are studied and discussed. The results are compared with those obtained by finite element analyses and show a good agreement. Although temperature profiles of such problems can be readily accessed via numerical simulations, analytical solutions give a greater insight into the physical essence of the heat transfer problem. Furthermore, with the analytical approach, rapid thermal analyses and even thermal optimization can be achieved during the preliminary TPS design.
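
    For orientation, one standard closed-form case that such transform methods recover (the paper's solutions are more general, covering time-dependent flux and radiation boundaries) is the semi-infinite solid with constant surface heat flux q_0'', conductivity k, and thermal diffusivity alpha:

      T(x,t) - T_i = \frac{2 q_0''}{k}\sqrt{\frac{\alpha t}{\pi}}\,\exp\!\left(-\frac{x^2}{4\alpha t}\right) - \frac{q_0'' x}{k}\,\operatorname{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}}\right)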

  17. Modelling flows in a supply chain with analytical models: Case of a chemical industry

    NASA Astrophysics Data System (ADS)

    Benhida, Khalid; Azougagh, Yassine; Elfezazi, Said

    2016-02-01

    This study addresses the modelling of logistics flows in a supply chain composed of production sites and a logistics platform. The contribution of this research is to develop an analytical model (an integrated linear programming model), based on a case study of a real company operating in the phosphate field, that considers the various constraints in this supply chain in order to solve planning problems and support better decision-making. The objective of this model is to determine the optimal quantities of the different products to route to and from the various entities in the supply chain studied.
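
    A toy version of such an integrated linear program is easy to state: route product from sites through the platform to customers at minimum cost, subject to capacities, demands, and flow conservation. The sketch below is illustrative only; its costs, capacities, and demands are hypothetical and it is far smaller than the company's model.

      # Illustrative linear program in the spirit of the abstract (not the company's
      # actual model): route one product from two production sites through a
      # logistics platform to two customers at minimum transport cost.
      from scipy.optimize import linprog

      # Decision variables: x = [s1->plat, s2->plat, plat->c1, plat->c2].
      cost = [4.0, 6.0, 3.0, 5.0]                 # unit transport costs (hypothetical)
      # Flow conservation at the platform: inflow - outflow = 0.
      A_eq = [[1, 1, -1, -1]]
      b_eq = [0]
      # Site capacities and customer demands (demands written as lower bounds).
      A_ub = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
      b_ub = [120, 80, -70, -90]                  # cap s1, cap s2, demand c1, demand c2

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
      print("optimal flows:", res.x, "total cost:", res.fun)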

  18. Optimal design of a thermally stable composite optical bench

    NASA Technical Reports Server (NTRS)

    Gray, C. E., Jr.

    1985-01-01

    The Lidar Atmospheric Sensing Experiment will be performed aboard an ER-2 aircraft; the lidar system used will be mounted on a lightweight, thermally stable graphite/epoxy optical bench whose design is presently subjected to analytical study and experimental validation. Attention is given to analytical methods for the selection of such expected laminate properties as the thermal expansion coefficient, the apparent in-plane moduli, and ultimate strength. For a symmetric laminate in which one of the lamina angles remains variable, an optimal lamina angle is selected to produce a design laminate with a near-zero coefficient of thermal expansion. Finite elements are used to model the structural concept of the design, with a view to the optical bench's thermal structural response as well as the determination of the degree of success in meeting the experiment's alignment tolerances.

  19. Stochastic optimization for modeling physiological time series: application to the heart rate response to exercise

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Stirling, J. R.

    2007-01-01

    Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
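
    A generic ALOPEX-style correlation update can be sketched in a few lines: each parameter step is reinforced or reversed according to how the previous step changed the cost, with added noise for exploration. The sketch below fits a toy two-parameter exponential rise to synthetic heart-rate-like data; it is not the ALOPEX IV variant used in the study, and the data, gains, and bounds are assumptions.

      # Generic ALOPEX-style correlation update (a sketch under stated assumptions,
      # not the ALOPEX IV variant of the study). The toy task fits a two-parameter
      # exponential rise to synthetic data; data, step sizes, and bounds are hypothetical.
      import numpy as np

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 10.0, 100)
      data = 60.0 + 80.0 * (1.0 - np.exp(-t / 2.5)) + rng.normal(0.0, 2.0, t.size)

      def cost(p):
          amp, tau = p
          return float(np.mean((60.0 + amp * (1.0 - np.exp(-t / tau)) - data) ** 2))

      p = np.array([50.0, 1.0])                    # initial guess (amplitude, tau)
      dp = rng.normal(0.0, 0.02, 2)
      c_prev = cost(p)
      best_p, best_c = p.copy(), c_prev
      gamma, sigma, step_cap = 2.0, 0.02, 0.2
      for _ in range(4000):
          p_new = np.maximum(p + dp, [0.0, 0.1])   # keep amplitude and tau positive
          c_new = cost(p_new)
          if c_new < best_c:
              best_p, best_c = p_new.copy(), c_new
          # Correlation rule: repeat moves that lowered the cost, reverse the others.
          dp = np.clip(-gamma * dp * (c_new - c_prev), -step_cap, step_cap)
          dp += rng.normal(0.0, sigma, 2)
          p, c_prev = p_new, c_new

      print("best parameters found (amplitude, tau):", np.round(best_p, 2), "true: (80, 2.5)")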

  20. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. Analytical representations of a PDF are difficult to specify in advance in many real-life applications. When the shapes of the PDFs are known, different optimization schemes can be applied to experimental data in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, which makes the method useful for various applications.
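
    For context, the Monte Carlo core being replaced is typically a direct-method SSA of the following form. The sketch below simulates a toy birth-death process and records the relative frequency of each event type; the rates are hypothetical, and this is not the controlled-radical-polymerization model of the paper.

      # Minimal direct-method stochastic simulation algorithm (SSA) for a toy
      # birth-death process, illustrating the Monte Carlo estimation of event
      # frequencies that the paper's analytical approach replaces. Rates are
      # hypothetical; this is not the CRP model.
      import numpy as np

      rng = np.random.default_rng(4)
      k_birth, k_death = 1.0, 0.1
      x, t, t_end = 10, 0.0, 100.0
      event_counts = {"birth": 0, "death": 0}

      while t < t_end:
          rates = np.array([k_birth, k_death * x])     # propensities of the two events
          total = rates.sum()
          if total == 0.0:
              break
          t += rng.exponential(1.0 / total)            # exponential interevent time
          if rng.random() < rates[0] / total:          # pick the event by its weight
              x += 1
              event_counts["birth"] += 1
          else:
              x -= 1
              event_counts["death"] += 1

      freq = event_counts["birth"] / sum(event_counts.values())
      print("final population:", x, " relative frequency of births:", round(freq, 3))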

  1. Micro-focused ultrasonic solid-liquid extraction (muFUSLE) combined with HPLC and fluorescence detection for PAHs determination in sediments: optimization and linking with the analytical minimalism concept.

    PubMed

    Capelo, J L; Galesio, M M; Felisberto, G M; Vaz, C; Pessoa, J Costa

    2005-06-15

    Analytical minimalism is a concept that deals with the optimization of all stages of an analytical procedure so that it becomes less time, cost, sample, reagent and energy consuming. The guidelines provided in the USEPA extraction method 3550B recommend the use of focused ultrasound (FU), i.e., probe sonication, for the solid-liquid extraction of polycyclic aromatic hydrocarbons (PAHs), but ignore the principle of analytical minimalism. The problems related to dead sonication zones, often present when high volumes are sonicated with a probe, are also not addressed. In this work, we demonstrate that successful extraction and quantification of PAHs from sediments can be done with low sample mass (0.125 g), low reagent volume (4 mL), short sonication time (3 min) and low sonication amplitude (40%). Two variables are taken into account here for total extraction: (i) the design of the extraction vessel and (ii) the solvent used to carry out the extraction. Results showed PAH recoveries (EPA priority list) ranging between 77 and 101%, exceeding 95% for most of the PAHs studied here, as compared with the values obtained after Soxhlet extraction. Taking into account the results reported in this work, we recommend a revision of the EPA guidelines for PAH extraction from solid matrices with focused ultrasound, so that they match the analytical minimalism concept.

  2. Liquid chromatography-mass spectrometry in metabolomics research: mass analyzers in ultra high pressure liquid chromatography coupling.

    PubMed

    Forcisi, Sara; Moritz, Franco; Kanawati, Basem; Tziotis, Dimitrios; Lehmann, Rainer; Schmitt-Kopplin, Philippe

    2013-05-31

    The present review gives an introduction to the concept of metabolomics and provides an overview of the analytical tools applied in non-targeted metabolomics with a focus on liquid chromatography (LC). LC is a powerful analytical tool in the study of complex sample matrices. A further development and configuration employing Ultra-High Pressure Liquid Chromatography (UHPLC) is optimized to provide the largest known liquid chromatographic resolution and peak capacity. Accordingly, UHPLC plays an important role in the separation and subsequent metabolite identification of complex molecular mixtures such as bio-fluids. The most sensitive detectors for these purposes are mass spectrometers. Almost any mass analyzer can be optimized to identify and quantify small pre-defined sets of targets; however, the number of analytes in metabolomics is far greater. Optimized protocols for quantification of large sets of targets may be rendered inapplicable. Results of small target set analyses on different sample matrices are easily comparable with each other. In non-targeted metabolomics there is almost no analytical method that is applicable to all different matrices, due to limitations pertaining to mass analyzers and chromatographic tools. The specifications of the most important interfaces and mass analyzers are discussed. We additionally provide an exemplary application in order to demonstrate the level of complexity that remains intractable to date. The potential of coupling a high-field Fourier Transform Ion Cyclotron Resonance Mass Spectrometer (ICR-FT/MS), the mass analyzer with the largest known mass resolving power, to UHPLC is illustrated with an example of one pre-treated human plasma sample. This experimental example illustrates one way of overcoming the need for faster scanning rates in the coupling with UHPLC. The experiment enabled the extraction of thousands of features (analytical signals). A small subset of this compositional space could be mapped into a mass difference network whose topology shows specificity toward putative metabolite classes and retention time. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Continuous Optimization on Constraint Manifolds

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1988-01-01

    This paper demonstrates continuous optimization on the differentiable manifold formed by continuous constraint functions. The first order tensor geodesic differential equation is solved on the manifold in both numerical and closed analytic form for simple nonlinear programs. Advantages and disadvantages with respect to conventional optimization techniques are discussed.

  4. Optimal time-domain technique for pulse width modulation in power electronics

    NASA Astrophysics Data System (ADS)

    Mayergoyz, I.; Tyagi, S.

    2018-05-01

    Optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimal criteria are discussed and illustrated by numerical examples.

  5. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  6. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  7. Reproducibility studies for experimental epitope detection in macrophages (EDIM).

    PubMed

    Japink, Dennis; Nap, Marius; Sosef, Meindert N; Nelemans, Patty J; Coy, Johannes F; Beets, Geerard; von Meyenfeldt, Maarten F; Leers, Math P G

    2014-05-01

    We have recently described epitope detection in macrophages (EDIM) by flow cytometry. This is a promising tool for the diagnosis and follow-up of malignancies. However, biological and technical validation is warranted before clinical applicability can be explored. The pre-analytic and analytic phases were investigated. Five different aspects were assessed: blood sample stability, intra-individual variability in healthy persons, intra-assay variation, inter-assay variation and assay transferability. The post-analytic phase was already partly standardized and described in an earlier study. The outcomes in the pre-analytic phase showed that samples are stable for 24 h after venipuncture. Biological variation over time was similar to that of serum tumor marker assays; each patient has a baseline value. Intra-assay variation showed good reproducibility, while inter-assay variation showed reproducibility similar to that of established serum tumor marker assays. Furthermore, the assay showed excellent transferability between analyzers. Under optimal analytic conditions the EDIM method is technically stable, reproducible and transferable. Biological variation over time needs further assessment in future work. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. In situ ionic liquid dispersive liquid-liquid microextraction and direct microvial insert thermal desorption for gas chromatographic determination of bisphenol compounds.

    PubMed

    Cacho, Juan Ignacio; Campillo, Natalia; Viñas, Pilar; Hernández-Córdoba, Manuel

    2016-01-01

    A new procedure based on direct insert microvial thermal desorption injection allows the direct analysis of ionic liquid extracts by gas chromatography and mass spectrometry (GC-MS). For this purpose, an in situ ionic liquid dispersive liquid-liquid microextraction (in situ IL DLLME) has been developed for the quantification of bisphenol A (BPA), bisphenol Z (BPZ) and bisphenol F (BPF). Different parameters affecting the extraction efficiency of the microextraction technique and the thermal desorption step were studied. The optimized procedure, determining the analytes as acetyl derivatives, provided detection limits of 26, 18 and 19 ng L(-1) for BPA, BPZ and BPF, respectively. The release of the three analytes from plastic containers was monitored using this newly developed analytical method. Analysis of the migration test solutions for 15 different plastic containers in daily use identified the presence of the analytes at concentrations ranging between 0.07 and 37 μg L(-1) in six of the samples studied, BPA being the most commonly found and at higher concentrations than the other analytes.

  9. Analytical challenges in sports drug testing.

    PubMed

    Thevis, Mario; Krug, Oliver; Geyer, Hans; Walpurgis, Katja; Baume, Norbert; Thomas, Andreas

    2018-03-01

    Analytical chemistry represents a central aspect of doping controls. Routine sports drug testing approaches are primarily designed to address the question whether a prohibited substance is present in a doping control sample and whether prohibited methods (for example, blood transfusion or sample manipulation) have been conducted by an athlete. As some athletes have availed themselves of the substantial breadth of research and development in the pharmaceutical arena, proactive and preventive measures are required such as the early implementation of new drug candidates and corresponding metabolites into routine doping control assays, even though these drug candidates are to date not approved for human use. Beyond this, analytical data are also cornerstones of investigations into atypical or adverse analytical findings, where the overall picture provides ample reason for follow-up studies. Such studies have been of most diverse nature, and tailored approaches have been required to probe hypotheses and scenarios reported by the involved parties concerning the plausibility and consistency of statements and (analytical) facts. In order to outline the variety of challenges that doping control laboratories are facing besides providing optimal detection capabilities and analytical comprehensiveness, selected case vignettes involving the follow-up of unconventional adverse analytical findings, urine sample manipulation, drug/food contamination issues, and unexpected biotransformation reactions are thematized.

  10. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Chen, Robert T. N.

    1996-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.

  11. Optimal sampling for radiotelemetry studies of spotted owl habitat and home range.

    Treesearch

    Andrew B. Carey; Scott P. Horton; Janice A. Reid

    1989-01-01

    Radiotelemetry studies of spotted owl (Strix occidentalis) ranges and habitat-use must be designed efficiently to estimate parameters needed for a sample of individuals sufficient to describe the population. Independent data are required by analytical methods and provide the greatest return of information per effort. We examined time series of...

  12. Optimal control of HIV/AIDS dynamic: Education and treatment

    NASA Astrophysics Data System (ADS)

    Sule, Amiru; Abdullah, Farah Aini

    2014-07-01

    A mathematical model which describes the transmission dynamics of HIV/AIDS is developed. The optimal control representing education and treatment for this model is explored. The existence of an optimal control is established analytically using optimal control theory. Numerical simulations suggest that education and treatment of the infected have a positive impact on HIV/AIDS control.

  13. Rapid and sensitive analysis of 27 underivatized free amino acids, dipeptides, and tripeptides in fruits of Siraitia grosvenorii Swingle using HILIC-UHPLC-QTRAP(®)/MS (2) combined with chemometrics methods.

    PubMed

    Zhou, Guisheng; Wang, Mengyue; Li, Yang; Peng, Ying; Li, Xiaobo

    2015-08-01

    In the present study, a new strategy based on chemical analysis and chemometrics methods was proposed for the comprehensive analysis and profiling of underivatized free amino acids (FAAs) and small peptides among various Luo-Han-Guo (LHG) samples. Firstly, the ultrasound-assisted extraction (UAE) parameters were optimized using Plackett-Burman (PB) screening and Box-Behnken designs (BBD), and the following optimal UAE conditions were obtained: ultrasound power of 280 W, extraction time of 43 min, and the solid-liquid ratio of 302 mL/g. Secondly, a rapid and sensitive analytical method was developed for simultaneous quantification of 24 FAAs and 3 active small peptides in LHG at trace levels using hydrophilic interaction ultra-performance liquid chromatography coupled with triple-quadrupole linear ion-trap tandem mass spectrometry (HILIC-UHPLC-QTRAP(®)/MS(2)). The analytical method was validated by matrix effects, linearity, LODs, LOQs, precision, repeatability, stability, and recovery. Thirdly, the proposed optimal UAE conditions and analytical methods were applied to measurement of LHG samples. It was shown that LHG was rich in essential amino acids, which were beneficial nutrient substances for human health. Finally, based on the contents of the 27 analytes, the chemometrics methods of unsupervised principal component analysis (PCA) and supervised counter propagation artificial neural network (CP-ANN) were applied to differentiate and classify the 40 batches of LHG samples from different cultivated forms, regions, and varieties. As a result, these samples were mainly clustered into three clusters, which illustrated the cultivating disparity among the samples. In summary, the presented strategy had potential for the investigation of edible plants and agricultural products containing FAAs and small peptides.

  14. Autonomous Energy Grids | Grid Modernization | NREL

    Science.gov Websites

    Autonomous energy grids control themselves using advanced machine learning and simulation to create resilient, reliable, and affordable optimized energy systems. The work combines optimization theory, control theory, big data analytics, and complex system theory and modeling to go beyond current frameworks for monitoring, controlling, and optimizing large-scale energy systems.

  15. Communication: Analytical optimal pulse shapes obtained with the aid of genetic algorithms: Controlling the photoisomerization yield of retinal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, R. D., E-mail: rdguerrerom@unal.edu.co; Arango, C. A., E-mail: caarango@icesi.edu.co; Reyes, A., E-mail: areyesv@unal.edu.co

    We recently proposed a Quantum Optimal Control (QOC) method constrained to build pulses from analytical pulse shapes [R. D. Guerrero et al., J. Chem. Phys. 143(12), 124108 (2015)]. This approach was applied to control the dissociation channel yields of the diatomic molecule KH, considering three potential energy curves and one degree of freedom. In this work, we utilized this methodology to study the strong field control of the cis-trans photoisomerization of 11-cis retinal. This more complex system was modeled with a Hamiltonian comprising two potential energy surfaces and two degrees of freedom. The resulting optimal pulse, made of 6 linearly chirped pulses, was capable of controlling the population of the trans isomer on the ground electronic surface for nearly 200 fs. The simplicity of the pulse generated with our QOC approach offers two clear advantages: a direct analysis of the sequence of events occurring during the driven dynamics, and its reproducibility in the laboratory with current laser technologies.

  16. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    PubMed

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.
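
    The core idea of such an analytical density model can be illustrated with a one-variable cost trade-off: installation cost grows with station density while drivers' access cost shrinks with it. The functional forms and constants below are hypothetical, not the ERDEC formulation.

```python
# Sketch: total cost per unit area = installation cost (rises with density d)
# + access cost (falls with d); minimize over d. All numbers are invented.
from scipy.optimize import minimize_scalar

def total_cost(d, c_station=50000.0, c_access=2.0, trips=1.2e5):
    install = c_station * d                  # stations per km^2 times unit cost
    access = c_access * trips / (d ** 0.5)   # mean access distance ~ 1/sqrt(d)
    return install + access

res = minimize_scalar(total_cost, bounds=(0.01, 50.0), method="bounded")
print(f"optimal density ~ {res.x:.2f} stations/km^2, cost ~ {res.fun:,.0f}")
```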

  17. Variable fidelity robust optimization of pulsed laser orbital debris removal under epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan

    2016-04-01

    A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model. When the solutions fall into the trust region, the analytical model is used to reduce the computational cost. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It will be shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.

  18. A new method to optimize natural convection heat sinks

    NASA Astrophysics Data System (ADS)

    Lampio, K.; Karvinen, R.

    2017-08-01

    The performance of a heat sink cooled by natural convection is strongly affected by its geometry, because buoyancy creates flow. Our model utilizes analytical results of forced flow and convection, and only conduction in a solid, i.e., the base plate and fins, is solved numerically. Sufficient accuracy for calculating maximum temperatures in practical applications is proved by comparing the results of our model with some simple analytical and computational fluid dynamics (CFD) solutions. An essential advantage of our model is that it cuts down on calculation CPU time by many orders of magnitude compared with CFD. The shorter calculation time makes our model well suited for multi-objective optimization, which is the best choice for improving heat sink geometry, because many geometrical parameters with opposite effects influence the thermal behavior. In multi-objective optimization, optimal locations of components and optimal dimensions of the fin array can be found by simultaneously minimizing the heat sink maximum temperature, size, and mass. This paper presents the principles of the particle swarm optimization (PSO) algorithm and applies it as a basis for optimizing existing heat sinks.
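
    A bare-bones particle swarm optimization loop, the algorithm named above, is sketched below on a toy objective that trades a temperature-like term against a mass-like term; the objective function and bounds are invented, not the authors' heat sink model.

```python
# Minimal PSO sketch on a toy fin-geometry trade-off (hypothetical objective).
import numpy as np

def objective(x):                       # x = [fin_height_mm, fin_count]
    temp = 100.0 / (x[:, 0] * x[:, 1])  # more/taller fins -> cooler
    mass = 0.02 * x[:, 0] * x[:, 1]     # ... but heavier
    return temp + mass

rng = np.random.default_rng(1)
n, dim, iters = 30, 2, 200
lo, hi = np.array([5.0, 2.0]), np.array([60.0, 40.0])
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = objective(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best [fin_height_mm, fin_count]:", gbest, "objective:", pbest_f.min())
```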

  19. Experimental and Numerical Optimization of a High-Lift System to Improve Low-Speed Performance, Stability, and Control of an Arrow-Wing Supersonic Transport

    NASA Technical Reports Server (NTRS)

    Hahne, David E.; Glaab, Louis J.

    1999-01-01

    An investigation was performed to evaluate leading-and trailing-edge flap deflections for optimal aerodynamic performance of a High-Speed Civil Transport concept during takeoff and approach-to-landing conditions. The configuration used for this study was designed by the Douglas Aircraft Company during the 1970's. A 0.1-scale model of this configuration was tested in the Langley 30- by 60-Foot Tunnel with both the original leading-edge flap system and a new leading-edge flap system, which was designed with modem computational flow analysis and optimization tools. Leading-and trailing-edge flap deflections were generated for the original and modified leading-edge flap systems with the computational flow analysis and optimization tools. Although wind tunnel data indicated improvements in aerodynamic performance for the analytically derived flap deflections for both leading-edge flap systems, perturbations of the analytically derived leading-edge flap deflections yielded significant additional improvements in aerodynamic performance. In addition to the aerodynamic performance optimization testing, stability and control data were also obtained. An evaluation of the crosswind landing capability of the aircraft configuration revealed that insufficient lateral control existed as a result of high levels of lateral stability. Deflection of the leading-and trailing-edge flaps improved the crosswind landing capability of the vehicle considerably; however, additional improvements are required.

  20. Analytic model for ultrasound energy receivers and their optimal electric loads

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-08-01

    In this paper, we present an analytic model for thickness-resonating plate ultrasound energy receivers, which we have derived from the piezoelectric and wave equations and in which we have included dielectric, viscosity, and acoustic attenuation losses. Afterwards, we explore the optimal electric load predictions of the zero-reflection and power-maximization approaches present in the literature with different acoustic boundary conditions, and discuss their limitations. To validate our model, we compared our expressions with the KLM model solved numerically, with very good agreement. Finally, we discuss the differences between the zero-reflection and power-maximization optimal electric loads, which start to differ as losses in the receiver increase.
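
    As a generic lumped-circuit illustration of the power-maximization load choice (not the paper's full plate model with attenuation losses), the sketch below represents the receiver as a Thevenin source with an assumed internal impedance and sweeps a resistive load for maximum delivered power.

```python
# Power into a resistive load from a Thevenin-equivalent source; values arbitrary.
import numpy as np

V_s = 1.0                          # open-circuit voltage amplitude [V]
Z_s = 50.0 - 30.0j                 # assumed internal impedance [ohm], partly capacitive

R_L = np.linspace(1.0, 300.0, 1000)
I = V_s / (Z_s + R_L)              # load current phasor
P = 0.5 * np.abs(I) ** 2 * R_L     # time-averaged power delivered to the load

best = np.argmax(P)
print(f"best resistive load ~ {R_L[best]:.1f} ohm (|Z_s| = {abs(Z_s):.1f} ohm)")
```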

  1. Performance of local optimization in single-plane fluoroscopic analysis for total knee arthroplasty.

    PubMed

    Prins, A H; Kaptein, B L; Stoel, B C; Lahaye, D J P; Valstar, E R

    2015-11-05

    Fluoroscopy-derived joint kinematics plays an important role in the evaluation of knee prostheses. Fluoroscopic analysis requires estimation of the 3D prosthesis pose from its 2D silhouette in the fluoroscopic image, by optimizing a dissimilarity measure. Currently, extensive user-interaction is needed, which makes analysis labor-intensive and operator-dependent. The aim of this study was to review five optimization methods for 3D pose estimation and to assess their performance in finding the correct solution. Two derivative-free optimizers (DHSAnn and IIPM) and three gradient-based optimizers (LevMar, DoNLP2 and IpOpt) were evaluated. For the latter three optimizers two different implementations were evaluated: one with a numerically approximated gradient and one with an analytically derived gradient for computational efficiency. On phantom data, all methods were able to find the 3D pose within 1 mm and 1° in more than 85% of cases. IpOpt had the highest success rate: 97%. On clinical data, the success rates were higher than 85% for the in-plane positions, but not for the rotations. IpOpt was the most expensive method, and the application of analytically derived gradients accelerated the gradient-based methods by a factor of 3-4 without any difference in success rate. In conclusion, 85% of the frames can be analyzed automatically in clinical data and only 15% of the frames require manual supervision. The success rate on phantom data (97% with IpOpt) indicates that even less supervision may become feasible. Copyright © 2015 Elsevier Ltd. All rights reserved.
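
    The speed-up from analytically derived gradients can be illustrated with a toy dissimilarity function: supplying a Jacobian to a gradient-based optimizer removes the finite-difference evaluations. The quadratic below merely stands in for the real silhouette dissimilarity measure.

```python
# Compare function-evaluation counts with and without an analytic gradient.
import numpy as np
from scipy.optimize import minimize

A = np.diag([1.0, 4.0, 9.0, 0.5, 2.0, 7.0])   # 6 pose parameters (3 trans, 3 rot)
b = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 0.2])

def dissim(p):
    return 0.5 * p @ A @ p - b @ p

def grad(p):
    return A @ p - b

x0 = np.zeros(6)
res_num = minimize(dissim, x0, method="BFGS")             # finite-difference gradient
res_ana = minimize(dissim, x0, jac=grad, method="BFGS")   # analytic gradient

print("function evaluations, numeric gradient :", res_num.nfev)
print("function evaluations, analytic gradient:", res_ana.nfev)
```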

  2. Optimization of a Precolumn OPA Derivatization HPLC Assay for Monitoring of l-Asparagine Depletion in Serum during l-Asparaginase Therapy.

    PubMed

    Zhang, Mei; Zhang, Yong; Ren, Siqi; Zhang, Zunjian; Wang, Yongren; Song, Rui

    2018-06-06

    A method for monitoring l-asparagine (ASN) depletion in patients' serum using reversed-phase high-performance liquid chromatography with precolumn o-phthalaldehyde and ethanethiol (ET) derivatization is described. In order to improve the signal and stability of the analytes, several important factors including the precipitant reagent, derivatization conditions and detection wavelengths were optimized. The recovery of the analytes in the biological matrix was highest when 4% sulfosalicylic acid (1:1, v/v) was used as the precipitant reagent. Optimal fluorescence detection parameters were determined as λex = 340 nm and λem = 444 nm for maximal signal. The signal of the analytes was highest when the reagent ET and a borate buffer of pH 9.9 were used in the derivatization solution, and the corresponding derivative products were stable for up to 19 h. The validated method was successfully applied to monitor ASN depletion and l-aspartic acid, l-glutamine and l-glutamic acid levels in pediatric patients during l-asparaginase therapy.

  3. Inductance optimization of miniature Broadband transformers with racetrack shaped ferrite cores for Ethernet applications

    NASA Astrophysics Data System (ADS)

    Bowen, David; Krafft, Charles; Mayergoyz, Isaak D.

    2017-05-01

    There is strong commercial interest in the ability to fabricate the windings of traditional miniature wire-wound inductive circuit components, such as Ethernet transformers, lithographically. For greater inductance devices, thick cores are required, making the process of embedding the ferrite material within circuit board one of few options for lithographic winding fabrication. In this paper, a non-traditional core shape, suitable for embedding in circuit board, is examined analytically and experimentally; the racetrack shape is two halves of a toroid connected by straight legs. With regard to the high inductance requirements for Ethernet applications (350μH), the racetrack transformer inductance is analytically optimized, determining the optimal physical dimensions. Two sizes of racetrack-core transformers were fabricated and measured. The measured inductance was in reasonable agreement with the analytical prediction, though large variations in material permeability are expected from the mechanical processing of the ferrite. Some of the experimental transformers were observed to satisfy the Ethernet inductance requirement.

  4. Reactive power optimization strategy considering analytical impedance ratio

    NASA Astrophysics Data System (ADS)

    Wu, Zhongchao; Shen, Weibing; Liu, Jinming; Guo, Maoran; Zhang, Shoulin; Xu, Keqiang; Wang, Wanjun; Sui, Jinlong

    2017-05-01

    In this paper, considering that traditional reactive power optimization cannot realize continuous voltage adjustment and voltage stability, a dynamic reactive power optimization strategy is proposed to achieve both minimization of network loss and high voltage stability with wind power. Because wind power generation is fluctuant and uncertain, electrical equipment such as transformers and shunt capacitors may be operated frequently in order to minimize network loss, which shortens the lifetime of these devices. To solve this problem, this paper introduces the derivation of an analytical impedance ratio based on the Thevenin equivalent, and a multi-objective function is proposed to minimize both the network loss and the analytical impedance ratio. Finally, taking the improved IEEE 33-bus distribution system as an example, the results show that the movement of voltage control equipment is reduced while the increase in network loss is kept under control, which demonstrates the practical value of this strategy.

  5. Adaption of a parallel-path poly(tetrafluoroethylene) nebulizer to an evaporative light scattering detector: Optimization and application to studies of poly(dimethylsiloxane) oligomers as a model polymer.

    PubMed

    Durner, Bernhard; Ehmann, Thomas; Matysik, Frank-Michael

    2018-06-05

    The adaptation of a parallel-path poly(tetrafluoroethylene) (PTFE) ICP nebulizer to an evaporative light scattering detector (ELSD) was realized by substituting the originally installed concentric glass nebulizer of the ELSD. The performance of both nebulizers was compared regarding nebulizer temperature, evaporator temperature, flow rate of nebulizing gas and flow rate of mobile phase for different solvents, using caffeine and poly(dimethylsiloxane) (PDMS) as analytes. Both nebulizers showed similar performance, but the parallel-path PTFE nebulizer performed considerably better at low LC flow rates and its lifetime was substantially longer. In general, for both nebulizers the highest sensitivity was obtained by applying the lowest possible evaporator temperature in combination with the highest possible nebulizer temperature at preferably low gas flow rates. Besides the optimization of detector parameters, response factors for various PDMS oligomers were determined and the dependency of the detector signal on the molar mass of the analytes was studied. The significant improvement regarding long-term stability made the modified ELSD much more robust and saved time and money by reducing the maintenance efforts. Thus, especially in polymer HPLC, associated with a complex matrix situation, the PTFE-based parallel-path nebulizer exhibits attractive characteristics for analytical studies of polymers. Copyright © 2018. Published by Elsevier B.V.

  6. Utilizing global data to estimate analytical performance on the Sigma scale: A global comparative analysis of methods, instruments, and manufacturers through external quality assurance and proficiency testing programs.

    PubMed

    Westgard, Sten A

    2016-06-01

    To assess the analytical performance of instruments and methods through external quality assessment and proficiency testing data on the Sigma scale. A representative report from five different EQA/PT programs around the world (2 US, 1 Canadian, 1 UK, and 1 Australasian) was accessed. The instrument group standard deviations were used as surrogate estimates of instrument imprecision. Performance specifications from the US CLIA proficiency testing criteria were used to establish a common quality goal. Then Sigma-metrics were calculated to grade the analytical performance. Different methods have different Sigma-metrics for each analyte reviewed. Summary Sigma-metrics estimate the percentage of the chemistry analytes that are expected to perform above Five Sigma, which is where optimized QC design can be implemented. The range of performance varies from 37% to 88%, exhibiting significant differentiation between instruments and manufacturers. Median Sigmas for the different manufacturers in three analytes (albumin, glucose, sodium) showed significant differentiation. Chemistry tests are not commodities. Quality varies significantly from manufacturer to manufacturer, instrument to instrument, and method to method. The Sigma-assessments from multiple EQA/PT programs provide more insight into the performance of methods and instruments than any single program by itself. It is possible to produce a ranking of performance by manufacturer, instrument and individual method. Laboratories seeking optimal instrumentation would do well to consult this data as part of their decision-making process. To confirm that these assessments are stable and reliable, a longer term study should be conducted that examines more results over a longer time period. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
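
    For context, the Sigma-metric used in assessments of this kind is commonly computed as Sigma = (TEa - |bias|) / CV, with all quantities expressed in percent. The sketch below applies that formula to made-up example values, not to data from the surveyed programs.

```python
# Sigma-metric calculation on hypothetical example numbers.
def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

examples = {            # analyte: (allowable total error %, observed bias %, group CV %)
    "glucose": (10.0, 1.5, 1.2),
    "sodium":  (2.86, 0.4, 0.6),   # hypothetical % equivalent of a mmol/L limit
    "albumin": (10.0, 2.0, 2.5),
}
for analyte, (tea, bias, cv) in examples.items():
    print(f"{analyte:8s} Sigma = {sigma_metric(tea, bias, cv):.1f}")
```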

  7. Standardization and optimization of fluorescence in situ hybridization (FISH) for HER-2 assessment in breast cancer: A single center experience.

    PubMed

    Bogdanovska-Todorovska, Magdalena; Petrushevska, Gordana; Janevska, Vesna; Spasevska, Liljana; Kostadinova-Kunovska, Slavica

    2018-05-20

    Accurate assessment of human epidermal growth factor receptor 2 (HER-2) is crucial in selecting patients for targeted therapy. Commonly used methods for HER-2 testing are immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH). Here we presented the implementation, optimization and standardization of two FISH protocols using breast cancer samples and assessed the impact of pre-analytical and analytical factors on HER-2 testing. Formalin fixed paraffin embedded (FFPE) tissue samples from 70 breast cancer patients were tested for HER-2 using PathVysion™ HER-2 DNA Probe Kit and two different paraffin pretreatment kits, Vysis/Abbott Paraffin Pretreatment Reagent Kit (40 samples) and DAKO Histology FISH Accessory Kit (30 samples). The concordance between FISH and IHC results was determined. Pre-analytical and analytical factors (i.e., fixation, baking, digestion, and post-hybridization washing) affected the efficiency and quality of hybridization. The overall hybridization success in our study was 98.6% (69/70); the failure rate was 1.4%. The DAKO pretreatment kit was more time-efficient and resulted in more uniform signals that were easier to interpret, compared to the Vysis/Abbott kit. The overall concordance between IHC and FISH was 84.06%, kappa coefficient 0.5976 (p < 0.0001). The greatest discordance (82%) between IHC and FISH was observed in IHC 2+ group. A standardized FISH protocol for HER-2 assessment, with high hybridization efficiency, is necessary due to variability in tissue processing and individual tissue characteristics. Differences in the pre-analytical and analytical steps can affect the hybridization quality and efficiency. The use of DAKO pretreatment kit is time-saving and cost-effective.

  8. Study and characterization of a MEMS micromirror device

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2004-08-01

    In this paper, advances in our study and characterization of a MEMS micromirror device are presented. The micromirror device, of 510 μm characteristic length, operates in a dynamic mode with a maximum displacement on the order of 10 μm along its principal optical axis and oscillation frequencies of up to 1.3 kHz. Developments are carried out by analytical, computational, and experimental methods. Analytical and computational nonlinear geometrical models are developed in order to determine the optimal loading-displacement operational characteristics of the micromirror. Due to the operational mode of the micromirror, the experimental characterization of its loading-displacement transfer function requires utilization of advanced optical metrology methods. Optoelectronic holography (OEH) methodologies based on multiple wavelengths that we are developing to perform such characterization are described. It is shown that the analytical, computational, and experimental approach is effective in our developments.

  9. Development of an 19F NMR method for the analysis of fluorinated acids in environmental water samples.

    PubMed

    Ellis, D A; Martin, J W; Muir, D C; Mabury, S A

    2000-02-15

    This investigation was carried out to evaluate 19F NMR as an analytical tool for the measurement of trifluoroacetic acid (TFA) and other fluorinated acids in the aquatic environment. A method based upon strong anionic exchange (SAX) chromatography was also optimized for the concentration of the fluoro acids prior to NMR analysis. Extraction of the analyte from the SAX column was carried out directly in the NMR solvent in the presence of the strong organic base, DBU. The method allowed the analysis of the acid without any prior cleanup steps being involved. Optimal NMR sensitivity based upon T1 relaxation times was investigated for seven fluorinated compounds in four different NMR solvents. The use of the relaxation agent chromium acetylacetonate, Cr(acac)3, within these solvent systems was also evaluated. Results show that the optimal NMR solvent differs for each fluorinated analyte. Cr(acac)3 was shown to have pronounced effects on the limits of detection of the analyte. Generally, the optimal sensitivity condition appears to be methanol-d4/2 M DBU in the presence of 4 mg/mL of Cr(acac)3. The method was validated through spike and recovery for five fluoro acids from environmentally relevant waters. Results are presented for the analysis of TFA in Toronto rainwater, which ranged from < 16 to 850 ng/L. The NMR results were confirmed by GC-MS selected-ion monitoring of the fluoroanalide derivative.

  10. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

    Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm and the problem of control with minimal time under a given restriction on the control norm. The problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the optimal control problems stated are solved analytically. Some analogies are pointed out between the results obtained and results known for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.

  11. A Numerical-Analytical Approach Based on Canonical Transformations for Computing Optimal Low-Thrust Transfers

    NASA Astrophysics Data System (ADS)

    da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.

    2018-04-01

    A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.

  12. Finding Optimal Gains In Linear-Quadratic Control Problems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E., Jr.

    1990-01-01

    Analytical method based on Volterra factorization leads to new approximations for optimal control gains in finite-time linear-quadratic control problem of system having infinite number of dimensions. Circumvents need to analyze and solve Riccati equations and provides more transparent connection between dynamics of system and optimal gain.
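
    For comparison, the conventional route to finite-horizon LQ gains is a backward Riccati recursion (shown below in discrete time with arbitrary example matrices); the Volterra-factorization approach summarized above is precisely an alternative that avoids this step for infinite-dimensional systems.

```python
# Standard finite-horizon discrete-time Riccati recursion (baseline, not the
# paper's method). System and weighting matrices are arbitrary examples.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
Qf = np.eye(2)
N = 50

P = Qf
gains = []
for _ in range(N):                     # sweep backward in time
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                        # gains[k] is the feedback gain at step k

print("gain at k=0:", gains[0])
```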

  13. SP70-alpha-benzoin oxime chelating resin for preconcentration-separation of Pb(II), Cd(II), Co(II) and Cr(III) in environmental samples.

    PubMed

    Narin, Ibrahim; Surme, Yavuz; Bercin, Erdogan; Soylak, Mustafa

    2007-06-25

    In the presented work, an alpha-benzoin oxime immobilized SP70 chelating resin was synthesized for the separation and preconcentration of Pb(II), Cd(II), Co(II) and Cr(III). The optimization of analytical parameters, including pH, eluent type and flow rate, was examined in order to obtain quantitative recoveries of the analyte ions. The effects of foreign ions on the recoveries of the studied metal ions were also investigated. The detection limits (3σ) were found to be 16.0, 4.2, 1.3 and 2.4 μg L(-1) for Pb, Cd, Co and Cr, respectively. The preconcentration factor was 75 for Pb and 100 for Cd, Co and Cr. The optimized method was validated with certified reference materials and successfully applied to water, crop and pharmaceutical samples with good results (recoveries greater than 95%, R.S.D. lower than 10%).

  14. Analytes and metabolites associated with muscle quality in young, healthy adults

    USDA-ARS?s Scientific Manuscript database

    Purpose: Identification of mechanisms that underlie lower extremity muscle quality (leg press one repetition maximum/total lean mass; LP/Lean) may be important for individuals interested in optimizing fitness and sport performance. The purpose of the current study was to provide observational insigh...

  15. Fast determination of pyrethroid pesticides in tobacco by GC-MS-SIM coupled with modified QuEChERS sample preparation procedure.

    PubMed

    Gao, Yan; Sun, Ying; Jiang, Chunzhu; Yu, Xi; Wang, Yuanpeng; Zhang, Hanqi; Song, Daqian

    2013-01-01

    An analytical method was developed for the extraction and determination of pyrethroid pesticide residues in tobacco. The modified QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method was applied for preparing samples. In this study, a methyl cyanide (MeCN)-saturated salt aqueous solution was used as the two-phase extraction solvent for the first time, and a vortex shaker was used for the simultaneous shaking and concentration of the analytes. The effects of experimental parameters on extraction and clean-up efficiency were investigated and optimized. The analytes were determined by gas chromatography-mass spectrometry-selected ion monitoring (GC-MS-SIM). The obtained recoveries of the analytes at three different fortification levels were 76.85-114.1% and relative standard deviations (RSDs) were lower than 15.7%. The limits of quantification (LOQs) were from 1.28 to 26.6 μg kg(-1). This method was also applied to the analysis of actual commercial tobacco products and the analytical results were satisfactory.

  16. Visual analytics of brain networks.

    PubMed

    Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming

    2012-05-15

    Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Extraction, isolation, and purification of analytes from samples of marine origin--a multivariate task.

    PubMed

    Liguori, Lucia; Bjørsvik, Hans-René

    2012-12-01

    The development of a multivariate study for a quantitative analysis of six different polybrominated diphenyl ethers (PBDEs) in tissue of Atlantic Salmo salar L. is reported. An extraction, isolation, and purification process based on an accelerated solvent extraction system was designed, investigated, and optimized by means of statistical experimental design and multivariate data analysis and regression. An accompanying gas chromatography-mass spectrometry analytical method was developed for the identification and quantification of the analytes, BDE 28, BDE 47, BDE 99, BDE 100, BDE 153, and BDE 154. These PBDEs have been used in commercial blends employed as flame retardants for a variety of materials, including electronic devices, synthetic polymers and textiles. The present study revealed that an extracting solvent mixture composed of hexane and CH₂Cl₂ (10:90) provided excellent recoveries of all of the six PBDEs studied herein. A somewhat lower polarity of the extracting solvent, hexane and CH₂Cl₂ (40:60), decreased the analyte %-recoveries, which still remained acceptable and satisfactory. The study demonstrates the necessity of a detailed investigation of the extraction and purification process in order to achieve quantitative isolation of the analytes from the specific matrix. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method is presented for a chain of rigid bodies interconnected by spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor of 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.
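
    The single-segment building block underlying such methods is the least-squares rigid-body fit from marker coordinates (the problem the Veldpaus-method solves); the sketch below recovers a rotation and translation from synthetic noisy markers using the SVD (Kabsch) construction rather than the original algorithm.

```python
# Least-squares rigid-body fit from noisy marker coordinates (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
markers_local = rng.uniform(-0.1, 0.1, (6, 3))           # marker set on one segment

# Ground-truth pose to recover
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
measured = markers_local @ R_true.T + t_true + rng.normal(0, 0.002, (6, 3))

# Kabsch: subtract centroids, SVD of the cross-dispersion matrix
P = markers_local - markers_local.mean(axis=0)
Q = measured - measured.mean(axis=0)
U, _, Vt = np.linalg.svd(P.T @ Q)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
R_est = Vt.T @ D @ U.T
t_est = measured.mean(axis=0) - markers_local.mean(axis=0) @ R_est.T

print("rotation error:", np.linalg.norm(R_est - R_true))
print("translation error:", np.linalg.norm(t_est - t_true))
```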

  19. Development and optimization of an analytical system for volatile organic compound analysis coming from the heating of interstellar/cometary ice analogues.

    PubMed

    Abou Mrad, Ninette; Duvernay, Fabrice; Theulé, Patrice; Chiavassa, Thierry; Danger, Grégoire

    2014-08-19

    This contribution presents an original analytical system for studying volatile organic compounds (VOC) coming from the heating and/or irradiation of interstellar/cometary ice analogues (VAHIIA system) through laboratory experiments. The VAHIIA system brings solutions to three analytical constraints regarding chromatography analysis: the low desorption kinetics of VOC (many hours) in the vacuum chamber during laboratory experiments, the low pressure under which they sublime (10(-9) mbar), and the presence of water in ice analogues. The VAHIIA system which we developed, calibrated, and optimized is composed of two units. The first is a preconcentration unit providing the VOC recovery. This unit is based on a cryogenic trapping which allows VOC preconcentration and provides an adequate pressure allowing their subsequent transfer to an injection unit. The latter is a gaseous injection unit allowing the direct injection into the GC-MS of the VOC previously transferred from the preconcentration unit. The feasibility of the online transfer through this interface is demonstrated. Nanomoles of VOC can be detected with the VAHIIA system, and the variability in replicate measurements is lower than 13%. The advantages of the GC-MS in comparison to infrared spectroscopy are pointed out, the GC-MS allowing an unambiguous identification of compounds coming from complex mixtures. Beyond the application to astrophysical subjects, these analytical developments can be used for all systems requiring vacuum/cryogenic environments.

  20. Coprecipitation-assisted coacervative extraction coupled to high-performance liquid chromatography: An approach for determining organophosphorus pesticides in water samples.

    PubMed

    Mammana, Sabrina B; Berton, Paula; Camargo, Alejandra B; Lascalea, Gustavo E; Altamirano, Jorgelina C

    2017-05-01

    An analytical methodology based on coprecipitation-assisted coacervative extraction coupled to HPLC-UV was developed for determination of five organophosphorus pesticides (OPPs), including fenitrothion, guthion, parathion, methidathion, and chlorpyrifos, in water samples. It involves a green technique leading to an efficient and simple analytical methodology suitable for high-throughput analysis. Relevant physicochemical variables were studied and optimized on the analytical response of each OPP. Under optimized conditions, the resulting methodology was as follows: an aliquot of 9 mL of water sample was placed into a centrifuge tube and 0.5 mL sodium citrate 0.1 M, pH 4; 0.08 mL Al2(SO4)3 0.1 M; and 0.7 mL SDS 0.1 M were added and homogenized. After centrifugation the supernatant was discarded. A 700 μL aliquot of the coacervate-rich phase obtained was dissolved with 300 μL of methanol and 20 μL of the resulting solution was analyzed by HPLC-UV. The resulting LODs ranged within 0.7-2.5 ng/mL and the achieved RSD and recovery values were <8% (n = 3) and >81%, respectively. The proposed analytical methodology was successfully applied for the analysis of five OPPs in water samples for human consumption of different locations of Mendoza. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Analytical Quality by Design Approach in RP-HPLC Method Development for the Assay of Etofenamate in Dosage Forms

    PubMed Central

    Peraman, R.; Bhadraya, K.; Reddy, Y. Padmanabha; Reddy, C. Surayaprakash; Lokesh, T.

    2015-01-01

    By considering the current regulatory requirements for analytical method development, a reversed phase high performance liquid chromatographic method for the routine analysis of etofenamate in dosage form has been optimized using an analytical quality by design approach. Unlike the routine approach, the present study was initiated with an understanding of the quality target product profile, the analytical target profile and a risk assessment of the method variables that affect the method response. A liquid chromatography system equipped with a C18 column (250×4.6 mm, 5 μ), a binary pump and a photodiode array detector was used in this work. The experiments were planned using a central composite design, which saves time, reagents and other resources. Sigma Tech software was used to plan and analyse the experimental observations and to obtain the quadratic process model. The process model was used to predict the retention time. The retention times predicted from the contour diagram were verified experimentally and agreed with the actual experimental data. The optimized method used a flow rate of 1.2 ml/min with a mobile phase of methanol and 0.2% triethylamine in water (85:15, % v/v), pH adjusted to 6.5. The method was validated and verified for the targeted method performance, robustness and system suitability during method transfer. PMID:26997704
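
    The quadratic process model that a central composite design is built to estimate can be fitted by ordinary least squares, as sketched below; the design points, factor names, and responses are invented for illustration and are not the study's data.

```python
# Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2 to a
# two-factor central composite design with made-up responses.
import numpy as np

X_raw = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
                  [0, 0], [0, 0], [0, 0]])                 # coded factor levels
y = np.array([8.2, 6.1, 7.5, 5.2, 8.8, 5.0, 7.9, 6.4, 6.8, 6.7, 6.9])  # response (e.g., retention time)

x1, x2 = X_raw[:, 0], X_raw[:, 1]
M = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print("b0, b1, b2, b12, b11, b22 =", np.round(coef, 3))

# Predict the response at a new coded point, as one would from a contour plot
p = np.array([1.0, 0.5, -0.5, 0.5 * -0.5, 0.25, 0.25])
print("predicted response at (0.5, -0.5):", round(float(p @ coef), 2))
```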

  2. Optimization and Verification of Droplet Digital PCR Event-Specific Methods for the Quantification of GM Maize DAS1507 and NK603.

    PubMed

    Grelewska-Nowotko, Katarzyna; Żurawska-Zajfert, Magdalena; Żmijewska, Ewelina; Sowa, Sławomir

    2018-05-01

    In recent years, digital polymerase chain reaction (dPCR), a new molecular biology technique, has been gaining in popularity. Among many other applications, this technique can also be used for the detection and quantification of genetically modified organisms (GMOs) in food and feed. It might replace the currently widely used real-time PCR method (qPCR), by overcoming problems related to PCR inhibition and the requirement for certified reference materials to be used as calibrants. In theory, validated qPCR methods can be easily transferred to the dPCR platform. However, optimization of the PCR conditions might be necessary. In this study, we report the transfer of two validated qPCR methods for quantification of maize DAS1507 and NK603 events to the droplet dPCR (ddPCR) platform. After some optimization, both methods have been verified according to the guidance of the European Network of GMO Laboratories (ENGL) on analytical method verification (ENGL working group on "Method Verification." (2011) Verification of Analytical Methods for GMO Testing When Implementing Interlaboratory Validated Methods). The digital PCR methods performed as well as or better than the qPCR methods. The optimized ddPCR methods confirmed their suitability for GMO determination in food and feed.

  3. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

    The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.

  4. GeneratorSE: A Sizing Tool for Variable-Speed Wind Turbine Generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Dykes, Katherine L

    This report documents a set of analytical models employed by the optimization algorithms within the GeneratorSE framework. The initial values and boundary conditions employed for the generation of the various designs and initial estimates for basic design dimensions, masses, and efficiency for the four different models of generators are presented and compared with empirical data collected from previous studies and some existing commercial turbines. These models include designs applicable for variable-speed, high-torque application featuring direct-drive synchronous generators and low-torque application featuring induction generators. In all of the four models presented, the main focus of optimization is electromagnetic design with the exception of permanent-magnet and wire-wound synchronous generators, wherein the structural design is also optimized. Thermal design is accommodated in GeneratorSE as a secondary attribute by limiting the winding current densities to acceptable limits. A preliminary validation of electromagnetic design was carried out by comparing the optimized magnetic loading against those predicted by numerical simulation in FEMM4.2, a finite-element software for analyzing electromagnetic and thermal physics problems for electrical machines. For direct-drive synchronous generators, the analytical models for the structural design are validated by static structural analysis in ANSYS.

  5. Optimal tuning of a confined Brownian information engine.

    PubMed

    Park, Jong-Min; Lee, Jae Sung; Noh, Jae Dong

    2016-03-01

    A Brownian information engine is a device extracting mechanical work from a single heat bath by exploiting the information on the state of a Brownian particle immersed in the bath. As for engines, it is important to find the optimal operating condition that yields the maximum extracted work or power. The optimal condition for a Brownian information engine with a finite cycle time τ has been rarely studied because of the difficulty in finding the nonequilibrium steady state. In this study, we introduce a model for the Brownian information engine and develop an analytic formalism for its steady-state distribution for any τ. We find that the extracted work per engine cycle is maximum when τ approaches infinity, while the power is maximum when τ approaches zero.

  6. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    PubMed Central

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of their ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible of the acid-base equilibria of wine. The analytical concentration of carboxylic acids and of other acid-base active substances was used as input, with the total acidity, for the chemical modelling step of the study based on the contemporary treatment of overlapped protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level to the study. Validation of the chemical model optimized is achieved by way of conductometric measurements and using a synthetic “wine” especially adapted for testing. PMID:22566762

  7. Acid-base chemistry of white wine: analytical characterisation and chemical modelling.

    PubMed

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of their ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible of the acid-base equilibria of wine. The analytical concentration of carboxylic acids and of other acid-base active substances was used as input, with the total acidity, for the chemical modelling step of the study based on the contemporary treatment of overlapped protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level to the study. Validation of the chemical model optimized is achieved by way of conductometric measurements and using a synthetic "wine" especially adapted for testing.

  8. Micromechanical analysis and design of an integrated thermal protection system for future space vehicles

    NASA Astrophysics Data System (ADS)

    Martinez, Oscar

    Thermal protection systems (TPS) are the key features incorporated into a spacecraft's design to protect it from severe aerodynamic heating during high-speed travel through planetary atmospheres. The thermal protection system is the key technology that enables a spacecraft to be lightweight, fully reusable, and easily maintainable. Add-on TPS concepts have been used since the beginning of the space race. The Apollo space capsule used ablative TPS and the Space Shuttle Orbiter TPS technology consisted of ceramic tiles and blankets. Many problems arose from the add-on concept, such as incompatibility, high maintenance costs, lack of load-bearing capability, and limited robustness and operability. To make the spacecraft's TPS more reliable, robust, and efficient, we investigated the Integrated Thermal Protection System (ITPS) concept in which the load-bearing structure and the TPS are combined into one single component. The design of an ITPS was a challenging task, because the requirements of a load-bearing structure and a TPS are often conflicting. Finite element (FE) analysis is often the preferred method of choice for a structural analysis problem. However, as the structure becomes complex, the computational time and effort for an FE analysis increase. New structural analytical tools were developed, or available ones were modified, to perform a full structural analysis of the ITPS. With analytical tools, the designer is capable of obtaining quick and accurate results and has a good idea of the response of the structure without having to go to an FE analysis. A MATLAB® code was developed to analytically determine performance metrics of the ITPS such as stresses, buckling, deflection, and other failure modes. The analytical models provide fast and accurate results that were within 5% difference from the FEM results. The optimization procedure usually performs 100 function evaluations for every design variable. Using the analytical models in the optimization procedure was a time saver, because an optimum design was reached in less than an hour, whereas an FE optimization study would take hours to reach an optimum design. Corrugated-core structures were designed for ITPS applications with loads and boundary conditions similar to those of a Space Shuttle-like vehicle. Temperature, buckling, deflection and stress constraints were considered for the design and optimization process. An optimized design was achieved with consideration of all the constraints. The ITPS design obtained from the analytical solutions was lighter (4.38 lb/ft2) when compared to the ITPS design obtained from a finite element analysis (4.85 lb/ft2). The ITPS boundary effects added local stresses and compressive loads to the top facesheet that could not be captured by the 2D plate solutions. The inability to fully capture the boundary effects led to a lighter ITPS when compared to the FE solution. However, the ITPS can withstand substantially large mechanical loads when compared to the previous designs. Truss-core structures were found to be unsuitable as they could not withstand the large thermal gradients frequently encountered in ITPS applications.

  9. Aerothermodynamic shape optimization of hypersonic blunt bodies

    NASA Astrophysics Data System (ADS)

    Eyi, Sinan; Yumuşak, Mine

    2015-07-01

    The aim of this study is to develop a reliable and efficient design tool that can be used in hypersonic flows. The flow analysis is based on the axisymmetric Euler/Navier-Stokes and finite-rate chemical reaction equations. The equations are coupled simultaneously and solved implicitly using Newton's method. The Jacobian matrix is evaluated analytically. A gradient-based numerical optimization is used. The adjoint method is utilized for sensitivity calculations. The objective of the design is to generate a hypersonic blunt geometry that produces the minimum drag with low aerodynamic heating. Bezier curves are used for geometry parameterization. The performances of the design optimization method are demonstrated for different hypersonic flow conditions.
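
    A Bezier-curve geometry parameterization of the kind mentioned above can be evaluated with the de Casteljau recursion; the control points in the sketch below are arbitrary placeholders for an axisymmetric nose profile, not the study's optimized geometry.

```python
# Evaluate a Bezier curve defined by a handful of control points (de Casteljau).
import numpy as np

def bezier(control_pts, t):
    """Evaluate a Bezier curve at parameter values t via de Casteljau recursion."""
    pts = np.repeat(control_pts[None, :, :], len(t), axis=0)  # (nt, n_ctrl, 2)
    while pts.shape[1] > 1:
        pts = (1 - t)[:, None, None] * pts[:, :-1, :] + t[:, None, None] * pts[:, 1:, :]
    return pts[:, 0, :]

# Hypothetical (axial, radial) control points for a blunt nose profile
ctrl = np.array([[0.0, 0.0], [0.0, 0.4], [0.5, 0.8], [1.5, 1.0]])
t = np.linspace(0.0, 1.0, 5)
print(bezier(ctrl, t))
```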

  10. A deterministic global optimization using smooth diagonal auxiliary functions

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.

    2015-04-01

    In many practical decision-making problems it happens that functions involved in optimization process are black-box with unknown analytical representations and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied and numerical experiments executed on eight hundred test functions are presented.
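
    The lower-bounding idea behind Lipschitz global optimization is easiest to see in the classical one-dimensional Piyavskii-Shubert method sketched below (with an assumed known Lipschitz constant for f itself); the paper's diagonal 'Divide-the-Best' scheme is a far more refined, derivative-aware relative.

```python
# 1-D Piyavskii-Shubert sketch: split the interval whose saw-tooth lower bound is lowest.
import numpy as np

def piyavskii(f, a, b, K, iters=40):
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(iters):
        order = np.argsort(xs)
        x_sorted = np.array(xs)[order]
        f_sorted = np.array(fs)[order]
        # candidate point and lower bound in each sub-interval from the Lipschitz minorant
        x_cand = 0.5 * (x_sorted[:-1] + x_sorted[1:]) + (f_sorted[:-1] - f_sorted[1:]) / (2 * K)
        lower = 0.5 * (f_sorted[:-1] + f_sorted[1:]) - 0.5 * K * (x_sorted[1:] - x_sorted[:-1])
        i = int(np.argmin(lower))
        xs.append(float(x_cand[i]))
        fs.append(f(x_cand[i]))
    j = int(np.argmin(fs))
    return xs[j], fs[j]

f = lambda x: np.sin(3 * x) + 0.5 * x          # toy multimodal objective on [0, 6]
print(piyavskii(f, 0.0, 6.0, K=3.5))           # K = 3.5 bounds |f'| for this toy f
```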

  11. Application of support vector regression for optimization of vibration flow field of high-density polyethylene melts characterized by small angle light scattering

    NASA Astrophysics Data System (ADS)

    Xian, Guangming

    2018-03-01

    In this paper, the vibration flow field parameters of polymer melts in a visual slit die are optimized by using an intelligent algorithm. Experimental small angle light scattering (SALS) patterns are shown to characterize the processing process. In order to capture the scattered light, a polarizer and an analyzer are placed before and after the polymer melts. The results reported in this study are obtained using high-density polyethylene (HDPE) at a rotation speed of 28 rpm. In addition, the support vector regression (SVR) analytical method is introduced for optimizing the parameters of the vibration flow field. This work establishes the general applicability of SVR for predicting the optimal parameters of the vibration flow field.
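
    A minimal support vector regression sketch of the kind of parameter-to-response mapping described above is shown below; the feature names, data, and hyperparameters are placeholders rather than the paper's SALS measurements.

```python
# Fit an RBF-kernel SVR to synthetic process data and query the model.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.uniform([10.0, 0.5], [40.0, 5.0], size=(60, 2))   # [rotation rpm, vibration Hz] (hypothetical)
y = np.sin(0.2 * X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.05, 60)  # synthetic response

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)
print("predicted response at 28 rpm, 2 Hz:", model.predict([[28.0, 2.0]]))
```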

  12. Data and Tools | Concentrating Solar Power | NREL

    Science.gov Websites

    Solar Power tower Integrated Layout and Optimization Tool (SolarPILOT™): a code that combines the rapid layout and optimization capability of the analytical DELSOL3 program with the accuracy and ...

  13. Optimization of microwave-assisted extraction and supercritical fluid extraction of carbamate pesticides in soil by experimental design methodology.

    PubMed

    Sun, Lei; Lee, Hian Kee

    2003-10-03

    Orthogonal array design (OAD) was applied for the first time to optimize microwave-assisted extraction (MAE) and supercritical fluid extraction (SFE) conditions for the analysis of four carbamates (propoxur, propham, methiocarb, chlorpropham) from soil. The theory and methodology of a new OA16 (4(4)) matrix derived from an OA16 (2(15)) matrix were developed during the MAE optimization. An analysis of variance technique was employed as the data analysis strategy in this study. Determinations of analytes were completed using high-performance liquid chromatography (HPLC) with UV detection. Four carbamates were successfully extracted from soil with recoveries ranging from 85 to 105% with good reproducibility (approximately 4.9% RSD) under the optimum MAE conditions: 30 ml methanol, 80 degrees C extraction temperature, and 6-min microwave heating. An OA8 (2(7)) matrix was employed for the SFE optimization. The average recoveries and RSD of the analytes from spiked soil by SFE were 92 and 5.5%, respectively except for propham (66.3+/-7.9%), under the following conditions: heating for 30 min at 60 degrees C under supercritical CO2 at 300 kg/cm2 modified with 10% (v/v) methanol. The composition of the supercritical fluid was demonstrated to be a crucial factor in the extraction. The addition of a small volume (10%) of methanol to CO2 greatly enhanced the recoveries of carbamates. A comparison of MAE with SFE was also conducted. The results indicated that >85% average recoveries were obtained by both optimized extraction techniques, and slightly higher recoveries of three carbamates (propoxur, propham and methiocarb) were achieved using MAE. SFE showed slightly higher recovery for chlorpropham (93 vs. 87% for MAE). The effects of time-aged soil on the extraction of analytes were examined and the results obtained by both methods were also compared.

  14. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
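
    The response-surface-plus-Monte-Carlo workflow described above can be sketched as follows. This is a minimal illustration under assumed inputs: the two design factors, the synthetic DOE data and the input uncertainties are invented for the example, not taken from the study.

    ```python
    # Hedged sketch: fit a quadratic response surface with a two-factor
    # interaction to synthetic DOE data, then propagate input uncertainty by
    # Monte Carlo sampling to obtain a dispersed response at one design point.
    import numpy as np

    rng = np.random.default_rng(1)

    def quad_features(X):
        """Columns [1, x1, x2, x1^2, x2^2, x1*x2] for a quadratic model."""
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

    # Synthetic DOE data: two made-up factors (e.g., oxidizer fraction, pressure).
    X_doe = rng.uniform([0.3, 1.0], [0.7, 5.0], size=(30, 2))
    r_doe = (0.8 + 1.5 * X_doe[:, 0] + 0.2 * X_doe[:, 1] - 0.9 * X_doe[:, 0] ** 2
             + 0.1 * X_doe[:, 0] * X_doe[:, 1] + rng.normal(0, 0.02, 30))

    beta, *_ = np.linalg.lstsq(quad_features(X_doe), r_doe, rcond=None)

    # Monte Carlo propagation of input uncertainty at one candidate design point.
    n_mc = 20000
    design = np.array([0.55, 3.0])
    samples = design + rng.normal(0.0, [0.02, 0.15], size=(n_mc, 2))
    r_mc = quad_features(samples) @ beta
    print(f"response: mean={r_mc.mean():.3f}, 95% interval="
          f"({np.percentile(r_mc, 2.5):.3f}, {np.percentile(r_mc, 97.5):.3f})")
    ```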

  15. Evaluation of a reduced centrifugation time and higher centrifugal force on various general chemistry and immunochemistry analytes in plasma and serum.

    PubMed

    Møller, Mette F; Søndergaard, Tove R; Kristensen, Helle T; Münster, Anna-Marie B

    2017-09-01

    Background: Centrifugation of blood samples is an essential preanalytical step in the clinical biochemistry laboratory. Centrifugation settings are often altered to optimize sample flow and turnaround time. Few studies have addressed the effect of altering centrifugation settings on analytical quality, and almost all studies have been done using collection tubes with gel separator. Methods: In this study, we compared a centrifugation time of 5 min at 3000 × g to a standard protocol of 10 min at 2200 × g. Nine selected general chemistry and immunochemistry analytes and interference indices were studied in lithium heparin plasma tubes and serum tubes without gel separator. Results were evaluated using mean bias, difference plots and coefficient of variation, compared with the maximum allowable bias and coefficient of variation used in laboratory routine quality control. Results: For all analytes except lactate dehydrogenase, the results were within the predefined acceptance criteria, indicating that the analytical quality was not compromised. Lactate dehydrogenase showed higher values after centrifugation for 5 min at 3000 × g; the mean bias was 6.3 ± 2.2% and the coefficient of variation was 5%. Conclusions: We found that a centrifugation protocol of 5 min at 3000 × g can be used for the general chemistry and immunochemistry analytes studied, with the possible exception of lactate dehydrogenase, which requires further assessment.
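
    For reference, the comparison statistics named above (mean bias and coefficient of variation) can be computed as in the short sketch below; the paired values are illustrative, not the study's data.

    ```python
    # Hedged sketch of the comparison statistics: percent bias of the test
    # protocol (5 min at 3000 g) against the reference protocol (10 min at
    # 2200 g) and the CV of the test results. Values below are made up.
    import numpy as np

    reference = np.array([182.0, 175.0, 190.0, 201.0, 168.0, 177.0])  # e.g., LDH, U/L
    test      = np.array([195.0, 186.0, 200.0, 214.0, 178.0, 189.0])

    bias_pct = 100.0 * (test - reference) / reference
    cv_pct = 100.0 * np.std(test, ddof=1) / np.mean(test)

    print(f"mean bias = {bias_pct.mean():.1f}% +/- {bias_pct.std(ddof=1):.1f}%")
    print(f"CV of test protocol = {cv_pct:.1f}%")
    ```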

  16. An analytical/numerical correlation study of the multiple concentric cylinder model for the thermoplastic response of metal matrix composites

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.; Williams, Todd O.

    1993-01-01

    The utility of a recently developed analytical micromechanics model for the response of metal matrix composites under thermal loading is illustrated by comparison with the results generated using the finite-element approach. The model is based on the concentric cylinder assemblage consisting of an arbitrary number of elastic or elastoplastic sublayers with isotropic or orthotropic, temperature-dependent properties. The elastoplastic boundary-value problem of an arbitrarily layered concentric cylinder is solved using the local/global stiffness matrix formulation (originally developed for elastic layered media) and Mendelson's iterative technique of successive elastic solutions. These features of the model facilitate efficient investigation of the effects of various microstructural details, such as functionally graded architectures of interfacial layers, on the evolution of residual stresses during cool down. The available closed-form expressions for the field variables can readily be incorporated into an optimization algorithm in order to efficiently identify optimal configurations of graded interfaces for given applications. Comparison of residual stress distributions after cool down generated using finite-element analysis and the present micromechanics model for four composite systems with substantially different temperature-dependent elastic, plastic, and thermal properties illustrates the efficacy of the developed analytical scheme.

  17. A Crowdsensing Based Analytical Framework for Perceptional Degradation of OTT Web Browsing.

    PubMed

    Li, Ke; Wang, Hai; Xu, Xiaolong; Du, Yu; Liu, Yuansheng; Ahmad, M Omair

    2018-05-15

    Service perception analysis is crucial for understanding both user experiences and network quality, as well as for maintaining and optimizing mobile networks. Given the rapid development of the mobile Internet and over-the-top (OTT) services, the conventional network-centric mode of network operation and maintenance is no longer effective. Therefore, developing an approach to evaluate and optimize users' service perceptions has become increasingly important. Meanwhile, the development of a new sensing paradigm, mobile crowdsensing (MCS), makes it possible to evaluate and analyze the user's OTT service perception from the end-user's point of view rather than from the network side. In this paper, the key factors that impact users' end-to-end OTT web browsing service perception are analyzed by monitoring crowdsourced user perceptions. The intrinsic relationships among the key factors and the interactions between key quality indicators (KQI) are evaluated from several perspectives. Moreover, an analytical framework of perceptional degradation and a detailed algorithm are proposed whose goal is to identify the major factors that impact the perceptional degradation of the web browsing service as well as the significance of their contributions. Finally, a case study is presented to show the effectiveness of the proposed method using a dataset crowdsensed from a large number of smartphone users in a real mobile network. The proposed analytical framework forms a valuable solution for mobile network maintenance and optimization and can help improve web browsing service perception and network quality.

  18. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensors/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
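
    The targeting/optimization step described above can be illustrated with a small constrained-optimization analogue. NPSOL is proprietary, so the sketch below uses SciPy's SLSQP instead; the controls, the target and the cost are toy quantities, not IPOST inputs.

    ```python
    # Hedged analogue of a targeting/optimization step: choose two burn
    # magnitudes (controls) to satisfy a total delta-v target (dependent
    # variable) while minimizing a propellant-like cost. All numbers are toy.
    from scipy.optimize import minimize

    def cost(u):
        return u[0] ** 2 + 2.0 * u[1] ** 2          # quadratic propellant-like cost

    def target_constraint(u):
        return u[0] + u[1] - 3.0                     # total delta-v must equal 3.0

    res = minimize(cost, x0=[1.0, 1.0], method="SLSQP",
                   bounds=[(0.0, 5.0), (0.0, 5.0)],
                   constraints=[{"type": "eq", "fun": target_constraint}])
    print("optimal burns:", res.x, "cost:", res.fun)   # expected near (2.0, 1.0)
    ```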

  19. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-01

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed for yielding images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.

  20. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.
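
    The shooting idea behind the second VTOTS algorithm can be illustrated on a simple two-point boundary-value problem, as in the hedged single-shooting sketch below (the VTOTS multiple-shooting implementation itself is not reproduced).

    ```python
    # Hedged single-shooting illustration: find the initial slope s so that
    # the solution of y'' = -y with y(0) = 1 satisfies y(1) = 0. The exact
    # answer is s = -cos(1)/sin(1).
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def terminal_miss(s):
        sol = solve_ivp(lambda t, z: [z[1], -z[0]], (0.0, 1.0), [1.0, s],
                        rtol=1e-9, atol=1e-9)
        return sol.y[0, -1]                 # y(1); the root makes this zero

    s_opt = brentq(terminal_miss, -5.0, 5.0)
    print("required initial slope:", s_opt)
    ```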

  1. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    NASA Astrophysics Data System (ADS)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies the pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which shows near-optimal results obtained with a much faster solving time than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems that incorporate dynamism and uncertainty.

  2. Design optimization of piezoresistive cantilevers for force sensing in air and water

    PubMed Central

    Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.

    2009-01-01

    Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to strongly depend on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material which can be modeled in a similar fashion, or extended to other microelectromechanical systems. PMID:19865512

  3. On-line solid-phase microextraction of triclosan, bisphenol A, chlorophenols, and selected pharmaceuticals in environmental water samples by high-performance liquid chromatography-ultraviolet detection.

    PubMed

    Kim, Dalho; Han, Jungho; Choi, Yongwook

    2013-01-01

    A method using on-line solid-phase microextraction (SPME) on a carbowax-templated fiber followed by liquid chromatography (LC) with ultraviolet (UV) detection was developed for the determination of triclosan in environmental water samples. Along with triclosan, other selected phenolic compounds, bisphenol A, and acidic pharmaceuticals were studied. Previous SPME/LC or stir-bar sorptive extraction/LC-UV methods for polar analytes showed a lack of sensitivity. In this study, the calculated octanol-water distribution coefficient (log D) values of the target analytes at different pH values were used to estimate the polarity of the analytes. The lack of sensitivity observed in earlier studies is attributed to incomplete desorption caused by strong polar-polar interactions between the analyte and the solid phase. Calculated log D values were useful for understanding or predicting the interaction between the analyte and the solid phase. Under the optimized conditions, the method detection limits of the selected analytes using the on-line SPME-LC-UV method ranged from 5 to 33 ng L(-1), except for the very polar 3-chlorophenol and 2,4-dichlorophenol, which were obscured in wastewater samples by an interfering substance. This level of detection represents a remarkable improvement over conventional existing methods. The on-line SPME-LC-UV method, which did not require derivatization of the analytes, was applied to the determination of triclosan, the phenolic compounds, and the acidic pharmaceuticals in tap water, river water, and municipal wastewater samples.

  4. Optimizing the Usability of Mobile Phones for Individuals Who Are Deaf

    ERIC Educational Resources Information Center

    Liu, Chien-Hsiou; Chiu, Hsiao-Ping; Hsieh, Ching-Lin; Li, Rong-Kwer

    2010-01-01

    Mobile phones are employed as an assistive platform to improve the living quality of individuals who are deaf. However, deaf individuals experience difficulties using existing functions on mobile phones. This study identifies the functions that are inadequate and insufficient for deaf individuals using existing mobile phones. Analytical results…

  5. Optimizing the Long-Term Retention of Skills: Structural and Analytic Approaches to Skill Maintenance

    DTIC Science & Technology

    1990-08-01

    evidence for a surprising degree of long-term skill retention. We formulated a theoretical framework, focusing on the importance of procedural reinstatement ... considerable forgetting over even relatively short retention intervals. We have been able to place these studies in the same general theoretical framework developed

  6. An Analytical Framework for Internationalization through English-Taught Degree Programs: A Dutch Case Study

    ERIC Educational Resources Information Center

    Kotake, Masako

    2017-01-01

    The growing importance of internationalization and the global dominance of English in higher education mean pressures on expanding English-taught degree programs (ETDPs) in non-English-speaking countries. Strategic considerations are necessary to successfully integrate ETDPs into existing programs and to optimize the effects of…

  7. Phase Transitions in Combinatorial Optimization Problems: Basics, Algorithms and Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2005-10-01

    A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.

  8. Combining Simulation and Optimization Models for Hardwood Lumber Production

    Treesearch

    G.A. Mendoza; R.J. Meimban; W.G. Luppold; Philip A. Araman

    1991-01-01

    Published literature contains a number of optimization and simulation models dealing with the primary processing of hardwood and softwood logs. Simulation models have been developed primarily as descriptive models for characterizing the general operations and performance of a sawmill. Optimization models, on the other hand, were developed mainly as analytical tools for...

  9. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem-formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing it with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
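
    As a hedged illustration of the kind of sensitivity information discussed above, the sketch below recomputes an LQR gain under a small perturbation of a plant parameter and forms a central-difference estimate of dK/dk. The analytical sensitivity equations of the paper are not reproduced, and the two-state plant is invented for the example.

    ```python
    # Hedged sketch: finite-difference sensitivity of an LQR gain K to a
    # stiffness-like plant parameter k, using a made-up two-state plant.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(k_spring):
        A = np.array([[0.0, 1.0], [-k_spring, -0.4]])
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)
        R = np.array([[1.0]])
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)          # K = R^{-1} B^T P

    k0, dk = 2.0, 1e-4
    dK_dk = (lqr_gain(k0 + dk) - lqr_gain(k0 - dk)) / (2 * dk)
    print("gain K at k0:", lqr_gain(k0))
    print("finite-difference sensitivity dK/dk:", dK_dk)
    ```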

  10. Simplex optimization of headspace factors for headspace gas chromatography determination of residual solvents in pharmaceutical products.

    PubMed

    Grodowska, Katarzyna; Parczewski, Andrzej

    2013-01-01

    The purpose of the present work was to find the optimum conditions for headspace gas chromatography (HS-GC) determination of residual solvents that commonly appear in pharmaceutical products. Two groups of solvents were taken into account in the present examination. Group I consisted of isopropanol, n-propanol, isobutanol, n-butanol and 1,4-dioxane, and group II included cyclohexane, n-hexane and n-heptane. The members of the groups were selected in previous investigations in which experimental design and chemometric methods were applied. Four factors describing the HS conditions were taken into consideration in the optimization: sample volume, equilibration time, equilibrium temperature and NaCl concentration in the sample. The relative GC peak area served as the optimization criterion and was considered separately for each analyte. A sequential variable-size simplex optimization strategy was used, and the progress of the optimization was traced and visualized in several ways simultaneously. The optimum HS conditions turned out to be different for the two groups of solvents tested, which shows that the influence of the experimental conditions (factors) depends on the analyte properties. The optimization resulted in a significant signal increase (seven- to fifteen-fold).
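
    A hedged sketch of simplex optimization applied to headspace factors is shown below. SciPy's Nelder-Mead routine stands in for the sequential variable-size simplex procedure, and the relative peak area response is a made-up smooth surrogate rather than the experimental HS-GC response.

    ```python
    # Hedged sketch: Nelder-Mead simplex search over three headspace factors
    # (temperature, equilibration time, NaCl amount) maximizing a hypothetical
    # smooth "relative peak area" response with an optimum near (80, 15, 0.2).
    import numpy as np
    from scipy.optimize import minimize

    def negative_peak_area(x):
        temp_c, time_min, nacl = x
        area = np.exp(-((temp_c - 80.0) / 15.0) ** 2
                      - ((time_min - 15.0) / 8.0) ** 2
                      - ((nacl - 0.2) / 0.1) ** 2)
        return -area                       # minimize the negative of the response

    res = minimize(negative_peak_area, x0=[60.0, 5.0, 0.05], method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-6})
    print("optimum (temperature, time, NaCl):", res.x)
    ```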

  11. Evaluation of plasma proteomic data for Alzheimer disease state classification and for the prediction of progression from mild cognitive impairment to Alzheimer disease.

    PubMed

    Llano, Daniel A; Devanarayan, Viswanath; Simon, Adam J

    2013-01-01

    Previous studies that have examined the potential for plasma markers to serve as biomarkers for Alzheimer disease (AD) have studied single analytes and focused on the amyloid-β and τ isoforms and have failed to yield conclusive results. In this study, we performed a multivariate analysis of 146 plasma analytes (the Human DiscoveryMAP v 1.0 from Rules-Based Medicine) in 527 subjects with AD, mild cognitive impairment (MCI), or cognitively normal elderly subjects from the Alzheimer's Disease Neuroimaging Initiative database. We identified 4 different proteomic signatures, each using 5 to 14 analytes, that differentiate AD from control patients with sensitivity and specificity ranging from 74% to 85%. Five analytes were common to all 4 signatures: apolipoprotein A-II, apolipoprotein E, serum glutamic oxaloacetic transaminase, α-1-microglobulin, and brain natriuretic peptide. None of the signatures adequately predicted progression from MCI to AD over a 12- and 24-month period. A new panel of analytes, optimized to predict MCI to AD conversion, was able to provide 55% to 60% predictive accuracy. These data suggest that a simple panel of plasma analytes may provide an adjunctive tool to differentiate AD from controls, may provide mechanistic insights to the etiology of AD, but cannot adequately predict MCI to AD conversion.

  12. Systematic Development and Validation of a Thin-Layer Densitometric Bioanalytical Method for Estimation of Mangiferin Employing Analytical Quality by Design (AQbD) Approach

    PubMed Central

    Khurana, Rajneet Kaur; Rao, Satish; Beg, Sarwar; Katare, O.P.; Singh, Bhupinder

    2016-01-01

    The present work aims at the systematic development of a simple, rapid and highly sensitive densitometry-based thin-layer chromatographic method for the quantification of mangiferin in bioanalytical samples. Initially, the quality target method profile was defined and critical analytical attributes (CAAs) earmarked, namely, retardation factor (Rf), peak height, capacity factor, theoretical plates and separation number. Face-centered cubic design was selected for optimization of volume loaded and plate dimensions as the critical method parameters selected from screening studies employing D-optimal and Plackett–Burman design studies, followed by evaluating their effect on the CAAs. The mobile phase containing a mixture of ethyl acetate : acetic acid : formic acid : water in a 7 : 1 : 1 : 1 (v/v/v/v) ratio was finally selected as the optimized solvent for apt chromatographic separation of mangiferin at 262 nm with Rf 0.68 ± 0.02 and all other parameters within the acceptance limits. Method validation studies revealed high linearity in the concentration range of 50–800 ng/band for mangiferin. The developed method showed high accuracy, precision, ruggedness, robustness, specificity, sensitivity, selectivity and recovery. In a nutshell, the bioanalytical method for analysis of mangiferin in plasma revealed the presence of well-resolved peaks and high recovery of mangiferin. PMID:26912808

  13. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    NASA Astrophysics Data System (ADS)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from traditional single-plant operations to multi-site supply chains in which multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach is discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
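
    The AHP selection step can be sketched as follows: criterion weights are taken as the principal eigenvector of a pairwise-comparison matrix and then used to score a few Pareto-optimal solutions. The comparison matrix and the normalized solution scores are illustrative assumptions, not values from the case study.

    ```python
    # Hedged sketch of AHP-based selection among Pareto-optimal solutions.
    import numpy as np

    # Pairwise comparisons among cost, quality and demand-satisfaction criteria
    # (illustrative judgements).
    A = np.array([[1.0, 3.0, 2.0],
                  [1/3, 1.0, 1/2],
                  [1/2, 2.0, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    w = w / w.sum()                                   # AHP priority weights

    # Each row: one Pareto solution, normalized so that higher is better.
    pareto = np.array([[0.9, 0.6, 0.7],
                       [0.7, 0.9, 0.6],
                       [0.8, 0.7, 0.9]])
    scores = pareto @ w
    print("weights:", np.round(w, 3), "-> best solution index:", int(np.argmax(scores)))
    ```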

  14. Optimality of the barrier strategy in de Finetti's dividend problem for spectrally negative Lévy processes: An alternative approach

    NASA Astrophysics Data System (ADS)

    Yin, Chuancun; Wang, Chunwei

    2009-11-01

    The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends which are paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] which established a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of barrier dividend strategies considered in the three recent papers.

  15. On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Gonzalo, J.; Domínguez, D.; López, D.

    2014-12-01

    From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of their flights. The revenue is proportional to the cost per flight and the airspace occupancy. Many methods, the first of which date back to the middle of the last century, have explored analytical, numerical and artificial-intelligence resources to reach the optimal flight planning. In parallel, advances in meteorology and communications allow an almost real-time knowledge of the atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that would allow this dynamic planning negotiation to become operational soon. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those trying to solve the problem using the Pontryagin principle, where influence parameters are adjoined to the state variables to form a split boundary-condition differential equation problem. The system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). On the other hand, numerical methods are based on Bellman's dynamic programming (or Dijkstra-type algorithms), which exploit the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions favouring one method: traditionally, analytic approaches have been employed more for continuous problems, whereas numeric approaches have been used for discrete ones. In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given on a discrete grid at certain time intervals. The research demonstrates the advantages and disadvantages of each method as well as performance figures of the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.

  16. Rational selection of substrates to improve color intensity and uniformity on microfluidic paper-based analytical devices.

    PubMed

    Evans, Elizabeth; Gabriel, Ellen Flávia Moreira; Coltro, Wendell Karlos Tomazelli; Garcia, Carlos D

    2014-05-07

    A systematic investigation was conducted to study the effect of paper type on the analytical performance of a series of microfluidic paper-based analytical devices (μPADs) fabricated using a CO2 laser engraver. Samples included three different grades of Whatman chromatography paper, and three grades of Whatman filter paper. According to the data collected and the characterization performed, different papers offer a wide range of flow rate, thickness, and pore size. After optimizing the channel widths on the μPAD, the focus of this study was directed towards the color intensity and color uniformity formed during a colorimetric enzymatic reaction. According to the results herein described, the type of paper and the volume of reagents dispensed in each detection zone can determine the color intensity and uniformity. Therefore, the objective of this communication is to provide rational guidelines for the selection of paper substrates for the fabrication of μPADs.

  17. An analytical procedure for the determination of aluminum used in antiperspirants on human skin in Franz™ diffusion cell.

    PubMed

    Guillard, Olivier; Fauconneau, Bernard; Favreau, Frédéric; Marrauld, Annie; Pineau, Alain

    2012-04-01

    A local case report of hyperaluminemia (aluminum concentration: 3.88 µmol/L) in a woman who had used an aluminum-containing antiperspirant for 4 years raises the question of possible transdermal uptake of aluminum salts as a future public health problem. Prior to studying the transdermal uptake of three commercialized cosmetic formulas, an analytical assay of aluminum (Al) in chlorohydrate form (ACH) by Zeeman electrothermal atomic absorption spectrophotometry (ZEAAS) in a clean room was optimized and validated. This analysis was performed with different media on human skin using a Franz™ diffusion cell. The detection and quantification limits were set at ≤ 3 µg/L. Precision analyses, within-run (n = 12) and between-run (n = 15-68 days), yielded CVs ≤ 6%. The high analytical sensitivity (2-3 µg/L) and low variability should allow an in vitro study of the transdermal uptake of ACH.

  18. A Widely Applicable Silver Sol for TLC Detection with Rich and Stable SERS Features.

    PubMed

    Zhu, Qingxia; Li, Hao; Lu, Feng; Chai, Yifeng; Yuan, Yongfang

    2016-12-01

    Thin-layer chromatography (TLC) coupled with surface-enhanced Raman spectroscopy (SERS) has gained tremendous popularity in the study of various complex systems. However, the detection of hydrophobic analytes is difficult, and the specificity still needs to be improved. In this study, a SERS-active non-aqueous silver sol which could activate the analytes to produce rich and stable spectral features was rapidly synthesized. Then, the optimized silver nanoparticles (AgNPs)-DMF sol was employed for TLC-SERS detection of hydrophobic (and also hydrophilic) analytes. SERS performance of this sol was superior to that of traditional Lee-Meisel AgNPs due to its high specificity, acceptable stability, and wide applicability. The non-aqueous AgNPs would be suitable for the TLC-SERS method, which shows great promise for applications in food safety assurance, environmental monitoring, medical diagnoses, and many other fields.

  19. A Widely Applicable Silver Sol for TLC Detection with Rich and Stable SERS Features

    NASA Astrophysics Data System (ADS)

    Zhu, Qingxia; Li, Hao; Lu, Feng; Chai, Yifeng; Yuan, Yongfang

    2016-04-01

    Thin-layer chromatography (TLC) coupled with surface-enhanced Raman spectroscopy (SERS) has gained tremendous popularity in the study of various complex systems. However, the detection of hydrophobic analytes is difficult, and the specificity still needs to be improved. In this study, a SERS-active non-aqueous silver sol which could activate the analytes to produce rich and stable spectral features was rapidly synthesized. Then, the optimized silver nanoparticles (AgNPs)-DMF sol was employed for TLC-SERS detection of hydrophobic (and also hydrophilic) analytes. SERS performance of this sol was superior to that of traditional Lee-Meisel AgNPs due to its high specificity, acceptable stability, and wide applicability. The non-aqueous AgNPs would be suitable for the TLC-SERS method, which shows great promise for applications in food safety assurance, environmental monitoring, medical diagnoses, and many other fields.

  20. Dynamic characteristics of stay cables with inerter dampers

    NASA Astrophysics Data System (ADS)

    Shi, Xiang; Zhu, Songye

    2018-06-01

    This study systematically investigates the dynamic characteristics of a stay cable with an inerter damper installed close to one end of the cable. The interest in applying inerter dampers to stay cables is partially inspired by the superior damping performance of negative stiffness dampers in the same application. A comprehensive parametric study of the two major parameters, namely the inertance and damping coefficients, is conducted using analytical and numerical approaches. An inerter damper can be optimized for one vibration mode of a stay cable by generating identical wave numbers in two adjacent modes. An optimal design approach is proposed for inerter dampers installed on stay cables, and the corresponding optimal inertance and damping coefficients are summarized for different damper locations and modes of interest. Inerter dampers can offer better damping performance than conventional viscous dampers for the target mode of a stay cable that requires optimization. However, the additional damping ratios provided by an inerter damper in other vibration modes are relatively limited.

  1. Performance Optimization of Irreversible Air Heat Pumps Considering Size Effect

    NASA Astrophysics Data System (ADS)

    Bi, Yuehong; Chen, Lingen; Ding, Zemin; Sun, Fengrui

    2018-06-01

    Considering the size of an irreversible air heat pump (AHP), the heating load density (HLD) is taken as the thermodynamic optimization objective by using finite-time thermodynamics. Based on an irreversible AHP model with infinite reservoir thermal-capacitance rate, the expression for the HLD of the AHP is put forward. The HLD optimization is studied analytically and numerically and consists of two aspects: (1) choosing the pressure ratio, and (2) distributing the heat-exchanger inventory. Heat reservoir temperatures, the heat transfer performance of the heat exchangers, and the irreversibility during the compression and expansion processes are important factors influencing the performance of an irreversible AHP; they are characterized by the temperature ratio, the heat exchanger inventory, and the isentropic efficiencies, respectively. The impacts of these parameters on the maximum HLD are thoroughly studied. The results show that HLD optimization can make the AHP system smaller and improve the compactness of the system.

  2. Estimated Benefits of Variable-Geometry Wing Camber Control for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Bolonkin, Alexander; Gilyard, Glenn B.

    1999-01-01

    Analytical benefits of variable-camber capability on subsonic transport aircraft are explored. Using aerodynamic performance models, including drag as a function of deflection angle for the control surfaces of interest, optimal performance benefits of variable camber are calculated. Results demonstrate that if all wing trailing-edge surfaces are available for optimization, drag can be significantly reduced at most points within the flight envelope. The optimization approach developed and illustrated for flight uses variable camber to optimize aerodynamic efficiency (maximizing the lift-to-drag ratio). Most transport aircraft have significant latent capability in this area. Wing camber control that can affect performance optimization for transport aircraft includes symmetric use of ailerons and flaps. In this paper, drag characteristics for aileron and flap deflections are computed based on analytical and wind-tunnel data. All calculations are based on predictions for the subject aircraft, and the optimal surface deflection is obtained by simple interpolation for given conditions. An algorithm is also presented for computation of the optimal surface deflection for given conditions. Benefits of variable camber for a transport configuration using a simple trailing-edge control surface system can exceed 10 percent, especially for nonstandard flight conditions. In the cruise regime, the benefit is 1-3 percent.

  3. Retention prediction and separation optimization under multilinear gradient elution in liquid chromatography with Microsoft Excel macros.

    PubMed

    Fasoula, S; Zisi, Ch; Gika, H; Pappa-Louisi, A; Nikitas, P

    2015-05-22

    A package of Excel VBA macros has been developed for modeling multilinear gradient retention data obtained in single or double gradient elution mode by changing the organic modifier(s) content and/or the eluent pH. For this purpose, ten chromatographic models were used and four methods were adopted for their application. The methods were based on (a) the analytical expression of the retention time, provided that this expression is available, (b) the retention times estimated using the Nikitas-Pappa approach, (c) the stepwise approximation, and (d) a simple numerical approximation involving the trapezoid rule for integration of the fundamental equation for gradient elution. For all these methods, Excel VBA macros have been written and implemented using two different platforms: the fitting platform and the optimization platform. The fitting platform calculates not only the adjustable parameters of the chromatographic models, but also the significance of these parameters, and furthermore predicts the analyte elution times. The optimization platform determines the gradient conditions that lead to the optimum separation of a mixture of analytes by using the Solver evolutionary mode, provided that proper constraints are set in order to obtain the optimum gradient profile in the minimum gradient time. The performance of the two platforms was tested using experimental and artificial data. It was found that, using the proposed spreadsheets, fitting, prediction, and optimization can be performed easily and effectively under all conditions. Overall, the best performance is exhibited by the analytical and Nikitas-Pappa methods, although the former cannot be used under all circumstances. Copyright © 2015 Elsevier B.V. All rights reserved.
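
    Method (d), the trapezoid-rule integration of the fundamental equation for gradient elution, can be sketched numerically as below. The log-linear retention model, the dead time and the gradient profile are illustrative assumptions, not the paper's data, and Python is used here for brevity instead of the Excel VBA of the original macros.

    ```python
    # Hedged sketch: march the integral of dt / (t0 * k(phi(t))) with the
    # trapezoid rule until it reaches 1; the elution time is then t0 + t.
    import numpy as np

    t0 = 1.5                                   # column dead time, min (assumed)
    k_w, S = 200.0, 8.0                        # ln k = ln(k_w) - S * phi (assumed)

    def phi(t):                                # linear gradient, 5% -> 95% in 20 min
        return 0.05 + (0.95 - 0.05) * np.clip(t / 20.0, 0.0, 1.0)

    def k(t):
        return k_w * np.exp(-S * phi(t))

    dt, integral, t = 0.001, 0.0, 0.0
    while integral < 1.0:
        integral += 0.5 * dt * (1.0 / (t0 * k(t)) + 1.0 / (t0 * k(t + dt)))
        t += dt
    print(f"predicted retention time: {t0 + t:.2f} min")
    ```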

  4. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.

  5. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers.

    PubMed

    Tisdale, Evgenia; Kennedy, Devin; Xu, Xiaodong; Wilkins, Charles

    2014-01-15

    The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of small amount of CuBr2) than is the pentafluorostyrene component distribution. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.

  7. Towards an Analytic Foundation for Network Architecture

    DTIC Science & Technology

    2010-12-31

    In this project, we develop the analytic tools of stochastic optimization for wireless network design and apply them ... and Mung Chiang, "DaVinci: Dynamically Adaptive Virtual Networks for a Customized Internet," in Proc. ACM SIGCOMM CoNEXT Conference, December 2008

  8. Optimization of turning process through the analytic flank wear modelling

    NASA Astrophysics Data System (ADS)

    Del Prete, A.; Franchi, R.; De Lorenzis, D.

    2018-05-01

    In the present work, the approach used for the optimization of the process capabilities for Oil&Gas component machining is described. These components are machined by turning of stainless steel cast workpieces. For this purpose, a proper Design Of Experiments (DOE) plan has been designed and executed; as output of the experimentation, data about tool wear have been collected. The DOE has been designed starting from the cutting speed and feed values recommended by the tool manufacturer; the depth of cut has been kept constant. Wear data have been obtained by observing the tool flank wear under an optical microscope, with data acquisition carried out at regular intervals of working time. Through statistical data and regression analysis, analytical models of the flank wear and the tool life have been obtained. The optimization approach used is a multi-objective optimization that minimizes the production time and the number of cutting tools used, subject to a constraint on a defined flank wear level. The technique used to solve the optimization problem is Multi-Objective Particle Swarm Optimization (MOPS). The optimization results, validated by the execution of a further experimental campaign, highlight the reliability of the work and confirm the usability of the optimized process parameters and the potential benefit for the company.

  9. An assessment of separable fluid connector system parameters to perform a connector system design optimization study

    NASA Technical Reports Server (NTRS)

    Prasthofer, W. P.

    1974-01-01

    The key to optimization of a design involving a large number of variables, not all of which may be known precisely, lies in the mathematical tool of dynamic programming developed by Bellman. This methodology can lead to optimized solutions for the design of critical systems in a minimum amount of time, even when a great number of acceptable configurations must be considered. To demonstrate the usefulness of dynamic programming, an analytical method is developed for evaluating the relationships among numerous existing connector designs to find the optimum configuration. The data utilized in the study were generated from 900 flanges designed for six subsystems of the S-1B stage of the Saturn 1B space carrier vehicle.

  10. SSME single-crystal turbine blade dynamics

    NASA Technical Reports Server (NTRS)

    Moss, Larry A.

    1988-01-01

    A study was performed to determine the dynamic characteristics of the Space Shuttle Main Engine high pressure fuel turbopump (HPFTP) blades made of single crystal (SC) material. The first and second stage drive turbine blades of the HPFTP were examined. The nonrotating natural frequencies were determined experimentally and analytically. The experimental results for the SC second stage blade were used to verify the analytical procedures. The study examined the SC first stage blade natural frequencies with respect to crystal orientation at typical operating conditions. The SC blade dynamic response was predicted to be less than that of the directionally solidified baseline. Crystal axis orientation optimization indicated that third-mode interference will exist in any SC orientation.

  11. Gear and Transmission Research at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Townsend, Dennis P.

    1997-01-01

    This paper is a review of some of the research work of the NASA Lewis Research Center Mechanical Components Branch. It includes a brief review of the NASA Lewis Research Center and the Mechanical Components Branch. The research topics discussed are crack propagation of gear teeth, gear noise of spiral bevel and other gears, design optimization methods, methods we have investigated for transmission diagnostics, the analytical and experimental study of gear thermal conditions, the analytical and experimental study of split torque systems, the evaluation of several new advanced gear steels and transmission lubricants and the evaluation of various aircraft transmissions. The area of research needs for gearing and transmissions is also discussed.

  12. Boom Minimization Framework for Supersonic Aircraft Using CFD Analysis

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Rallabhandi, Sriram K.

    2010-01-01

    A new framework is presented for shape optimization using analytical shape functions and high-fidelity computational fluid dynamics (CFD) via Cart3D. The focus of the paper is the system-level integration of several key enabling analysis tools and automation methods to perform shape optimization and reduce sonic boom footprint. A boom mitigation case study subject to performance, stability and geometrical requirements is presented to demonstrate a subset of the capabilities of the framework. Lastly, a design space exploration is carried out to assess the key parameters and constraints driving the design.

  13. Research on Collection System Optimal Design of Wind Farm with Obstacles

    NASA Astrophysics Data System (ADS)

    Huang, W.; Yan, B. Y.; Tan, R. S.; Liu, L. F.

    2017-05-01

    In the optimal design of an offshore wind farm collection system, the factors to consider include not only the reasonable configuration of cables and switches but also the influence of obstacles on the topology design of the offshore wind farm. This paper presents a concrete topology optimization algorithm that accounts for obstacles. The minimal-area rectangular bounding box of each obstacle is obtained using the minimal-area bounding box method. Then an optimization algorithm combining the advantages of the Dijkstra and Prim algorithms is used to obtain an obstacle-avoiding path-planning scheme. Finally, a fuzzy comprehensive evaluation model based on the analytic hierarchy process is constructed to compare the performance of the different topologies. Case studies demonstrate the feasibility of the proposed algorithm and model.
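
    A hedged sketch of the minimum-length cable layout step is given below, using Prim's algorithm on a small set of invented turbine coordinates; the obstacle bounding boxes and the Dijkstra-based re-routing of the paper are not included.

    ```python
    # Hedged sketch: Prim's algorithm builds a minimum-length cable tree over
    # turbine positions (toy coordinates); node 0 plays the role of the substation.
    import numpy as np

    pos = np.array([[0, 0], [2, 1], [3, 4], [5, 2], [6, 5], [1, 5]], dtype=float)
    n = len(pos)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)

    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Cheapest edge connecting the tree to a node outside it.
        best = min(((dist[i, j], i, j) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda e: e[0])
        edges.append((best[1], best[2]))
        in_tree.add(best[2])

    print("cable runs (from, to):", edges)
    print("total cable length:", sum(dist[i, j] for i, j in edges))
    ```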

  14. Quantum teleportation scheme by selecting one of multiple output ports

    NASA Astrophysics Data System (ADS)

    Ishizaka, Satoshi; Hiroshima, Tohya

    2009-04-01

    The scheme of quantum teleportation, in which Bob has multiple (N) output ports and obtains the teleported state by simply selecting one of the N ports, is thoroughly studied. We consider both the deterministic version and the probabilistic version of the teleportation scheme, aiming to teleport an unknown state of a qubit. Moreover, we consider two cases for each version: (i) the state employed for the teleportation is fixed to a maximally entangled state, and (ii) the state is optimized as well as Alice's measurement. We analytically determine the optimal protocols for all four cases and show the corresponding optimal fidelity or optimal success probability. All these protocols can achieve perfect teleportation in the asymptotic limit N → ∞. The entanglement properties of the teleportation scheme are also discussed.

  15. Optimizing the Compressive Strength of Strain-Hardenable Stretch-Formed Microtruss Architectures

    NASA Astrophysics Data System (ADS)

    Yu, Bosco; Abu Samk, Khaled; Hibbard, Glenn D.

    2015-05-01

    The mechanical performance of stretch-formed microtrusses is determined by both the internal strut architecture and the accumulated plastic strain during fabrication. The current study addresses the question of optimization, by taking into consideration the interdependency between fabrication path, material properties and architecture. Low carbon steel (AISI1006) and aluminum (AA3003) material systems were investigated experimentally, with good agreement between measured values and the analytical model. The compressive performance of the microtrusses was then optimized on a minimum weight basis under design constraints such as fixed starting sheet thickness and final microtruss height by satisfying the Karush-Kuhn-Tucker condition. The optimization results were summarized as carpet plots in order to meaningfully visualize the interdependency between architecture, microstructural state, and mechanical performance, enabling material and processing path selection.

  16. QbD-oriented development and validation of a bioanalytical method for nevirapine with enhanced liquid-liquid extraction and chromatographic separation.

    PubMed

    Beg, Sarwar; Chaudhary, Vandna; Sharma, Gajanand; Garg, Babita; Panda, Sagar Suman; Singh, Bhupinder

    2016-06-01

    The present studies describe the systematic quality by design (QbD)-oriented development and validation of a simple, rapid, sensitive and cost-effective reversed-phase HPLC bioanalytical method for nevirapine in rat plasma. Chromatographic separation was carried out on a C18 column using isocratic 68:9:23% v/v elution of methanol, acetonitrile and water (pH 3, adjusted by orthophosphoric acid) at a flow rate of 1.0 mL/min using UV detection at 230 nm. A Box-Behnken design was applied for chromatographic method optimization taking mobile phase ratio, pH and flow rate as the critical method parameters (CMPs) from screening studies. Peak area, retention time, theoretical plates and peak tailing were measured as the critical analytical attributes (CAAs). Further, the bioanalytical liquid-liquid extraction process was optimized using an optimal design by selecting extraction time, centrifugation speed and temperature as the CMPs for percentage recovery of nevirapine as the CAA. The search for an optimum chromatographic solution was conducted through numerical desirability function. Validation studies performed as per the US Food and Drug Administration requirements revealed results within the acceptance limit. In a nutshell, the studies successfully demonstrate the utility of analytical QbD approach for the rational development of a bioanalytical method with enhanced chromatographic separation and recovery of nevirapine in rat plasma. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

    Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis, and the constrained optimization code CONMIN.

  18. Measurement of nicotine in household dust

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Sungroul; Department of Environmental Health Sciences, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD 21205; Aung, Ther

    An analytical method for measuring nicotine in house dust was optimized, and associations among three secondhand smoke (SHS) exposure markers were evaluated, i.e., nicotine concentrations of both house dust and indoor air, and the self-reported number of cigarettes smoked daily in a household. We obtained seven house dust samples from self-reported nonsmoking homes and 30 samples from smoking homes along with the information on indoor air nicotine concentrations and the number of cigarettes smoked daily from an asthma cohort study conducted by the Johns Hopkins Center for Childhood Asthma in the Urban Environment. House dust nicotine was analyzed by isotope dilution gas chromatography-mass spectrometry (GC/MS). Using our optimized method, the median concentration of nicotine in the dust of self-reported nonsmoking homes was 11.7 ng/mg while that of smoking homes was 43.4 ng/mg. We found a strong positive association (r=0.67, P<0.0001) between house dust nicotine concentrations and the number of cigarettes smoked daily. The optimized analytical method proved feasible for detecting nicotine in house dust. Our results indicated that the measurement of nicotine in house dust can potentially be used as a marker of longer term SHS exposure.

  19. Optimization of the Water Volume in the Buckets of Pico Hydro Overshot Waterwheel by Analytical Method

    NASA Astrophysics Data System (ADS)

    Budiarso; Adanta, Dendy; Warjito; Siswantara, A. I.; Saputra, Pradhana; Dianofitra, Reza

    2018-03-01

    Rapid economic and population growth in Indonesia leads to increased energy consumption, including electricity needs. Pico hydro is considered the right solution because investment and operational costs are fairly low. Additionally, Indonesia has many remote areas with high hydro-energy potential. The overshot waterwheel is one technology that is suitable for remote areas due to its ease of operation and maintenance. This study attempts to optimize the bucket dimensions for the available conditions. The optimization also has a positive impact on the amount of generated power because all available energy is utilized maximally. An analytical method is used to evaluate the volume of water contained in the buckets of an overshot waterwheel. In general, two stages are performed. First, the volume of water contained in each active bucket is calculated; if the total volume of contained water is less than the available discharge to the active buckets, the width of the wheel is recalculated. Second, the torque of each active bucket is calculated to determine the power output. As a result, the mechanical power generated by the waterwheel is 305 W with an efficiency of 28%.
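
    A minimal sketch of the second stage described above: summing the torque contributed by the water-filled buckets on the descending side of the wheel and converting it to mechanical power and efficiency. The wheel geometry, bucket fill fractions, rotational speed, discharge and head are assumed values chosen only to make the sketch runnable, not the paper's data.

    ```python
    import numpy as np

    RHO, G = 1000.0, 9.81               # water density [kg/m^3], gravity [m/s^2]
    R_WHEEL = 0.60                      # assumed radius to the bucket centroid [m]
    V_BUCKET = 0.004                    # assumed bucket volume [m^3]
    OMEGA = 2.0 * np.pi * 10.0 / 60.0   # assumed wheel speed: 10 rpm -> rad/s
    N_ACTIVE = 8                        # assumed number of active (water-carrying) buckets

    # Angular positions of the active buckets measured from the top of the wheel,
    # and an assumed fill fraction that decreases as the buckets travel down and spill.
    angles = np.linspace(np.radians(15.0), np.radians(150.0), N_ACTIVE)
    fill = np.linspace(0.9, 0.3, N_ACTIVE)

    # Stage 2: torque of each bucket = weight of contained water x horizontal lever arm.
    weights = RHO * G * V_BUCKET * fill              # [N]
    lever_arms = R_WHEEL * np.sin(angles)            # [m]
    torque = np.sum(weights * lever_arms)            # [N m]
    p_mech = torque * OMEGA                          # mechanical power [W]

    # Hydraulic power available from the assumed discharge and head, for the efficiency.
    Q_FLOW, HEAD = 0.02, 1.3                         # assumed discharge [m^3/s], head [m]
    p_hyd = RHO * G * Q_FLOW * HEAD
    print(f"torque = {torque:.1f} N m, mechanical power = {p_mech:.0f} W, "
          f"efficiency = {p_mech / p_hyd:.1%}")
    ```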

  20. Updating the Finite Element Model of the Aerostructures Test Wing Using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Lung, Shun-Fat; Pak, Chan-Gi

    2009-01-01

    Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing a multidisciplinary design, analysis, and optimization (MDAO) tool to optimize the objective function subject to constraints, the mass properties, the natural frequencies, and the mode shapes can be matched to the target data while retaining mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the aerostructures test wing (ATW), which was designed and tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.
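
    A minimal sketch of the kind of objective such model tuning minimizes: a weighted combination of natural-frequency errors and mode-shape mismatch (via the modal assurance criterion), here for a toy two-degree-of-freedom system with a single stiffness scale factor standing in for the FE model parameters; the target data are invented, not the ATW ground vibration test results.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from scipy.optimize import minimize_scalar

    # Toy 2-DOF spring-mass system; a stiffness scale factor "alpha" stands in for
    # the FE model's uncertain properties being tuned.
    M = np.diag([1.0, 1.5])                          # mass matrix [kg]
    def K(alpha):
        k1, k2 = 1000.0 * alpha, 800.0 * alpha       # spring stiffnesses [N/m]
        return np.array([[k1 + k2, -k2], [-k2, k2]])

    def modes(alpha):
        w2, phi = eigh(K(alpha), M)                  # generalized eigenproblem
        return np.sqrt(w2) / (2.0 * np.pi), phi      # natural frequencies [Hz], mode shapes

    def mac(a, b):
        """Modal assurance criterion between two mode-shape vectors (1 = identical)."""
        return (a @ b) ** 2 / ((a @ a) * (b @ b))

    # "Measured" targets from a ground vibration test -- invented numbers.
    f_target = np.array([3.1, 8.9])
    phi_target = np.array([[0.46, 0.94], [0.89, -0.33]])

    def objective(alpha):
        f, phi = modes(alpha)
        freq_err = np.sum(((f - f_target) / f_target) ** 2)
        mac_err = sum(1.0 - mac(phi[:, i], phi_target[:, i]) for i in range(2))
        return freq_err + 0.5 * mac_err              # weighted combination

    res = minimize_scalar(objective, bounds=(0.5, 2.0), method="bounded")
    print("updated stiffness factor:", round(res.x, 3))
    print("updated frequencies [Hz]:", np.round(modes(res.x)[0], 2), "targets:", f_target)
    ```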

  1. Updating the Finite Element Model of the Aerostructures Test Wing using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2009-01-01

    Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing a multidisciplinary design, analysis, and optimization (MDAO) tool to optimize the objective function subject to constraints, the mass properties, the natural frequencies, and the mode shapes can be matched to the target data while retaining mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the Aerostructures Test Wing (ATW), which was designed and tested at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center (DFRC) (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.

  2. The cost of model reference adaptive control - Analysis, experiments, and optimization

    NASA Technical Reports Server (NTRS)

    Messer, R. S.; Haftka, R. T.; Cudney, H. H.

    1993-01-01

    In this paper the performance of Model Reference Adaptive Control (MRAC) is studied in numerical simulations and verified experimentally with the objective of understanding how differences between the plant and the reference model affect the control effort. MRAC is applied analytically and experimentally to a single-degree-of-freedom system and analytically to a MIMO system with controlled differences between the model and the plant. It is shown that the control effort is sensitive to differences between the plant and the reference model. The effects of increased damping in the reference model are considered, and it is shown that requiring the controller to provide increased damping actually decreases the required control effort when differences between the plant and reference model exist. This result is useful because one of the first attempts to counteract the increased control effort due to differences between the plant and reference model might be to require less damping; however, this would actually increase the control effort. Optimization of weighting matrices is shown to help reduce the increase in required control effort. However, it was found that eventually the optimization resulted in a design that required an extremely high sampling rate for successful realization.

  3. Extraction optimization and identification of anthocyanins from Nitraria tangutorun Bobr. seed meal and establishment of a green analytical method of anthocyanins.

    PubMed

    Sang, Jun; Sang, Jie; Ma, Qun; Hou, Xiao-Fang; Li, Cui-Qin

    2017-03-01

    This study aimed to extract and identify anthocyanins from Nitraria tangutorun Bobr. seed meal and establish a green analytical method of anthocyanins. Ultrasound-assisted extraction of anthocyanins from N. tangutorun seed meal was optimized using response surface methodology. Extraction at 70°C for 32.73 min using 51.15% ethanol rendered an extract with 65.04 mg/100 g of anthocyanins and 947.39 mg/100 g of polyphenols. An in vitro antioxidant assay showed that the extract exhibited a potent DPPH radical-scavenging capacity. Eight anthocyanins in N. tangutorun seed meal were identified by HPLC-MS, and the main anthocyanin was cyanidin-3-O-(trans-p-coumaroyl)-diglucoside (18.17 mg/100 g). A green HPLC-DAD method was developed to analyse anthocyanins. A mixture of ethanol and a 5% (v/v) aqueous formic acid solution at a 20:80 (v/v) ratio was used as the optimized mobile phase. The method was accurate, stable and reliable and could be used to investigate anthocyanins from N. tangutorun seed meal. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Hybrid computational and experimental approach for the study and optimization of mechanical components

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1998-05-01

    Increased demands on the performance and efficiency of mechanical components impose challenges on their engineering design and optimization, especially when new and more demanding applications must be developed in relatively short periods of time while satisfying design objectives, as well as cost and manufacturability. In addition, reliability and durability must be taken into consideration. As a consequence, effective quantitative methodologies, computational and experimental, should be applied in the study and optimization of mechanical components. Computational investigations enable parametric studies and the determination of critical engineering design conditions, while experimental investigations, especially those using optical techniques, provide qualitative and quantitative information on the actual response of the structure of interest to the applied load and boundary conditions. We discuss a hybrid experimental and computational approach for investigation and optimization of mechanical components. The approach is based on analytical, computational, and experimental resolution methodologies, in the form of computational models, noninvasive optical techniques, and fringe prediction analysis tools. Practical application of the hybrid approach is illustrated with representative examples that demonstrate the viability of the approach as an effective engineering tool for analysis and optimization.

  5. Development of a validated liquid chromatographic method for quantification of sorafenib tosylate in the presence of stress-induced degradation products and in biological matrix employing analytical quality by design approach.

    PubMed

    Sharma, Teenu; Khurana, Rajneet Kaur; Jain, Atul; Katare, O P; Singh, Bhupinder

    2018-05-01

    The current research work envisages an analytical quality by design-enabled development of a simple, rapid, sensitive, specific, robust and cost-effective stability-indicating reversed-phase high-performance liquid chromatographic method for determining stress-induced forced-degradation products of sorafenib tosylate (SFN). An Ishikawa fishbone diagram was constructed to delineate the analytical target profile and the critical analytical attributes, i.e. peak area, theoretical plates, retention time and peak tailing. Factor screening using Taguchi orthogonal arrays and quality risk assessment studies carried out using failure mode effect analysis aided the selection of critical method parameters, i.e. mobile phase ratio and flow rate, potentially affecting the chosen critical analytical attributes. Systematic optimization of the chosen critical method parameters using response surface methodology was carried out employing a two-factor, three-level, 13-run face-centered cubic design. A method operable design region providing optimum method performance was earmarked using numerical and graphical optimization. The optimum method employed a mobile phase composition consisting of acetonitrile and water (containing orthophosphoric acid, pH 4.1) at 65:35 v/v at a flow rate of 0.8 mL/min with UV detection at 265 nm using a C18 column. Response surface methodology validation studies confirmed good efficiency and sensitivity of the developed method for analysis of SFN in mobile phase as well as in human plasma matrix. The forced degradation studies were conducted under different recommended stress conditions as per ICH Q1A (R2). Mass spectrometry studies showed that SFN degrades under strongly acidic, alkaline and oxidative hydrolytic conditions at elevated temperature, while the drug per se was found to be photostable. Oxidative hydrolysis using 30% H2O2 showed maximum degradation, with products at retention times of 3.35, 3.65, 4.20 and 5.67 min. The absence of any significant change in the retention time of SFN and degradation products, formed under different stress conditions, ratified the selectivity and specificity of the systematically developed method. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Using design of experiments to optimize derivatization with methyl chloroformate for quantitative analysis of the aqueous phase from hydrothermal liquefaction of biomass.

    PubMed

    Madsen, René Bjerregaard; Jensen, Mads Mørk; Mørup, Anders Juul; Houlberg, Kasper; Christensen, Per Sigaard; Klemmer, Maika; Becker, Jacob; Iversen, Bo Brummerstedt; Glasius, Marianne

    2016-03-01

    Hydrothermal liquefaction is a promising technique for the production of bio-oil. The process produces an oil phase, a gas phase, a solid residue, and an aqueous phase. Gas chromatography coupled with mass spectrometry is used to analyze the complex aqueous phase. Small organic acids and nitrogen-containing compounds are of particular interest. The efficient derivatization reagent methyl chloroformate was used to make possible the analysis of the complex aqueous phase from hydrothermal liquefaction of dried distillers grains with solubles. A circumscribed central composite design was used to optimize the responses of both derivatized and nonderivatized analytes, which included small organic acids, pyrazines, phenol, and cyclic ketones. Response surface methodology was used to visualize significant factors and identify optimized derivatization conditions (volumes of methyl chloroformate, NaOH solution, methanol, and pyridine). Twenty-nine analytes of small organic acids, pyrazines, phenol, and cyclic ketones were quantified. An additional three analytes were pseudoquantified with use of standards with similar mass spectra. Calibration curves with high correlation coefficients were obtained, in most cases R² > 0.991. Method validation covered repeatability, and spike recoveries were obtained for all 29 analytes. The 32 analytes were quantified in samples from the commissioning of a continuous flow reactor and in samples from recirculation experiments involving the aqueous phase. The results indicated when the steady-state condition of the flow reactor was reached, as well as the effects of recirculation. The validated method will be especially useful for investigations of the effect of small organic acids on the hydrothermal liquefaction process.

  7. Analytical Model and Optimized Design of Power Transmitting Coil for Inductively Coupled Endoscope Robot.

    PubMed

    Ke, Quan; Luo, Weijie; Yan, Guozheng; Yang, Kai

    2016-04-01

    A wireless power transfer system based on weakly inductive coupling makes it possible to provide the endoscope microrobot (EMR) with infinite power. To facilitate the patients' inspection with the EMR system, the diameter of the transmitting coil is enlarged to 69 cm. Due to the large transmitting range, a high quality factor of the Litz-wire transmitting coil is a necessity to ensure that the magnetic field is generated efficiently. Thus, this paper builds an analytical model of the transmitting coil and then optimizes the parameters of the coil by enlarging the quality factor. The lumped model of the transmitting coil includes three parameters: ac resistance, self-inductance, and stray capacitance. Based on the exact two-dimensional solution, the accurate analytical expression of the ac resistance is derived. Several transmitting coils of different specifications are utilized to verify this analytical expression, showing good agreement with the measured results except for coils with a large number of strands. The quality factor of transmitting coils can then be well predicted with the available analytical expressions of self-inductance and stray capacitance. Owing to the exact estimation of the quality factor, the appropriate number of turns of the transmitting coil is set to 18-40 within the restrictions of the transmitting circuit and human tissue issues. To supply enough energy for the next generation of the EMR equipped with a Ø9.5×10.1 mm receiving coil, the number of turns of the transmitting coil is optimally set to 28, which can transfer a maximum power of 750 mW with a delivery efficiency of 3.55%.
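
    A minimal sketch of the final optimization step: sweep the number of turns and pick the value that maximizes Q = ωL/R_ac. The inductance and loss models below are crude textbook-style placeholders (the proximity-loss coefficient in particular is invented), so only the shape of the trade-off, not the magnitude of Q, is meaningful.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi
    F = 218e3                     # assumed operating frequency [Hz] (not from the paper)
    A_COIL = 0.345                # coil radius [m] for a ~69 cm diameter transmitting coil
    B_BUNDLE = 0.004              # assumed radius of the wound bundle cross-section [m]
    SIGMA = 5.8e7                 # copper conductivity [S/m]
    A_CU = 0.3e-6                 # assumed effective copper area of the Litz bundle [m^2]
    K_PROX = 1.0e-3               # invented proximity-loss coefficient (illustrative only)

    def inductance(n):
        # Classic single circular-loop formula, scaled by n^2 for a tight bundle.
        return MU0 * n**2 * A_COIL * (np.log(8.0 * A_COIL / B_BUNDLE) - 2.0)

    def ac_resistance(n):
        r_dc = n * 2.0 * np.pi * A_COIL / (SIGMA * A_CU)
        return r_dc * (1.0 + K_PROX * n**2)       # crude skin/proximity penalty

    turns = np.arange(10, 61)
    q = 2.0 * np.pi * F * inductance(turns) / ac_resistance(turns)
    n_best = turns[np.argmax(q)]
    print(f"quality factor peaks at N = {n_best} turns (Q = {q.max():.0f} with these placeholders)")
    ```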

  8. A Nonlinear Programming Perspective on Sensitivity Calculations for Systems Governed by State Equations

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    This paper discusses the calculation of sensitivities, or derivatives, for optimization problems involving systems governed by differential equations and other state relations. The subject is examined from the point of view of nonlinear programming, beginning with the analytical structure of the first and second derivatives associated with such problems and the relation of these derivatives to implicit differentiation and equality constrained optimization. We also outline an error analysis of the analytical formulae and compare the results with similar results for finite-difference estimates of derivatives. We then investigate the nature of the adjoint method and the adjoint equations and their relation to directions of steepest descent. We illustrate the points discussed with an optimization problem in which the variables are the coefficients in a differential operator.
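
    A minimal sketch of the adjoint idea discussed above, for a linear state equation A(p)u = b with objective J = cᵀu: one adjoint solve Aᵀλ = c yields the whole gradient dJ/dpₖ = −λᵀ(∂A/∂pₖ)u, which the sketch checks against finite differences; the matrices are random stand-ins for a discretized differential operator.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 5, 3                            # state dimension, number of design parameters
    A0 = rng.normal(size=(n, n)) + n * np.eye(n)
    A_p = rng.normal(size=(m, n, n))       # dA/dp_k, constant because A is linear in p
    b = rng.normal(size=n)
    c = rng.normal(size=n)                 # objective J(u) = c^T u

    def A(p):
        return A0 + np.tensordot(p, A_p, axes=1)

    def J(p):
        u = np.linalg.solve(A(p), b)       # state equation A(p) u = b
        return c @ u

    p = rng.normal(size=m)

    # Adjoint gradient: one extra solve A^T lam = c, then dJ/dp_k = -lam^T (dA/dp_k) u.
    u = np.linalg.solve(A(p), b)
    lam = np.linalg.solve(A(p).T, c)
    grad_adjoint = np.array([-lam @ (A_p[k] @ u) for k in range(m)])

    # Finite-difference check (costs m extra state solves instead of one adjoint solve).
    eps = 1e-6
    grad_fd = np.array([(J(p + eps * e) - J(p)) / eps for e in np.eye(m)])

    print("adjoint gradient  :", np.round(grad_adjoint, 6))
    print("finite differences:", np.round(grad_fd, 6))
    ```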

  9. Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.

  10. Optimization of levitation and guidance forces in a superconducting Maglev system

    NASA Astrophysics Data System (ADS)

    Yildizer, Irfan; Cansiz, Ahmet; Ozturk, Kemal

    2016-09-01

    Optimization of the levitation for superconducting Maglev systems requires effective use of the vertical and guidance forces during operation. In this respect, the levitation and guidance forces are analyzed for various permanent magnet array configurations. The arrangements of permanent magnet arrays interacting with the superconductor are configured for the purpose of increasing the magnetic flux density. For these configurations, models of the interaction forces between the permanent magnets and the superconductor are established in terms of the frozen image model. The model is complemented with analytical calculations and shows reasonable agreement with the experiments. The agreement of the analytical calculations based on the frozen image model makes a strong case for such an optimization, which provides a preliminary analysis before constructing a more complex Maglev system.

  11. Analysis of 40 conventional and emerging disinfection by-products in fresh-cut produce wash water by modified EPA methods.

    PubMed

    Lee, Wan-Ning; Huang, Ching-Hua; Zhu, Guangxuan

    2018-08-01

    Chlorine sanitizers used in washing fresh and fresh-cut produce can lead to generation of disinfection by-products (DBPs) that are harmful to human health. Monitoring of DBPs is necessary to protect food safety but comprehensive analytical methods have been lacking. This study has optimized three U.S. Environmental Protection Agency methods for drinking water DBPs to improve their performance for produce wash water. The method development encompasses 40 conventional and emerging DBPs. Good recoveries (60-130%) were achieved for most DBPs in deionized water and in lettuce, strawberry and cabbage wash water. The method detection limits are in the range of 0.06-0.58 μg/L for most DBPs and 10-24 ng/L for nitrosamines in produce wash water. Preliminary results revealed the formation of many DBPs when produce is washed with chlorine. The optimized analytical methods by this study effectively reduce matrix interference and can serve as useful tools for future research on food DBPs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Design and Validation of In-Source Atmospheric Pressure Photoionization Hydrogen/Deuterium Exchange Mass Spectrometry with Continuous Feeding of D2O.

    PubMed

    Acter, Thamina; Lee, Seulgidaun; Cho, Eunji; Jung, Maeng-Joon; Kim, Sunghwan

    2018-01-01

    In this study, continuous in-source hydrogen/deuterium exchange (HDX) atmospheric pressure photoionization (APPI) mass spectrometry (MS) with continuous feeding of D2O was developed and validated. D2O was continuously fed using a capillary line placed on the center of a metal plate positioned between the UV lamp and nebulizer. The proposed system overcomes the limitations of previously reported APPI HDX-MS approaches where deuterated solvents were premixed with sample solutions before ionization. This is particularly important for APPI because solvent composition can greatly influence ionization efficiency as well as the solubility of analytes. The experimental parameters for APPI HDX-MS with continuous feeding of D2O were optimized, and the optimized conditions were applied for the analysis of nitrogen-, oxygen-, and sulfur-containing compounds. The developed method was also applied for the analysis of the polar fraction of a petroleum sample. Thus, the data presented in this study clearly show that the proposed HDX approach can serve as an effective analytical tool for the structural analysis of complex mixtures.

  13. Fast UPLC/PDA determination of squalene in Sicilian P.D.O. pistachio from Bronte: Optimization of oil extraction method and analytical characterization.

    PubMed

    Salvo, Andrea; La Torre, Giovanna Loredana; Di Stefano, Vita; Capocchiano, Valentina; Mangano, Valentina; Saija, Emanuele; Pellizzeri, Vito; Casale, Katia Erminia; Dugo, Giacomo

    2017-04-15

    A fast reversed-phase UPLC method was developed for squalene determination in Sicilian pistachio samples entered in the European register of products with P.D.O. In the present study the SPE procedure was optimized for the squalene extraction prior to the UPLC/PDA analysis. The precision of the full analytical procedure was satisfactory, and the mean recoveries were 92.8±0.3% and 96.6±0.1% for the 25 and 50 mg L⁻¹ addition levels, respectively. The selected chromatographic conditions allowed a very fast squalene determination; in fact, it was well separated in ∼0.54 min with good resolution. Squalene was detected in all the pistachio samples analyzed, and the levels ranged from 55.45 to 226.34 mg kg⁻¹. Comparing our results with those of other studies, squalene contents in P.D.O. Sicilian pistachio samples were generally higher than those measured for samples of different geographic origins. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Cluster Size Optimization in Sensor Networks with Decentralized Cluster-Based Protocols

    PubMed Central

    Amini, Navid; Vahdatpour, Alireza; Xu, Wenyao; Gerla, Mario; Sarrafzadeh, Majid

    2011-01-01

    Network lifetime and energy-efficiency are viewed as the dominating considerations in designing cluster-based communication protocols for wireless sensor networks. This paper analytically provides the optimal cluster size that minimizes the total energy expenditure in such networks, where all sensors communicate data through their elected cluster heads to the base station in a decentralized fashion. LEACH, LEACH-Coverage, and DBS comprise three cluster-based protocols investigated in this paper that do not require any centralized support from a certain node. The analytical outcomes are given in the form of closed-form expressions for various widely-used network configurations. Extensive simulations on different networks are used to confirm the expectations based on the analytical results. To obtain a thorough understanding of the results, cluster number variability problem is identified and inspected from the energy consumption point of view. PMID:22267882
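
    A minimal sketch of the type of analysis the paper carries out in closed form: total energy per round of a LEACH-style network as a function of the number of clusters, evaluated with a standard first-order radio model. The radio constants, field size and base-station distance are common textbook values, not necessarily those used in the paper.

    ```python
    import numpy as np

    # First-order radio model constants (textbook values, not necessarily the paper's).
    E_ELEC = 50e-9        # J/bit, transmit/receive electronics
    E_FS = 10e-12         # J/bit/m^2, free-space amplifier (intra-cluster links)
    E_MP = 0.0013e-12     # J/bit/m^4, multipath amplifier (cluster head -> base station)
    E_DA = 5e-9           # J/bit, data aggregation at the cluster head
    N_NODES, M_SIDE, L_BITS = 100, 100.0, 4000
    D_BS = 120.0          # assumed distance from the field to the base station [m]

    def energy_per_round(k):
        """Expected total energy per round with k clusters."""
        d2_to_ch = M_SIDE**2 / (2.0 * np.pi * k)          # E[d^2] from node to cluster head
        e_member = L_BITS * (E_ELEC + E_FS * d2_to_ch)    # each member transmits once
        e_head = L_BITS * (E_ELEC * N_NODES / k           # receive from ~N/k members
                           + E_DA * N_NODES / k           # aggregate
                           + E_ELEC + E_MP * D_BS**4)     # forward to the base station
        return k * e_head + (N_NODES - k) * e_member

    ks = np.arange(1, 21)
    energies = np.array([energy_per_round(k) for k in ks])
    k_numeric = ks[np.argmin(energies)]
    k_closed = np.sqrt(N_NODES / (2.0 * np.pi)) * np.sqrt(E_FS / E_MP) * M_SIDE / D_BS**2
    print(f"numeric optimum: {k_numeric} clusters; closed-form estimate: {k_closed:.1f}")
    ```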

  15. An Active Learning Exercise for Introducing Agent-Based Modeling

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  16. Many-objective reservoir policy identification and refinement to reduce policy inertia and myopia in water management

    NASA Astrophysics Data System (ADS)

    Giuliani, M.; Herman, J. D.; Castelletti, A.; Reed, P.

    2014-04-01

    This study contributes a decision analytic framework to overcome policy inertia and myopia in complex river basin management contexts. The framework combines reservoir policy identification, many-objective optimization under uncertainty, and visual analytics to characterize current operations and discover key trade-offs between alternative policies for balancing competing demands and system uncertainties. The approach is demonstrated on the Conowingo Dam, located within the Lower Susquehanna River, USA. The Lower Susquehanna River is an interstate water body that has been subject to intensive water management efforts due to competing demands from urban water supply, atomic power plant cooling, hydropower production, and federally regulated environmental flows. We have identified a baseline operating policy for the Conowingo Dam that closely reproduces the dynamics of current releases and flows for the Lower Susquehanna and thus can be used to represent the preferences structure guiding current operations. Starting from this baseline policy, our proposed decision analytic framework then combines evolutionary many-objective optimization with visual analytics to discover new operating policies that better balance the trade-offs within the Lower Susquehanna. Our results confirm that the baseline operating policy, which only considers deterministic historical inflows, significantly overestimates the system's reliability in meeting the reservoir's competing demands. Our proposed framework removes this bias by successfully identifying alternative reservoir policies that are more robust to hydroclimatic uncertainties while also better addressing the trade-offs across the Conowingo Dam's multisector services.

  17. Design considerations for near-infrared filter photometry: effects of noise sources and selectivity.

    PubMed

    Tarumi, Toshiyasu; Amerov, Airat K; Arnold, Mark A; Small, Gary W

    2009-06-01

    Optimal filter design of two-channel near-infrared filter photometers is investigated for simulated two-component systems consisting of an analyte and a spectrally overlapping interferent. The degree of overlap between the analyte and interferent bands is varied over three levels. The optimal design is obtained for three cases: a source or background flicker noise limited case, a shot noise limited case, and a detector noise limited case. Conventional photometers consist of narrow-band optical filters with their bands located at discrete wavelengths. However, the use of broadband optical filters with overlapping responses has been proposed to obtain as much signal as possible from a weak and broad analyte band typical of near-infrared absorptions. One question regarding the use of broadband optical filters with overlapping responses is the selectivity achieved by such filters. The selectivity of two-channel photometers is evaluated on the basis of the angle between the analyte and interferent vectors in the space spanned by the relative change recorded for each of the two detector channels. This study shows that for the shot noise limited or detector noise limited cases, the slight decrease in selectivity with the use of broadband optical filters can be compensated by the higher signal-to-noise ratio afforded by the use of such filters. For the source noise limited case, the best quantitative results are obtained with the use of narrow-band non-overlapping optical filters.
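
    A minimal sketch of the selectivity measure described above: the angle between the analyte and interferent response vectors in the two-channel space, compared for narrow non-overlapping filters and broad overlapping filters; the Gaussian bands and filter shapes are invented for illustration.

    ```python
    import numpy as np

    wl = np.linspace(2000, 2500, 501)                  # wavelength axis [nm]
    gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

    analyte = gauss(2270, 30)                          # overlapping absorption bands
    interferent = gauss(2310, 30)

    def response_vector(spectrum, filters):
        """Relative signal change in each detector channel for a unit absorber."""
        return np.array([np.sum(spectrum * t) / np.sum(t) for t in filters])

    def angle_deg(u, v):
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    narrow = [gauss(2270, 10), gauss(2310, 10)]        # narrow, non-overlapping filters
    broad = [gauss(2250, 60), gauss(2330, 60)]         # broad, overlapping filters

    for name, filters in [("narrow", narrow), ("broad", broad)]:
        a = response_vector(analyte, filters)
        i = response_vector(interferent, filters)
        print(f"{name:6s} filters: selectivity angle = {angle_deg(a, i):.1f} deg")
    ```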

  18. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure developments to follow a similar approach. While development and optimization of analytical procedure following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies have also their role to play: such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it, control strategy can be set.

  19. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  20. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  1. Optimization of supersonic axisymmetric nozzles with a center body for aerospace propulsion

    NASA Astrophysics Data System (ADS)

    Davidenko, D. M.; Eude, Y.; Falempin, F.

    2011-10-01

    This study is aimed at optimization of axisymmetric nozzles with a center body, which are suitable for thrust engines having an annular duct. To determine the flow conditions and nozzle dimensions, the Vinci rocket engine is chosen as a prototype. The nozzle contours are described by 2nd and 3rd order analytical functions and specified by a set of geometrical parameters. A direct optimization method is used to design maximum thrust nozzle contours. During optimization, the flow of multispecies reactive gas is simulated by an Euler code. Several optimized contours have been obtained for the center body diameter ranging from 0.2 to 0.4 m. For these contours, Navier-Stokes (NS) simulations have been performed to take into account viscous effects assuming adiabatic and cooled wall conditions. The paper presents an analysis of factors influencing the nozzle thrust.

  2. Tunable, Flexible and Efficient Optimization of Control Pulses for Superconducting Qubits, part II - Applications

    NASA Astrophysics Data System (ADS)

    Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank

    In part I, we presented the theoretical foundations of the GOAT algorithm for the optimal control of quantum systems. Here in part II, we focus on several applications of GOAT to superconducting qubit architectures. First, we consider a controlled-Z gate on Xmon qubits with an Erf parametrization of the optimal pulse. We show that a fast and accurate gate can be obtained with only 16 parameters, as compared to hundreds of parameters required in other algorithms. We present numerical evidence that such a parametrization should allow an efficient in-situ calibration of the pulse. Next, we consider the flux-tunable coupler by IBM. We show that optimization can be carried out in a more realistic model of the system than was employed in the original study, which is expected to further simplify the calibration process. Moreover, GOAT reduced the complexity of the optimal pulse to only 6 Fourier components, composed with analytic wrappers.

  3. Optimization of the Determination Method for Dissolved Cyanobacterial Toxin BMAA in Natural Water.

    PubMed

    Yan, Boyin; Liu, Zhiquan; Huang, Rui; Xu, Yongpeng; Liu, Dongmei; Lin, Tsair-Fuh; Cui, Fuyi

    2017-10-17

    There is a serious dispute about the existence of β-N-methylamino-l-alanine (BMAA) in water; BMAA is a neurotoxin that may cause amyotrophic lateral sclerosis/Parkinson's disease (ALS/PDC) and Alzheimer's disease. It is believed that a reliable and sensitive analytical method for the determination of BMAA is urgently required to resolve this dispute. In the present study, the solid phase extraction (SPE) procedure and the analytical method for dissolved BMAA in water were investigated and optimized. The results showed that both derivatized and underivatized methods were suitable for the measurement of BMAA and its isomer in natural water, and the limit of detection and the precision of the two methods were comparable. Cartridge characteristics and SPE conditions could greatly affect the SPE performance, and competition from natural organic matter is the primary factor causing the low recovery of BMAA, which was reduced from approximately 90% in pure water to 38.11% in natural water. The optimized SPE method for BMAA was a combination of rinsed SPE cartridges, controlled loading/elution rates and elution solution, evaporation at 55 °C, reconstitution of a solution mixture, and filtration through a polyvinylidene fluoride membrane. This optimized method achieved >88% recovery of BMAA in both algal solution and river water. The developed method can provide an efficient way to evaluate the actual concentration levels of BMAA in real water environments and drinking water systems.

  4. Can neutral analytes be concentrated by transient isotachophoresis in micellar electrokinetic chromatography and how much?

    PubMed

    Matczuk, Magdalena; Foteeva, Lidia S; Jarosz, Maciej; Galanski, Markus; Keppler, Bernhard K; Hirokawa, Takeshi; Timerbaev, Andrei R

    2014-06-06

    Transient isotachophoresis (tITP) is a versatile sample preconcentration technique that uses ITP to focus electrically charged analytes at the initial stage of CE analysis. However, according to the ruling principle of tITP, uncharged analytes separated and detected by micellar electrokinetic chromatography (MEKC) are beyond its capacity. On the other hand, when it is the charged micelles that undergo tITP focusing, one can anticipate a concentration effect resulting from the formation of a transient micellar stack at the moving sample/background electrolyte (BGE) boundary, which increasingly accumulates the analytes. This work expands the enrichment potential of tITP for MEKC by demonstrating the quantitative analysis of uncharged metal-based drugs from highly saline samples and by introducing to the BGE solution anionic surfactants and buffer (terminating) co-ions of different mobility and concentration to optimize performance. Metallodrugs of assorted lipophilicity were chosen so as to explore whether their varying affinity toward micelles plays a role. In addition to altering the sample and BGE composition, optimization of the detection capability was achieved by fine-tuning operational variables such as sample volume, separation voltage and pressure, etc. The results of the optimization trials shed light on the mechanism of micellar tITP and enable effective determination of the selected drugs in human urine, with practical limits of detection using a conventional UV detector. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Optimization of analytical parameters for inferring relationships among Escherichia coli isolates from repetitive-element PCR by maximizing correspondence with multilocus sequence typing data.

    PubMed

    Goldberg, Tony L; Gillespie, Thomas R; Singer, Randall S

    2006-09-01

    Repetitive-element PCR (rep-PCR) is a method for genotyping bacteria based on the selective amplification of repetitive genetic elements dispersed throughout bacterial chromosomes. The method has great potential for large-scale epidemiological studies because of its speed and simplicity; however, objective guidelines for inferring relationships among bacterial isolates from rep-PCR data are lacking. We used multilocus sequence typing (MLST) as a "gold standard" to optimize the analytical parameters for inferring relationships among Escherichia coli isolates from rep-PCR data. We chose 12 isolates from a large database to represent a wide range of pairwise genetic distances, based on the initial evaluation of their rep-PCR fingerprints. We conducted MLST with these same isolates and systematically varied the analytical parameters to maximize the correspondence between the relationships inferred from rep-PCR and those inferred from MLST. Methods that compared the shapes of densitometric profiles ("curve-based" methods) yielded consistently higher correspondence values between data types than did methods that calculated indices of similarity based on shared and different bands (maximum correspondences of 84.5% and 80.3%, respectively). Curve-based methods were also markedly more robust in accommodating variations in user-specified analytical parameter values than were "band-sharing coefficient" methods, and they enhanced the reproducibility of rep-PCR. Phylogenetic analyses of rep-PCR data yielded trees with high topological correspondence to trees based on MLST and high statistical support for major clades. These results indicate that rep-PCR yields accurate information for inferring relationships among E. coli isolates and that accuracy can be enhanced with the use of analytical methods that consider the shapes of densitometric profiles.
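
    A minimal sketch contrasting the two families of similarity measures compared in the study: a curve-based measure (Pearson correlation of whole densitometric profiles) and a band-sharing (Dice) coefficient computed from called bands; the two synthetic lane profiles and the band-calling rule are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 1.0, 500)                     # normalized migration distance

    def lane(band_positions, width=0.01, noise=0.005):
        """Synthetic densitometric profile: Gaussian bands plus baseline noise."""
        profile = sum(np.exp(-0.5 * ((x - p) / width) ** 2) for p in band_positions)
        return profile + rng.normal(0.0, noise, x.size)

    lane_a = lane([0.12, 0.30, 0.55, 0.80])
    lane_b = lane([0.12, 0.31, 0.55, 0.72])            # one shifted band, one different band

    # Curve-based similarity: Pearson correlation of the full profiles.
    curve_similarity = np.corrcoef(lane_a, lane_b)[0, 1]

    # Band-sharing similarity: call bands as contiguous regions above a threshold,
    # then compute a Dice coefficient from matched band positions.
    def call_bands(profile, threshold=0.5):
        above, bands, start = profile > threshold, [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                bands.append(x[(start + i - 1) // 2])
                start = None
        return bands

    bands_a, bands_b = call_bands(lane_a), call_bands(lane_b)
    shared = sum(any(abs(pa - pb) < 0.02 for pb in bands_b) for pa in bands_a)
    dice = 2.0 * shared / (len(bands_a) + len(bands_b))

    print(f"curve-based (Pearson) similarity: {curve_similarity:.2f}")
    print(f"band-sharing (Dice) coefficient : {dice:.2f}")
    ```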

  6. Qualitative and quantitative measurement of cannabinoids in cannabis using modified HPLC/DAD method.

    PubMed

    Patel, Bhupendra; Wene, Daniel; Fan, Zhihua Tina

    2017-11-30

    This study presents an accurate and high-throughput method for the quantitative determination of various cannabinoids in cannabis plant material using high pressure liquid chromatography (HPLC) with a diode array detector (DAD). Sample extraction and chromatographic analysis conditions for the measurement of cannabinoids in the complex cannabis plant material matrix were optimized. The Agilent Poroshell 120 SB-C18 column provided high resolution for all target analytes with a short run time (10 minutes) given the core shell technology. The aqueous buffer mobile phase was optimized with ammonium acetate at pH 4.75. The change in the mobile phase and the new column ensured separation between cannabidiol (CBD) and cannabigerol (CBG), as well as between cannabigerol and tetrahydrocannabinolic acid (THCA), which were not well separated in previous publications, improved the buffering capacity, and provided analytical performance stability. Moreover, baseline drifting was significantly minimized by the use of a low concentration buffer solution (25 mM ammonium acetate). In addition, evaporation and reconstitution of the sample residue with a methanol-organic pure (OP) water solution (65:35) significantly reduced the matrix interference. The modified extraction produced good recoveries (>91%) for each of the eight cannabinoids. The optimized method was validated for specificity, linearity, sensitivity, precision, accuracy, and stability. The combined relative standard deviation (%RSD) for intra-day and inter-day precision for all eight analytes varied from 2.5% to 5.2% and 0.28% to 5.5%, respectively. The %RSD for the repeatability study varied from 1.1% to 5.5%. The recoveries from spiked cannabis matrix samples were greater than 90% for all analytes, except delta-8-tetrahydrocannabinol (Δ8-THC), which was 80%. The recoveries varied from 81% to 107% with a precision of 0.7-8.1% RSD. Delta-9-tetrahydrocannabinol (Δ9-THC) in all of the cannabis samples (n=635) was less than 10%, which is in compliance with the NJ Medicinal Marijuana regulation. Analysis of samples from two cultivars, which included ten individual samples, four composite samples, seven calibration standards, and four quality control standards, can be performed within 24 hours by this high-throughput method. Published by Elsevier B.V.

  7. A PERFECT MATCH CONDITION FOR POINT-SET MATCHING PROBLEMS USING THE OPTIMAL MASS TRANSPORT APPROACH

    PubMed Central

    CHEN, PENGWEN; LIN, CHING-LONG; CHERN, I-LIANG

    2013-01-01

    We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
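
    A minimal sketch of an L2 transport-cost matching between two point sets of equal cardinality, solved as a linear assignment problem with SciPy; the smooth non-rigid warp applied to the second set is invented, not the pulmonary branch-point data used in the study.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(7)
    n = 40
    src = rng.uniform(0.0, 1.0, size=(n, 2))          # source point set (e.g. branch points)

    # Apply a smooth non-rigid deformation plus a small jitter, then shuffle the order
    # so that the correspondence is unknown to the matcher.
    def warp(p):
        return p + 0.05 * np.stack([np.sin(2 * np.pi * p[:, 1]),
                                    np.cos(2 * np.pi * p[:, 0])], axis=1)

    dst = warp(src) + rng.normal(0.0, 0.005, size=src.shape)
    perm = rng.permutation(n)
    dst = dst[perm]

    # L2 transport cost for every pair, then the minimum-cost perfect matching.
    cost = cdist(src, dst, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(cost)

    # Ground-truth correspondence (inverse of the known permutation) for checking.
    truth = np.argsort(perm)                          # dst row holding source point i
    print(f"perfectly matched points: {(cols == truth).sum()} / {n}")
    print(f"total L2 transport cost : {cost[rows, cols].sum():.4f}")
    ```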

  8. Sample distribution in peak mode isotachophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubin, Shimon; Schwartz, Ortal; Bercovici, Moran, E-mail: mberco@technion.ac.il

    We present an analytical study of peak mode isotachophoresis (ITP), and provide closed form solutions for sample distribution and electric field, as well as for leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes which better represent real buffer systems and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing analyte distribution. Using well-controlled experiments and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of focused sample based on known buffer and analyte properties, and show it differs significantly from commonly used approximations based on the interface width alone. We further apply our model for studying reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed form expression for an effective on-rate which depends on reactant distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.

  9. Concurrence of big data analytics and healthcare: A systematic review.

    PubMed

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of literature aims to determine the scope of Big Data analytics in healthcare including its applications and challenges in its adoption in healthcare. It also intends to identify the strategies to overcome the challenges. A systematic search of the articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. The articles on Big Data analytics in healthcare published in English-language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds its application in clinical decision support, optimization of clinical operations and reduction of the cost of care; and (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review unveils a paucity of information on evidence of real-world use of Big Data analytics in healthcare. This is because the usability studies have considered only a qualitative approach, which describes potential benefits but does not take into account quantitative evaluation. Also, the majority of the studies were from developed countries, which highlights the need to promote research on healthcare Big Data analytics in developing countries. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Improvement of LOD in Fluorescence Detection with Spectrally Nonuniform Background by Optimization of Emission Filtering.

    PubMed

    Galievsky, Victor A; Stasheuski, Alexander S; Krylov, Sergey N

    2017-10-17

    The limit-of-detection (LOD) in analytical instruments with fluorescence detection can be improved by reducing the noise of the optical background. Efficiently reducing optical background noise in systems with spectrally nonuniform background requires complex optimization of an emission filter, the main element of spectral filtration. Here, we introduce a filter-optimization method, which utilizes an expression for the signal-to-noise ratio (SNR) as a function of (i) all noise components (dark, shot, and flicker), (ii) emission spectrum of the analyte, (iii) emission spectrum of the optical background, and (iv) transmittance spectrum of the emission filter. In essence, the noise components and the emission spectra are determined experimentally and substituted into the expression. This leaves a single variable, the transmittance spectrum of the filter, which is optimized numerically by maximizing SNR. Maximizing SNR provides an accurate way of filter optimization, while a previously used approach based on maximizing a signal-to-background ratio (SBR) is an approximation that can lead to much poorer LOD, specifically in detection of fluorescently labeled biomolecules. The proposed filter-optimization method will be an indispensable tool for developing new and improving existing fluorescence-detection systems aiming at ultimately low LOD.
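
    A minimal sketch of the optimization described above: the emission-filter transmittance spectrum is treated as a vector of per-wavelength values in [0, 1] and adjusted numerically to maximize an SNR expression containing dark, shot, and flicker terms; all spectra and noise coefficients are invented for illustration, and the exact SNR expression of the paper may differ.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    wl = np.linspace(500, 700, 101)                    # emission wavelength axis [nm]
    gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

    analyte_em = 100.0 * gauss(560, 15)                # analyte emission spectrum (counts)
    background = 400.0 * gauss(610, 60) + 50.0         # spectrally nonuniform background

    SIGMA_DARK = 20.0                                  # dark-noise standard deviation
    K_FLICKER = 0.005                                  # flicker noise proportional to background

    def snr(t):
        signal = np.sum(analyte_em * t)
        bg = np.sum(background * t)
        shot_var = signal + bg                         # shot-noise variance
        flicker_var = (K_FLICKER * bg) ** 2            # flicker-noise variance
        return signal / np.sqrt(SIGMA_DARK**2 + shot_var + flicker_var)

    # Maximize SNR over the transmittance vector, bounded between 0 and 1.
    res = minimize(lambda t: -snr(t), x0=np.full(wl.size, 0.5),
                   bounds=[(0.0, 1.0)] * wl.size, method="L-BFGS-B")
    t_opt = res.x

    print(f"SNR with optimized filter : {snr(t_opt):.1f}")
    print(f"SNR with a flat 100% filter: {snr(np.ones(wl.size)):.1f}")
    band = t_opt > 0.5
    if band.any():
        print(f"optimized pass-band (T > 0.5): {wl[band].min():.0f}-{wl[band].max():.0f} nm")
    ```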

  11. An extended heterogeneous car-following model accounting for anticipation driving behavior and mixed maximum speeds

    NASA Astrophysics Data System (ADS)

    Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia

    2018-02-01

    The optimal driving speeds of the different vehicles may be different for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed vmax is an important parameter determining the optimal driving speed. A vehicle with a higher maximum speed is more willing to drive faster than one with a lower maximum speed in a similar situation. By incorporating the anticipation driving behavior based on relative velocity and mixed maximum speeds in different proportions into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the interplay between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, and that increasing the lowest value of the mixed maximum speeds results in more instability, whereas increasing the value or proportion of the vehicles that already have the higher maximum speed affects stability differently at high and low traffic densities.
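
    A minimal sketch of a heterogeneous optimal-velocity car-following model with an anticipation (relative-velocity) term and two mixed maximum speeds on a ring road; the Bando-type optimal velocity function and all parameter values are standard illustrative choices, not the paper's calibrated model.

    ```python
    import numpy as np

    N, L = 50, 200.0                  # vehicles and ring-road length [m]
    A_SENS, LAM = 0.8, 0.3            # sensitivity and anticipation (relative-velocity) gains
    H_C = 4.0                         # safety-distance parameter of the OV function [m]
    DT, STEPS = 0.1, 3000

    # Mixed maximum speeds: roughly 60% "slow" and 40% "fast" vehicles.
    rng = np.random.default_rng(0)
    vmax = np.where(rng.random(N) < 0.6, 2.0, 2.5)   # m/s, illustrative values

    def optimal_velocity(headway, vmax):
        """Bando-type OV function with a vehicle-specific maximum speed."""
        return 0.5 * vmax * (np.tanh(headway - H_C) + np.tanh(H_C))

    # Initial state: uniform spacing with a small perturbation on one vehicle.
    x = np.linspace(0.0, L, N, endpoint=False)
    x[0] += 0.5
    v = optimal_velocity(L / N, vmax)

    for _ in range(STEPS):
        headway = (np.roll(x, -1) - x) % L            # distance to the leader (periodic)
        dv = np.roll(v, -1) - v                       # relative velocity (anticipation term)
        acc = A_SENS * (optimal_velocity(headway, vmax) - v) + LAM * dv
        v = np.maximum(v + acc * DT, 0.0)             # no backward driving
        x = (x + v * DT) % L

    print(f"mean speed {v.mean():.2f} m/s, speed std {v.std():.2f} m/s "
          f"(a large std indicates stop-and-go waves)")
    ```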

  12. Optimization of analytical and pre-analytical conditions for MALDI-TOF-MS human urine protein profiles.

    PubMed

    Calvano, C D; Aresta, A; Iacovone, M; De Benedetto, G E; Zambonin, C G; Battaglia, M; Ditonno, P; Rutigliano, M; Bettocchi, C

    2010-03-11

    Protein analysis in biological fluids, such as urine, by means of mass spectrometry (MS) still suffers from insufficient standardization of protocols for sample collection, storage and preparation. In this work, the influence of these variables on protein profiling of human urine from healthy donors performed by matrix assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was studied. A screening of various urine sample pre-treatment procedures and different sample deposition approaches on the MALDI target was performed. The influence of urine sample storage time and temperature on spectral profiles was evaluated by means of principal component analysis (PCA). The whole optimized procedure was eventually applied to the MALDI-TOF-MS analysis of human urine samples taken from prostate cancer patients. The best results in terms of detected ion number and abundance in the MS spectra were obtained by using home-made microcolumns packed with hydrophilic-lipophilic balance (HLB) resin as the sample pre-treatment method; this procedure was also less expensive and suitable for high throughput analyses. Afterwards, the spin coating approach for sample deposition on the MALDI target plate was optimized, obtaining homogeneous and reproducible spots. Then, PCA indicated that low storage temperatures of acidified and centrifuged samples, together with short handling times, allowed reproducible profiles to be obtained without artifact contributions due to experimental conditions. Finally, interesting differences were found by comparing the MALDI-TOF-MS protein profiles of pooled urine samples of healthy donors and prostate cancer patients. The results showed that analytical and pre-analytical variables are crucial for the success of urine analysis and for obtaining meaningful and reproducible data, even if intra-patient variability is very difficult to avoid. Pooled urine samples proved to be a useful way to simplify the comparison between healthy and pathological samples and to identify possible differences in protein expression between the two sets of samples. Copyright 2009 Elsevier B.V. All rights reserved.

  13. Optimizing use of the structural chemical analyser (variable pressure FESEM-EDX Raman spectroscopy) on micro-size complex historical paintings characterization.

    PubMed

    Guerra, I; Cardell, C

    2015-10-01

    The novel Structural Chemical Analyser (hyphenated Raman spectroscopy and scanning electron microscopy equipped with an X-ray detector) is gaining popularity since it allows 3-D morphological studies and elemental, molecular, structural and electronic analyses of a single complex micro-sized sample without transfer between instruments. However, its full potential remains unexploited in painting heritage, where simultaneous identification of inorganic and organic materials in paintings is critical yet unresolved. Despite the benefits and drawbacks shown in the literature, new challenges have to be faced in analysing multifaceted paint specimens. SEM-Structural Chemical Analyser systems differ since they are fabricated ad hoc on request. As the configuration influences the procedure to optimize analyses, analytical protocols likewise have to be designed ad hoc. This paper deals with the optimization of the analytical procedure of a Variable Pressure Field Emission scanning electron microscope equipped with an X-ray detector and Raman spectroscopy to analyse historical paint samples. We address essential parameters, technical challenges and limitations raised by analysing paint stratigraphies, archaeological samples and loose pigments. We show that accurate data interpretation requires comprehensive knowledge of the factors affecting Raman spectra. We tackled: (i) the in-FESEM-Raman spectroscopy analytical sequence, (ii) correlations between the FESEM and Structural Chemical Analyser/laser analytical positions, (iii) Raman signal intensity under different VP-FESEM vacuum modes, (iv) carbon deposition on samples under FESEM low-vacuum mode, (v) crystal nature and morphology, (vi) depth of focus and (vii) the surface-enhanced Raman scattering effect. We recommend careful planning of analysis strategies prior to research, which, although time consuming, guarantees reliable results. The ultimate goal of this paper is to help guide future users of a FESEM-Structural Chemical Analyser system in order to broaden its applications. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  14. Novel and sensitive reversed-phase high-pressure liquid chromatography method with electrochemical detection for the simultaneous and fast determination of eight biogenic amines and metabolites in human brain tissue.

    PubMed

    Van Dam, Debby; Vermeiren, Yannick; Aerts, Tony; De Deyn, Peter Paul

    2014-08-01

    A fast and simple RP-HPLC method with electrochemical detection (ECD) and ion pair chromatography was developed, optimized and validated in order to simultaneously determine eight different biogenic amines and metabolites in post-mortem human brain tissue in a single-run analytical approach. The compounds of interest are the indolamine serotonin (5-hydroxytryptamine, 5-HT), the catecholamines dopamine (DA) and (nor)epinephrine ((N)E), as well as their respective metabolites, i.e. 3,4-dihydroxyphenylacetic acid (DOPAC) and homovanillic acid (HVA), 5-hydroxy-3-indoleacetic acid (5-HIAA) and 3-methoxy-4-hydroxyphenylglycol (MHPG). A two-level fractional factorial experimental design was applied to study the effect of five experimental factors (i.e. the ion-pair counter-ion concentration, the level of organic modifier, the pH of the mobile phase, the temperature of the column, and the voltage setting of the detector) on the chromatographic behaviour. The cross effects between the five quantitative factors and the capacity and separation factors of the analytes were then analysed using a Standard Least Squares model. The optimized method was fully validated according to the requirements of SFSTP (Société Française des Sciences et Techniques Pharmaceutiques). Our human brain tissue sample preparation procedure is straightforward and relatively short, which allows samples to be loaded onto the HPLC system within approximately 4 h. Additionally, a high sample throughput was achieved after optimization due to a total runtime of at most 40 min per sample. The conditions and settings of the HPLC system were found to be accurate with high intra- and inter-assay repeatability, recovery and accuracy rates. The robust analytical method results in very low detection limits and good separation for all of the eight biogenic amines and metabolites in this complex mixture of biological analytes. Copyright © 2014 Elsevier B.V. All rights reserved.
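
    A two-level fractional factorial screening of five factors, as used above, can be generated and analysed by ordinary least squares in a few lines of Python. The sketch below is a generic illustration under assumed factor names and a simulated response; it is not the study's data and not necessarily its exact design.

      import itertools
      import numpy as np

      # Half-fraction 2^(5-1) design (16 runs) with generator E = A*B*C*D (resolution V).
      base = np.array(list(itertools.product([-1, 1], repeat=4)))   # coded levels of A, B, C, D
      design = np.column_stack([base, base.prod(axis=1)])           # append E = ABCD
      factors = ["counter-ion conc.", "organic modifier", "mobile-phase pH",
                 "column temperature", "detector voltage"]

      # Simulated coded response (e.g. a capacity factor), for illustration only.
      rng = np.random.default_rng(1)
      y = design @ np.array([0.8, -0.5, 0.3, 0.1, 0.6]) + rng.normal(0, 0.2, len(design))

      # Least-squares fit of intercept + main effects.
      X = np.column_stack([np.ones(len(design)), design])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      for name, c in zip(factors, coef[1:]):
          print(f"{name:>20s}: estimated effect {2 * c:+.2f}")      # effect = 2 * slope for +/-1 coding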

  15. Dynamic optimization case studies in DYNOPT tool

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    Dynamic programming is typically applied to optimization problems. Because analytical solutions are generally very difficult to obtain, dedicated software tools are widely used. These software packages are often third-party products tied to standard simulation software tools on the market. As typical examples of such tools, TOMLAB and DYNOPT can be applied effectively to dynamic programming problems. DYNOPT is presented in this paper because of its licensing policy (a free product under the GPL) and its simplicity of use. DYNOPT is a set of MATLAB functions for determining an optimal control trajectory from a given description of the process and of the cost to be minimized, subject to equality and inequality constraints, using the method of orthogonal collocation on finite elements. The actual optimal control problem is solved by complete parameterization of both the control and the state profile vectors. It is assumed that the optimized dynamic model can be described by a set of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs). This collection of functions extends the capabilities of the MATLAB Optimization Toolbox. The paper introduces the use of DYNOPT for dynamic optimization problems by means of case studies based on selected laboratory educational models.
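
    DYNOPT itself is a MATLAB toolbox, but the underlying idea of parameterizing the control and minimizing a cost subject to the process dynamics can be illustrated with a minimal Python sketch. The problem, dynamics, weights and piecewise-constant control parameterization below are invented for illustration (DYNOPT uses orthogonal collocation instead), not an example from the paper.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize

      # Toy dynamic optimisation: minimise J = x(T)^2 + 0.1 * sum(u_i^2) * dt
      # for dx/dt = -x + u with x(0) = 1, using a piecewise-constant control on 10 segments.
      T, n_seg = 2.0, 10
      dt = T / n_seg
      t_knots = np.linspace(0.0, T, n_seg + 1)

      def simulate(u):
          x, ctrl_cost = 1.0, 0.0
          for i in range(n_seg):
              sol = solve_ivp(lambda t, s, ui=u[i]: -s + ui,
                              (t_knots[i], t_knots[i + 1]), [x])
              x = sol.y[0, -1]
              ctrl_cost += 0.1 * u[i] ** 2 * dt
          return x, ctrl_cost

      def objective(u):
          x_final, ctrl_cost = simulate(u)
          return x_final ** 2 + ctrl_cost

      res = minimize(objective, np.zeros(n_seg), method="SLSQP")
      print("optimal piecewise-constant control:", np.round(res.x, 3))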

  16. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimating the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated into thin plates, including nonlinear circuits, have not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and the voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. The analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
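
    For a quick feel for the kind of voltage FRF being validated, the sketch below evaluates a single-mode lumped-parameter harvester with a purely resistive load. All parameter values are invented placeholders, and the paper's model is a multi-mode distributed-parameter formulation for patches on a plate that this toy model does not reproduce.

      import numpy as np

      # Single-mode lumped-parameter piezo harvester with a purely resistive load.
      m, c, k = 0.01, 0.05, 4.0e3              # modal mass (kg), damping (N*s/m), stiffness (N/m)
      theta, Cp, R = 1.0e-3, 50e-9, 100e3      # coupling (N/V), capacitance (F), load resistance (ohm)

      def voltage_frf(omega, accel=1.0):
          """Voltage across the load per unit base acceleration at angular frequency omega."""
          Ze = R / (1 + 1j * omega * R * Cp)                       # parallel R-Cp impedance
          mech = k - m * omega**2 + 1j * omega * c                 # mechanical dynamic stiffness
          X = -m * accel / (mech + 1j * omega * theta**2 * Ze)     # displacement response
          return 1j * omega * theta * Ze * X                       # voltage via the coupling term

      freqs = np.linspace(50, 200, 400)                            # Hz
      V = np.abs([voltage_frf(2 * np.pi * f) for f in freqs])
      print(f"peak |V| = {V.max():.3f} V per m/s^2 near {freqs[np.argmax(V)]:.1f} Hz")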

  17. Intuitionistic fuzzy analytical hierarchical processes for selecting the paradigms of mangroves in municipal wastewater treatment.

    PubMed

    Ouyang, Xiaoguang; Guo, Fen

    2018-04-01

    Municipal wastewater discharge is widespread and one of the sources of coastal eutrophication, and is especially uncontrolled in developing and undeveloped coastal regions. Mangrove forests are natural filters of pollutants in wastewater. There are three paradigms of mangroves for municipal wastewater treatment and the selection of the optimal one is a multi-criteria decision-making problem. Combining intuitionistic fuzzy theory, the Fuzzy Delphi Method and the fuzzy analytical hierarchical process (AHP), this study develops an intuitionistic fuzzy AHP (IFAHP) method. For the Fuzzy Delphi Method, the judgments of experts and representatives on criterion weights are expressed as linguistic variables and quantified by intuitionistic fuzzy theory, which is also used to weight the importance of experts and representatives. This process generates the entropy weights of the criteria, which are combined with index values and weights to rank the alternatives by the fuzzy AHP method. The IFAHP method was used to select the optimal paradigm of mangroves for treating municipal wastewater. The entropy weights were derived from the valid evaluations of 64 experts and representatives via an online survey. Natural mangroves were found to be the optimal paradigm for municipal wastewater treatment. By assigning different weights to the criteria, sensitivity analysis shows that natural mangroves remain the optimal paradigm under most scenarios. This study stresses the importance of mangroves for wastewater treatment. Decision-makers need to contemplate mangrove reforestation projects, especially where mangroves are highly deforested but wastewater discharge is uncontrolled. The IFAHP method is expected to be applied in other multi-criteria decision-making cases. Copyright © 2017 Elsevier Ltd. All rights reserved.
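
    The core AHP step of turning a pairwise-comparison matrix into criterion weights can be sketched as below. This uses the classic crisp AHP eigenvector method with an invented 3x3 comparison matrix and criterion names; the paper's intuitionistic fuzzy extension with expert weighting and entropy weights is considerably richer than this.

      import numpy as np

      # Classic (crisp) AHP priority vector as a simplified stand-in for the intuitionistic
      # fuzzy AHP in the paper. Criteria and comparison values are invented for illustration.
      criteria = ["pollutant removal", "cost", "ecological impact"]
      A = np.array([[1.0, 3.0, 2.0],
                    [1 / 3, 1.0, 1 / 2],
                    [1 / 2, 2.0, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      idx = np.argmax(eigvals.real)
      weights = np.real(eigvecs[:, idx])
      weights = weights / weights.sum()              # principal eigenvector -> criterion weights

      lam_max = eigvals.real[idx]
      ci = (lam_max - len(A)) / (len(A) - 1)         # consistency index
      cr = ci / 0.58                                 # random index RI = 0.58 for a 3x3 matrix
      print(dict(zip(criteria, np.round(weights, 3))), "CR =", round(cr, 3))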

  18. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles

    PubMed Central

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station’s density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845
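
    The basic trade-off behind such a density model (installation cost rising with station density while drivers' access cost falls as chargers get closer) can be illustrated with the toy one-variable optimization below. The cost constants and the inverse-square-root access term are assumptions for illustration, not the ERDEC model's actual formulation.

      from scipy.optimize import minimize_scalar

      # Toy density trade-off: installation cost grows linearly with station density, while the
      # drivers' access cost shrinks roughly as 1/sqrt(density) because the expected distance to
      # the nearest charger falls. Both cost constants are assumed values.
      INSTALL_COST = 120_000        # $ per station per planning period (assumed)
      ACCESS_COST = 900_000         # $ scale of detour/queueing cost for the area (assumed)

      def total_cost(density):      # density in stations per km^2
          return INSTALL_COST * density + ACCESS_COST / density ** 0.5

      res = minimize_scalar(total_cost, bounds=(0.01, 10.0), method="bounded")
      print(f"optimal density = {res.x:.2f} stations/km^2, total cost = ${res.fun:,.0f}")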

  19. Use of experimental design in the investigation of stir bar sorptive extraction followed by ultra-high-performance liquid chromatography-tandem mass spectrometry for the analysis of explosives in water samples.

    PubMed

    Schramm, Sébastien; Vailhen, Dominique; Bridoux, Maxime Cyril

    2016-02-12

    A method for the sensitive quantification of trace amounts of organic explosives in water samples was developed by using stir bar sorptive extraction (SBSE) followed by liquid desorption and ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS). The proposed method was developed and optimized using a statistical design-of-experiments approach. Use of experimental designs allowed a complete study of 10 factors and 8 analytes including nitro-aromatics, amino-nitro-aromatics and nitric esters. The liquid desorption study was performed using a full factorial experimental design followed by a kinetic study. Four different variables were tested here: the liquid desorption mode (stirring or sonication), the chemical nature of the stir bar (PDMS or PDMS-PEG), the composition of the liquid desorption phase and, finally, the volume of solvent used for the liquid desorption. On the other hand, the SBSE extraction study was performed using a Doehlert design. SBSE extraction conditions such as extraction time profiles, sample volume, modifier addition, and acetic acid addition were examined. After optimization of the experimental parameters, sensitivity was improved by a factor of 5-30, depending on the compound studied, due to the enrichment factors reached using the SBSE method. Limits of detection were at the ng/L level for all analytes studied. Reproducibility of the extraction with different stir bars was close to the reproducibility of the analytical method (RSD between 4 and 16%). Extractions in various water sample matrices (spring, mineral and underground water) showed similar enrichment compared to ultrapure water, revealing very low matrix effects. Copyright © 2016 Elsevier B.V. All rights reserved.
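
    Enumerating a full factorial screening like the liquid-desorption study above is straightforward; the sketch below lists the 16 runs of a two-level, four-factor design. Factor names follow the abstract, but the concrete levels (solvent compositions and volumes) are placeholders, not the study's settings.

      import itertools
      import pandas as pd

      # Two-level full factorial (2^4 = 16 runs) for the liquid-desorption screening.
      factors = {
          "desorption mode": ["stirring", "sonication"],
          "stir bar coating": ["PDMS", "PDMS-PEG"],
          "desorption solvent": ["MeOH", "MeOH/ACN 1:1"],       # placeholder compositions
          "solvent volume (uL)": [100, 200],                    # placeholder volumes
      }
      runs = pd.DataFrame(list(itertools.product(*factors.values())),
                          columns=list(factors.keys()))
      print(runs)        # each run is later paired with the measured peak areas for analysis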

  20. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
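
    As a rough illustration of the iteration being analysed, the sketch below implements an OGM-style accelerated gradient update (primary sequence x_k, secondary sequence y_k) on a random convex quadratic and compares it with plain gradient descent. The update coefficients follow the commonly cited OGM formulas from memory and should be checked against the paper before being relied upon; the test problem is arbitrary.

      import numpy as np

      # Test problem: f(x) = 0.5 x^T A x - b^T x, gradient Lipschitz constant L = lambda_max(A).
      rng = np.random.default_rng(0)
      M = rng.standard_normal((30, 10))
      A, b = M.T @ M, rng.standard_normal(10)
      L = np.linalg.eigvalsh(A).max()
      x_star = np.linalg.solve(A, b)

      def grad(x):
          return A @ x - b

      def ogm(x0, n_iter):
          """OGM-style iteration: x_k is the primary sequence, y_k the secondary one."""
          x, y, theta = x0.copy(), x0.copy(), 1.0
          for k in range(n_iter):
              y_new = x - grad(x) / L
              if k < n_iter - 1:
                  theta_new = (1 + np.sqrt(1 + 4 * theta**2)) / 2
              else:                                   # modified factor on the final step
                  theta_new = (1 + np.sqrt(1 + 8 * theta**2)) / 2
              x = y_new + (theta - 1) / theta_new * (y_new - y) + theta / theta_new * (y_new - x)
              y, theta = y_new, theta_new
          return x

      def gd(x0, n_iter):
          x = x0.copy()
          for _ in range(n_iter):
              x = x - grad(x) / L
          return x

      x0 = np.zeros(10)
      for name, xN in [("OGM", ogm(x0, 50)), ("GD", gd(x0, 50))]:
          print(f"{name}: distance to minimizer = {np.linalg.norm(xN - x_star):.2e}")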

  1. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  2. Reversed-phase single drop microextraction followed by high-performance liquid chromatography with fluorescence detection for the quantification of synthetic phenolic antioxidants in edible oil samples.

    PubMed

    Farajmand, Bahman; Esteki, Mahnaz; Koohpour, Elham; Salmani, Vahid

    2017-04-01

    The reversed-phase mode of single drop microextraction has been used as a preparation method for the extraction of some phenolic antioxidants from edible oil samples. Butylated hydroxyanisole, tert-butylhydroquinone and butylated hydroxytoluene were employed as target compounds for this study. High-performance liquid chromatography followed by fluorescence detection was applied for the final determination of the target compounds. The most interesting feature of this study is the application of a disposable insulin syringe with some modifications for the microextraction procedure, which efficiently improved the volume and stability of the solvent microdrop. Different parameters such as the type and volume of solvent, sample stirring rate, extraction temperature, and time were investigated and optimized. The analytical performance of the method was evaluated under the optimized conditions. Under the optimal conditions, relative standard deviations were between 4.4 and 10.2%. Linear dynamic ranges were 20-10 000 down to 2-1000 μg/g, depending on the analyte. Detection limits were 5-670 ng/g. Finally, the proposed method was successfully used for quantification of the antioxidants in edible oil samples purchased from the market. Relative recoveries ranged from 88 to 111%. The proposed method offered simple operation, low cost, and successful application to real samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Maximizing the U.S. Army’s Future Contribution to Global Security Using the Capability Portfolio Analysis Tool (CPAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.

    We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army’s acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.

  4. Thermal gravitational separation of ternary mixture n-dodecane/isobutylbenzene/tetralin components in a porous medium

    NASA Astrophysics Data System (ADS)

    Larabi, Mohamed Aziz; Mutschler, Dimitri; Mojtabi, Abdelkader

    2016-06-01

    Our present work focuses on the coupling between thermal diffusion and convection in order to improve the thermal gravitational separation of mixture components. The separation phenomenon was studied in a porous medium contained in vertical columns. We performed analytical and numerical simulations to corroborate the experimental measurements of the thermal diffusion coefficients of the ternary mixture n-dodecane/isobutylbenzene/tetralin obtained in microgravity on the International Space Station. Our approach corroborates the existing data published in the literature. We show that it is possible to quantify and to optimize the species separation for ternary mixtures. We checked, for ternary mixtures, the validity of the "forgotten effect hypothesis" established for binary mixtures by Furry, Jones, and Onsager. Two complete and different analytical solution methods were used in order to describe the separation in terms of the Lewis numbers, the separation ratios, the cross-diffusion coefficients, and the Rayleigh number. The analytical model is based on the parallel flow approximation. In order to validate this model, a numerical simulation was performed using the finite element method. From our new approach to vertical separation columns, new relations for the mass fraction gradients and the optimal Rayleigh number for each component of the ternary mixture were obtained.

  5. Maximizing the U.S. Army’s Future Contribution to Global Security Using the Capability Portfolio Analysis Tool (CPAT)

    DOE PAGES

    Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.; ...

    2016-02-01

    We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army’s acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.

  6. Design and optimization of a nanoprobe comprising amphiphilic chitosan colloids and Au-nanorods: Sensitive detection of human serum albumin in simulated urine

    NASA Astrophysics Data System (ADS)

    Jean, Ren-Der; Larsson, Mikael; Cheng, Wei-Da; Hsu, Yu-Yuan; Bow, Jong-Shing; Liu, Dean-Mo

    2016-12-01

    Metallic nanoparticles have been utilized as analytical tools to detect a wide range of organic analytes. In most reports, gold (Au)-based nanosensors have been modified with ligands to introduce selectivity towards a specific target molecule. However, in a recent study a new concept was presented where bare Au-nanorods on self-assembled carboxymethyl-hexanoyl chitosan (CHC) nanocarriers achieved sensitive and selective detection of human serum albumin (HSA) after manipulation of the solution pH. Here this concept was further advanced through optimization of the ratio between Au-nanorods and CHC nanocarriers to create a nanotechnology-based sensor (termed CHC-AuNR nanoprobe) with an outstanding lower detection limit (LDL) for HSA. The CHC-AuNR nanoprobe was evaluated in simulated urine solution and a LDL as low as 1.5 pM was achieved at an estimated AuNR/CHC ratio of 2. Elemental mapping and protein adsorption kinetics over three orders of magnitude in HSA concentration confirmed accumulation of HSA on the nanorods and revealed the adsorption to be completed within 15 min for all investigated concentrations. The results suggest that the CHC-AuNR nanoprobe has potential to be utilized for cost-effective detection of analytes in complex liquids.

  7. Singular-Arc Time-Optimal Trajectory of Aircraft in Two-Dimensional Wind Field

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2006-01-01

    This paper presents a study of a minimum time-to-climb trajectory analysis for aircraft flying in a two-dimensional, altitude-dependent wind field. The time optimal control problem possesses a singular control structure when the lift coefficient is taken as a control variable. A singular arc analysis is performed to obtain an optimal control solution on the singular arc. Using a time-scale separation with the flight path angle treated as a fast state, the dimensionality of the optimal control solution is reduced by eliminating the lift coefficient control. A further singular arc analysis is used to decompose the original optimal control solution into the flight path angle solution and a trajectory solution as a function of the airspeed and altitude. The optimal control solutions for the initial and final climb segments are computed using a shooting method with known starting values on the singular arc. The numerical results of the shooting method show that the optimal flight path angles on the initial and final climb segments are constant. The analytical approach provides a rapid means for analyzing a time optimal trajectory for aircraft performance.

  8. Importance of optimizing chromatographic conditions and mass spectrometric parameters for supercritical fluid chromatography/mass spectrometry.

    PubMed

    Fujito, Yuka; Hayakawa, Yoshihiro; Izumi, Yoshihiro; Bamba, Takeshi

    2017-07-28

    Supercritical fluid chromatography/mass spectrometry (SFC/MS) has great potential for the high-throughput, simultaneous analysis of a wide variety of compounds, and it has been widely used in recent years. The use of MS for detection provides the advantages of high sensitivity and high selectivity. However, the sensitivity of MS detection depends on the chromatographic conditions and MS parameters. Thus, optimization of MS parameters corresponding to the SFC conditions is mandatory for maximizing performance when connecting SFC to MS. The aim of this study was to reveal a way to determine the optimum composition of the mobile phase and the flow rate of the make-up solvent for MS detection across a wide range of compounds. Additionally, we also showed the basic concept for determination of the optimum values of the MS parameters focusing on the MS detection sensitivity in SFC/MS analysis. To verify the versatility of these findings, a total of 441 pesticides covering a wide range of polarity (logPow from -4.21 to 7.70) and pKa (acidic, neutral and basic) were examined. In this study, a new SFC-MS interface was used, which can transfer the entire volume of eluate into the MS by directly coupling the SFC with the MS. This enabled us to compare the sensitivity and optimum MS parameters for MS detection between LC/MS and SFC/MS for the same sample volume introduced into the MS. As a result, it was found that the optimum values of some MS parameters were completely different from those of LC/MS, and that SFC/MS-specific optimization of the analytical conditions is required. Lastly, we evaluated the sensitivity of SFC/MS using fully optimized analytical conditions. As a result, we confirmed that SFC/MS showed much higher sensitivity than LC/MS when the analytical conditions were fully optimized for SFC/MS; the high sensitivity also increases the number of compounds that can be detected with good repeatability in real sample analysis. This result indicates that SFC/MS has potential for practical use in the multiresidue analysis of a wide range of compounds that requires high sensitivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. On-matrix derivatization extraction of chemical weapons convention relevant alcohols from soil.

    PubMed

    Chinthakindi, Sridhar; Purohit, Ajay; Singh, Varoon; Dubey, D K; Pardasani, Deepak

    2013-10-11

    The present study deals with the on-matrix derivatization-extraction of aminoalcohols and thiodiglycols, which are important precursors and/or degradation products of the VX-analogue and vesicant classes of chemical warfare agents (CWAs). The method involved hexamethyldisilazane (HMDS)-mediated in situ silylation of the analytes on the soil. Subsequent extraction and gas chromatography-mass spectrometry analysis of the derivatized analytes offered better recoveries in comparison to the procedure recommended by the Organization for the Prohibition of Chemical Weapons (OPCW). Various experimental conditions such as extraction solvent, reagent and catalyst amount, reaction time and temperature were optimized. The best analyte recoveries, ranging from 45% to 103%, were obtained with DCM containing 5% (v/v) HMDS and 0.01% (w/v) iodine as catalyst. The limits of detection (LOD) and limits of quantification (LOQ) for the selected analytes ranged from 8 to 277 and from 21 to 665 ng mL(-1), respectively, in selected ion monitoring mode. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Improving Sample Distribution Homogeneity in Three-Dimensional Microfluidic Paper-Based Analytical Devices by Rational Device Design.

    PubMed

    Morbioli, Giorgio Gianini; Mazzu-Nascimento, Thiago; Milan, Luis Aparecido; Stockton, Amanda M; Carrilho, Emanuel

    2017-05-02

    Paper-based devices are a portable, user-friendly, and affordable technology that is one of the best analytical tools for inexpensive diagnostic devices. Three-dimensional microfluidic paper-based analytical devices (3D-μPADs) are an evolution of single layer devices and they permit effective sample dispersion, individual layer treatment, and multiplex analytical assays. Here, we present the rational design of a wax-printed 3D-μPAD that enables more homogeneous permeation of fluids along the cellulose matrix than other existing designs in the literature. Moreover, we show the importance of the rational design of channels on these devices using glucose oxidase, peroxidase, and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) reactions. We present an alternative method for layer stacking using a magnetic apparatus, which facilitates fluidic dispersion and improves the reproducibility of tests performed on 3D-μPADs. We also provide the optimized designs for printing, facilitating further studies using 3D-μPADs.

  11. Directed transport by surface chemical potential gradients for enhancing analyte collection in nanoscale sensors.

    PubMed

    Sitt, Amit; Hess, Henry

    2015-05-13

    Nanoscale detectors hold great promise for single molecule detection and the analysis of small volumes of dilute samples. However, the probability of an analyte reaching the nanosensor in a dilute solution is extremely low due to the sensor's small size. Here, we examine the use of a chemical potential gradient along a surface to accelerate analyte capture by nanoscale sensors. Utilizing a simple model for transport induced by surface binding energy gradients, we study the effect of the gradient on the efficiency of collecting nanoparticles and single and double stranded DNA. The results indicate that chemical potential gradients along a surface can lead to an acceleration of analyte capture by several orders of magnitude compared to direct collection from the solution. The improvement in collection is limited to a relatively narrow window of gradient slopes, and its extent strongly depends on the size of the gradient patch. Our model allows the optimization of gradient layouts and sheds light on the fundamental characteristics of chemical potential gradient induced transport.

  12. Systematic Development and Validation of a Thin-Layer Densitometric Bioanalytical Method for Estimation of Mangiferin Employing Analytical Quality by Design (AQbD) Approach.

    PubMed

    Khurana, Rajneet Kaur; Rao, Satish; Beg, Sarwar; Katare, O P; Singh, Bhupinder

    2016-01-01

    The present work aims at the systematic development of a simple, rapid and highly sensitive densitometry-based thin-layer chromatographic method for the quantification of mangiferin in bioanalytical samples. Initially, the quality target method profile was defined and critical analytical attributes (CAAs) earmarked, namely, retardation factor (Rf), peak height, capacity factor, theoretical plates and separation number. A face-centered cubic design was selected for optimization of the loaded volume and plate dimensions, the critical method parameters identified in screening studies employing D-optimal and Plackett-Burman designs, followed by evaluation of their effect on the CAAs. The mobile phase containing a mixture of ethyl acetate : acetic acid : formic acid : water in a 7 : 1 : 1 : 1 (v/v/v/v) ratio was finally selected as the optimized solvent for apt chromatographic separation of mangiferin at 262 nm with Rf 0.68 ± 0.02 and all other parameters within the acceptance limits. Method validation studies revealed high linearity in the concentration range of 50-800 ng/band for mangiferin. The developed method showed high accuracy, precision, ruggedness, robustness, specificity, sensitivity, selectivity and recovery. In a nutshell, the bioanalytical method for analysis of mangiferin in plasma revealed the presence of well-resolved peaks and high recovery of mangiferin. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    PubMed

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC-generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼ 1000 cm(3) and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was developed. Treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
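
    The spot-weight optimization step can be illustrated with a toy non-negative least-squares fit of a dose-influence matrix to a prescription, as sketched below. The matrix, voxel and spot counts, and the uniform prescription are made up, and the paper's actual objective is a modified least-squares formulation with dose-volume goals on MC-generated influence maps, which this sketch does not implement.

      import numpy as np
      from scipy.optimize import nnls

      # Toy spot-weight optimisation: fit a mock dose-influence matrix D (voxels x spots)
      # to a uniform prescription with non-negative spot weights.
      rng = np.random.default_rng(0)
      n_voxels, n_spots = 200, 40
      D = rng.random((n_voxels, n_spots)) * 0.1       # dose per unit spot weight (arbitrary units)
      prescription = np.full(n_voxels, 2.0)           # uniform target dose

      weights, residual = nnls(D, prescription)
      dose = D @ weights
      print(f"non-zero spots: {(weights > 0).sum()}, max dose error: {np.abs(dose - prescription).max():.3f}")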

  14. TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Gordon, J; Chetty, I

    2014-06-15

    Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time Td, half-life of dying cells Tr and cell survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from the two existing models, the Chvetsov model (C-model) and the Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was analytically constructed from the cell-population differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.
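
    A stripped-down, two-compartment version of such a birth-and-death model can be simulated as below: viable cells grow with doubling time Td, and each daily fraction moves a proportion (1 - SFD) of them into a dying pool that clears with half-life Tr. The parameter values and the once-daily 30-fraction schedule are illustrative assumptions, not the calibrated values or schedule from the study.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Two-compartment toy: viable cells N grow with doubling time Td; each daily fraction
      # moves (1 - SF_D) of them into a dying pool M that clears with half-life Tr.
      Td, Tr, SF_D = 4.0, 17.5, 0.43                 # days, days, survival per fraction (illustrative)
      growth, clearance = np.log(2) / Td, np.log(2) / Tr

      def rates(t, y):
          N, M = y
          return [growth * N, -clearance * M]

      y = np.array([1.0, 0.0])                       # normalised viable / dying cell counts
      volume = []
      for day in range(30):                          # one fraction per day
          sol = solve_ivp(rates, (0.0, 1.0), y)      # grow/clear for one day between fractions
          N, M = sol.y[:, -1]
          killed = (1 - SF_D) * N
          y = np.array([N - killed, M + killed])     # instantaneous cell kill at the fraction
          volume.append(y.sum())                     # "tumour volume" ~ total cell number
      print(f"relative volume after 30 fractions: {volume[-1]:.3f}")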

  15. Analytical models integrated with satellite images for optimized pest management

    USDA-ARS?s Scientific Manuscript database

    The global field protection (GFP) was developed to protect and optimize pest management resources integrating satellite images for precise field demarcation with physical models of controlled release devices of pesticides to protect large fields. The GFP was implemented using a graphical user interf...

  16. Optimized protocol for quantitative multiple reaction monitoring-based proteomic analysis of formalin-fixed, paraffin embedded tissues

    PubMed Central

    Kennedy, Jacob J.; Whiteaker, Jeffrey R.; Schoenherr, Regine M.; Yan, Ping; Allison, Kimberly; Shipley, Melissa; Lerch, Melissa; Hoofnagle, Andrew N.; Baird, Geoffrey Stuart; Paulovich, Amanda G.

    2016-01-01

    Despite a clinical, economic, and regulatory imperative to develop companion diagnostics, precious few new biomarkers have been successfully translated into clinical use, due in part to inadequate protein assay technologies to support large-scale testing of hundreds of candidate biomarkers in formalin-fixed paraffin embedded (FFPE) tissues. While the feasibility of using targeted, multiple reaction monitoring-mass spectrometry (MRM-MS) for quantitative analyses of FFPE tissues has been demonstrated, protocols have not been systematically optimized for robust quantification across a large number of analytes, nor has the performance of peptide immuno-MRM been evaluated. To address this gap, we used a test battery approach coupled to MRM-MS with the addition of stable isotope labeled standard peptides (targeting 512 analytes) to quantitatively evaluate the performance of three extraction protocols in combination with three trypsin digestion protocols (i.e. 9 processes). A process based on RapiGest buffer extraction and urea-based digestion was identified to enable similar quantitation results from FFPE and frozen tissues. Using the optimized protocols for MRM-based analysis of FFPE tissues, median precision was 11.4% (across 249 analytes). There was excellent correlation between measurements made on matched FFPE and frozen tissues, both for direct MRM analysis (R2 = 0.94) and immuno-MRM (R2 = 0.89). The optimized process enables highly reproducible, multiplex, standardizable, quantitative MRM in archival tissue specimens. PMID:27462933

  17. What REALLY Works: Optimizing Classroom Discussions to Promote Comprehension and Critical-Analytic Thinking

    ERIC Educational Resources Information Center

    Murphy, P. Karen; Firetto, Carla M.; Wei, Liwei; Li, Mengyi; Croninger, Rachel M. V.

    2016-01-01

    Many American students struggle to perform even basic comprehension of text, such as locating information, determining the main idea, or supporting details of a story. Even more students are inadequately prepared to complete more complex tasks, such as critically or analytically interpreting information in text or making reasoned decisions from…

  18. The Skinny on Big Data in Education: Learning Analytics Simplified

    ERIC Educational Resources Information Center

    Reyes, Jacqueleen A.

    2015-01-01

    This paper examines the current state of learning analytics (LA), its stakeholders and the benefits and challenges these stakeholders face. LA is a field of research that involves the gathering, analyzing and reporting of data related to learners and their environments with the purpose of optimizing the learning experience. Stakeholders in LA are…

  19. The Challenge of Developing a Universal Case Conceptualization for Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Bonow, Jordan T.; Maragakis, Alexandros; Follette, William C.

    2012-01-01

    Functional Analytic Psychotherapy (FAP) targets a client's interpersonal behavior for change with the goal of improving his or her quality of life. One question guiding FAP case conceptualization is, "What interpersonal behavioral repertoires will allow a specific client to function optimally?" Previous FAP writings have suggested that a therapist…

  20. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.

  1. Optimization of two-dimensional gas chromatography time-of-flight mass spectrometry for separation and estimation of the residues of 160 pesticides and 25 persistent organic pollutants in grape and wine.

    PubMed

    Dasgupta, Soma; Banerjee, Kaushik; Patil, Sangram H; Ghaste, Manoj; Dhumal, K N; Adsule, Pandurang G

    2010-06-11

    A two-dimensional gas chromatography (GCxGC) method coupled with time-of-flight mass spectrometry (TOFMS) was optimized for the simultaneous analysis of 160 pesticides, 12 dioxin-like polychlorinated biphenyls (PCBs), 12 polyaromatic hydrocarbons (PAHs) and bisphenol A in grape and wine. GCxGC-TOFMS could separate all 185 analytes within 38 min with >85% NIST library-based mass spectral confirmations. The matrix effect, quantified as the ratio of the slopes of matrix-matched to solvent calibrations, was within 0.5-1.5 for most analytes. The LOQ of most analytes was ≤10 μg/L, with nine exceptions having LOQs of 12.5-25 μg/L. Recoveries ranged between 70 and 120% with <20% expanded uncertainties for 151 and 148 compounds in grape and wine, respectively, with intra-laboratory Horwitz ratio <0.2 for all analytes. The method was evaluated in incurred grape samples, where residues of cypermethrin, permethrin, chlorpyriphos, metalaxyl and etophenprox were detected below the MRL. Copyright 2010 Elsevier B.V. All rights reserved.

  2. Optimum design of structures subject to general periodic loads

    NASA Technical Reports Server (NTRS)

    Reiss, Robert; Qian, B.

    1989-01-01

    A simplified version of Icerman's problem regarding the design of structures subject to a single harmonic load is discussed. The nature of the restrictive conditions that must be placed on the design space in order to ensure an analytic optimum is discussed in detail. Icerman's problem is then extended to include multiple forcing functions with different driving frequencies, and the conditions that must now be placed upon the design space to ensure an analytic optimum are again discussed. An important finding is that all solutions to the optimality condition (analytic stationary designs) are local optima, but the global optimum may well be non-analytic. The more general problem of distributing the fixed mass of a linear elastic structure subject to general periodic loads in order to minimize some measure of the steady-state deflection is also considered. This response is explicitly expressed in terms of Green's functional and the abstract operators defining the structure. The optimality criterion is derived by differentiating the response with respect to the design parameters. The theory is applicable to finite element as well as distributed-parameter models.

  3. Optimal Design of a Thermoelectric Cooling/Heating System for Car Seat Climate Control (CSCC)

    NASA Astrophysics Data System (ADS)

    Elarusi, Abdulmunaem; Attar, Alaa; Lee, Hosung

    2017-04-01

    In the present work, the optimum design of thermoelectric car seat climate control (CSCC) is studied analytically in an attempt to achieve high system efficiency. Optimal design of a thermoelectric device (element length, cross-section area and number of thermocouples) is carried out using our newly developed optimization method based on the ideal thermoelectric equations and dimensional analysis to improve the performance of the thermoelectric device in terms of the heating/cooling power and the coefficient of performance (COP). Then, a new innovative system design is introduced which also includes the optimum input current for the initial (transient) startup warming and cooling before the car heating ventilation and air conditioner (HVAC) is active in the cabin. The air-to-air heat exchanger's configuration was taken into account to investigate the optimal design of the CSCC.
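
    The ideal thermoelectric equations that such optimizations build on can be evaluated directly, as in the sketch below, which scans the drive current of a single couple for the best COP. The material properties, element geometry and junction temperatures are typical illustrative values, not the paper's optimized CSCC design.

      import numpy as np

      # Ideal thermoelectric-couple equations (no contact resistances); all values are assumed.
      alpha = 4.0e-4                   # Seebeck coefficient of the couple, V/K
      rho, kappa = 1.0e-5, 1.5         # electrical resistivity (ohm*m), thermal conductivity (W/m/K)
      length, area = 1.5e-3, 1.0e-6    # element length (m) and cross-section (m^2)
      Tc, Th = 290.0, 310.0            # cold and hot junction temperatures, K

      R = rho * length / area          # electrical resistance of the couple
      K = kappa * area / length        # thermal conductance of the couple
      dT = Th - Tc

      def cooling_power(I):
          return alpha * I * Tc - 0.5 * I**2 * R - K * dT

      def cop(I):
          return cooling_power(I) / (alpha * I * dT + I**2 * R)

      currents = np.linspace(0.2, 8.0, 400)
      best = currents[np.argmax([cop(i) for i in currents])]
      print(f"current maximising COP: {best:.2f} A, COP = {cop(best):.2f}, Qc = {cooling_power(best):.3f} W")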

  4. Cooling Panel Optimization for the Active Cooling System of a Hypersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Youn, B.; Mills, A. F.

    1995-01-01

    Optimization of cooling panels for an active cooling system of a hypersonic aircraft is explored. The flow passages are of rectangular cross section with one wall heated. An analytical fin-type model for incompressible flow in smooth-wall rectangular ducts with coupled wall conduction is proposed. Based on this model, the minimum mass flow rate of coolant for a single cooling panel is obtained by satisfying hydrodynamic, thermal, and Mach number constraints. Also, the sensitivity of the optimal mass flow rate of coolant to each design variable is investigated. In addition, numerical solutions for constant-property flow in rectangular ducts, with one side rib-roughened and coupled wall conduction, are obtained using a k-epsilon and wall-function turbulence model; these results are compared with predictions of the analytical model.

  5. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

    This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small-dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation for a vertically launched 2-stage heavy-lift capacity vehicle to a low Earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example, the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  6. Primer vector theory applied to the linear relative-motion equations. [for N-impulse space trajectory optimization

    NASA Technical Reports Server (NTRS)

    Jezewski, D.

    1980-01-01

    Primer vector theory is used in analyzing a set of linear relative-motion equations - the Clohessy-Wiltshire (C/W) equations - to determine the criteria and necessary conditions for an optimal N-impulse trajectory. The analysis develops the analytical criteria for improving a solution by: (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of: (1) a fixed-end-condition, two-impulse, time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem.

  7. Design optimization of an axial-field eddy-current magnetic coupling based on magneto-thermal analytical model

    NASA Astrophysics Data System (ADS)

    Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine

    2018-03-01

    This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and a population algorithm method. The main objective of this paper is to determine the best design in terms of magnets volume in order to transmit a torque between two movers, while ensuring a low slip speed and a good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed values. Nevertheless, in order to solve more realistic problems, and then, take into account the thermal effects on the torque value, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.

  8. On finding the analytic dependencies of the external field potential on the control function when optimizing the beam dynamics

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, A. D.; Kozynchenko, S. A.; Kozynchenko, V. A.

    2017-12-01

    When developing a particle accelerator for generating high-precision beams, the design of the injection system is important because it largely determines the output characteristics of the beam. In the present paper, we consider injection systems consisting of electrodes with given potentials. The design of such systems requires simulation of the beam dynamics in the electrostatic fields. For the external field simulation we use either the new approach proposed by A.D. Ovsyannikov, which is based on analytical approximations, or the finite difference method, taking into account the real geometry of the injection system. Software for beam dynamics simulation and optimization in the injection system for non-relativistic beams has been developed. Beam dynamics and electric field simulations of the injection system, using both the analytical approach and the finite difference method, have been performed, and the results are presented in this paper.
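
    The finite-difference side of such a field simulation reduces, in its simplest form, to solving Laplace's equation with the electrode potentials as boundary conditions. The sketch below does this for a rectangular gap between a grounded electrode and one held at an assumed 25 kV; the grid, geometry and potential are invented for illustration and ignore the real electrode shapes the paper accounts for.

      import numpy as np

      # Minimal 2-D finite-difference Laplace solver for the gap between two electrodes.
      # Top and bottom boundaries are simply held at 0 V in this made-up geometry.
      nx, ny, n_iter = 80, 40, 5000
      V = np.zeros((ny, nx))
      V[:, 0] = 0.0                    # grounded electrode on the left boundary
      V[:, -1] = 25_000.0              # extraction electrode held at 25 kV on the right boundary

      for _ in range(n_iter):          # Jacobi iteration on the interior nodes
          V[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] + V[1:-1, :-2] + V[1:-1, 2:])

      Ey, Ex = np.gradient(-V)         # field components for subsequent particle tracking
      print(f"axial field at mid-gap: {Ex[ny // 2, nx // 2]:.1f} V per grid step")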

  9. Single-step transesterification with simultaneous concentration and stable isotope analysis of fatty acid methyl esters by gas chromatography-combustion-isotope ratio mass spectrometry.

    PubMed

    Panetta, Robert J; Jahren, A Hope

    2011-05-30

    Gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS) is increasingly applied to food and metabolic studies for stable isotope analysis (δ(13)C), with the quantification of analyte concentration often obtained via a second alternative method. We describe a rapid direct transesterification of triacylglycerides (TAGs) for fatty acid methyl ester (FAME) analysis by GC-C-IRMS demonstrating robust simultaneous quantification of amount of analyte (mean r(2) = 0.99, accuracy ±2% for 37 FAMEs) and δ(13)C (±0.13‰) in a single analytical run. The maximum FAME yield and optimal δ(13)C values are obtained by derivatizing with 10% (v/v) acetyl chloride in methanol for 1 h, while lower levels of acetyl chloride and shorter reaction times skewed the δ(13)C values by as much as 0.80‰. A Bland-Altman evaluation of the GC-C-IRMS measurements resulted in excellent agreement for pure oils (±0.08‰) and oils extracted from French fries (±0.49‰), demonstrating reliable simultaneous quantification of FAME concentration and δ(13)C values. Thus, we conclude that for studies requiring both the quantification of analyte and δ(13)C data, such as authentication or metabolic flux studies, GC-C-IRMS can be used as the sole analytical method. Copyright © 2011 John Wiley & Sons, Ltd.

  10. A Comparative Study of Single-pulse and Double-pulse Laser-Induced Breakdown Spectroscopy with Uranium-containing Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skrodzki, P. J.; Becker, J. R.; Diwakar, P. K.

    Laser-induced breakdown spectroscopy (LIBS) holds potential advantages in special nuclear material (SNM) sensing and nuclear forensics which require rapid analysis, minimal sample preparation and stand-off distance capability. SNM, such as U, however, result in crowded emission spectra with LIBS, and characteristic emission lines are challenging to discern. It is well-known that double-pulse LIBS (DPLIBS) improves the signal intensity for analytes over conventional single-pulse LIBS (SPLIBS). This study investigates U signal in a glass matrix using DPLIBS and compares to signal features obtained using SPLIBS. DPLIBS involves sequential firing of 1.06 µm Nd:YAG pre-pulse and 10.6 µm TEA CO2 heating pulse in near-collinear geometry. Optimization of experimental parameters including inter-pulse delay and energy follows identification of characteristic lines and signals for bulk analyte Ca and minor constituent analyte U for both DPLIBS and SPLIBS. Spatial and temporal coupling of the two pulses in the proposed DPLIBS technique yields improvements in analytical merits with negligible further damage to the sample compared to SPLIBS. Subsequently, the study discusses optimum plasma emission conditions of U lines and relative figures of merit in both SPLIBS and DPLIBS. Investigation into plasma characteristics also addresses plausible mechanisms related to observed U analyte signal variation between SPLIBS and DPLIBS.

  11. Ionic liquid-based single-drop microextraction followed by liquid chromatography-ultraviolet spectrophotometry detection to determine typical UV filters in surface water samples.

    PubMed

    Vidal, Lorena; Chisvert, Alberto; Canals, Antonio; Salvador, Amparo

    2010-04-15

    A user-friendly and inexpensive ionic liquid-based single-drop microextraction (IL-SDME) procedure has been developed to preconcentrate trace amounts of six typical UV filters extensively used in cosmetic products (i.e., 2-hydroxy-4-methoxybenzophenone, isoamyl 4-methoxycinnamate, 3-(4'-methylbenzylidene)camphor, 2-ethylhexyl 2-cyano-3,3-diphenylacrylate, 2-ethylhexyl 4-dimethylaminobenzoate and 2-ethylhexyl 4-methoxycinnamate) from surface water samples prior to analysis by liquid chromatography-ultraviolet spectrophotometry detection (LC-UV). A two-stage multivariate optimization approach was developed by means of a Plackett-Burman design for screening and selecting the significant variables involved in the SDME procedure, which were later optimized by means of a circumscribed central composite design. The studied variables were drop volume, sample volume, agitation speed, ionic strength, extraction time and ethanol quantity. Owing to their particular behaviour, the ionic liquid type and the sample pH were optimized separately. Under the optimized experimental conditions (i.e., 10 microL of 1-hexyl-3-methylimidazolium hexafluorophosphate, 20 mL of sample containing 1% (v/v) ethanol, NaCl-free and adjusted to pH 2, 37 min extraction time and 1300 rpm agitation speed) enrichment factors up to ca. 100-fold were obtained depending on the target analyte. The method gave good levels of repeatability with relative standard deviations varying between 2.8 and 8.8% (n=6). Limits of detection were found in the low microg L(-1) range, varying between 0.06 and 3.0 microg L(-1) depending on the target analyte. Recovery studies from different types of surface water samples collected during the winter period, which were analysed and confirmed free of all target analytes, ranged between 92 and 115%, showing that the matrix had a negligible effect upon extraction. Finally, the proposed method was applied to the analysis of different water samples (taken from two beaches, two swimming pools and a river) collected during the summer period. (c) 2009 Elsevier B.V. All rights reserved.

  12. A Model for Developing Clinical Analytics Capacity: Closing the Loops on Outcomes to Optimize Quality.

    PubMed

    Eggert, Corinne; Moselle, Kenneth; Protti, Denis; Sanders, Dale

    2017-01-01

    Closed Loop Analytics© is receiving growing interest in healthcare as a term referring to information technology, local data and clinical analytics working together to generate evidence for improvement. The Closed Loop Analytics model consists of three loops corresponding to the decision-making levels of an organization and the associated data within each loop - Patients, Protocols, and Populations. The authors propose that each of these levels should utilize the same ecosystem of electronic health record (EHR) and enterprise data warehouse (EDW) enabled data, in a closed-loop fashion, with that data being repackaged and delivered to suit the analytic and decision support needs of each level, in support of better outcomes.

  13. Ruggedness testing and validation of a practical analytical method for > 100 veterinary drug residues in bovine muscle by ultrahigh performance liquid chromatography – tandem mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    In this study, optimization, extension, and validation of a streamlined, qualitative and quantitative multiclass, multiresidue method was conducted to monitor greater than 100 veterinary drug residues in meat using ultrahigh-performance liquid chromatography – tandem mass spectrometry (UHPLC-MS/MS). I...

  14. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  15. Charles Morris's Semiotic Model and Analytical Studies of Visual and Verbal Representations in Technical Communication

    ERIC Educational Resources Information Center

    Fan, Jiang-Ping

    2006-01-01

    In this article, the author demonstrates that the semiotic model proposed by Charles Morris enables us to optimize our understanding of technical communication practices and provides a good point of inquiry. To illustrate this point, the author exemplifies the semiotic approaches by scholars in technical communication and elaborates Morris's model…

  16. Sensitive screening of abused drugs in dried blood samples using ultra-high-performance liquid chromatography-ion booster-quadrupole time-of-flight mass spectrometry.

    PubMed

    Chepyala, Divyabharathi; Tsai, I-Lin; Liao, Hsiao-Wei; Chen, Guan-Yuan; Chao, Hsi-Chun; Kuo, Ching-Hua

    2017-03-31

    An increased rate of drug abuse is a major social problem worldwide. The dried blood spot (DBS) sampling technique offers many advantages over using urine or whole blood sampling techniques. This study developed a simple and efficient ultra-high-performance liquid chromatography-ion booster-quadrupole time-of-flight mass spectrometry (UHPLC-IB-QTOF-MS) method for the analysis of abused drugs and their metabolites using DBS. Fifty-seven compounds covering the most commonly abused drugs, including amphetamines, opioids, cocaine, benzodiazepines, barbiturates, and many other new and emerging abused drugs, were selected as the target analytes of this study. An 80% acetonitrile solvent with a 5-min extraction by Geno grinder was used for sample extraction. A Poroshell column was used to provide efficient separation, and under optimal conditions, the analytical times were 15 and 5 min in positive and negative ionization modes, respectively. Ionization parameters of both the electrospray ionization source and the ion booster (IB) source containing an extra heated zone were optimized to achieve the best ionization efficiency of the investigated abused drugs. In spite of their structural diversity, most of the abused drugs showed an enhanced mass response with the high temperature ionization from the extra heated zone of the IB source. Compared to electrospray ionization, the ion booster (IB) greatly improved the detection sensitivity for 86% of the analytes by 1.5-14-fold and allowed the developed method to detect trace amounts of compounds on the DBS cards. The validation results showed that the coefficients of variation of intra-day and inter-day precision in terms of the signal intensity were lower than 19.65%. The extraction recovery of all analytes was between 67.21 and 115.14%. The limits of detection of all analytes were between 0.2 and 35.7 ng mL(-1). The stability study indicated that 7% of compounds showed poor stability (below 50%) on the DBS cards after 6 months of storage at room temperature and -80°C. The reported method provides a new direction for abused drug screening using DBS. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Charging power optimization for nonlinear vibration energy harvesting systems subjected to arbitrary, persistent base excitations

    NASA Astrophysics Data System (ADS)

    Dai, Quanqi; Harne, Ryan L.

    2018-01-01

    The vibrations of mechanical systems and structures are often a combination of periodic and random motions. Emerging interest in exploiting nonlinearities in vibration energy harvesting systems for charging microelectronics may be challenged by this reality due to the potential to transition between favorable and unfavorable dynamic regimes for DC power delivery. Therefore, a need exists to devise an optimization method whereby the charging power from nonlinear energy harvesters remains maximized when excitation conditions are neither purely harmonic nor purely random, which have been the focus of past research. This study meets that need by building from an analytical approach that characterizes the dynamic response of nonlinear energy harvesting platforms subjected to combined harmonic and stochastic base accelerations. Here, analytical expressions are formulated and validated to optimize charging power while the influences of the relative proportions of excitation types are concurrently assessed. It is found that about a two-fold deviation in the optimal resistive load can reduce the charging power by 20% when the system is more prominently driven by harmonic base accelerations, whereas a greater proportion of stochastic excitation results in an 11% reduction in power for the same resistance deviation. In addition, the results reveal that when the frequency of a predominantly harmonic excitation deviates by 50% from optimal conditions the charging power reduces by 70%, whereas the same frequency deviation for a more stochastically dominated excitation reduces total DC power by only 20%. These results underscore the need for maximizing direct current power delivery for nonlinear energy harvesting systems in practical operating environments.

  18. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.

  19. Quantitative structure-retention relationships applied to development of liquid chromatography gradient-elution method for the separation of sartans.

    PubMed

    Golubović, Jelena; Protić, Ana; Otašević, Biljana; Zečević, Mira

    2016-04-01

    QSRR are mathematically derived relationships between the chromatographic parameters determined for a representative series of analytes in given separation systems and the molecular descriptors accounting for the structural differences among the investigated analytes. An artificial neural network (ANN) is a technique of data analysis that sets out to emulate the human brain's way of working. The aim of the present work was to optimize the separation of six angiotensin receptor antagonists, the so-called sartans (losartan, valsartan, irbesartan, telmisartan, candesartan cilexetil and eprosartan), in a gradient-elution HPLC method. For this purpose, an ANN was used as a mathematical tool for establishing a QSRR model based on the molecular descriptors of the sartans and the varied instrumental conditions. The optimized model can be further used for prediction of an external congener of the sartans and for analysis of the influence of the analyte structure, represented through molecular descriptors, on retention behaviour. The molecular descriptors included in the modelling were electrostatic, geometrical and quantum-chemical descriptors: Connolly solvent-excluded volume, non-1,4 van der Waals energy, octanol/water distribution coefficient, polarizability, number of proton-donor sites and number of proton-acceptor sites. The varied instrumental conditions were gradient time, buffer pH and buffer molarity. The high prediction ability of the optimized network enabled complete separation of the analytes within a run time of 15.5 min under the following conditions: gradient time of 12.5 min, buffer pH of 3.95 and buffer molarity of 25 mM. The applied methodology showed the potential to predict the retention behaviour of an external analyte with properties within the training space. Connolly solvent-excluded volume, polarizability and the number of proton-acceptor sites appeared to be the most influential parameters for the retention behaviour of the sartans. Copyright © 2015 Elsevier B.V. All rights reserved.
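
    To make the QSRR idea above concrete, the sketch below maps a vector of molecular descriptors plus instrumental settings to a retention time with a small feed-forward network. It is not the published model: scikit-learn is assumed to be available, and the descriptors, conditions and retention times are invented placeholders.

    ```python
    # Illustrative QSRR-style sketch (not the published ANN): descriptors plus
    # gradient conditions in, retention time out. All numbers are invented.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Columns: [excluded volume, logD, polarizability, H-bond donors, H-bond acceptors,
    #           gradient time (min), buffer pH, buffer molarity (mM)]
    X = rng.uniform([200, 1, 30, 0, 4, 10, 3.0, 10],
                    [500, 6, 70, 3, 9, 20, 5.0, 40], size=(60, 8))
    # Placeholder retention times with a made-up dependence on the inputs.
    t_R = 2 + 0.01 * X[:, 0] + 0.8 * X[:, 1] - 0.2 * X[:, 6] + rng.normal(0, 0.2, 60)

    scaler = StandardScaler().fit(X)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(scaler.transform(X), t_R)

    # Predict retention for a new, hypothetical analyte/condition combination.
    x_new = np.array([[350, 4.2, 55, 1, 6, 12.5, 3.95, 25]])
    print("predicted retention time (min):", round(net.predict(scaler.transform(x_new))[0], 2))
    ```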

  20. The effect of pre-evaporation on ion distributions in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Liu, Shulan; Beauchemin, Diane

    2006-02-01

    The connecting tube (2- or 5-mm i.d., 11-cm long) between the spray chamber and the torch was heated (to 400 °C) to investigate the effect of pre-evaporation on the distribution of ions in inductively coupled plasma mass spectrometry (ICP-MS). Axial and radial profiles of analyte ions (Al+, V+, Cr+, Ni+, Zn+, Mn+, Zn+, As+, Se+, Mo+, Cd+, Sb+, La+, Pb+) in 1% HNO3 as well as some polyatomic ions (LaO+, ArO+, ArN+, CO2+) were simultaneously obtained on a time-of-flight ICP-MS instrument. Upon heating the connecting tube, the optimal axial position of all elements shifted closer to the load coil. Without the heated tube, 3.5 mm was the compromise axial position for multielemental analysis, which was optimal for 6 analytes. With the heated tube, this position became 1.5 mm, which was then optimal for 9 of the 14 analytes. Furthermore, the radial profiles, which were wide with a plateau in their middle without heating, became significantly narrower and Gaussian-like with a heated tube. This narrowing, which was most important for the 5-mm tube, slightly (by a factor of two at the most) yet significantly (at the 95% confidence level) improved the sensitivity of all elements but Mn upon optimisation of the axial position for compromise multi-element analysis. Furthermore, a concurrent decrease in the standard deviation of the blank was significant at the 95% confidence level for 9 of the 14 analytes. For most of the analytes, this translated into a two-fold to up to an order of magnitude improvement in detection limit, which is commensurate with a reduction of noise resulting from the smaller droplets entering the plasma after traversing the pre-evaporation tube.

  1. Scalable Rapidly Deployable Convex Optimization for Data Analytics

    DTIC Science & Technology

    … SOCPs, SDPs, exponential cone programs, and power cone programs. CVXPY supports basic methods for distributed optimization on … multiple heterogeneous platforms. We have also done basic research in various application areas, using CVXPY, to demonstrate its usefulness. See attached report for publication information. … Over the period of the contract we have developed the full stack for wide use of convex optimization, in machine learning and many other areas.
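
    For readers unfamiliar with the tool named in this record, the minimal example below shows the CVXPY modelling style on a non-negative least-squares problem, one of the basic convex programs it can express. The problem data are random; the example is not taken from the cited report.

    ```python
    # Minimal CVXPY usage example: non-negative least squares.
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10))
    b = rng.standard_normal(30)

    x = cp.Variable(10)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
    problem.solve()

    print("optimal objective value:", round(problem.value, 4))
    print("constraint satisfied:", bool(np.all(x.value >= -1e-8)))
    ```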

  2. Service Bundle Recommendation for Person-Centered Care Planning in Cities.

    PubMed

    Kotoulas, Spyros; Daly, Elizabeth; Tommasi, Pierpaolo; Kishimoto, Akihiro; Lopez, Vanessa; Stephenson, Martin; Botea, Adi; Sbodio, Marco; Marinescu, Radu; Rooney, Ronan

    2016-01-01

    Providing appropriate support for the most vulnerable individuals carries enormous societal significance and economic burden. Yet, finding the right balance between costs, estimated effectiveness and the experience of the care recipient is a daunting task that requires considering vast amounts of information. We present a system that helps care teams choose the optimal combination of providers for a set of services. We draw from techniques in Open Data processing, semantic processing, faceted exploration, visual analytics, transportation analytics and multi-objective optimization. We present an implementation of the system using data from New York City and illustrate the feasibility of these technologies to guide care workers in care planning.

  3. Mass Spectrometry Parameters Optimization for the 46 Multiclass Pesticides Determination in Strawberries with Gas Chromatography Ion-Trap Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Fernandes, Virgínia C.; Vera, Jose L.; Domingues, Valentina F.; Silva, Luís M. S.; Mateus, Nuno; Delerue-Matos, Cristina

    2012-12-01

    A multiclass analysis method was optimized in order to analyze pesticide traces by gas chromatography with ion-trap and tandem mass spectrometry (GC-MS/MS). The influence of some analytical parameters on the pesticide signal response was explored. Five ion trap mass spectrometry (IT-MS) operating parameters, including isolation time (IT), excitation voltage (EV), excitation time (ET), maximum excitation energy or "q" value (q), and isolation mass window (IMW), were numerically tested in order to maximize the instrument's analytical signal response. For this, multiple linear regression was used in data analysis to evaluate the influence of the five parameters on the analytical response in the ion trap mass spectrometer and to predict its response. The assessment of the five parameters based on the regression equations substantially increased the sensitivity of IT-MS/MS in the MS/MS mode. The results obtained show that for most of the pesticides, these parameters have a strong influence on both signal response and detection limit. Using the optimized method, a multiclass pesticide analysis was performed for 46 pesticides in a strawberry matrix. Levels higher than the limit established for strawberries by the European Union were found in some samples.
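
    The regression step described above can be sketched briefly: fit the signal response as a linear function of the five ion-trap parameters and inspect the fitted coefficients. The parameter ranges and responses below are invented, not the study's measurements.

    ```python
    # Sketch of a multiple linear regression over the five IT-MS parameters
    # (invented data; the real study fitted measured signal responses).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 40
    # Columns: isolation time, excitation voltage, excitation time, q value, isolation mass window
    P = rng.uniform([2, 0.2, 10, 0.2, 1.0], [12, 1.2, 30, 0.45, 5.0], size=(n, 5))
    signal = (2000 + 4000 * P[:, 1] + 20 * P[:, 2] - 150 * P[:, 4]
              + rng.normal(0, 100, n))                      # made-up linear response

    X = np.column_stack([np.ones(n), P])
    coef = np.linalg.lstsq(X, signal, rcond=None)[0]
    for name, c in zip(["intercept", "IT", "EV", "ET", "q", "IMW"], coef):
        print(f"{name:>9s}: {c: .3g}")
    ```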

  4. Integrated optimisation technique based on computer-aided capacity and safety evaluation for managing downstream lane-drop merging area of signalised junctions

    NASA Astrophysics Data System (ADS)

    Chen, CHAI; Yiik Diew, WONG

    2017-02-01

    This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at the downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiments, the proposed FCA approach is able to provide capacity and safety evaluation of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and the analytic hierarchy process (AHP). Optimized geometric layouts and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance that is dependent on traffic volume and speed limit can thus be established at the downstream merging area.
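
    The AHP weighting step mentioned above is commonly computed as the principal eigenvector of a pairwise-comparison matrix. The sketch below shows this for a hypothetical three-criterion comparison (not the study's data), including the usual consistency check.

    ```python
    # AHP priority weights from a pairwise-comparison matrix (hypothetical criteria:
    # capacity, safety, delay). Weights are the normalized principal eigenvector.
    import numpy as np

    A = np.array([[1.0, 1/3, 2.0],
                  [3.0, 1.0, 4.0],
                  [0.5, 1/4, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                    # priority weights

    n = A.shape[0]
    CI = (eigvals[k].real - n) / (n - 1)            # consistency index
    RI = 0.58                                       # random index for n = 3
    print("weights:", np.round(w, 3), " consistency ratio:", round(CI / RI, 3))
    ```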

  5. Optimal insecticide-treated bed-net coverage and malaria treatment in a malaria-HIV co-infection model.

    PubMed

    Mohammed-Awel, Jemal; Numfor, Eric

    2017-03-01

    We propose and study a mathematical model for malaria-HIV co-infection transmission and control, in which malaria treatment and insecticide-treated nets are incorporated. The existence of a backward bifurcation is established analytically, and the occurrence of such backward bifurcation is influenced by disease-induced mortality, insecticide-treated bed-net coverage and malaria treatment parameters. To further assess the impact of malaria treatment and insecticide-treated bed-net coverage, we formulate an optimal control problem with malaria treatment and insecticide-treated nets as control functions. Using reasonable parameter values, numerical simulations of the optimal control suggest the possibility of eliminating malaria and reducing HIV prevalence significantly, within a short time horizon.

  6. Optimal design application on the advanced aeroelastic rotor blade

    NASA Technical Reports Server (NTRS)

    Wei, F. S.; Jones, R.

    1985-01-01

    The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases obtained from the rotorcraft flight simulation program C81 and the Myklestad mode shape program are analytically determined as functions of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. This method can also be utilized to ascertain, without any additional effort, the effect of a particular cost function that is composed of several objective functions with different weighting factors for various mission requirements.

  7. Andy Walker | NREL

    Science.gov Websites

    Staff profile fragments: energy efficiency and renewable energy projects; patent on the Renewable Energy Optimization (REO) method; distribution function for time-series simulation; analytical and numerical optimization; project delivery; System Operations and Maintenance: 2nd Edition, 2016, NREL/Sandia/Sunspec Alliance SuNLaMP PV O&M.

  8. Shape Optimization of Cylindrical Shell for Interior Noise

    NASA Technical Reports Server (NTRS)

    Robinson, Jay H.

    1999-01-01

    In this paper an analytic method is used to solve for the cross spectral density of the interior acoustic response of a cylinder with nonuniform thickness subjected to turbulent boundary layer excitation. The cylinder is of honeycomb core construction with the thickness of the core material expressed as a cosine series in the circumferential direction. The coefficients of this series are used as the design variables in the optimization study. The objective function is the space- and frequency-averaged acoustic response. Results confirm the presence of multiple local minima as previously reported and demonstrate the potential for modest noise reduction.

  9. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.

  10. Analytical approximation and numerical simulations for periodic travelling water waves

    NASA Astrophysics Data System (ADS)

    Kalimeris, Konstantinos

    2017-12-01

    We present recent analytical and numerical results for two-dimensional periodic travelling water waves with constant vorticity. The analytical approach is based on novel asymptotic expansions. We obtain numerical results in two different ways: the first is based on the solution of a constrained optimization problem, and the second is realized as a numerical continuation algorithm. Both methods are applied on some examples of non-constant vorticity. This article is part of the theme issue 'Nonlinear water waves'.

  11. Retail video analytics: an overview and survey

    NASA Astrophysics Data System (ADS)

    Connell, Jonathan; Fan, Quanfu; Gabbur, Prasad; Haas, Norman; Pankanti, Sharath; Trinh, Hoang

    2013-03-01

    Today retail video analytics has gone beyond the traditional domain of security and loss prevention by providing retailers insightful business intelligence such as store traffic statistics and queue data. Such information allows for enhanced customer experience, optimized store performance, reduced operational costs, and ultimately higher profitability. This paper gives an overview of various camera-based applications in retail as well as the state-of-the-art computer vision techniques behind them. It also presents some of the promising technical directions for exploration in retail video analytics.

  12. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    PubMed

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component in ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers. Therefore, relevant studies on the optimization of schemes for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for natural ecology planning of urban rivers can be made by the ANP method. This method can be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for the optimization of schemes for urban green space planning and design.
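
    The limiting step that ANP software such as Super Decisions performs can be sketched as follows: a column-stochastic weighted supermatrix is raised to increasing powers until its columns stabilize, and the stabilized column gives the limit priorities. The supermatrix entries below are placeholders, not the study's model.

    ```python
    # ANP limiting step on a placeholder 4x4 weighted supermatrix.
    import numpy as np

    W = np.array([[0.0, 0.4, 0.3, 0.2],
                  [0.5, 0.0, 0.4, 0.3],
                  [0.3, 0.3, 0.0, 0.5],
                  [0.2, 0.3, 0.3, 0.0]])
    assert np.allclose(W.sum(axis=0), 1.0)          # must be column-stochastic

    L = W.copy()
    for _ in range(200):
        L_next = L @ W
        if np.allclose(L_next, L, atol=1e-10):
            break
        L = L_next

    print("limit priorities:", np.round(L[:, 0], 3))  # any column once converged
    ```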

  13. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.

  14. A combined analytical formulation and genetic algorithm to analyze the nonlinear damage responses of continuous fiber toughened composites

    NASA Astrophysics Data System (ADS)

    Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.

    2017-09-01

    Continuous fiber-reinforced composites are important materials that have the highest commercialization potential among existing advanced materials in the near future. Despite their wide use and value, their theoretical mechanisms have not been fully established due to the complexity of their compositions and their unrevealed failure mechanisms. This study proposes an effective three-dimensional damage modeling of a fibrous composite by combining analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered to be the most influential factors on the toughness and failure behaviors of composites, and a constitutive equation considering these factors was explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of model parameters in the analytical model was found using a modified evolutionary computation that considers human-induced error. The effectiveness of the proposed formulation was validated by comparing a series of numerical simulations with experimental data from available studies.
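
    In the spirit of the evolutionary parameter search described above, the sketch below runs a bare-bones real-coded evolutionary loop (truncation selection plus Gaussian mutation) to fit two parameters of a stand-in constitutive law to synthetic data. Neither the model nor the data come from the paper.

    ```python
    # Evolutionary fit of two parameters of a placeholder stress-strain law to
    # synthetic "experimental" data (illustration only).
    import numpy as np

    rng = np.random.default_rng(0)

    def model(strain, params):
        E, d = params                               # stiffness and damage-rate stand-ins
        return E * strain * np.exp(-d * strain)

    strain = np.linspace(0, 0.05, 50)
    data = model(strain, (70e3, 25.0)) + rng.normal(0, 20, strain.size)

    def misfit(params):
        return np.sum((model(strain, params) - data) ** 2)

    lo, hi = np.array([10e3, 1.0]), np.array([200e3, 60.0])
    pop = rng.uniform(lo, hi, size=(40, 2))
    for _ in range(100):
        order = np.argsort([misfit(p) for p in pop])
        parents = pop[order[:20]]                   # truncation selection
        children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.02, (40, 2)) * (hi - lo)
        pop = np.clip(children, lo, hi)
        pop[0] = parents[0]                         # elitism: keep the current best

    best = min(pop, key=misfit)
    print("recovered parameters:", np.round(best, 2))
    ```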

  15. Analytic study of orbiter landing profiles

    NASA Technical Reports Server (NTRS)

    Walker, H. J.

    1981-01-01

    A broad survey of possible orbiter landing configurations was made with the specific goals of defining boundaries for the landing task. The results suggest that the center of the corridors between marginal and routine represents a more or less optimal preflare condition for regular operations. The various constraints used to define the boundaries are based largely on qualitative judgements from earlier flight experience with the X-15 and lifting body research aircraft. The results should serve as useful background for expanding and validating landing simulation programs. The analytic approach offers a particular advantage in identifying trends due to the systematic variation of factors such as vehicle weight, load factor, approach speed, and aim point. Limitations, such as a constant load factor during the flare and a fixed gear deployment time interval, can be removed by increasing the flexibility of the computer program. This analytic definition of landing profiles of the orbiter may suggest additional studies, including more configurations or more comparisons of landing profiles within and beyond the corridor boundaries.

  16. Factors affecting the technical efficiency of general hospitals in Iran: data envelopment analysis.

    PubMed

    Kalhor, Rohollah; Amini, Saeed; Sokhanvar, Mobin; Lotfi, Farhad; Sharifi, Marziye; Kakemam, Edris

    2016-03-01

    Restrictions on resource accessibility and its optimal application are the main challenge facing organizations nowadays. The aim of this research was to study the technical efficiency and its related factors in Tehran general hospitals. This descriptive analytical study was conducted retrospectively in 2014. Fifty-four hospitals with private, university, and social security ownership, out of a total of 110 general hospitals, were randomly selected for inclusion in this study on the basis of the share of ownership. Data were collected using a checklist with three sections, including background variables, inputs, and outputs. Seventeen (31.48%) hospitals had an efficiency score of 1 (the highest efficiency score). The highest average efficiency score was in social security hospitals (84.32). Private and university hospitals ranked next with averages of 84.29 and 79.64, respectively. Analytical results showed that there was a significant relationship between hospital ownership, hospital type in terms of duty and specialization, educational field of the chief executive officer, and technical efficiency. There was no significant relationship between the education level of the hospital manager and technical efficiency. Most of the studied hospitals were operating at low efficiency. Therefore, policymakers should plan to improve hospital operations and promote hospitals to an optimal level of efficiency.

  17. Turbulent flow chromatography TFC-tandem mass spectrometry supporting in vitro/vivo studies of NCEs in high throughput fashion.

    PubMed

    Verdirame, Maria; Veneziano, Maria; Alfieri, Anna; Di Marco, Annalise; Monteagudo, Edith; Bonelli, Fabio

    2010-03-11

    Turbulent Flow Chromatography (TFC) is a powerful approach for on-line extraction in bioanalytical studies. It improves sensitivity and reduces sample preparation time, two factors that are of primary importance in drug discovery. In this paper the application of the ARIA system to the analytical support of in vivo pharmacokinetics (PK) and in vitro drug metabolism studies is described, with an emphasis on high-throughput optimization. For PK studies, a comparison between acetonitrile plasma protein precipitation (APPP) and TFC was carried out. Our optimized TFC methodology gave better S/N ratios and a lower limit of quantification (LOQ) than conventional procedures. A robust and high-throughput analytical method to support hepatocyte metabolic stability screening of new chemical entities was developed by hyphenation of TFC with mass spectrometry. An in-loop dilution injection procedure was implemented to overcome one of the main issues when using TFC, namely the early elution of hydrophilic compounds, which results in low recoveries. A comparison between off-line solid phase extraction (SPE) and TFC was also carried out, and recovery, sensitivity (LOQ), matrix effect and robustness were evaluated. The use of two parallel columns in the configuration of the system provided a further increase in throughput. Copyright 2009 Elsevier B.V. All rights reserved.

  18. At-line nanofractionation with parallel mass spectrometry and bioactivity assessment for the rapid screening of thrombin and factor Xa inhibitors in snake venoms.

    PubMed

    Mladic, Marija; Zietek, Barbara M; Iyer, Janaki Krishnamoorthy; Hermarij, Philip; Niessen, Wilfried M A; Somsen, Govert W; Kini, R Manjunatha; Kool, Jeroen

    2016-02-01

    Snake venoms comprise complex mixtures of peptides and proteins causing modulation of diverse physiological functions upon envenomation of the prey organism. The components of snake venoms are studied as research tools and as potential drug candidates. However, the bioactivity determination with subsequent identification and purification of the bioactive compounds is a demanding and often laborious effort involving different analytical and pharmacological techniques. This study describes the development and optimization of an integrated analytical approach for activity profiling and identification of venom constituents targeting the cardiovascular system, the thrombin and factor Xa enzymes in particular. The approach developed encompasses reversed-phase liquid chromatography (RPLC) analysis of a crude snake venom with parallel mass spectrometry (MS) and bioactivity analysis. The analytical and pharmacological parts of this approach are linked using at-line nanofractionation. This implies that the bioactivity is assessed after high-resolution nanofractionation (6 s/well) onto high-density 384-well microtiter plates and subsequent freeze drying of the plates. The nanofractionation and bioassay conditions were optimized for maintaining LC resolution and achieving good bioassay sensitivity. The developed integrated analytical approach was successfully applied for the fast screening of snake venoms for compounds affecting thrombin and factor Xa activity. Parallel accurate MS measurements provided correlation of the observed bioactivity to peptide/protein masses. This resulted in the identification of a few interesting peptides with activity towards the drug target factor Xa from a screening campaign involving venoms of 39 snake species. Besides this, many positive protease activity peaks were observed in most venoms analysed. These protease fingerprint chromatograms were found to be similar for evolutionarily closely related species and as such might serve as generic snake protease bioactivity fingerprints in biological studies on venoms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons.

    PubMed

    Bernardi, Davide; Lindner, Benjamin

    2017-06-30

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  20. The unlikely high efficiency of a molecular motor based on active motion

    NASA Astrophysics Data System (ADS)

    Ebeling, W.

    2015-07-01

    The efficiency of a simple model of a motor converting chemical into mechanical energy is studied analytically. The model motor shows interesting properties corresponding qualitatively to motors investigated in experiments. The efficiency increases with the load and may, for low loss, reach values near 100 percent in a narrow regime of optimal load. It is shown that the optimal load and the maximal efficiency depend on the dimensionless loss parameter through universal power laws. Stochastic effects decrease the stability of motor regimes with high efficiency and make them unlikely. Numerical studies show efficiencies below the theoretical optimum and demonstrate that special ratchet profiles may stabilize efficient regimes.

  1. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons

    NASA Astrophysics Data System (ADS)

    Bernardi, Davide; Lindner, Benjamin

    2017-06-01

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  2. Advances in the Control System for a High Precision Dissolved Organic Carbon Analyzer

    NASA Astrophysics Data System (ADS)

    Liao, M.; Stubbins, A.; Haidekker, M.

    2017-12-01

    Dissolved organic carbon (DOC) is a master variable in aquatic ecosystems. DOC in the ocean is one of the largest carbon stores on earth. Studies of the dynamics of DOC in the ocean and other low DOC systems (e.g. groundwater) are hindered by the lack of high precision (sub-micromolar) analytical techniques. Results are presented from efforts to construct and optimize a flow-through, wet chemical DOC analyzer. This study focused on the design, integration and optimization of high precision components and control systems required for such a system (mass flow controller, syringe pumps, gas extraction, reactor chamber with controlled UV and temperature). Results of the approaches developed are presented.

  3. Probabilistic Cloning of Three Real States with Optimal Success Probabilities

    NASA Astrophysics Data System (ADS)

    Rui, Pin-shu

    2017-06-01

    We investigate the probabilistic quantum cloning (PQC) of three real states with average probability distribution. To get the analytic forms of the optimal success probabilities we assume that the three states have only two pairwise inner products. Based on the optimal success probabilities, we derive the explicit form of 1→2 PQC for cloning three real states. The unitary operation needed in the PQC process is worked out too. The optimal success probabilities are also generalized to the M→N PQC case.

  4. Using constraints and their value for optimization of large ODE systems

    PubMed Central

    Domijan, Mirela; Rand, David A.

    2015-01-01

    We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution to the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source bandwidth constraints. This report describes the results of investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while retaining error within given error bounds.
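
    The Taylor-series baseline referred to above can be obtained by solving a small moment (Vandermonde) system; the optimized coefficients discussed in the report would be alternatives tuned to an error bound or source bandwidth. A minimal sketch:

    ```python
    # Central finite-difference coefficients from Taylor-series (moment) conditions.
    import math
    import numpy as np

    def taylor_fd_coeffs(half_width, deriv=1):
        offsets = np.arange(-half_width, half_width + 1)    # stencil points, in units of h
        n = offsets.size
        V = np.vander(offsets, n, increasing=True).T        # V[k, j] = offsets[j] ** k
        rhs = np.zeros(n)
        rhs[deriv] = math.factorial(deriv)                  # reproduce the d-th Taylor moment
        return np.linalg.solve(V, rhs)                      # coefficients (scale by 1 / h**deriv)

    print(taylor_fd_coeffs(1))   # [-0.5, 0.0, 0.5]               second-order first derivative
    print(taylor_fd_coeffs(2))   # [1/12, -2/3, 0, 2/3, -1/12]    fourth-order first derivative
    ```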

  6. MEMS resonant load cells for micro-mechanical test frames: feasibility study and optimal design

    NASA Astrophysics Data System (ADS)

    Torrents, A.; Azgin, K.; Godfrey, S. W.; Topalli, E. S.; Akin, T.; Valdevit, L.

    2010-12-01

    This paper presents the design, optimization and manufacturing of a novel micro-fabricated load cell based on a double-ended tuning fork. The device geometry and operating voltages are optimized for maximum force resolution and range, subject to a number of manufacturing and electromechanical constraints. All optimizations are enabled by analytical modeling (verified by selected finite elements analyses) coupled with an efficient C++ code based on the particle swarm optimization algorithm. This assessment indicates that force resolutions of ~0.5-10 nN are feasible in vacuum (~1-50 mTorr), with force ranges as large as 1 N. Importantly, the optimal design for vacuum operation is independent of the desired range, ensuring versatility. Experimental verifications on a sub-optimal device fabricated using silicon-on-glass technology demonstrate a resolution of ~23 nN at a vacuum level of ~50 mTorr. The device demonstrated in this article will be integrated in a hybrid micro-mechanical test frame for unprecedented combinations of force resolution and range, displacement resolution and range, optical (or SEM) access to the sample, versatility and cost.
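
    The particle swarm optimization step named above can be sketched in a few lines; the objective here is a stand-in test function rather than the analytical load-cell model, so the block only illustrates the algorithm family, not the device optimization itself.

    ```python
    # Schematic particle swarm optimization (stand-in objective, not the MEMS model).
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):                               # minimum at (3.0, -1.5)
        return np.sum((x - np.array([3.0, -1.5])) ** 2, axis=-1)

    n_particles, dim, iters = 30, 2, 200
    lo, hi = -10.0, 10.0
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)]

    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration constants
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = objective(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]

    print("PSO minimum found at:", np.round(gbest, 3))
    ```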

  7. Simultaneous determination of PPCPs, EDCs, and artificial sweeteners in environmental water samples using a single-step SPE coupled with HPLC-MS/MS and isotope dilution.

    PubMed

    Tran, Ngoc Han; Hu, Jiangyong; Ong, Say Leong

    2013-09-15

    A high-throughput method for the simultaneous determination of 24 pharmaceuticals and personal care products (PPCPs), endocrine disrupting chemicals (EDCs) and artificial sweeteners (ASs) was developed. The method was based on a single-step solid phase extraction (SPE) coupled with high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) and isotope dilution. In this study, a single-step SPE procedure was optimized for simultaneous extraction of all target analytes. Good recoveries (≥ 70%) were observed for all target analytes when extraction was performed using Chromabond(®) HR-X (500 mg, 6 mL) cartridges under acidic condition (pH 2). HPLC-MS/MS parameters were optimized for the simultaneous analysis of 24 PPCPs, EDCs and ASs in a single injection. Quantification was performed by using 13 isotopically labeled internal standards (ILIS), which allows correcting efficiently the loss of the analytes during SPE procedure, matrix effects during HPLC-MS/MS and fluctuation in MS/MS signal intensity due to instrument. Method quantification limit (MQL) for most of the target analytes was below 10 ng/L in all water samples. The method was successfully applied for the simultaneous determination of PPCPs, EDCs and ASs in raw wastewater, surface water and groundwater samples collected in a local catchment area in Singapore. In conclusion, the developed method provided a valuable tool for investigating the occurrence, behavior, transport, and the fate of PPCPs, EDCs and ASs in the aquatic environment. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Validation of an SPME method, using PDMS, PA, PDMS-DVB, and CW-DVB SPME fiber coatings, for analysis of organophosphorus insecticides in natural waters.

    PubMed

    Lambropoulou, D A; Sakkas, V A; Albanis, T A

    2002-11-01

    Solid-phase microextraction (SPME) has been optimized and applied to the determination of the organophosphorus insecticides diazinon, dichlofenthion, parathion methyl, malathion, fenitrothion, fenthion, parathion ethyl, bromophos methyl, bromophos ethyl, and ethion in natural waters. Four types of SPME fiber coated with different stationary phases (PDMS, PA, PDMS-DVB, and CW-DVB) were used to examine their extraction efficiencies for the compounds tested. Conditions that might affect the SPME procedure, such as extraction time and salt content, were investigated to determine the analytical performance of these fiber coatings for organophosphorus insecticides. The optimized procedure was applied to natural waters - tap, sea, river, and lake water - spiked in the concentration range 0.5 to 50 micro g L(-1) to obtain the analytical characteristics. Recoveries were relatively high - >80% for all types of aqueous sample matrix - and the calibration plots were reproducible and linear (R(2)>0.982) for all analytes with all the fibers tested. The limits of detection ranged from 2 to 90 ng L(-1), depending on the detector and the compound investigated, with relative standard deviations in the range 3-15% at all the concentration levels tested. The SPME partition coefficients (K(f)) of the organophosphorus insecticides were calculated experimentally for all the polymer coatings. The effect of organic matter such as humic acids on extraction efficiency was also studied. The analytical performance of the SPME procedure using all the fibers in the tested natural waters proved effective for the compounds.

  9. Modelling and Optimization of Four-Segment Shielding Coils of Current Transformers

    PubMed Central

    Gao, Yucheng; Zhao, Wei; Wang, Qing; Qu, Kaifeng; Li, He; Shao, Haiming; Huang, Songling

    2017-01-01

    Applying shielding coils is a practical way to protect current transformers (CTs) for large-capacity generators from the intensive magnetic interference produced by adjacent bus-bars. The aim of this study is to build a simple analytical model for the shielding coils, from which the optimization of the shielding coils can be calculated effectively. Based on an existing stray flux model, a new analytical model for the leakage flux of partial coils is presented, and finite element method-based simulations are carried out to develop empirical equations for the core-pickup factors of the models. Using the flux models, a model of the common four-segment shielding coils is derived. Furthermore, a theoretical analysis is carried out on the optimal performance of the four-segment shielding coils in a typical six-bus-bars scenario. It turns out that the “all parallel” shielding coils with a 45° starting position have the best shielding performance, whereas the “separated loop” shielding coils with a 0° starting position feature the lowest heating value. Physical experiments were performed, which verified all the models and the conclusions proposed in the paper. In addition, for shielding coils with other than the four-segment configuration, the analysis process will generally be the same. PMID:28587137

  10. Modelling and Optimization of Four-Segment Shielding Coils of Current Transformers.

    PubMed

    Gao, Yucheng; Zhao, Wei; Wang, Qing; Qu, Kaifeng; Li, He; Shao, Haiming; Huang, Songling

    2017-05-26

    Applying shielding coils is a practical way to protect current transformers (CTs) for large-capacity generators from the intensive magnetic interference produced by adjacent bus-bars. The aim of this study is to build a simple analytical model for the shielding coils, from which the optimization of the shielding coils can be calculated effectively. Based on an existing stray flux model, a new analytical model for the leakage flux of partial coils is presented, and finite element method-based simulations are carried out to develop empirical equations for the core-pickup factors of the models. Using the flux models, a model of the common four-segment shielding coils is derived. Furthermore, a theoretical analysis is carried out on the optimal performance of the four-segment shielding coils in a typical six-bus-bars scenario. It turns out that the "all parallel" shielding coils with a 45° starting position have the best shielding performance, whereas the "separated loop" shielding coils with a 0° starting position feature the lowest heating value. Physical experiments were performed, which verified all the models and the conclusions proposed in the paper. In addition, for shielding coils with other than the four-segment configuration, the analysis process will generally be the same.

  11. Statistical mechanics of influence maximization with thermal noise

    NASA Astrophysics Data System (ADS)

    Lynn, Christopher W.; Lee, Daniel D.

    2017-03-01

    The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.
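
    A much-simplified illustration of the approach described above (not the paper's algorithm): on a mean-field Ising approximation, the magnetization follows from the self-consistency equations, the susceptibility supplies the gradient, and each ascent step is projected back onto the field-budget simplex.

    ```python
    # Projected gradient ascent of total mean-field magnetization over external
    # fields h_i with a fixed budget (h_i >= 0, sum h_i = H). Illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta, H = 20, 0.2, 2.0
    A = (rng.random((n, n)) < 0.2).astype(float)
    J = np.triu(A, 1)
    J = J + J.T                                     # random symmetric 0/1 couplings

    def magnetization(h, iters=400):
        m = np.zeros(n)
        for _ in range(iters):                      # damped mean-field self-consistency
            m = 0.5 * m + 0.5 * np.tanh(beta * (J @ m + h))
        return m

    def project_simplex(v, s):
        # Euclidean projection onto {x >= 0, sum(x) = s} (sorting-based method).
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - s
        rho = np.nonzero(u - css / (np.arange(v.size) + 1) > 0)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    h = np.full(n, H / n)
    for _ in range(100):
        m = magnetization(h)
        D = np.diag(1.0 - m ** 2)
        chi = beta * np.linalg.solve(np.eye(n) - beta * D @ J, D)   # dm/dh
        h = project_simplex(h + 0.5 * (chi.T @ np.ones(n)), H)

    print("nodes receiving the largest fields:", np.argsort(h)[::-1][:5])
    ```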

  12. Graphene oxide assisted electromembrane extraction with gas chromatography for the determination of methamphetamine as a model analyte in hair and urine samples.

    PubMed

    Bagheri, Hasan; Zavareh, Alireza Fakhari; Koruni, Mohammad Hossein

    2016-03-01

    In the present study, graphene oxide-reinforced two-phase electromembrane extraction (EME) coupled with gas chromatography was applied to the determination of methamphetamine as a model analyte in biological samples. The presence of graphene oxide in the hollow fiber wall can increase the effective surface area, the interactions with the analyte and the polarity of the supported liquid membrane, which leads to enhanced analyte migration. To investigate the influence of the presence of graphene oxide in the supported liquid membrane on the extraction efficiency, a comparative study was performed between the conventional EME and the graphene oxide-assisted EME methods. The extraction parameters, such as the type of organic solvent, pH of the donor phase, stirring speed, time, voltage, salt addition and the concentration of graphene oxide, were optimized. Under the optimum conditions, the proposed microextraction technique provided a low limit of detection (2.4 ng/mL), a high preconcentration factor (195-198) and high relative recoveries (95-98.5%). Finally, the method was successfully employed for the determination of methamphetamine in urine and hair samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Empirically Optimized Flow Cytometric Immunoassay Validates Ambient Analyte Theory

    PubMed Central

    Parpia, Zaheer A.; Kelso, David M.

    2010-01-01

    Ekins’ ambient analyte theory predicts, counter intuitively, that an immunoassay’s limit of detection can be improved by reducing the amount of capture antibody. In addition, it also anticipates that results should be insensitive to the volume of sample as well as the amount of capture antibody added. The objective of this study is to empirically validate all of the performance characteristics predicted by Ekins’ theory. Flow cytometric analysis was used to detect binding between a fluorescent ligand and capture microparticles since it can directly measure fractional occupancy, the primary response variable in ambient analyte theory. After experimentally determining ambient analyte conditions, comparisons were carried out between ambient and non-ambient assays in terms of their signal strengths, limits of detection, and their sensitivity to variations in reaction volume and number of particles. The critical number of binding sites required for an assay to be in the ambient analyte region was estimated to be 0.1VKd. As predicted, such assays exhibited superior signal/noise levels and limits of detection; and were not affected by variations in sample volume and number of binding sites. When the signal detected measures fractional occupancy, ambient analyte theory is an excellent guide to developing assays with superior performance characteristics. PMID:20152793
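
    As a worked example of the 0.1VKd criterion quoted above, with hypothetical numbers chosen only to show the order of magnitude involved:

    ```python
    # Ambient-analyte limit on capture binding sites: 0.1 * V * Kd (hypothetical values).
    V = 100e-6                 # sample volume: 100 microlitres, in litres
    Kd = 1e-10                 # capture antibody dissociation constant, mol/L
    NA = 6.022e23              # Avogadro's number

    critical_sites = 0.1 * V * Kd * NA
    print(f"ambient-analyte limit: about {critical_sites:.1e} binding sites")   # ~6e8
    ```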

  14. Location of Biomarkers and Reagents within Agarose Beads of a Programmable Bio-nano-chip

    PubMed Central

    Jokerst, Jesse V.; Chou, Jie; Camp, James P.; Wong, Jorge; Lennart, Alexis; Pollard, Amanda A.; Floriano, Pierre N.; Christodoulides, Nicolaos; Simmons, Glennon W.; Zhou, Yanjie; Ali, Mehnaaz F.

    2012-01-01

    The slow development of cost-effective medical microdevices with strong analytical performance characteristics is due to a lack of selective and efficient analyte capture and signaling. The recently developed programmable bio-nano-chip (PBNC) is a flexible detection device with analytical behavior rivaling established macroscopic methods. The PBNC system employs ≈300 μm-diameter bead sensors composed of agarose “nanonets” that populate a microelectromechanical support structure with integrated microfluidic elements. The beads are an efficient and selective protein-capture medium suitable for the analysis of complex fluid samples. Microscopy and computational studies probe the 3D interior of the beads. The relative contributions that the capture and detection of moieties, analyte size, and bead porosity make to signal distribution and intensity are reported. Agarose pore sizes ranging from 45 to 620 nm are examined and those near 140 nm provide optimal transport characteristics for rapid (<15 min) tests. The system exhibits efficient (99.5%) detection of bead-bound analyte along with low (≈2%) nonspecific immobilization of the detection probe for carcinoembryonic antigen assay. Furthermore, the role analyte dimensions play in signal distribution is explored, and enhanced methods for assay building that consider the unique features of biomarker size are offered. PMID:21290601

  15. Improving Free-Piston Stirling Engine Specific Power

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell Henry

    2014-01-01

    This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling Engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.

  16. Improving Free-Piston Stirling Engine Specific Power

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.

    2015-01-01

    This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.

  17. Optimization and characterization of condensation nucleation light scattering detection coupled with supercritical fluid chromatography

    NASA Astrophysics Data System (ADS)

    Yang, Shaoping

    This dissertation is an investigation of two aspects of coupling condensation nucleation light scattering detection (CNLSD) with supercritical fluid chromatography (SFC). In the first part, it was demonstrated that CNLSD was compatible with packed-column SFC using either pure CO2 or organic-solvent-modified CO2 as the mobile phase. Factors expected to affect the interface between SFC and CNLSD were optimized so that the detector could reach low detection limits. With SFC using pure CO2 as the mobile phase, the detection limit of CNLSD was observed to be at low nanogram levels, which is at the same level as that of flame ionization detection (FID) coupled with SFC. For SFC using modified CO2 as the mobile phase, detection limits at the picogram level were observed for CNLSD under optimal conditions, at least ten times lower than those reached by evaporative light scattering detection. In the second part, particle size distributions of aerosols produced from rapid expansion of supercritical solutions were measured with a scanning mobility particle sizer. The factors examined in the first part for their effects on signal intensities and signal-to-noise ratios (S/N) were then investigated for their effects on the particle size distributions (PSDs) of both the analyte and the background. Whenever possible, both the particle sizes and the particle numbers obtained from the PSDs were used to explain the optimization results. In general, the PSD data support the observations made in the first part. The detection limits of CNLSD obtained were much higher than predicted, and the PSDs did not provide a direct explanation of this discrepancy. The amounts of analyte deposited in the transport tubing, evaporated to the gas phase, and condensed to form particles were determined experimentally. Almost no analyte was found in the gas phase, and less than 3% was found in particle form. The vast majority of the analyte was lost in the transport tubing, especially in the short distance after supercritical fluid expansion. A mechanism was proposed to explain this loss of analyte in the transport tubing.

  18. Optimizing piezoelectric receivers for acoustic power transfer applications

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2018-07-01

    In this paper, we aim to optimize piezoelectric plate receivers for acoustic power transfer applications by analyzing the influence of the losses and of the acoustic boundary conditions. We derive analytic expressions for the efficiency of the receiver with the optimal electric loads attached, and analyze the maximum efficiency value and its frequency under different loss and acoustic boundary conditions. To validate the analytical expressions we have derived, we perform experiments in water with composite transducers of different filling fractions, and see that a lower acoustic impedance mismatch can compensate for the influence of large dielectric and acoustic losses to achieve good performance. Finally, we briefly compare the advantages and drawbacks of composite transducers and pure PZT (lead zirconate titanate) plates as acoustic power receivers, and conclude that 1–3 composites can achieve similar efficiency values in low-power applications due to their adjustable acoustic impedance.

  19. Symmetric tridiagonal structure preserving finite element model updating problem for the quadratic model

    NASA Astrophysics Data System (ADS)

    Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath

    2018-07-01

    One of the most important yet difficult aspects of the finite element model updating problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite) and banded (tridiagonal, diagonal, penta-diagonal, etc.). Although a large number of papers have been published in recent years on various aspects of this problem, papers dealing with structure preservation are scarce. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum of the associated optimization problem is presented, along with the results of numerical experiments obtained both from the analytical expression and from an appropriate numerical optimization algorithm. These numerical results support the validity of the proposed method.

  20. Systematic optimization of ethyl glucuronide extraction conditions from scalp hair by design of experiments and its potential effect on cut-off values appraisal.

    PubMed

    Alladio, Eugenio; Biosa, Giulia; Seganti, Fabrizio; Di Corcia, Daniele; Salomone, Alberto; Vincenti, Marco; Baumgartner, Markus R

    2018-05-11

    The quantitative determination of ethyl glucuronide (EtG) in hair samples is consistently used throughout the world to assess chronic excessive alcohol consumption. For administrative and legal purposes, the analytical results are compared with cut-off values recognized by regulatory authorities and scientific societies. However, it has recently been recognized that the analytical results depend on the hair sample pretreatment procedures, including the crumbling and extraction conditions. A systematic evaluation of the EtG extraction conditions from pulverized scalp hair was conducted by design of experiments (DoE), considering the extraction time, temperature, pH, and solvent composition as potential influencing factors. It was concluded that an overnight extraction at 60°C with pure water at neutral pH represents the most effective condition to achieve high extraction yields. The absence of differential degradation of the internal standard (isotopically labeled EtG) under such conditions was confirmed, and the overall analytical method was validated according to SWGTOX and ISO 17025 criteria. Twenty real hair samples with different EtG content were analyzed with three commonly accepted procedures: (a) hair manually cut into snippets and extracted at room temperature; (b) pulverized hair extracted at room temperature; (c) hair treated with the optimized method. Average increments of EtG concentration of around 69% (from a to c) and 29% (from b to c) were recorded. In light of these results, the authors urge the scientific community to undertake an inter-laboratory study with the aim of defining the optimal hair EtG detection method in more detail and verifying the corresponding cut-off level for legal enforcement.
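    As a generic illustration of the screening stage in such a design-of-experiments optimization, the sketch below builds a two-level full factorial over the four factors named above and estimates their main effects; the simulated yields are placeholders, not the paper's EtG measurements.

    ```python
    # Two-level full factorial screening over the four extraction factors named above.
    # The simulated yields are placeholders, not the paper's EtG data.
    import itertools
    import numpy as np

    factors = ["time", "temperature", "pH", "water_fraction"]
    design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

    rng = np.random.default_rng(5)
    # Simulated extraction yield: time and temperature matter, pH and solvent barely do.
    yield_sim = (70 + 6 * design[:, 0] + 9 * design[:, 1]
                 + 0.5 * design[:, 2] + 1.0 * design[:, 3]
                 + rng.normal(0, 1.5, len(design)))

    # Main effect of a factor = mean(response at +1) - mean(response at -1).
    for j, name in enumerate(factors):
        effect = yield_sim[design[:, j] == 1].mean() - yield_sim[design[:, j] == -1].mean()
        print(f"main effect of {name:>14}: {effect:+.2f}")
    ```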

  1. Deployment simulation of a deployable reflector for earth science application

    NASA Astrophysics Data System (ADS)

    Wang, Xiaokai; Fang, Houfei; Cai, Bei; Ma, Xiaofei

    2015-10-01

    A novel mission concept, NEXRAD-In-Space (NIS), has been developed for monitoring hurricanes, cyclones, and other severe storms from a geostationary orbit. It requires a space-deployable 35-meter diameter Ka-band (35 GHz) reflector. NIS can measure hurricane precipitation intensity, dynamics, and life cycle; this information is necessary for predicting the track, intensity, rain rate, and hurricane-induced floods. To meet the requirements of the radar system, a Membrane Shell Reflector Segment (MSRS) reflector technology has been developed and several technologies have been evaluated. However, the deployment behavior of this large, high-precision reflector had not been investigated. As a preliminary study, a scaled tetrahedral truss reflector with a spring-driven deployment system was built and tested, and a deployment dynamics analysis of this scaled reflector was performed using ADAMS to understand its deployment behavior. Eliminating the redundant constraints in a reflector system with a large number of moving parts is a challenging issue; a primitive joint and flexible struts were introduced into the analytical model to effectively eliminate the over-constraints. Using a high-speed camera and a force transducer, a deployment experiment of a single-bay tetrahedral module was conducted. With the test results, an optimization process was performed using the parameter optimization module of ADAMS to identify the parameters of the analytical model, and these parameters were incorporated into the analytical model of the whole reflector. The analysis results show that the deployment process of the reflector with a fixed boundary experiences three stages: a rapid deployment stage, a slow deployment stage, and an impact stage. Insight into the force peak distributions of the reflector can guide the optimization of the structural design.

  2. Ultrasound-assisted extraction of azadirachtin from dried entire fruits of Azadirachta indica A. Juss. (Meliaceae) and its determination by a validated HPLC-PDA method.

    PubMed

    de Paula, Joelma Abadia Marciano; Brito, Lucas Ferreira; Caetano, Karen Lorena Ferreira Neves; de Morais Rodrigues, Mariana Cristina; Borges, Leonardo Luiz; da Conceição, Edemilson Cardoso

    2016-01-01

    Azadirachta indica A. Juss., also known as neem, is a Meliaceae family tree from India. It is globally known for the insecticidal properties of its limonoid tetranortriterpenoid derivatives, such as azadirachtin. This work aimed to optimize the azadirachtin ultrasound-assisted extraction (UAE) and validate the HPLC-PDA analytical method for the measurement of this marker in neem dried fruit extracts. A Box-Behnken design and response surface methodology (RSM) were used to investigate the effect of process variables on the UAE. Three independent variables were studied: ethanol concentration (%, w/w), temperature (°C), and material-to-solvent ratio (g/mL). The azadirachtin content (µg/mL), i.e., the dependent variable, was quantified by the HPLC-PDA analytical method. Isocratic reversed-phase chromatography was performed using acetonitrile/water (40:60), a flow rate of 1.0 mL/min, detection at 214 nm, and a C18 column (250 × 4.6 mm, 5 µm). The primary validation parameters were determined according to ICH guidelines and Brazilian legislation. The results demonstrated that the optimal UAE condition was obtained with an ethanol concentration of 75-80% (w/w), a temperature of 30°C, and a material-to-solvent ratio of 0.55 g/mL. The HPLC-PDA analytical method proved to be simple, selective, linear, precise, accurate and robust. The experimental values of azadirachtin content under optimal UAE conditions were in good agreement with the RSM-predicted values and were superior to the azadirachtin content of the percolated extract. These findings suggest that UAE is a more efficient extractive process in addition to being simple, fast, and inexpensive.
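    The sketch below illustrates the general Box-Behnken/RSM workflow described above with synthetic data: a full quadratic model in three coded factors is fitted by least squares and its optimum is located over the design cube. The design matrix, coefficients, and responses are invented for illustration and do not reproduce the study's measurements.

    ```python
    # Box-Behnken design in three coded factors, quadratic response-surface fit, and a
    # grid search for the predicted optimum. All numbers are synthetic.
    import itertools
    import numpy as np

    def quad_terms(x):
        """Full quadratic model terms for coded factors x = (x1, x2, x3)."""
        x1, x2, x3 = x
        return np.array([1, x1, x2, x3, x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

    # 12 edge midpoints plus 3 centre points (the standard 3-factor Box-Behnken layout).
    design = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)]
                      + [[a, 0, b] for a in (-1, 1) for b in (-1, 1)]
                      + [[0, a, b] for a in (-1, 1) for b in (-1, 1)]
                      + [[0, 0, 0]] * 3, dtype=float)

    rng = np.random.default_rng(0)
    true_beta = np.array([50, 4, -2, 3, 1, 0, 0, -5, -3, -4])          # synthetic surface
    X = np.array([quad_terms(x) for x in design])
    y = X @ true_beta + rng.normal(scale=0.5, size=len(design))        # simulated responses

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)                       # least-squares fit

    grid = np.linspace(-1, 1, 41)
    best = max(itertools.product(grid, repeat=3), key=lambda x: quad_terms(x) @ beta)
    print("coded optimum:", np.round(best, 2),
          "predicted response:", round(float(quad_terms(best) @ beta), 2))
    ```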

  3. An equivalent method for optimization of particle tuned mass damper based on experimental parametric study

    NASA Astrophysics Data System (ADS)

    Lu, Zheng; Chen, Xiaoyi; Zhou, Ying

    2018-04-01

    A particle tuned mass damper (PTMD) is a creative combination of the widely used tuned mass damper (TMD) and the efficient particle damper (PD) from the vibration control area. The performance of a one-storey steel frame equipped with a PTMD is investigated through free vibration and shaking table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effects is investigated, and it is shown that the attenuation level depends significantly on the filling ratio of particles. Based on the experimental parametric study, some guidelines for optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, and it shows satisfactory agreement between the simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.

  4. A decision support system using analytical hierarchy process (AHP) for the optimal environmental reclamation of an open-pit mine

    NASA Astrophysics Data System (ADS)

    Bascetin, A.

    2007-04-01

    The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region of Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. It is also found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
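    The core AHP calculation underlying such a model is sketched below: priority weights are obtained from the principal eigenvector of a reciprocal pairwise comparison matrix and checked with Saaty's consistency ratio. The comparison matrix is hypothetical, not the Seyitomer case-study data.

    ```python
    # Sketch of the core AHP step: priority weights and consistency ratio from a
    # (hypothetical) reciprocal pairwise comparison matrix of reclamation alternatives.
    import numpy as np

    # Saaty's random consistency index for matrix sizes 1..5.
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

    def ahp_priorities(A: np.ndarray):
        """Return (weights, consistency_ratio) from a reciprocal comparison matrix."""
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                       # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                      # normalised priority vector
        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)              # consistency index
        return w, ci / RI[n]

    # Hypothetical comparison of three reclamation alternatives on a single criterion.
    A = np.array([[1.0, 3.0, 5.0],
                  [1 / 3, 1.0, 2.0],
                  [1 / 5, 1 / 2, 1.0]])
    weights, cr = ahp_priorities(A)
    print("priorities:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
    ```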

  5. Incorporating Aptamers in the Multiple Analyte Profiling Assays (xMAP): Detection of C-Reactive Protein.

    PubMed

    Bernard, Elyse D; Nguyen, Kathy C; DeRosa, Maria C; Tayabali, Azam F; Aranda-Rodriguez, Rocio

    2017-01-01

    Aptamers are short oligonucleotide sequences used in detection systems because of their high-affinity binding to a variety of macromolecules. With the introduction of aptamers over 25 years ago came the exploration of their use in many different applications as a substitute for antibodies. Aptamers have several advantages: they are easy to synthesize, can bind to analytes for which it is difficult to obtain antibodies, and in some cases bind better than antibodies. As such, aptamer applications have significantly expanded as an adjunct to a variety of different immunoassay designs. The Multiple Analyte Profiling (xMAP) technology developed by Luminex Corporation commonly uses antibodies for the detection of analytes in small sample volumes through the use of fluorescently coded microbeads. This technology permits the simultaneous detection of multiple analytes in each sample tested and hence could be applied in many research fields. Although little work has been performed adapting this technology for use with aptamers, optimizing aptamer-based xMAP assays would dramatically increase the versatility of analyte detection. We report herein on the development of an xMAP bead-based aptamer/antibody sandwich assay for a biomarker of inflammation (C-reactive protein or CRP). Protocols for the coupling of aptamers to xMAP beads, validation of coupling, and an aptamer/antibody sandwich-type assay for CRP are detailed. The optimized conditions, protocols and findings described in this research could serve as a starting point for the development of new aptamer-based xMAP assays.

  6. Handling qualities of large flexible control-configured aircraft

    NASA Technical Reports Server (NTRS)

    Swaim, R. L.

    1979-01-01

    The approach to an analytical study of flexible airplane longitudinal handling qualities was to parametrically vary the natural frequencies of two symmetric elastic modes to induce mode interactions with the rigid-body dynamics. Since the structure of the pilot model was unknown for such dynamic interactions, the optimal control pilot modeling method is applied in conjunction with a pilot rating method.

  7. Optimization of the Pressurized Logistics Module - A Space Station Freedom analytical study

    NASA Technical Reports Server (NTRS)

    Scallan, J. M.

    1991-01-01

    The analysis for determining the optimum cylindrical length of the Space Station Freedom (SSF) Pressurized Logistics Module, whose task is to transport the SSF pressurized cargo via the NSTS Shuttle Orbiter, is described. The major factors considered include the NSTS net launch lift capability, the pressurized cargo requirements, and the mass properties of the module structures, mechanisms, and subsystems.

  8. A multi-band semi-analytical algorithm for estimating chlorophyll-a concentration in the Yellow River Estuary, China.

    PubMed

    Chen, Jun; Quan, Wenting; Cui, Tingwei

    2015-01-01

    In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. By comparing the accuracy of the TSA, FSA, and UMSA algorithms, it was found that the UMSA algorithm performed better than the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary reduced the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm, a significant improvement upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms or the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
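    For orientation, the snippet below sketches the generic three-band form that algorithms such as the TSA instantiate, with Chla regressed against a reflectance band-ratio index; the wavelengths and calibration coefficients are placeholders to be fitted to field data, not the values derived in the paper.

    ```python
    # Generic three-band semi-analytical form: Chla is regressed against
    # [Rrs(l1)^-1 - Rrs(l2)^-1] * Rrs(l3). Bands and coefficients are placeholders.
    def three_band_index(rrs_l1: float, rrs_l2: float, rrs_l3: float) -> float:
        """Three-band reflectance index from remote-sensing reflectances at three bands."""
        return (1.0 / rrs_l1 - 1.0 / rrs_l2) * rrs_l3

    def chla_from_index(index: float, a: float = 110.0, b: float = 1.0) -> float:
        """Linear calibration Chla = a*index + b; coefficients are illustrative only."""
        return a * index + b

    # Example with made-up reflectances (sr^-1) near 660, 700 and 740 nm.
    rrs660, rrs700, rrs740 = 0.004, 0.006, 0.003
    estimate = chla_from_index(three_band_index(rrs660, rrs700, rrs740))
    print(f"Chla estimate: {estimate:.2f} mg m^-3")
    ```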

  9. Optimization of solid-phase extraction and liquid chromatography-tandem mass spectrometry for simultaneous determination of capilliposide B and its active metabolite in rat urine and feces: Overcoming nonspecific binding.

    PubMed

    Cheng, Zhongzhe; Zhou, Xing; Li, Wenyi; Hu, Bingying; Zhang, Yang; Xu, Yong; Zhang, Lin; Jiang, Hongliang

    2016-11-30

    Capilliposide B, a novel oleanane triterpenoid saponin isolated from Lysimachia capillipes Hemsl, showed significant anti-tumor activities in recent studies. To characterize the excretion of Capilliposide B, a reliable liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and validated for the simultaneous determination of Capilliposide B and its active metabolite, Capilliposide A, in rat urine and feces. Sample preparation using a solid-phase extraction procedure was optimized by acidification of the samples to various degrees, providing extensive sample clean-up with a high extraction recovery. In addition, rat urine samples were pretreated with CHAPS, an anti-adsorptive agent, to overcome nonspecific analyte adsorption during sample storage and processing. The method was validated over the calibration range of 10.0-5000 ng/mL for both analytes. The intra- and inter-day precision and accuracy of the QC samples showed ≤11.0% RSD and -10.4 to 12.8% relative error. The method was successfully applied to an excretion study of Capilliposide B following intravenous administration.

  10. Design and Validation of In-Source Atmospheric Pressure Photoionization Hydrogen/Deuterium Exchange Mass Spectrometry with Continuous Feeding of D2O

    NASA Astrophysics Data System (ADS)

    Acter, Thamina; Lee, Seulgidaun; Cho, Eunji; Jung, Maeng-Joon; Kim, Sunghwan

    2018-01-01

    In this study, continuous in-source hydrogen/deuterium exchange (HDX) atmospheric pressure photoionization (APPI) mass spectrometry (MS) with continuous feeding of D2O was developed and validated. D2O was continuously fed using a capillary line placed on the center of a metal plate positioned between the UV lamp and nebulizer. The proposed system overcomes the limitations of previously reported APPI HDX-MS approaches where deuterated solvents were premixed with sample solutions before ionization. This is particularly important for APPI because solvent composition can greatly influence ionization efficiency as well as the solubility of analytes. The experimental parameters for APPI HDX-MS with continuous feeding of D2O were optimized, and the optimized conditions were applied for the analysis of nitrogen-, oxygen-, and sulfur-containing compounds. The developed method was also applied for the analysis of the polar fraction of a petroleum sample. Thus, the data presented in this study clearly show that the proposed HDX approach can serve as an effective analytical tool for the structural analysis of complex mixtures.

  11. Development of a Suite of Analytical Tools for Energy and Water Infrastructure Knowledge Discovery

    NASA Astrophysics Data System (ADS)

    Morton, A.; Piburn, J.; Stewart, R.; Chandola, V.

    2017-12-01

    Energy and water generation and delivery systems are inherently interconnected. With demand for energy growing, the energy sector is experiencing increasing competition for water. With increasing population and changing environmental, socioeconomic, and demographic scenarios, new technology and investment decisions must be made for optimized and sustainable energy-water resource management. This also requires novel scientific insights into the complex interdependencies of energy-water infrastructures across multiple space and time scales. To address this need, we've developed a suite of analytical tools to support an integrated data driven modeling, analysis, and visualization capability for understanding, designing, and developing efficient local and regional practices related to the energy-water nexus. This work reviews the analytical capabilities available along with a series of case studies designed to demonstrate the potential of these tools for illuminating energy-water nexus solutions and supporting strategic (federal) policy decisions.

  12. Modeling of optical mirror and electromechanical behavior

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Lu, Chao; Liu, Zishun; Liu, Ai Q.; Zhang, Xu M.

    2001-10-01

    This paper presents finite element (FE) simulation and theoretical analysis of novel MEMS fiber-optical switches actuated by electrostatic attraction. FE simulations of the switches under static and dynamic loading are first carried out to reveal the mechanical characteristics: the minimum (critical) switching voltages, the natural frequencies, the mode shapes, and the response under different levels of electrostatic attraction load. To validate the FE simulation results, a theoretical (or analytical) model is then developed for one specific switch, i.e., Plate_40_104. Good agreement is found between the FE simulation and the analytical results. From both the FE simulation and the theoretical analysis, the critical switching voltage for Plate_40_104 is derived to be 238 V for a switching angle of 12°. The critical switching-on and switching-off times are 431 µs and 67 µs, respectively. The present study not only develops good FE and analytical models, but also demonstrates step by step a method to simplify a real optical switch structure with reference to the FE simulation results for analytical purposes. With the FE and analytical models, it is easy to obtain any information about the mechanical behavior of the optical switches, which is helpful in producing optimized designs.

  13. Sustainability and optimal control of an exploited prey predator system through provision of alternative food to predator.

    PubMed

    Kar, T K; Ghosh, Bapan

    2012-08-01

    In the present paper, we develop a simple two-species prey-predator model in which the predator is partially coupled with an alternative prey. The aim is to study the consequences of providing additional food to the predator as well as the effects of harvesting efforts applied to both species. It is observed that the provision of alternative food to the predator is not always beneficial to the system. A complete picture of the long-run dynamics of the system is discussed based on the effort pair as control parameters. Optimal augmentations of prey and predator biomass at the final time have been investigated by optimal control theory, and the short- and long-term effects of applying the optimal control are also discussed. Finally, some numerical illustrations are given to verify the analytical results with the help of different sets of parameters.
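    A schematic simulation in the spirit of this model class is sketched below (not the authors' exact equations): logistic prey, a predator partially sustained by alternative food, and harvesting efforts applied to both species, integrated with scipy.

    ```python
    # Schematic prey-predator model with alternative food for the predator and harvesting
    # of both species; parameter values and functional forms are assumptions.
    from scipy.integrate import solve_ivp

    def model(t, z, r=1.0, K=10.0, a=0.6, b=0.3, c=0.2, A=0.5,
              d=0.4, q1=0.5, q2=0.4, E1=0.3, E2=0.2):
        x, y = z                                                  # prey, predator biomass
        dx = r * x * (1 - x / K) - a * x * y - q1 * E1 * x        # logistic growth, predation, harvest
        dy = b * a * x * y + c * A * y - d * y - q2 * E2 * y      # conversion plus alternative food A
        return [dx, dy]

    sol = solve_ivp(model, t_span=(0.0, 200.0), y0=[5.0, 1.0], rtol=1e-8)
    print(f"long-run biomass: prey = {sol.y[0, -1]:.3f}, predator = {sol.y[1, -1]:.3f}")
    ```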

  14. Library-based illumination synthesis for critical CMOS patterning.

    PubMed

    Yu, Jue-Chin; Yu, Peichen; Chao, Hsueh-Yung

    2013-07-01

    In optical microlithography, the illumination source for critical complementary metal-oxide-semiconductor layers needs to be determined in the early stage of a technology node with very limited design information, leading to simple binary shapes. Recently, the availability of freeform sources permits us to increase pattern fidelity and relax mask complexities with minimal insertion risks to the current manufacturing flow. However, source optimization across many patterns is often treated as a design-of-experiments problem, which may not fully exploit the benefits of a freeform source. In this paper, a rigorous source-optimization algorithm is presented via linear superposition of optimal sources for pre-selected patterns. We show that analytical solutions are made possible by using Hopkins formulation and quadratic programming. The algorithm allows synthesized illumination to be linked with assorted pattern libraries, which has a direct impact on design rule studies for early planning and design automation for full wafer optimization.
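    A heavily simplified view of such a source-synthesis step is sketched below: non-negative source-pixel weights are chosen so that linearized image responses of several pre-selected patterns are jointly matched, which reduces to a quadratic program (here solved as non-negative least squares). The response matrices and targets are random stand-ins, not Hopkins-model kernels.

    ```python
    # Simplified freeform-source synthesis: choose non-negative source-pixel weights so
    # that stand-in linear image responses of several patterns are jointly matched.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n_pixels = 50                       # pixels of the freeform source map
    n_patterns, n_obs = 4, 30

    # Stand-in linear operators mapping source pixels to image samples for each pattern.
    A_list = [rng.uniform(0, 1, size=(n_obs, n_pixels)) for _ in range(n_patterns)]
    targets = [rng.uniform(0.4, 0.6, size=n_obs) for _ in range(n_patterns)]

    # Stack all patterns into one least-squares system with a non-negativity constraint
    # on the source pixels (the essence of the quadratic-programming formulation).
    A = np.vstack(A_list)
    b = np.concatenate(targets)
    source, residual = nnls(A, b)

    print("non-zero source pixels:", int(np.count_nonzero(source > 1e-9)), "of", n_pixels)
    print("stacked residual norm:", round(residual, 4))
    ```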

  15. A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles

    NASA Technical Reports Server (NTRS)

    Eldred, C. H.; Gordon, S. V.

    1976-01-01

    A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.

  16. An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms.

    PubMed

    Zhang, Yushan; Hu, Guiwu

    2015-01-01

    Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variance, we calculate the runtime upper bounds of special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds are affected by the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions under which the average runtime of the considered EP can be no more than a polynomial of n; the condition is that the Lebesgue measure of the optimal neighborhood is larger than a combinatorial calculation involving an exponential and the given polynomial of n.
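    The quantity bounded by such an analysis, the first hitting time of an ε-neighborhood of the optimum, can be illustrated empirically; the toy run below compares fixed-scale Gaussian and Cauchy mutation EP on a sphere objective. All settings are illustrative and unrelated to the paper's Markov-chain derivation.

    ```python
    # Toy EP run: generations needed for a fixed-scale mutation to reach an
    # epsilon-neighbourhood of the optimum of the sphere function.
    import numpy as np

    def ep_hitting_time(mutation, n=3, pop=10, scale=0.2, eps=0.05, max_gen=10_000, seed=1):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, size=(pop, n))                     # initial population
        for gen in range(1, max_gen + 1):
            children = x + scale * mutation(rng, (pop, n))        # one offspring per parent
            both = np.vstack([x, children])
            fitness = np.sum(both**2, axis=1)                     # sphere objective
            x = both[np.argsort(fitness)[:pop]]                   # (mu + mu) truncation selection
            if fitness.min() < eps:                               # optimal neighbourhood reached
                return gen
        return max_gen

    gaussian = lambda rng, shape: rng.normal(size=shape)
    cauchy = lambda rng, shape: rng.standard_cauchy(size=shape)
    print("Gaussian mutation EP generations:", ep_hitting_time(gaussian))
    print("Cauchy mutation EP generations:  ", ep_hitting_time(cauchy))
    ```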

  17. Optimal Control of Connected and Automated Vehicles at Roundabouts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Liuhui; Malikopoulos, Andreas; Rios-Torres, Jackeline

    Connectivity and automation in vehicles provide the most intriguing opportunity for enabling users to better monitor transportation network conditions and make better operating decisions to improve safety and reduce pollution, energy consumption, and travel delays. This study investigates the implications of optimally coordinating vehicles that are wirelessly connected to each other and to the infrastructure in roundabouts to achieve smooth traffic flow without stop-and-go driving. We apply an optimization framework and an analytical solution that allows optimal coordination of vehicles for merging in such traffic scenarios. The effectiveness of the proposed approach is validated through simulation, and it is shown that coordination of vehicles can reduce total travel time by 3~49% and fuel consumption by 2~27% across different traffic levels. In addition, network throughput is improved by up to 25% due to the elimination of stop-and-go driving behavior.

  18. Control of three-dimensional waves on thin liquid films. I - Optimal control and transverse mode effects

    NASA Astrophysics Data System (ADS)

    Tomlin, Ruben; Gomes, Susana; Pavliotis, Greg; Papageorgiou, Demetrios

    2017-11-01

    We consider a weakly nonlinear model for interfacial waves on three-dimensional thin films on inclined flat planes - the Kuramoto-Sivashinsky equation. The flow is driven by gravity, and is allowed to be overlying or hanging on the flat substrate. Blowing and suction controls are applied at the substrate surface. In this talk we explore the instability of the transverse modes for hanging arrangements, which are unbounded and grow exponentially. The structure of the equations allows us to construct optimal transverse controls analytically to prevent this transverse growth. In this case and the case of an overlying film, we additionally study the influence of controlling to non-trivial transverse states on the streamwise and mixed mode dynamics. Finally, we solve the full optimal control problem by deriving the first order necessary conditions for existence of an optimal control, and solving these numerically using the forward-backward sweep method.

  19. New perspectives on the dynamics of AC and DC plasma arcs exposed to cross-fields

    NASA Astrophysics Data System (ADS)

    Abdo, Youssef; Rohani, Vandad; Cauneau, François; Fulcheri, Laurent

    2017-02-01

    Interactions between an arc and external fields are crucially important for the design and optimization of modern plasma torches. Multiple studies have been conducted to help better understand the behavior of DC and AC arcs exposed to external and ‘self-induced’ magnetic fields, but the theoretical foundations remain very poorly explored. An analytical investigation has therefore been carried out in order to study the general behavior of DC and AC arcs under the effect of random cross-fields. A simple differential equation describing the general behavior of a planar DC or AC arc has been obtained. Several dimensionless numbers that depend primarily on the arc and field parameters and on the main arc characteristics (temperature, electric field strength) have also been determined; their magnitude indicates the general tendency of the arc evolution. The analytical results for many case studies have been validated using an MHD numerical model. The main purpose of this investigation was to derive a practical analytical model for the electric arc, making possible its stabilization and control as well as the enhancement of plasma torch power.

  20. Single-Laboratory Validation for the Determination of Flavonoids in Hawthorn Leaves and Finished Products by LC-UV.

    PubMed

    Mudge, Elizabeth M; Liu, Ying; Lund, Jensen A; Brown, Paula N

    2016-11-01

    Suitably validated analytical methods that can be used to quantify medicinally active phytochemicals in natural health products are required by regulators, manufacturers, and consumers. Hawthorn (Crataegus) is a botanical ingredient in natural health products used for the treatment of cardiovascular disorders. A method for the quantitation of vitexin-2″-O-rhamnoside, vitexin, isovitexin, rutin, and hyperoside in hawthorn leaf and flower raw materials and finished products was optimized and validated according to AOAC International guidelines. A two-level partial factorial study was used to guide the optimization of the sample preparation. The optimal conditions were found to be a 60-minute extraction using 50:48:2 methanol:water:acetic acid followed by a 25-minute separation using a reversed-phase liquid chromatography column with ultraviolet absorbance detection. The single-laboratory validation study evaluated method selectivity, accuracy, repeatability, linearity, limit of quantitation, and limit of detection. Individual flavonoid content ranged from 0.05 mg/g to 17.5 mg/g in solid dosage forms and raw materials. Repeatability ranged from 0.7 to 11.7% relative standard deviation, corresponding to HorRat values from 0.2 to 1.6. Calibration curves for each flavonoid were linear within the analytical ranges, with correlation coefficients greater than 99.9%. Herein is the first report of a validated method that is fit for the purpose of quantifying five major phytochemical marker compounds in both raw materials and finished products made from North American (Crataegus douglasii) and European (Crataegus monogyna and Crataegus laevigata) hawthorn species. The method includes optimized extraction of samples without a prolonged drying process and a reduced liquid chromatography separation time.

  1. Parameter optimization for transitions between memory states in small arrays of Josephson junctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rezac, Jacob D.; Imam, Neena; Braiman, Yehuda

    Coupled arrays of Josephson junctions possess multiple stable zero-voltage states. Such states can store information and consequently can be utilized for cryogenic memory applications. Basic memory operations can be implemented by sending a pulse to one of the junctions and studying transitions between the states. To be suitable for memory operations, such transitions between states have to be fast and energy efficient. In this article we employed simulated annealing, a stochastic optimization algorithm, to optimize the array parameters so as to minimize the times and energies of transitions between specifically chosen states that can be utilized for memory operations (Read, Write, and Reset). Simulation results show that such transitions occur with access times on the order of 10-100 ps and access energies on the order of 10⁻¹⁹ to 5×10⁻¹⁸ J. Numerical simulations are validated with approximate analytical results.
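    A generic simulated-annealing loop of the kind used for this parameter search is sketched below; the objective function is a stand-in, since the actual Josephson-junction transition time/energy model is not reproduced here.

    ```python
    # Generic simulated-annealing parameter search with a stand-in (Rastrigin-like) cost;
    # in the study, the cost would be a weighted transition time/energy for the array.
    import math
    import random

    def objective(params):
        """Placeholder cost function; replace with the transition time/energy model."""
        return sum(p * p - 10 * math.cos(2 * math.pi * p) + 10 for p in params)

    def simulated_annealing(dim=4, t0=10.0, cooling=0.995, steps=20_000, seed=42):
        rng = random.Random(seed)
        current = [rng.uniform(-5, 5) for _ in range(dim)]
        cost = objective(current)
        best, best_cost, temp = list(current), cost, t0
        for _ in range(steps):
            candidate = [p + rng.gauss(0, 0.3) for p in current]
            delta = objective(candidate) - cost
            if delta < 0 or rng.random() < math.exp(-delta / temp):   # Metropolis acceptance
                current, cost = candidate, cost + delta
            if cost < best_cost:
                best, best_cost = list(current), cost
            temp *= cooling                                            # geometric cooling schedule
        return best, best_cost

    params, cost = simulated_annealing()
    print("best parameters:", [round(p, 3) for p in params], "cost:", round(cost, 4))
    ```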

  2. Biosimilars for psoriasis: clinical studies to determine similarity.

    PubMed

    Blauvelt, A; Puig, L; Chimenti, S; Vender, R; Rajagopalan, M; Romiti, R; Skov, L; Zachariae, C; Young, H; Prens, E; Cohen, A; van der Walt, J; Wu, J J

    2017-07-01

    Biosimilars are drugs that are similar, but not identical, to originator biologics. Preclinical analytical studies are required to show similarity on a molecular and structural level, but efficacy and safety studies in humans are essential to determining biosimilarity. In this review, written by members of the International Psoriasis Council, we discuss how biosimilars are evaluated in a clinical setting, with emphasis on extrapolation of indication, interchangeability and optimal clinical trial design.

  3. Computer modeling of a two-junction, monolithic cascade solar cell

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.; Abbott, D.

    1979-01-01

    The theory and design criteria for monolithic, two-junction cascade solar cells are described. The departure from the conventional solar cell analytical method and the reasons for using the integral form of the continuity equations are briefly discussed. The results of design optimization are presented. The energy conversion efficiency that is predicted for the optimized structure is greater than 30% at 300 K, AMO and one sun. The analytical method predicts device performance characteristics as a function of temperature. The range is restricted to 300 to 600 K. While the analysis is capable of determining most of the physical processes occurring in each of the individual layers, only the more significant device performance characteristics are presented.

  4. An analytical optimization model for infrared image enhancement via local context

    NASA Astrophysics Data System (ADS)

    Xu, Yongjian; Liang, Kun; Xiong, Yiru; Wang, Hui

    2017-12-01

    The demand for high-quality infrared images, with low distortion and appropriate contrast, is constantly increasing in both military and civilian applications, yet infrared images commonly suffer from shortcomings such as low contrast. In this paper, we propose a novel infrared image histogram enhancement algorithm based on local context. By constraining the enhanced image to have high local contrast, a regularized analytical optimization model is proposed to enhance infrared images. The local contrast is determined by evaluating whether two intensities are neighbors and calculating their differences. A comparison on 8-bit images shows that the proposed method can enhance infrared images with more detail and lower noise.

  5. Optimal study design with identical power: an application of power equivalence to latent growth curve models.

    PubMed

    von Oertzen, Timo; Brandmaier, Andreas M

    2013-06-01

    Structural equation models have become a broadly applied data-analytic framework. Among them, latent growth curve models have become a standard method in longitudinal research. However, researchers often rely solely on rules of thumb about statistical power in their study designs. The theory of power equivalence provides an analytical answer to the question of how design factors, for example, the number of observed indicators and the number of time points assessed in repeated measures, trade off against each other while holding the power for likelihood-ratio tests on the latent structure constant. In this article, we present applications of power-equivalent transformations on a model with data from a previously published study on cognitive aging, and highlight consequences of participant attrition on power.

  6. Operations Optimization of Nuclear Hybrid Energy Systems

    DOE PAGES

    Chen, Jun; Garcia, Humberto E.; Kim, Jong Suk; ...

    2016-08-01

    Nuclear hybrid energy systems (NHES) have been proposed as an effective element for incorporating high penetrations of clean energy. This paper focuses on the operations optimization of two specific NHES configurations to address the variability arising from various markets and renewable generation. Both analytical and numerical approaches are used to obtain the optimization solutions. Key economic figures of merit are evaluated under optimized and constant operations to demonstrate the benefit of the optimization, which also suggests the economic viability of the considered NHES under the proposed operations optimizer. Furthermore, a sensitivity analysis on commodity price is conducted for a better understanding of the considered NHES.

  7. Operations Optimization of Nuclear Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Garcia, Humberto E.; Kim, Jong Suk

    Nuclear hybrid energy systems (NHES) have been proposed as an effective element for incorporating high penetrations of clean energy. This paper focuses on the operations optimization of two specific NHES configurations to address the variability arising from various markets and renewable generation. Both analytical and numerical approaches are used to obtain the optimization solutions. Key economic figures of merit are evaluated under optimized and constant operations to demonstrate the benefit of the optimization, which also suggests the economic viability of the considered NHES under the proposed operations optimizer. Furthermore, a sensitivity analysis on commodity price is conducted for a better understanding of the considered NHES.

  8. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from previous studies, supporting the utility of linked AE-parameter estimation codes for recharge estimation.

  9. Stability analysis of the phytoplankton effect model on changes in nitrogen concentration on integrated multi-trophic aquaculture systems

    NASA Astrophysics Data System (ADS)

    Widowati; Putro, S. P.; Silfiana

    2018-05-01

    Integrated Multi-Trophic Aquaculture (IMTA) is a polyculture in which several biota are maintained together to optimize the recycling of waste as a food source. The interaction between phytoplankton and the nitrogen waste products of fish cultivation, including ammonia, nitrite, and nitrate, is studied in the form of a mathematical model. The model takes the form of a non-linear system of differential equations in four variables. Analytical methods are used to study the dynamic behavior of this model. Local stability analysis is performed at the equilibrium point by first linearizing the model using a Taylor series expansion and then determining the Jacobian matrix. If all eigenvalues have negative real parts, then the equilibrium of the system is locally asymptotically stable. Some numerical simulations are also presented to verify the analytical results.
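    The linearization step described above can be carried out numerically as sketched below: an equilibrium is located, a finite-difference Jacobian is formed, and its eigenvalues are inspected. The four-variable right-hand side is a schematic stand-in, not the authors' model.

    ```python
    # Numerical local-stability check for a schematic four-variable
    # phytoplankton-ammonia-nitrite-nitrate system (stand-in equations).
    import numpy as np
    from scipy.optimize import fsolve

    def rhs(state, mu=0.8, k=0.5, r1=0.6, r2=0.4, d=0.3, s=1.0):
        p, na, ni, nn = state                       # phytoplankton, ammonia, nitrite, nitrate
        uptake = mu * p * nn / (k + nn)             # Monod uptake of nitrate by phytoplankton
        return np.array([uptake - d * p,            # phytoplankton growth minus mortality
                         s - r1 * na,               # ammonia: waste input minus nitrification
                         r1 * na - r2 * ni,         # nitrite: produced from ammonia, oxidised
                         r2 * ni - uptake])         # nitrate: produced from nitrite, taken up

    def jacobian(f, x0, h=1e-6):
        """Forward-difference Jacobian of f at x0."""
        n = len(x0)
        J = np.zeros((n, n))
        f0 = f(x0)
        for j in range(n):
            step = np.zeros(n)
            step[j] = h
            J[:, j] = (f(x0 + step) - f0) / h
        return J

    eq = fsolve(rhs, x0=np.ones(4))                 # locate an interior equilibrium numerically
    eigs = np.linalg.eigvals(jacobian(rhs, eq))
    print("equilibrium:", np.round(eq, 3))
    print("locally asymptotically stable:", bool(np.all(eigs.real < 0)))
    ```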

  10. Development of a plasma sprayed ceramic gas path seal for high pressure turbine applications

    NASA Technical Reports Server (NTRS)

    Shiembob, L. T.

    1977-01-01

    The plasma-sprayed, graded, layered yttria-stabilized zirconia (ZrO2)/metal (CoCrAlY) seal system for gas turbine blade tip applications at seal temperatures up to 1589 K (2400 F) was studied. The abradability, erosion, and thermal fatigue characteristics of the graded, layered system were evaluated by rig tests. Satisfactory abradability and erosion resistance were demonstrated, and encouraging thermal fatigue tolerance was shown. Initial properties for the plasma-sprayed materials in the graded, layered seal system were obtained, and thermal stress analyses were performed. Sprayed residual stresses were determined. The thermal stability of the sprayed layer materials was evaluated at the estimated maximum operating temperature in each layer. Anisotropic behavior in the layer thickness direction was demonstrated by all layers. Residual stresses and thermal stability effects were not included in the analyses. Analytical results correlated reasonably well with the results of the thermal fatigue tests. Analytical application of the seal system to a typical gas turbine engine predicted performance similar to the rig specimen thermal fatigue performance. A model for predicting crack propagation in the sprayed ZrO2/CoCrAlY seal system was proposed, and recommendations for improving thermal fatigue resistance were made. The seal system layer thicknesses were analytically optimized to minimize thermal stresses in the abradability specimen during thermal fatigue testing. Rig tests on the optimized seal configuration demonstrated some improvement in thermal fatigue characteristics.

  11. Solid sampling determination of magnesium in lithium niobate crystals by graphite furnace atomic absorption spectrometry

    NASA Astrophysics Data System (ADS)

    Dravecz, Gabriella; Laczai, Nikoletta; Hajdara, Ivett; Bencs, László

    2016-12-01

    The vaporization/atomization processes of Mg in high-resolution continuum source graphite furnace atomic absorption spectrometry (HR-CS-GFAAS) were investigated by evaporating solid (powder) samples of lithium niobate (LiNbO3) optical single crystals doped with various amounts of Mg in a transversally heated graphite atomizer (THGA). Optimal analytical conditions were attained by using the Mg I 215.4353 nm secondary spectral line. An optimal pyrolysis temperature of 1500 °C was found for Mg, while the compromise atomization temperature in THGAs (2400 °C) was applied for analyte vaporization. The calibration was performed against solid (powdered) lithium niobate crystal standards. The standards were prepared with exactly known Mg content via solid-state fusion of the oxide components of the matrix and analyte. The correlation coefficient (R value) of the linear calibration was no worse than 0.9992. The calibration curves were linear in the dopant concentration range of interest (0.74-7.25 mg/g Mg) when dosing 3-10 mg of the powder samples into the graphite sample insertion boats. The Mg content of the 19 studied samples was in the range of 1.69-4.13 mg/g. The precision of the method was better than 6.3%. The accuracy of the results was verified by means of flame atomic absorption spectrometry with solution sample introduction after digestion of several crystal samples.

  12. Digital filtering implementations for the detection of broad spectral features by direct analysis of passive Fourier transform infrared interferograms.

    PubMed

    Tarumi, Toshiyasu; Small, Gary W; Combs, Roger J; Kroutil, Robert T

    2004-04-01

    Finite impulse response (FIR) filters and finite impulse response matrix (FIRM) filters are evaluated for use in the detection of volatile organic compounds with wide spectral bands by direct analysis of interferogram data obtained from passive Fourier transform infrared (FT-IR) measurements. Short segments of filtered interferogram points are classified by support vector machines (SVMs) to implement the automated detection of heated plumes of the target analyte, ethanol. The interferograms employed in this study were acquired with a downward-looking passive FT-IR spectrometer mounted on a fixed-wing aircraft. Classifiers are trained with data collected on the ground and subsequently used for the airborne detection. The success of the automated detection depends on the effective removal of background contributions from the interferogram segments. Removing the background signature is complicated when the analyte spectral bands are broad because there is significant overlap between the interferogram representations of the analyte and background. Methods to implement the FIR and FIRM filters while excluding background contributions are explored in this work. When properly optimized, both filtering procedures provide satisfactory classification results for the airborne data. Missed detection rates of 8% or smaller for ethanol and false positive rates of at most 0.8% are realized. The optimization of filter design parameters, the starting interferogram point for filtering, and the length of the interferogram segments used in the pattern recognition is discussed.
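    The processing chain described above can be sketched schematically as bandpass FIR filtering of short interferogram segments followed by SVM classification; the signals below are synthetic and the filter parameters are placeholders, not the optimized values from the study.

    ```python
    # Schematic detection pipeline: FIR bandpass filtering of synthetic interferogram
    # segments to suppress background, then SVM classification of the filtered segments.
    import numpy as np
    from scipy.signal import firwin, lfilter
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_points, n_samples = 128, 400

    def synth_segment(has_analyte: bool) -> np.ndarray:
        """Low-frequency background plus noise; 'analyte' adds a component in the passband."""
        t = np.arange(n_points)
        seg = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.normal(size=n_points)
        if has_analyte:
            seg += 0.6 * np.sin(2 * np.pi * 0.2 * t + rng.uniform(0, 2 * np.pi))
        return seg

    labels = rng.integers(0, 2, size=n_samples)
    segments = np.array([synth_segment(bool(y)) for y in labels])

    # Bandpass FIR filter intended to suppress the low-frequency background contribution.
    taps = firwin(numtaps=31, cutoff=[0.15, 0.25], pass_zero=False, fs=1.0)
    filtered = lfilter(taps, 1.0, segments, axis=1)

    X_train, X_test, y_train, y_test = train_test_split(filtered, labels, random_state=0)
    clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
    print("held-out detection accuracy:", round(clf.score(X_test, y_test), 3))
    ```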

  13. Development of a fast capillary electrophoresis method for determination of carbohydrates in honey samples.

    PubMed

    Rizelio, Viviane Maria; Tenfen, Laura; da Silveira, Roberta; Gonzaga, Luciano Valdemiro; Costa, Ana Carolina Oliveira; Fett, Roseane

    2012-05-15

    In this study, the determination of fructose, glucose and sucrose by capillary electrophoresis (CE) was investigated. The tendency of the analytes to undergo electromigration dispersion and the buffer capacity were evaluated using the Peakmaster® software and considered in the optimization of the background electrolyte, which was composed of 20 mmol/L sorbic acid, 0.2 mmol/L CTAB and 40 mmol/L NaOH at pH 12.2. Under optimal CE conditions, the separation of the substances investigated was achieved in less than 2 min. The detection limits for the three analytes were in the range of 0.022-0.029 g/L, and precision measurements within 0.62-4.69% were achieved. The proposed methodology was applied to the quantitative analysis of honey samples by direct injection to determine the main sugars present. The samples were previously dissolved in deionized water and filtered, with no other sample treatment. The mean values for fructose, glucose and sucrose were in the ranges of 33.65-45.46 g/100 g, 24.63-35.06 g/100 g and <0.22-1.32 g/100 g, respectively. The good analytical performance of the method makes it suitable for implementation in food laboratories for the routine analysis of honey samples.

  14. Optimal Medical Equipment Maintenance Service Proposal Decision Support System combining Activity Based Costing (ABC) and the Analytic Hierarchy Process (AHP).

    PubMed

    da Rocha, Leticia; Sloane, Elliot; M Bassani, Jose

    2005-01-01

    This study describes a framework to support the choice of the maintenance service (in-house or third party contract) for each category of medical equipment based on: a) the real medical equipment maintenance management system currently used by the biomedical engineering group of the public health system of the Universidade Estadual de Campinas located in Brazil to control the medical equipment maintenance service, b) the Activity Based Costing (ABC) method, and c) the Analytic Hierarchy Process (AHP) method. Results show the cost and performance related to each type of maintenance service. Decision-makers can use these results to evaluate possible strategies for the categories of equipment.

  15. Use of the Wechsler Adult Intelligence Scale Digit Span subtest for malingering detection: a meta-analytic review.

    PubMed

    Jasinski, Lindsey J; Berry, David T R; Shandera, Anni L; Clark, Jessica A

    2011-03-01

    Twenty-four studies utilizing the Wechsler Adult Intelligence Scale (WAIS) Digit Span subtest--either the Reliable Digit Span (RDS) or Age-Corrected Scaled Score (DS-ACSS) variant--for malingering detection were meta-analytically reviewed to evaluate their effectiveness in detecting malingered neurocognitive dysfunction. RDS and DS-ACSS effectively discriminated between honest responders and dissimulators, with average weighted effect sizes of 1.34 and 1.08, respectively. No significant differences were found between RDS and DS-ACSS. Similarly, no differences were found between the Digit Span subtest from the WAIS or Wechsler Memory Scale (WMS). Strong specificity and moderate sensitivity were observed, and optimal cutting scores are recommended.

  16. An optimization framework for measuring spatial access over healthcare networks.

    PubMed

    Li, Zihao; Serban, Nicoleta; Swann, Julie L

    2015-07-17

    Measurement of healthcare spatial access over a network involves accounting for demand, supply, and network structure. Popular approaches are based on floating catchment areas; however, these methods can overestimate demand over the network and fail to capture cascading effects across the system. Optimization is presented as a framework for measuring spatial access, and questions related to when and why optimization should be used are addressed. The accuracy of the optimization models compared with the two-step floating catchment area method and its variations is analytically demonstrated, and a case study of specialty care for Cystic Fibrosis over the continental United States is used to compare these approaches. The optimization models capture a patient's experience rather than their opportunities and avoid overestimating patient demand. They can also capture system effects arising from congestion. Furthermore, the optimization models provide more elements of access than traditional catchment methods. Optimization models can incorporate user choice and other variations, and they can be useful for targeting interventions to improve access. They can be easily adapted to measure access for different types of patients, over different provider types, or with capacity constraints in the network. Moreover, optimization models allow differences in access between rural and urban areas to be examined.
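    A toy instance of this optimization view of access is sketched below: patient demand is assigned to capacity-limited providers so that total travel distance is minimized, using a small linear program. The demands, capacities, and distances are made up for illustration.

    ```python
    # Toy spatial-access optimization: assign demand to capacitated providers to
    # minimise total patient-distance. All numbers are invented for illustration.
    import numpy as np
    from scipy.optimize import linprog

    demand = np.array([40, 25, 35])                    # patients at three population centres
    capacity = np.array([60, 50])                      # two provider sites
    dist = np.array([[10.0, 30.0],                     # distance from each centre to each site
                     [25.0, 15.0],
                     [40.0, 20.0]])

    n_c, n_p = dist.shape
    c = dist.ravel()                                   # objective: total patient-distance
    # Each centre's demand must be fully assigned (equality constraints).
    A_eq = np.zeros((n_c, n_c * n_p))
    for i in range(n_c):
        A_eq[i, i * n_p:(i + 1) * n_p] = 1.0
    # Each provider's assigned demand must not exceed capacity (inequality constraints).
    A_ub = np.zeros((n_p, n_c * n_p))
    for j in range(n_p):
        A_ub[j, j::n_p] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand, bounds=(0, None))
    assignment = res.x.reshape(n_c, n_p)
    print("optimal assignment (patients):\n", np.round(assignment, 1))
    print("average travel distance:", round(res.fun / demand.sum(), 2))
    ```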

  17. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High-performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics, and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.

  18. Gauging the Success of Your Web Site

    ERIC Educational Resources Information Center

    Goldsborough, Reid

    2005-01-01

    Web analytics is a way to measure and optimize Web site performance, says Jason Burby, director of Web analytics for ZAAZ Inc., a Web design and development firm in Seattle with a countrywide client base. He compares it to using Evite, which is a useful, free web service that makes it easy to send out party and other invitations and,…

  19. Operational Analysis of Time-Optimal Maneuvering for Imaging Spacecraft

    DTIC Science & Technology

    2013-03-01

    The work presents an operational analysis of time-optimal maneuvering for the Singapore-developed X-SAT imaging spacecraft. The analysis is facilitated through the use of AGI's Systems Tool Kit (STK) software and an Analytic Hierarchy Process (AHP)-based evaluation.

  20. Simplified analytical model and balanced design approach for light-weight wood-based structural panel in bending

    Treesearch

    Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai

    2016-01-01

    This paper presents a simplified analytical model and balanced design approach for modeling lightweight wood-based structural panels in bending. Because many design parameters are required as input to finite element analysis (FEA) models during the preliminary design process and optimization, an equivalent method was developed to analyze the mechanical...

  1. Geospatial Analytics in Retail Site Selection and Sales Prediction.

    PubMed

    Ting, Choo-Yee; Ho, Chiung Ching; Yee, Hui Jia; Matsah, Wan Razali

    2018-03-01

    Studies have shown that certain features from geography, demography, trade area, and environment can play a vital role in retail site selection, largely due to the impact they exert on retail performance. Although the relevant features could be elicited by domain experts, determining the optimal feature set can be an intractable and labor-intensive exercise. The challenges center around (1) how to determine the features that are important to a particular retail business and (2) how to estimate retail sales performance given a new location. The challenges become apparent when the features vary across time. In this light, this study proposed a nonintervening approach employing feature selection algorithms followed by sales prediction through similarity-based methods. The results of the prediction were validated by domain experts. In this study, data sets from different sources were transformed and aggregated before an analytics data set ready for analysis could be obtained. The data sets included data about feature location, population count, property type, education status, and monthly sales from 96 branches of a telecommunication company in Malaysia. The findings suggest that (1) optimal retail performance can only be achieved through fulfillment of specific location features together with the surrounding trade area characteristics and (2) similarity-based methods can provide a solution to retail sales prediction.
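    A hedged sketch of the two-step idea described above: score candidate location features against sales, then predict sales for a new site from its most similar existing branches. The features, data, and model choices below are illustrative stand-ins, not the study's actual variables or algorithms.

    ```python
    # Two-step sketch: (1) rank candidate site features by relevance to sales,
    # (2) predict sales for a new site from the k most similar existing branches.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(96, 5))                  # 96 branches x 5 candidate features
    y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=96)   # monthly sales

    # Step 1: rank features by mutual information with sales, keep the top 3.
    scores = mutual_info_regression(X, y, random_state=0)
    keep = np.argsort(scores)[-3:]

    # Step 2: similarity-based prediction (k nearest branches in feature space).
    scaler = StandardScaler().fit(X[:, keep])
    knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
    knn.fit(scaler.transform(X[:, keep]), y)

    new_site = rng.normal(size=(1, 5))
    print(knn.predict(scaler.transform(new_site[:, keep])))
    ```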

  2. Simultaneous assay of multiple antibiotics in human plasma by LC-MS/MS: importance of optimizing formic acid concentration.

    PubMed

    Chen, Feng; Hu, Zhe-Yi; Laizure, S Casey; Hudson, Joanna Q

    2017-03-01

    Optimal dosing of antibiotics in critically ill patients is complicated by the development of resistant organisms requiring treatment with multiple antibiotics and alterations in systemic exposure due to diseases and extracorporeal drug removal. Developing guidelines for optimal antibiotic dosing is an important therapeutic goal requiring robust analytical methods to simultaneously measure multiple antibiotics. An LC-MS/MS assay using protein precipitation for cleanup followed by a 6-min gradient separation was developed to simultaneously determine five antibiotics in human plasma. The precision and accuracy were within the 15% acceptance range. The formic acid concentration was an important determinant of signal intensity, peak shape and matrix effects. The method was designed to be simple and successfully applied to a clinical pharmacokinetic study.

  3. Optimal Low-Thrust Limited-Power Transfers between Arbitrary Elliptic Coplanar Orbits

    NASA Technical Reports Server (NTRS)

    da Silva Fernandes, Sandro; das Chagas Carvalho, Francisco

    2007-01-01

    In this work, a complete first order analytical solution, which includes the short periodic terms, for the problem of optimal low-thrust limited-power transfers between arbitrary elliptic coplanar orbits in a Newtonian central gravity field is obtained through Hamilton-Jacobi theory and a perturbation method based on Lie series.

  4. A computational method for optimizing fuel treatment locations

    Treesearch

    Mark A. Finney

    2006-01-01

    Modeling and experiments have suggested that spatial fuel treatment patterns can influence the movement of large fires. On simple theoretical landscapes consisting of two fuel types (treated and untreated) optimal patterns can be analytically derived that disrupt fire growth efficiently (i.e. with less area treated than random patterns). Although conceptually simple,...

  5. Transient analysis of an adaptive system for optimization of design parameters

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.

    1992-01-01

    Averaging methods are applied to analyzing and optimizing the transient response associated with the direct adaptive control of an oscillatory second-order minimum-phase system. The analytical design methods developed for a second-order plant can be applied with some approximation to a MIMO flexible structure having a single dominant mode.

  6. The primer vector in linear, relative-motion equations. [spacecraft trajectory optimization

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Primer vector theory is used in analyzing a set of linear, relative-motion equations - the Clohessy-Wiltshire equations - to determine the criteria and necessary conditions for an optimal, N-impulse trajectory. Since the state vector for these equations is defined in terms of a linear system of ordinary differential equations, all fundamental relations defining the solution of the state and costate equations, and the necessary conditions for optimality, can be expressed in terms of elementary functions. The analysis develops the analytical criteria for improving a solution by (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of (1) fixed-end conditions, two-impulse, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem. A sequence of rendezvous problems is solved to illustrate the analysis and the computational procedure.
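    As background for the primer-vector analysis, the Clohessy-Wiltshire equations admit the standard closed-form solution below, written as a state transition matrix with x radial, y along-track, z cross-track, and n the target mean motion. This is the textbook form rather than the paper's own derivation, and the numbers in the usage example are illustrative.

    ```python
    # Closed-form Clohessy-Wiltshire solution as a 6x6 state transition matrix.
    # State ordering: [x, y, z, xdot, ydot, zdot]; n is the target mean motion.
    import numpy as np

    def cw_stm(n, t):
        s, c = np.sin(n * t), np.cos(n * t)
        return np.array([
            [4 - 3 * c,       0, 0,       s / n,           2 * (1 - c) / n,          0],
            [6 * (s - n * t), 1, 0,       2 * (c - 1) / n, (4 * s - 3 * n * t) / n,  0],
            [0,               0, c,       0,               0,                        s / n],
            [3 * n * s,       0, 0,       c,               2 * s,                    0],
            [6 * n * (c - 1), 0, 0,      -2 * s,           4 * c - 3,                0],
            [0,               0, -n * s,  0,               0,                        c],
        ])

    n = 0.0011                                          # rad/s, roughly low Earth orbit
    x0 = np.array([100.0, 0.0, 0.0, 0.0, -0.2, 0.0])    # m and m/s relative state
    print(cw_stm(n, 600.0) @ x0)                        # relative state 10 minutes later
    ```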

  7. Comparison of two optimized readout chains for low light CIS

    NASA Astrophysics Data System (ADS)

    Boukhayma, A.; Peizerat, A.; Dupret, A.; Enz, C.

    2014-03-01

    We compare the noise performance of two optimized readout chains that are based on 4T pixels and feature the same bandwidth of 265 kHz (enough to read 1 megapixel at 50 frames/s). Both chains contain a 4T pixel, a column amplifier and a single-slope analog-to-digital converter performing correlated double sampling (CDS). In one case the pixel operates in the source-follower configuration, and in the other case in the common-source configuration. Based on analytical noise calculations for both readout chains, an optimization methodology is presented. Analytical results are confirmed by transient simulations using a 130 nm process. A total input-referred noise below 0.4 electrons RMS is reached for a simulated conversion gain of 160 μV/e-. Both optimized readout chains show the same input-referred 1/f noise. The common-source-based readout chain shows better performance for thermal noise and requires smaller silicon area. We discuss the possible drawbacks of the common-source configuration and provide the reader with a comparative table between the two readout chains. The table contains several variants (column amplifier gain, in-pixel transistor sizes and type).

  8. Optimizing cosmological surveys in a crowded market

    NASA Astrophysics Data System (ADS)

    Bassett, Bruce A.

    2005-04-01

    Optimizing the major next-generation cosmological surveys (such as SNAP, KAOS, etc.) is a key problem given our ignorance of the physics underlying cosmic acceleration and the plethora of surveys planned. We propose a Bayesian design framework which (1) maximizes the discrimination power of a survey without assuming any underlying dark-energy model, (2) finds the best niche survey geometry given current data and future competing experiments, (3) maximizes the cross section for serendipitous discoveries and (4) can be adapted to answer specific questions (such as “is dark energy dynamical?”). Integrated parameter-space optimization (IPSO) is a design framework that integrates projected parameter errors over an entire dark energy parameter space and then extremizes a figure of merit (such as Shannon entropy gain which we show is stable to off-diagonal covariance matrix perturbations) as a function of survey parameters using analytical, grid or MCMC techniques. We discuss examples where the optimization can be performed analytically. IPSO is thus a general, model-independent and scalable framework that allows us to appropriately use prior information to design the best possible surveys.
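    One common way to write such an entropy-gain figure of merit, sketched below under a Gaussian-forecast assumption: the Shannon entropy gain from prior to posterior parameter covariance is 0.5·ln(det C_prior / det C_post), and the survey parameter is varied to maximize it. The Fisher-matrix "forecast" here is a made-up placeholder, not a model of any of the surveys named above.

    ```python
    # Hedged sketch of an entropy-gain figure of merit for survey design under
    # Gaussian forecasts: gain = 0.5 * ln(det C_prior / det C_posterior).
    import numpy as np
    from scipy.optimize import minimize_scalar

    C_prior = np.diag([0.5**2, 1.0**2])      # prior covariance on (w0, wa), illustrative

    def fisher_forecast(depth):
        """Placeholder survey forecast: depth tightens wa but slowly loosens w0."""
        sigma_w0 = 0.05 + 0.002 * depth
        sigma_wa = 2.0 / (1.0 + 0.1 * depth)
        return np.diag([1.0 / sigma_w0**2, 1.0 / sigma_wa**2])

    def entropy_gain(depth):
        F_post = np.linalg.inv(C_prior) + fisher_forecast(depth)
        C_post = np.linalg.inv(F_post)
        # Shannon entropy gain of the Gaussian posterior over the Gaussian prior.
        return 0.5 * np.log(np.linalg.det(C_prior) / np.linalg.det(C_post))

    best = minimize_scalar(lambda d: -entropy_gain(d), bounds=(1.0, 50.0), method="bounded")
    print(best.x, entropy_gain(best.x))
    ```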

  9. Engine performance analysis and optimization of a dual-mode scramjet with varied inlet conditions

    NASA Astrophysics Data System (ADS)

    Tian, Lu; Chen, Li-Hong; Chen, Qiang; Zhong, Feng-Quan; Chang, Xin-Yu

    2016-02-01

    A dual-mode scramjet can operate in a wide range of flight conditions. Higher thrust can be generated by adopting suitable combustion modes. Based on the net thrust, an analysis and preliminary optimal design of a kerosene-fueled parameterized dual-mode scramjet at a crucial flight Mach number of 6 were investigated by using a modified quasi-one-dimensional method and simulated annealing strategy. Engine structure and heat release distributions, affecting the engine thrust, were chosen as analytical parameters for varied inlet conditions (isolator entrance Mach number: 1.5-3.5). Results show that different optimal heat release distributions and structural conditions can be obtained at five different inlet conditions. The highest net thrust of the parameterized dual-mode engine can be achieved by a subsonic combustion mode at an isolator entrance Mach number of 2.5. Additionally, the effects of heat release and scramjet structure on net thrust have been discussed. The present results and the developed analytical method can provide guidance for the design and optimization of high-performance dual-mode scramjets.
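    The simulated annealing strategy mentioned above can be sketched generically as follows; the net_thrust() evaluator is a placeholder with a single interior optimum, not the paper's modified quasi-one-dimensional engine model, and the two design variables are purely illustrative.

    ```python
    # Generic simulated-annealing loop for a design search; the objective and the
    # two bounded design variables are illustrative placeholders.
    import math, random

    def net_thrust(x):
        area_ratio, heat_frac = x
        # Placeholder objective with one interior optimum (maximize).
        return -(area_ratio - 2.5) ** 2 - 4 * (heat_frac - 0.6) ** 2 + 10.0

    def neighbor(x):
        return (min(4.0, max(1.0, x[0] + random.gauss(0, 0.1))),
                min(1.0, max(0.0, x[1] + random.gauss(0, 0.05))))

    x = (1.5, 0.3)
    f = net_thrust(x)
    T = 1.0
    for step in range(5000):
        cand = neighbor(x)
        fc = net_thrust(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if fc > f or random.random() < math.exp((fc - f) / T):
            x, f = cand, fc
        T *= 0.999                     # geometric cooling schedule

    print(x, f)
    ```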

  10. Multivariate Protein Signatures of Pre-Clinical Alzheimer's Disease in the Alzheimer's Disease Neuroimaging Initiative (ADNI) Plasma Proteome Dataset

    PubMed Central

    Johnstone, Daniel; Milward, Elizabeth A.; Berretta, Regina; Moscato, Pablo

    2012-01-01

    Background Recent Alzheimer's disease (AD) research has focused on finding biomarkers to identify disease at the pre-clinical stage of mild cognitive impairment (MCI), allowing treatment to be initiated before irreversible damage occurs. Many studies have examined brain imaging or cerebrospinal fluid but there is also growing interest in blood biomarkers. The Alzheimer's Disease Neuroimaging Initiative (ADNI) has generated data on 190 plasma analytes in 566 individuals with MCI, AD or normal cognition. We conducted independent analyses of this dataset to identify plasma protein signatures predicting pre-clinical AD. Methods and Findings We focused on identifying signatures that discriminate cognitively normal controls (n = 54) from individuals with MCI who subsequently progress to AD (n = 163). Based on p value, apolipoprotein E (APOE) showed the strongest difference between these groups (p = 2.3×10⁻¹³). We applied a multivariate approach based on combinatorial optimization ((α,β)-k Feature Set Selection), which retains information about individual participants and maintains the context of interrelationships between different analytes, to identify the optimal set of analytes (signature) to discriminate these two groups. We identified 11-analyte signatures achieving values of sensitivity and specificity between 65% and 86% for both MCI and AD groups, depending on whether APOE was included and other factors. Classification accuracy was improved by considering “meta-features,” representing the difference in relative abundance of two analytes, with an 8-meta-feature signature consistently achieving sensitivity and specificity both over 85%. Generating signatures based on longitudinal rather than cross-sectional data further improved classification accuracy, returning sensitivities and specificities of approximately 90%. Conclusions Applying these novel analysis approaches to the powerful and well-characterized ADNI dataset has identified sets of plasma biomarkers for pre-clinical AD. While studies of independent test sets are required to validate the signatures, these analyses provide a starting point for developing a cost-effective and minimally invasive test capable of diagnosing AD in its pre-clinical stages. PMID:22485168

  11. A novel high-performance self-powered ultraviolet photodetector: Concept, analytical modeling and analysis

    NASA Astrophysics Data System (ADS)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new MSM-UV-photodetector (PD) based on a dual wide band-gap material (DM) engineering approach is proposed to achieve a high-performance self-powered device. Comprehensive analytical models for the proposed sensor photocurrent and the device properties are developed, incorporating the impact of the DM design on the device photoelectrical behavior. The obtained results are validated against numerical data from commercial TCAD software. Our investigation demonstrates that the adopted design amendment modulates the electric field in the device, which makes it possible to drive the photo-generated carriers without an externally applied voltage. This provides the dual benefit of effective carrier separation and an efficient reduction of the dark current. Moreover, a new hybrid approach based on analytical modeling and Particle Swarm Optimization (PSO) is proposed to achieve improved photoelectric behavior at zero bias, ensuring a favorable self-powered MSM-based UV-PD. It is found that the proposed design methodology succeeds in identifying an optimized design that offers a self-powered device with high responsivity (98 mA/W) and a superior I_ON/I_OFF ratio (480 dB). These results make the optimized MSM-UV-DM-PD suitable for providing low-cost self-powered devices for high-performance optical communication and monitoring applications.

  12. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
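    A minimal PSO loop showing the velocity and position updates that such a calibration relies on; the log-likelihood below is a two-parameter stand-in, not the SAG model evaluated against the stellar mass function, and the swarm settings are generic textbook values.

    ```python
    # Minimal particle swarm optimizer: velocity update mixes inertia, a pull
    # toward each particle's personal best, and a pull toward the global best.
    import numpy as np

    rng = np.random.default_rng(1)

    def log_like(theta):
        # Placeholder likelihood peaking at (0.3, 1.5) in a 2-parameter space.
        return -((theta[:, 0] - 0.3) ** 2 / 0.01 + (theta[:, 1] - 1.5) ** 2 / 0.04)

    n_part, n_dim, w, c1, c2 = 30, 2, 0.72, 1.49, 1.49
    pos = rng.uniform([0, 0], [1, 3], size=(n_part, n_dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), log_like(pos)
    gbest = pbest[np.argmax(pbest_val)]

    for _ in range(200):
        r1, r2 = rng.random((n_part, n_dim)), rng.random((n_part, n_dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = log_like(pos)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)]

    print(gbest)
    ```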

  13. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of the composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first order differential equations which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived by using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle, subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized. Various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  14. Using Fuzzy Analytic Hierarchy Process multicriteria and Geographical information system for coastal vulnerability analysis in Morocco: The case of Mohammedia

    NASA Astrophysics Data System (ADS)

    Tahri, Meryem; Maanan, Mohamed; Hakdaoui, Mustapha

    2016-04-01

    This paper shows a method to assess vulnerability to coastal risks such as coastal erosion or marine submersion by applying the Fuzzy Analytic Hierarchy Process (FAHP) and spatial analysis techniques within a Geographic Information System (GIS). The coast of Mohammedia, Morocco, was chosen as the study site to implement and validate the proposed framework by applying a GIS-FAHP based methodology. The coastal risk vulnerability mapping is based on multi-parametric causative factors such as sea level rise, significant wave height, tidal range, coastal erosion, elevation, geomorphology and distance to urban areas. The Fuzzy Analytic Hierarchy Process methodology enables the calculation of the corresponding criteria weights. The result shows that the coastline of Mohammedia is characterized by moderate, high and very high levels of vulnerability to coastal risk. The high vulnerability areas are situated in the east at the Monika and Sablette beaches. This approach combines the efficiency of GIS tools with Fuzzy Analytic Hierarchy Process weighting to help decision makers find optimal strategies to minimize coastal risks.
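    For reference, criterion weighting in ordinary (crisp) AHP reduces to the normalized principal eigenvector of the pairwise comparison matrix plus a consistency check; the fuzzy variant used in the paper adds fuzzified judgments on top of this step. The 3x3 judgment matrix below is invented purely for illustration.

    ```python
    # Crisp AHP criterion weighting: normalized principal eigenvector of the
    # pairwise comparison matrix, with a consistency-ratio check.
    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],        # e.g. erosion vs elevation vs geomorphology
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                           # normalized criterion weights

    lam_max = eigvals[k].real
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)           # consistency index
    cr = ci / 0.58                         # Saaty's random index for n = 3 is 0.58
    print(w, cr)                           # CR below about 0.1 is conventionally acceptable
    ```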

  15. Gradient design for liquid chromatography using multi-scale optimization.

    PubMed

    López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C

    2018-01-26

    In reversed phase-liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem in a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique of fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
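    A sketch of how one scale-κ candidate gradient can be assembled under the stated scheme: N_κ control values define a clamped cubic B-spline over the run time, which is then sampled at M points to give the polygonal multi-linear gradient. The control values and run time below are placeholders, not an optimized program, and the monotonicity step is a simple stand-in for the paper's constraints.

    ```python
    # Build a scale-kappa candidate gradient: control values -> clamped cubic
    # B-spline -> M-point polygonal (time, %B) program.
    import numpy as np
    from scipy.interpolate import BSpline

    t_run, M, k = 30.0, 12, 3                         # run time (min), polygonal points, cubic degree
    ctrl = np.array([5.0, 10.0, 30.0, 55.0, 80.0])    # N_kappa = 5 control values (%B)
    n = len(ctrl)

    # Clamped knot vector so the spline starts/ends at the first/last control value.
    knots = np.concatenate(([0.0] * (k + 1),
                            np.linspace(0, t_run, n - k + 1)[1:-1],
                            [t_run] * (k + 1)))
    spline = BSpline(knots, ctrl, k)

    times = np.linspace(0.0, t_run, M)
    percent_b = np.maximum.accumulate(spline(times))  # enforce a non-decreasing gradient
    for tt, b in zip(times, percent_b):
        print(f"{tt:5.1f} min  {b:5.1f} %B")
    ```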

  16. Analytical display design for flight tasks conducted under instrument meteorological conditions. [human factors engineering of pilot performance for display device design in instrument landing systems

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1976-01-01

    Paramount to proper utilization of electronic displays is a method for determining pilot-centered display requirements. Display design should be viewed fundamentally as a guidance and control problem which has interactions with the designer's knowledge of human psychomotor activity. From this standpoint, reliable analytical models of human pilots as information processors and controllers can provide valuable insight into the display design process. A relatively straightforward, nearly algorithmic procedure for deriving model-based, pilot-centered display requirements was developed and is presented. The optimal or control theoretic pilot model serves as the backbone of the design methodology, which is specifically directed toward the synthesis of head-down, electronic, cockpit display formats. Some novel applications of the optimal pilot model are discussed. An analytical design example is offered which defines a format for the electronic display to be used in a UH-1H helicopter in a landing approach task involving longitudinal and lateral degrees of freedom.

  17. Thermal conductivity of microporous layers: Analytical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Andisheh-Tadbir, Mehdi; Kjeang, Erik; Bahrami, Majid

    2015-11-01

    A new compact relationship is developed for the thermal conductivity of the microporous layer (MPL) used in polymer electrolyte fuel cells as a function of pore size distribution, porosity, and compression pressure. The proposed model is successfully validated against experimental data obtained from a transient plane source thermal constants analyzer. The thermal conductivities of carbon paper samples with and without MPL were measured as a function of load (1-6 bars) and the MPL thermal conductivity was found to be between 0.13 and 0.17 W m⁻¹ K⁻¹. The proposed analytical model predicts the experimental thermal conductivities within 5%. A correlation generated from the analytical model was used in a multi-objective genetic algorithm to predict the pore size distribution and porosity for an MPL with optimized thermal conductivity and mass diffusivity. The results suggest that an optimized MPL, in terms of heat and mass transfer coefficients, has an average pore size of 122 nm and 63% porosity.

  18. Development and validation of a simple high-performance liquid chromatography analytical method for simultaneous determination of phytosterols, cholesterol and squalene in parenteral lipid emulsions.

    PubMed

    Novak, Ana; Gutiérrez-Zamora, Mercè; Domenech, Lluís; Suñé-Negre, Josep M; Miñarro, Montserrat; García-Montoya, Encarna; Llop, Josep M; Ticó, Josep R; Pérez-Lozano, Pilar

    2018-02-01

    A simple analytical method for simultaneous determination of phytosterols, cholesterol and squalene in lipid emulsions was developed owing to increased interest in their clinical effects. Method development was based on commonly used stationary (C18, C8 and phenyl) and mobile phases (mixtures of acetonitrile, methanol and water) under isocratic conditions. Differences in stationary phases resulted in peak overlapping or coelution of different peaks. The best separation of all analyzed compounds was achieved on Zorbax Eclipse XDB C8 (150 × 4.6 mm, 5 μm; Agilent) and ACN-H2O-MeOH, 80:19.5:0.5 (v/v/v). In order to achieve a shorter time of analysis, the method was further optimized and a gradient separation was established. The optimized analytical method was validated and tested for routine use in lipid emulsion analyses. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Projectile motion in real-life situation: Kinematics of basketball shooting

    NASA Astrophysics Data System (ADS)

    Changjan, A.; Mueanploy, W.

    2015-06-01

    Basketball shooting is a basic practice for players. The path of the ball from the player to the hoop is projectile motion. In undergraduate introductory physics courses, students must be taught about projectile motion, and basketball shooting can be used as a case study for learning projectile motion from a real-life situation. In this research, we analytically discuss the relationship between the optimal angle, the minimum initial velocity and the release height of the ball for the basketball shooting problem. We found that both the optimal angle and the minimum initial velocity decrease as the release height of the ball increases.
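    The underlying textbook result: to hit a target at horizontal distance L and height h above the release point, the minimum launch speed satisfies v_min² = g(h + √(L² + h²)) at launch angle θ = arctan((h + √(L² + h²))/L). The short sketch below evaluates this for a few release heights; the free-throw distance and rim height used are illustrative.

    ```python
    # Minimum launch speed and corresponding angle to reach a target at
    # horizontal distance L and height h above the release point.
    import math

    g, L, rim = 9.81, 4.2, 3.05           # m/s^2, horizontal distance (m), rim height (m)

    for release in (1.8, 2.0, 2.2, 2.4):  # release height of the ball (m)
        h = rim - release                 # target height above the release point
        r = math.hypot(L, h)
        v_min = math.sqrt(g * (h + r))
        theta = math.degrees(math.atan2(h + r, L))
        print(f"release {release:.1f} m: v_min = {v_min:.2f} m/s, angle = {theta:.1f} deg")
    ```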

  20. Novel and versatile solid-state chemiluminescence sensor based on TiO2-Ru(bpy)3²⁺ nanoparticles for pharmaceutical drugs detection

    NASA Astrophysics Data System (ADS)

    Al-Hetlani, Entesar; Amin, Mohamed O.; Madkour, Metwally

    2018-02-01

    This work describes a novel and versatile solid-state chemiluminescence sensor for analyte detection using TiO2-Ru(bpy)3²⁺-Ce(IV). Herein, we report the synthesis, characterization, optimization and application of a new type of hybrid nanoparticles (NPs). Mesoporous TiO2-Ru(bpy)3²⁺ NPs were prepared using a modified sol-gel method by incorporating Ru(bpy)3²⁺ into the initial reaction mixture at various concentrations. The resultant bright orange precipitate was characterized via transmission electron microscopy, N2 sorpometry, inductively coupled plasma-optical emission spectrometer (ICP-OES), Raman and UV-Vis spectroscopy techniques. The concentration of the Ru(bpy)3²⁺ complex in the NPs was quantified using ICP-OES, and its chemiluminescence (CL) response was measured and compared with the same concentration in the liquid phase using oxalate as a model analyte. The results showed that this type of hybrid material exhibited a higher CL signal compared with the liquid phase due to the enlarged surface area of the hybrid NPs (149.6 m²/g). The amount of TiO2-Ru(bpy)3²⁺ NPs and the effect of the analyte flow rate were also investigated to optimize the CL signal. The optimized system was further used to detect oxalate and two pharmaceutical drugs, namely, imipramine and promazine. The linear range for both drugs was 1-100 pm with limits of detection (LOD) of 0.1 and 0.5 pm, respectively. This approach is considered to be simple, low cost and facile and can be applied to a wide range of analytes.

  1. Simultaneous determination of macronutrients, micronutrients and trace elements in mineral fertilizers by inductively coupled plasma optical emission spectrometry

    NASA Astrophysics Data System (ADS)

    de Oliveira Souza, Sidnei; da Costa, Silvânio Silvério Lopes; Santos, Dayane Melo; dos Santos Pinto, Jéssica; Garcia, Carlos Alexandre Borges; Alves, José do Patrocínio Hora; Araujo, Rennan Geovanny Oliveira

    2014-06-01

    An analytical method for simultaneous determination of macronutrients (Ca, Mg, Na and P), micronutrients (Cu, Fe, Mn and Zn) and trace elements (Al, As, Cd, Pb and V) in mineral fertilizers was optimized. Two-level full factorial design was applied to evaluate the optimal proportions of reagents used in the sample digestion on hot plate. A Doehlert design for two variables was used to evaluate the operating conditions of the inductively coupled plasma optical emission spectrometer in order to accomplish the simultaneous determination of the analyte concentrations. The limits of quantification (LOQs) ranged from 2.0 mg kg⁻¹ for Mn to 77.3 mg kg⁻¹ for P. The accuracy and precision of the proposed method were evaluated by analysis of standard reference materials (SRMs) of Western phosphate rock (NIST 694), Florida phosphate rock (NIST 120C) and Trace elements in multi-nutrient fertilizer (NIST 695), considered to be adequate for simultaneous determination. Twenty-one samples of mineral fertilizers collected in Sergipe State, Brazil, were analyzed. For all samples, the As, Ca, Cd and Pb concentrations were below the LOQ values of the analytical method. For As, Cd and Pb the obtained LOQ values were below the maximum limit allowed by the Brazilian Ministry of Agriculture, Livestock and Food Supply (Ministério da Agricultura, Pecuária e Abastecimento - MAPA). The optimized method presented good accuracy and was effectively applied to quantitative simultaneous determination of the analytes in mineral fertilizers by inductively coupled plasma optical emission spectrometry (ICP OES).

  2. A slewing control experiment for flexible structures

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Horta, L. G.; Robertshaw, H. H.

    1985-01-01

    A hardware set-up has been developed to study slewing control for flexible structures including a steel beam and a solar panel. The linear optimal terminal control law is used to design active controllers which are implemented in an analog computer. The objective of this experiment is to demonstrate and verify the dynamics and optimal terminal control laws as applied to flexible structures for large angle maneuver. Actuation is provided by an electric motor while sensing is given by strain gages and angle potentiometer. Experimental measurements are compared with analytical predictions in terms of modal parameters of the system stability matrix and sufficient agreement is achieved to validate the theory.

  3. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics.

    PubMed

    Shinzato, Takashi

    2016-12-01

    The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.
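    A simplified numerical illustration of the two quantities defined above, for the diagonal-covariance case with only a budget constraint (the weights sum to N), where the optimizer can be written down directly: weights scale as the inverse variances. This is not the paper's replica calculation over finite samples of return rates; it is just the deterministic limit, used here to make the definitions concrete.

    ```python
    # Minimal investment risk and investment concentration for the diagonal,
    # budget-constrained case: minimize sum(var_i * w_i^2) subject to sum(w) = N.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1000
    var = rng.uniform(0.5, 2.0, size=N)          # nonidentical asset return variances

    w = N * (1.0 / var) / np.sum(1.0 / var)      # optimal weights: proportional to 1/variance
    min_risk = 0.5 * np.sum(var * w**2) / N      # minimal investment risk per asset
    concentration = np.sum(w**2) / N             # investment concentration q_w

    print(min_risk, concentration)
    ```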

  4. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-12-01

    The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.

  5. Big Data Analytics with Datalog Queries on Spark.

    PubMed

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
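    The canonical recursive query that Datalog-on-Spark systems of this kind target is transitive closure over an edge relation; the sketch below evaluates it with plain semi-naive iteration in Python rather than on Spark, simply to make the recursion being optimized concrete.

    ```python
    # Transitive closure in Datalog:
    #   tc(X,Y) <- edge(X,Y).
    #   tc(X,Y) <- tc(X,Z), edge(Z,Y).
    # evaluated here with semi-naive iteration over an in-memory edge set.
    edges = {(1, 2), (2, 3), (3, 4), (2, 5)}

    tc = set(edges)          # facts from the first rule
    delta = set(edges)       # facts newly derived in the last round
    while delta:
        new = {(x, w) for (x, z) in delta for (zz, w) in edges if z == zz} - tc
        tc |= new
        delta = new          # semi-naive: only join the newest facts next round

    print(sorted(tc))
    ```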

  6. Big Data Analytics with Datalog Queries on Spark

    PubMed Central

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2017-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296

  7. Gas chromatography fractionation platform featuring parallel flame-ionization detection and continuous high-resolution analyte collection in 384-well plates.

    PubMed

    Jonker, Willem; Clarijs, Bas; de Witte, Susannah L; van Velzen, Martin; de Koning, Sjaak; Schaap, Jaap; Somsen, Govert W; Kool, Jeroen

    2016-09-02

    Gas chromatography (GC) is a superior separation technique for many compounds. However, fractionation of a GC eluate for analyte isolation and/or post-column off-line analysis is not straightforward, and existing platforms are limited in the number of fractions that can be collected. Moreover, aerosol formation may cause serious analyte losses. Previously, our group developed a platform that resolved these limitations of GC fractionation by post-column infusion of a trap solvent prior to continuous small-volume fraction collection in a 96-well plate (Pieke et al., 2013 [17]). Still, this GC fractionation set-up lacked a chemical detector for the on-line recording of chromatograms, and the introduction of trap solvent resulted in extensive peak broadening for late-eluting compounds. This paper reports advancements to the fractionation platform allowing flame ionization detection (FID) parallel to high-resolution collection of full GC chromatograms in up to 384 nanofractions of 7 s each. To this end, a post-column split was incorporated which directs part of the eluate towards FID. Furthermore, a solvent heating device was developed for stable delivery of preheated/vaporized trap solvent, which significantly reduced the band broadening caused by post-column infusion. In order to achieve optimal analyte trapping, several solvents were tested at different flow rates. The repeatability of the optimized GC fraction collection process was assessed, demonstrating the possibility of up-concentration of isolated analytes by repetitive analyses of the same sample. The feasibility of the improved GC fractionation platform for bioactivity screening of toxic compounds was studied by the analysis of a mixture of test pesticides, which after fractionation were subjected to a post-column acetylcholinesterase (AChE) assay. Fractions showing AChE inhibition could be unambiguously correlated with peaks from the parallel-recorded FID chromatogram. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    PubMed

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.

  9. On optimization of energy harvesting from base-excited vibration

    NASA Astrophysics Data System (ADS)

    Tai, Wei-Che; Zuo, Lei

    2017-12-01

    This paper re-examines and clarifies the long-believed optimization conditions of electromagnetic and piezoelectric energy harvesting from base-excited vibration. In terms of electromagnetic energy harvesting, it is typically believed that the maximum power is achieved when the excitation frequency and electrical damping equal the natural frequency and mechanical damping of the mechanical system respectively. We will show that this optimization condition is only valid when the acceleration amplitude of base excitation is constant and an approximation for small mechanical damping when the excitation displacement amplitude is constant. To this end, a two-variable optimization analysis, involving the normalized excitation frequency and electrical damping ratio, is performed to derive the exact optimization condition of each case. When the excitation displacement amplitude is constant, we analytically show that, in contrast to the long-believed optimization condition, the optimal excitation frequency and electrical damping are always larger than the natural frequency and mechanical damping ratio respectively. In particular, when the mechanical damping ratio exceeds a critical value, the optimization condition is no longer valid. Instead, the average power generally increases as the excitation frequency and electrical damping ratio increase. Furthermore, the optimization analysis is extended to consider parasitic electrical losses, which also shows different results when compared with existing literature. When the excitation acceleration amplitude is constant, on the other hand, the exact optimization condition is identical to the long-believed one. In terms of piezoelectric energy harvesting, it is commonly believed that the optimal power efficiency is achieved when the excitation and the short or open circuit frequency of the harvester are equal. Via a similar two-variable optimization analysis, we analytically show that the optimal excitation frequency depends on the mechanical damping ratio and does not equal the short or open circuit frequency. Finally, the optimal excitation frequencies and resistive loads are derived in closed-form.

  10. Practical Approaches to Forced Degradation Studies of Vaccines.

    PubMed

    Hasija, Manvi; Aboutorabian, Sepideh; Rahman, Nausheen; Ausar, Salvador F

    2016-01-01

    During the early stages of vaccine development, forced degradation studies are conducted to provide information about the degradation properties of vaccine formulations. In addition to supporting the development of analytical methods for the detection of degradation products, these stress studies are used to identify optimal long-term storage conditions and are part of the regulatory requirements for the submission of stability data. In this chapter, we provide detailed methods for forced degradation analysis under thermal, light, and mechanical stress conditions.

  11. Analytical transmission cross-coefficients for pink beam X-ray microscopy based on compound refractive lenses.

    PubMed

    Falch, Ken Vidar; Detlefs, Carsten; Snigirev, Anatoly; Mathiesen, Ragnvald H

    2018-01-01

    Analytical expressions for the transmission cross-coefficients for x-ray microscopes based on compound refractive lenses are derived based on Gaussian approximations of the source shape and energy spectrum. The effects of partial coherence, defocus, beam convergence, as well as lateral and longitudinal chromatic aberrations are accounted for and discussed. Taking the incoherent limit of the transmission cross-coefficients, a compact analytical expression for the modulation transfer function of the system is obtained, and the resulting point, line and edge spread functions are presented. Finally, analytical expressions for optimal numerical aperture, coherence ratio, and bandwidth are given. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The impact of chief executive officer optimism on hospital strategic decision making.

    PubMed

    Langabeer, James R; Yao, Emery

    2012-01-01

    Previous strategic decision making research has focused mostly on the analytical positioning approach, which broadly emphasizes an alignment between rationality and the external environment. In this study, we propose that hospital chief executive optimism (or the general tendency to expect positive future outcomes) will moderate the relationship between comprehensively rational decision-making process and organizational performance. The purpose of this study was to explore the impact that dispositional optimism has on the well-established relationship between rational decision-making processes and organizational performance. Specifically, we hypothesized that optimism will moderate the relationship between the level of rationality and the organization's performance. We further suggest that this relationship will be more negative for those with high, as opposed to low, optimism. We surveyed 168 hospital CEOs and used moderated hierarchical regression methods to statistically test our hypothesis. On the basis of a survey study of 168 hospital CEOs, we found evidence of a complex interplay of optimism in the rationality-organizational performance relationship. More specifically, we found that the two-way interactions between optimism and rational decision making were negatively associated with performance and that where optimism was the highest, the rationality-performance relationship was the most negative. Executive optimism was positively associated with organizational performance. We also found that greater perceived environmental turbulence, when interacting with optimism, did not have a significant interaction effect on the rationality-performance relationship. These findings suggest potential for broader participation in strategic processes and the use of organizational development techniques that assess executive disposition and traits for recruitment processes, because CEO optimism influences hospital-level processes. Research implications include incorporating greater use of behavior and cognition constructs to better depict decision-making processes in complex organizations like hospitals.

  13. Analysis and control of high-speed wheeled vehicles

    NASA Astrophysics Data System (ADS)

    Velenis, Efstathios

    In this work we reproduce driving techniques to mimic expert race drivers and obtain the open-loop control signals that may be used by auto-pilot agents driving autonomous ground wheeled vehicles. Race drivers operate their vehicles at the limits of the acceleration envelope. An accurate characterization of the acceleration capacity of the vehicle is required. Understanding and reproduction of such complex maneuvers also require a physics-based mathematical description of the vehicle dynamics. While most of the modeling issues of ground-vehicles/automobiles are already well established in the literature, lack of understanding of the physics associated with friction generation results in ad-hoc approaches to tire friction modeling. In this work we revisit this aspect of the overall vehicle modeling and develop a tire friction model that provides physical interpretation of the tire forces. The new model is free of those singularities at low vehicle speed and wheel angular rate that are inherent in the widely used empirical static models. In addition, the dynamic nature of the tire model proposed herein allows the study of dynamic effects such as transients and hysteresis. The trajectory-planning problem for an autonomous ground wheeled vehicle is formulated in an optimal control framework aiming to minimize the time of travel and maximize the use of the available acceleration capacity. The first approach to solve the optimal control problem is using numerical techniques. Numerical optimization allows incorporation of a vehicle model of high fidelity and generates realistic solutions. Such an optimization scheme provides an ideal platform to study the limit operation of the vehicle, which would not be possible via straightforward simulation. In this work we emphasize the importance of online applicability of the proposed methodologies. This underlines the need for optimal solutions that require little computational cost and are able to incorporate real, unpredictable environments. A semi-analytic methodology is developed to generate the optimal velocity profile for minimum time travel along a prescribed path. The semi-analytic nature ensures minimal computational cost while a receding horizon implementation allows application of the methodology in uncertain environments. Extensions to increase fidelity of the vehicle model are finally provided.

  14. FRET-based quantum dot immunoassay for rapid and sensitive detection of Aspergillus amstelodami.

    PubMed

    Kattke, Michele D; Gao, Elizabeth J; Sapsford, Kim E; Stephenson, Larry D; Kumar, Ashok

    2011-01-01

    In this study, a fluorescence resonance energy transfer (FRET)-based quantum dot (QD) immunoassay for detection and identification of Aspergillus amstelodami was developed. Biosensors were formed by conjugating QDs to IgG antibodies and incubating with quencher-labeled analytes; QD energy was transferred to the quencher species through FRET, resulting in diminished fluorescence from the QD donor. During a detection event, quencher-labeled analytes are displaced by higher affinity target analytes, creating a detectable fluorescence signal increase from the QD donor. Conjugation and the resulting antibody:QD ratios were characterized with UV-Vis spectroscopy and QuantiT protein assay. The sensitivity of initial fluorescence experiments was compromised by inherent autofluorescence of mold spores, which produced low signal-to-noise and inconsistent readings. Therefore, excitation wavelength, QD, and quencher were adjusted to provide optimal signal-to-noise over spore background. Affinities of anti-Aspergillus antibody for different mold species were estimated with sandwich immunoassays, which identified A. fumigatus and A. amstelodami for use as quencher-labeled- and target-analytes, respectively. The optimized displacement immunoassay detected A. amstelodami concentrations as low as 10³ spores/mL in five minutes or less. Additionally, baseline fluorescence was produced in the presence of 10⁵ CFU/mL heat-killed E. coli O157:H7, demonstrating high specificity. This sensing modality may be useful for identification and detection of other biological threat agents, pending identification of suitable antibodies. Overall, these FRET-based QD-antibody biosensors represent a significant advancement in detection capabilities, offering sensitive and reliable detection of targets with applications in areas from biological terrorism defense to clinical analysis.

  15. Simultaneous determination of 11 antibiotics and their main metabolites from four different groups by reversed-phase high-performance liquid chromatography-diode array-fluorescence (HPLC-DAD-FLD) in human urine samples.

    PubMed

    Fernandez-Torres, R; Consentino, M Olías; Lopez, M A Bello; Mochon, M Callejon

    2010-05-15

    A new, accurate and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) analytical method for the quantitative determination of 11 antibiotics (drugs) and the main metabolites of five of them present in human urine has been worked out, optimized and validated. The analytes belong to four different groups of antibiotics (sulfonamides, tetracyclines, penicillins and amphenicols). The analyzed compounds were sulfadiazine (SDI) and its N(4)-acetylsulfadiazine (NDI) metabolite, sulfamethazine (SMZ) and its N(4)-acetylsulfamethazine (NMZ), sulfamerazine (SMR) and its N(4)-acetylsulfamerazine (NMR), sulfamethoxazole (SMX), trimethoprim (TMP), amoxicillin (AMX) and its main metabolite amoxicilloic acid (AMA), ampicillin (AMP) and its main metabolite ampicilloic acid (APA), chloramphenicol (CLF), thiamphenicol (TIF), oxytetracycline (OXT) and chlortetracycline (CLT). For HPLC analysis, diode array (DAD) and fluorescence (FLD) detectors were used. The separation of the analyzed compounds was conducted on a Phenomenex Gemini C18 (150 mm × 4.6 mm I.D., particle size 5 μm) analytical column with a LiChroCART LiChrospher C18 (4 mm × 4 mm, particle size 5 μm) guard column. The analyzed drugs were determined within 34 min using 0.1% formic acid in water and acetonitrile in gradient elution mode as the mobile phase. A linear response was observed for all compounds in the concentration range studied. Two procedures were optimized for sample preparation: a direct treatment with methanol and acetonitrile, and a solid phase extraction procedure using Bond Elut Plexa columns. The method was applied to the determination of the analytes in human urine from volunteers under treatment with different pharmaceutical formulations. This method can be successfully applied to routine determination of all these drugs in human urine samples.

  16. FRET-Based Quantum Dot Immunoassay for Rapid and Sensitive Detection of Aspergillus amstelodami

    PubMed Central

    Kattke, Michele D.; Gao, Elizabeth J.; Sapsford, Kim E.; Stephenson, Larry D.; Kumar, Ashok

    2011-01-01

    In this study, a fluorescence resonance energy transfer (FRET)-based quantum dot (QD) immunoassay for detection and identification of Aspergillus amstelodami was developed. Biosensors were formed by conjugating QDs to IgG antibodies and incubating with quencher-labeled analytes; QD energy was transferred to the quencher species through FRET, resulting in diminished fluorescence from the QD donor. During a detection event, quencher-labeled analytes are displaced by higher affinity target analytes, creating a detectable fluorescence signal increase from the QD donor. Conjugation and the resulting antibody:QD ratios were characterized with UV-Vis spectroscopy and QuantiT protein assay. The sensitivity of initial fluorescence experiments was compromised by inherent autofluorescence of mold spores, which produced low signal-to-noise and inconsistent readings. Therefore, excitation wavelength, QD, and quencher were adjusted to provide optimal signal-to-noise over spore background. Affinities of anti-Aspergillus antibody for different mold species were estimated with sandwich immunoassays, which identified A. fumigatus and A. amstelodami for use as quencher-labeled- and target-analytes, respectively. The optimized displacement immunoassay detected A. amstelodami concentrations as low as 10³ spores/mL in five minutes or less. Additionally, baseline fluorescence was produced in the presence of 10⁵ CFU/mL heat-killed E. coli O157:H7, demonstrating high specificity. This sensing modality may be useful for identification and detection of other biological threat agents, pending identification of suitable antibodies. Overall, these FRET-based QD-antibody biosensors represent a significant advancement in detection capabilities, offering sensitive and reliable detection of targets with applications in areas from biological terrorism defense to clinical analysis. PMID:22163961

  17. Analytic energy gradients for the orbital-optimized third-order Møller-Plesset perturbation theory

    NASA Astrophysics Data System (ADS)

    Bozkaya, Uǧur

    2013-09-01

    Analytic energy gradients for the orbital-optimized third-order Møller-Plesset perturbation theory (OMP3) [U. Bozkaya, J. Chem. Phys. 135, 224103 (2011); doi:10.1063/1.3665134] are presented. The OMP3 method is applied to problematic chemical systems with challenging electronic structures. The performance of the OMP3 method is compared with those of canonical second-order Møller-Plesset perturbation theory (MP2), third-order Møller-Plesset perturbation theory (MP3), coupled-cluster singles and doubles (CCSD), and coupled-cluster singles and doubles with perturbative triples [CCSD(T)] for investigating equilibrium geometries, vibrational frequencies, and open-shell reaction energies. For bond lengths, the performance of OMP3 is in between those of MP3 and CCSD. For harmonic vibrational frequencies, the OMP3 method significantly eliminates the singularities arising from the abnormal response contributions observed for MP3 in case of symmetry-breaking problems, and provides noticeably improved vibrational frequencies for open-shell molecules. For open-shell reaction energies, OMP3 exhibits a better performance than MP3 and CCSD as in case of barrier heights and radical stabilization energies. As discussed in previous studies, the OMP3 method is several times faster than CCSD in energy computations. Further, in analytic gradient computations for the CCSD method one needs to solve λ-amplitude equations, however for OMP3 one does not since λ_{ab}^{ij(1)} = t_{ij}^{ab(1)} and λ_{ab}^{ij(2)} = t_{ij}^{ab(2)}. Additionally, one needs to solve orbital Z-vector equations for CCSD, but for OMP3 orbital response contributions are zero owing to the stationary property of OMP3. Overall, for analytic gradient computations the OMP3 method is several times less expensive than CCSD (roughly ~4-6 times). Considering the balance of computational cost and accuracy we conclude that the OMP3 method emerges as a very useful tool for the study of electronically challenging chemical systems.

  18. Analytic energy gradients for the orbital-optimized third-order Møller-Plesset perturbation theory.

    PubMed

    Bozkaya, Uğur

    2013-09-14

    Analytic energy gradients for the orbital-optimized third-order Møller-Plesset perturbation theory (OMP3) [U. Bozkaya, J. Chem. Phys. 135, 224103 (2011)] are presented. The OMP3 method is applied to problematic chemical systems with challenging electronic structures. The performance of the OMP3 method is compared with those of canonical second-order Møller-Plesset perturbation theory (MP2), third-order Møller-Plesset perturbation theory (MP3), coupled-cluster singles and doubles (CCSD), and coupled-cluster singles and doubles with perturbative triples [CCSD(T)] for investigating equilibrium geometries, vibrational frequencies, and open-shell reaction energies. For bond lengths, the performance of OMP3 is in between those of MP3 and CCSD. For harmonic vibrational frequencies, the OMP3 method significantly eliminates the singularities arising from the abnormal response contributions observed for MP3 in case of symmetry-breaking problems, and provides noticeably improved vibrational frequencies for open-shell molecules. For open-shell reaction energies, OMP3 exhibits a better performance than MP3 and CCSD as in case of barrier heights and radical stabilization energies. As discussed in previous studies, the OMP3 method is several times faster than CCSD in energy computations. Further, in analytic gradient computations for the CCSD method one needs to solve λ-amplitude equations, however for OMP3 one does not since λ_{ab}^{ij(1)} = t_{ij}^{ab(1)} and λ_{ab}^{ij(2)} = t_{ij}^{ab(2)}. Additionally, one needs to solve orbital Z-vector equations for CCSD, but for OMP3 orbital response contributions are zero owing to the stationary property of OMP3. Overall, for analytic gradient computations the OMP3 method is several times less expensive than CCSD (roughly ~4-6 times). Considering the balance of computational cost and accuracy we conclude that the OMP3 method emerges as a very useful tool for the study of electronically challenging chemical systems.

  19. Artificial Intelligence Methods in Pursuit Evasion Differential Games

    DTIC Science & Technology

    1990-07-30

    objectives, sometimes with fuzzy ones. Classical optimization, control or game theoretic methods are insufficient for their resolution. [...] the Analytical Hierarchy Process originated by T.L. Saaty of the Wharton School. The Analytic Hierarchy Process (AHP) is a general theory of

  20. Integrated Analytical Evaluation and Optimization of Model Parameters against Preprocessed Measurement Data

    DTIC Science & Technology

    1989-06-23

    The most recent changes are: a) development of the VSTS (velocity space topology search) algorithm for calculating particle densities, b) extension [...] with simple analytic models. The largest modification of the MACH code was the implementation of the VSTS procedure, which constituted a complete

  1. [Basic research on digital logistic management of hospital].

    PubMed

    Cao, Hui

    2010-05-01

    This paper analyzes and explores how the equipment department, the general services department, the supply room, and other material-flow departments in different hospitals can realize digital, information-based management, in order to optimize the procedures of information-based asset management. Various analytical methods for medical-supply business models are presented, providing analytical data to support correct decisions by hospital departments, hospital leadership, and the governing authorities.

  2. How Much Can We Learn from a Single Chromatographic Experiment? A Bayesian Perspective.

    PubMed

    Wiczling, Paweł; Kaliszan, Roman

    2016-01-05

    In this work, we proposed and investigated a Bayesian inference procedure to find the desired chromatographic conditions based on known analyte properties (lipophilicity, pKa, and polar surface area) using one preliminary experiment. A previously developed nonlinear mixed-effect model was used to specify the prior information about a new analyte with known physicochemical properties. Further, the prior (no preliminary data) and posterior predictive distributions (prior + one experiment) were determined sequentially to search for the desired separation. The following isocratic high-performance reversed-phase liquid chromatographic conditions were sought: (1) retention time of a single analyte within the range of 4-6 min and (2) baseline separation of two analytes with retention times within the range of 4-10 min. The empirical posterior Bayesian distribution of parameters was estimated using the "slice sampling" Markov chain Monte Carlo (MCMC) algorithm implemented in Matlab. Simulations with artificial analytes and experimental data for ketoprofen and papaverine were used to test the proposed methodology. The simulation experiments showed that, for one and for two randomly selected analytes, there is a 97% and a 74% probability, respectively, of obtaining a successful chromatogram using no or one preliminary experiment. The desired separation for ketoprofen and papaverine was established based on a single experiment. It was confirmed that the search for a desired separation rarely requires a large number of chromatographic analyses, at least for a simple optimization problem. The proposed Bayesian-based optimization scheme is a powerful method of finding a desired chromatographic separation based on a small number of preliminary experiments.
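    The sequential update described above (population prior, one measurement, posterior predictive search) can be sketched with a toy retention model. The code below is a minimal illustration, assuming a hypothetical linear-solvent-strength retention model, Gaussian priors, and a random-walk Metropolis sampler in place of the slice sampler used by the authors; all parameter values and the 4-6 min target window are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        t0 = 1.0  # column dead time (min); all numbers here are illustrative assumptions

        def retention_time(theta, phi):
            # Hypothetical linear-solvent-strength model: log10(k) = logkw - S*phi, tR = t0*(1 + k)
            logkw, S = theta
            return t0 * (1.0 + 10.0 ** (logkw - S * phi))

        prior_mean = np.array([2.0, 3.0])   # population prior for (logkw, S) of the new analyte
        prior_sd = np.array([0.5, 0.8])
        sigma_obs = 0.1                     # assumed measurement noise on tR (min)
        phi_obs, tR_obs = 0.5, 4.8          # the single preliminary experiment

        def log_post(theta):
            lp = -0.5 * np.sum(((theta - prior_mean) / prior_sd) ** 2)            # Gaussian prior
            ll = -0.5 * ((retention_time(theta, phi_obs) - tR_obs) / sigma_obs) ** 2
            return lp + ll

        # Random-walk Metropolis as a stand-in for the slice sampler used in the paper.
        theta, samples = prior_mean.copy(), []
        for _ in range(20000):
            prop = theta + rng.normal(scale=0.05, size=2)
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop
            samples.append(theta.copy())
        samples = np.array(samples[5000:])

        # Posterior predictive probability that a candidate organic fraction phi
        # puts the retention time in the desired 4-6 min window.
        for phi in np.linspace(0.3, 0.8, 26):
            tR = retention_time(samples.T, phi)
            p = np.mean((tR >= 4.0) & (tR <= 6.0))
            if p > 0.9:
                print(f"phi = {phi:.2f}: P(4 min <= tR <= 6 min) = {p:.2f}")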

  3. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
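    The validation step in this abstract (predict the perturbed optimally controlled response from sensitivities, then recompute the optimal control law and compare) can be illustrated on a scalar LQR problem. The sketch below is a minimal illustration assuming an arbitrary first-order plant and unit cost weights; it uses a central finite-difference sensitivity in place of the paper's analytical LQG sensitivity equations.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def closed_loop_pole(a, b, q, r):
            """Optimal LQR feedback for dx/dt = a*x + b*u with cost integral(q*x^2 + r*u^2):
            solve the Riccati equation, form K = R^-1 B^T P, and return the closed-loop pole."""
            A, B = np.array([[a]]), np.array([[b]])
            Q, R = np.array([[q]]), np.array([[r]])
            P = solve_continuous_are(A, B, Q, R)
            K = np.linalg.solve(R, B.T @ P).item()
            return a - b * K

        a0, b0, q0, r0 = -1.0, 1.0, 1.0, 1.0   # illustrative plant and weights

        # Sensitivity of the optimally controlled pole to the plant parameter 'a'
        # (central finite differences standing in for the analytical sensitivity equations).
        da = 1e-5
        sens = (closed_loop_pole(a0 + da, b0, q0, r0)
                - closed_loop_pole(a0 - da, b0, q0, r0)) / (2 * da)

        # First-order prediction at a perturbed parameter value vs. recomputing the control law.
        a_new = a0 + 0.2
        predicted = closed_loop_pole(a0, b0, q0, r0) + sens * (a_new - a0)
        recomputed = closed_loop_pole(a_new, b0, q0, r0)
        print(f"predicted pole: {predicted:.4f}   recomputed pole: {recomputed:.4f}")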

  4. Tables Of Gaussian-Type Orbital Basis Functions

    NASA Technical Reports Server (NTRS)

    Partridge, Harry

    1992-01-01

    This NASA technical memorandum contains tables of estimated Hartree-Fock wave functions for the atoms lithium through neon and potassium through krypton. The sets contain optimized Gaussian-type orbital exponents and coefficients and are of near Hartree-Fock quality. The orbital exponents were optimized by minimizing the restricted Hartree-Fock energy via a scaled Newton-Raphson scheme in which the Hessian is evaluated numerically by use of analytically determined gradients.
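    A minimal sketch of the optimization scheme named here, a scaled (damped) Newton-Raphson step with the Hessian assembled numerically from analytically supplied gradients, applied to a toy two-variable objective standing in for the restricted Hartree-Fock energy; the objective, damping rule, and tolerances are assumptions for illustration.

        import numpy as np

        # Toy objective standing in for the restricted Hartree-Fock energy as a function of
        # (log) orbital exponents; its gradient is available analytically.
        def f(x):
            return (x[0] - 1.0) ** 4 + (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + x[0] * x[1]

        def grad(x):
            return np.array([4.0 * (x[0] - 1.0) ** 3 + 2.0 * (x[0] - 1.0) + x[1],
                             4.0 * (x[1] + 0.5) + x[0]])

        def numerical_hessian(x, h=1e-5):
            """Hessian built by central differences of the analytic gradient."""
            n = len(x)
            H = np.zeros((n, n))
            for j in range(n):
                e = np.zeros(n)
                e[j] = h
                H[:, j] = (grad(x + e) - grad(x - e)) / (2.0 * h)
            return 0.5 * (H + H.T)          # symmetrize

        x = np.array([0.0, 0.0])
        for _ in range(50):
            g = grad(x)
            if np.linalg.norm(g) < 1e-10:
                break
            step = np.linalg.solve(numerical_hessian(x), -g)
            alpha = 1.0                      # scaled (damped) Newton step
            while f(x + alpha * step) > f(x) and alpha > 1e-4:
                alpha *= 0.5
            x = x + alpha * step

        print("optimized parameters:", np.round(x, 6), " |gradient| =", np.linalg.norm(grad(x)))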

  5. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    NASA Astrophysics Data System (ADS)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelfbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  6. Optimization of chiral lattice based metastructures for broadband vibration suppression using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Abdeljaber, Osama; Avci, Onur; Inman, Daniel J.

    2016-05-01

    One of the major challenges in civil, mechanical, and aerospace engineering is to develop vibration suppression systems with high efficiency and low cost. Recent studies have shown that high damping performance at broadband frequencies can be achieved by incorporating periodic inserts with tunable dynamic properties as internal resonators in structural systems. Structures featuring these kinds of inserts are referred to as metamaterials inspired structures or metastructures. Chiral lattice inserts exhibit unique characteristics such as frequency bandgaps which can be tuned by varying the parameters that define the lattice topology. Recent analytical and experimental investigations have shown that broadband vibration attenuation can be achieved by including chiral lattices as internal resonators in beam-like structures. However, these studies have suggested that the performance of chiral lattice inserts can be maximized by utilizing an efficient optimization technique to obtain the optimal topology of the inserted lattice. In this study, an automated optimization procedure based on a genetic algorithm is applied to obtain the optimal set of parameters that will result in chiral lattice inserts tuned properly to reduce the global vibration levels of a finite-sized beam. Genetic algorithms are considered in this study due to their capability of dealing with complex and insufficiently understood optimization problems. In the optimization process, the basic parameters that govern the geometry of periodic chiral lattices including the number of circular nodes, the thickness of the ligaments, and the characteristic angle are considered. Additionally, a new set of parameters is introduced to enable the optimization process to explore non-periodic chiral designs. Numerical simulations are carried out to demonstrate the efficiency of the optimization process.
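    The genetic-algorithm loop described above can be sketched compactly. The code below is a minimal illustration that evolves a generic three-parameter vector (standing in for the number of circular nodes, ligament thickness, and characteristic angle) against a placeholder objective; the real objective would be a structural-dynamics evaluation of the beam with the chiral insert, and all bounds and GA settings here are illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Bounds standing in for (number of circular nodes, ligament thickness, characteristic angle).
        lower = np.array([3.0, 0.1, 5.0])
        upper = np.array([12.0, 2.0, 60.0])

        def vibration_level(p):
            """Placeholder objective; in the real problem this would be a structural-dynamics
            evaluation of the beam's global vibration level with the chiral lattice insert."""
            return np.sum(((p - np.array([7.0, 0.8, 30.0])) / (upper - lower)) ** 2)

        pop_size, n_gen, mut_sigma = 40, 100, 0.1
        pop = lower + rng.uniform(size=(pop_size, lower.size)) * (upper - lower)

        for _ in range(n_gen):
            fitness = np.array([vibration_level(p) for p in pop])
            parents = pop[np.argsort(fitness)[: pop_size // 2]]   # lower level = fitter (elitism)
            children = []
            while len(children) < pop_size - len(parents):
                i, j = rng.integers(len(parents), size=2)
                wgt = rng.uniform(size=lower.size)                # arithmetic crossover
                child = wgt * parents[i] + (1 - wgt) * parents[j]
                child += rng.normal(scale=mut_sigma * (upper - lower))   # Gaussian mutation
                children.append(np.clip(child, lower, upper))
            pop = np.vstack([parents, children])

        best = pop[np.argmin([vibration_level(p) for p in pop])]
        print("best insert parameters (nodes, thickness, angle):", np.round(best, 3))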

  7. Soliton and kink jams in traffic flow with open boundaries.

    PubMed

    Muramatsu, M; Nagatani, T

    1999-07-01

    The soliton density wave is investigated numerically and analytically in the optimal velocity model (a car-following model) of a one-dimensional traffic flow with open boundaries. The soliton density wave is distinguished from the kink density wave. It is shown that the soliton density wave appears only at the threshold of occurrence of traffic jams. The Korteweg-de Vries (KdV) equation is derived from the optimal velocity model by the use of nonlinear analysis. It is found that the traffic soliton appears only near the neutral stability line. The soliton solution is analytically obtained from the perturbed KdV equation. It is shown that the soliton solution obtained from the nonlinear analysis is consistent with that of the numerical simulation.
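    A minimal numerical sketch of the optimal velocity (car-following) model that underlies the analysis, integrated here on a ring road rather than with open boundaries for simplicity; the Bando-type optimal velocity function and the parameter values are common textbook choices, not the paper's exact settings. With the sensitivity below the stability threshold, a small perturbation of the uniform flow grows into a density wave.

        import numpy as np

        # Optimal velocity (Bando-type) car-following model on a ring road:
        #   dv_i/dt = a * ( V(headway_i) - v_i ),  V(dx) = tanh(dx - hc) + tanh(hc)
        N, L = 100, 200.0          # cars and ring length; uniform headway = 2.0
        a, hc = 1.0, 2.0           # sensitivity and safety distance (a < 2 V'(hc): unstable flow)
        dt, steps = 0.05, 4000

        def V(dx):
            return np.tanh(dx - hc) + np.tanh(hc)

        x = np.linspace(0.0, L, N, endpoint=False)
        x[0] += 0.1                # small perturbation of the uniform-flow state
        v = V(L / N) * np.ones(N)

        for _ in range(steps):
            headway = np.roll(x, -1) - x
            headway[-1] += L       # the leader of the last car is the first car, one lap ahead
            dv = a * (V(headway) - v)
            x = x + v * dt
            v = v + dv * dt

        headway = np.roll(x, -1) - x
        headway[-1] += L
        print(f"headway range after t = {steps * dt:.0f}: "
              f"{headway.min():.2f} to {headway.max():.2f} (uniform flow would stay at 2.00)")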

  8. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    PubMed

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.

  9. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by the means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be analytically determined through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
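    The all-constants-free fit described above can be reproduced in outline with a general-purpose least-squares solver standing in for the commercial optimizer mentioned in the report. The sketch below fits a three-term Prony series, with the exponential time constants treated as free variables alongside the moduli, to synthetic relaxation data; the data and the number of terms are illustrative assumptions, not the TP-H1148 or Viton measurements.

        import numpy as np
        from scipy.optimize import least_squares

        # Prony series: G(t) = g_inf + sum_i g_i * exp(-t / tau_i)
        def prony(params, t):
            g_inf, g, tau = params[0], params[1::2], params[2::2]
            return g_inf + np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

        # Synthetic "experimental" relaxation data (illustrative, not the propellant/Viton data).
        rng = np.random.default_rng(0)
        t = np.logspace(-2, 3, 60)
        true_params = np.array([1.0, 3.0, 0.05, 2.0, 1.0, 1.5, 50.0])  # g_inf, (g_i, tau_i) pairs
        data = prony(true_params, t) * (1.0 + 0.01 * rng.normal(size=t.size))

        # Three terms, all constants free (including the time constants);
        # bounds keep moduli and time constants positive.
        x0 = np.array([1.0, 1.0, 0.1, 1.0, 1.0, 1.0, 10.0])
        fit = least_squares(lambda p: prony(p, t) - data, x0,
                            bounds=(np.full(x0.size, 1e-6), np.inf))
        print("fitted constants (g_inf, g_i, tau_i):", np.round(fit.x, 3))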

  11. Analysis of flavonoids from lotus (Nelumbo nucifera) leaves using high performance liquid chromatography/photodiode array detector tandem electrospray ionization mass spectrometry and an extraction method optimized by orthogonal design.

    PubMed

    Chen, Sha; Wu, Ben-Hong; Fang, Jin-Bao; Liu, Yan-Ling; Zhang, Hao-Hao; Fang, Lin-Chuan; Guan, Le; Li, Shao-Hua

    2012-03-02

    The extraction protocol of flavonoids from lotus (Nelumbo nucifera) leaves was optimized through an orthogonal design. Among solvent, solvent:tissue ratio, extraction time, and temperature, the solvent was the most important factor. The highest yield of flavonoids was achieved with 70% methanol-water and a solvent:tissue ratio of 30:1 at 4 °C for 36 h. The optimized analytical method for HPLC was a multi-step gradient elution using 0.5% formic acid (A) and CH₃CN containing 0.1% formic acid (B), at a flow rate of 0.6 mL/min. Using this optimized method, thirteen flavonoids were simultaneously separated and identified by high performance liquid chromatography coupled with photodiode array detection/electrospray ionization mass spectrometry (HPLC/DAD/ESI-MS(n)). Five of the bioactive compounds are reported in lotus leaves for the first time. The flavonoid content of the leaves of three representative cultivars was assessed under the optimized extraction and HPLC analytical conditions, and the seed-producing cultivar 'Baijianlian' had the highest flavonoid content compared with the rhizome-producing 'Zhimahuoulian' and the wild floral cultivar 'Honglian'. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Holistic irrigation water management approach based on stochastic soil water dynamics

    NASA Astrophysics Data System (ADS)

    Alizadeh, H.; Mousavi, S. J.

    2012-04-01

    Appreciating the essential gap between fundamental unsaturated-zone transport processes and soil and water management, due to the low effectiveness of some monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear non-convex program with an economic objective function to address water productivity and profitability aspects in irrigation management through optimizing the irrigation policy. Utilizing an optimization-simulation method, the model includes an integrated eco-hydrological simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. The interdependent hydrology of the unsaturated and saturated soil zones is treated in a semi-analytical approach in two steps. In the first step, analytical expressions are derived for the expected values of crop yield, total water requirement and soil water balance components, assuming a fixed level for the shallow water table, while a numerical Newton-Raphson procedure is employed in the second step to modify the value of the shallow water table level. A Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program. Benefiting from the semi-analytical framework of the simulation model, the optimization-simulation method, with significantly better computational performance than a numerical Monte-Carlo simulation-based technique, has led to an effective irrigation management tool that can contribute to bridging the gap between vadose zone theory and water management practice. In addition to precisely assessing the most influential processes at a growing-season time scale, one can use the developed model in large-scale systems such as irrigation districts and agricultural catchments. Accordingly, the model has been applied in the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in southwest Iran. The area suffers from water scarcity and therefore the trade-off between the level of deficit and economic profit should be assessed. Based on the results, while the maximum net benefit is obtained for the stress-avoidance (SA) irrigation policy, the highest water profitability, defined as the economic net benefit gained per unit volume of irrigation water applied, results when only about 60% of the water used in the SA policy is applied.
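    A minimal sketch of the particle swarm optimization loop that is coupled to the simulation model in this study; the two decision variables and the placeholder net-benefit surface are illustrative assumptions standing in for the eco-hydrological simulation, and the PSO coefficients are common textbook values.

        import numpy as np

        rng = np.random.default_rng(2)

        def neg_net_benefit(policy):
            """Placeholder for the eco-hydrological simulation: negative of an assumed net-benefit
            surface over two irrigation-policy variables (soil-moisture trigger, applied depth)."""
            trigger, depth = policy
            return -(10.0 - 40.0 * (trigger - 0.6) ** 2 - 0.002 * (depth - 50.0) ** 2)

        dim, n_particles, n_iter = 2, 30, 200
        lower, upper = np.array([0.2, 10.0]), np.array([0.9, 120.0])
        w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients

        x = lower + rng.uniform(size=(n_particles, dim)) * (upper - lower)
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([neg_net_benefit(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(n_iter):
            r1, r2 = rng.uniform(size=(2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lower, upper)
            val = np.array([neg_net_benefit(p) for p in x])
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("best policy (trigger, depth):", np.round(gbest, 3))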

  13. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

    Wind turbine wakes can significantly disrupt the performance of turbines located further downstream in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally tractable layout optimization strategy, can be an efficient resource for addressing the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning set-up is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian-profile velocity deficit [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Compared to the baseline gridded layout, results show a wind power output increase of between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
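    The wake model used as the cost-function kernel above has a compact closed form. The sketch below implements the commonly quoted form of the Gaussian-profile velocity deficit of Bastankhah and Porté-Agel (2014); the wake-growth rate and turbine parameters are illustrative assumptions, and the expressions should be checked against the cited paper before use.

        import numpy as np

        def gaussian_wake_deficit(x, r, d0, ct, kstar=0.035):
            """Normalized velocity deficit dU/U_inf at downwind distance x and radial distance r
            behind a turbine of rotor diameter d0 and thrust coefficient ct (Gaussian wake profile)."""
            beta = 0.5 * (1.0 + np.sqrt(1.0 - ct)) / np.sqrt(1.0 - ct)
            sigma = kstar * x + 0.2 * np.sqrt(beta) * d0                # wake width
            c = 1.0 - np.sqrt(1.0 - ct * d0 ** 2 / (8.0 * sigma ** 2))  # centreline deficit
            return c * np.exp(-r ** 2 / (2.0 * sigma ** 2))

        d0, ct = 80.0, 0.8                 # illustrative rotor diameter (m) and thrust coefficient
        for x_over_d in (4, 6, 8, 10):
            deficit = gaussian_wake_deficit(x_over_d * d0, 0.0, d0, ct)
            print(f"x/d = {x_over_d:2d}: centreline dU/U = {deficit:.3f}")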

  14. Approximate solutions of acoustic 3D integral equation and their application to seismic modeling and full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2017-10-01

    Over recent decades, a number of fast approximate solutions of the Lippmann-Schwinger equation, which are more accurate than the classic Born and Rytov approximations, were proposed in the field of electromagnetic modeling. Those developments could be naturally extended to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling for both lossy and lossless media. We evaluate the numerical merits of those methods and provide an estimate of their numerical complexity. In our numerical realization we use a matrix-free implementation of the corresponding integral operator. We study the accuracy of those approximate solutions and demonstrate that the quasi-analytical approximation is more accurate than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem. It is demonstrated that this approach improves the estimation of the data gradient compared to the Born approximation. The developed inversion algorithm is based on conjugate-gradient type optimization. A numerical model study demonstrates that the quasi-analytical solution significantly reduces the computation time of seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefields.

  15. Construction Method of Analytical Solutions to the Mathematical Physics Boundary Problems for Non-Canonical Domains

    NASA Astrophysics Data System (ADS)

    Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.

    2015-06-01

    The majority of practical acoustics problems require solving boundary problems in non-canonical domains. The construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is therefore both rewarding from the academic viewpoint and very instrumental for the elaboration of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution ideologies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems with domains that can be constructed as the union of canonically shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach implies some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.

  16. Generalized Subset Designs in Analytical Chemistry.

    PubMed

    Surowiec, Izabella; Vikström, Ludvig; Hector, Gustaf; Johansson, Erik; Vikström, Conny; Trygg, Johan

    2017-06-20

    Design of experiments (DOE) is an established methodology in research, development, manufacturing, and production for screening, optimization, and robustness testing. Two-level fractional factorial designs remain the preferred approach due to their high information content while keeping the number of experiments low. These types of designs, however, have never been extended to a generalized multilevel reduced design type capable of including both qualitative and quantitative factors. In this Article we describe a novel generalized fractional factorial design. In addition, it provides complementary and balanced subdesigns analogous to a fold-over in two-level reduced factorial designs. We demonstrate how this design type can be applied with good results in three different applications in analytical chemistry, including (a) multivariate calibration using microwave resonance spectroscopy for the determination of water in tablets, (b) a stability study in drug product development, and (c) representative sample selection in clinical studies. This demonstrates the potential of generalized fractional factorial designs to be applied in many other areas of analytical chemistry where representative, balanced, and complementary subsets are required, especially when a combination of quantitative and qualitative factors at multiple levels exists.
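    For context, the two-level reduced design that the generalized subset designs extend can be generated in a few lines: a 2^(3-1) fractional factorial with the defining relation C = AB, together with its complementary fold-over half. This is only the classical special case; the generalized multilevel construction described in the Article is not reproduced here.

        from itertools import product

        # Full 2^2 design in factors A and B; factor C is aliased with the interaction: C = AB.
        half_fraction = [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]
        fold_over = [(a, b, -c) for a, b, c in half_fraction]  # complementary half: C = -AB

        print("2^(3-1) half fraction (runs as (A, B, C)):")
        for run in half_fraction:
            print(run)
        print("fold-over half:")
        for run in fold_over:
            print(run)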

  17. A Study of Driver's Route Choice Behavior Based on Evolutionary Game Theory

    PubMed Central

    Jiang, Xiaowei; Ji, Yanjie; Deng, Wei

    2014-01-01

    This paper proposes a route choice analytic method that embeds cumulative prospect theory in evolutionary game theory to analyze how the drivers adjust their route choice behaviors under the influence of the traffic information. A simulated network with two alternative routes and one variable message sign is built to illustrate the analytic method. We assume that the drivers in the transportation system are bounded rational, and the traffic information they receive is incomplete. An evolutionary game model is constructed to describe the evolutionary process of the drivers' route choice decision-making behaviors. Here we conclude that the traffic information plays an important role in the route choice behavior. The driver's route decision-making process develops towards different evolutionary stable states in accordance with different transportation situations. The analysis results also demonstrate that employing cumulative prospect theory and evolutionary game theory to study the driver's route choice behavior is effective. This analytic method provides an academic support and suggestion for the traffic guidance system, and may optimize the travel efficiency to a certain extent. PMID:25610455
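    One ingredient of this approach, the evolutionary-game dynamics of route shares, can be sketched with replicator dynamics for two alternative routes. The congestion-dependent travel-time functions below stand in for the cumulative-prospect-theory valuation used in the paper and are illustrative assumptions.

        import numpy as np

        # Two alternative routes; payoff = negative travel time, which increases with the share
        # of drivers on the route (stand-in for the paper's prospect-theoretic valuation).
        def payoffs(p):
            t1 = 20.0 + 30.0 * p           # travel time on route 1 when a share p chooses it
            t2 = 30.0 + 10.0 * (1.0 - p)   # travel time on route 2
            return -t1, -t2

        p, dt = 0.9, 0.01                  # initial share on route 1, time step
        for _ in range(5000):
            u1, u2 = payoffs(p)
            avg = p * u1 + (1.0 - p) * u2
            p += dt * p * (u1 - avg)       # replicator equation: dp/dt = p (u1 - u_bar)
            p = min(max(p, 0.0), 1.0)

        u1, u2 = payoffs(p)
        print(f"evolutionarily stable share on route 1: {p:.3f} "
              f"(travel times {-u1:.1f} and {-u2:.1f} min)")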

  18. Tools for studying dry-cured ham processing by using computed tomography.

    PubMed

    Santos-Garcés, Eva; Muñoz, Israel; Gou, Pere; Sala, Xavier; Fulladosa, Elena

    2012-01-11

    An accurate knowledge and optimization of dry-cured ham elaboration processes could help to reduce operating costs and maximize product quality. The development of nondestructive tools to characterize chemical parameters such as salt and water contents and a_w during processing is of special interest. In this paper, predictive models for salt content (R² = 0.960 and RMSECV = 0.393), water content (R² = 0.912 and RMSECV = 1.751), and a_w (R² = 0.906 and RMSECV = 0.008), which comprise the whole elaboration process, were developed. These predictive models were used to develop analytical tools such as distribution diagrams, line profiles, and regions of interest (ROIs) from the acquired computed tomography (CT) scans. These CT analytical tools provided quantitative information on salt, water, and a_w in terms of content but also distribution throughout the process. The information obtained was applied to two industrial case studies. The main drawback of the predictive models and CT analytical tools is the disturbance that fat produces in water content and a_w predictions.

  19. A study of driver's route choice behavior based on evolutionary game theory.

    PubMed

    Jiang, Xiaowei; Ji, Yanjie; Du, Muqing; Deng, Wei

    2014-01-01

    This paper proposes a route choice analytic method that embeds cumulative prospect theory in evolutionary game theory to analyze how the drivers adjust their route choice behaviors under the influence of the traffic information. A simulated network with two alternative routes and one variable message sign is built to illustrate the analytic method. We assume that the drivers in the transportation system are bounded rational, and the traffic information they receive is incomplete. An evolutionary game model is constructed to describe the evolutionary process of the drivers' route choice decision-making behaviors. Here we conclude that the traffic information plays an important role in the route choice behavior. The driver's route decision-making process develops towards different evolutionary stable states in accordance with different transportation situations. The analysis results also demonstrate that employing cumulative prospect theory and evolutionary game theory to study the driver's route choice behavior is effective. This analytic method provides an academic support and suggestion for the traffic guidance system, and may optimize the travel efficiency to a certain extent.

  20. Boron doped diamond sensor for sensitive determination of metronidazole: Mechanistic and analytical study by cyclic voltammetry and square wave voltammetry.

    PubMed

    Ammar, Hafedh Belhadj; Brahim, Mabrouk Ben; Abdelhédi, Ridha; Samet, Youssef

    2016-02-01

    The performance of the boron-doped diamond (BDD) electrode for the detection of metronidazole (MTZ), the most important drug of the 5-nitroimidazole group, was proven using cyclic voltammetry (CV) and square wave voltammetry (SWV) techniques. A comparative study of the electrochemical response on BDD, glassy carbon and silver electrodes was carried out. The process is pH-dependent. In neutral and alkaline media, one irreversible reduction peak related to the formation of the hydroxylamine derivative was registered, involving a total of four electrons. In acidic medium, a prepeak appears, probably related to the adsorption affinity of hydroxylamine at the electrode surface. The BDD electrode showed a more sensitive and reproducible analytical response compared with the other electrodes. The highest reduction peak current was registered at pH 11. Under optimal conditions, a linear analytical curve was obtained for the MTZ concentration in the range of 0.2-4.2 μmol L(-1), with a detection limit of 0.065 μmol L(-1). Copyright © 2015 Elsevier B.V. All rights reserved.
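    The analytical-curve step reported above (a linear calibration and a detection limit derived from it) can be sketched with synthetic data. The code below assumes made-up peak currents and the common 3·sigma/slope criterion; it does not reproduce the paper's measurements.

        import numpy as np

        # Synthetic calibration data: MTZ concentration (umol/L) vs. SWV peak current (uA).
        conc = np.array([0.2, 0.7, 1.2, 2.2, 3.2, 4.2])
        rng = np.random.default_rng(3)
        current = 1.8 * conc + 0.05 + rng.normal(scale=0.02, size=conc.size)  # assumed response

        slope, intercept = np.polyfit(conc, current, 1)
        residuals = current - (slope * conc + intercept)
        s_y = residuals.std(ddof=2)              # standard error of the regression
        lod = 3.0 * s_y / slope                  # detection limit from the 3*sigma/slope criterion
        r2 = 1.0 - residuals.var() / current.var()

        print(f"slope = {slope:.3f} uA L/umol, R^2 = {r2:.4f}, estimated LOD = {lod:.3f} umol/L")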

  1. Trajectory optimization for the National Aerospace Plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1993-01-01

    The objective of this second phase research is to investigate the optimal ascent trajectory for the National Aerospace Plane (NASP) from runway take-off to orbital insertion and address the unique problems associated with hypersonic flight trajectory optimization. The trajectory optimization problem for an aerospace plane is a highly challenging problem because of the complexity involved. Previous work has been successful in obtaining sub-optimal trajectories by using energy-state approximation and time-scale decomposition techniques. But it is known that the energy-state approximation is not valid in certain portions of the trajectory. This research aims at employing the full dynamics of the aerospace plane and emphasizing direct trajectory optimization methods. The major accomplishments of this research include the first-time development of an inverse dynamics approach in trajectory optimization, which enables us to generate optimal trajectories for the aerospace plane efficiently and reliably, and general analytical solutions to constrained hypersonic trajectories that have wide application in trajectory optimization as well as in guidance and flight dynamics. Optimal trajectories in abort landing and ascent augmented with rocket propulsion and thrust vectoring control were also investigated. Motivated by this study, a new global trajectory optimization tool using continuous simulated annealing and a nonlinear predictive feedback guidance law have been under investigation and some promising results have been obtained, which may well lead to more significant development and application in the near future.

  2. Study of noise transmission through double wall aircraft windows

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1983-01-01

    Analytical and experimental procedures were used to predict the noise transmitted through double wall windows into the cabin of a twin-engine G/A aircraft. The analytical model was applied to optimize cabin noise through parametric variation of the structural and acoustic parameters. The parametric study includes mass addition, increase in plexiglass thickness, decrease in window size, increase in window cavity depth, depressurization of the space between the two window plates, replacement of the air cavity with a transparent viscoelastic material, change in stiffness of the plexiglass material, and different absorptive materials for the interior walls of the cabin. It was found that increasing the exterior plexiglass thickness and/or decreasing the total window size could achieve the proper amount of noise reduction for this aircraft. The total added weight to the aircraft is then about 25 lbs.

  3. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.

    PubMed

    Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon

    2009-01-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was 134% in 2-dimensional space, 143% in 3-dimensional space.
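    A minimal simulation sketch of the 1-D setting above: reference tags on a regular grid, a reader that detects every tag within its detection range, and a position estimate taken as the mean of the detected tags. The estimator and the scanned range values are assumptions, so the resulting error table only illustrates the trade-off behind an optimum near 125% of the tag spacing rather than reproducing the paper's analysis.

        import numpy as np

        spacing = 1.0
        tags = np.arange(0.0, 50.0, spacing)      # reference tags on a 1-D line

        def knn_estimate(reader_pos, detection_range):
            detected = tags[np.abs(tags - reader_pos) <= detection_range]
            return detected.mean() if detected.size else np.nan

        def rms_error(detection_range, n_trials=20000):
            rng = np.random.default_rng(4)
            readers = rng.uniform(10.0, 40.0, size=n_trials)   # keep away from the line ends
            estimates = np.array([knn_estimate(r, detection_range) for r in readers])
            return np.sqrt(np.nanmean((estimates - readers) ** 2))

        for factor in (0.6, 1.0, 1.25, 1.5):
            print(f"detection range = {factor:4.2f} x spacing: RMS position error = "
                  f"{rms_error(factor * spacing):.3f}")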

  4. Optimization of the imaging response of scanning microwave microscopy measurements

    NASA Astrophysics Data System (ADS)

    Sardi, G. M.; Lucibello, A.; Kasper, M.; Gramse, G.; Proietti, E.; Kienberger, F.; Marcelli, R.

    2015-07-01

    In this work, we present the analytical modeling and preliminary experimental results for the choice of the optimal frequencies when performing amplitude and phase measurements with a scanning microwave microscope. In particular, the analysis is related to the reflection mode operation of the instrument, i.e., the acquisition of the complex reflection coefficient data, usually referred as S11. The studied configuration is composed of an atomic force microscope with a microwave matched nanometric cantilever probe tip, connected by a λ/2 coaxial cable resonator to a vector network analyzer. The set-up is provided by Keysight Technologies. As a peculiar result, the optimal frequencies, where the maximum sensitivity is achieved, are different for the amplitude and for the phase signals. The analysis is focused on measurements of dielectric samples, like semiconductor devices, textile pieces, and biological specimens.

  5. Design of sidewall treatment of cabin noise control of a twin engine turboprop aircraft

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.; Slazak, M.

    1983-01-01

    An analytical procedure was used to predict the noise transmission into the cabin of a twin-engine general aviation aircraft. This model was then used to optimize the interior A-weighted noise levels to an average value of about 85 dBA. The surface pressure noise spectral levels were selected utilizing experimental flight data and empirical predictions. The add-on treatments considered in this optimization study include aluminum honeycomb panels, constrained-layer damping tape, porous acoustic blankets, acoustic foams, septum barriers, and limp trim panels, which are isolated from the vibration of the main sidewall structure. To reduce the average noise level in the cabin from about 102 dBA (baseline) to 85 dBA (optimized), the added weight of the noise control treatment is about 2% of the total gross takeoff weight of the aircraft.

  6. Design of sidewall treatment of cabin noise control of a twin engine turboprop aircraft

    NASA Astrophysics Data System (ADS)

    Vaicaitis, R.; Slazak, M.

    1983-12-01

    An analytical procedure was used to predict the noise transmission into the cabin of a twin-engine general aviation aircraft. This model was then used to optimize the interior A-weighted noise levels to an average value of about 85 dBA. The surface pressure noise spectral levels were selected utilizing experimental flight data and empirical predictions. The add-on treatments considered in this optimization study include aluminum honeycomb panels, constrained-layer damping tape, porous acoustic blankets, acoustic foams, septum barriers, and limp trim panels, which are isolated from the vibration of the main sidewall structure. To reduce the average noise level in the cabin from about 102 dBA (baseline) to 85 dBA (optimized), the added weight of the noise control treatment is about 2% of the total gross takeoff weight of the aircraft.

  7. Pushing quantitation limits in micro UHPLC-MS/MS analysis of steroid hormones by sample dilution using high volume injection.

    PubMed

    Márta, Zoltán; Bobály, Balázs; Fekete, Jenő; Magda, Balázs; Imre, Tímea; Mészáros, Katalin Viola; Szabó, Pál Tamás

    2016-09-10

    Ultratrace analysis of sample components requires excellent analytical performance in terms of limits of quantitation (LoQ). Micro UHPLC coupling with sensitive tandem mass spectrometry provides state of the art solutions for such analytical problems. Decreased column volume in micro LC limits the injectable sample volume. However, if analyte concentration is extremely low, it might be necessary to inject high sample volumes. This is particularly critical for strong sample solvents and weakly retained analytes, which are often the case when preparing biological samples (protein precipitation, sample extraction, etc.). In that case, high injection volumes may cause band broadening, peak distortion or even elution in dead volume. In this study, we evaluated possibilities of high volume injection onto microbore RP-LC columns, when sample solvent is diluted. The presented micro RP-LC-MS/MS method was optimized for the analysis of steroid hormones from human plasma after protein precipitation with organic solvents. A proper sample dilution procedure helps to increase the injection volume without compromising peak shapes. Finally, due to increased injection volume, the limit of quantitation can be decreased by a factor of 2-5, depending on the analytes and the experimental conditions. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Optimization of Passive Low Power Wireless Electromagnetic Energy Harvesters

    PubMed Central

    Nimo, Antwi; Grgić, Dario; Reindl, Leonhard M.

    2012-01-01

    This work presents the optimization of antenna captured low power radio frequency (RF) to direct current (DC) power converters using Schottky diodes for powering remote wireless sensors. Linearized models using scattering parameters show that an antenna and a matched diode rectifier can be described as a form of coupled resonator with different individual resonator properties. The analytical models show that the maximum voltage gain of the coupled resonators is mainly related to the antenna, diode and load (remote sensor) resistances at matched conditions or resonance. The analytical models were verified with experimental results. Different passive wireless RF power harvesters offering high selectivity, broadband response and high voltage sensitivity are presented. Measured results show that with an optimal resistance of antenna and diode, it is possible to achieve high RF to DC voltage sensitivity of 0.5 V and efficiency of 20% at −30 dBm antenna input power. Additionally, a wireless harvester (rectenna) is built and tested for receiving range performance. PMID:23202014

  9. Optimization of passive low power wireless electromagnetic energy harvesters.

    PubMed

    Nimo, Antwi; Grgić, Dario; Reindl, Leonhard M

    2012-10-11

    This work presents the optimization of antenna captured low power radio frequency (RF) to direct current (DC) power converters using Schottky diodes for powering remote wireless sensors. Linearized models using scattering parameters show that an antenna and a matched diode rectifier can be described as a form of coupled resonator with different individual resonator properties. The analytical models show that the maximum voltage gain of the coupled resonators is mainly related to the antenna, diode and load (remote sensor) resistances at matched conditions or resonance. The analytical models were verified with experimental results. Different passive wireless RF power harvesters offering high selectivity, broadband response and high voltage sensitivity are presented. Measured results show that with an optimal resistance of antenna and diode, it is possible to achieve high RF to DC voltage sensitivity of 0.5 V and efficiency of 20% at -30 dBm antenna input power. Additionally, a wireless harvester (rectenna) is built and tested for receiving range performance.

  10. X-ray optics simulation and beamline design for the APS upgrade

    NASA Astrophysics Data System (ADS)

    Shi, Xianbo; Reininger, Ruben; Harder, Ross; Haeffner, Dean

    2017-08-01

    The upgrade of the Advanced Photon Source (APS) to a Multi-Bend Achromat (MBA) will increase the brightness of the APS by between two and three orders of magnitude. The APS upgrade (APS-U) project includes a list of feature beamlines that will take full advantage of the new machine. Many of the existing beamlines will be also upgraded to profit from this significant machine enhancement. Optics simulations are essential in the design and optimization of these new and existing beamlines. In this contribution, the simulation tools used and developed at APS, ranging from analytical to numerical methods, are summarized. Three general optical layouts are compared in terms of their coherence control and focusing capabilities. The concept of zoom optics, where two sets of focusing elements (e.g., CRLs and KB mirrors) are used to provide variable beam sizes at a fixed focal plane, is optimized analytically. The effects of figure errors on the vertical spot size and on the local coherence along the vertical direction of the optimized design are investigated.

  11. Quantum dot laser optimization: selectively doped layers

    NASA Astrophysics Data System (ADS)

    Korenev, Vladimir V.; Konoplev, Sergey S.; Savelyev, Artem V.; Shernyakov, Yurii M.; Maximov, Mikhail V.; Zhukov, Alexey E.

    2016-08-01

    Edge-emitting quantum dot (QD) lasers are discussed. It has recently been proposed to use modulation p-doping of the layers adjacent to the QD layers in order to control the QDs' charge state. Experimentally, this has proven useful for enhancing ground-state lasing and suppressing the onset of excited-state lasing at high injection. These results have also been confirmed by numerical calculations involving the solution of drift-diffusion equations. However, a deep understanding of the physical reasons for such behavior, and optimization of the laser, require analytical approaches to the problem. In this paper, under a set of assumptions, we provide an analytical model that explains the major effects of selective p-doping. Capture rates of electrons and holes can be calculated by solving the Poisson equations for electrons and holes around the charged QD layer. The charge itself is governed by the capture rates and the selective doping concentration. We analyzed this self-consistent set of equations and showed that it can be used to optimize QD laser performance and to explain the underlying physics.

  12. Symmetry breaking in optimal timing of traffic signals on an idealized two-way street.

    PubMed

    Panaggio, Mark J; Ottino-Löffler, Bertand J; Hu, Peiguang; Abrams, Daniel M

    2013-09-01

    Simple physical models based on fluid mechanics have long been used to understand the flow of vehicular traffic on freeways; analytically tractable models of flow on an urban grid, however, have not been as extensively explored. In an ideal world, traffic signals would be timed such that consecutive lights turned green just as vehicles arrived, eliminating the need to stop at each block. Unfortunately, this "green-wave" scenario is generally unworkable due to frustration imposed by competing demands of traffic moving in different directions. Until now this has typically been resolved by numerical simulation and optimization. Here, we develop a theory for the flow in an idealized system consisting of a long two-way road with periodic intersections. We show that optimal signal timing can be understood analytically and that there are counterintuitive asymmetric solutions to this signal coordination problem. We further explore how these theoretical solutions degrade as traffic conditions vary and automotive density increases.

  13. Symmetry breaking in optimal timing of traffic signals on an idealized two-way street

    NASA Astrophysics Data System (ADS)

    Panaggio, Mark J.; Ottino-Löffler, Bertand J.; Hu, Peiguang; Abrams, Daniel M.

    2013-09-01

    Simple physical models based on fluid mechanics have long been used to understand the flow of vehicular traffic on freeways; analytically tractable models of flow on an urban grid, however, have not been as extensively explored. In an ideal world, traffic signals would be timed such that consecutive lights turned green just as vehicles arrived, eliminating the need to stop at each block. Unfortunately, this “green-wave” scenario is generally unworkable due to frustration imposed by competing demands of traffic moving in different directions. Until now this has typically been resolved by numerical simulation and optimization. Here, we develop a theory for the flow in an idealized system consisting of a long two-way road with periodic intersections. We show that optimal signal timing can be understood analytically and that there are counterintuitive asymmetric solutions to this signal coordination problem. We further explore how these theoretical solutions degrade as traffic conditions vary and automotive density increases.

  14. Analytic solution of field distribution and demagnetization function of ideal hollow cylindrical field source

    NASA Astrophysics Data System (ADS)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-09-01

    The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart Law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in the HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.

  15. Dominating Scale-Free Networks Using Generalized Probabilistic Methods

    PubMed Central

    Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.

    2014-01-01

    We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
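    A minimal sketch of a two-stage probabilistic dominating-set heuristic of the general kind analyzed above, run on a scale-free graph generated with networkx: include each node independently with probability p, then repair coverage by adding a (preferably high-degree) neighbor of every node left undominated. The inclusion probabilities scanned and the repair rule are illustrative assumptions, not the specific selection strategies proposed in the paper.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(5)
        G = nx.barabasi_albert_graph(2000, 3, seed=5)   # scale-free test network

        def probabilistic_dominating_set(G, p):
            """Stage 1: include each node independently with probability p.
            Stage 2: every node still undominated adds a high-degree neighbor (or itself)."""
            D = {v for v in G if rng.uniform() < p}
            dominated = set(D)
            for v in D:
                dominated.update(G[v])
            for v in G:
                if v not in dominated:
                    nbrs = list(G[v])
                    pick = max(nbrs, key=G.degree) if nbrs else v
                    D.add(pick)
                    dominated.add(v)
                    dominated.update(G[pick])
            return D

        for p in (0.02, 0.05, 0.1, 0.2):
            D = probabilistic_dominating_set(G, p)
            assert nx.is_dominating_set(G, D)
            print(f"inclusion probability p = {p:.2f}: dominating set size = {len(D)}")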

  16. A simple way to synthesize large-scale Cu2O/Ag nanoflowers for ultrasensitive surface-enhanced Raman scattering detection

    NASA Astrophysics Data System (ADS)

    Zou, Junyan; Song, Weijia; Xie, Weiguang; Huang, Bo; Yang, Huidong; Luo, Zhi

    2018-03-01

    Here, we report a simple strategy to prepare highly sensitive surface-enhanced Raman spectroscopy (SERS) substrates based on Ag-decorated Cu2O nanoparticles by combining two common techniques, viz., thermal oxidation growth of Cu2O nanoparticles and magnetron sputtering fabrication of a Ag nanoparticle film. Methylene blue is used as the Raman analyte for the SERS study, and the substrates fabricated under optimized conditions have very good sensitivity (analytical enhancement factor ~10⁸), stability, and reproducibility. A linear dependence of the SERS intensities on the concentration was obtained, with an R² value >0.9. These excellent properties indicate that the substrate has great potential in the detection of biological and chemical substances.

  17. Modelling of resonant MEMS magnetic field sensor with electromagnetic induction sensing

    NASA Astrophysics Data System (ADS)

    Liu, Song; Xu, Huaying; Xu, Dehui; Xiong, Bin

    2017-06-01

    This paper presents an analytical model of resonant MEMS magnetic field sensor with electromagnetic induction sensing. The resonant structure vibrates in square extensional (SE) mode. By analyzing the vibration amplitude and quality factor of the resonant structure, the magnetic field sensitivity as a function of device structure parameters and encapsulation pressure is established. The developed analytical model has been verified by comparing calculated results with experiment results and the deviation between them is only 10.25%, which shows the feasibility of the proposed device model. The model can provide theoretical guidance for further design optimization of the sensor. Moreover, a quantitative study of the magnetic field sensitivity is conducted with respect to the structure parameters and encapsulation pressure based on the proposed model.

  18. Singular perturbation analysis of AOTV-related trajectory optimization problems

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1990-01-01

    The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date is found to be analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. A first order state variable inequality constraint was imposed on the full order AOTV point mass equations of motion, using a simple aerodynamic heating rate model.

  19. Artificial immune system for effective properties optimization of magnetoelectric composites

    NASA Astrophysics Data System (ADS)

    Poteralski, Arkadiusz; Dziatkiewicz, Grzegorz

    2018-01-01

    The optimization problem of the effective properties for magnetoelectric composites is considered. The effective properties are determined by the semi-analytical Mori-Tanaka approach. The generalized Eshelby tensor components are calculated numerically by using the Gauss quadrature method for the integral representation of the inclusion problem. The linear magnetoelectric constitutive equation is used. The effect of orientation of the electromagnetic materials components is taken into account. The optimization problem of the design is formulated and the artificial immune system is applied to solve it.

  20. Converging Towards the Optimal Path to Extinction

    DTIC Science & Technology

    2011-01-01

    the reproductive rate R0 should be greater than but very close to 1. However, most real diseases have R0 larger than 1.5, which translates into a...can analytically find an expression for the action along the optimal path. The expression for the action is a function of k and the reproductive number...the optimal path for a range of values of the reproductive number R0. In contrast to the prior two examples, here the action must be computed
