Sample records for analytical optimization study

  1. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.
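
    The deterministic-versus-reliability-based comparison described above can be illustrated with a toy sizing sketch (all numbers are hypothetical, not taken from the paper): size a bar's cross-section either against a nominal load with a safety factor, or against a target failure probability for a normally distributed load.

```python
def deterministic_area(nominal_load, allow_stress, safety_factor=1.5):
    """Size the cross-section against the nominal load with a fixed safety factor."""
    return safety_factor * nominal_load / allow_stress

def reliability_area(mean_load, std_load, allow_stress, z=3.0):
    """Size the cross-section so that P(stress > allow_stress) is roughly the
    normal tail beyond z standard deviations (z = 3 -> about 0.13%)."""
    return (mean_load + z * std_load) / allow_stress

# Hypothetical numbers: 100 kN mean load, 20 kN standard deviation, 250 MPa allowable.
a_det = deterministic_area(100e3, 250e6)       # m^2
a_rel = reliability_area(100e3, 20e3, 250e6)   # m^2
```

    With large load scatter the reliability-based design comes out heavier; with small scatter it can be lighter than the safety-factor design, which is the kind of trade the abstract's 10-bar truss example quantifies.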

  2. Extended Analytic Device Optimization Employing Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred

    2013-01-01

    Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact, all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions, a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.
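
    For context, the classic closed-form result that such analytic treatments generalize is the constant-property efficiency formula built on the figure of merit ZT; a minimal sketch with hypothetical material values (roughly Bi2Te3-like, not from the paper):

```python
from math import sqrt

def figure_of_merit(seebeck, elec_cond, therm_cond, temperature):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck**2 * elec_cond * temperature / therm_cond

def max_efficiency(t_hot, t_cold, zt_mean):
    """Classic maximum conversion efficiency of a thermoelectric couple, derived
    under the same idealizations the abstract lists (constant properties,
    fixed shoe temperatures, insulated leg sides)."""
    carnot = (t_hot - t_cold) / t_hot
    m = sqrt(1.0 + zt_mean)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Hypothetical values: S = 220 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/m/K.
zt = figure_of_merit(220e-6, 1e5, 1.5, 300.0)
eta = max_efficiency(500.0, 300.0, zt)
```

    As ZT grows without bound the efficiency approaches the Carnot limit, which is the limiting case the dimensionless parameters in the abstract are meant to expose.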

  3. Gradient Optimization for Analytic conTrols - GOAT

    NASA Astrophysics Data System (ADS)

    Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank

    Quantum optimal control has become a necessary step in a number of studies in the quantum realm. Recent experimental advances have shown that superconducting qubits can be controlled with impressive accuracy. However, most standard optimal control algorithms are not designed to deliver such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: the Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise-constant approximation of the control pulse used by standard algorithms, which allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g. the absence of backpropagation and a natural route to optimizing the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.
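
    A heavily simplified sketch of the idea behind analytically parametrized controls: for a resonant one-qubit pi-pulse toy problem the gate fidelity of a smooth pulse has a closed form, so its exact parameter gradient can be used directly (nothing below is the actual GOAT implementation, which propagates the gradient alongside the full dynamics):

```python
from math import sin, cos, pi

T = 1.0  # pulse duration (arbitrary units)

def pulse_area(amp):
    """Area of the analytic pulse Omega(t) = amp * sin(pi t / T)^2, which
    integrates to amp * T / 2 -- no piecewise-constant discretization."""
    return amp * T / 2.0

def fidelity(amp):
    """Fidelity of the resulting X rotation against a target pi pulse."""
    theta = pulse_area(amp)
    return cos((theta - pi) / 2.0) ** 2

def fidelity_grad(amp):
    """Exact analytic gradient dF/d(amp) of the fidelity."""
    theta = pulse_area(amp)
    return -cos((theta - pi) / 2.0) * sin((theta - pi) / 2.0) * (T / 2.0)

amp = 1.0                    # initial guess for the pulse amplitude
for _ in range(200):         # plain gradient ascent on the fidelity
    amp += 2.0 * fidelity_grad(amp)
```

    Gradient ascent drives the pulse area to pi (amp -> 2*pi/T), the analogue of GOAT pushing an analytic ansatz toward unit gate fidelity.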

  4. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

    Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.

  5. Optimism and Physical Health: A Meta-analytic Review

    PubMed Central

    Rasmussen, Heather N.; Greenhouse, Joel B.

    2010-01-01

    Background Prior research links optimism to physical health, but the strength of the association has not been systematically evaluated. Purpose The purpose of this study is to conduct a meta-analytic review to determine the strength of the association between optimism and physical health. Methods The findings from 83 studies, with 108 effect sizes (ESs), were included in the analyses, using random-effects models. Results Overall, the mean ES characterizing the relationship between optimism and physical health outcomes was 0.17, p<.001. ESs were larger for studies using subjective (versus objective) measures of physical health. Subsidiary analyses were also conducted grouping studies into those that focused solely on mortality, survival, cardiovascular outcomes, physiological markers (including immune function), immune function only, cancer outcomes, outcomes related to pregnancy, physical symptoms, or pain. In each case, optimism was a significant predictor of health outcomes or markers, all p<.001. Conclusions Optimism is a significant predictor of positive physical health outcomes. PMID:19711142
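
    The random-effects pooling used in such reviews can be sketched with the DerSimonian-Laird estimator (the effect sizes below are made up for illustration, not the study's data):

```python
from math import sqrt

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling: estimate the between-study
    variance tau^2 from Cochran's Q, then combine studies with
    inverse-variance weights 1 / (v_i + tau^2)."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical effect sizes (correlations) and sampling variances:
es = [0.25, 0.10, 0.18, 0.12, 0.22]
vs = [0.010, 0.008, 0.012, 0.009, 0.011]
pooled, se, tau2 = random_effects_meta(es, vs)
```

    The pooled estimate is a convex combination of the study effects, and tau^2 shrinks to zero when the studies are homogeneous.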

  6. Approximated analytical solution to an Ebola optimal control problem

    NASA Astrophysics Data System (ADS)

    Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.

    2016-11-01

    An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.

  7. Analytical Model-Based Design Optimization of a Transverse Flux Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
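
    The PSO half of an MEC-PSO pipeline can be sketched generically; the objective below is a stand-in shifted sphere function, not a magnetic-equivalent-circuit model, and all parameters are typical textbook choices rather than the paper's:

```python
import random

def pso(objective, bounds, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia with
    attraction toward both."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for the torque-density objective: a shifted sphere in 3 variables.
best, best_val = pso(lambda x: sum((xi - 0.5) ** 2 for xi in x),
                     bounds=[(-1.0, 1.0)] * 3)
```

    In the paper's setting the three search dimensions would be stator pole length, magnet length, and rotor thickness, with the MEC model supplying the objective evaluation.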

  8. Analytical solutions to optimal underactuated spacecraft formation reconfiguration

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Yan, Ye; Zhou, Yang

    2015-11-01

    Underactuated systems can generally be defined as systems with fewer control inputs than degrees of freedom to be controlled. In this paper, analytical solutions to optimal underactuated spacecraft formation reconfiguration without either the radial or the in-track control are derived. By using a linear dynamical model of underactuated spacecraft formation in circular orbits, controllability analysis is conducted for either underactuated case. Indirect optimization methods based on the minimum principle are then introduced to generate analytical solutions to optimal open-loop underactuated reconfiguration problems. Both fixed and free final conditions constraints are considered for either underactuated case, and comparisons between these two final conditions indicate that the optimal control strategies with free final conditions require less control effort than those with fixed final conditions. Meanwhile, closed-loop adaptive sliding mode controllers for both underactuated cases are designed to guarantee optimal trajectory tracking in the presence of unmatched external perturbations, linearization errors, and system uncertainties. The adaptation laws are designed via a Lyapunov-based method to ensure the overall stability of the closed-loop system. The explicit expressions of the terminal convergent regions of each system state have also been obtained. Numerical simulations demonstrate the validity and feasibility of the proposed open-loop and closed-loop control schemes for optimal underactuated spacecraft formation reconfiguration in circular orbits.
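
    The standard linear model for relative motion in circular orbits is the Clohessy-Wiltshire system, whose in-plane closed-form solution can be checked against direct integration (a generic sketch with an assumed mean motion, not the paper's underactuated derivation):

```python
from math import sin, cos

N = 0.00113  # assumed mean motion of a ~400 km circular orbit, rad/s

def cw_closed_form(x0, y0, vx0, vy0, t, n=N):
    """Closed-form in-plane Clohessy-Wiltshire solution (x radial, y along-track)."""
    s, c = sin(n * t), cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0 + (4 * s / n - 3 * t) * vy0
    return x, y

def cw_rk4(x0, y0, vx0, vy0, t, n=N, steps=2000):
    """Numerical check: integrate xddot = 3n^2 x + 2n ydot, yddot = -2n xdot."""
    def f(state):
        x, y, vx, vy = state
        return (vx, vy, 3 * n * n * x + 2 * n * vy, -2 * n * vx)
    state = (x0, y0, vx0, vy0)
    h = t / steps
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + h / 6 * (a + 2 * b + 2 * c_ + d)
                      for s, a, b, c_, d in zip(state, k1, k2, k3, k4))
    return state[0], state[1]
```

    The paper's contribution is the analytical optimal-control solution built on top of such a model when one control channel is removed.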

  9. Predictive Analytics for Coordinated Optimization in Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui

    This talk will present NREL's work on developing predictive analytics that enables the optimal coordination of all the available resources in distribution systems to achieve the control objectives of system operators. Two projects will be presented. One focuses on developing short-term state forecasting-based optimal voltage regulation in distribution systems; and the other one focuses on actively engaging electricity consumers to benefit distribution system operations.

  10. Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.

    PubMed

    Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina

    2016-08-25

    The increasing trend in the recent literature on coarse grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even at a given resolution level, the force fields are very heterogeneous and optimized with very different parametrization procedures. Along the road to standardization of CG models for biopolymers, here we describe a strategy to aid building and optimization of statistics-based analytical force fields and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use of analytical potentials, optimized by targeting the statistical distributions of internal variables by means of a combination of different algorithms (i.e., relative entropy driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions, and allows handling issues related to force field term correlations. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are underway.
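
    One of the two algorithms named, iterative Boltzmann inversion, has a compact core update; a grid-based sketch with exact Boltzmann distributions rather than sampled histograms (so a single step suffices, unlike in practice):

```python
from math import exp, log

KT = 1.0
GRID = [i * 0.05 - 2.0 for i in range(81)]  # x in [-2, 2]

def boltzmann(u_vals):
    """Normalized distribution P(x) ~ exp(-U(x)/kT) on the grid."""
    w = [exp(-u / KT) for u in u_vals]
    z = sum(w)
    return [wi / z for wi in w]

def ibi_step(u_vals, p_target):
    """One iterative-Boltzmann-inversion update:
    U_new(x) = U(x) + kT * ln(P_current(x) / P_target(x))."""
    p_cur = boltzmann(u_vals)
    return [u + KT * log(pc / pt) for u, pc, pt in zip(u_vals, p_cur, p_target)]

# Target: a harmonic "bond" potential; initial guess: a flat potential.
u_true = [2.0 * x * x for x in GRID]
p_target = boltzmann(u_true)
u_guess = [0.0 for _ in GRID]
u_refined = ibi_step(u_guess, p_target)
```

    With exact distributions one step recovers the target potential up to an additive constant; with noisy sampled histograms, several damped iterations are needed, which is part of what a platform like AsParaGS automates.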

  11. A Requirements-Driven Optimization Method for Acoustic Liners Using Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Lopes, Leonard V.

    2017-01-01

    More than ever, there is flexibility and freedom in acoustic liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. In a previous paper on this subject, a method deriving the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground was described. A simple code-wrapping approach was used to evaluate a community noise objective function for an external optimizer. Gradients were evaluated using a finite difference formula. The subject of this paper is an application of analytic derivatives that supply precise gradients to an optimization process. Analytic derivatives improve the efficiency and accuracy of gradient-based optimization methods and allow consideration of more design variables. In addition, the benefit of variable impedance liners is explored using a multi-objective optimization.

  12. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

    Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
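
    The analytical least-squares step can be sketched on a toy beamlet system (the dose vectors below are hypothetical; the real basis would be pre-calculated beamlet dose distributions at each gantry angle):

```python
def solve_linear(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def least_squares_fluence(beamlets, target):
    """Analytic least-squares weighting of pre-calculated beamlet dose vectors:
    minimize ||sum_j w_j b_j - target||^2 via the normal equations."""
    n = len(beamlets)
    ata = [[sum(bi * bj for bi, bj in zip(beamlets[i], beamlets[j]))
            for j in range(n)] for i in range(n)]
    atd = [sum(bi * d for bi, d in zip(beamlets[i], target)) for i in range(n)]
    return solve_linear(ata, atd)

# Toy "dose" vectors for three beamlets over five voxels (hypothetical numbers).
beamlets = [[1.0, 0.5, 0.0, 0.0, 0.0],
            [0.0, 0.5, 1.0, 0.5, 0.0],
            [0.0, 0.0, 0.0, 0.5, 1.0]]
target = [2.0, 2.0, 2.0, 2.0, 2.0]
weights = least_squares_fluence(beamlets, target)
```

    The abstract's multi-level scheme repeats this solve, carrying only the positively weighted basis vectors into the next level.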

  13. Analytic Optimization of Near-Field Optical Chirality Enhancement

    PubMed Central

    2017-01-01

    We present an analytic derivation for the enhancement of local optical chirality in the near field of plasmonic nanostructures by tuning the far-field polarization of external light. We illustrate the results by means of simulations with an achiral and a chiral nanostructure assembly and demonstrate that local optical chirality is significantly enhanced with respect to circular polarization in free space. The optimal external far-field polarizations are different from both circular and linear. Symmetry properties of the nanostructure can be exploited to determine whether the optimal far-field polarization is circular. Furthermore, the optimal far-field polarization depends on the frequency, which results in complex-shaped laser pulses for broadband optimization. PMID:28239617

  14. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.

  15. Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance

    NASA Technical Reports Server (NTRS)

    Markopoulos, Nikos; Calise, Anthony J.

    1995-01-01

    A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to a/c unconstrained energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed 2-time-scale behavior of a/c during energy climbs is identified and explained. With regard to the constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs for such problems that result from the traditional formulations, a complete reformulation is proposed for these problems which, in contrast to the old formulation, will presumably lead to practically useful controllers that can track an inequality constraint boundary asymptotically, even in the presence of 2-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of 2-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises under the assumption of tangential thrust from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. The thrust program can be used not only with power-limited propulsion systems, but also with any propulsion system capable of generating continuous thrust of controllable magnitude. For propulsion types and classes of transfers for which it is sufficiently optimal, the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations that requires a minimum amount of computation and is thus uniquely suitable for real-time feedback guidance implementations.

  16. Cocontraction of pairs of antagonistic muscles: analytical solution for planar static nonlinear optimization approaches.

    PubMed

    Herzog, W; Binding, P

    1993-11-01

    It has been stated in the literature that static, nonlinear optimization approaches cannot predict coactivation of pairs of antagonistic muscles; however, numerical solutions of such approaches have predicted coactivation of pairs of one-joint and multijoint antagonists. Analytical support for either finding is not available in the literature for systems containing more than one degree of freedom. The purpose of this study was to investigate analytically the possibility of cocontraction of pairs of antagonistic muscles using a static nonlinear optimization approach for a multidegree-of-freedom, two-dimensional system. Analytical solutions were found using the Karush-Kuhn-Tucker conditions, which were necessary and sufficient for optimality in this problem. The results show that cocontraction of pairs of one-joint antagonistic muscles is not possible, whereas cocontraction of pairs of multijoint antagonists is. These findings suggest that cocontraction of pairs of antagonistic muscles may be an "efficient" way to accomplish many movement tasks.
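
    The one-joint result can be reproduced with a two-muscle Karush-Kuhn-Tucker sketch (hypothetical moment arms, minimizing the sum of squared forces as a stand-in for the paper's objective):

```python
def min_effort_forces(r1, r2, moment):
    """Minimize f1^2 + f2^2 subject to r1*f1 - r2*f2 = moment and f1, f2 >= 0:
    the planar one-joint agonist/antagonist case, solved by checking the
    Karush-Kuhn-Tucker conditions directly."""
    # Stationary point of the equality-constrained Lagrangian:
    lam = moment / (r1 * r1 + r2 * r2)
    f1, f2 = lam * r1, -lam * r2
    if f2 < 0.0:          # antagonist force would be negative -> bind f2 = 0
        f1, f2 = moment / r1, 0.0
    if f1 < 0.0:          # symmetric case for a negative (flexion) moment
        f1, f2 = 0.0, -moment / r2
    return f1, f2

# Hypothetical moment arms of 4 cm and 3 cm, 2 Nm net extension moment.
f_ag, f_ant = min_effort_forces(r1=0.04, r2=0.03, moment=2.0)
```

    The non-negativity constraint always binds for one of the pair, so the one-joint antagonist carries zero force, matching the paper's analytical finding; cocontraction only becomes possible once multijoint muscles couple several moment-balance constraints.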

  17. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of the analytical models for describing the corresponding optimization problem and on an exact global optimization software, named IBBA and developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  18. Multidisciplinary optimization in aircraft design using analytic technology models

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1991-01-01

    An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computational intense representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.

  19. Optimal starting conditions for the rendezvous maneuver: Analytical and computational approach

    NASA Astrophysics Data System (ADS)

    Ciarcia, Marco

    by the optimal trajectory. For the guidance trajectory, because of the replacement of the variable thrust direction of the powered subarc with a constant thrust direction, the optimal control problem degenerates into a mathematical programming problem with a relatively small number of degrees of freedom, more precisely: three for case (i) time-to-rendezvous free and two for case (ii) time-to-rendezvous given. In particular, we consider the rendezvous between the Space Shuttle (chaser) and the International Space Station (target). Once a given initial distance SS-to-ISS is preselected, the present work supplies not only the best initial conditions for the rendezvous trajectory, but simultaneously the corresponding final conditions for the ascent trajectory. In Part B, an analytical solution of the Clohessy-Wiltshire equations is presented (i) neglecting the change of the spacecraft mass due to the fuel consumption and (ii) assuming that the thrust is finite, that is, the trajectory includes powered subarcs flown with max thrust and coasting subarcs flown with zero thrust. Then, employing the found analytical solution, we study the rendezvous problem under the assumption that the initial separation coordinates and initial separation velocities are free except for the requirement that the initial chaser-to-target distance is given. The main contribution of Part B is the development of analytical solutions for the powered subarcs, an important extension of the analytical solutions already available for the coasting subarcs. One consequence is that the entire optimal trajectory can be described analytically. Another consequence is that the optimal control problems degenerate into mathematical programming problems. A further consequence is that, vis-a-vis the optimal control formulation, the mathematical programming formulation reduces the CPU time by a factor of order 1000. Key words.
Space trajectories, rendezvous, optimization, guidance, optimal control, calculus of variations.

  20. Reactive power optimization strategy considering analytical impedance ratio

    NASA Astrophysics Data System (ADS)

    Wu, Zhongchao; Shen, Weibing; Liu, Jinming; Guo, Maoran; Zhang, Shoulin; Xu, Keqiang; Wang, Wanjun; Sui, Jinlong

    2017-05-01

    In this paper, considering that traditional reactive power optimization cannot realize continuous voltage adjustment and voltage stability, a dynamic reactive power optimization strategy is proposed to achieve both minimal network loss and high voltage stability with wind power. Because wind power generation is fluctuant and uncertain, electrical equipment such as transformers and shunt capacitors may be operated frequently in order to minimize network loss, which shortens the lifetimes of these devices. To solve this problem, this paper introduces the derivation of an analytical impedance ratio based on the Thevenin equivalent. A multiple-objective function is then proposed to minimize both the network loss and the analytical impedance ratio. Finally, taking the improved IEEE 33-bus distribution system as an example, the results show that the movement of voltage control equipment is reduced while the increment in network loss is kept under control, which proves the practical value of this strategy.
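
    A minimal sketch of a Thevenin-based impedance ratio as a voltage-stability indicator (two-bus model with hypothetical per-unit values; the paper's multi-bus derivation is more involved):

```python
import cmath  # complex math; plain complex arithmetic is enough here

def impedance_ratio(z_thevenin, z_load):
    """Voltage-stability indicator from the Thevenin equivalent seen at a load
    bus: |Z_thevenin| / |Z_load| approaches 1 at the maximum power transfer
    point, so smaller ratios mean larger stability margin."""
    return abs(z_thevenin) / abs(z_load)

def load_voltage(e_thevenin, z_thevenin, z_load):
    """Voltage-divider bus voltage for the two-bus Thevenin model."""
    return e_thevenin * z_load / (z_thevenin + z_load)

E = 1.0 + 0j               # per-unit source voltage
Z_TH = 0.02 + 0.08j        # hypothetical Thevenin impedance (p.u.)

light = impedance_ratio(Z_TH, 1.0 + 0.5j)   # light load: large Z_load
heavy = impedance_ratio(Z_TH, 0.1 + 0.05j)  # heavy load: Z_load shrinks
```

    As load grows, the ratio climbs toward 1 and the bus voltage sags, which is why penalizing the impedance ratio in the objective trades a little network loss for voltage-stability margin.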

  1. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
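
    The advantage of analytic over finite-difference derivatives can be seen on a one-variable stand-in for a cycle-model response (the function below is purely illustrative, not a Pycycle quantity):

```python
def f(x):
    """Stand-in smooth model output (e.g., one response of a cycle model)."""
    return x ** 3 - 2.0 * x

def analytic_grad(x):
    """Exact derivative, the kind a framework like OpenMDAO assembles."""
    return 3.0 * x ** 2 - 2.0

def fd_grad(x, h):
    """Forward-difference approximation with step size h."""
    return (f(x + h) - f(x)) / h

x0 = 1.7
errors = {h: abs(fd_grad(x0, h) - analytic_grad(x0)) for h in (1e-2, 1e-4, 1e-10)}
```

    Large steps suffer truncation error and tiny steps suffer round-off, so finite differencing always balances two error sources; the analytic derivative has neither, which is what makes gradient-based optimization of cycle models stable.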

  2. Analytic solution to variance optimization with no short positions

    NASA Astrophysics Data System (ADS)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer.

  3. Analytical Tools to Improve Optimization Procedures for Lateral Flow Assays

    PubMed Central

    Hsieh, Helen V.; Dantzler, Jeffrey L.; Weigl, Bernhard H.

    2017-01-01

    Immunochromatographic or lateral flow assays (LFAs) are inexpensive, easy to use, point-of-care medical diagnostic tests that are found in arenas ranging from a doctor’s office in Manhattan to a rural medical clinic in low resource settings. The simplicity in the LFA itself belies the complex task of optimization required to make the test sensitive, rapid and easy to use. Currently, the manufacturers develop LFAs by empirical optimization of material components (e.g., analytical membranes, conjugate pads and sample pads), biological reagents (e.g., antibodies, blocking reagents and buffers) and the design of delivery geometry. In this paper, we will review conventional optimization and then focus on the latter and outline analytical tools, such as dynamic light scattering and optical biosensors, as well as methods, such as microfluidic flow design and mechanistic models. We are applying these tools to find non-obvious optima of lateral flow assays for improved sensitivity, specificity and manufacturing robustness. PMID:28555034

  4. Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers

    PubMed Central

    Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.

    2016-01-01

    The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. In the direction of dealing with this problem in real time, the ALOPEX stochastic optimization method is used, to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results, that simulate a known real aquifer case, are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its abilities to manage freshwater pumping in real aquifer environments. PMID:27689362

  5. Analytical approaches to optimizing system "Semiconductor converter-electric drive complex"

    NASA Astrophysics Data System (ADS)

    Kormilicin, N. V.; Zhuravlev, A. M.; Khayatov, E. S.

    2018-03-01

    In the electric drives of the machine-building industry, the problem of optimizing the drive in terms of mass and size indicators is acute. The article offers analytical methods that ensure the minimization of the mass of a multiphase semiconductor converter. In multiphase electric drives, the phase-current waveform that makes the best use of the active materials of the "semiconductor converter-electric drive" complex differs from a sinusoid. It is shown that, under certain restrictions on the phase-current waveform, an analytical solution can be obtained. In particular, if the phase current is assumed to be rectangular, the optimal shape of the control actions will depend on the width of the interpolar gap. In the general case, the proposed algorithm can be used to solve the problem under consideration by numerical methods.

  6. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

    The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra against those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure in more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
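The core loop of such a stochastic-optimization continuation can be sketched on synthetic data; the simplified Laplace-type kernel, the grids, and the greedy accept-if-better rule below are illustrative assumptions, not the actual FESOM update moves:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified Laplace-type kernel G(tau) = sum_w K(tau, w) A(w); a stand-in
# for the fermionic kernel used with real QMC data.
tau = np.linspace(0.05, 3.0, 40)
omega = np.linspace(0.0, 5.0, 60)
K = np.exp(-np.outer(tau, omega))

A_true = np.exp(-((omega - 2.0) ** 2) / 0.5)     # hypothetical "true" spectrum
A_true /= A_true.sum()
G = K @ A_true + rng.normal(0, 1e-4, tau.size)   # noisy synthetic "QMC" data

def chi2(A):
    return np.sum((K @ A - G) ** 2)

# Stochastic optimization: random non-negative perturbations of the
# spectral weights, keeping those that lower chi^2.
A = np.full(omega.size, 1.0 / omega.size)
best = chi2(A)
for _ in range(20000):
    trial = np.clip(A + rng.normal(0, 1e-3, omega.size), 0, None)
    trial /= trial.sum()                          # keep spectral weight normalized
    c = chi2(trial)
    if c < best:
        A, best = trial, c
```

Averaging many such stochastic solutions, rather than keeping a single one, is what gives methods of this family their frequency-resolved uncertainty estimates.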

  7. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    PubMed

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization, and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
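The desirability approach the review surveys can be sketched in a few lines; the Derringer-Suich ramp forms are standard, but the two chromatographic responses and their limits below are hypothetical examples:

```python
import numpy as np

def desirability_max(y, low, target, s=1.0):
    """One-sided 'larger is better' desirability: 0 below `low`,
    1 above `target`, a power ramp in between (Derringer-Suich form)."""
    d = ((y - low) / (target - low)) ** s
    return float(np.clip(d, 0.0, 1.0))

def desirability_min(y, target, high, s=1.0):
    """'Smaller is better': 1 below `target`, 0 above `high`."""
    d = ((high - y) / (high - target)) ** s
    return float(np.clip(d, 0.0, 1.0))

def overall(ds):
    """Global desirability D: geometric mean; any d = 0 vetoes the settings."""
    ds = np.asarray(ds, float)
    return float(ds.prod() ** (1.0 / ds.size))

# Hypothetical responses at one candidate set of conditions:
# resolution (maximize) and run time in minutes (minimize).
d1 = desirability_max(1.8, low=1.0, target=2.0)
d2 = desirability_min(7.0, target=5.0, high=12.0)
D = overall([d1, d2])   # maximize D over the RSM design space
```

In practice D is evaluated on the fitted response-surface models and maximized over the factor space, which is how the multi-response problem collapses to a single objective.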

  8. Analytical Optimization of the Net Residual Dispersion in SPM-Limited Dispersion-Managed Systems

    NASA Astrophysics Data System (ADS)

    Xiao, Xiaosheng; Gao, Shiming; Tian, Yu; Yang, Changxi

    2006-05-01

    Dispersion management is an effective technique to suppress the nonlinear impairment in fiber transmission systems, which includes tuning the amounts of precompensation, residual dispersion per span (RDPS), and net residual dispersion (NRD) of the systems. For self-phase modulation (SPM)-limited systems, optimizing the NRD is necessary because it can greatly improve the system performance. In this paper, an analytical method is presented to optimize NRD for SPM-limited dispersion-managed systems. The method is based on the correlation between the nonlinear impairment and the output pulse broadening of SPM-limited systems; therefore, dispersion-managed systems can be optimized through minimizing the output single-pulse broadening. A set of expressions is derived to calculate the output pulse broadening of the SPM-limited dispersion-managed system, from which the analytical result of optimal NRD is obtained. Furthermore, with the expressions of pulse broadening, the dependence of the nonlinear impairment on the amounts of precompensation and RDPS can be revealed conveniently.

  9. Spatiotemporal and geometric optimization of sensor arrays for detecting analytes in fluids

    DOEpatents

    Lewis, Nathan S.; Freund, Michael S.; Briglin, Shawn M.; Tokumaru, Phil; Martin, Charles R.; Mitchell, David T.

    2006-10-17

    Sensor arrays and sensor array systems for detecting analytes in fluids. Sensors configured to generate a response upon introduction of a fluid containing one or more analytes can be located on one or more surfaces relative to one or more fluid channels in an array. Fluid channels can take the form of pores or holes in a substrate material. Fluid channels can be formed between one or more substrate plates. Sensors can be fabricated with substantially optimized sensor volumes to generate a response having a substantially maximized signal-to-noise ratio upon introduction of a fluid containing one or more target analytes. Methods of fabricating and using such sensor arrays and systems are also disclosed.

  10. Analytical and simulator study of advanced transport

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Rickard, W. W.

    1982-01-01

    An analytic methodology, based on the optimal-control pilot model, was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft in final approach. Calibration of the methodology is largely in terms of closed-loop performance requirements, rather than specific vehicle response characteristics, and is based on a combination of published criteria, pilot preferences, physical limitations, and engineering judgment. Six longitudinal-axis approach configurations were studied covering a range of handling qualities problems, including the presence of flexible aircraft modes. The analytical procedure was used to obtain predictions of Cooper-Harper ratings, a scalar quadratic performance index, and rms excursions of important system variables.

  11. Optimizing an immersion ESL curriculum using analytic hierarchy process.

    PubMed

    Tang, Hui-Wen Vivian

    2011-11-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative importance of course criteria for the purpose of tailoring an optimal one-week immersion English as a second language (ESL) curriculum for elementary school students in a suburban county of Taiwan. The hierarchy model and AHP analysis utilized in the present study will be useful for resolving several important multi-criteria decision-making issues in planning and evaluating ESL programs. This study also offers valuable insights and provides a basis for further research in customizing ESL curriculum models for different student populations with distinct learning needs, goals, and socioeconomic backgrounds. Copyright © 2011 Elsevier Ltd. All rights reserved.
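The AHP machinery behind such a study can be sketched compactly: a reciprocal pairwise-comparison matrix on Saaty's 1-9 scale, priority weights from the principal eigenvector, and a consistency check. The matrix below is a hypothetical three-criterion example, not the paper's course-criterion judgments:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three course criteria;
# entry [i, j] says how strongly criterion i is preferred over j,
# with reciprocal entries by construction.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority weights: principal right eigenvector, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), divided by the
# random index RI (0.58 for n = 3); CR < 0.1 is the usual threshold.
n = A.shape[0]
lam = vals[k].real
CR = ((lam - n) / (n - 1)) / 0.58
```

Group judgments are typically aggregated (e.g. by geometric means of individual matrices) before this step, which is how AHP produces the "uniform group decision" the abstract refers to.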

  12. Analytically optimal parameters of dynamic vibration absorber with negative stiffness

    NASA Astrophysics Data System (ADS)

    Shen, Yongjun; Peng, Haibo; Li, Xianghong; Yang, Shaopu

    2017-02-01

    In this paper the optimal parameters of a dynamic vibration absorber (DVA) with negative stiffness are analytically studied. The analytical solution is obtained by the Laplace transform method when the primary system is subjected to harmonic excitation. The research shows that there are still two fixed points independent of the absorber damping in the amplitude-frequency curve of the primary system when the system contains negative stiffness. Then the optimum frequency ratio and optimum damping ratio are respectively obtained based on the fixed-point theory. A new strategy is proposed to obtain the optimum negative stiffness ratio while keeping the system stable. Finally, the control performance of the presented DVA is compared with those of three existing typical DVAs, presented by Den Hartog, Ren, and Sims respectively. The comparison results under harmonic and random excitation show that the DVA presented in this paper not only reduces the peak value of the amplitude-frequency curve of the primary system significantly, but also broadens the efficient frequency range of vibration mitigation.
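For reference, the classic fixed-point optimum of Den Hartog that the paper uses as a comparison baseline (a viscous DVA on an undamped primary mass, without negative stiffness) reduces to two closed-form expressions in the mass ratio; the mass ratio of 0.1 below is an illustrative choice:

```python
import math

def den_hartog_optimal(mu):
    """Classic fixed-point optimum for a viscous DVA attached to an
    undamped primary mass (no negative stiffness): returns the optimum
    tuning (frequency) ratio and damping ratio for mass ratio mu."""
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

f, z = den_hartog_optimal(0.1)   # a 10% absorber-to-primary mass ratio
```

The paper's contribution is the analogous closed-form optimum when a negative-stiffness element is added, which introduces the extra stiffness-ratio parameter absent from these formulas.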

  13. Spatiotemporal and geometric optimization of sensor arrays for detecting analytes in fluids

    DOEpatents

    Lewis, Nathan S [La Canada, CA]; Freund, Michael S [Winnipeg, CA]; Briglin, Shawn S [Chittenango, NY]; Tokumaru, Phillip [Moorpark, CA]; Martin, Charles R [Gainesville, FL]; Mitchell, David [Newtown, PA]

    2009-09-29

    Sensor arrays and sensor array systems for detecting analytes in fluids. Sensors configured to generate a response upon introduction of a fluid containing one or more analytes can be located on one or more surfaces relative to one or more fluid channels in an array. Fluid channels can take the form of pores or holes in a substrate material. Fluid channels can be formed between one or more substrate plates. Sensors can be fabricated with substantially optimized sensor volumes to generate a response having a substantially maximized signal-to-noise ratio upon introduction of a fluid containing one or more target analytes. Methods of fabricating and using such sensor arrays and systems are also disclosed.

  14. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE PAGES

    Luo, Na; Hong, Tianzhen; Li, Hui; ...

    2017-07-25

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.
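The charge/discharge scheduling idea can be sketched with scipy's SLSQP (the same algorithm family as SQP) on a toy six-hour horizon; the loads, time-of-use prices, COP, chiller capacity, and tank size below are invented for illustration, and the TES is reduced to a simple energy balance:

```python
import numpy as np
from scipy.optimize import minimize

hours = 6
load = np.array([0., 0., 40., 60., 60., 40.])            # cooling demand (kWh_th)
price = np.array([0.08, 0.08, 0.30, 0.30, 0.30, 0.30])   # $/kWh_e, time-of-use
cop, cap, tank = 4.0, 50.0, 80.0   # chiller COP, hourly limit, TES capacity

def cost(x):            # x = hourly chiller thermal output
    return np.sum(price * x / cop)

cons = [
    # Tank level = cumulative production minus cumulative load; it must
    # stay between empty (0) and full (tank) every hour.
    {"type": "ineq", "fun": lambda x: np.cumsum(x - load)},
    {"type": "ineq", "fun": lambda x: tank - np.cumsum(x - load)},
]
res = minimize(cost, x0=np.full(hours, 40.0), method="SLSQP",
               bounds=[(0.0, cap)] * hours, constraints=cons)
```

The optimizer shifts as much production as the tank allows into the cheap off-peak hours, which is exactly the behavior the study's optimal strategy exploits at building scale.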

  15. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Na; Hong, Tianzhen; Li, Hui

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.

  16. Analytic model for ultrasound energy receivers and their optimal electric loads II: Experimental validation

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-10-01

    In this paper, we verify the two optimal electric load concepts based on the zero reflection condition and on the power maximization approach for ultrasound energy receivers. We test a high loss 1-3 composite transducer, and find that the measurements agree very well with the predictions of the analytic model for plate transducers that we have developed previously. Additionally, we also confirm that the power maximization and zero reflection loads are very different when the losses in the receiver are high. Finally, we compare the optimal load predictions by the KLM and the analytic models with frequency dependent attenuation to evaluate the influence of the viscosity.

  17. Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    Optimal control modification has been developed to improve the robustness of model-reference adaptive control. For systems with linear matched uncertainty, the optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.

  18. An Analytical Solution for Yaw Maneuver Optimization on the International Space Station and Other Orbiting Space Vehicles

    NASA Technical Reports Server (NTRS)

    Dobrinskaya, Tatiana

    2015-01-01

    This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations, as well as for other activities. When maneuver optimization is used, large maneuvers, which were previously performed on thrusters, can be performed either using control moment gyroscopes (CMGs) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads - an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of the critical elements of the vehicle structure, such as solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the obtained optimized case, the torques are significantly reduced. This torque reduction was compared to the existing optimization method, which utilizes a computational solution. It was shown that the attitude profiles and the torque reduction have a good match for these two methods of optimization. The simulations using the ISS flight software showed similar propellant consumption for both methods. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, thus making the maneuver execution automatic. The automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground. It also reduces the probability of command

  19. The analytical approach to optimization of active region structure of quantum dot laser

    NASA Astrophysics Data System (ADS)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.

    2014-10-01

    Using the analytical approach introduced in our previous papers, we analyse the possibilities of optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal dispersion of QD lengths and an optimal number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized by the cavity length is practically equal to its maximum value.

  20. Analytical and experimental study of resonance ignition tubes

    NASA Technical Reports Server (NTRS)

    Stabinsky, L.

    1973-01-01

    The application of the gas-dynamic resonance concept was investigated in relation to ignition of rocket propulsion systems. Analytical studies were conducted to delineate the potential uses of resonance ignition in oxygen/hydrogen bipropellant and hydrazine monopropellant rocket engines. Experimental studies were made to: (1) optimize the resonance igniter configuration, and (2) evaluate the ignition characteristics when operating with low temperature oxygen and hydrogen at the inlet to the igniter.

  1. Optimization of turning process through the analytic flank wear modelling

    NASA Astrophysics Data System (ADS)

    Del Prete, A.; Franchi, R.; De Lorenzis, D.

    2018-05-01

    In the present work, the approach used for the optimization of the process capabilities for Oil&Gas components machining will be described. These components are machined by turning of stainless steel castings workpieces. For this purpose, a proper Design Of Experiments (DOE) plan has been designed and executed: as output of the experimentation, data about tool wear have been collected. The DOE has been designed starting from the cutting speed and feed values recommended by the tool manufacturer; the depth of cut parameter has been kept constant. Wear data have been obtained by observing the tool flank wear under an optical microscope; the data acquisition has been carried out at regular intervals of working time. Through statistical data and regression analysis, analytical models of the flank wear and the tool life have been obtained. The optimization approach used is a multi-objective optimization, which minimizes the production time and the number of cutting tools used, under a constraint on a defined flank wear level. The technique used to solve the optimization problem is Multi-Objective Particle Swarm Optimization (MOPS). The optimization results, validated by the execution of a further experimental campaign, highlighted the reliability of the work and confirmed the usability of the optimized process parameters and the potential benefit for the company.

  2. Optimization of analytical and pre-analytical conditions for MALDI-TOF-MS human urine protein profiles.

    PubMed

    Calvano, C D; Aresta, A; Iacovone, M; De Benedetto, G E; Zambonin, C G; Battaglia, M; Ditonno, P; Rutigliano, M; Bettocchi, C

    2010-03-11

    Protein analysis in biological fluids, such as urine, by means of mass spectrometry (MS) still suffers from insufficient standardization in protocols for sample collection, storage and preparation. In this work, the influence of these variables on protein profiling of human urine from healthy donors performed by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was studied. A screening of various urine sample pre-treatment procedures and different sample deposition approaches on the MALDI target was performed. The influence of urine sample storage time and temperature on spectral profiles was evaluated by means of principal component analysis (PCA). The whole optimized procedure was eventually applied to the MALDI-TOF-MS analysis of human urine samples taken from prostate cancer patients. The best results in terms of the number and abundance of detected ions in the MS spectra were obtained by using home-made microcolumns packed with hydrophilic-lipophilic balance (HLB) resin as the sample pre-treatment method; this procedure was also less expensive and suitable for high-throughput analyses. Afterwards, the spin-coating approach for sample deposition on the MALDI target plate was optimized, obtaining homogeneous and reproducible spots. Then, PCA indicated that low storage temperatures of acidified and centrifuged samples, together with short handling time, allowed reproducible profiles to be obtained without artifact contributions due to experimental conditions. Finally, interesting differences were found by comparing the MALDI-TOF-MS protein profiles of pooled urine samples of healthy donors and prostate cancer patients. The results showed that analytical and pre-analytical variables are crucial for the success of urine analysis, to obtain meaningful and reproducible data, even if the intra-patient variability is very difficult to avoid. It has been proven how pooled urine samples can be an interesting way to make easier the comparison between
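The PCA step used to compare storage conditions can be sketched with synthetic profiles; the peak-intensity matrix, the group sizes, and the injected "storage effect" are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical peak-intensity matrix: rows = urine samples under two
# storage conditions, columns = m/z bins; the second condition shifts
# a subset of peaks, mimicking a systematic storage-related change.
X = rng.normal(0, 0.3, (20, 50))
X[10:, :5] += 2.0

# PCA via SVD of the mean-centered matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                        # sample coordinates on the PCs
explained = S**2 / np.sum(S**2)       # variance fraction per component

# If the storage effect dominates, the two groups separate along PC1.
g1, g2 = scores[:10, 0], scores[10:, 0]
```

A score plot of PC1 vs PC2 is the usual way such storage-condition clusters are inspected visually, as in the study's evaluation of storage time and temperature.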

  3. Analytic model for ultrasound energy receivers and their optimal electric loads

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-08-01

    In this paper, we present an analytic model for thickness resonating plate ultrasound energy receivers, which we have derived from the piezoelectric and the wave equations and, in which we have included dielectric, viscosity and acoustic attenuation losses. Afterwards, we explore the optimal electric load predictions by the zero reflection and power maximization approaches present in the literature with different acoustic boundary conditions, and discuss their limitations. To validate our model, we compared our expressions with the KLM model solved numerically with very good agreement. Finally, we discuss the differences between the zero reflection and power maximization optimal electric loads, which start to differ as losses in the receiver increase.
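The distinction between the two load concepts has a simple lumped-circuit analogy (an assumption for illustration only; the paper works with acoustic boundary conditions and a full plate model, not a Thevenin circuit): conjugate matching maximizes delivered power, while terminating in the source impedance itself suppresses the reflected wave, and the two coincide only when the source impedance is purely real:

```python
def delivered_power(V, Z_src, Z_load):
    """Average power into Z_load from a Thevenin source with phasor
    amplitude V and internal impedance Z_src: P = |I|^2 Re{Z_load} / 2."""
    I = V / (Z_src + Z_load)
    return 0.5 * abs(I) ** 2 * Z_load.real

V = 1.0
Z_src = 50.0 + 30.0j          # hypothetical lossy receiver output impedance

Z_match = Z_src.conjugate()   # power-maximizing load
Z_refl = Z_src                # "zero reflection" load (no reflected wave)

P_match = delivered_power(V, Z_src, Z_match)
P_refl = delivered_power(V, Z_src, Z_refl)
```

The gap between `P_match` and `P_refl` grows with the reactive/lossy part of the source impedance, mirroring the paper's observation that the two optimal loads diverge as receiver losses increase.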

  4. Understanding Business Analytics Success and Impact: A Qualitative Study

    ERIC Educational Resources Information Center

    Parks, Rachida F.; Thambusamy, Ravi

    2017-01-01

    Business analytics is believed to be a huge boon for organizations since it helps offer timely insights over the competition, helps optimize business processes, and helps generate growth and innovation opportunities. As organizations embark on their business analytics initiatives, many strategic questions, such as how to operationalize business…

  5. Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method

    NASA Astrophysics Data System (ADS)

    Fitrianingsih, E.; Armellin, R.

    2018-04-01

    One of the objectives of mission design is selecting an optimum orbital transfer which often translated as a transfer which requires minimum propellant consumption. In order to assure the selected trajectory meets the requirement, the optimality of transfer should first be analyzed either by directly calculating the ΔV of the candidate trajectories and select the one that gives a minimum value or by evaluating the trajectory according to certain criteria of optimality. The second method is performed by analyzing the profile of the modulus of the thrust direction vector which is known as primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify if the result from the direct method is truly optimal or if the ΔV can be reduced further by implementing correction maneuver to the reference trajectory. In addition to its capability to evaluate the transfer optimality without the need to calculate the transfer ΔV, primer vector also enables us to identify the time and position to apply correction maneuver in order to optimize a non-optimum transfer. This paper will present the analytical approach to the fuel optimal impulsive transfer using primer vector method. The validity of the method is confirmed by comparing the result to those from the numerical method. The investigation of the optimality of direct transfer is used to give an example of the application of the method. The case under study is the prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.

  6. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating that no single optimal parameterization exists. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure / metric based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual
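The "numerous calibration alternatives" that Stage 1 produces form a Pareto set; a minimal non-domination filter over hypothetical two-objective calibration scores can be sketched as:

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F, where every column is an
    objective to minimize (e.g. calibration errors on different criteria).
    Row j dominates row i if it is no worse everywhere and better somewhere."""
    keep = []
    for i in range(F.shape[0]):
        dominated = np.any(np.all(F <= F[i], axis=1) &
                           np.any(F < F[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical calibration alternatives scored on two error metrics.
F = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.2], [0.6, 0.6], [0.3, 0.8]])
front = pareto_front(F)
```

Stages 2 and 3 of HAMS then operate on exactly this kind of set, pruning it to a diverse handful and finally to a single expert-selected calibration.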

  7. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
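The report's central idea, optimizing the exponential time constants instead of fixing them in advance, can be sketched with scipy's nonlinear least squares on synthetic data; the two-term series and all constants below are illustrative, not the TP-H1148 or Viton models, and this uses scipy rather than the VMA tool named in the abstract:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def prony(t, params):
    """Two-term Prony series: g_inf + g1*exp(-t/tau1) + g2*exp(-t/tau2)."""
    g_inf, g1, tau1, g2, tau2 = params
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

# Synthetic "relaxation modulus" data from known constants plus noise.
t = np.logspace(-2, 2, 60)
true = [1.0, 2.0, 0.1, 1.5, 10.0]
y = prony(t, np.array(true)) + rng.normal(0, 0.005, t.size)

# Optimize ALL constants at once -- including the exponential time
# constants, which classical linear least squares must fix in advance.
res = least_squares(lambda p: prony(t, p) - y,
                    x0=[0.5, 1.0, 0.05, 1.0, 5.0],
                    bounds=([0, 0, 1e-4, 0, 1e-4], [10, 10, 1e3, 10, 1e3]))
```

Freeing the time constants is what lets the fit place the relaxation mechanisms where the data actually puts them, which is the source of the order-of-magnitude correlation improvement the report cites.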

  8. Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.

  9. Analytic study of orbiter landing profiles

    NASA Technical Reports Server (NTRS)

    Walker, H. J.

    1981-01-01

    A broad survey of possible orbiter landing configurations was made with specific goals of defining boundaries for the landing task. The results suggest that the center of the corridor between marginal and routine represents a more or less optimal preflare condition for regular operations. Various constraints used to define the boundaries are based largely on qualitative judgements from earlier flight experience with the X-15 and lifting body research aircraft. The results should serve as useful background for expanding and validating landing simulation programs. The analytic approach offers a particular advantage in identifying trends due to the systematic variation of factors such as vehicle weight, load factor, approach speed, and aim point. Limitations, such as a constant load factor during the flare and a fixed gear deployment time interval, can be removed by increasing the flexibility of the computer program. This analytic definition of landing profiles of the orbiter may suggest additional studies, including more configurations or more comparisons of landing profiles within and beyond the corridor boundaries.

  10. Aeroelastic Optimization Study Based on the X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley W.; Pak, Chan-Gi

    2014-01-01

    One way to increase the aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise with ply stacking sequence. A hybrid and discretization optimization approach improves accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study for the fabricated flexible wing of the X-56A model since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.

  11. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables by applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
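    As a concrete illustration of the swarm technique named above, a minimal PSO loop is sketched below. The quadratic objective is a cheap stand-in for the expensive model-vs-observables likelihood of SAG, and the swarm size, inertia, and acceleration coefficients are illustrative assumptions, not the authors' settings.

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization: minimize f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp each coordinate back into the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the model-vs-observables misfit (the real SAG likelihood is costly)
best, val = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [(-5, 5), (-5, 5)])
```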

  12. Variable-Field Analytical Ultracentrifugation: I. Time-Optimized Sedimentation Equilibrium

    PubMed Central

    Ma, Jia; Metrick, Michael; Ghirlando, Rodolfo; Zhao, Huaying; Schuck, Peter

    2015-01-01

    Sedimentation equilibrium (SE) analytical ultracentrifugation (AUC) is a gold standard for the rigorous determination of macromolecular buoyant molar masses and the thermodynamic study of reversible interactions in solution. A significant experimental drawback is the long time required to attain SE, which is usually on the order of days. We have developed a method for time-optimized SE (toSE) with defined time-varying centrifugal fields that allow SE to be attained in a significantly (up to 10-fold) shorter time than is usually required. To achieve this, numerical Lamm equation solutions for sedimentation in time-varying fields are computed based on initial estimates of macromolecular transport properties. A parameterized rotor-speed schedule is optimized with the goal of achieving a minimal time to equilibrium while limiting transient sample preconcentration at the base of the solution column. The resulting rotor-speed schedule may include multiple over- and underspeeding phases, balancing the formation of gradients from strong sedimentation fluxes with periods of high diffusional transport. The computation is carried out in a new software program called TOSE, which also facilitates convenient experimental implementation. Further, we extend AUC data analysis to sedimentation processes in such time-varying centrifugal fields. Due to the initially high centrifugal fields in toSE and the resulting strong migration, it is possible to extract sedimentation coefficient distributions from the early data. This can provide better estimates of the size of macromolecular complexes and report on sample homogeneity early on, which may be used to further refine the prediction of the rotor-speed schedule. In this manner, the toSE experiment can be adapted in real time to the system under study, maximizing both the information content and the time efficiency of SE experiments. PMID:26287634

  13. Analytical Model and Optimized Design of Power Transmitting Coil for Inductively Coupled Endoscope Robot.

    PubMed

    Ke, Quan; Luo, Weijie; Yan, Guozheng; Yang, Kai

    2016-04-01

    A wireless power transfer system based on weakly inductive coupling makes it possible to provide the endoscope microrobot (EMR) with essentially unlimited power. To facilitate patient inspection with the EMR system, the diameter of the transmitting coil is enlarged to 69 cm. Because of the large transmitting range, a high quality factor of the Litz-wire transmitting coil is necessary to generate a magnetic field of sufficient intensity efficiently. Thus, this paper builds an analytical model of the transmitting coil and then optimizes the parameters of the coil by maximizing the quality factor. The lumped model of the transmitting coil includes three parameters: ac resistance, self-inductance, and stray capacitance. Based on the exact two-dimensional solution, an accurate analytical expression for the ac resistance is derived. Several transmitting coils of different specifications are used to verify this analytical expression, which shows good agreement with the measured results except for coils with a large number of strands. The quality factor of transmitting coils can then be well predicted with the available analytical expressions for self-inductance and stray capacitance. Owing to the exact estimation of the quality factor, the appropriate number of turns of the transmitting coil is set to 18-40 within the restrictions imposed by the transmitting circuit and human tissue. To supply enough energy for the next generation of the EMR, equipped with a Ø9.5×10.1 mm receiving coil, the number of turns of the transmitting coil is optimally set to 28, which can transfer a maximum power of 750 mW with a delivering efficiency of 3.55%.
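    For reference, the lumped quality factor used to rank candidate coils is Q = ωL/R. The sketch below uses illustrative frequency, inductance, and ac-resistance values (not the paper's measured parameters) and neglects stray capacitance.

```python
import math

def quality_factor(freq_hz, inductance_h, ac_resistance_ohm):
    """Q = omega * L / R for a lumped coil model (stray capacitance neglected)."""
    return 2 * math.pi * freq_hz * inductance_h / ac_resistance_ohm

# Illustrative values only: 218 kHz drive, 1.2 mH self-inductance, 4 ohm ac resistance
q = quality_factor(218e3, 1.2e-3, 4.0)
```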

  14. Statistical and optimal learning with applications in business analytics

    NASA Astrophysics Data System (ADS)

    Han, Bin

    Statistical learning is widely used in business analytics to discover structure or exploit patterns from historical data, and to build models that capture relationships between an outcome of interest and a set of variables. Optimal learning, on the other hand, solves the operational side of the problem by iterating between decision making and data acquisition/learning. All too often the two problems go hand-in-hand, exhibiting a feedback loop between statistics and optimization. We apply this statistical/optimal learning concept to a fundraising marketing campaign problem arising in many non-profit organizations. Many such organizations use direct-mail marketing to cultivate one-time donors and convert them into recurring contributors. Cultivated donors generate much more revenue than new donors, but also lapse with time, making it important to steadily draw in new cultivations. The direct-mail budget is limited, but better-designed mailings can improve success rates without increasing costs. We first apply statistical learning to analyze the effectiveness of several design approaches used in practice, based on a massive dataset covering 8.6 million direct-mail communications with donors to the American Red Cross during 2009-2011. We find evidence that mailed appeals are more effective when they emphasize disaster preparedness and training efforts over post-disaster cleanup. Including small cards that affirm donors' identity as Red Cross supporters is an effective strategy, while including gift items such as address labels is not. Finally, very recent acquisitions are more likely to respond to appeals that ask them to contribute an amount similar to their most recent donation, but this approach has an adverse effect on donors with a longer history. We show via simulation that a simple design strategy based on these insights has the potential to improve success rates from 5.4% to 8.1%. Given these findings, however, when a new scenario arises, new data need to

  15. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  16. An analytic approach to optimize tidal turbine fields

    NASA Astrophysics Data System (ADS)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to generate electrical energy. Since the available power of a hydrokinetic turbine is proportional to its projected cross-sectional area, fields of turbines are installed to scale up shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operating point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation, and the momentum balance, an analytical approach is developed to calculate the coefficient of performance of hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
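    The bypass/open-channel analysis generalizes the classic unbounded actuator-disk result, which is easy to reproduce numerically: the power coefficient Cp(a) = 4a(1-a)^2 peaks at the Betz limit 16/27. The sketch below covers only this limiting case, not the paper's channel-blockage model.

```python
def power_coefficient(a):
    """Classic actuator-disk power coefficient for axial induction factor a."""
    return 4 * a * (1 - a) ** 2

# Scan induction factors on (0, 0.5); the unbounded-flow optimum is the Betz limit
a_opt = max((a / 1000 for a in range(1, 500)), key=power_coefficient)
cp_max = power_coefficient(a_opt)
```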

  17. A Numerical-Analytical Approach Based on Canonical Transformations for Computing Optimal Low-Thrust Transfers

    NASA Astrophysics Data System (ADS)

    da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.

    2018-04-01

    A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.
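    The Newton-Raphson iteration at the heart of the two-point boundary-value solver can be illustrated in its scalar form. The toy residual below is a hypothetical stand-in for the transfer's shooting residual, not the paper's equations.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Scalar Newton-Raphson iteration: solve f(x) = 0 given derivative df."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Toy shooting-style residual: find the positive root of x**2 - 2 = 0
root = newton(lambda x: x * x - 2.0, lambda x: 2 * x, 1.0)
```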

  18. Analytical and experimental studies of an optimum multisegment phased liner noise suppression concept

    NASA Technical Reports Server (NTRS)

    Sawdy, D. T.; Beckemeyer, R. J.; Patterson, J. D.

    1976-01-01

    Results are presented from detailed analytical studies made to define methods for obtaining improved multisegment lining performance by taking advantage of relative placement of each lining segment. Properly phased liner segments reflect and spatially redistribute the incident acoustic energy and thus provide additional attenuation. A mathematical model was developed for rectangular ducts with uniform mean flow. Segmented acoustic fields were represented by duct eigenfunction expansions, and mode-matching was used to ensure continuity of the total field. Parametric studies were performed to identify attenuation mechanisms and define preliminary liner configurations. An optimization procedure was used to determine optimum liner impedance values for a given total lining length, Mach number, and incident modal distribution. Optimal segmented liners are presented and it is shown that, provided the sound source is well-defined and flow environment is known, conventional infinite duct optimum attenuation rates can be improved. To confirm these results, an experimental program was conducted in a laboratory test facility. The measured data are presented in the form of analytical-experimental correlations. Excellent agreement between theory and experiment verifies and substantiates the analytical prediction techniques. The results indicate that phased liners may be of immediate benefit in the development of improved aircraft exhaust duct noise suppressors.

  19. Analytical insights into optimality and resonance in fish swimming

    PubMed Central

    Kohannim, Saba; Iwasaki, Tetsuya

    2014-01-01

    This paper provides analytical insights into the hypothesis that fish exploit resonance to reduce the mechanical cost of swimming. A simple body–fluid fish model, representing carangiform locomotion, is developed. Steady swimming at various speeds is analysed using optimal gait theory by minimizing bending moment over tail movements and stiffness, and the results are shown to match with data from observed swimming. Our analysis indicates the following: thrust–drag balance leads to the Strouhal number being predetermined based on the drag coefficient and the ratio of wetted body area to cross-sectional area of accelerated fluid. Muscle tension is reduced when undulation frequency matches resonance frequency, which maximizes the ratio of tail-tip velocity to bending moment. Finally, hydrodynamic resonance determines tail-beat frequency, whereas muscle stiffness is actively adjusted, so that overall body–fluid resonance is exploited. PMID:24430125

  20. Integration of fuzzy analytic hierarchy process and probabilistic dynamic programming in formulating an optimal fleet management model

    NASA Astrophysics Data System (ADS)

    Teoh, Lay Eng; Khoo, Hooi Ling

    2013-09-01

    This study deals with two major aspects of airline operations, i.e. supply and demand management. The supply aspect focuses on the mathematical formulation of an optimal fleet management model to maximize the operational profit of the airline, while the demand aspect focuses on the incorporation of mode choice modeling as part of the developed model. The proposed methodology is outlined in two stages: the Fuzzy Analytic Hierarchy Process is first adopted to capture mode choice modeling in order to quantify the probability of probable phenomena (for the aircraft acquisition/leasing decision). Then, an optimization model is developed as a probabilistic dynamic programming model to determine the optimal number and types of aircraft to be acquired and/or leased in order to meet stochastic demand during the planning horizon. The findings of an illustrative case study show that the proposed methodology is viable. The results demonstrate that the incorporation of mode choice modeling could affect the operational profit and fleet management decisions of the airlines to varying degrees.

  1. Analytical Dimensional Reduction of a Fuel Optimal Powered Descent Subproblem

    NASA Technical Reports Server (NTRS)

    Rea, Jeremy R.; Bishop, Robert H.

    2010-01-01

    Current renewed interest in exploration of the moon, Mars, and other planetary objects is driving technology development in many fields of space system design. In particular, there is a desire to land both robotic and human missions on the moon and elsewhere. The landing guidance system must be able to deliver the vehicle to a desired soft landing while meeting several constraints necessary for the safety of the vehicle. Due to performance limitations of current launch vehicles, it is desired to minimize the amount of fuel used. In addition, the landing site may change in real time in order to avoid previously undetected hazards which become apparent during the landing maneuver. This complicated maneuver can be broken into simpler subproblems that bound the full problem. One such subproblem is to find a minimum-fuel landing solution that meets constraints on the initial state, final state, and bounded thrust acceleration magnitude. With the assumptions of constant gravity and negligible atmosphere, the form of the optimal steering law is known, and the equations of motion can be integrated analytically, resulting in a system of five equations in five unknowns. It is shown that this system of equations can be reduced analytically to two equations in two unknowns. With an additional assumption of constant thrust acceleration magnitude, this system can be reduced further to one equation in one unknown. It is shown that these unknowns can be bounded analytically. An algorithm is developed to quickly and reliably solve the resulting one-dimensional bounded search, and it is used as a real-time guidance algorithm applied to a lunar landing test case.
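    A one-dimensional bounded search of the kind described above can be solved robustly with a bracketing method. Below is a bisection sketch; the cubic residual is a hypothetical stand-in for the guidance equation's single remaining unknown (e.g., a burn time).

```python
def bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """Bracketing bisection: assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(hi - lo) < tol:
            return mid
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid   # root lies in the upper half
        else:
            hi = mid              # root lies in the lower half
    return 0.5 * (lo + hi)

# Hypothetical residual in the single remaining unknown, bracketed analytically
t = bisect(lambda x: x ** 3 - x - 2.0, 1.0, 2.0)
```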

  2. The Framework of Intervention Engine Based on Learning Analytics

    ERIC Educational Resources Information Center

    Sahin, Muhittin; Yurdugül, Halil

    2017-01-01

    Learning analytics primarily deals with the optimization of learning environments, and its ultimate goal is to improve learning and teaching efficiency. Studies on learning analytics have largely taken the form of adaptation engines and intervention engines. Adaptation engine studies are quite widespread, but intervention…

  3. Dynamic optimization case studies in DYNOPT tool

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    Dynamic programming is typically applied to optimization problems. As analytical solutions are generally very difficult, dedicated software tools are widely used. These software packages are often third-party products bound to standard simulation software tools on the market. As typical examples of such tools, TOMLAB and DYNOPT can be effectively applied to the solution of dynamic programming problems. DYNOPT is presented in this paper due to its licensing policy (free product under the GPL) and simplicity of use. DYNOPT is a set of MATLAB functions for the determination of an optimal control trajectory given a description of the process, the cost to be minimized, and equality and inequality constraints, using orthogonal collocation on finite elements. The optimal control problem is solved by complete parameterization of both the control and state profile vectors. It is assumed that the optimized dynamic model may be described by a set of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs). This collection of functions extends the capability of the MATLAB Optimization Toolbox. The paper introduces the use of DYNOPT in the field of dynamic optimization problems by means of case studies on selected laboratory physical educational models.

  4. Empirically Optimized Flow Cytometric Immunoassay Validates Ambient Analyte Theory

    PubMed Central

    Parpia, Zaheer A.; Kelso, David M.

    2010-01-01

    Ekins’ ambient analyte theory predicts, counterintuitively, that an immunoassay’s limit of detection can be improved by reducing the amount of capture antibody. In addition, it anticipates that results should be insensitive to the volume of sample as well as to the amount of capture antibody added. The objective of this study is to empirically validate all of the performance characteristics predicted by Ekins’ theory. Flow cytometric analysis was used to detect binding between a fluorescent ligand and capture microparticles, since it can directly measure fractional occupancy, the primary response variable in ambient analyte theory. After experimentally determining ambient analyte conditions, comparisons were carried out between ambient and non-ambient assays in terms of their signal strengths, limits of detection, and sensitivity to variations in reaction volume and number of particles. The critical number of binding sites required for an assay to be in the ambient analyte region was estimated to be 0.1·V·Kd. As predicted, such assays exhibited superior signal-to-noise levels and limits of detection, and were not affected by variations in sample volume and number of binding sites. When the signal detected measures fractional occupancy, ambient analyte theory is an excellent guide to developing assays with superior performance characteristics. PMID:20152793
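    The ambient-analyte prediction can be checked with a single-site equilibrium binding model: when the concentration of binding sites is far below Kd, fractional occupancy depends essentially only on the analyte concentration, not on the amount of capture reagent. The concentrations below are hypothetical, chosen only to illustrate the regime.

```python
import math

def fractional_occupancy(sites_conc, ligand_conc, kd):
    """Exact equilibrium fractional occupancy from mass conservation (quadratic)."""
    s = sites_conc + ligand_conc + kd
    complex_conc = 0.5 * (s - math.sqrt(s * s - 4 * sites_conc * ligand_conc))
    return complex_conc / sites_conc

kd = 1e-9  # illustrative Kd of 1 nM
# Ambient-analyte regime: sites well below Kd -> occupancy barely depends on sites
f1 = fractional_occupancy(1e-12, 1e-10, kd)   # 1 pM of sites
f2 = fractional_occupancy(1e-11, 1e-10, kd)   # 10x more sites, similar occupancy
```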

  5. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    PubMed

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimization for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely-coupled sub-problems, each may be modularly formulated by differing departments and be solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.

  6. CCS Site Optimization by Applying a Multi-objective Evolutionary Algorithm to Semi-Analytical Leakage Models

    NASA Astrophysics Data System (ADS)

    Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.

    2011-12-01

    Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass

  7. An analytical optimization model for infrared image enhancement via local context

    NASA Astrophysics Data System (ADS)

    Xu, Yongjian; Liang, Kun; Xiong, Yiru; Wang, Hui

    2017-12-01

    The requirement for high-quality infrared images, with little distortion and appropriate contrast, is constantly increasing in both military and civilian areas, while infrared images commonly suffer from shortcomings such as low contrast. In this paper, we propose a novel infrared image histogram enhancement algorithm based on local context. By constraining the enhanced image to have high local contrast, a regularized analytical optimization model is proposed to enhance infrared images. The local contrast is determined by evaluating whether two intensities are neighbors and calculating their differences. Comparisons on 8-bit images show that the proposed method can enhance infrared images with more detail and lower noise.

  8. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles

    PubMed Central

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging-station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.
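    The trade-off behind an optimal-density model of this kind can be sketched with a stylized cost function: installation cost grows with station density while drivers' access cost shrinks with it. The coefficients below are hypothetical, and the closed form applies to this toy cost only, not to the ERDEC model itself.

```python
import math

# Stylized total cost (hypothetical coefficients, not the ERDEC parameters):
# cost(d) = a*d (installing stations) + b/d (drivers' access/detour cost)
a, b = 5000.0, 2.0e7

def total_cost(d):
    return a * d + b / d

d_opt = math.sqrt(b / a)  # closed-form minimizer of a*d + b/d
```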

  10. Determination of proline in honey: comparison between official methods, optimization and validation of the analytical methodology.

    PubMed

    Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe

    2014-05-01

    The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are pointless. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance reading is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.
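    A calibration-based detection limit like the one reported can be computed from a least-squares fit of absorbance against concentration, here using the common 3.3·σ/slope convention. The data below are synthetic, not the paper's measurements.

```python
# Least-squares calibration line and a 3.3*sigma/slope detection limit.
# Synthetic standards spanning the reported linear range (mg/L vs. absorbance):
conc = [0.0, 200.0, 400.0, 800.0, 1200.0, 1800.0]
absorbance = [0.002, 0.101, 0.199, 0.402, 0.598, 0.901]

n = len(conc)
mx = sum(conc) / n
my = sum(absorbance) / n
sxx = sum((x - mx) ** 2 for x in conc)
slope = sum((x - mx) * (y - my) for x, y in zip(conc, absorbance)) / sxx
intercept = my - slope * mx
# Residual standard deviation of the fit (n - 2 degrees of freedom)
resid = [y - (slope * x + intercept) for x, y in zip(conc, absorbance)]
sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5
lod = 3.3 * sigma / slope  # detection limit in mg/L
```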

  11. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate prediction of various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy, or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
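    A compact Nelder-Mead implementation illustrates the derivative-free simplex search of the kind used to optimize the core parameter U. The quadratic objective is a stand-in for the fiber-specific merit function, and this sketch uses a simplified (inside-contraction-only) variant of the algorithm.

```python
def nelder_mead(f, x0, step=0.1, tol=1e-10, max_iter=500):
    """Minimal derivative-free Nelder-Mead simplex minimization."""
    n = len(x0)
    # Initial simplex: x0 plus one perturbed vertex per coordinate
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    fvals = [f(p) for p in simplex]
    for _ in range(max_iter):
        order = sorted(range(n + 1), key=lambda i: fvals[i])
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if abs(fvals[-1] - fvals[0]) < tol:
            break
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        reflect = [2 * centroid[j] - simplex[-1][j] for j in range(n)]
        fr = f(reflect)
        if fr < fvals[0]:
            # Reflection is the new best: try expanding further
            expand = [3 * centroid[j] - 2 * simplex[-1][j] for j in range(n)]
            fe = f(expand)
            simplex[-1], fvals[-1] = (expand, fe) if fe < fr else (reflect, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = reflect, fr
        else:
            contract = [0.5 * (centroid[j] + simplex[-1][j]) for j in range(n)]
            fc = f(contract)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = contract, fc
            else:  # shrink the whole simplex toward the best vertex
                for i in range(1, n + 1):
                    simplex[i] = [0.5 * (simplex[0][j] + simplex[i][j]) for j in range(n)]
                    fvals[i] = f(simplex[i])
    return simplex[0], fvals[0]

# Toy stand-in for the modal-field objective (the real U-optimization is fiber-specific)
x, fx = nelder_mead(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2, [0.0, 0.0])
```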

  12. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    NASA Astrophysics Data System (ADS)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach iterates between the two models until the difference between subsequent solutions satisfies a pre-determined termination criterion. Its effectiveness is illustrated by an example, which yields results closer to the optimum, with much shorter solving times, than the conventional simulation-based optimization model. The efficacy of the proposed hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also makes it possible to model and solve more realistic problems that incorporate dynamism and uncertainty.
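    The iterate-until-convergence structure of such hybrid schemes can be sketched as follows; the "analytical" and "simulation" steps below are deliberately toy stand-ins (a closed-form capacity rule and a Monte Carlo demand estimate), not the paper's supply chain models:

```python
import random

random.seed(42)

def analytical_step(d_eff):
    # toy closed-form optimum: size capacity at 120% of effective demand
    return 1.2 * d_eff

def simulation_step(x, n=5000):
    # toy discrete-event stand-in: stochastic demand, unmet demand is lost
    served = [min(x, random.gauss(100, 15)) for _ in range(n)]
    return sum(served) / n

d_eff, x = 100.0, 0.0
for it in range(50):
    x_new = analytical_step(d_eff)      # analytic model proposes a design
    if abs(x_new - x) < 0.5:            # pre-determined termination criterion
        break
    x = x_new
    d_eff = simulation_step(x)          # simulation re-estimates the parameters
print(f"final capacity ~ {x:.1f} after {it + 1} analytic/simulation passes")
```

The analytic step stays cheap because the hard-to-model stochastics are pushed into the simulation step, which only refines the analytic model's input parameters.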

  13. Optimality study of a gust alleviation system for light wing-loading STOL aircraft

    NASA Technical Reports Server (NTRS)

    Komoda, M.

    1976-01-01

    An analytical study was made of an optimal gust alleviation system that employs a vertical gust sensor mounted forward of an aircraft's center of gravity. Frequency domain optimization techniques were employed to synthesize the optimal filters that process the corrective signals to the flap and elevator actuators. Special attention was given to evaluating the effectiveness of lead time, that is, the time by which relative wind sensor information should lead the actual encounter of the gust. The resulting filter is expressed as an implicit function of the prescribed control cost. A numerical example for a light wing-loading STOL aircraft is included in which the optimal trade-off between performance and control cost is systematically studied.

  14. Repeated applications of a transdermal patch: analytical solution and optimal control of the delivery rate.

    PubMed

    Simon, L

    2007-10-01

    The integral transform technique was implemented to solve a mathematical model developed for percutaneous drug absorption. The model included repeated application and removal of a patch from the skin. Fick's second law of diffusion was used to study the transport of a medicinal agent through the vehicle and subsequent penetration into the stratum corneum. Eigenmodes and eigenvalues were computed and introduced into an inversion formula to estimate the delivery rate and the amount of drug in the vehicle and the skin. A dynamic programming algorithm calculated the optimal doses necessary to achieve a desired transdermal flux. The analytical method predicted profiles that were in close agreement with published numerical solutions and provided an automated strategy to perform therapeutic drug monitoring and control.
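    The flavor of such eigenfunction-expansion solutions can be illustrated with the textbook series for fractional drug release from a planar vehicle into a perfect sink (sealed backing, releasing front face). This is a generic diffusion result with illustrative parameters, not the paper's repeated-application patch model:

```python
import math

def fraction_released(D, L, t, terms=50):
    """M(t)/M_inf for diffusion out of a planar vehicle of thickness L
    (sealed backing, perfect-sink release face), diffusivity D."""
    s = 0.0
    for n in range(terms):
        lam = (2 * n + 1) * math.pi / 2.0   # eigenvalues of the slab problem
        s += (2.0 / lam**2) * math.exp(-(lam / L) ** 2 * D * t)
    return 1.0 - s

# illustrative vehicle: D = 1e-10 m^2/s, L = 100 um; times in seconds
for t in (0.0, 30.0, 120.0, 600.0):
    print(t, round(fraction_released(1e-10, 1e-4, t), 3))
```

Each term decays with its eigenvalue squared, so a handful of modes suffices at late times; the paper's inversion formula has the same structure with patch-specific eigenmodes.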

  15. Fabrication of paper-based analytical devices optimized by central composite design.

    PubMed

    Hamedpour, Vahid; Leardi, Riccardo; Suzuki, Koji; Citterio, Daniel

    2018-04-30

    In this work, an application of a design of experiments approach for the optimization of an isoniazid assay on a single-area inkjet-printed paper-based analytical device (PAD) is described. For this purpose, a central composite design was used for evaluation of the effect of device geometry and amount of assay reagents on the efficiency of the proposed device. The factors of interest were printed length, width, and sampling volume as factors related to device geometry, and amounts of the assay reagents polyvinyl alcohol (PVA), NH4OH, and AgNO3. Deposition of the assay reagents was performed by a thermal inkjet printer. The colorimetric assay mechanism of this device is based on the chemical interaction of isoniazid, ammonium hydroxide, and PVA with silver ions to induce the formation of yellow silver nanoparticles (AgNPs). The in situ-formed AgNPs can be easily detected by the naked eye or with a simple flat-bed scanner. Under optimal conditions, the calibration curve was linear in the isoniazid concentration range 0.03-10 mmol L-1 with a relative standard deviation of 3.4% (n = 5 for determination of 1.0 mmol L-1). Finally, the application of the proposed device for isoniazid determination in pharmaceutical preparations produced satisfactory results.
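    For reference, a central composite design combines two-level factorial corners, axial "star" points, and replicated centre runs. A small generator in coded units (the paper's actual six-factor ranges and response model are not reproduced here):

```python
from itertools import product

def central_composite(k, alpha=None, n_center=4):
    """Central composite design for k factors in coded units: factorial
    corners (+/-1), axial points (+/-alpha on each axis), and replicated
    centre runs. alpha defaults to the rotatable value (2**k)**0.25."""
    if alpha is None:
        alpha = (2.0 ** k) ** 0.25
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center

design = central_composite(2)   # 4 corners + 4 star points + 4 centre runs
for run in design:
    print(run)
```

The star points let a quadratic response surface be fitted, and the centre replicates estimate pure error; for the paper's six factors the run count is 2^6 + 12 + centre points.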

  16. Analytic Shielding Optimization to Reduce Crew Exposure to Ionizing Radiation Inside Space Vehicles

    NASA Technical Reports Server (NTRS)

    Gaza, Razvan; Cooper, Tim P.; Hanzo, Arthur; Hussein, Hesham; Jarvis, Kandy S.; Kimble, Ryan; Lee, Kerry T.; Patel, Chirag; Reddell, Brandon D.; Stoffle, Nicholas; et al.

    2009-01-01

    A sustainable lunar architecture provides capabilities for leveraging out-of-service components for alternate uses. Discarded architecture elements may be used to provide ionizing radiation shielding to the crew habitat in case of a Solar Particle Event. The specific location where the additional shielding mass is placed relative to the vehicle, together with particularities of the vehicle design, has a large influence on the protection gain. This effect is caused by the exponential-like decrease of radiation exposure with shielding mass thickness, which in turn determines that the most benefit from a given amount of shielding mass is obtained by placing it so that it preferentially augments protection in under-shielded areas of the vehicle exposed to the radiation environment. A novel analytic technique to derive an optimal shielding configuration was developed by Lockheed Martin during Design Analysis Cycle 3 (DAC-3) of the Orion Crew Exploration Vehicle (CEV) [1]. Based on a detailed Computer Aided Design (CAD) model of the vehicle including a specific crew positioning scenario, a set of under-shielded vehicle regions can be identified as candidates for placement of additional shielding. Analytic tools are available to allow capturing an idealized supplemental shielding distribution in the CAD environment, which in turn is used as a reference for deriving a realistic shielding configuration from available vehicle components. While the analysis referenced in this communication applies particularly to the Orion vehicle, the general method can be applied to a large range of space exploration vehicles, including but not limited to lunar and Mars architecture components. In addition, the method can be immediately applied for optimization of radiation shielding provided to sensitive electronic components.
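    The underlying logic - exponential-like dose attenuation means each mass increment buys the most protection on the thinnest sector - can be sketched with a toy greedy allocation (hypothetical attenuation coefficient and sector thicknesses; not the Lockheed Martin DAC-3 tooling):

```python
import math

# Toy model: dose through sector i falls as D0 * exp(-mu * t_i), so the
# marginal benefit of one mass increment is largest on the thinnest sector.
D0, mu, dm = 100.0, 0.05, 1.0          # hypothetical units; t in g/cm^2
thickness = [2.0, 10.0, 25.0, 40.0]    # initial shielding per vehicle sector

def total_dose(t):
    return sum(D0 * math.exp(-mu * ti) for ti in t)

budget = 30                            # mass increments available
for _ in range(budget):
    i = min(range(len(thickness)), key=lambda j: thickness[j])
    thickness[i] += dm                 # greedily shield the weakest sector

print(thickness, round(total_dose(thickness), 2))
```

The greedy rule levels up the under-shielded sectors first (here the two thinnest sectors end up equalized), mirroring the idea of preferentially augmenting protection where the vehicle is weakest.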

  17. What if Learning Analytics Were Based on Learning Science?

    ERIC Educational Resources Information Center

    Marzouk, Zahia; Rakovic, Mladen; Liaqat, Amna; Vytasek, Jovita; Samadi, Donya; Stewart-Alonso, Jason; Ram, Ilana; Woloshen, Sonya; Winne, Philip H.; Nesbit, John C.

    2016-01-01

    Learning analytics are often formatted as visualisations developed from traced data collected as students study in online learning environments. Optimal analytics inform and motivate students' decisions about adaptations that improve their learning. We observe that designs for learning often neglect theories and empirical findings in learning…

  18. Analytic energy gradients for the orbital-optimized third-order Møller-Plesset perturbation theory

    NASA Astrophysics Data System (ADS)

    Bozkaya, Uǧur

    2013-09-01

    Analytic energy gradients for the orbital-optimized third-order Møller-Plesset perturbation theory (OMP3) [U. Bozkaya, J. Chem. Phys. 135, 224103 (2011)], 10.1063/1.3665134 are presented. The OMP3 method is applied to problematic chemical systems with challenging electronic structures. The performance of the OMP3 method is compared with those of canonical second-order Møller-Plesset perturbation theory (MP2), third-order Møller-Plesset perturbation theory (MP3), coupled-cluster singles and doubles (CCSD), and coupled-cluster singles and doubles with perturbative triples [CCSD(T)] for investigating equilibrium geometries, vibrational frequencies, and open-shell reaction energies. For bond lengths, the performance of OMP3 is in between those of MP3 and CCSD. For harmonic vibrational frequencies, the OMP3 method significantly eliminates the singularities arising from the abnormal response contributions observed for MP3 in the case of symmetry-breaking problems, and provides noticeably improved vibrational frequencies for open-shell molecules. For open-shell reaction energies, OMP3 exhibits a better performance than MP3 and CCSD, as in the case of barrier heights and radical stabilization energies. As discussed in previous studies, the OMP3 method is several times faster than CCSD in energy computations. Further, in analytic gradient computations for the CCSD method one needs to solve λ-amplitude equations, however for OMP3 one does not since λ_{ab}^{ij(1)} = t_{ij}^{ab(1)} and λ_{ab}^{ij(2)} = t_{ij}^{ab(2)}. Additionally, one needs to solve orbital Z-vector equations for CCSD, but for OMP3 orbital response contributions are zero owing to the stationary property of OMP3. Overall, for analytic gradient computations the OMP3 method is several times less expensive than CCSD (roughly 4-6 times). Considering the balance of computational cost and accuracy we conclude that the OMP3 method emerges as a very useful tool for the study of electronically challenging chemical systems.

  19. Analytic energy gradients for the orbital-optimized third-order Møller-Plesset perturbation theory.

    PubMed

    Bozkaya, Uğur

    2013-09-14

    Analytic energy gradients for the orbital-optimized third-order Møller-Plesset perturbation theory (OMP3) [U. Bozkaya, J. Chem. Phys. 135, 224103 (2011)] are presented. The OMP3 method is applied to problematic chemical systems with challenging electronic structures. The performance of the OMP3 method is compared with those of canonical second-order Møller-Plesset perturbation theory (MP2), third-order Møller-Plesset perturbation theory (MP3), coupled-cluster singles and doubles (CCSD), and coupled-cluster singles and doubles with perturbative triples [CCSD(T)] for investigating equilibrium geometries, vibrational frequencies, and open-shell reaction energies. For bond lengths, the performance of OMP3 is in between those of MP3 and CCSD. For harmonic vibrational frequencies, the OMP3 method significantly eliminates the singularities arising from the abnormal response contributions observed for MP3 in the case of symmetry-breaking problems, and provides noticeably improved vibrational frequencies for open-shell molecules. For open-shell reaction energies, OMP3 exhibits a better performance than MP3 and CCSD, as in the case of barrier heights and radical stabilization energies. As discussed in previous studies, the OMP3 method is several times faster than CCSD in energy computations. Further, in analytic gradient computations for the CCSD method one needs to solve λ-amplitude equations, however for OMP3 one does not since λ_{ab}^{ij(1)} = t_{ij}^{ab(1)} and λ_{ab}^{ij(2)} = t_{ij}^{ab(2)}. Additionally, one needs to solve orbital Z-vector equations for CCSD, but for OMP3 orbital response contributions are zero owing to the stationary property of OMP3. Overall, for analytic gradient computations the OMP3 method is several times less expensive than CCSD (roughly 4-6 times). Considering the balance of computational cost and accuracy we conclude that the OMP3 method emerges as a very useful tool for the study of electronically challenging chemical systems.

  20. Optimizing an Immersion ESL Curriculum Using Analytic Hierarchy Process

    ERIC Educational Resources Information Center

    Tang, Hui-Wen Vivian

    2011-01-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative…

  1. Communication: Analytical optimal pulse shapes obtained with the aid of genetic algorithms: Controlling the photoisomerization yield of retinal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, R. D., E-mail: rdguerrerom@unal.edu.co; Arango, C. A., E-mail: caarango@icesi.edu.co; Reyes, A., E-mail: areyesv@unal.edu.co

    We recently proposed a Quantum Optimal Control (QOC) method constrained to build pulses from analytical pulse shapes [R. D. Guerrero et al., J. Chem. Phys. 143(12), 124108 (2015)]. This approach was applied to control the dissociation channel yields of the diatomic molecule KH, considering three potential energy curves and one degree of freedom. In this work, we utilized this methodology to study the strong field control of the cis-trans photoisomerization of 11-cis retinal. This more complex system was modeled with a Hamiltonian comprising two potential energy surfaces and two degrees of freedom. The resulting optimal pulse, made of 6 linearly chirped pulses, was capable of controlling the population of the trans isomer on the ground electronic surface for nearly 200 fs. The simplicity of the pulse generated with our QOC approach offers two clear advantages: a direct analysis of the sequence of events occurring during the driven dynamics, and its reproducibility in the laboratory with current laser technologies.

  2. Optimization of the Water Volume in the Buckets of Pico Hydro Overshot Waterwheel by Analytical Method

    NASA Astrophysics Data System (ADS)

    Budiarso; Adanta, Dendy; Warjito; Siswantara, A. I.; Saputra, Pradhana; Dianofitra, Reza

    2018-03-01

    Rapid economic and population growth in Indonesia leads to increased energy consumption, including electricity needs. Pico hydro is considered the right solution because investment and operational costs are fairly low. Additionally, Indonesia has many remote areas with high hydro-energy potential. The overshot waterwheel is one technology that is suitable for remote areas due to its ease of operation and maintenance. This study attempts to optimize the bucket dimensions for the available conditions. The optimization also has a good impact on the amount of generated power, because all available energy is utilized maximally. An analytical method is used to evaluate the volume of water contained in the buckets of the overshot waterwheel. In general, two stages are performed. First, the volume of water contained in each active bucket is calculated; if the total water contained in the active buckets is less than the available discharge, the wheel width is recalculated. Second, the torque of each active bucket is calculated to determine the power output. As a result, the mechanical power generated by the waterwheel is 305 W, with an efficiency of 28%.
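    A quick energy balance is consistent with the reported numbers: with an assumed discharge and head (the values below are back-calculated guesses, not given in the abstract), 305 W of mechanical output corresponds to roughly 28% of the available hydraulic power:

```python
# Hydraulic power available to a waterwheel: P = rho * g * Q * H.
# Q and H below are assumed values chosen to be consistent with the
# abstract's 305 W at 28% efficiency; the paper's bucket-by-bucket
# torque summation is not reproduced here.
rho, g = 1000.0, 9.81        # water density (kg/m^3), gravity (m/s^2)
Q = 0.055                    # assumed discharge, m^3/s
H = 2.02                     # assumed head, m

hydraulic_power = rho * g * Q * H      # available power, W
mech_power = 305.0                     # reported mechanical output, W
efficiency = mech_power / hydraulic_power
print(f"available {hydraulic_power:.0f} W -> efficiency {efficiency:.2f}")
```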

  3. Design optimization of an axial-field eddy-current magnetic coupling based on magneto-thermal analytical model

    NASA Astrophysics Data System (ADS)

    Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine

    2018-03-01

    This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and a population-based optimization algorithm. The main objective of this paper is to determine the best design in terms of magnet volume in order to transmit a torque between two movers, while ensuring a low slip speed and a good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed value. Nevertheless, in order to solve more realistic problems and take into account thermal effects on the torque value, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.

  4. Using predictive analytics and big data to optimize pharmaceutical outcomes.

    PubMed

    Hernandez, Inmaculada; Zhang, Yuting

    2017-09-15

    The steps involved, the resources needed, and the challenges associated with applying predictive analytics in healthcare are described, with a review of successful applications of predictive analytics in implementing population health management interventions that target medication-related patient outcomes. In healthcare, the term big data typically refers to large quantities of electronic health record, administrative claims, and clinical trial data as well as data collected from smartphone applications, wearable devices, social media, and personal genomics services; predictive analytics refers to innovative methods of analysis developed to overcome challenges associated with big data, including a variety of statistical techniques ranging from predictive modeling to machine learning to data mining. Predictive analytics using big data have been applied successfully in several areas of medication management, such as in the identification of complex patients or those at highest risk for medication noncompliance or adverse effects. Because predictive analytics can be used in predicting different outcomes, they can provide pharmacists with a better understanding of the risks for specific medication-related problems that each patient faces. This information will enable pharmacists to deliver interventions tailored to patients' needs. In order to take full advantage of these benefits, however, clinicians will have to understand the basics of big data and predictive analytics. Predictive analytics that leverage big data will become an indispensable tool for clinicians in mapping interventions and improving patient outcomes. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  5. Novel analytical model for optimizing the pull-in voltage in a flexured MEMS switch incorporating beam perforation effect

    NASA Astrophysics Data System (ADS)

    Guha, K.; Laskar, N. M.; Gogoi, H. J.; Borah, A. K.; Baishnab, K. L.; Baishya, S.

    2017-11-01

    This paper presents a new method for the design, modelling and optimization of a uniform serpentine-meander-based MEMS shunt capacitive switch with perforation on the upper beam. The new approach is proposed to improve the pull-in voltage performance of the MEMS switch. First, a new analytical model of the pull-in voltage is proposed using the modified Meijs-Fokkema capacitance model, simultaneously accounting for the nonlinear electrostatic force, the fringing field due to beam thickness, and the etched holes on the beam, followed by validation against the simulated results of the benchmark full-3D FEM solver CoventorWare over a wide range of structural parameter variations. It shows good agreement with the simulated results. Secondly, an optimization method is presented to determine the optimum configuration of the switch for achieving minimum pull-in voltage, with the proposed analytical model as the objective function. Several high-performance evolutionary optimization algorithms have been utilized to obtain the optimum dimensions with low computational cost and complexity. Among the applied algorithms, the Dragonfly Algorithm is found to be most suitable in terms of minimum pull-in voltage and convergence speed. Optimized values are validated against the simulated results of CoventorWare, showing very satisfactory agreement, with a small deviation of 0.223 V. In addition, the paper proposes, for the first time, a novel algorithmic approach for the uniform arrangement of square holes in a given beam area of an RF MEMS switch for perforation. The algorithm dynamically accommodates all the square holes within the given beam area such that the maximum space is utilized. This automated arrangement of perforation holes further reduces the computational complexity and improves the design accuracy of the complex perforated MEMS switch design.
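    For context, the baseline electrostatic result that such models refine is the textbook parallel-plate pull-in voltage, before fringing-field and perforation corrections are added. The spring constant, gap, and plate area below are hypothetical:

```python
import math

# Textbook parallel-plate pull-in voltage (the uncorrected baseline):
# V_pi = sqrt(8 * k * g0^3 / (27 * eps0 * A))
eps0 = 8.854e-12             # vacuum permittivity, F/m

def pull_in_voltage(k, g0, area):
    """k: suspension stiffness (N/m), g0: initial gap (m), area: plate area (m^2)."""
    return math.sqrt(8.0 * k * g0**3 / (27.0 * eps0 * area))

# Hypothetical serpentine-suspended plate: soft spring, 3 um gap, 300x300 um plate
V = pull_in_voltage(k=2.0, g0=3e-6, area=300e-6 * 300e-6)
print(f"pull-in ~ {V:.2f} V")
```

Serpentine meanders lower k, which lowers V_pi; perforation holes reduce the effective electrode area and modify the fringing capacitance, which is what the paper's corrected model captures.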

  6. Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Lermusiaux, Pierre F. J.

    2016-04-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.

  7. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

    Wind turbine wakes can significantly disrupt the performance of further downstream turbines in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally affordable layout optimization strategy, is therefore essential when addressing the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning setup is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity-deficit profile [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Results show, compared to the baseline gridded layout, a wind power output increase between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
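    A sketch of the Gaussian velocity-deficit wake model of Bastankhah and Porté-Agel used here as the cost-function kernel; the wake-growth rate k* and thrust coefficient C_T below are illustrative values, not those of the Horns Rev I study:

```python
import math

def wake_deficit(x, r, d0=80.0, ct=0.8, k_star=0.035):
    """Fractional velocity deficit at downwind distance x (m) and radial
    offset r (m) behind a turbine of diameter d0, after Bastankhah &
    Porte-Agel (2014); parameter values are illustrative."""
    beta = 0.5 * (1.0 + math.sqrt(1.0 - ct)) / math.sqrt(1.0 - ct)
    sigma = (k_star * x / d0 + 0.2 * math.sqrt(beta)) * d0   # wake width, m
    amplitude = 1.0 - math.sqrt(1.0 - ct / (8.0 * (sigma / d0) ** 2))
    return amplitude * math.exp(-r**2 / (2.0 * sigma**2))

for xd in (4, 7, 10):    # centreline deficit at x/d0 = 4, 7, 10
    print(xd, round(wake_deficit(xd * 80.0, 0.0), 3))
```

Because the deficit is a closed-form Gaussian, evaluating the farm-wide cost function inside every generation of the evolutionary algorithm stays cheap compared to LES.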

  8. Analytical optimal controls for the state constrained addition and removal of cryoprotective agents

    PubMed Central

    Chicone, Carmen C.; Critser, John K.

    2014-01-01

    Cryobiology is a field with enormous scientific, financial and even cultural impact. Successful cryopreservation of cells and tissues depends on the equilibration of these materials with high concentrations of permeating cryoprotective agents (CPAs) such as glycerol or 1,2-propylene glycol. Because cells and tissues are exposed to highly anisosmotic conditions, the resulting gradients cause large volume fluctuations that have been shown to damage cells and tissues. On the other hand, there is evidence that toxicity at these high chemical concentrations is time dependent, and therefore it is ideal to minimize exposure time as well. Because solute and solvent flux is governed by a system of ordinary differential equations, CPA addition and removal from cells is an ideal context for the application of optimal control theory. Recently, we presented a mathematical synthesis of the optimal controls for the ODE system commonly used in cryobiology in the absence of state constraints and showed that controls defined by this synthesis were optimal. Here we define the appropriate model, analytically extend the previous theory to one encompassing state constraints, and as an example apply this to the critical and clinically important cell type of human oocytes, where current methodologies are either difficult to implement or have very limited success rates. We show that an enormous increase in equilibration efficiency can be achieved under the new protocols when compared to classic protocols, potentially allowing a greatly increased survival rate for human oocytes, and pointing to a direction for the cryopreservation of many other cell types. PMID:22527943

  9. Transaction fees and optimal rebalancing in the growth-optimal portfolio

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Medo, Matúš; Zhang, Liang; Zhang, Yi-Cheng

    2011-05-01

    The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is consequently generalized and numerically verified for broad return distributions and returns generated by a GARCH process. Finally we study the case when investment is rebalanced only partially and show that this strategy can improve the investment long-term growth rate more than optimization of the rebalancing period.
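    The trade-off can be reproduced in a toy Monte Carlo: frequent rebalancing captures the growth-optimal geometric averaging but pays fees on every trade. The binary-return parameters and fee level below are illustrative:

```python
import math
import random

random.seed(0)

def growth_rate(period, fee=0.002, f=0.5, steps=20000):
    """Average log-growth per step: the risky asset doubles or halves each
    step (p_up = 0.55); the portfolio is rebalanced to risky fraction f
    every `period` steps, paying `fee` per unit of wealth traded.
    Toy parameters throughout, not the paper's model."""
    log_w, cash, risky = 0.0, 1.0 - f, f
    for t in range(1, steps + 1):
        risky *= 2.0 if random.random() < 0.55 else 0.5
        if t % period == 0:
            total = cash + risky
            total -= fee * abs(risky - f * total)   # transaction cost
            log_w += math.log(total)                # renormalise wealth to 1
            cash, risky = 1.0 - f, f
    return (log_w + math.log(cash + risky)) / steps

for period in (1, 5, 20):
    print(period, round(growth_rate(period), 4))
```

With a fee this small, per-step rebalancing still wins; raising `fee` shifts the optimum toward longer rebalancing periods, which is the effect the paper characterizes analytically for lognormal returns.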

  10. Optimal control for Malaria disease through vaccination

    NASA Astrophysics Data System (ADS)

    Munzir, Said; Nasir, Muhammad; Ramli, Marwan

    2018-01-01

    Malaria is a disease caused by Plasmodium, a single-celled parasite, with the Anopheles mosquito serving as the carrier. This study examines the optimal control problem for the spread of malaria based on the SIR-type model of Aron and May (1982) and seeks an optimal vaccination strategy that minimizes the spread of the disease. The aim is to investigate optimal control strategies for preventing the spread of malaria by vaccination. The problem is solved analytically using the Pontryagin minimum principle, with symbolic computations in MATLAB, to obtain the optimal control and to analyse the spread of malaria under vaccination control.
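    A forward Euler simulation of a toy SIR model with a constant vaccination rate illustrates the controlled dynamics (the paper instead derives a time-varying optimal control via the Pontryagin minimum principle; all parameter values below are hypothetical):

```python
def simulate(u, beta=0.3, gamma=0.1, dt=0.1, steps=2000):
    """Peak infected fraction over 200 days of a toy SIR model; the
    vaccination control u moves susceptibles directly to the removed class."""
    S, I, R = 0.99, 0.01, 0.0
    peak = I
    for _ in range(steps):
        dS = -beta * S * I - u * S
        dI = beta * S * I - gamma * I
        dR = gamma * I + u * S
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        peak = max(peak, I)
    return peak

print("peak infected, no control :", round(simulate(0.0), 3))
print("peak infected, u = 0.02   :", round(simulate(0.02), 3))
```

Even a modest constant vaccination rate lowers the epidemic peak; the optimal control problem then weighs that benefit against the cost of vaccinating.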

  11. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea, and most of them are scattered over disputed, sensitive areas. Methods for accurately obtaining reef bathymetry therefore urgently need to be developed. Commonly used methods, including sonar, airborne laser, and remote-sensing estimation, are limited by the long distances, large areas, and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, via the relationship between spectral information and water depth. Addressing the water quality of the South Sea of China, our paper develops a bathymetry estimation method that requires no measured water depths. First, a semi-analytical optimization model of the theoretical interpretation models is studied, with a genetic algorithm used to optimize the model. Meanwhile, OpenMP parallel computing is introduced to greatly increase the speed of the semi-analytical optimization model. One island of the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm thus solves the problem of bathymetry estimation without water depth measurement. Overall, our paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.

  12. Design and Optimization of AlN based RF MEMS Switches

    NASA Astrophysics Data System (ADS)

    Hasan Ziko, Mehadi; Koel, Ants

    2018-05-01

    Radio frequency microelectromechanical system (RF MEMS) switch technology has the potential to replace semiconductor technology in future communication systems, including communication satellites and wireless and mobile phones. This study explores the possibilities of RF MEMS switch design and optimization with an aluminium nitride (AlN) thin film as the piezoelectric actuation material. Achieving a low actuation voltage and a high contact force with an optimal geometry, using the piezoelectric effect, is the main motivation for this research. Analytical and numerical modelling of a single-beam RF MEMS switch is used to analyse the design parameters and optimize them for minimum actuation voltage and high contact force. An analytical model using isotropic AlN material properties is used to obtain the optimal parameters. The optimized device length, width, and thickness are 2000 µm, 500 µm, and 0.6 µm, respectively, for the single-beam RF MEMS switch. The analytical analysis yields an actuation voltage of less than 2 V and a contact force of about 100 µN for the optimal geometry. Additionally, the single-beam RF MEMS switch design is optimized and validated by comparing the analytical and finite element modelling (FEM) analyses.

  13. Analytical challenges in sports drug testing.

    PubMed

    Thevis, Mario; Krug, Oliver; Geyer, Hans; Walpurgis, Katja; Baume, Norbert; Thomas, Andreas

    2018-03-01

    Analytical chemistry represents a central aspect of doping controls. Routine sports drug testing approaches are primarily designed to address the question whether a prohibited substance is present in a doping control sample and whether prohibited methods (for example, blood transfusion or sample manipulation) have been conducted by an athlete. As some athletes have availed themselves of the substantial breadth of research and development in the pharmaceutical arena, proactive and preventive measures are required, such as the early implementation of new drug candidates and corresponding metabolites into routine doping control assays, even though these drug candidates are to date not approved for human use. Beyond this, analytical data are also cornerstones of investigations into atypical or adverse analytical findings, where the overall picture provides ample reason for follow-up studies. Such studies have been of most diverse nature, and tailored approaches have been required to probe hypotheses and scenarios reported by the involved parties concerning the plausibility and consistency of statements and (analytical) facts. In order to outline the variety of challenges that doping control laboratories are facing besides providing optimal detection capabilities and analytical comprehensiveness, selected case vignettes involving the follow-up of unconventional adverse analytical findings, urine sample manipulation, drug/food contamination issues, and unexpected biotransformation reactions are presented.

  14. Analytical approach to cross-layer protocol optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in

  15. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by the means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved

  16. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.

    PubMed

    Bergeron, Dominic; Tremblay, A-M S

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  17. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    NASA Astrophysics Data System (ADS)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  18. Trends in Process Analytical Technology: Present State in Bioprocessing.

    PubMed

    Jenzsch, Marco; Bell, Christian; Buziol, Stefan; Kepert, Felix; Wegele, Harald; Hakemeyer, Christian

    2017-08-04

    Process analytical technology (PAT), the regulatory initiative for incorporating quality in pharmaceutical manufacturing, is an area of intense research and interest. If PAT is effectively applied to bioprocesses, it can increase process understanding and control, and mitigate the risk of substandard drug products to both manufacturer and patient. To optimize the benefits of PAT, the entire PAT framework must be considered and each element of PAT must be carefully selected, including sensor and analytical technology, data analysis techniques, control strategies and algorithms, and process optimization routines. This chapter discusses the current state of PAT in the biopharmaceutical industry, including several case studies demonstrating the degree of maturity of various PAT tools. Graphical Abstract: Hierarchy of QbD components.

  19. Visual analytics of brain networks.

    PubMed

    Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming

    2012-05-15

    Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Optimal evaluation of infectious medical waste disposal companies using the fuzzy analytic hierarchy process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Chao Chung, E-mail: ho919@pchome.com.tw

    Ever since Taiwan's National Health Insurance implemented the diagnosis-related groups payment system in January 2010, hospital income has declined. Therefore, to meet their medical waste disposal needs, hospitals seek suppliers that provide high-quality services at a low cost. The enactment of the Waste Disposal Act in 1974 had facilitated some improvement in the management of waste disposal. However, since the implementation of the National Health Insurance program, the amount of medical waste from disposable medical products has been increasing. Further, of all the hazardous waste types, the amount of infectious medical waste has increased at the fastest rate. This is because of the increase in the number of items considered as infectious waste by the Environmental Protection Administration. The present study used two important findings from previous studies to determine the critical evaluation criteria for selecting infectious medical waste disposal firms. It employed the fuzzy analytic hierarchy process to set the objective weights of the evaluation criteria and select the optimal infectious medical waste disposal firm through calculation and sorting. The aim was to propose a method of evaluation with which medical and health care institutions could objectively and systematically choose appropriate infectious medical waste disposal firms.

  1. Optimal evaluation of infectious medical waste disposal companies using the fuzzy analytic hierarchy process.

    PubMed

    Ho, Chao Chung

    2011-07-01

    Ever since Taiwan's National Health Insurance implemented the diagnosis-related groups payment system in January 2010, hospital income has declined. Therefore, to meet their medical waste disposal needs, hospitals seek suppliers that provide high-quality services at a low cost. The enactment of the Waste Disposal Act in 1974 had facilitated some improvement in the management of waste disposal. However, since the implementation of the National Health Insurance program, the amount of medical waste from disposable medical products has been increasing. Further, of all the hazardous waste types, the amount of infectious medical waste has increased at the fastest rate. This is because of the increase in the number of items considered as infectious waste by the Environmental Protection Administration. The present study used two important findings from previous studies to determine the critical evaluation criteria for selecting infectious medical waste disposal firms. It employed the fuzzy analytic hierarchy process to set the objective weights of the evaluation criteria and select the optimal infectious medical waste disposal firm through calculation and sorting. The aim was to propose a method of evaluation with which medical and health care institutions could objectively and systematically choose appropriate infectious medical waste disposal firms. Copyright © 2011 Elsevier Ltd. All rights reserved.
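
    The weighting step this abstract relies on can be sketched with the classic (crisp) AHP core; the fuzzy extension used in the paper replaces the point judgments below with triangular fuzzy numbers. The comparison matrix and criteria names here are hypothetical illustrations, not values from the study.

```python
import math

# Hypothetical pairwise comparison matrix for three illustrative criteria
# (cost, service quality, regulatory compliance); entry A[i][j] is the
# judged importance of criterion i over criterion j on Saaty's 1-9 scale.
A = [[1.0,   3.0,   5.0],
     [1/3.0, 1.0,   3.0],
     [1/5.0, 1/3.0, 1.0]]

def ahp_weights(A):
    """Priority weights via the geometric-mean approximation
    of the principal eigenvector."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in A]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(A, w):
    """Saaty's consistency ratio; judgments are usually accepted if CR < 0.1."""
    n = len(A)
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n          # estimate of lambda_max
    ci = (lam - n) / (n - 1)                  # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return ci / ri

w = ahp_weights(A)
cr = consistency_ratio(A, w)
```

    With these judgments the weights come out roughly (0.64, 0.26, 0.10) and the consistency ratio is well under 0.1, so the matrix would be accepted; candidate disposal firms would then be scored against each criterion and ranked by the weighted sum.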

  2. Analytical optimization of digital subtraction mammography with contrast medium using a commercial unit.

    PubMed

    Rosado-Méndez, I; Palma, B A; Brandan, M E

    2008-12-01

    Contrast-medium-enhanced digital mammography (CEDM) is an image subtraction technique which might help unmasking lesions embedded in very dense breasts. Previous works have stated the feasibility of CEDM and the imperative need of radiological optimization. This work presents an extension of a former analytical formalism to predict contrast-to-noise ratio (CNR) in subtracted mammograms. The goal is to optimize radiological parameters available in a clinical mammographic unit (x-ray tube anode/filter combination, voltage, and loading) by maximizing CNR and minimizing total mean glandular dose (D(gT)), simulating the experimental application of an iodine-based contrast medium and the image subtraction under dual-energy nontemporal, and single- or dual-energy temporal modalities. Total breast-entrance air kerma is limited to a fixed 8.76 mGy (1 R, similar to screening studies). Mathematical expressions obtained from the formalism are evaluated using computed mammographic x-ray spectra attenuated by an adipose/glandular breast containing an elongated structure filled with an iodinated solution in various concentrations. A systematic study of contrast, its associated variance, and CNR for different spectral combinations is performed, concluding in the proposal of optimum x-ray spectra. The linearity between contrast in subtracted images and iodine mass thickness is proven, including the determination of iodine visualization limits based on Rose's detection criterion. Finally, total breast-entrance air kerma is distributed between both images in various proportions in order to maximize the figure of merit CNR²/D(gT). Predicted results indicate the advantage of temporal subtraction (either single- or dual-energy modalities) with optimum parameters corresponding to high-voltage, strongly hardened Rh/Rh spectra. For temporal techniques, CNR was found to depend mostly on the energy of the iodinated image, and thus reduction in D(gT) could be achieved if the spectral energy

  3. Optimization of complex slater-type functions with analytic derivative methods for describing photoionization differential cross sections.

    PubMed

    Matsuzaki, Rei; Yabushita, Satoshi

    2017-05-05

    The complex basis function (CBF) method applied to various atomic and molecular photoionization problems can be interpreted as an L2 method to solve the driven-type (inhomogeneous) Schrödinger equation, whose driven term is the dipole operator times the initial-state wave function. However, efficient basis functions for representing the solution have not fully been studied. Moreover, the relation between their solution and that of the ordinary Schrödinger equation has been unclear. For these reasons, most previous applications have been limited to total cross sections. To examine the applicability of the CBF method to differential cross sections and asymmetry parameters, we show that the complex valued solution to the driven-type Schrödinger equation can be variationally obtained by optimizing the complex trial functions for the frequency dependent polarizability. In the test calculations made for the hydrogen photoionization problem with five or six complex Slater-type orbitals (cSTOs), their complex valued expansion coefficients and the orbital exponents have been optimized with the analytic derivative method. Both the real and imaginary parts of the solution have been obtained accurately in a wide region covering typical molecular regions. Their phase shifts and asymmetry parameters are successfully obtained by extrapolating the CBF solution from the inner matching region to the asymptotic region using the WKB method. The distribution of the optimized orbital exponents in the complex plane is explained based on the close connection between the CBF method and the driven-type equation method. The obtained information is essential to constructing the appropriate basis sets in future molecular applications. © 2017 Wiley Periodicals, Inc.

  4. Hybrid computational and experimental approach for the study and optimization of mechanical components

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1998-05-01

    Increased demands on the performance and efficiency of mechanical components impose challenges on their engineering design and optimization, especially when new and more demanding applications must be developed in relatively short periods of time while satisfying design objectives as well as cost and manufacturability constraints. In addition, reliability and durability must be taken into consideration. As a consequence, effective quantitative methodologies, computational and experimental, should be applied in the study and optimization of mechanical components. Computational investigations enable parametric studies and the determination of critical engineering design conditions, while experimental investigations, especially those using optical techniques, provide qualitative and quantitative information on the actual response of the structure of interest to the applied load and boundary conditions. We discuss a hybrid experimental and computational approach for investigation and optimization of mechanical components. The approach is based on analytical, computational, and experimental resolution methodologies in the form of computational tools, noninvasive optical techniques, and fringe-prediction analysis tools. Practical application of the hybrid approach is illustrated with representative examples that demonstrate the viability of the approach as an effective engineering tool for analysis and optimization.

  5. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. As with other second-order methods, the MoF method must solve an implicit optimization problem to obtain the optimal approximate surface. The partial derivatives of the objective function therefore have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells with an efficiency improvement of 3 to 4 times.
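
    The finite-difference error the abstract refers to is easy to reproduce on a toy objective (a smooth stand-in, not the actual MoF reconstruction functional): the forward-difference error is non-monotonic in the step size, trading truncation error against round-off, which is exactly the noise that degrades an iteration driven by numerical derivatives.

```python
import math

def f(x):
    # smooth 1D stand-in for the implicit MoF objective
    return math.sin(x) + 0.5 * x * x

def df_analytic(x):
    # exact derivative, the analogue of the paper's analytical partials
    return math.cos(x) + x

def df_forward(x, h):
    # forward finite-difference approximation of f'(x)
    return (f(x + h) - f(x)) / h

x0 = 1.3
exact = df_analytic(x0)
# error of the finite-difference derivative at three step sizes
err = {h: abs(df_forward(x0, h) - exact) for h in (1e-2, 1e-6, 1e-12)}
```

    A moderate step (1e-6) beats both a large step (truncation-dominated) and a tiny one (round-off-dominated); the analytic derivative has neither error term, which is why it improves convergence rate and robustness.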

  6. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows subproblems to be nested within a master optimization problem, aiding the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  7. Analytically solvable chaotic oscillator based on a first-order filter.

    PubMed

    Corron, Ned J; Cooper, Roy M; Blakely, Jonathan N

    2016-02-01

    A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.
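
    The clocked switching structure can be illustrated with a toy discrete-time version (parameters assumed for illustration, not taken from the paper): between clock ticks an unstable first-order filter x' = β(x − s) has the exact solution x(t + T) = s + (x(t) − s)e^{βT}, and resampling the set point s = sign(x) at each tick with e^{βT} = 2 collapses the dynamics to a solvable chaotic return map.

```python
# Toy clocked hybrid system: the set point s in {-1, +1} is resampled at
# each clock tick as sign(x); with exp(beta*T) = 2 the exact tick-to-tick
# map is x -> s + 2*(x - s) = 2*x - sign(x), a chaotic shift map that
# keeps the orbit bounded in [-1, 1] despite the unstable filter.
def clocked_map(x):
    s = 1.0 if x >= 0.0 else -1.0
    return 2.0 * x - s

x = 0.37                     # arbitrary initial condition in [-1, 1]
orbit = []
for _ in range(50):
    x = clocked_map(x)
    orbit.append(x)
```

    The guard s = sign(x) folds the expanding flow back into the unit interval, which is the mechanism that makes such an oscillator both chaotic and analytically solvable tick by tick.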

  8. Analytically solvable chaotic oscillator based on a first-order filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N.

    2016-02-15

    A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.

  9. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    NASA Technical Reports Server (NTRS)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
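
    The speed advantage of analytic derivatives reported here has a simple structural reason, sketched below on a generic function (not the CEA thermodynamics): a finite-difference gradient needs one extra evaluation of the model per design variable, while an analytic gradient is a single closed-form pass.

```python
import math

calls = {"f": 0}                     # count model evaluations

def f(x):
    calls["f"] += 1
    return sum(math.sin(v) for v in x)

def grad_analytic(x):
    # closed-form gradient: no extra evaluations of f at all
    return [math.cos(v) for v in x]

def grad_fd(x, h=1e-6):
    # forward differences: n + 1 evaluations of f for n variables
    base = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - base) / h)
    return g

x = [0.01 * i for i in range(200)]
ga = grad_analytic(x)
calls["f"] = 0
gf = grad_fd(x)
fd_evals = calls["f"]                # 201 evaluations for 200 variables
```

    For an expensive model those n + 1 passes per gradient dominate the optimization cost; adjoint methods, as used in the tool described above, reduce the gradient computation to roughly one extra pass regardless of n.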

  10. A decision-analytic approach to the optimal allocation of resources for endangered species consultation

    USGS Publications Warehouse

    Converse, Sarah J.; Shelley, Kevin J.; Morey, Steve; Chan, Jeffrey; LaTier, Andrea; Scafidi, Carolyn; Crouse, Deborah T.; Runge, Michael C.

    2011-01-01

    The resources available to support conservation work, whether time or money, are limited. Decision makers need methods to help them identify the optimal allocation of limited resources to meet conservation goals, and decision analysis is uniquely suited to assist with the development of such methods. In recent years, a number of case studies have been described that examine optimal conservation decisions under fiscal constraints; here we develop methods to look at other types of constraints, including limited staff and regulatory deadlines. In the US, Section Seven consultation, an important component of protection under the federal Endangered Species Act, requires that federal agencies overseeing projects consult with federal biologists to avoid jeopardizing species. A benefit of consultation is negotiation of project modifications that lessen impacts on species, so staff time allocated to consultation supports conservation. However, some offices have experienced declining staff, potentially reducing the efficacy of consultation. This is true of the US Fish and Wildlife Service's Washington Fish and Wildlife Office (WFWO) and its consultation work on federally-threatened bull trout (Salvelinus confluentus). To improve effectiveness, WFWO managers needed a tool to help allocate this work to maximize conservation benefits. We used a decision-analytic approach to score projects based on the value of staff time investment, and then identified an optimal decision rule for how scored projects would be allocated across bins, where projects in different bins received different time investments. We found that, given current staff, the optimal decision rule placed 80% of informal consultations (those where expected effects are beneficial, insignificant, or discountable) in a short bin where they would be completed without negotiating changes. The remaining 20% would be placed in a long bin, warranting an investment of seven days, including time for negotiation. For formal

  11. Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model

    NASA Astrophysics Data System (ADS)

    Wiezel, Oren; Or, Yizhar

    2016-11-01

    Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions as well as next-order corrections for both net displacement and energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in direction of motion, as well as obtaining an estimate for the optimal stroke amplitude. We also find the optimal swimmer's geometry for maximizing either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". Numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.

  12. Analytical study of comet nucleus samples

    NASA Technical Reports Server (NTRS)

    Albee, A. L.

    1989-01-01

    Analytical procedures for studying and handling frozen (130 K) core samples of comet nuclei are discussed. These methods include neutron activation analysis, X-ray fluorescence analysis, and high-resolution mass spectrometry.

  13. A decision support system using analytical hierarchy process (AHP) for the optimal environmental reclamation of an open-pit mine

    NASA Astrophysics Data System (ADS)

    Bascetin, A.

    2007-04-01

    The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region in Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. Also, the decision process is systematic, and using the proposed model can reduce the time taken to select an optimal method.

  14. Development and optimization of an analytical system for volatile organic compound analysis coming from the heating of interstellar/cometary ice analogues.

    PubMed

    Abou Mrad, Ninette; Duvernay, Fabrice; Theulé, Patrice; Chiavassa, Thierry; Danger, Grégoire

    2014-08-19

    This contribution presents an original analytical system for studying volatile organic compounds (VOC) coming from the heating and/or irradiation of interstellar/cometary ice analogues (VAHIIA system) through laboratory experiments. The VAHIIA system brings solutions to three analytical constraints regarding chromatography analysis: the low desorption kinetics of VOC (many hours) in the vacuum chamber during laboratory experiments, the low pressure under which they sublime (10⁻⁹ mbar), and the presence of water in ice analogues. The VAHIIA system which we developed, calibrated, and optimized is composed of two units. The first is a preconcentration unit providing VOC recovery. This unit is based on cryogenic trapping, which allows VOC preconcentration and provides an adequate pressure allowing their subsequent transfer to an injection unit. The latter is a gaseous injection unit allowing direct injection into the GC-MS of the VOC previously transferred from the preconcentration unit. The feasibility of the online transfer through this interface is demonstrated. Nanomoles of VOC can be detected with the VAHIIA system, and the variability in replicate measurements is lower than 13%. The advantages of the GC-MS in comparison to infrared spectroscopy are pointed out, the GC-MS allowing an unambiguous identification of compounds coming from complex mixtures. Beyond the application to astrophysical subjects, these analytical developments can be used for all systems requiring vacuum/cryogenic environments.

  15. Analytical optimization of demand management strategies across all urban water use sectors

    NASA Astrophysics Data System (ADS)

    Friedman, Kenneth; Heaney, James P.; Morales, Miguel; Palenchar, John

    2014-07-01

    An effective urban water demand management program can greatly influence both peak and average demand and therefore long-term water supply and infrastructure planning. Although a theoretical framework for evaluating residential indoor demand management has been well established, little has been done to evaluate other water use sectors such as residential irrigation in a compatible manner for integrating these results into an overall solution. This paper presents a systematic procedure to evaluate the optimal blend of single family residential irrigation demand management strategies to achieve a specified goal based on performance functions derived from parcel level tax assessor's data linked to customer level monthly water billing data. This framework is then generalized to apply to any urban water sector, as exponential functions can be fit to all resulting cumulative water savings functions. Two alternative formulations are presented: maximize net benefits, or minimize total costs subject to satisfying a target water savings. Explicit analytical solutions are presented for both formulations based on appropriate exponential best fits of performance functions. A direct result of this solution is the dual variable, which represents the marginal cost of water saved at a specified target water savings goal. A case study of 16,303 single family irrigators in Gainesville Regional Utilities utilizing high quality tax assessor and monthly billing data along with parcel level GIS data provides an illustrative example of these techniques. Spatial clustering of targeted homes can be easily performed in GIS to identify priority demand management areas.
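
    The shape of the analytical solution can be shown with a minimal sketch under invented parameters: each option's cumulative savings follows an exponential performance function, the least-cost allocation equalizes marginal savings per dollar across options, and the dual variable found below plays the role of the marginal cost of water saved at the target.

```python
import math

# Hypothetical exponential performance functions: option i saves
# S_i(x) = a_i * (1 - exp(-b_i * x)) for expenditure x. The (a_i, b_i)
# pairs and the target are invented for illustration.
options = [(100.0, 0.02), (60.0, 0.05)]
target = 90.0

def savings_at_marginal_cost(lam):
    """Total savings when every option spends up to the point where its
    marginal savings per dollar equals 1/lam (lam plays the role of the
    dual variable: the marginal cost of water saved)."""
    total = 0.0
    for a, b in options:
        # dS/dx = a*b*exp(-b*x) = 1/lam  =>  x = ln(a*b*lam)/b, if positive
        x = max(0.0, math.log(a * b * lam) / b)
        total += a * (1.0 - math.exp(-b * x))
    return total

# Geometric bisection on the dual variable until the target is met.
lo, hi = 1e-6, 1e6
for _ in range(200):
    mid = math.sqrt(lo * hi)
    if savings_at_marginal_cost(mid) < target:
        lo = mid
    else:
        hi = mid
marginal_cost = hi  # cost of the next unit of water saved at the target
```

    For these invented coefficients the dual variable comes out near 1, i.e. roughly one cost unit per additional unit of water saved at the 90-unit target.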

  16. Semi-analytic valuation of stock loans with finite maturity

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoping; Putri, Endah R. M.

    2015-10-01

    In this paper, we study stock loans of finite maturity with different dividend distributions semi-analytically, using the analytical approximation method of Zhu (2006). Stock loan partial differential equations (PDEs) are established under the Black-Scholes framework. The Laplace transform method is used to solve the PDEs. Optimal exit price and stock loan value are obtained in Laplace space. Values in the original time space are recovered by numerical Laplace inversion. To demonstrate the efficiency and accuracy of our semi-analytic method several examples are presented, and the results are compared with those calculated using existing methods. We also present a calculation of the fair service fee charged by the lender for different loan parameters.
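
    The abstract leaves the inversion scheme unspecified; the Gaver-Stehfest algorithm below is one common choice for recovering time-domain values from a Laplace-space solution, checked here against a known transform pair rather than the stock-loan PDE itself.

```python
import math

def stehfest_inverse(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s)
    at time t > 0, using N (even) terms."""
    ln2t = math.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v *= (-1) ** (k + N // 2)
        total += v * F(k * ln2t)
    return ln2t * total

# Sanity check on a known pair: L{exp(-t)} = 1/(s + 1).
approx = stehfest_inverse(lambda s: 1.0 / (s + 1.0), t=1.0)
exact = math.exp(-1.0)
```

    Stehfest inversion works well for the smooth, non-oscillatory functions typical of option-style valuation problems, which is why it is a frequent companion to Laplace-space pricing formulas.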

  17. Orbital-optimized MP2.5 and its analytic gradients: Approaching CCSD(T) quality for noncovalent interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozkaya, Uğur, E-mail: ugur.bozkaya@atauni.edu.tr; Center for Computational Molecular Science and Technology, School of Chemistry and Biochemistry, and School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332; Sherrill, C. David

    2014-11-28

    Orbital-optimized MP2.5 [or simply “optimized MP2.5,” OMP2.5, for short] and its analytic energy gradients are presented. The cost of the presented method is as much as that of coupled-cluster singles and doubles (CCSD) [O(N⁶) scaling] for energy computations. However, for analytic gradient computations the OMP2.5 method is only half as expensive as CCSD because there is no need to solve λ₂-amplitude equations for OMP2.5. The performance of the OMP2.5 method is compared with that of the standard second-order Møller–Plesset perturbation theory (MP2), MP2.5, CCSD, and coupled-cluster singles and doubles with perturbative triples (CCSD(T)) methods for equilibrium geometries, hydrogen transfer reactions between radicals, and noncovalent interactions. For bond lengths of both closed and open-shell molecules, the OMP2.5 method improves upon MP2.5 and CCSD by 38%–43% and 31%–28%, respectively, with Dunning's cc-pCVQZ basis set. For complete basis set (CBS) predictions of hydrogen transfer reaction energies, the OMP2.5 method exhibits a substantially better performance than MP2.5, providing a mean absolute error of 1.1 kcal mol⁻¹, which is more than 10 times lower than that of MP2.5 (11.8 kcal mol⁻¹), and comparing to MP2 (14.6 kcal mol⁻¹) there is a more than 12-fold reduction in errors. For noncovalent interaction energies (at CBS limits), the OMP2.5 method maintains the very good performance of MP2.5 for closed-shell systems, and for open-shell systems it significantly outperforms MP2.5 and CCSD, and approaches CCSD(T) quality. The MP2.5 errors decrease by a factor of 5 when the optimized orbitals are used for open-shell noncovalent interactions, and comparing to CCSD there is a more than 3-fold reduction in errors. Overall, the present application results indicate that the OMP2.5 method is very promising for open-shell noncovalent interactions and other chemical systems with difficult electronic structures.

  18. Many-objective optimization and visual analytics reveal key trade-offs for London's water supply

    NASA Astrophysics Data System (ADS)

    Matrosov, Evgenii S.; Huskova, Ivana; Kasprzyk, Joseph R.; Harou, Julien J.; Lambert, Chris; Reed, Patrick M.

    2015-12-01

    In this study, we link a water resource management simulator to multi-objective search to reveal the key trade-offs inherent in planning a real-world water resource system. We consider new supplies and demand management (conservation) options while seeking to elucidate the trade-offs between the best portfolios of schemes to satisfy projected water demands. Alternative system designs are evaluated using performance measures that minimize capital and operating costs and energy use while maximizing resilience, engineering and environmental metrics, subject to supply reliability constraints. Our analysis shows many-objective evolutionary optimization coupled with state-of-the-art visual analytics can help planners discover more diverse water supply system designs and better understand their inherent trade-offs. The approach is used to explore future water supply options for the Thames water resource system (including London's water supply). New supply options include a new reservoir, water transfers, artificial recharge, wastewater reuse and brackish groundwater desalination. Demand management options include leakage reduction, compulsory metering and seasonal tariffs. The Thames system's Pareto-approximate portfolios cluster into distinct groups of water supply options; for example, implementing a pipe refurbishment program leads to higher capital costs but greater reliability. This study highlights that traditional least-cost reliability constrained design of water supply systems masks asset combinations whose benefits only become apparent when more planning objectives are considered.

  19. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    NASA Astrophysics Data System (ADS)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from traditional single-plant to multi-site supply chain where multiple plants are serving customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.

  20. On finding the analytic dependencies of the external field potential on the control function when optimizing the beam dynamics

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, A. D.; Kozynchenko, S. A.; Kozynchenko, V. A.

    2017-12-01

    When developing a particle accelerator for generating high-precision beams, the design of the injection system is important because it largely determines the output characteristics of the beam. In the present paper we consider injection systems consisting of electrodes with given potentials. The design of such systems requires carrying out simulation of beam dynamics in electrostatic fields. For external field simulation we use the new approach proposed by A.D. Ovsyannikov, which is based on analytical approximations, or the finite difference method, taking into account the real geometry of the injection system. Software designed for solving the problems of beam dynamics simulation and optimization in the injection system for non-relativistic beams has been developed. Both beam dynamics and electric field simulations in the injection system, using the analytical approach and the finite difference method, have been made, and the results are presented in this paper.
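
    The finite-difference side of such a field simulation can be sketched in a few lines: solve Laplace's equation for the potential between electrodes with fixed voltages, relaxing interior grid points to the average of their neighbours. The geometry and voltages below are illustrative, not the injection system of the paper.

```python
# Toy 2-D electrostatic problem: left electrode at 1 V, the other three
# boundaries grounded; Jacobi relaxation of Laplace's equation.
N = 21
phi = [[0.0] * N for _ in range(N)]
for i in range(N):
    phi[i][0] = 1.0  # left electrode held at 1 V

for _ in range(2000):
    new = [row[:] for row in phi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (phi[i + 1][j] + phi[i - 1][j]
                                + phi[i][j + 1] + phi[i][j - 1])
    phi = new

mid = phi[N // 2][N // 2]  # potential at the centre
```

    By a four-fold rotation/superposition argument the centre potential is one quarter of the electrode voltage, which makes a convenient convergence check for the relaxation.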

  1. Scalable Rapidly Deployable Convex Optimization for Data Analytics

    DTIC Science & Technology

    ... SOCPs, SDPs, exponential cone programs, and power cone programs. CVXPY supports basic methods for distributed optimization on ... multiple heterogeneous platforms. We have also done basic research in various application areas, using CVXPY, to demonstrate its usefulness. See attached report for publication information. ... Over the period of the contract we have developed the full stack for wide use of convex optimization, in machine learning and many other areas.

  2. Analytical study of sandwich structures using Euler-Bernoulli beam equation

    NASA Astrophysics Data System (ADS)

    Xue, Hui; Khawaja, H.

    2017-01-01

    This paper presents an analytical study of sandwich structures. In this study, the Euler-Bernoulli beam equation is solved analytically for a four-point bending problem. Appropriate initial and boundary conditions are specified to close the problem. In addition, the balance coefficient is calculated and the Rule of Mixtures is applied. The focus of this study is to determine the effective material properties and geometric features such as the moment of inertia of a sandwich beam. The effective parameters help in the development of a generic analytical correlation for complex sandwich structures from the perspective of four-point bending calculations. The main outcomes of these analytical calculations are the lateral displacements and longitudinal stresses for each particular material in the sandwich structure.
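
    The effective-property step can be made concrete with a hedged sketch (all dimensions, moduli, and loads are hypothetical): sum the E·I contributions of the faces and core for an equivalent flexural rigidity, then apply the standard four-point-bending midspan deflection formula.

```python
# Symmetric sandwich cross-section: two face sheets on a compliant core.
# All values are hypothetical, chosen only to illustrate the procedure.
b = 0.05      # width, m
t_f = 0.002   # face sheet thickness, m
t_c = 0.02    # core thickness, m
E_f = 70e9    # face modulus (aluminium-like), Pa
E_c = 0.1e9   # core modulus (foam-like), Pa

# Faces: own inertia plus parallel-axis term about the mid-plane.
d = (t_c + t_f) / 2.0
I_f = 2.0 * (b * t_f ** 3 / 12.0 + b * t_f * d ** 2)
I_c = b * t_c ** 3 / 12.0
D = E_f * I_f + E_c * I_c  # equivalent flexural rigidity, N*m^2

# Four-point bending: loads P at distance a from each support of a
# simply supported span L; midspan deflection delta.
L, a, P = 0.4, 0.1, 100.0
delta = P * a * (3.0 * L ** 2 - 4.0 * a ** 2) / (24.0 * D)
```

    The stiff, offset faces dominate D here, which is the classic sandwich effect an effective-property analysis is meant to capture.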

  3. Extraction optimization and identification of anthocyanins from Nitraria tangutorun Bobr. seed meal and establishment of a green analytical method of anthocyanins.

    PubMed

    Sang, Jun; Sang, Jie; Ma, Qun; Hou, Xiao-Fang; Li, Cui-Qin

    2017-03-01

    This study aimed to extract and identify anthocyanins from Nitraria tangutorun Bobr. seed meal and establish a green analytical method for anthocyanins. Ultrasound-assisted extraction of anthocyanins from N. tangutorun seed meal was optimized using response surface methodology. Extraction at 70°C for 32.73 min using 51.15% ethanol rendered an extract with 65.04 mg/100 g of anthocyanins and 947.39 mg/100 g of polyphenols. An in vitro antioxidant assay showed that the extract exhibited a potent DPPH radical-scavenging capacity. Eight anthocyanins in N. tangutorun seed meal were identified by HPLC-MS, and the main anthocyanin was cyanidin-3-O-(trans-p-coumaroyl)-diglucoside (18.17 mg/100 g). A green HPLC-DAD method was developed to analyse anthocyanins. A mixture of ethanol and a 5% (v/v) formic acid aqueous solution at a 20:80 (v/v) ratio was used as the optimized mobile phase. The method was accurate, stable and reliable and could be used to investigate anthocyanins from N. tangutorun seed meal.
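
    In its simplest one-factor form, response surface methodology amounts to fitting a quadratic model and taking its stationary point. The data below are invented, not the study's measurements; they only show the mechanics.

```python
import numpy as np

# Synthetic yields versus extraction temperature (invented data).
temps = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
yields = np.array([38.0, 52.0, 61.0, 64.0, 58.0])

# Quadratic response surface y = c2*T^2 + c1*T + c0, least-squares fit.
c2, c1, c0 = np.polyfit(temps, yields, 2)

# Stationary point of the fitted parabola (a maximum, since c2 < 0).
T_opt = -c1 / (2.0 * c2)
```

    A real RSM design would vary several factors (temperature, time, ethanol fraction) in a central composite or Box-Behnken layout, but the optimum is still read off the fitted surface in the same way.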

  4. Optimization study on inductive-resistive circuit for broadband piezoelectric energy harvesters

    NASA Astrophysics Data System (ADS)

    Tan, Ting; Yan, Zhimiao

    2017-03-01

    The performance of a cantilever-beam piezoelectric energy harvester is usually analyzed with a pure resistive circuit. The optimal performance of such a vibration-based energy harvesting system is limited by narrow bandwidth around its modified natural frequency. For broadband piezoelectric energy harvesting, series and parallel inductive-resistive circuits are introduced. The electromechanically coupled distributed parameter models for such systems under harmonic base excitations are decoupled with modified natural frequency and electrical damping to consider the coupling effect. Analytical solutions of the harvested power and tip displacement for the electromechanical decoupled model are confirmed with numerical solutions for the coupled model. The optimal performance of piezoelectric energy harvesting with inductive-resistive circuits is revealed theoretically as constant maximal power at any excitation frequency. This is achieved by the scenarios of matching the modified natural frequency with the excitation frequency and equating the electrical damping to the mechanical damping. The inductance and load resistance should be simultaneously tuned to their optimal values, which may not be applicable for very high electromechanical coupling systems when the excitation frequency is higher than their natural frequencies. With identical optimal performance, the series inductive-resistive circuit is recommended for relatively small load resistance, while the parallel inductive-resistive circuit is suggested for relatively large load resistance. This study provides a simplified optimization method for broadband piezoelectric energy harvesters with inductive-resistive circuits.

  5. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods.

    Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  6. Recent Studies in Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Garcia, Rafael Ferro

    2008-01-01

    Functional Analytic Psychotherapy (FAP), based on the principles of radical behaviorism, emphasizes the impact of contingencies that occur during therapeutic sessions, the therapist-client interaction context, functional equivalence between environments, natural reinforcement and shaping by the therapist. This paper reviews recent studies of FAP…

  7. Learning Analytics: Potential for Enhancing School Library Programs

    ERIC Educational Resources Information Center

    Boulden, Danielle Cadieux

    2015-01-01

    Learning analytics has been defined as the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. The potential use of data and learning analytics in educational contexts has caught the attention of educators and…

  8. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.

  9. A Comprehensive Optimization Strategy for Real-time Spatial Feature Sharing and Visual Analytics in Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Li, W.; Shao, H.

    2017-12-01

    For geospatial cyberinfrastructure enabled web services, the ability of rapidly transmitting and sharing spatial data over the Internet plays a critical role to meet the demands of real-time change detection, response and decision-making. Especially for the vector datasets which serve as irreplaceable and concrete material in data-driven geospatial applications, their rich geometry and property information facilitates the development of interactive, efficient and intelligent data analysis and visualization applications. However, the big-data issues of vector datasets have hindered their wide adoption in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmitting and processing. This strategy combines: 1) pre- and on-the-fly generalization, which automatically determines proper simplification level through the introduction of appropriate distance tolerance (ADT) to meet various visualization requirements, and at the same time speed up simplification efficiency; 2) a progressive attribute transmission method to reduce data size and therefore the service response time; 3) compressed data transmission and dynamic adoption of a compression method to maximize the service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed for implementing the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web service providing vector data to support real-time spatial feature sharing, visual analytics and decision-making.
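
    The distance-tolerance generalization in point 1) is in the spirit of classic line simplification. The Douglas-Peucker sketch below is one common such algorithm, not necessarily the authors' implementation; it shows how a single tolerance controls the simplification level.

```python
def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tol):
    """Simplify a polyline, keeping points farther than tol from the
    chord between the current endpoints."""
    if len(points) < 3:
        return points
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return left[:-1] + right  # drop the duplicated split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(line, tol=1.0)
```

    Larger tolerances discard more vertices, which is exactly the lever an appropriate-distance-tolerance scheme tunes per visualization level.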

  10. 2002 railroad employee fatalities : an analytical study

    DOT National Transportation Integrated Search

    2005-02-01

    2002 Railroad Employee Fatalities: An Analytical Study, which is designed to promote and : enhance awareness of many unsafe behaviors and conditions that typically contribute to : railroad employee fatalities. By furthering our understanding of...

  11. Exergy optimization in a steady moving bed heat exchanger.

    PubMed

    Soria-Verdugo, A; Almendros-Ibáñez, J A; Ruiz-Rivas, U; Santana, D

    2009-04-01

    This work provides an energy and exergy optimization analysis of a moving bed heat exchanger (MBHE). The exchanger is studied as a cross-flow heat exchanger where one of the phases is a moving granular medium. The optimal MBHE dimensions and the optimal particle diameter are obtained for a range of incoming fluid flow rates. The analyses are carried out over operation data of the exchanger obtained in two ways: a numerical simulation of the steady-state problem and an analytical solution of the simplified equations, neglecting the conduction terms. The numerical simulation considers, for the solid, the convection heat transfer to the fluid and the diffusion term in both directions, and for the fluid only the convection heat transfer to the solid. The results are compared with a well-known analytical solution (neglecting conduction effects) for the temperature distribution in the exchanger. Next, the analytical solution is used to derive an expression for the exergy destruction. The optimal length of the MBHE depends mainly on the flow rate and does not depend on particle diameter unless it becomes very small (thus sharply increasing the pressure drop). The exergy optimal length is always smaller than the thermal one, although the difference is itself small.

  12. 1998 railroad employee fatalities : an analytical study

    DOT National Transportation Integrated Search

    2003-11-01

    "1998 Railroad Employee Fatalities: An Analytical Study," is designed to promote and enhance awareness of many unsafe behaviors and conditions that typically contribute to railroad employee fatalities. By furthering our understanding of the causes of...

  13. Optimization of analytical parameters for inferring relationships among Escherichia coli isolates from repetitive-element PCR by maximizing correspondence with multilocus sequence typing data.

    PubMed

    Goldberg, Tony L; Gillespie, Thomas R; Singer, Randall S

    2006-09-01

    Repetitive-element PCR (rep-PCR) is a method for genotyping bacteria based on the selective amplification of repetitive genetic elements dispersed throughout bacterial chromosomes. The method has great potential for large-scale epidemiological studies because of its speed and simplicity; however, objective guidelines for inferring relationships among bacterial isolates from rep-PCR data are lacking. We used multilocus sequence typing (MLST) as a "gold standard" to optimize the analytical parameters for inferring relationships among Escherichia coli isolates from rep-PCR data. We chose 12 isolates from a large database to represent a wide range of pairwise genetic distances, based on the initial evaluation of their rep-PCR fingerprints. We conducted MLST with these same isolates and systematically varied the analytical parameters to maximize the correspondence between the relationships inferred from rep-PCR and those inferred from MLST. Methods that compared the shapes of densitometric profiles ("curve-based" methods) yielded consistently higher correspondence values between data types than did methods that calculated indices of similarity based on shared and different bands (maximum correspondences of 84.5% and 80.3%, respectively). Curve-based methods were also markedly more robust in accommodating variations in user-specified analytical parameter values than were "band-sharing coefficient" methods, and they enhanced the reproducibility of rep-PCR. Phylogenetic analyses of rep-PCR data yielded trees with high topological correspondence to trees based on MLST and high statistical support for major clades. These results indicate that rep-PCR yields accurate information for inferring relationships among E. coli isolates and that accuracy can be enhanced with the use of analytical methods that consider the shapes of densitometric profiles.
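
    The contrast between the two families of methods can be sketched on toy densitometric profiles (values invented): a curve-based similarity correlates whole profiles, while a band-sharing coefficient first thresholds them into present/absent bands, with the threshold being exactly the kind of analytical parameter the study tunes.

```python
import math

# Toy densitometric profiles for two rep-PCR lanes (invented values).
lane_a = [0.1, 0.8, 0.2, 0.0, 0.6, 0.1]
lane_b = [0.2, 0.7, 0.3, 0.1, 0.5, 0.0]

# "Curve-based" similarity: Pearson correlation of the whole profiles.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# "Band-sharing" similarity: Dice coefficient after thresholding the
# profiles into present/absent bands.
def dice(x, y, thresh):
    bx = [v > thresh for v in x]
    by = [v > thresh for v in y]
    shared = sum(a and b for a, b in zip(bx, by))
    return 2 * shared / (sum(bx) + sum(by))

curve_sim = pearson(lane_a, lane_b)
band_sim = dice(lane_a, lane_b, thresh=0.4)
```

    A small change in the threshold can flip bands in or out of the Dice coefficient while the Pearson value is unaffected, which gives some intuition for the robustness the authors report for curve-based methods.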

  14. An Analytic Approach for Optimal Geometrical Design of GaAs Nanowires for Maximal Light Harvesting in Photovoltaic Cells

    PubMed Central

    Wu, Dan; Tang, Xiaohong; Wang, Kai; Li, Xianqiang

    2017-01-01

    Semiconductor nanowires (NWs) with subwavelength-scale diameters have demonstrated superior light trapping features, which unravel a new pathway for low-cost and high-efficiency future generation solar cells. Unlike other published work, a fully analytic design is proposed for the first time for optimal geometrical parameters of vertically-aligned GaAs NW arrays for maximal energy harvesting. Using photocurrent density as the light-absorbing evaluation standard, 2 μm length NW arrays whose multiple diameters and periodicity are quantitatively identified achieve the maximal value of 29.88 mA/cm² under solar illumination. It also turns out that our method has wide suitability for single, double and four different diameters of NW arrays for highest photon energy harvesting. To validate this analytical method, intensive numerical three-dimensional finite-difference time-domain simulations of the NWs’ light harvesting are also carried out. Compared with the simulation results, the predicted maximal photocurrent densities lie within 1.5% tolerance for all cases. Along with the high accuracy, through directly disclosing the exact geometrical dimensions of NW arrays, this method provides an effective and efficient route for high-performance photovoltaic design. PMID:28425488

  15. Depth Optimization Study

    DOE Data Explorer

    Kawase, Mitsuhiro

    2009-11-22

    The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.

  16. Hydraulic containment: analytical and semi-analytical models for capture zone curve delineation

    NASA Astrophysics Data System (ADS)

    Christ, John A.; Goltz, Mark N.

    2002-05-01

    We present an efficient semi-analytical algorithm that uses complex potential theory and superposition to delineate the capture zone curves of extraction wells. This algorithm is more flexible than previously published techniques and allows the user to determine the capture zone for a number of arbitrarily positioned extraction wells pumping at different rates. The algorithm is applied to determine the capture zones and optimal well spacing of two wells pumping at different flow rates and positioned at various orientations to the direction of regional groundwater flow. The algorithm is also applied to determine capture zones for non-colinear three-well configurations as well as to determine optimal well spacing for up to six wells pumping at the same rate. We show that the optimal well spacing is found by minimizing the difference in the stream function evaluated at the stagnation points.
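
    The superposition idea is compact enough to sketch for the simplest configuration of one extraction well in uniform regional flow; the regional flow and well strength below are hypothetical, and quantities are per unit aquifer thickness.

```python
import math

# One extraction well at the origin in uniform flow U along +x:
#   W(z) = U*z - (Q / (2*pi)) * log(z)
U = 1.0e-5  # regional specific discharge, m/s (hypothetical)
Q = 2.0e-4  # extraction rate per unit thickness, m^2/s (hypothetical)

def conj_velocity(z):
    """dW/dz = U - Q/(2*pi*z); the stagnation point is its zero."""
    return U - Q / (2.0 * math.pi * z)

# Classic single-well results that follow from setting dW/dz = 0:
x_stag = Q / (2.0 * math.pi * U)  # stagnation point, downstream of well
half_width = Q / (2.0 * U)        # capture-zone half-width far upstream
```

    With several wells the logarithmic terms are simply summed and the stagnation points found numerically, which is the multi-well generalization the algorithm in the abstract automates.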

  17. Micro-focused ultrasonic solid-liquid extraction (μFUSLE) combined with HPLC and fluorescence detection for PAHs determination in sediments: optimization and linking with the analytical minimalism concept.

    PubMed

    Capelo, J L; Galesio, M M; Felisberto, G M; Vaz, C; Pessoa, J Costa

    2005-06-15

    Analytical minimalism is a concept that deals with the optimization of all stages of an analytical procedure so that it becomes less time, cost, sample, reagent and energy consuming. The guidelines provided in the USEPA extraction method 3550B recommend the use of focused ultrasound (FU), i.e., probe sonication, for the solid-liquid extraction of polycyclic aromatic hydrocarbons (PAHs), but ignore the principle of analytical minimalism. The problems related to the dead sonication zones, often present when high volumes are sonicated with a probe, are also not addressed. In this work, we demonstrate that successful extraction and quantification of PAHs from sediments can be done with low sample mass (0.125 g), low reagent volume (4 ml), short sonication time (3 min) and low sonication amplitude (40%). Two variables are here particularly taken into account for total extraction: (i) the design of the extraction vessel and (ii) the solvent used to carry out the extraction. Results showed that PAH recoveries (EPA priority list) ranged between 77 and 101%, accounting for more than 95% for most of the PAHs studied here, as compared with the values obtained after Soxhlet extraction. Taking into account the results reported in this work, we recommend a revision of the EPA guidelines for PAH extraction from solid matrices with focused ultrasound, so that these match the analytical minimalism concept.

  18. Applying an analytical method to study neutron behavior for dosimetry

    NASA Astrophysics Data System (ADS)

    Shirazi, S. A. Mousavi

    2016-12-01

    In this investigation, a new dosimetry process is studied by applying an analytical method. This novel process is associated with human liver tissue, which has a composition including water, glycogen, and other compounds. In this study, organic compound materials of the liver are decomposed into their constituent elements based upon the mass percentage and density of every element. The absorbed doses are computed by an analytical method in all constituent elements of liver tissue. This analytical method is introduced applying mathematical equations based on neutron behavior and neutron collision rules. The results show that the absorbed doses converge for neutron energies below 15 MeV. This method can be applied to study the interaction of neutrons in other tissues and to estimate the absorbed dose for a wide range of neutron energies.

  19. An equivalent method for optimization of particle tuned mass damper based on experimental parametric study

    NASA Astrophysics Data System (ADS)

    Lu, Zheng; Chen, Xiaoyi; Zhou, Ying

    2018-04-01

    A particle tuned mass damper (PTMD) is a creative combination of a widely used tuned mass damper (TMD) and an efficient particle damper (PD) in the vibration control area. The performance of a one-storey steel frame attached with a PTMD is investigated through free vibration and shaking table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effects is investigated, and it is shown that the attenuation level significantly depends on the filling ratio of particles. According to the experimental parametric study, some guidelines for optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, and it shows satisfactory agreement between the simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.

  20. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally intensive and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
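
    A minimal sketch of the kind of analytical performance model described: predicted runtime as compute work parallelized over GPUs plus serial transfer cost over the interconnect. All constants below are hypothetical, not the paper's measured values.

```python
# Toy analytical performance model: runtime = compute (parallel over
# GPUs) + data transfer (serial over the interconnect). Constants are
# hypothetical placeholders, not measurements from the paper.

def predicted_time(work_flops, gpus, gpu_flops, bytes_moved, link_bps):
    compute = work_flops / (gpus * gpu_flops)   # ideally parallel part
    transfer = bytes_moved / link_bps           # serial communication part
    return compute + transfer

def speedup(work_flops, gpus, gpu_flops, bytes_moved, link_bps):
    t1 = predicted_time(work_flops, 1, gpu_flops, bytes_moved, link_bps)
    tn = predicted_time(work_flops, gpus, gpu_flops, bytes_moved, link_bps)
    return t1 / tn

# When compute dominates, speedup approaches the GPU count (here 14):
print(speedup(1e15, 14, 1e12, 1e6, 1.25e8))
# When transfers dominate, speedup collapses toward 1:
print(speedup(1e9, 14, 1e12, 1e10, 1.25e8))
```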

  1. Seamless Digital Environment – Data Analytics Use Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna

    Multiple research efforts in the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program study the need for, and design of, an underlying architecture to support the increased amount and use of data in the nuclear power plant. More specifically, the LWRS research efforts on Digital Architecture for an Automated Plant, Automated Work Packages, Computer-Based Procedures for Field Workers, and Online Monitoring have all identified the need for a digital architecture and, more importantly, the need for a Seamless Digital Environment (SDE). An SDE provides a means to access multiple applications, gather the data points needed, conduct the analysis requested, and present the result to the user with minimal or no effort by the user. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting, the nuclear utilities identified the need for research focused on data analytics. The effort was to develop and evaluate use cases for data mining and analytics for employing information from plant sensors and databases to develop improved business analytics. The goal of the study is to research potential approaches to building an analytics solution for equipment reliability, on a small scale, focusing on either a single piece of equipment or a single system. The analytics solution will likely consist of a data integration layer, a predictive and machine learning layer, and a user interface layer that displays the output of the analysis in a straightforward, easy-to-consume manner. This report describes the use case study initiated by NITSL and conducted in a collaboration between Idaho National Laboratory, Arizona Public Service – Palo Verde Nuclear Generating Station, and NextAxiom Inc.

  2. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure development to follow a similar approach. While the development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it, a control strategy can be set.

  3. Holistic versus Analytic Evaluation of EFL Writing: A Case Study

    ERIC Educational Resources Information Center

    Ghalib, Thikra K.; Al-Hattami, Abdulghani A.

    2015-01-01

    This paper investigates the performance of holistic and analytic scoring rubrics in the context of EFL writing. Specifically, the paper compares EFL students' scores on a writing task using holistic and analytic scoring rubrics. The data for the study was collected from 30 participants attending an English undergraduate program in a Yemeni…

  4. A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.

    1994-01-01

    A simple three-DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. This analytical model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with the APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.
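
    The break-point structure described, continuous in value but discontinuous in slope across Mach numbers, is just piecewise-linear interpolation. A sketch with invented anchor values (not the Winged-Cone data):

```python
# Piecewise model of a coefficient vs Mach number: C0-continuous at the
# break points, C1-discontinuous. Anchor values below are invented for
# illustration; they are not the Winged-Cone aerodynamic data.
import bisect

MACH_BREAKS = [0.3, 1.0, 1.4, 4.0, 6.0, 10.0]   # table abscissae
CL_ALPHA    = [3.5, 4.8, 4.2, 2.0, 1.5, 1.2]    # hypothetical lift-curve slopes

def lift_curve_slope(mach):
    """Piecewise-linear interpolation between tabulated break points."""
    mach = min(max(mach, MACH_BREAKS[0]), MACH_BREAKS[-1])
    i = bisect.bisect_right(MACH_BREAKS, mach) - 1
    i = min(i, len(MACH_BREAKS) - 2)
    m0, m1 = MACH_BREAKS[i], MACH_BREAKS[i + 1]
    c0, c1 = CL_ALPHA[i], CL_ALPHA[i + 1]
    return c0 + (c1 - c0) * (mach - m0) / (m1 - m0)

print(lift_curve_slope(1.0))   # recovers the tabulated break-point value
```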

  5. Optimization study on structural analyses for the J-PARC mercury target vessel

    NASA Astrophysics Data System (ADS)

    Guan, Wenhai; Wakai, Eiichi; Naoe, Takashi; Kogawa, Hiroyuki; Wakui, Takashi; Haga, Katsuhiro; Takada, Hiroshi; Futakawa, Masatoshi

    2018-06-01

    The mercury target vessel of the spallation neutron source at the Japan Proton Accelerator Research Complex (J-PARC) is used for various materials science studies, and work is underway to achieve stable operation at 1 MW. Stable operation is very important for enhancing the structural integrity and durability of the target vessel, which is being developed for 1 MW operation. In the present study, to reduce thermal stress and relax stress concentrations more effectively in the existing target vessel in J-PARC, an optimization approach called the Taguchi method (TM) is applied to thermo-mechanical analysis. The ribs and their relative parameters, as well as the thickness of the mercury vessel and shrouds, were selected as important design parameters for this investigation. According to the analytical results of 18 model types designed using the TM, the optimal design was determined. It is characterized by discrete ribs and a thicker vessel wall than the current design. The maximum thermal stresses in the mercury vessel and the outer shroud were reduced by 14% and 15%, respectively. Furthermore, it was indicated that variations in rib width, left/right rib intervals, and shroud thickness could influence the maximum thermal stress performance. It is therefore concluded that the TM was useful for optimizing the structure of the target vessel and for reducing the thermal stress with a small number of calculation cases.
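
    The Taguchi-style parameter study can be illustrated with a toy orthogonal array: run a small set of design variants and rank factors by the range of their level-averaged response (main effects). The factors, levels, and stress values below are invented for illustration, not the J-PARC analysis results.

```python
# Toy Taguchi-style main-effects analysis. An L4 orthogonal array covers
# 3 two-level factors in 4 runs; factors are ranked by how much switching
# their level moves the mean response. All values here are invented.

# L4 orthogonal array: 3 two-level factors (levels 0/1) in 4 runs.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
FACTORS = ["rib_width", "rib_interval", "shroud_thickness"]
# Hypothetical maximum thermal stress (MPa) observed for each run:
stress = [320.0, 295.0, 310.0, 305.0]

def main_effects(array, response):
    effects = {}
    for f, name in enumerate(FACTORS):
        means = []
        for level in (0, 1):
            vals = [r for row, r in zip(array, response) if row[f] == level]
            means.append(sum(vals) / len(vals))
        effects[name] = means
    return effects

effects = main_effects(L4, stress)
# Factor whose level change moves the mean stress the most:
dominant = max(effects, key=lambda k: abs(effects[k][0] - effects[k][1]))
print(dominant, effects[dominant])
```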

  6. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives the advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  7. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés

    2015-09-28

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
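
    A generic sketch of the approach, with plain sinusoids standing in for the chirped pulse functions and a least-squares fitness standing in for the multi-target dissociation yield: the genome is the vector of basis coefficients, and a genetic algorithm searches it.

```python
# Sketch: pulse = linear combination of fixed basis functions; a simple
# genetic algorithm (elitism, one-point crossover, Gaussian mutation)
# searches the coefficients. Basis and target are illustrative stand-ins.
import math, random

random.seed(0)
T = [i / 50.0 for i in range(51)]                      # time grid on [0, 1]
BASIS = [lambda t, k=k: math.sin(math.pi * (k + 1) * t) for k in range(3)]
TARGET = [0.7 * math.sin(math.pi * t) - 0.2 * math.sin(2 * math.pi * t)
          for t in T]

def pulse(coeffs, t):
    return sum(c * b(t) for c, b in zip(coeffs, BASIS))

def fitness(coeffs):  # negative squared error: higher is better
    return -sum((pulse(coeffs, t) - y) ** 2 for t, y in zip(T, TARGET))

def evolve(pop_size=30, gens=60):
    pop = [[random.uniform(-1, 1) for _ in BASIS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                   # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(BASIS))      # one-point crossover
            child = [c + random.gauss(0, 0.05)         # Gaussian mutation
                     for c in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print([round(c, 2) for c in best])  # approximates the 0.7 / -0.2 / 0 target
```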

  8. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis techniques are then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
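
    The verification step, comparing analytically derived sensitivities against a finite difference technique, can be shown in miniature with a stand-in response function (not the aerodynamic analysis itself):

```python
# Miniature sensitivity check: a hand-derived analytic derivative versus
# a central finite difference. The response function is a stand-in for
# the actual aerodynamic analysis.
import math

def response(x):                 # stand-in "analysis": drag-like metric
    return x ** 2 * math.exp(-x)

def analytic_sensitivity(x):     # d(response)/dx, derived by hand
    return (2 * x - x ** 2) * math.exp(-x)

def fd_sensitivity(x, h=1e-5):   # central finite difference, O(h^2) error
    return (response(x + h) - response(x - h)) / (2 * h)

x = 1.3
print(analytic_sensitivity(x), fd_sensitivity(x))  # agree to ~1e-10
```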

  9. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.

  10. Optimal symmetric flight studies

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.

    1985-01-01

    Several topics in optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed and calculations for a high-performance aircraft presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored and some quirks in optimal flight characteristics peculiar to the model uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of zeroth-order approximation by special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.

  11. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    NASA Astrophysics Data System (ADS)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelfbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  12. Behavior Analytic Contributions to the Study of Creativity

    ERIC Educational Resources Information Center

    Kubina, Richard M., Jr.; Morrison, Rebecca S.; Lee, David L.

    2006-01-01

    As researchers continue to study creativity, a behavior analytic perspective may provide new vistas by offering an additional perspective. Contemporary behavior analysis began with B. F. Skinner and offers a selectionist approach to the scientific investigation of creativity. Behavior analysis contributes to the study of creativity by…

  13. Concurrence of big data analytics and healthcare: A systematic review.

    PubMed

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption in healthcare. It also intends to identify the strategies to overcome the challenges. A systematic search of the articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. The articles on Big Data analytics in healthcare published in English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds its application in clinical decision support, optimization of clinical operations, and reduction of the cost of care; (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. 
This review study unveils that there is a paucity of information on evidence of real-world use of

  14. Aerodynamic shape optimization of wing and wing-body configurations using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. Recently, the method has been implemented for both potential flows and flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can more easily be extended to treat general configurations. Here results are presented both for the optimization of a swept wing using an analytic mapping, and for the optimization of wing and wing-body configurations using a general mesh.
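
    The control-theory machinery can be illustrated in miniature with an adjoint gradient: for a state equation K(a)u = f and cost J = c·u, one extra (adjoint) solve yields dJ/da regardless of the number of design variables. The 2x2 system below is a toy stand-in, not the paper's flow equations or mapping-function control.

```python
# Toy adjoint-gradient demonstration. State equation K(a) u = f, cost
# J = c . u; since K here is symmetric and dK/da = I, the gradient is
# dJ/da = -lambda . u with K lambda = c. Verified against finite differences.

def solve2(K, rhs):                        # direct 2x2 linear solve
    (a, b), (c, d) = K
    det = a * d - b * c
    return [(d * rhs[0] - b * rhs[1]) / det,
            (-c * rhs[0] + a * rhs[1]) / det]

def K(a):                                  # toy "governing equations"
    return [[a + 2.0, -1.0], [-1.0, a + 2.0]]

f = [1.0, 0.0]
cost_vec = [1.0, 1.0]

def J(a):
    u = solve2(K(a), f)
    return cost_vec[0] * u[0] + cost_vec[1] * u[1]

def adjoint_gradient(a):
    u = solve2(K(a), f)                    # one state solve
    lam = solve2(K(a), cost_vec)           # one adjoint solve (K symmetric)
    # dK/da is the identity, so dJ/da = -lam^T (dK/da) u = -lam . u
    return -(lam[0] * u[0] + lam[1] * u[1])

a = 0.5
h = 1e-6
fd = (J(a + h) - J(a - h)) / (2 * h)       # finite-difference check
print(adjoint_gradient(a), fd)
```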

  15. An analytical study of double bend achromat lattice.

    PubMed

    Fakhri, Ali Akbar; Kant, Pradeep; Singh, Gurnam; Ghodke, A D

    2015-03-01

    In a double bend achromat, the Chasman-Green (CG) lattice represents the basic structure for low emittance synchrotron radiation sources. In the basic structure of the CG lattice, a single focusing quadrupole (QF) magnet is used to form an achromat. In this paper, this CG lattice is discussed and an analytical relation is presented, showing the limitation of the basic CG lattice in providing the theoretical minimum beam emittance in the achromatic condition. To satisfy the theoretical minimum beam emittance parameters, achromats having two-, three-, and four-quadrupole structures are presented. In these structures, different arrangements of QF and defocusing quadrupole (QD) magnets are used. An analytical approach treating the quadrupoles as thin lenses has been followed for studying these structures. A study of the Indus-2 lattice, in which a QF-QD-QF configuration has been adopted in the achromat part, is also presented.
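
    The thin-lens analysis can be sketched with 3x3 transfer matrices acting on (x, x', δ). In an idealized dipole-drift-QF-drift-dipole cell with thin elements, the dispersion generated by the first dipole closes when the quadrupole focal length equals half the drift length; this is a textbook thin-lens result used here for illustration, not the paper's derivation.

```python
# Thin-lens dispersion tracking through a toy double-bend cell using 3x3
# transfer matrices on (x, x', delta). With thin dipoles of kick theta and
# drifts of length L, the cell is achromatic when f = L / 2.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def drift(L):            # field-free drift of length L
    return [[1, L, 0], [0, 1, 0], [0, 0, 1]]

def thin_quad(f):        # horizontally focusing thin quadrupole
    return [[1, 0, 0], [-1.0 / f, 1, 0], [0, 0, 1]]

def thin_dipole(theta):  # thin dipole: dispersive kick theta per unit delta
    return [[1, 0, 0], [0, 1, theta], [0, 0, 1]]

def cell_dispersion(L, f, theta):
    M = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
    for elem in (thin_dipole(theta), drift(L), thin_quad(f),
                 drift(L), thin_dipole(theta)):
        M = mat_mul(elem, M)
    # dispersion (eta, eta') at exit, for eta = eta' = 0 at entrance:
    return M[0][2], M[1][2]

eta, eta_p = cell_dispersion(L=2.0, f=1.0, theta=0.05)  # f = L/2
print(eta, eta_p)   # both ~0: the toy cell is achromatic
```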

  16. An assessment of separable fluid connector system parameters to perform a connector system design optimization study

    NASA Technical Reports Server (NTRS)

    Prasthofer, W. P.

    1974-01-01

    The key to optimization of design where there are a large number of variables, all of which may not be known precisely, lies in the mathematical tool of dynamic programming developed by Bellman. This methodology can lead to optimized solutions to the design of critical systems in a minimum amount of time, even when there are a great number of acceptable configurations to be considered. To demonstrate the usefulness of dynamic programming, an analytical method is developed for evaluating the relationship among existing numerous connector designs to find the optimum configuration. The data utilized in the study were generated from 900 flanges designed for six subsystems of the S-1B stage of the Saturn 1B space carrier vehicle.
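
    Bellman-style dynamic programming can be sketched for a staged design problem of the sort described: choose one configuration per stage to minimize total cost, where interface costs couple adjacent stages. The stages, options, and costs below are invented placeholders, not the Saturn IB flange data.

```python
# Toy Bellman dynamic program over design stages: minimize the sum of
# per-stage option costs plus stage-to-stage interface costs. All numbers
# are invented placeholders for illustration.

def optimize(stage_costs, transition_costs):
    """stage_costs[s][i]: cost of option i at stage s.
    transition_costs[s][i][j]: interface cost from option i (stage s) to
    option j (stage s+1). Returns (min total cost, chosen options)."""
    n = len(stage_costs)
    best = list(stage_costs[0])              # best cost ending in option i
    back = [[None] * len(stage_costs[0])]    # backpointers per stage
    for s in range(1, n):
        new_best, pointers = [], []
        for j, cj in enumerate(stage_costs[s]):
            cands = [best[i] + transition_costs[s - 1][i][j]
                     for i in range(len(best))]
            i_star = min(range(len(cands)), key=cands.__getitem__)
            new_best.append(cands[i_star] + cj)
            pointers.append(i_star)
        best, back = new_best, back + [pointers]
    j = min(range(len(best)), key=best.__getitem__)
    path = [j]                               # backtrack the optimal choices
    for s in range(n - 1, 0, -1):
        j = back[s][j]
        path.append(j)
    return min(best), path[::-1]

stage_costs = [[3, 5], [4, 1], [2, 6]]       # two options per stage
trans = [[[0, 2], [1, 0]],                   # stage 0 -> 1 interface costs
         [[0, 3], [2, 0]]]                   # stage 1 -> 2 interface costs
print(optimize(stage_costs, trans))
```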

  17. Analytic theory of alternate multilayer gratings operating in single-order regime.

    PubMed

    Yang, Xiaowei; Kozhevnikov, Igor V; Huang, Qiushi; Wang, Hongchang; Hand, Matthew; Sawhney, Kawal; Wang, Zhanshan

    2017-07-10

    Using the coupled wave approach (CWA), we introduce the analytical theory for an alternate multilayer grating (AMG) operating in the single-order regime, in which only one diffraction order is excited. Differing from previous studies analogizing AMGs to crystals, we conclude that a symmetrical structure, i.e., equal thickness of the two multilayer materials, is not the optimal design for an AMG and may result in a significant reduction in diffraction efficiency. The peculiarities of AMGs compared with other multilayer gratings are analyzed. The influence of the multilayer structure materials on diffraction efficiency is considered. The validity conditions of the analytical theory are also discussed.

  18. Retail video analytics: an overview and survey

    NASA Astrophysics Data System (ADS)

    Connell, Jonathan; Fan, Quanfu; Gabbur, Prasad; Haas, Norman; Pankanti, Sharath; Trinh, Hoang

    2013-03-01

    Today retail video analytics has gone beyond the traditional domain of security and loss prevention by providing retailers insightful business intelligence such as store traffic statistics and queue data. Such information allows for enhanced customer experience, optimized store performance, reduced operational costs, and ultimately higher profitability. This paper gives an overview of various camera-based applications in retail as well as the state-of-the-art computer vision techniques behind them. It also presents some of the promising technical directions for exploration in retail video analytics.

  19. Fast UPLC/PDA determination of squalene in Sicilian P.D.O. pistachio from Bronte: Optimization of oil extraction method and analytical characterization.

    PubMed

    Salvo, Andrea; La Torre, Giovanna Loredana; Di Stefano, Vita; Capocchiano, Valentina; Mangano, Valentina; Saija, Emanuele; Pellizzeri, Vito; Casale, Katia Erminia; Dugo, Giacomo

    2017-04-15

    A fast reversed-phase UPLC method was developed for squalene determination in Sicilian pistachio samples entered in the European register of Protected Designation of Origin (P.D.O.) products. In the present study the SPE procedure was optimized for squalene extraction prior to the UPLC/PDA analysis. The precision of the full analytical procedure was satisfactory, and the mean recoveries were 92.8±0.3% and 96.6±0.1% for the 25 and 50 mg L⁻¹ levels of addition, respectively. The selected chromatographic conditions allowed a very fast squalene determination; in fact, squalene was well separated in ∼0.54 min with good resolution. Squalene was detected in all the pistachio samples analyzed, and the levels ranged from 55.45 to 226.34 mg kg⁻¹. Comparing our results with those of other studies, it emerges that squalene contents in P.D.O. Sicilian pistachio samples were generally higher than those measured for samples of different geographic origins.

  20. New trends in astrodynamics and applications: optimal trajectories for space guidance.

    PubMed

    Azimov, Dilmurat; Bishop, Robert

    2005-12-01

    This paper presents recent results on the development of optimal analytic solutions to the variational problem of trajectory optimization and their application in the construction of on-board guidance laws. The importance of employing analytically integrated trajectories in mission design is discussed. It is assumed that the spacecraft is equipped with a power-limited propulsion system and moves in a central Newtonian field. Satisfaction of the necessary and sufficient conditions for optimality of trajectories is analyzed. All possible thrust arcs and corresponding classes of the analytical solutions are classified based on the propulsion system parameters and the performance index of the problem. The solutions are presented in a form convenient for applications in escape, capture, and interorbital transfer problems. Optimal guidance and neighboring optimal guidance problems are considered. It is shown that the analytic solutions can be used as reference trajectories in constructing guidance algorithms for the maneuver problems mentioned above. An illustrative example of a spiral trajectory that terminates on a given elliptical parking orbit is discussed.

  1. Continuous Optimization on Constraint Manifolds

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1988-01-01

    This paper demonstrates continuous optimization on the differentiable manifold formed by continuous constraint functions. The first order tensor geodesic differential equation is solved on the manifold in both numerical and closed analytic form for simple nonlinear programs. Advantages and disadvantages with respect to conventional optimization techniques are discussed.
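
    A numerical analogue of optimizing on a constraint manifold, sketched under simplifying assumptions: minimize a linear objective on the unit sphere by projecting the gradient onto the tangent plane and retracting back onto the manifold. This is a stand-in illustration, not the paper's analytic geodesic solution.

```python
# Projected gradient descent on the unit-sphere constraint manifold
# |x|^2 = 1, minimizing f(x) = x . c. The constrained minimizer is known
# analytically (x = -c/|c|), which lets us check the numerical result.
import math

def normalize(x):                       # retraction back onto the sphere
    n = math.sqrt(sum(v * v for v in x))
    return [v / n for v in x]

def minimize_on_sphere(c, steps=200, lr=0.1):
    x = normalize([1.0, 1.0, 1.0])
    for _ in range(steps):
        # project grad f = c onto the tangent plane at x
        dot = sum(ci * xi for ci, xi in zip(c, x))
        tangent_grad = [ci - dot * xi for ci, xi in zip(c, x)]
        x = normalize([xi - lr * g for xi, g in zip(x, tangent_grad)])
    return x

c = [2.0, 0.0, 1.0]
x_star = minimize_on_sphere(c)
print(x_star)   # approaches -c/|c|, the constrained minimizer
```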

  2. Big Data Analytics with Datalog Queries on Spark.

    PubMed

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
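
    The kind of recursive query these systems accelerate can be shown in miniature: transitive closure, the classic Datalog program `tc(X,Y) :- edge(X,Y). tc(X,Y) :- tc(X,Z), edge(Z,Y).`, evaluated semi-naively to a fixpoint. Plain Python stands in here for the Spark-based evaluation.

```python
# Semi-naive fixpoint evaluation of Datalog transitive closure: each
# round joins only the newly derived tuples (delta) against the base
# edges, stopping when no new tuples appear.

edges = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edges):
    tc = set(edges)          # base rule: tc(X,Y) :- edge(X,Y)
    delta = set(edges)       # semi-naive: only join the new tuples
    while delta:
        new = {(x, w) for (x, y) in delta for (z, w) in edges if y == z}
        delta = new - tc
        tc |= delta
    return tc

print(sorted(transitive_closure(edges)))
```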

  3. Big Data Analytics with Datalog Queries on Spark

    PubMed Central

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2017-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296

  4. Computing the optimal path in stochastic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  5. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    NASA Astrophysics Data System (ADS)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10⁻¹⁵ are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  6. MEMS resonant load cells for micro-mechanical test frames: feasibility study and optimal design

    NASA Astrophysics Data System (ADS)

    Torrents, A.; Azgin, K.; Godfrey, S. W.; Topalli, E. S.; Akin, T.; Valdevit, L.

    2010-12-01

    This paper presents the design, optimization and manufacturing of a novel micro-fabricated load cell based on a double-ended tuning fork. The device geometry and operating voltages are optimized for maximum force resolution and range, subject to a number of manufacturing and electromechanical constraints. All optimizations are enabled by analytical modeling (verified by selected finite element analyses) coupled with an efficient C++ code based on the particle swarm optimization algorithm. This assessment indicates that force resolutions of ~0.5-10 nN are feasible in vacuum (~1-50 mTorr), with force ranges as large as 1 N. Importantly, the optimal design for vacuum operation is independent of the desired range, ensuring versatility. Experimental verifications on a sub-optimal device fabricated using silicon-on-glass technology demonstrate a resolution of ~23 nN at a vacuum level of ~50 mTorr. The device demonstrated in this article will be integrated in a hybrid micro-mechanical test frame for unprecedented combinations of force resolution and range, displacement resolution and range, optical (or SEM) access to the sample, versatility and cost.
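    The optimization loop the paper describes (analytical device model searched by particle swarm optimization) can be sketched generically. The PSO core below is a minimal textbook implementation; the quadratic objective and all parameter values merely stand in for the paper's force-resolution model:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: minimize f over a box of bounds."""
    random.seed(0)  # reproducible runs for this sketch
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp each coordinate to the feasible box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in objective; the paper's objective is the device force resolution.
best, val = pso(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

    The same loop applies unchanged when f evaluates a closed-form device model under manufacturing constraints, which is what makes PSO attractive when gradients are awkward to derive.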

  7. Overcoming Barriers to Educational Analytics: How Systems Thinking and Pragmatism Can Help

    ERIC Educational Resources Information Center

    Macfadyen, Leah P.

    2017-01-01

    Learning technologies are now commonplace in education, and generate large volumes of educational data. Scholars have argued that analytics can and should be employed to optimize learning and learning environments. This article explores what is really meant by "analytics", describes the current best-known examples of institutional…

  8. Request Pattern, Pre-Analytical and Analytical Conditions of Urinalysis in Primary Care: Lessons from a One-Year Large-Scale Multicenter Study.

    PubMed

    Salinas, Maria; Lopez-Garrigos, Maite; Flores, Emilio; Leiva-Salinas, Carlos

    2018-06-01

    To study the urinalysis request, pre-analytical sample conditions, and analytical procedures. Laboratories were asked to provide the number of primary care urinalyses requested, and to fill out a questionnaire regarding pre-analytical conditions and analytical procedures. 110 laboratories participated in the study. 232.5 urinalyses/1,000 inhabitants were reported. 75.4% used the first morning urine. The sample reached the laboratory in less than 2 hours in 18.8%, between 2 - 4 hours in 78.3%, and between 4 - 6 hours in the remaining 2.9%. 92.5% combined the use of test strip and particle analysis, and only 7.5% used the strip exclusively. All participants except one performed automated particle analysis depending on strip results; in 16.2% the procedure was only manual. Urinalysis was highly requested. There was a lack of compliance with guidelines regarding time between micturition and analysis that usually involved the combination of strip followed by particle analysis.

  9. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.

  10. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of the Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
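    The maximized quantity ω^T L ω has a convenient edge form, ω^T L ω = Σ_{(i,j)∈E} (ω_i − ω_j)², so candidate rewirings can be scored cheaply. A minimal sketch (the toy graph and frequencies below are illustrative):

```python
def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    return L

def sync_score(omega, edges):
    """omega^T L omega, evaluated edge-by-edge as sum of (omega_i - omega_j)^2."""
    return sum((omega[i] - omega[j]) ** 2 for i, j in edges)

# Alternating frequencies favor wirings that connect unlike oscillators.
omega = [1.0, -1.0, 1.0, -1.0]
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]    # every edge joins unlike frequencies
pairs = [(0, 2), (1, 3), (0, 1), (2, 3)]   # two edges join like frequencies
```

    A hill-climb rewiring step then proposes an edge swap and keeps the move if the score increases (to favor synchronization) or decreases (to suppress it).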

  11. A Model for Developing Clinical Analytics Capacity: Closing the Loops on Outcomes to Optimize Quality.

    PubMed

    Eggert, Corinne; Moselle, Kenneth; Protti, Denis; Sanders, Dale

    2017-01-01

    Closed Loop Analytics© is receiving growing interest in healthcare as a term referring to information technology, local data and clinical analytics working together to generate evidence for improvement. The Closed Loop Analytics model consists of three loops corresponding to the decision-making levels of an organization and the associated data within each loop - Patients, Protocols, and Populations. The authors propose that each of these levels should utilize the same ecosystem of electronic health record (EHR) and enterprise data warehouse (EDW) enabled data, in a closed-loop fashion, with that data being repackaged and delivered to suit the analytic and decision support needs of each level, in support of better outcomes.

  12. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are imposed through equality constraints, which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  13. CENTRAL PLATEAU REMEDIATION OPTIMIZATION STUDY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BERGMAN, T. B.; STEFANSKI, L. D.; SEELEY, P. N.

    2012-09-19

    The Central Plateau remediation optimization study was conducted to develop an optimal sequence of remediation activities implementing the CERCLA decision on the Central Plateau. The study defines a sequence of activities that result in an effective use of resources from a strategic perspective when considering equipment procurement and staging, workforce mobilization/demobilization, workforce leveling, workforce skill-mix, and other remediation/disposition project execution parameters.

  14. Analytic integrable systems: Analytic normalization and embedding flows

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang

    In this paper we mainly study the existence of analytic normalization and the normal form of finite-dimensional complete analytic integrable dynamical systems. More precisely, we prove that any complete analytic integrable diffeomorphism F(x)=Bx+f(x) in (C^n,0), with B having no eigenvalues of modulus 1 and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. Meanwhile, we also prove that any complete analytic integrable differential system ẋ=Ax+f(x) in (C^n,0), with A having nonzero eigenvalues and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. Furthermore we prove that any complete analytic integrable diffeomorphism defined on an analytic manifold can be embedded in a complete analytic integrable flow. We note that parts of our results improve those of Moser in J. Moser, The analytic invariants of an area-preserving mapping near a hyperbolic fixed point, Comm. Pure Appl. Math. 9 (1956) 673-692, and of Poincaré in H. Poincaré, Sur l'intégration des équations différentielles du premier ordre et du premier degré, II, Rend. Circ. Mat. Palermo 11 (1897) 193-239. These results also improve the ones in Xiang Zhang, Analytic normalization of analytic integrable systems and the embedding flows, J. Differential Equations 244 (2008) 1080-1092, in the sense that the linear part of the systems can be nonhyperbolic, and the one in N.T. Zung, Convergence versus integrability in Poincaré-Dulac normal form, Math. Res. Lett. 9 (2002) 217-228, in the way that our paper presents the concrete expression of the normal form in a restricted case.

  15. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of PDFs, and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
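    The kind of replacement advocated here can be seen on the simplest competing-event example: for two competing exponential waiting times, the relative frequency with which event 1 fires first has the closed form λ₁/(λ₁+λ₂), which an MC loop only approximates at far greater cost. The rates and sample size below are illustrative:

```python
import random

def mc_relative_frequency(lam1, lam2, n=10000):
    """Monte Carlo estimate of how often event 1 fires before event 2,
    with independent exponential waiting times of rates lam1 and lam2."""
    wins = 0
    for _ in range(n):
        t1 = random.expovariate(lam1)
        t2 = random.expovariate(lam2)
        wins += t1 < t2
    return wins / n

def analytic_relative_frequency(lam1, lam2):
    """Closed form for the same quantity: P(T1 < T2) = lam1 / (lam1 + lam2)."""
    return lam1 / (lam1 + lam2)

random.seed(1)  # reproducible MC run for this sketch
mc = mc_relative_frequency(2.0, 1.0)
exact = analytic_relative_frequency(2.0, 1.0)  # 2/3
```

    The MC estimate carries sampling error of order n^(-1/2), while the analytic expression is exact and costs one evaluation, which is the speedup the abstract refers to.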

  16. Towards an Analytic Foundation for Network Architecture

    DTIC Science & Technology

    2010-12-31

    In this project, we develop the analytic tools of stochastic optimization for wireless network design and apply them… Representative publication: …and Mung Chiang, "DaVinci: Dynamically Adaptive Virtual Networks for a Customized Internet," in Proc. ACM SIGCOMM CoNEXT Conference, December 2008.

  17. Analytical Studies on the Synchronization of a Network of Linearly-Coupled Simple Chaotic Systems

    NASA Astrophysics Data System (ADS)

    Sivaganesh, G.; Arulgnanam, A.; Seethalakshmi, A. N.; Selvaraj, S.

    2018-05-01

    We present explicit generalized analytical solutions for a network of linearly-coupled simple chaotic systems. Analytical solutions are obtained for the normalized state equations of a network of linearly-coupled systems driven by a common chaotic drive system. Two-parameter bifurcation diagrams revealing the various hidden synchronization regions, such as complete, phase and phase-lag synchronization, are identified using the analytical results. The synchronization dynamics and their stability are studied using phase portraits and the master stability function, respectively. Further, experimental results for linearly-coupled simple chaotic systems are presented to confirm the analytical results. The synchronization dynamics of a network of chaotic systems studied analytically are reported for the first time.

  18. Generalized Subset Designs in Analytical Chemistry.

    PubMed

    Surowiec, Izabella; Vikström, Ludvig; Hector, Gustaf; Johansson, Erik; Vikström, Conny; Trygg, Johan

    2017-06-20

    Design of experiments (DOE) is an established methodology in research, development, manufacturing, and production for screening, optimization, and robustness testing. Two-level fractional factorial designs remain the preferred approach due to high information content while keeping the number of experiments low. These types of designs, however, have never been extended to a generalized multilevel reduced design type capable of including both qualitative and quantitative factors. In this Article we describe a novel generalized fractional factorial design. In addition, it also provides complementary and balanced subdesigns analogous to a fold-over in two-level reduced factorial designs. We demonstrate how this design type can be applied with good results in three different applications in analytical chemistry including (a) multivariate calibration using microwave resonance spectroscopy for the determination of water in tablets, (b) stability study in drug product development, and (c) representative sample selection in clinical studies. This demonstrates the potential of generalized fractional factorial designs to be applied in many other areas of analytical chemistry where representative, balanced, and complementary subsets are required, especially when a combination of quantitative and qualitative factors at multiple levels exists.
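    The fold-over relationship referred to above can be made concrete in the classical two-level case: a half-fraction generated by the defining relation I = ABC and its fold-over (sign-reversed generated column) are complementary and together restore the full factorial. A sketch of that two-level construction (the generalized multilevel construction of the Article is more involved):

```python
import math
from itertools import product

def full_factorial(k):
    """All 2^k runs of a two-level design, levels coded -1/+1."""
    return [list(run) for run in product((-1, 1), repeat=k)]

def half_fraction(k):
    """2^(k-1) runs: the last factor is generated as the product of the
    others, i.e. defining relation I = AB...K."""
    return [run + [math.prod(run)] for run in full_factorial(k - 1)]

def fold_over(design):
    """Complementary half-fraction: reverse the sign of the generated column."""
    return [run[:-1] + [-run[-1]] for run in design]
```

    For k = 3 the fraction has 4 runs and its fold-over supplies the other 4, which is exactly the "complementary and balanced subdesigns" property the generalized design extends to multiple levels.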

  19. Query optimization for graph analytics on linked data using SPARQL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Seokyong; Lee, Sangkeun; Lim, Seung -Hwan

    2015-07-01

    Triplestores that support query languages such as SPARQL are emerging as the preferred and scalable solution to represent data and meta-data as massive heterogeneous graphs using Semantic Web standards. With increasing adoption, the desire to conduct graph-theoretic mining and exploratory analysis has also increased. Addressing that desire, this paper presents a solution that is the marriage of Graph Theory and the Semantic Web. We present software that can analyze Linked Data using graph operations such as counting triangles, finding eccentricity, testing connectedness, and computing PageRank directly on triple stores via the SPARQL interface. We describe the process of optimizing performance of the SPARQL-based implementation of such popular graph algorithms by reducing the space-overhead, simplifying iterative complexity and removing redundant computations by understanding query plans. Our optimized approach shows significant performance gains on triplestores hosted on stand-alone workstations as well as hardware-optimized scalable supercomputers such as the Cray XMT.
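    Triangle counting, the first graph operation listed, reduces to intersecting the neighbor sets of each edge's endpoints. The paper expresses the equivalent computation as SPARQL against the triplestore; the in-memory sketch below (toy edge list is illustrative) shows the operation itself:

```python
def count_triangles(edges):
    """Count triangles in an undirected graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # Each triangle {a, b, c} is seen once per edge, i.e. 3 times in total.
    total = sum(len(adj[u] & adj[v]) for u, v in edges)
    return total // 3

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]  # one triangle: a-b-c
```

    In SPARQL the same intersection becomes a three-way join of triple patterns, and the query-plan optimizations the paper describes target exactly the redundant work in such joins.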

  20. An Analytical Study of Icing Similitude for Aircraft Engine Testing. Revision

    DTIC Science & Technology

    1987-02-01

    An Analytical Study of Icing Similitude for Aircraft Engine Testing. C. Scott Bartlett, Sverdrup Technology, Inc. Report Nos. DOT/FAA/CT-86/35 and AEDC-TR-86-26. [OCR-damaged front matter; a recoverable table of modeling geometries pairs engine components (cowl, spinner, fan blade, fan stator, exit vane, probe) with approximating geometries (NACA 0012 airfoil, sphere, NACA 0012, …).]

  1. Torque-based optimal acceleration control for electric vehicle

    NASA Astrophysics Data System (ADS)

    Lu, Dongbin; Ouyang, Minggao

    2014-03-01

    The existing research on acceleration control mainly focuses on optimization of the velocity trajectory with respect to a criterion that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively. Acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field oriented control (FOC) method, and the electric drive system for the EV (including the PMSM, the inverter and the battery) is modeled in preference to a detailed consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP). This method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub motor drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control solved by the analytical algorithm and that solved by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
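    The dynamic-programming formulation can be sketched over a discretized speed grid. The per-step loss model below is purely illustrative (a drag-like term that grows with time spent in the step plus a torque-loss term that grows with acceleration); the paper's model additionally involves the motor, inverter and battery characteristics and a time penalty:

```python
def min_energy_dp(v_target, dv=1.0, accels=(0.5, 1.0, 2.0, 4.0)):
    """Backward DP on a speed grid: best[i] is the minimum (illustrative)
    energy needed to accelerate from speed i*dv up to v_target."""
    n = int(v_target / dv)
    best = [0.0] * (n + 1)    # terminal state costs nothing
    policy = [0.0] * n        # chosen acceleration at each speed step
    for i in range(n - 1, -1, -1):
        v = (i + 0.5) * dv    # mean speed over the step
        # step cost: drag-like loss * time in step + torque-like loss
        cands = [(0.05 * v ** 2 * (dv / a) + 2.0 * a * dv + best[i + 1], a)
                 for a in accels]
        best[i], policy[i] = min(cands)
    return best[0], policy

energy, policy = min_energy_dp(10.0)  # gentle launch, firmer at speed
```

    Even in this toy model the optimal acceleration grows with speed, mirroring an optimal torque-versus-speed curve. With a hard time constraint the state must be augmented with elapsed time, which is the expensive DP the abstract contrasts with the analytical algorithm.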

  2. Multivariate optimization of an analytical method for the analysis of dog and cat foods by ICP OES.

    PubMed

    da Costa, Silvânio Silvério Lopes; Pereira, Ana Cristina Lima; Passos, Elisangela Andrade; Alves, José do Patrocínio Hora; Garcia, Carlos Alexandre Borges; Araujo, Rennan Geovanny Oliveira

    2013-04-15

    Experimental design methodology was used to optimize an analytical method for determination of the mineral element composition (Al, Ca, Cd, Cr, Cu, Ba, Fe, K, Mg, Mn, P, S, Sr and Zn) of dog and cat foods. A two-level full factorial design was applied to define the optimal proportions of the reagents used for microwave-assisted sample digestion (2.0 mol L⁻¹ HNO3 and 6% m/v H2O2). A three-level factorial design for two variables was used to optimize the operational conditions of the inductively coupled plasma optical emission spectrometer employed for analysis of the extracts. A radiofrequency power of 1.2 kW and a nebulizer argon flow of 1.0 L min⁻¹ were selected. The limits of quantification (LOQ) were between 0.03 μg g⁻¹ (Cr, 267.716 nm) and 87 μg g⁻¹ (Ca, 373.690 nm). The trueness of the optimized method was evaluated by analysis of five certified reference materials (CRMs): wheat flour (NIST 1567a), bovine liver (NIST 1577), peach leaves (NIST 1547), oyster tissue (NIST 1566b), and fish protein (DORM-3). The recovery values obtained for the CRMs were between 80 ± 4% (Cr) and 117 ± 5% (Cd), with relative standard deviations (RSDs) better than 5%, demonstrating that the proposed method offered good trueness and precision. Ten samples of pet food (five each of cat and dog food) were acquired at supermarkets in Aracaju city (Sergipe State, Brazil). Concentrations in the dog food ranged between 7.1 mg kg⁻¹ (Ba) and 2.7 g kg⁻¹ (Ca), while for cat food the values were between 3.7 mg kg⁻¹ (Ba) and 3.0 g kg⁻¹ (Ca). The concentrations of Ca, K, Mg, P, Cu, Fe, Mn, and Zn in the food were compared with the guidelines of the United States' Association of American Feed Control Officials (AAFCO) and the Brazilian Ministry of Agriculture, Livestock, and Food Supply (Ministério da Agricultura, Pecuária e Abastecimento-MAPA).

  3. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.

    PubMed

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-03-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as a part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established by use of the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas others can be used as the assay system-specific reference intervals in China.
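    For context, the simplest way to derive a reference interval is nonparametric: take the central 95% of the sorted reference values. The study itself used a parametric method (modified Box-Cox transform) after latent abnormal values exclusion, so the sketch below is only a simplified stand-in:

```python
def reference_interval(values, lower=0.025, upper=0.975):
    """Nonparametric reference interval: the central 95% of the sorted
    reference values, with linear interpolation between closest ranks.
    (A simplified stand-in for the study's parametric Box-Cox method.)"""
    xs = sorted(values)
    n = len(xs)
    def pct(p):
        k = p * (n - 1)          # fractional rank of the percentile
        f = int(k)
        c = min(f + 1, n - 1)
        return xs[f] + (k - f) * (xs[c] - xs[f])
    return pct(lower), pct(upper)

lo, hi = reference_interval(list(range(1, 1001)))  # toy reference sample
```

    Nonparametric estimation needs large samples to pin down the tails, which is one reason the study's 3148 volunteers and parametric modeling matter.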

  4. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China

    PubMed Central

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li’an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-01-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as a part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box–Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established by use of the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas others can be used as the assay system-specific reference intervals in China. PMID:26945390

  5. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set), are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications to refinery production planning and a batch process scheduling problem are presented. PMID:21935263
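    For the pure interval (box) uncertainty set, the robust counterpart of a single linear constraint has a well-known closed form: a·x ≤ b for every a_j in [ā_j − â_j, ā_j + â_j] is equivalent to Σ ā_j x_j + Σ â_j |x_j| ≤ b. A minimal feasibility check (the numbers are illustrative):

```python
def robust_feasible_box(x, a_nom, a_dev, b):
    """True iff x satisfies a.x <= b for every realization of the
    coefficients in the box a_j in [a_nom_j - a_dev_j, a_nom_j + a_dev_j].
    Uses the counterpart: sum a_nom_j*x_j + sum a_dev_j*|x_j| <= b."""
    nominal = sum(an * xj for an, xj in zip(a_nom, x))
    protection = sum(ad * abs(xj) for ad, xj in zip(a_dev, x))
    return nominal + protection <= b

x = [1.0, 2.0]
ok = robust_feasible_box(x, a_nom=[1.0, 1.0], a_dev=[0.5, 0.5], b=5.0)   # 3.0 + 1.5 <= 5
bad = robust_feasible_box(x, a_nom=[1.0, 1.0], a_dev=[0.5, 0.5], b=4.0)  # 4.5 > 4
```

    The ellipsoidal and polyhedral sets studied in the paper replace the Σ â_j|x_j| protection term with a 2-norm or a budgeted term, respectively, trading conservatism against tractability.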

  6. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics

    PubMed Central

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-01-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379
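    One plausible instantiation of a deviation-based utility is a distance between the normalized aggregate distribution on the target subset and on the reference data; the L2 distance below is an illustrative stand-in (SeeDB considers several distance functions), and the toy count dictionaries are hypothetical:

```python
def deviation_utility(target_counts, reference_counts):
    """Deviation-based utility: L2 distance between the normalized
    distributions of an aggregate over the target subset and over the
    reference data. Higher deviation = more 'interesting' visualization."""
    def normalize(counts):
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}
    p, q = normalize(target_counts), normalize(reference_counts)
    keys = set(p) | set(q)
    return sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys) ** 0.5
```

    A recommender then scores each candidate (group-by attribute, aggregate) pair with this utility and returns the top-k, which is where the pruning and sharing optimizations pay off.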

  7. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.

    PubMed

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-09-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale : evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility : identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.

  8. Summary of Optimization Techniques That Can Be Applied to Suspension System Design

    DOT National Transportation Integrated Search

    1973-03-01

    Summaries are presented of the analytic techniques available for three levitated vehicle suspension optimization problems: optimization of passive elements for fixed configuration; optimization of a free passive configuration; optimization of a free ...

  9. Assessing analytical comparability of biosimilars: GCSF as a case study.

    PubMed

    Nupur, Neh; Singh, Sumit Kumar; Narula, Gunjan; Rathore, Anurag S

    2016-10-01

    The biosimilar industry is witnessing an unprecedented growth, with the newer therapeutics increasing in complexity over time. A key step towards development of a biosimilar is to establish analytical comparability with the innovator product, failing which the safety/efficacy profile of the product may be affected. Choosing appropriate analytical tools that can fulfil this objective by qualitatively and/or quantitatively assessing the critical quality attributes (CQAs) of the product is highly critical for establishing equivalence. These CQAs cover the primary and higher order structures of the product, product related variants and impurities, as well as process related impurities, and host cell related impurities. In the present work, we use such an analytical platform for assessing comparability of five approved Granulocyte Colony Stimulating Factor (GCSF) biosimilars (Emgrast, Lupifil, Colstim, Neukine and Grafeel) to the innovator product, Neupogen®. The comparability studies involve assessing structural homogeneity, identity, secondary structure, and product related modifications. Physicochemical analytical tools including peptide mapping with mass determination, circular dichroism (CD) spectroscopy, reverse phase chromatography (RPC) and size exclusion chromatography (SEC) have been used in this exercise. Bioactivity assessment includes comparison of relative potency through in vitro cell proliferation assays. The results from extensive analytical examination offer robust evidence of structural and biological similarity of the products under consideration with the pertinent innovator product. For the most part, the biosimilar drugs were found to be comparable to the innovator drug. One anomaly identified was that three of the biosimilars had an atypical variant that had been reported as an oxidized species in the literature; upon further investigation using RPC-FLD and ESI-MS, we found that it is likely a conformational variant of the biotherapeutic.

  10. Exact solution for an optimal impermeable parachute problem

    NASA Astrophysics Data System (ADS)

    Lupu, Mircea; Scheiber, Ernest

    2002-10-01

    In this paper, direct and inverse boundary problems are solved and analytical solutions are obtained for optimization problems in the case of some nonlinear integral operators. The plane potential flow of an inviscid, incompressible, unbounded fluid jet is modeled, which encounters a symmetrical, curvilinear obstacle: the deflector of maximal drag. Singular integral equations are derived for the direct and inverse problems, and the motion in the auxiliary canonical half-plane is obtained. Next, the optimization problem is solved analytically. The design of the optimal airfoil is performed and, finally, numerical computations concerning the drag coefficient and other geometrical and aerodynamical parameters are carried out. This model corresponds to the Helmholtz impermeable parachute problem.

  11. Semi-analytical and Numerical Studies on the Flattened Brazilian Splitting Test Used for Measuring the Indirect Tensile Strength of Rocks

    NASA Astrophysics Data System (ADS)

    Huang, Y. G.; Wang, L. G.; Lu, Y. L.; Chen, J. R.; Zhang, J. H.

    2015-09-01

    Based on two-dimensional elasticity theory, this study established a mechanical model under chordally opposing distributed compressive loads, in order to strengthen the theoretical foundation of the flattened Brazilian splitting test used for measuring the indirect tensile strength of rocks. The stress superposition method was used to obtain approximate analytic solutions for the stress components inside the flattened Brazilian disk. These analytic solutions were then verified through a comparison with numerical results from the finite element method (FEM). Based on the theoretical derivation, this research carried out a comparative study of the effect of the flattened loading angle on the stress values and the degree of stress concentration inside the disk. The results showed that the stress concentration near the loading points and the ratio of compressive to tensile stress inside the disk decreased dramatically as the flattened loading angle increased, avoiding crushing failure near the loading points of Brazilian disk specimens. However, the tensile stress value and the tensile region were only slightly reduced with increasing flattened loading angle. Furthermore, this study found that the optimal flattened loading angle was 20°-30°; loading angles that were too large or too small made it difficult to guarantee the central tensile splitting failure principle of the Brazilian splitting test. According to the Griffith strength failure criterion, a formula for calculating the indirect tensile strength of rocks was derived theoretically. The resulting theoretical indirect tensile strength closely coincided with existing experimental results. Finally, this paper simulated the fracture evolution process of rocks under different loading angles using the finite element software ANSYS. The modeling results showed that the flattened Brazilian splitting test using the optimal loading angle could guarantee the central tensile splitting failure of the disk specimens.

  12. An analytic model for footprint dispersions and its application to mission design

    NASA Technical Reports Server (NTRS)

    Rao, J. R. Jagannatha; Chen, Yi-Chao

    1992-01-01

    This is the final report on our recent research activities that are complementary to those conducted by our colleagues, Professor Farrokh Mistree and students, in the context of the Taguchi method. We have studied the mathematical model that forms the basis of the Simulation and Optimization of Rocket Trajectories (SORT) program and developed an analytic method for determining mission reliability with a reduced number of flight simulations. This method can be incorporated in a design algorithm to mathematically optimize different performance measures of a mission, thus leading to a robust and easy-to-use methodology for mission planning and design.

  13. Analytical approximation schemes for solving exact renormalization group equations in the local potential approximation

    NASA Astrophysics Data System (ADS)

    Bervillier, C.; Boisseau, B.; Giacomini, H.

    2008-02-01

    The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme for two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made. A similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in current studies of ERGEs (in particular in the Wilson-Polchinski case, for which they fail).

  14. Light distribution in diffractive multifocal optics and its optimization.

    PubMed

    Portney, Valdemar

    2011-11-01

    To expand a geometrical model of diffraction efficiency and its interpretation to the multifocal optic, and to introduce formulas for analyzing far and near light distribution, with application to multifocal intraocular lenses (IOLs) and to diffraction efficiency optimization. Medical device consulting firm, Newport Coast, California, USA. Experimental study. Application of a geometrical model to the kinoform (single-focus diffractive optical element) was expanded to a multifocal optic to produce analytical definitions of the light split between far and near images and of the light loss to other diffraction orders. The geometrical model gave a simple interpretation of the light split in a diffractive multifocal IOL. An analytical definition of the light split between far and near images, and of the light loss, was introduced as curve-fitting formulas. Several examples of application to common multifocal diffractive IOLs were developed, for example the change in light split with wavelength. The analytical definition of diffraction efficiency may assist in the optimization of multifocal diffractive optics that minimize light loss. Formulas for analyzing the light split between different foci of multifocal diffractive IOLs are useful for interpreting the dependence of diffraction efficiency on physical characteristics, such as the blaze heights of the diffractive grooves and the wavelength of light, as well as for optimizing multifocal diffractive optics. Disclosure is found in the footnotes. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
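    The kind of relation such formulas capture can be sketched with the standard scalar-diffraction sinc² efficiency of a linear-blaze kinoform (an assumption for illustration; this is textbook diffraction theory, not the geometrical model of the record above): a half-wave blaze splits the light roughly 40.5%/40.5% between the far (order 0) and near (order 1) foci, with roughly 19% lost to higher orders.

    ```python
    import math

    def sinc(x):
        """Normalized sinc: sin(pi*x)/(pi*x)."""
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

    def order_efficiency(m, alpha):
        """Scalar-theory efficiency of diffraction order m for a linear-blaze
        kinoform with peak phase depth alpha (in waves; alpha = 1 is a full wave)."""
        return sinc(m - alpha) ** 2

    # Half-wave blaze (alpha = 0.5): the classic bifocal diffractive-IOL split,
    # about 40.5% to each of the far (order 0) and near (order 1) foci.
    far = order_efficiency(0, 0.5)
    near = order_efficiency(1, 0.5)
    loss = 1.0 - far - near          # light lost to all other diffraction orders
    print(f"far={far:.3f} near={near:.3f} loss={loss:.3f}")
    ```

    Varying `alpha` (e.g. with wavelength, since the optical phase depth of a fixed groove height is wavelength-dependent) shifts this split, which is the dependence the record's formulas are used to optimize.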

  15. Semi Active Control of Civil Structures, Analytical and Numerical Studies

    NASA Astrophysics Data System (ADS)

    Kerboua, M.; Benguediab, M.; Megnounif, A.; Benrahou, K. H.; Kaoulala, F.

    A numerical example of the parallel R-L piezoelectric vibration shunt control, simulated with MATLAB®, is presented. An analytical study of the resistor-inductor (R-L) passive piezoelectric vibration shunt control of a cantilever beam was undertaken. Modal and strain analyses were performed by varying the material properties and geometric configurations of the piezoelectric transducer in relation to the structure, in order to maximize the mechanical strain produced in the piezoelectric transducer.

  16. Standardless quantification by parameter optimization in electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-11-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists of minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively.
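    The core idea, minimizing the quadratic differences between an experimental spectrum and an analytical model by optimizing the model's parameters, can be sketched with a toy example (a hypothetical noiseless two-peak spectrum; with a model that is linear in its amplitude parameters the least-squares problem reduces to 2x2 normal equations, whereas POEMA optimizes many nonlinear parameters):

    ```python
    import math

    def gauss(x, mu, sigma=2.0):
        """Unit-height Gaussian peak centered at mu."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

    # Hypothetical "experimental" spectrum: two characteristic peaks whose true
    # amplitudes (proportional to elemental concentrations) are 5.0 and 3.0.
    xs = [0.1 * i for i in range(400)]
    true_a, true_b = 5.0, 3.0
    spectrum = [true_a * gauss(x, 12.0) + true_b * gauss(x, 28.0) for x in xs]

    # Least squares: minimize sum_i (y_i - a*g1_i - b*g2_i)^2 over (a, b).
    # Linear-in-parameter model -> solve the 2x2 normal equations directly.
    g1 = [gauss(x, 12.0) for x in xs]
    g2 = [gauss(x, 28.0) for x in xs]
    s11 = sum(u * u for u in g1)
    s22 = sum(v * v for v in g2)
    s12 = sum(u * v for u, v in zip(g1, g2))
    b1 = sum(y * u for y, u in zip(spectrum, g1))
    b2 = sum(y * v for y, v in zip(spectrum, g2))
    det = s11 * s22 - s12 * s12
    a = (b1 * s22 - b2 * s12) / det
    b = (b2 * s11 - b1 * s12) / det
    print(f"fitted amplitudes: a={a:.3f}, b={b:.3f}")
    ```

    With noiseless data and an exact model, the fit recovers the true amplitudes; real spectra add noise, background, and nonlinear line-shape parameters, hence iterative optimization.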

  17. Design optimization studies using COSMIC NASTRAN

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.

    1993-01-01

    The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.

  18. An analytical/numerical correlation study of the multiple concentric cylinder model for the thermoplastic response of metal matrix composites

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.; Williams, Todd O.

    1993-01-01

    The utility of a recently developed analytical micromechanics model for the response of metal matrix composites under thermal loading is illustrated by comparison with the results generated using the finite-element approach. The model is based on the concentric cylinder assemblage consisting of an arbitrary number of elastic or elastoplastic sublayers with isotropic or orthotropic, temperature-dependent properties. The elastoplastic boundary-value problem of an arbitrarily layered concentric cylinder is solved using the local/global stiffness matrix formulation (originally developed for elastic layered media) and Mendelson's iterative technique of successive elastic solutions. These features of the model facilitate efficient investigation of the effects of various microstructural details, such as functionally graded architectures of interfacial layers, on the evolution of residual stresses during cool down. The available closed-form expressions for the field variables can readily be incorporated into an optimization algorithm in order to efficiently identify optimal configurations of graded interfaces for given applications. Comparison of residual stress distributions after cool down generated using finite-element analysis and the present micromechanics model for four composite systems with substantially different temperature-dependent elastic, plastic, and thermal properties illustrates the efficacy of the developed analytical scheme.

  19. Heliostat cost optimization study

    NASA Astrophysics Data System (ADS)

    von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus

    2016-05-01

    This paper presents a methodology for a heliostat cost optimization study. First different variants of small, medium sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called `Stellio'.
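    The final ranking step, comparing heliostat variants by levelised electricity cost, can be sketched with a generic annualized-cost LCOE formula (a common textbook form; the discount rate, lifetime, and cost/yield numbers below are illustrative assumptions, not values from the study):

    ```python
    def crf(rate, years):
        """Capital recovery factor: annualizes an up-front investment."""
        q = (1 + rate) ** years
        return rate * q / (q - 1)

    def lcoe(capex, annual_opex, annual_mwh, rate=0.07, years=25):
        """Levelised cost of electricity ($/MWh) = annualized cost / annual yield."""
        return (capex * crf(rate, years) + annual_opex) / annual_mwh

    # Illustrative trade-off only: a cheaper heliostat field that slightly
    # degrades optical quality (lower annual yield) can still lower the LCOE.
    base = lcoe(capex=600e6, annual_opex=12e6, annual_mwh=480_000)
    alt = lcoe(capex=540e6, annual_opex=12e6, annual_mwh=470_000)
    print(f"base {base:.1f} $/MWh  alt {alt:.1f} $/MWh")
    ```

    This is why cost and optical quality have to be evaluated jointly, as the study does, rather than minimizing heliostat cost alone.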

  20. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    PubMed

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
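    For orientation, a minimal sketch of the baseline global-best PSO that the consensus-based Trust-Tech methodology builds on (canonical PSO on a toy sphere function; this is not the authors' three-stage algorithm, and the inertia/acceleration constants are common textbook choices):

    ```python
    import random

    def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
        """Canonical global-best particle swarm optimization (minimization)."""
        rng = random.Random(seed)
        w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration weights
        xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]       # per-particle best positions
        pval = [f(x) for x in xs]
        g = min(range(n_particles), key=lambda i: pval[i])
        gbest, gval = pbest[g][:], pval[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vs[i][d] = (w * vs[i][d]
                                + c1 * r1 * (pbest[i][d] - xs[i][d])
                                + c2 * r2 * (gbest[d] - xs[i][d]))
                    xs[i][d] += vs[i][d]
                v = f(xs[i])
                if v < pval[i]:
                    pbest[i], pval[i] = xs[i][:], v
                    if v < gval:
                        gbest, gval = xs[i][:], v
        return gbest, gval

    sphere = lambda x: sum(t * t for t in x)
    best, val = pso(sphere)
    print(best, val)
    ```

    The methodology above replaces the single swarm with consensus-based swarms to locate promising regions, then hands the candidates to Trust-Tech and local solvers for refinement.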

  1. A Comparison of the Glass Meta-Analytic Technique with the Hunter-Schmidt Meta-Analytic Technique on Three Studies from the Education Literature.

    ERIC Educational Resources Information Center

    Hough, Susan L.; Hall, Bruce W.

    The meta-analytic techniques of G. V. Glass (1976) and J. E. Hunter and F. L. Schmidt (1977) were compared through their application to three meta-analytic studies from education literature. The following hypotheses were explored: (1) the overall mean effect size would be larger in a Hunter-Schmidt meta-analysis (HSMA) than in a Glass…

  2. The Quantum Approximation Optimization Algorithm for MaxCut: A Fermionic View

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2017-01-01

    Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of p steps in which a classical Hamiltonian, derived from the cost function, is applied, followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MAXCUT. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MAXCUT, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploiting symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be reduced accordingly. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one, in the sense of having no local optima.
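    The level-1 setting can be checked by brute force: a direct statevector simulation of QAOA on a small Ring of Disagrees with a coarse grid over the two parameters (a pure-Python toy under the standard QAOA conventions; for a ring, level-1 QAOA is expected to reach a cut fraction of about 3/4):

    ```python
    import math
    import cmath

    n = 6                                  # qubits in the ring ("Ring of Disagrees")
    edges = [(i, (i + 1) % n) for i in range(n)]
    dim = 1 << n
    # Cut value of each computational basis state z (bit u = side of the cut).
    cut = [sum(((z >> u) & 1) != ((z >> v) & 1) for u, v in edges)
           for z in range(dim)]

    def qaoa_expectation(gamma, beta):
        """<C> for level-1 QAOA on MaxCut, by direct statevector simulation."""
        amp = [1.0 / math.sqrt(dim)] * dim           # |+...+> starting state
        # Phase separator exp(-i*gamma*C) is diagonal in the computational basis.
        amp = [a * cmath.exp(-1j * gamma * cut[z]) for z, a in enumerate(amp)]
        # Mixer: RX(2*beta) = exp(-i*beta*X) on every qubit.
        c, s = math.cos(beta), math.sin(beta)
        for q in range(n):
            bit = 1 << q
            new = amp[:]
            for z in range(dim):
                if not z & bit:
                    a0, a1 = amp[z], amp[z | bit]
                    new[z] = c * a0 - 1j * s * a1
                    new[z | bit] = c * a1 - 1j * s * a0
            amp = new
        return sum(abs(a) ** 2 * cut[z] for z, a in enumerate(amp))

    # Coarse grid search over the two level-1 parameters (gamma, beta).
    grid = [k * math.pi / 16 for k in range(16)]
    frac = max(qaoa_expectation(g, b) for g in grid for b in grid) / len(edges)
    print(frac)   # fraction of edges cut; should be about 0.75 for the ring
    ```

    Even this tiny example shows why analytic parameter expressions matter: the grid search cost grows exponentially with p, while the analysis above collapses the search to a low-dimensional sub-manifold.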

  3. Optimal CINAHL search strategies for identifying therapy studies and review articles.

    PubMed

    Wong, Sharon S L; Wilczynski, Nancy L; Haynes, R Brian

    2006-01-01

    To design optimal search strategies for locating sound therapy studies and review articles in CINAHL in the year 2000. An analytic survey was conducted, comparing hand searches of 75 journals with retrievals from CINAHL for 5,020 candidate search terms, 17,900 term combinations for therapy, and 5,977 combinations for review articles. All articles were rated with purpose and quality indicators. Candidate search strategies were used in CINAHL, and the retrievals were compared with the results of the hand searches. The proposed search strategies were treated as "diagnostic tests" for sound studies, and the manual review of the literature was treated as the "gold standard." Operating characteristics of the search strategies were calculated. Of the 1,383 articles about treatment, 506 (36.6%) met basic criteria for scientific merit, and 127 (17.9%) of the 711 articles classified as reviews met the criteria for systematic reviews. For locating sound treatment studies, a three-term strategy maximized sensitivity at 99.4% but with compromised specificity at 58.3%, and a two-term strategy maximized specificity at 98.5% but with compromised sensitivity at 52.0%. For detecting systematic reviews, a three-term strategy maximized sensitivity at 91.3% while keeping specificity high at 95.4%, and a single-term strategy maximized specificity at 99.6% but with compromised sensitivity at 42.5%. Three-term strategies that balanced sensitivity and specificity achieved both above 91% for detecting sound treatment studies and above 76% for detecting systematic reviews. Search strategies combining indexing terms and text words can achieve high sensitivity and specificity for retrieving sound treatment studies and review articles in CINAHL.
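    The "diagnostic test" framing maps directly onto confusion-matrix arithmetic. A minimal sketch, with hypothetical counts chosen only to reproduce the reported 99.4%/58.3% operating point (these are not the paper's raw data):

    ```python
    def operating_characteristics(tp, fp, fn, tn):
        """Treat a search strategy as a diagnostic test against the
        hand-search gold standard."""
        return {
            "sensitivity": tp / (tp + fn),   # fraction of sound studies retrieved
            "specificity": tn / (tn + fp),   # fraction of non-sound articles excluded
            "precision": tp / (tp + fp),     # fraction of retrievals that are sound
        }

    # Hypothetical counts for one high-sensitivity candidate strategy:
    # 506 sound studies in the gold standard, 503 of them retrieved.
    m = operating_characteristics(tp=503, fp=3650, fn=3, tn=5100)
    print({k: round(v, 3) for k, v in m.items()})
    ```

    The asymmetry is the point of the study: a strategy tuned for sensitivity retrieves nearly every sound study but at low precision, so the sensitivity-maximizing and specificity-maximizing strategies serve different search purposes.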

  4. Eco-analytical Methodology in Environmental Problems Monitoring

    NASA Astrophysics Data System (ADS)

    Agienko, M. I.; Bondareva, E. P.; Chistyakova, G. V.; Zhironkina, O. V.; Kalinina, O. I.

    2017-01-01

    Among the problems common to all mankind, whose solutions influence the prospects of civilization, the monitoring of the ecological situation takes a very important place. Solving this problem requires a specific methodology based on eco-analytical comprehension of global issues. Eco-analytical methodology should help in searching for the optimum balance between environmental problems and accelerating scientific and technical progress. The fact that governments, corporations, scientists and nations focus on the production and consumption of material goods causes great damage to the environment. As a result, the activity of environmentalists develops quite spontaneously, as a complement to productive activities. Therefore, the challenge posed by environmental problems for science is the formation of eco-analytical reasoning and the monitoring of global problems common to the whole of humanity. The aim is thus to find the optimal trajectory of industrial development, preventing irreversible problems in the biosphere that could halt the progress of civilization.

  5. An analytical and experimental study of crack extension in center-notched composites

    NASA Technical Reports Server (NTRS)

    Beuth, Jack L., Jr.; Herakovich, Carl T.

    1987-01-01

    The normal stress ratio theory for crack extension in anisotropic materials is studied analytically and experimentally. The theory is applied within a microscopic-level analysis of a single center notch of arbitrary orientation in a unidirectional composite material. The bulk of the analytical work of this study applies an elasticity solution for an infinite plate with a center line crack to obtain critical stress and crack growth direction predictions. An elasticity solution for an infinite plate with a center elliptical flaw is also used to obtain qualitative predictions of the location of crack initiation on the border of a rounded notch tip. The analytical portion of the study includes the formulation of a new crack growth theory that includes local shear stress. Normal stress ratio theory predictions are obtained for notched unidirectional tensile coupons and unidirectional Iosipescu shear specimens. These predictions are subsequently compared to experimental results.

  6. Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation

    NASA Astrophysics Data System (ADS)

    Quiroz, Gregory

    Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.

  7. Analytical approximation and numerical simulations for periodic travelling water waves

    NASA Astrophysics Data System (ADS)

    Kalimeris, Konstantinos

    2017-12-01

    We present recent analytical and numerical results for two-dimensional periodic travelling water waves with constant vorticity. The analytical approach is based on novel asymptotic expansions. We obtain numerical results in two different ways: the first is based on the solution of a constrained optimization problem, and the second is realized as a numerical continuation algorithm. Both methods are applied on some examples of non-constant vorticity. This article is part of the theme issue 'Nonlinear water waves'.

  8. An analytic approach for the study of pulsar spindown

    NASA Astrophysics Data System (ADS)

    Chishtie, F. A.; Zhang, Xiyang; Valluri, S. R.

    2018-07-01

    In this work we develop an analytic approach to study pulsar spindown. We use the monopolar spindown model by Alvarez and Carramiñana (2004 Astron. Astrophys. 414 651–8), which assumes an inverse linear law of magnetic field decay of the pulsar, to extract an all-order formula for the spindown parameters using the Taylor series representation of Jaranowski et al (1998 Phys. Rev. D 58 6300). We further extend the analytic model to incorporate the quadrupole term that accounts for the emission of gravitational radiation, and obtain expressions for the period P and frequency f in terms of transcendental equations. We derive the analytic solution for pulsar frequency spindown in the absence of glitches. We examine the different cases that arise in the analysis of the roots in the solution of the non-linear differential equation for pulsar period evolution. We provide expressions for the spin-down parameters and find that the spindown values are in reasonable agreement with observations. A detection of gravitational waves from pulsars will be the next landmark in the field of multi-messenger gravitational wave astronomy.
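    The textbook baseline that such spindown models generalize is the power-law spindown law dν/dt = −Kν^n (braking index n = 3 for pure magnetic dipole radiation). A minimal sketch comparing its closed-form solution with direct numerical integration (the parameter values are illustrative assumptions, not fitted to any pulsar):

    ```python
    def nu_closed(t, nu0, K, n):
        """Closed-form solution of d(nu)/dt = -K * nu**n for n != 1:
        nu(t) = nu0 * (1 + (n-1)*K*nu0**(n-1)*t)**(-1/(n-1))."""
        return nu0 * (1.0 + (n - 1.0) * K * nu0 ** (n - 1.0) * t) ** (-1.0 / (n - 1.0))

    # Cross-check the closed form against straightforward Euler integration.
    nu0, K, n = 30.0, 1e-15, 3.0       # illustrative frequency (Hz) and constants
    T, steps = 1e9, 200_000            # ~30 years of evolution (seconds)
    nu, dt = nu0, T / steps
    for _ in range(steps):
        nu -= K * nu ** n * dt
    print(nu, nu_closed(T, nu0, K, n))
    ```

    The monopolar and gravitational-quadrupole terms of the model above modify the right-hand side of this equation, which is why the period evolution there ends up governed by transcendental rather than closed-form relations.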

  9. Optimization by means of an analytical heat transfer model of a thermal insulation for CSP applications based on radiative shields

    NASA Astrophysics Data System (ADS)

    Gaetano, A.; Roncolato, J.; Montorfano, D.; Barbato, M. C.; Ambrosetti, G.; Pedretti, A.

    2016-05-01

    The employment of new gaseous heat transfer fluids such as air or CO2, which are cheaper and environmentally friendly, is drawing more and more attention within the field of Concentrated Solar Power applications. However, despite these advantages, their use requires receivers with a larger heat transfer area and flow cross section, with a consequently greater volume of thermal insulation. The solid thermal insulations currently used have high thermal inertia, which is energetically penalizing during the daily transient phases faced by the main plant components (e.g. receivers). With the aim of overcoming this drawback, a thermal insulation based on radiative shields is presented in this study. Starting from an initial layout comprising a solid thermal insulation layer, the geometry was optimized to avoid the use of solid insulation while keeping performance and fulfilling the geometrical constraints. An analytical Matlab model was implemented to assess the thermal behavior of the system in terms of heat loss, taking into account conductive, convective and radiative contributions. Accurate 2D Computational Fluid Dynamics (CFD) simulations were run to validate the Matlab model, which was then used to select the most promising among three new designs.

  10. Analytic energy gradients for orbital-optimized MP3 and MP2.5 with the density-fitting approximation: An efficient implementation.

    PubMed

    Bozkaya, Uğur

    2018-03-15

    Efficient implementations of analytic gradients for the orbital-optimized MP3 and MP2.5 methods and their standard versions with the density-fitting approximation, denoted DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5, are presented. The DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5 methods are applied to a set of alkanes and noncovalent interaction complexes to compare their computational cost with that of the conventional MP3, MP2.5, OMP3, and OMP2.5 methods. Our results demonstrate that the density-fitted perturbation theory (DF-MP) methods considered substantially reduce the computational cost compared to the conventional MP methods. The efficiency of our DF-MP methods arises from the reduced input/output (I/O) time and the acceleration of gradient-related terms, such as the computation of particle density and generalized Fock matrices (PDMs and GFM), the solution of the Z-vector equation, the back-transformation of PDMs and GFM, and the evaluation of analytic gradients in the atomic orbital basis. Further, the application results show that the errors introduced by the DF approach are negligible. The mean absolute errors for bond lengths of a molecular set, with the cc-pCVQZ basis set, are 0.0001-0.0002 Å. © 2017 Wiley Periodicals, Inc.

  11. Modeling and design optimization of adhesion between surfaces at the microscale.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sylves, Kevin T.

    2008-08-01

    This research applies design optimization techniques to structures in adhesive contact where the dominant adhesive mechanism is the van der Waals force. Interface finite elements are developed for domains discretized by beam elements, quadrilateral elements or triangular shell elements. Example analysis problems comparing finite element results to analytical solutions are presented. These examples are then optimized, where the objective is matching a force-displacement relationship and the optimization variables are the interface element energy of adhesion or the width of beam elements in the structure. Several parameter studies are conducted and discussed.

  12. Analytical Study on Thermal and Mechanical Design of Printed Circuit Heat Exchanger

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Su-Jong; Sabharwall, Piyush; Kim, Eung-Soo

    2013-09-01

    The analytical methodologies for the thermal design, mechanical design and cost estimation of printed circuit heat exchangers are presented in this study. Three flow arrangements are taken into account: parallel flow, countercurrent flow and crossflow. For each flow arrangement, an analytical solution for the temperature profile of the heat exchanger is introduced. The size and cost of printed circuit heat exchangers for advanced small modular reactors, which employ various coolants such as sodium, molten salts, helium, and water, are also presented.
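    For the parallel and countercurrent arrangements, the analytical temperature profiles imply the standard effectiveness-NTU closed forms, which can be sketched as follows (textbook relations, not the study's specific correlations; cr = Cmin/Cmax):

    ```python
    import math

    def effectiveness_counterflow(ntu, cr):
        """Effectiveness-NTU relation for a pure counterflow exchanger."""
        if abs(cr - 1.0) < 1e-12:            # balanced-flow limiting case
            return ntu / (1.0 + ntu)
        e = math.exp(-ntu * (1.0 - cr))
        return (1.0 - e) / (1.0 - cr * e)

    def effectiveness_parallel(ntu, cr):
        """Same for parallel (co-current) flow."""
        return (1.0 - math.exp(-ntu * (1.0 + cr))) / (1.0 + cr)

    # For a given NTU and capacity ratio, counterflow outperforms parallel
    # flow, one reason compact PCHE cores are usually laid out countercurrent.
    ntu, cr = 3.0, 0.8
    ec = effectiveness_counterflow(ntu, cr)
    ep = effectiveness_parallel(ntu, cr)
    print(f"counterflow {ec:.3f}  parallel {ep:.3f}")
    ```

    Sizing then follows by inverting such relations: the required effectiveness fixes NTU, and NTU together with the overall heat transfer coefficient fixes the channel area, and hence core size and cost.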

  13. Analytical study of the liquid phase transient behavior of a high temperature heat pipe. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Roche, Gregory Lawrence

    1988-01-01

    The transient operation of the liquid phase of a high temperature heat pipe is studied. The study was conducted in support of advanced heat pipe applications that require reliable transport of heat at high temperatures over significant distances under a broad spectrum of operating conditions. The heat pipe configuration studied consists of a sealed cylindrical enclosure containing a capillary wick structure and a sodium working fluid. The wick is an annular flow channel formed between the interior wall of the enclosure and a concentric cylindrical tube of fine-pore screen. The study approach is analytical, through the solution of the governing equations. The energy equation is solved over the pipe wall and liquid region using the finite difference Peaceman-Rachford alternating direction implicit numerical method. The continuity and momentum equations are solved over the liquid region by the integral method. The energy equation and the liquid dynamics equation are tightly coupled due to the phase change process at the liquid-vapor interface. A kinetic theory model is used to define the phase change process in terms of the temperature jump between the liquid-vapor surface and the bulk vapor. Extensive auxiliary relations, including sodium properties as functions of temperature, are used to close the analytical system. The solution procedure is implemented in a FORTRAN algorithm with some optimization features to take advantage of the IBM System/370 Model 3090 vectorization facility. The code was intended for coupling to a vapor phase algorithm so that the entire heat pipe problem could be solved. As a test of code capabilities, the vapor phase was approximated in a simple manner.

  14. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    PubMed

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to choice on modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics is used as a testbed for evaluating the use of semantic technology.

  15. Automated Predictive Big Data Analytics Using Ontology Based Semantics

    PubMed Central

    Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.

    2017-01-01

    Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to choice on modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics is used as a testbed for evaluating the use of semantic technology. PMID:29657954

  16. Analytic model for academic research productivity having factors, interactions and implications

    PubMed Central

    2011-01-01

    Financial support is dear in academia and will tighten further. How can the research mission be accomplished within new restraints? A model is presented for evaluating source components of academic research productivity. It comprises six factors: funding; investigator quality; efficiency of the research institution; the research mix of novelty, incremental advancement, and confirmatory studies; analytic accuracy; and passion. Their interactions produce output and patterned influences between factors. Strategies for optimizing output are enabled. PMID:22130145

  17. Boron doped diamond sensor for sensitive determination of metronidazole: Mechanistic and analytical study by cyclic voltammetry and square wave voltammetry.

    PubMed

    Ammar, Hafedh Belhadj; Brahim, Mabrouk Ben; Abdelhédi, Ridha; Samet, Youssef

    2016-02-01

    The performance of a boron-doped diamond (BDD) electrode for the detection of metronidazole (MTZ), the most important drug of the 5-nitroimidazole group, was demonstrated using cyclic voltammetry (CV) and square wave voltammetry (SWV). A comparative study of the electrochemical response at BDD, glassy carbon, and silver electrodes was carried out. The process is pH-dependent. In neutral and alkaline media, one irreversible reduction peak related to the formation of the hydroxylamine derivative was registered, involving a total of four electrons. In acidic medium, a prepeak appears, probably related to the adsorption affinity of hydroxylamine at the electrode surface. The BDD electrode showed a more sensitive and reproducible analytical response than the other electrodes. The highest reduction peak current was registered at pH 11. Under optimal conditions, a linear analytical curve was obtained for MTZ concentrations in the range of 0.2-4.2 μmol/L, with a detection limit of 0.065 μmol/L. Copyright © 2015 Elsevier B.V. All rights reserved.
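The calibration workflow in this record (linear response over 0.2-4.2 μmol/L, with the detection limit derived from blank noise and the slope) can be sketched in a few lines. The peak-current values below are hypothetical, and the LOD = 3.3·s_blank/slope convention is an assumption on our part, not stated in the record:

```python
# Illustrative least-squares calibration for voltammetric peak currents.
def linfit(x, y):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

conc = [0.2, 1.0, 2.0, 3.0, 4.2]          # umol/L, spanning the reported range
current = [0.41, 2.05, 4.02, 6.01, 8.37]  # uA, hypothetical responses
m, b = linfit(conc, current)

# Detection limit from the common LOD = 3.3 * s_blank / slope convention,
# with an assumed blank standard deviation:
s_blank = 0.04
lod = 3.3 * s_blank / m
```

With these made-up responses the fitted slope is close to 2 μA per μmol/L, and the resulting LOD lands near the 0.065 μmol/L order of magnitude reported above.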

  18. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It presents methods for generating fuel-optimal two-impulse trajectories, which may originate at the earth or at a point in space, and fuel-optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraints. Davidon's variance method is used both in the accelerated gradient projection method and in the outer loop of the 3-impulse trajectory optimization problem.
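The record's 4-body Lambert problem is solved by Newton-Raphson iteration. A minimal scalar sketch of that iteration, applied here to a toy root-finding problem rather than the actual time-of-flight equation, is:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Scalar Newton-Raphson iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy stand-in for a time-of-flight equation: solve f(x) = x^2 - 2 = 0.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

In the Lambert setting, f would be the time-of-flight residual as a function of the iterated orbital parameter, with the derivative supplied analytically or by finite differences.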

  19. Solving the optimal attention allocation problem in manual control

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1976-01-01

    Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. The human modeling techniques are applied to study the hover control task for a CH-46 VTOL flight tested by NASA.
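A gradient algorithm for allocating attention across displays can be sketched as projected gradient descent on the attention simplex. The cost J(f) = Σ w_i / f_i below is a hypothetical stand-in for the closed-loop performance metric (not the model in the record), chosen because its constrained optimum, f_i ∝ √w_i, is known in closed form:

```python
def project_simplex(v):
    """Euclidean projection of v onto {f : f_i >= 0, sum f_i = 1}."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def optimal_attention(w, steps=5000, lr=0.01):
    """Minimize J(f) = sum w_i / f_i over the attention simplex
    by projected gradient descent (illustrative cost, not the OCM)."""
    n = len(w)
    f = [1.0 / n] * n
    for _ in range(steps):
        grad = [-wi / fi ** 2 for wi, fi in zip(w, f)]
        f = project_simplex([fi - lr * gi for fi, gi in zip(f, grad)])
        f = [max(fi, 1e-9) for fi in f]  # keep strictly positive for 1/f
    return f

# Two displays, the second four times as "important":
w = [1.0, 4.0]
f = optimal_attention(w)
# Analytic optimum: f_i proportional to sqrt(w_i) -> [1/3, 2/3]
```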

  20. An analytical study on groundwater flow in drainage basins with horizontal wells

    NASA Astrophysics Data System (ADS)

    Wang, Jun-Zhi; Jiang, Xiao-Wei; Wan, Li; Wang, Xu-Sheng; Li, Hailong

    2014-06-01

    Analytical studies on release/capture zones are often limited to a uniform background groundwater flow. In fact, for basin-scale problems, the undulating water table would lead to the development of hierarchically nested flow systems, which are more complex than a uniform flow. Under the premise that the water table is a replica of undulating topography and hardly influenced by wells, an analytical solution of hydraulic head is derived for a two-dimensional cross section of a drainage basin with horizontal injection/pumping wells. Based on the analytical solution, distributions of hydraulic head, stagnation points and flow systems (including release/capture zones) are explored. The superposition of injection/pumping wells onto the background flow field leads to the development of new internal stagnation points and new flow systems (including release/capture zones). Generally speaking, the existence of n injection/pumping wells would result in up to n new internal stagnation points and up to 2n new flow systems (including release/capture zones). The analytical study presented, which integrates traditional well hydraulics with the theory of regional groundwater flow, is useful in understanding basin-scale groundwater flow influenced by human activities.

  1. Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.

    PubMed

    Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U

    2015-05-01

    The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies that evaluated MM treatment strategies using mathematical decision-analytic models. We included studies published as full-text articles in English that assessed relevant clinical endpoints, and we summarized their methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis, and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation, and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
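Of the modeling approaches listed, a Markov state-transition (cohort) model is the easiest to sketch. The three health states and annual transition probabilities below are illustrative placeholders, not values from any reviewed study:

```python
# Hypothetical Markov cohort model with three health states.
states = ["progression_free", "progressed", "dead"]
P = {  # transition probabilities per annual cycle (illustrative)
    "progression_free": {"progression_free": 0.80, "progressed": 0.15, "dead": 0.05},
    "progressed":       {"progression_free": 0.00, "progressed": 0.70, "dead": 0.30},
    "dead":             {"progression_free": 0.00, "progressed": 0.00, "dead": 1.00},
}

cohort = {"progression_free": 1.0, "progressed": 0.0, "dead": 0.0}
life_years = 0.0
for _ in range(100):  # run until the cohort is (almost) fully absorbed
    life_years += cohort["progression_free"] + cohort["progressed"]
    cohort = {s: sum(cohort[r] * P[r][s] for r in states) for s in states}
```

With these numbers the undiscounted life expectancy works out to 7.5 cycles (5 expected years progression-free plus 0.75 expected entries into the progressed state times 10/3 years each); attaching costs and utilities per state-cycle turns the same loop into a cost-effectiveness model.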

  2. An analytic performance model of disk arrays and its application

    NASA Technical Reports Server (NTRS)

    Lee, Edward K.; Katz, Randy H.

    1991-01-01

    As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are preferable to other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytic performance models of disk arrays are difficult to formulate due to the presence of queueing and fork-join synchronization; a disk array request is broken up into independent disk requests, all of which must complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving it. Finally, we apply the analytic model to derive an equation for the optimal unit of data striping in disk arrays.
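The fork-join effect can be illustrated with a zero-load building block: a striped request finishes only when the slowest of its k disk sub-requests finishes, and for independent exponential service times the mean of the maximum has a closed form. This is a textbook approximation for intuition, not the paper's full model:

```python
def mean_fork_join_exponential(k, mean_service):
    """Mean completion time of k parallel iid exponential disk requests.

    For independent Exp(mu) service times, E[max of k] = H_k / mu,
    where H_k is the k-th harmonic number: the zero-load building
    block often used in fork-join analyses.
    """
    h_k = sum(1.0 / i for i in range(1, k + 1))
    return h_k * mean_service

# A request striped over 8 disks (10 ms mean service each) must wait
# for the slowest disk, so its mean time exceeds a single disk's:
t8 = mean_fork_join_exponential(8, 10.0)
```

The harmonic-number growth (H_8 ≈ 2.72) shows why wider striping raises per-request latency even as it raises aggregate bandwidth, which is the tension behind the optimal striping unit derived in the record.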

  3. A new method to optimize natural convection heat sinks

    NASA Astrophysics Data System (ADS)

    Lampio, K.; Karvinen, R.

    2017-08-01

    The performance of a heat sink cooled by natural convection is strongly affected by its geometry, because buoyancy creates the flow. Our model utilizes analytical results for forced flow and convection, and only conduction in the solid, i.e., the base plate and fins, is solved numerically. Sufficient accuracy for calculating maximum temperatures in practical applications is demonstrated by comparing the results of our model with simple analytical and computational fluid dynamics (CFD) solutions. An essential advantage of our model is that it reduces calculation CPU time by many orders of magnitude compared with CFD. The shorter calculation time makes our model well suited for multi-objective optimization, which is the best choice for improving heat sink geometry, because many geometrical parameters with opposing effects influence the thermal behavior. In multi-objective optimization, optimal locations of components and optimal dimensions of the fin array can be found by simultaneously minimizing the heat sink maximum temperature, size, and mass. This paper presents the principles of the particle swarm optimization (PSO) algorithm and applies it to optimizing existing heat sinks.
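A minimal version of the PSO algorithm named in the record, applied to a toy two-variable objective standing in for the heat sink maximum temperature (all coefficients are illustrative defaults, not the paper's settings):

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization for a scalar objective on a box."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for "maximum temperature vs. fin dimensions": a shifted bowl.
best, best_val = pso(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                     [(-5.0, 5.0), (-5.0, 5.0)])
```

For the multi-objective case in the record (temperature, size, mass), each particle would carry a vector objective and the swarm would retain a Pareto archive instead of a single gbest.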

  4. Sensitive analytical method for simultaneous analysis of some vasoconstrictors with highly overlapped analytical signals

    NASA Astrophysics Data System (ADS)

    Nikolić, G. S.; Žerajić, S.; Cakić, M.

    2011-10-01

    Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when analytical signals are highly overlapped. A method with regression by partial least squares is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently associated in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. In order to determine the optimal number of factors necessary to obtain the calibration matrix by multivariate calibration, different parameters were evaluated. The adequate selection of the spectral regions proved to have an important effect on the number of factors. In order to simultaneously quantify both hydrochlorides among excipients, the spectral region between 250 and 290 nm was selected. Recovery for the vasoconstrictors was 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.

  5. Electron Beam Melting and Refining of Metals: Computational Modeling and Optimization

    PubMed Central

    Vutova, Katia; Donchev, Veliko

    2013-01-01

    Computational modeling offers an opportunity for better understanding and investigation of thermal transfer mechanisms. It can be used to optimize the electron beam melting process and to obtain new materials with improved characteristics that have many applications in the power industry, medicine, instrument engineering, electronics, etc. A time-dependent 3D axisymmetric heat model for simulating thermal transfer in metal ingots solidified in a water-cooled crucible during electron beam melting and refining (EBMR) is developed. The model predicts the change in the temperature field in the cast ingot during the interaction of the beam with the material. A modified Pismen-Rekford numerical scheme to discretize the analytical model is developed. The equation systems describing the thermal processes and the main characteristics of the developed numerical method are presented. In order to optimize the technological regimes, different criteria for better refinement and for obtaining dendrite crystal structures are proposed. Analytical problems of mathematical optimization are formulated, discretized, and heuristically solved by cluster methods. Using simulation results important for practice, suggestions can be made for EBMR technology optimization. The proposed tool is important and useful for studying, controlling, and optimizing EBMR process parameters and for improving the quality of the newly produced materials. PMID:28788351
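As a much-simplified stand-in for the heat model above, an explicit finite-difference step for the 1D heat equation shows the basic discretization idea (this is not the modified scheme used in the record, and all grid values are illustrative):

```python
# Explicit finite-difference step for the 1D heat equation u_t = alpha * u_xx,
# with fixed (Dirichlet) boundary temperatures.
def heat_step(u, alpha, dx, dt):
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability requires r <= 1/2"
    return ([u[0]] +
            [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
             for i in range(1, len(u) - 1)] +
            [u[-1]])

# Cooling of an initially hot interior between two cold (water-cooled) boundaries:
u = [0.0] + [100.0] * 9 + [0.0]
for _ in range(500):
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.4)
```

Implicit schemes such as the one referenced in the record trade this step-size stability limit (r ≤ 1/2) for the cost of solving a linear system per step.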

  6. Educational Optimism among Parents: A Pilot Study

    ERIC Educational Resources Information Center

    Räty, Hannu; Kasanen, Kati

    2016-01-01

    This study explored parents' (N = 351) educational optimism in terms of their trust in the possibilities of school to develop children's intelligence. It was found that educational optimism could be depicted as a bipolar factor with optimism and pessimism on the opposing ends of the same dimension. Optimistic parents indicated more satisfaction…

  7. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to the equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.

  8. Analytical study of robustness of a negative feedback oscillator by multiparameter sensitivity

    PubMed Central

    2014-01-01

    Background One of the distinctive features of biological oscillators such as circadian clocks and cell cycles is robustness, the ability to resume reliable operation in the face of different types of perturbations. In a previous study, we proposed multiparameter sensitivity (MPS) as an intelligible measure of robustness to fluctuations in kinetic parameters. Analytical solutions directly connect the mechanisms and kinetic parameters to dynamic properties such as period, amplitude and their associated MPSs. Although negative feedback loops are known as common structures in biological oscillators, analytical solutions have not been presented for a general model of negative feedback oscillators. Results We present analytical expressions for the period, amplitude and their associated MPSs for a general model of negative feedback oscillators. The analytical solutions are validated by comparing them with numerical solutions. The analytical solutions explicitly show how the dynamic properties depend on the kinetic parameters. The ratio of a threshold to the amplitude has a strong impact on the period MPS. As the ratio approaches one, the MPS increases, indicating that the period becomes more sensitive to changes in kinetic parameters. We present the first mathematical proof that the distributed time-delay mechanism contributes to making the oscillation period robust to parameter fluctuations. The MPS decreases with an increase in the feedback loop length (i.e., the number of molecular species constituting the feedback loop). Conclusions Since a general model of negative feedback oscillators was employed, the results shown in this paper are expected to hold for many biological oscillators. This study strongly supports the hypothesis that phosphorylation of clock proteins contributes to the robustness of circadian rhythms. The analytical solutions give synthetic biologists clues for designing gene oscillators with robust and desired periods.

  9. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.

    PubMed

    Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul

    2013-10-01

    Mathematical models can be used to study the effects of chemotherapy on tumor cells. In 1979, Goldie and Coldman proposed the first mathematical model to relate the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance. Its original idea has also been extended and further investigated in massive follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model to explain, with a simulation approach, why an alternating non-cross-resistant chemotherapy is optimal. Subsequently, in 1983, Goldie and Coldman proposed an extended stochastic model and provided a rigorous mathematical proof of their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments focused mainly on a process with symmetrical parameter settings, and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetrical assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to analyze some optimal policies for the model analytically. In addition, Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof of Goldie and Coldman's result. In addition to the theoretical derivation, numerical results are included to confirm the correctness of our work. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.

  11. Optimal control of HIV/AIDS dynamic: Education and treatment

    NASA Astrophysics Data System (ADS)

    Sule, Amiru; Abdullah, Farah Aini

    2014-07-01

    A mathematical model that describes the transmission dynamics of HIV/AIDS is developed. Optimal control representing education and treatment for this model is explored. The existence of optimal control is established analytically by the use of optimal control theory. Numerical simulations suggest that education and treatment for the infected have a positive impact on HIV/AIDS control.

  12. Streamflow variability and optimal capacity of run-of-river hydropower plants

    NASA Astrophysics Data System (ADS)

    Basso, S.; Botter, G.

    2012-10-01

    The identification of the capacity of a run-of-river plant that allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity that maximizes the produced energy as a function of the underlying flow duration curve and the minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity that maximizes the economic return from the construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, demonstrating the potential of the method as a flexible and simple design tool for practical application. The analytical model provides useful insight into the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
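The capacity choice described above can be sketched from a discrete flow record: the diverted volume for capacity c is Σ min(max(Q − Q_mef, 0), c), and the economically optimal capacity maximizes revenue minus capacity cost. All numbers below are hypothetical, and the linear cost model is an assumption for illustration:

```python
# Hypothetical daily streamflow sample (m^3/s) and a minimum environmental flow.
flows = [0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 6.0]
q_mef = 0.4          # flow that must remain in the river below the intake
energy_price = 1.0   # revenue per unit of diverted volume (arbitrary units)
capacity_cost = 4.5  # annualized cost per unit of installed capacity

def net_revenue(c):
    """Revenue from diverted volume minus (linearized) capacity cost."""
    diverted = sum(min(max(q - q_mef, 0.0), c) for q in flows)
    return energy_price * diverted - capacity_cost * c

# Brute-force search over candidate capacities:
candidates = [0.1 * k for k in range(1, 61)]
c_opt = max(candidates, key=net_revenue)
```

The optimum sits where the marginal value of extra capacity (price times the fraction of days with usable flow above c, read off the flow duration curve) drops below the marginal capacity cost, which is exactly the balance the record's analytical expressions capture.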

  13. Extraction, isolation, and purification of analytes from samples of marine origin--a multivariate task.

    PubMed

    Liguori, Lucia; Bjørsvik, Hans-René

    2012-12-01

    The development of a multivariate study for the quantitative analysis of six different polybrominated diphenyl ethers (PBDEs) in tissue of Atlantic Salmo salar L. is reported. An extraction, isolation, and purification process based on an accelerated solvent extraction system was designed, investigated, and optimized by means of statistical experimental design and multivariate data analysis and regression. An accompanying gas chromatography-mass spectrometry analytical method was developed for the identification and quantification of the analytes: BDE 28, BDE 47, BDE 99, BDE 100, BDE 153, and BDE 154. These PBDEs have been used in commercial blends as flame retardants for a variety of materials, including electronic devices, synthetic polymers, and textiles. The present study revealed that an extracting solvent mixture composed of hexane and CH₂Cl₂ (10:90) provided excellent recoveries of all six PBDEs studied herein. A somewhat lower polarity of the extracting solvent, hexane and CH₂Cl₂ (40:60), decreased the analyte recoveries, which nevertheless remained acceptable and satisfactory. The study demonstrates the necessity of investigating the extraction and purification process in detail in order to achieve quantitative isolation of the analytes from the specific matrix. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Analytical studies on the instabilities of heterogeneous intelligent traffic flow

    NASA Astrophysics Data System (ADS)

    Ngoduy, D.

    2013-10-01

    It has been widely reported in the literature that a small perturbation in traffic flow, such as a sudden deceleration of a vehicle, can lead to the formation of traffic jams without a clear bottleneck. These traffic jams are usually related to instabilities in traffic flow. The application of intelligent traffic systems is a potential solution to reduce the amplitude of such traffic instabilities or to eliminate their formation. A lot of research has been conducted to theoretically study the effect of intelligent vehicles, for example adaptive cruise control vehicles, using either computer simulation or analytical methods. However, most current analytical research applies only to single-class traffic flow. Accordingly, the main topic of this paper is to perform a linear stability analysis to find the stability threshold of heterogeneous traffic flow using microscopic models, in particular the effect of intelligent vehicles on heterogeneous (or multi-class) traffic flow instabilities. The analytical results show how the percentage of intelligent vehicles affects the stability of multi-class traffic flow.
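For a single-class baseline, the classic linear stability criterion of the Bando optimal-velocity car-following model (uniform flow is stable iff V'(h*) < a/2, where a is the driver sensitivity) can be checked numerically. Intelligent vehicles effectively raise a, which stabilizes the flow; the model and parameters below are standard illustrative choices, not those of the record:

```python
import math

def ov_function(h, v_max=30.0, h_c=25.0, w=10.0):
    """Tanh-shaped optimal velocity function (illustrative parameters)."""
    return 0.5 * v_max * (math.tanh((h - h_c) / w) + 1.0)

def ov_slope(h, eps=1e-6):
    """Central-difference estimate of V'(h)."""
    return (ov_function(h + eps) - ov_function(h - eps)) / (2.0 * eps)

def uniform_flow_stable(h_star, a):
    """Bando OVM linear stability criterion: stable iff V'(h*) < a / 2."""
    return ov_slope(h_star) < a / 2.0

# At the inflection point (h = 25 m) the slope is steepest, V' = 1.5 s^-1,
# so a sluggish driver (a = 1.0) is unstable while a quick one (a = 4.0) is stable.
```

A multi-class analysis like the record's replaces the single (V, a) pair with class-specific ones and examines the resulting mixed dispersion relation.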

  15. On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Gonzalo, J.; Domínguez, D.; López, D.

    2014-12-01

    From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of flights. Revenue is proportional to the cost per flight and to airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial intelligence resources to reach optimal flight planning. In parallel, advances in meteorology and communications allow almost real-time knowledge of atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that would soon allow this dynamic planning negotiation to become operational. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those trying to solve the problem using the Pontryagin principle, where influence parameters are added to state variables to form a split-condition differential equation problem. The system can be solved numerically -indirect optimisation- or using parameterised functions -direct optimisation-. On the other hand, numerical methods are based on Bellman's dynamic programming (or Dijkstra algorithms), which exploit the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions for the best method. Traditionally, analytic methods have been employed more for continuous problems, whereas numeric methods for discrete ones.
In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given in a
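The dynamic-programming branch described above can be sketched with Dijkstra's algorithm on a discretized cost grid, where each cell cost stands in for a hypothetical wind-adjusted fuel cost (the grid and costs below are invented for illustration):

```python
import heapq

def dijkstra_route(cost, start, goal):
    """Least-cost route on a grid where cost[i][j] is the (hypothetical,
    wind-adjusted) cost of entering cell (i, j)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        i, j = node
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < rows and 0 <= nj < cols:
                nd = d + cost[ni][nj]
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return float("inf")

# A head-wind band (middle column) makes the straight path more expensive,
# so the optimal route detours around it:
grid = [
    [1.0, 9.0, 1.0],
    [1.0, 1.0, 1.0],
    [1.0, 9.0, 1.0],
]
best = dijkstra_route(grid, (0, 0), (0, 2))
```

The concatenation property quoted above is exactly what makes this label-setting search valid: any prefix of an optimal route is itself optimal to its endpoint.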

  16. Finding Optimal Gains In Linear-Quadratic Control Problems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E., Jr.

    1990-01-01

    An analytical method based on Volterra factorization leads to new approximations for optimal control gains in the finite-time linear-quadratic control problem of a system having an infinite number of dimensions. The method circumvents the need to analyze and solve Riccati equations and provides a more transparent connection between the dynamics of the system and the optimal gain.
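For contrast with the Volterra-factorization approach, the standard route to finite-time LQR gains is the backward Riccati recursion, shown here for a scalar discrete-time system (a textbook method, not the one proposed in the record):

```python
def lqr_gains(a, b, q, r, qf, horizon):
    """Finite-horizon discrete LQR gains for scalar x' = a*x + b*u
    via the backward Riccati recursion (standard textbook method)."""
    p = qf
    gains = []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)            # gain at this stage
        gains.append(k)
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    gains.reverse()  # gains[0] applies at the initial time
    return gains

gains = lqr_gains(a=1.0, b=1.0, q=1.0, r=1.0, qf=0.0, horizon=50)
# Far from the terminal time the gain approaches its steady-state value,
# here (sqrt(5) - 1) / 2 for this scalar example.
```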

  17. Simulator for multilevel optimization research

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Young, K. C.

    1986-01-01

    A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.

  18. Combined control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Milman, M.; Bruno, R.; Scheid, R.; Gibson, S.

    1989-01-01

    An approach for combined control-structure optimization keyed to enhancing early design trade-offs is outlined and illustrated by numerical examples. The approach employs a homotopic strategy and appears to be effective for generating families of designs that can be used in these early trade studies. Analytical results were obtained for classes of structure/control objectives with linear quadratic Gaussian (LQG) and linear quadratic regulator (LQR) costs. For these, researchers demonstrated that global optima can be computed for small values of the homotopy parameter. Conditions for local optima along the homotopy path were also given. Details of two numerical examples employing the LQR control cost were given showing variations of the optimal design variables along the homotopy path. The results of the second example suggest that introducing a second homotopy parameter relating the two parts of the control index in the LQG/LQR formulation might serve to enlarge the family of Pareto optima, but its effect on modifying the optimal structural shapes may be analogous to the original parameter lambda.

  19. A Systematic Mapping on the Learning Analytics Field and Its Analysis in the Massive Open Online Courses Context

    ERIC Educational Resources Information Center

    Moissa, Barbara; Gasparini, Isabela; Kemczinski, Avanilde

    2015-01-01

    Learning Analytics (LA) is a field that aims to optimize learning through the study of dynamical processes occurring in the students' context. It covers the measurement, collection, analysis and reporting of data about students and their contexts. This study aims at surveying existing research on LA to identify approaches, topics, and needs for…

  20. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional of the second-order derivative of the pdf. As we intend to introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. The two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
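As a reference point for the plug-in bandwidth discussed above, Silverman's rule of thumb gives the closed-form bandwidth that plug-in methods refine by estimating the functional of the pdf's second derivative (this sketch is the simpler rule, not the record's analytical plug-in):

```python
import math
import random

def silverman_bandwidth(sample):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel:
    h = 1.06 * sigma * n^(-1/5). Plug-in methods replace the implicit
    Gaussian assumption with an estimate of the integrated squared
    second derivative of the pdf."""
    n = len(sample)
    mean = sum(sample) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return 1.06 * sigma * n ** (-0.2)

rng = random.Random(0)
sample = [rng.gauss(0.0, 1.0) for _ in range(1000)]
h = silverman_bandwidth(sample)
```

For a standard normal sample of n = 1000 this lands near h ≈ 0.27; for the multimodal or heavy-tailed distributions mentioned in the record, the rule oversmooths, which is precisely what motivates plug-in refinement.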

  1. Experimental and analytical study of high velocity impact on Kevlar/Epoxy composite plates

    NASA Astrophysics Data System (ADS)

    Sikarwar, Rahul S.; Velmurugan, Raman; Madhu, Velmuri

    2012-12-01

    In the present study, the impact behavior of Kevlar/Epoxy composite plates has been investigated experimentally, considering different thicknesses and lay-up sequences, and compared with analytical results. The effect of thickness and lay-up sequence on energy absorbing capacity has been studied for high velocity impact. Four lay-up sequences and four thickness values have been considered. Initial and residual velocities are measured experimentally to calculate the energy absorbing capacity of the laminates. The residual velocity of the projectile and the energy absorbed by the laminates are also calculated analytically. The results obtained from the analytical study are found to be in good agreement with the experimental results. It is observed from the study that the 0/90 lay-up sequence is the most effective for impact resistance. The delamination area is largest on the back side of the plate for all thickness values and lay-up sequences, and is largest for 0/90/45/-45 laminates compared with the other lay-up sequences.
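The residual-velocity calculation mentioned above is commonly based on an energy balance (the Lambert-Jonas relation with exponent 2); whether the record uses exactly this form is an assumption, and the ballistic limit and projectile mass below are hypothetical:

```python
import math

def residual_velocity(v_impact, v_ballistic_limit):
    """Energy-balance estimate of residual projectile velocity:
    v_r = sqrt(v_i^2 - v_bl^2) above the ballistic limit
    (Lambert-Jonas form with exponent p = 2)."""
    if v_impact <= v_ballistic_limit:
        return 0.0
    return math.sqrt(v_impact ** 2 - v_ballistic_limit ** 2)

def energy_absorbed(mass, v_impact, v_residual):
    """Kinetic energy absorbed by the laminate (J, SI units)."""
    return 0.5 * mass * (v_impact ** 2 - v_residual ** 2)

v_r = residual_velocity(400.0, 250.0)        # m/s, illustrative values
e_abs = energy_absorbed(0.008, 400.0, v_r)   # 8 g projectile -> 250 J
```

Comparing e_abs across lay-up sequences at fixed impact velocity is the analytical counterpart of the experimental energy-absorption comparison in the record.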

  2. Collisional evolution - an analytical study for the non steady-state mass distribution.

    NASA Astrophysics Data System (ADS)

    Vieira Martins, R.

    1999-05-01

    To study the collisional evolution of asteroidal groups one can use an analytical solution for self-similar collision cascades. This solution is suitable for studying the steady-state mass distribution of collisional fragmentation. Out of steady-state conditions, however, the solution is not satisfactory for some values of the collisional parameters. In fact, for some values of the exponent of the mass-distribution power law of an asteroidal group, and of its relation to the exponent of the function describing "how rocks break", the author arrives at singular points of the equation that describes the collisional evolution. These singularities appear because of approximations usually made in the laborious evaluation of the many integrals that arise in the analytical calculations; they concern the cutoffs for the smallest and largest bodies. The singularities place restrictions on the study of the analytical solution of the collisional equation. To overcome them, the author performed an algebraic computation taking the smallest and largest bodies into account and obtained analytical expressions for the integrals describing the collisional evolution without restrictions on the parameters. The new distribution, however, is more sensitive to the values of the collisional parameters. In particular, the steady-state solution for the differential mass distribution has exponents slightly different from 11/6 for the usual parameters in the asteroid belt. The sensitivity of this distribution to the parameters is analyzed for the usual values in asteroidal groups. With an expression for the mass distribution free of singularities, its time evolution can also be evaluated. The author arrives at an analytical expression given by a power series whose terms consist of a small parameter multiplied by the mass raised to an exponent that depends on the initial power-law distribution. This expression is a formal solution for the

  3. Examining the Use of a Visual Analytics System for Sensemaking Tasks: Case Studies with Domain Experts.

    PubMed

    Kang, Youn-Ah; Stasko, J

    2012-12-01

    While the formal evaluation of systems in visual analytics is still relatively uncommon, particularly rare are case studies of prolonged system use by domain analysts working with their own data. Conducting case studies can be challenging, but it can be a particularly effective way to examine whether visual analytics systems are truly helping expert users to accomplish their goals. We studied the use of a visual analytics system for sensemaking tasks on documents by six analysts from a variety of domains. We describe their application of the system along with the benefits, issues, and problems that we uncovered. Findings from the studies identify features that visual analytics systems should emphasize as well as missing capabilities that should be addressed. These findings inform design implications for future systems.

  4. Evaluation of performance of three different hybrid mesoporous solids based on silica for preconcentration purposes in analytical chemistry: From the study of sorption features to the determination of elements of group IB.

    PubMed

    Kim, Manuela Leticia; Tudino, Mabel Beatríz

    2010-08-15

    Several studies involving the physicochemical interaction of three silica-based hybrid mesoporous materials with metal ions of group IB have been performed in order to employ them for preconcentration purposes in the determination of traces of Cu(II), Ag(I) and Au(III). The three solids were obtained from mesoporous silica functionalized with 3-aminopropyl (APS), 3-mercaptopropyl (MPS) and N-[2-aminoethyl]-3-aminopropyl (NN) groups, respectively. Adsorption capacities for Au, Cu and Ag were calculated using Langmuir's isotherm model, and the optimal values for the retention of each element on each of the solids were found. Physicochemical data obtained under thermodynamic equilibrium and under kinetic conditions - imposed by flow-through experiments - allowed the design of simple analytical methodologies in which the solids were employed as fillings of microcolumns held in continuous systems coupled on-line to an atomic absorption spectrometer. In order to control the interaction between the filling and the analyte at short times (flow-through conditions), and thus its effect on the analytical signal and the presence of interferences, the initial adsorption velocities were calculated using the pseudo-second-order model. All these experiments allowed the comparison of the solids in terms of their analytical behaviour when facing the determination of the three elements. Under optimized conditions, mainly given by the features of the filling, the analytical methodologies developed in this work showed excellent performance, with limits of detection of 0.14, 0.02 and 0.025 microg L(-1) and RSD % values of 3.4, 2.7 and 3.1 for Au, Cu and Ag, respectively. A full discussion of the main findings on the metal ion/filling interaction will be provided. The analytical results for the determination of the three metals will also be presented. Copyright 2010 Elsevier B.V. All rights reserved.
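    The Langmuir isotherm used above for the adsorption capacities is a standard two-parameter model, q(C) = q_max·K·C / (1 + K·C). A minimal fitting sketch with synthetic data (illustrative values only, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, K):
    """Langmuir isotherm: equilibrium uptake q versus equilibrium
    concentration C (qmax = monolayer capacity, K = affinity constant)."""
    return qmax * K * C / (1.0 + K * C)

# Synthetic equilibrium data (illustrative only, not from the paper).
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])   # mg L^-1
rng = np.random.default_rng(1)
q_obs = langmuir(C, qmax=35.0, K=0.30) * (1 + 0.02 * rng.standard_normal(C.size))

# Nonlinear least-squares fit of (qmax, K) to the observed uptakes.
(qmax_fit, K_fit), _ = curve_fit(langmuir, C, q_obs, p0=(30.0, 0.1))
```

    The fitted q_max plays the role of the adsorption capacity reported per solid; the pseudo-second-order kinetic analysis mentioned in the abstract would be fitted analogously from uptake-versus-time data.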

  5. Analytical solution of a stochastic model of risk spreading with global coupling

    NASA Astrophysics Data System (ADS)

    Morita, Satoru; Yoshimura, Jin

    2013-11-01

    We study a stochastic matrix model to understand the mechanics of risk spreading (or bet hedging) by dispersion. Up to now, this model has been mostly dealt with numerically, except for the well-mixed case. Here, we present an analytical result that shows that optimal dispersion leads to Zipf's law. Moreover, we found that the arithmetic ensemble average of the total growth rate converges to the geometric one, because the sample size is finite.
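    The closing observation contrasts two notions of average growth. A toy Monte Carlo (not the authors' stochastic matrix model; rates and seed are illustrative) shows why dispersal matters for multiplicative growth:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000                        # generations
rates = np.array([0.6, 1.6])    # multiplicative growth in bad/good years

# One isolated population: its long-run per-generation growth is the
# geometric mean of the random rates (sample estimate via mean log-rate).
draws = rng.choice(rates, size=T)
geo = np.exp(np.mean(np.log(draws)))

# Many populations in independent environments, fully re-mixed each
# generation (idealized global dispersal): the ensemble total grows at
# the arithmetic mean rate instead.
arith = rates.mean()

# The geometric mean sqrt(0.6 * 1.6) is below 1 (eventual extinction),
# while the arithmetic mean 1.1 exceeds 1: risk spreading pays.
```

    The paper's point about finite samples is that with only finitely many dispersing subpopulations, the realized ensemble average drifts from the arithmetic toward the geometric mean; this sketch only shows the two limiting averages.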

  6. Thermal-Structural Optimization of Integrated Cryogenic Propellant Tank Concepts for a Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.

    2004-01-01

    A next generation reusable launch vehicle (RLV) will require thermally efficient and lightweight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and the structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature- and pressure-dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and the thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses as the temperature and pressure vary. The results from the structural sizing study show that combining deterministic and reliability optimization techniques can yield alternative, lighter designs than deterministic optimization methods alone.

  7. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

    Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied, followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can accordingly be reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
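    For orientation, the classical cost function that QAOA approximately maximizes can be evaluated exactly on small instances. The sketch below brute-forces MaxCut on the ring of disagrees (purely classical; no QAOA circuit is simulated):

```python
from itertools import product

def maxcut_ring(n):
    """Brute-force MaxCut value for the n-cycle ('ring of disagrees'):
    count edges whose endpoints fall on opposite sides of the cut."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    best = 0
    for bits in product((0, 1), repeat=n):
        cut = sum(bits[i] != bits[j] for i, j in edges)
        best = max(best, cut)
    return best

# Even rings can be perfectly 2-colored (cut = n); odd rings are
# frustrated and lose exactly one edge (cut = n - 1).
print(maxcut_ring(6), maxcut_ring(7))   # → 6 6
```

    Against this exact optimum one can measure the QAOA approximation ratio; for the ring, level-1 QAOA is known to reach 3/4 of the maximum cut.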

  8. On the Modeling and Management of Cloud Data Analytics

    NASA Astrophysics Data System (ADS)

    Castillo, Claris; Tantawi, Asser; Steinder, Malgorzata; Pacifici, Giovanni

    A new era is dawning in which vast amounts of data are subjected to intensive analysis in a cloud computing environment. Over the years, data about a myriad of things, ranging from user clicks to galaxies, have been accumulated, and continue to be collected, on storage media. The increasing availability of such data, along with the abundant supply of compute power and the urge to create useful knowledge, gave rise to a new data analytics paradigm in which data is subjected to intensive analysis, and additional data is created in the process. Meanwhile, a new cloud computing environment has emerged in which seemingly limitless compute and storage resources are provided to host computation and data for multiple users through virtualization technologies. Such a cloud environment is becoming the home for data analytics. Consequently, providing good run-time performance to data analytics workloads is an important issue for cloud management. In this paper, we provide an overview of the data analytics and cloud environment landscapes, and investigate the performance management issues related to running data analytics in the cloud. In particular, we focus on topics such as workload characterization, profiling analytics applications and their patterns of data usage, cloud resource allocation, placement of computation and data and their dynamic migration in the cloud, and performance prediction. In solving such management problems one relies on various run-time analytic models. We discuss approaches for modeling and optimizing the dynamic data analytics workload in the cloud environment. Throughout, we use the Map-Reduce paradigm as an illustration of data analytics.

  9. Analytical models integrated with satellite images for optimized pest management

    USDA-ARS?s Scientific Manuscript database

    The global field protection (GFP) was developed to protect and optimize pest management resources integrating satellite images for precise field demarcation with physical models of controlled release devices of pesticides to protect large fields. The GFP was implemented using a graphical user interf...

  10. Replica Analysis for Portfolio Optimization with Single-Factor Model

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described by a single-factor model, and compare the findings obtained from our proposed method with correlated return rates to those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we compare our approach with analytical procedures from operations research for minimizing the investment risk.

  11. Reverse transcription-polymerase chain reaction molecular testing of cytology specimens: Pre-analytic and analytic factors.

    PubMed

    Bridge, Julia A

    2017-01-01

    The introduction of molecular testing into cytopathology laboratory practice has expanded the types of samples considered feasible for identifying genetic alterations that play an essential role in cancer diagnosis and treatment. Reverse transcription-polymerase chain reaction (RT-PCR), a sensitive and specific technical approach for amplifying a defined segment of RNA after it has been reverse-transcribed into its DNA complement, is commonly used in clinical practice for the identification of recurrent or tumor-specific fusion gene events. Real-time RT-PCR (quantitative RT-PCR), a technical variation, also permits the quantitation of products generated during each cycle of the polymerase chain reaction process. This review addresses qualitative and quantitative pre-analytic and analytic considerations of RT-PCR as they relate to various cytologic specimens. An understanding of these aspects of genetic testing is central to attaining optimal results in the face of the challenges that cytology specimens may present. Cancer Cytopathol 2017;125:11-19. © 2016 American Cancer Society.

  12. Experimental/analytical approaches to modeling, calibrating and optimizing shaking table dynamics for structural dynamic applications

    NASA Astrophysics Data System (ADS)

    Trombetti, Tomaso

    This thesis presents an Experimental/Analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g's, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by a 407 MTS controller which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDTs (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer function between target and actual table accelerations was identified using experimental results and spectral estimation techniques. The power spectral density of the system input and the cross power spectral

  13. Optimal design of damping layers in SMA/GFRP laminated hybrid composites

    NASA Astrophysics Data System (ADS)

    Haghdoust, P.; Cinquemani, S.; Lo Conte, A.; Lecis, N.

    2017-10-01

    This work describes the optimization of the shape profiles of shape memory alloy (SMA) sheets in hybrid layered composite structures, i.e., slender beams or thin plates, designed for the passive attenuation of flexural vibrations. The paper starts with a description of the material and architecture of the investigated hybrid layered composite. An analytical method for evaluating the energy dissipation inside a vibrating cantilever beam is developed. The analytical solution is then followed by a shape-profile optimization of the inserts, using a genetic algorithm to minimize SMA material usage while maintaining a target level of structural damping. The delamination problem at the SMA/glass-fiber-reinforced-polymer interface is discussed. Finally, the proposed methodology is applied to study the hybridization of a wind turbine layered-structure blade with SMA material, in order to increase its passive damping.

  14. Seamless Digital Environment – Plan for Data Analytics Use Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna Helene; Bly, Aaron Douglas

    The U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program initiated research into what is needed to provide a roadmap or model for nuclear power plants to reference when building an architecture that can support the growing data supply and demand flowing through their networks. The Digital Architecture project's published report, Digital Architecture Planning Model (Oxstrand et al., 2016), discusses things to consider when building an architecture to support the increasing needs and demands of data throughout the plant. Once the plant is able to support the data demands, it still needs to be able to provide the data in an easy, quick, and reliable manner. A common method is to create a "one stop shop" application that a user can go to for all the data they need. This leads to the need to create a Seamless Digital Environment (SDE) to integrate all the "siloed" data. An SDE is the desired perception that should be presented to users by gathering the data from any data source (e.g., legacy applications and work management systems) without effort by the user. The goal for FY16 was to complete a feasibility study for data mining and analytics, employing information from computer-based-procedure-enabled technologies for use in developing improved business analytics. The research team collaborated with multiple organizations to identify use cases or scenarios that could be beneficial to investigate in a feasibility study. Many interesting potential use cases were identified throughout the FY16 activity. Unfortunately, due to factors outside the research team's control, none of the studies were initiated this year. However, the insights gained and the relationships built with both PVNGS and NextAxiom will be valuable when moving forward with future research. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting it was identified that it would be very beneficial to the

  15. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for the simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process that allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high-accuracy, low-order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are imposed through equality constraints, which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  16. Operations Optimization of Nuclear Hybrid Energy Systems

    DOE PAGES

    Chen, Jun; Garcia, Humberto E.; Kim, Jong Suk; ...

    2016-08-01

    We proposed a plan for nuclear hybrid energy systems (NHES) as an effective element to incorporate high penetration of clean energy. Our paper focuses on the operations optimization of two specific NHES configurations to address the variability arising from various markets and renewable generation. Both analytical and numerical approaches are used to obtain the optimization solutions. Key economic figures of merit are evaluated under optimized and constant operations to demonstrate the benefit of the optimization, which also suggests the economic viability of the considered NHES under the proposed operations optimizer. Finally, a sensitivity analysis on commodity price is conducted for a better understanding of the considered NHES.

  17. Operations Optimization of Nuclear Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Garcia, Humberto E.; Kim, Jong Suk

    We proposed a plan for nuclear hybrid energy systems (NHES) as an effective element to incorporate high penetration of clean energy. Our paper focuses on the operations optimization of two specific NHES configurations to address the variability arising from various markets and renewable generation. Both analytical and numerical approaches are used to obtain the optimization solutions. Key economic figures of merit are evaluated under optimized and constant operations to demonstrate the benefit of the optimization, which also suggests the economic viability of the considered NHES under the proposed operations optimizer. Finally, a sensitivity analysis on commodity price is conducted for a better understanding of the considered NHES.

  18. Heat addition to a subsonic boundary layer: A preliminary analytical study

    NASA Technical Reports Server (NTRS)

    Macha, J. M.; Norton, D. J.

    1971-01-01

    A preliminary analytical study of the effects of heat addition to the subsonic boundary layer flow over a typical airfoil shape is presented. This phenomenon becomes of interest in the space shuttle mission since heat absorbed by the wing structure during re-entry will be rejected to the boundary layer during the subsequent low speed maneuvering and landing phase. A survey of existing literature and analytical solutions for both laminar and turbulent flow indicate that a heated surface generally destabilizes the boundary layer. Specifically, the boundary layer thickness is increased, the skin friction at the surface is decreased and the point of flow separation is moved forward. In addition, limited analytical results predict that the angle of attack at which a heated airfoil will stall is significantly less than the stall angle of an unheated wing. These effects could adversely affect the lift and drag, and thus the maneuvering capabilities of booster and orbiter shuttle vehicles.

  19. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    NASA Astrophysics Data System (ADS)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is a process for finding the parameter (or parameters) that delivers an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been pursued by numerous researchers. A generic model is a model that can be applied to a wide variety of optimization problems. Using an object-oriented method, a generic model for optimization was constructed. Two optimization methods, simulated annealing and hill climbing, were used in constructing the model and then compared to identify the more effective one. The results show that both methods yielded the same objective-function value, with the hill-climbing-based model requiring the shorter running time.
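    A minimal sketch of the two methods the study compares, on an illustrative multimodal objective (the objective, step sizes, temperatures, and seeds are assumptions for illustration, not the paper's model):

```python
import math
import random

def hill_climb(f, x0, step, iters=3000, seed=0):
    """Greedy local search: accept a neighbour only if it improves f."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

def simulated_annealing(f, x0, step, iters=3000, t0=10.0, seed=0):
    """Hill climbing plus occasional uphill moves, accepted with
    probability exp(-increase / temperature); the temperature cools
    linearly toward zero, so the search greedifies over time."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-12
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Multimodal objective with many local minima; global minimum at x = 0.
f = lambda x: x * x + 4.0 * (1.0 - math.cos(2.0 * math.pi * x))

x_hc, f_hc = hill_climb(f, x0=2.0, step=0.5)
x_sa, f_sa = simulated_annealing(f, x0=2.0, step=0.5)
```

    On this landscape the greedy search stalls in the basin it starts in, while annealing's early uphill moves let it escape to a better optimum; this illustrates the standard trade-off between the two methods, not the paper's specific running-time finding.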

  20. DoE optimization of a mercury isotope ratio determination method for environmental studies.

    PubMed

    Berni, Alex; Baschieri, Carlo; Covelli, Stefano; Emili, Andrea; Marchetti, Andrea; Manzini, Daniela; Berto, Daniela; Rampazzo, Federico

    2016-05-15

    By using the experimental design (DoE) technique, we optimized an analytical method for the determination of mercury isotope ratios by means of cold-vapor multicollector ICP-MS (CV-MC-ICP-MS) to provide absolute Hg isotope ratio measurements with suitable internal precision. By running 32 experiments, the influence of mercury and thallium internal standard concentrations, total measuring time, and sample flow rate was evaluated. The method was optimized by varying the Hg concentration between 2 and 20 ng g(-1). The model identified correlations among the parameters that affect measurement precision, and predicts suitable sample measurement precision for Hg concentrations of 5 ng g(-1) and above. The method was successfully applied to samples of Manila clams (Ruditapes philippinarum) from the Marano and Grado lagoon (NE Italy), a coastal environment affected by long-term mercury contamination mainly due to mining activity. Results show different extents of both mass-dependent fractionation (MDF) and mass-independent fractionation (MIF) phenomena in clams according to their size and sampling sites in the lagoon. The method is fit for determinations on real samples, allowing the use of Hg isotope ratios to study mercury biogeochemical cycles in complex ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Insight and Action Analytics: Three Case Studies to Consider

    ERIC Educational Resources Information Center

    Milliron, Mark David; Malcolm, Laura; Kil, David

    2014-01-01

    Civitas Learning was conceived as a community of practice, bringing together forward-thinking leaders from diverse higher education institutions to leverage insight and action analytics in their ongoing efforts to help students learn well and finish strong. We define insight and action analytics as drawing, federating, and analyzing data from…

  2. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase

  3. A Field Study Program in Analytical Chemistry for College Seniors.

    ERIC Educational Resources Information Center

    Langhus, D. L.; Flinchbaugh, D. A.

    1986-01-01

    Describes an elective field study program at Moravian College (Pennsylvania) in which seniors in analytical chemistry obtain first-hand experience at Bethlehem Steel Corporation. Discusses the program's planning phase, some method development projects done by students, experiences received in laboratory operations, and the evaluation of student…

  4. NHEXAS PHASE I ARIZONA STUDY--METALS IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Urine data set contains analytical results for measurements of up to 6 metals in 176 urine samples over 176 households. Each sample was collected from the primary respondent within each household during Stage III of the NHEXAS study. The sample consists of the fir...

  5. NHEXAS PHASE I ARIZONA STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Blood data set contains analytical results for measurements of up to 2 metals in 165 blood samples over 165 households. Each sample was collected as a venous sample from the primary respondent within each household during Stage III of the NHEXAS study. The samples...

  6. NHEXAS PHASE I MARYLAND STUDY--METALS IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Urine data set contains analytical results for measurements of up to 3 metals in 376 urine samples over 80 households. Each sample was collected from the primary respondent within each household during the study and represented the first morning void of either Day ...

  7. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to fully study the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.

  8. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707
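    The optimized gradient method (OGM) discussed in the two records above has a compact form: a gradient step followed by a two-term momentum combination driven by a theta recursion. The sketch below is an illustrative implementation of that commonly published update on a toy quadratic; it is not code from the paper, and the problem data are invented.

```python
import math

def ogm(grad, L, x0, iters):
    """Optimized gradient method sketch: gradient step plus two-term momentum."""
    x = list(x0)   # momentum-combined iterates
    y = list(x0)   # plain gradient-step iterates
    theta = 1.0
    for _ in range(iters):
        g = grad(x)
        y_next = [xi - gi / L for xi, gi in zip(x, g)]  # gradient step
        theta_next = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * theta * theta))
        # OGM's second momentum term (y_next - x) is what distinguishes it
        # from Nesterov's fast gradient method.
        x = [yn + (theta - 1.0) / theta_next * (yn - yo)
                + theta / theta_next * (yn - xi)
             for yn, yo, xi in zip(y_next, y, x)]
        y = y_next
        theta = theta_next
    return x

# Toy smooth convex problem: f(v) = 0.5*(a*v1^2 + b*v2^2), so L = max(a, b).
a, b = 1.0, 25.0
grad = lambda v: [a * v[0], b * v[1]]
sol = ogm(grad, L=max(a, b), x0=[5.0, 5.0], iters=100)
print(sol)  # both coordinates approach the minimizer (0, 0)
```

    The O(1/k^2) worst-case bound on the cost decrease guarantees that after 100 iterations the function value here sits far below its starting value, regardless of the problem's conditioning.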

  9. Thermal conductivity of microporous layers: Analytical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Andisheh-Tadbir, Mehdi; Kjeang, Erik; Bahrami, Majid

    2015-11-01

    A new compact relationship is developed for the thermal conductivity of the microporous layer (MPL) used in polymer electrolyte fuel cells as a function of pore size distribution, porosity, and compression pressure. The proposed model is successfully validated against experimental data obtained from a transient plane source thermal constants analyzer. The thermal conductivities of carbon paper samples with and without MPL were measured as a function of load (1-6 bars) and the MPL thermal conductivity was found between 0.13 and 0.17 W m-1 K-1. The proposed analytical model predicts the experimental thermal conductivities within 5%. A correlation generated from the analytical model was used in a multi objective genetic algorithm to predict the pore size distribution and porosity for an MPL with optimized thermal conductivity and mass diffusivity. The results suggest that an optimized MPL, in terms of heat and mass transfer coefficients, has an average pore size of 122 nm and 63% porosity.

  10. Analytic study of a rolling sphere on a rough surface

    NASA Astrophysics Data System (ADS)

    Florea, Olivia A.; Rosca, Ileana C.

    2016-11-01

    This paper presents an analytic study of a sphere rolling on a rough horizontal plane under the action of its own gravity. Integrating the system of dynamical equations of motion requires finding a reference frame in which the equations reduce to simpler expressions and which, under certain significant hypotheses, permits the application of original methods of analytical integration. In technical applications, bodies may undergo free rolling motion or motion constrained by geometrical relations in assemblies of parts and machine parts. This study involves extensive investigation in the fields of tribology and applied dynamics, accompanied by experiments. Multiple recordings of several sphere trajectories, together with image processing and statistical treatment of the experimental data, revealed very good agreement between the theoretical findings and the experimental results.

  11. Lack of correlation between reaction speed and analytical sensitivity in isothermal amplification reveals the value of digital methods for optimization: validation using digital real-time RT-LAMP.

    PubMed

    Khorosheva, Eugenia M; Karymov, Mikhail A; Selck, David A; Ismagilov, Rustem F

    2016-01-29

    In this paper, we asked if it is possible to identify the best primers and reaction conditions based on improvements in reaction speed when optimizing isothermal reactions. We used digital single-molecule, real-time analyses of both speed and efficiency of isothermal amplification reactions, which revealed that improvements in the speed of isothermal amplification reactions did not always correlate with improvements in digital efficiency (the fraction of molecules that amplify) or with analytical sensitivity. However, we observed that the speeds of amplification for single-molecule (in a digital device) and multi-molecule (e.g. in a PCR well plate) formats always correlated for the same conditions. Also, digital efficiency correlated with the analytical sensitivity of the same reaction performed in a multi-molecule format. Our finding was supported experimentally with examples of primer design, the use or exclusion of loop primers in different combinations, and the use of different enzyme mixtures in one-step reverse-transcription loop-mediated amplification (RT-LAMP). Our results show that measuring the digital efficiency of amplification of single-template molecules allows quick, reliable comparisons of the analytical sensitivity of reactions under any two tested conditions, independent of the speeds of the isothermal amplification reactions. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Lack of correlation between reaction speed and analytical sensitivity in isothermal amplification reveals the value of digital methods for optimization: validation using digital real-time RT-LAMP

    PubMed Central

    Khorosheva, Eugenia M.; Karymov, Mikhail A.; Selck, David A.; Ismagilov, Rustem F.

    2016-01-01

    In this paper, we asked if it is possible to identify the best primers and reaction conditions based on improvements in reaction speed when optimizing isothermal reactions. We used digital single-molecule, real-time analyses of both speed and efficiency of isothermal amplification reactions, which revealed that improvements in the speed of isothermal amplification reactions did not always correlate with improvements in digital efficiency (the fraction of molecules that amplify) or with analytical sensitivity. However, we observed that the speeds of amplification for single-molecule (in a digital device) and multi-molecule (e.g. in a PCR well plate) formats always correlated for the same conditions. Also, digital efficiency correlated with the analytical sensitivity of the same reaction performed in a multi-molecule format. Our finding was supported experimentally with examples of primer design, the use or exclusion of loop primers in different combinations, and the use of different enzyme mixtures in one-step reverse-transcription loop-mediated amplification (RT-LAMP). Our results show that measuring the digital efficiency of amplification of single-template molecules allows quick, reliable comparisons of the analytical sensitivity of reactions under any two tested conditions, independent of the speeds of the isothermal amplification reactions. PMID:26358811
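    The "digital efficiency" that the two records above rely on (the fraction of loaded template molecules that actually amplify in a partitioned, single-molecule format) can be estimated with the standard Poisson correction used in digital assays. The sketch below compares two hypothetical reaction conditions; the partition counts and expected copy number are invented, and the helper names are ours, not the paper's.

```python
import math

def mean_copies_per_partition(positive, total):
    """Standard digital-assay Poisson correction: lambda = -ln(1 - positive/total)."""
    p = positive / total
    return -math.log(1.0 - p)

def digital_efficiency(positive, total, expected_copies):
    """Estimated fraction of loaded template molecules that amplified."""
    return mean_copies_per_partition(positive, total) / expected_copies

# Condition A amplifies more of the loaded molecules than condition B, so A
# should show better analytical sensitivity even if B's reaction runs faster.
eff_a = digital_efficiency(positive=480, total=1000, expected_copies=0.8)
eff_b = digital_efficiency(positive=300, total=1000, expected_copies=0.8)
print(eff_a > eff_b)  # True
```

    The point of the comparison is exactly the paper's finding: the partition counts carry sensitivity information that the time-to-positive of a bulk reaction does not.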

  13. Aerothermodynamic shape optimization of hypersonic blunt bodies

    NASA Astrophysics Data System (ADS)

    Eyi, Sinan; Yumuşak, Mine

    2015-07-01

    The aim of this study is to develop a reliable and efficient design tool that can be used in hypersonic flows. The flow analysis is based on the axisymmetric Euler/Navier-Stokes and finite-rate chemical reaction equations. The equations are coupled simultaneously and solved implicitly using Newton's method. The Jacobian matrix is evaluated analytically. A gradient-based numerical optimization is used. The adjoint method is utilized for sensitivity calculations. The objective of the design is to generate a hypersonic blunt geometry that produces the minimum drag with low aerodynamic heating. Bezier curves are used for geometry parameterization. The performances of the design optimization method are demonstrated for different hypersonic flow conditions.

  14. Analytical study of the reflection and transmission coefficient of the submarine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Guangli; Hao, Chongtao; Yao, Chen

    2018-05-01

    The analytical study of the reflection and transmission coefficient of the seafloor interface is essential for characterizing the ocean bottom in marine seismic exploration. Based on the boundary conditions at the seafloor interface, an analytical expression for the reflection and transmission coefficient at the submarine interface is derived in this study using the steady-state wave solution for an elastic wave in a homogeneous, isotropic medium. With this analytical expression, the characteristics of the reflection and transmission coefficient at the submarine interface are analysed and discussed in terms of the critical angles. The results show that the variation of the reflection and transmission coefficient with the incidence angle presents a "segmented" characteristic, with the critical angle as the dividing point. The amplitude and phase angle of the coefficient at the submarine interface change dramatically at the critical angle, which is related to the P- and S-wave velocities in the seabed layer. Compared with a stiff seabed, a soft seabed has a larger P-wave critical angle and lacks the converted S-wave critical angle, owing to the low P- and S-wave velocities in the solid seabed layer. By analysing the particular changes in the coefficient values at the critical angle, the reflection and transmission characteristics at the different incidence angles are obtained. Synthetic models of both stiff and soft seafloors are provided in this study to verify the analytical results. Finally, we compared our synthetic results with real data from the Gulf of Mexico, which validated our conclusions.

  15. Diffractive variable beam splitter: optimal design.

    PubMed

    Borghi, R; Cincotti, G; Santarsiero, M

    2000-01-01

    The analytical expression of the phase profile of the optimum diffractive beam splitter with an arbitrary power ratio between the two output beams is derived. The phase function is obtained by an analytical optimization procedure such that the diffraction efficiency of the resulting optical element is the highest for an actual device. Comparisons are presented with the efficiency of a diffractive beam splitter specified by a sawtooth phase function and with the pertinent theoretical upper bound for this type of element.

  16. Analytical performance of a versatile laboratory microscopic X-ray fluorescence system for metal uptake studies on argillaceous rocks

    NASA Astrophysics Data System (ADS)

    Gergely, Felicián; Osán, János; Szabó, B. Katalin; Török, Szabina

    2016-02-01

    Laboratory-scale microscopic X-ray fluorescence (micro-XRF) plays an increasingly important role in various fields where multielemental investigation of samples is indispensable. For geological samples, reasonable detection limits (LODs) and spatial resolution are necessary to identify trace element content at the microcrystalline level. The present study focuses on the analytical performance of a versatile laboratory-scale micro-XRF system with various options of X-ray sources and detectors, with the aim of finding the optimal experimental configuration in terms of sensitivities and LODs for selected elements in loaded petrographic thin sections. The method was tested in sorption studies involving thin sections prepared from cores of the Boda Claystone Formation, a potential site for a high-level radioactive waste repository. The ions loaded in the sorption measurements were Cs(I) and Ni(II), chemically representing fission and corrosion products. Based on the collected elemental maps, the correlation between the elements representative of the main rock components and the selected loaded ion was studied. For the elements of interest, Cs(I) and Ni(II), a low-power iMOXS source with a polycapillary lens and a silicon drift detector was found to be the best configuration for reaching the optimal LOD values. Laboratory micro-XRF proved excellent for identifying the key minerals responsible for the uptake of Cs(I). In the case of nickel, careful corrections were needed because of the relatively high Ca content of the rock samples. The results were compared to synchrotron radiation micro-XRF.

  17. A Crowdsensing Based Analytical Framework for Perceptional Degradation of OTT Web Browsing.

    PubMed

    Li, Ke; Wang, Hai; Xu, Xiaolong; Du, Yu; Liu, Yuansheng; Ahmad, M Omair

    2018-05-15

    Service perception analysis is crucial for understanding both user experience and network quality, as well as for maintaining and optimizing mobile networks. Given the rapid development of the mobile Internet and over-the-top (OTT) services, the conventional network-centric mode of network operation and maintenance is no longer effective. Therefore, developing an approach to evaluate and optimize users' service perception has become increasingly important. Meanwhile, the development of a new sensing paradigm, mobile crowdsensing (MCS), makes it possible to evaluate and analyze users' OTT service perception from the end-user's point of view rather than from the network side. In this paper, the key factors that impact users' end-to-end OTT web browsing service perception are analyzed by monitoring crowdsourced user perceptions. The intrinsic relationships among the key factors and the interactions between key quality indicators (KQIs) are evaluated from several perspectives. Moreover, an analytical framework of perceptional degradation and a detailed algorithm are proposed, whose goal is to identify the major factors that impact the perceptional degradation of the web browsing service, as well as the significance of their contributions. Finally, a case study is presented to show the effectiveness of the proposed method using a dataset crowdsensed from a large number of smartphone users in a real mobile network. The proposed analytical framework forms a valuable solution for mobile network maintenance and optimization and can help improve web browsing service perception and network quality.

  18. Further analytical study of hybrid rocket combustion

    NASA Technical Reports Server (NTRS)

    Hung, W. S. Y.; Chen, C. S.; Haviland, J. K.

    1972-01-01

    Analytical studies of the transient and steady-state combustion processes in a hybrid rocket system are discussed. The particular system chosen consists of a gaseous oxidizer flowing within a tube of solid fuel, resulting in a heterogeneous combustion. Finite rate chemical kinetics with appropriate reaction mechanisms were incorporated in the model. A temperature dependent Arrhenius type fuel surface regression rate equation was chosen for the current study. The governing mathematical equations employed for the reacting gas phase and for the solid phase are the general, two-dimensional, time-dependent conservation equations in a cylindrical coordinate system. Keeping the simplifying assumptions to a minimum, these basic equations were programmed for numerical computation, using two implicit finite-difference schemes, the Lax-Wendroff scheme for the gas phase, and, the Crank-Nicolson scheme for the solid phase.

  19. Evaluation of analytical performance based on partial order methodology.

    PubMed

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

    Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially suited to simplifying relative comparisons of objects based on their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present study, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable to comparisons of analytical methods, or simply as a tool for the optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data in order to evaluate analytical performance taking all indicators into account simultaneously, thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which are used simultaneously for the evaluation of analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently, (2) a "distance" to the reference laboratory and (3) a classification based on the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
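    The component-wise comparison underlying partial order methodology is simple to state: laboratory A precedes laboratory B only if A is no worse on every indicator simultaneously, and pairs that disagree across indicators remain incomparable. A minimal sketch, with invented laboratory profiles of (|mean - true value|, standard deviation, |skewness|):

```python
def dominates(a, b):
    """True if profile a is no worse than profile b on every indicator."""
    return all(x <= y for x, y in zip(a, b))

labs = {
    "Ref":  (0.00, 0.10, 0.0),   # (|mean - true|, std dev, |skewness|)
    "LabA": (0.05, 0.12, 0.1),
    "LabB": (0.02, 0.30, 0.4),
    "LabC": (0.08, 0.15, 0.2),
}

# Pairs the partial order can rank; all other pairs stay incomparable,
# which is exactly the information a single linear score would hide
# (LabA and LabB disagree across indicators, so neither precedes the other).
comparable = [(p, q) for p in labs for q in labs
              if p != q and dominates(labs[p], labs[q])]
print(comparable)
```

    The "distance" to the reference laboratory mentioned in the abstract can then be read off the resulting Hasse diagram rather than from a collapsed scalar score.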

  20. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. 
Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix.

  1. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix.

  2. Optimal tuning of a confined Brownian information engine.

    PubMed

    Park, Jong-Min; Lee, Jae Sung; Noh, Jae Dong

    2016-03-01

    A Brownian information engine is a device that extracts mechanical work from a single heat bath by exploiting information about the state of a Brownian particle immersed in the bath. As for any engine, it is important to find the optimal operating condition that yields the maximum extracted work or power. The optimal condition for a Brownian information engine with a finite cycle time τ has rarely been studied because of the difficulty of finding the nonequilibrium steady state. In this study, we introduce a model for the Brownian information engine and develop an analytic formalism for its steady-state distribution for any τ. We find that the extracted work per engine cycle is maximum when τ approaches infinity, while the power is maximum when τ approaches zero.

  3. Integrated Analytical Evaluation and Optimization of Model Parameters against Preprocessed Measurement Data

    DTIC Science & Technology

    1989-06-23

    [Table-of-contents excerpt: 3.2 Comparison between MACH and POLAR; 3.3 Flow Chart for VSTS Algorithm.] The most recent changes are: a) development of the VSTS (velocity space topology search) algorithm for calculating particle densities; b) extension ... with simple analytic models. The largest modification of the MACH code was the implementation of the VSTS procedure, which constituted a complete ...

  4. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  5. The influence of retrieval practice on metacognition: The contribution of analytic and non-analytic processes.

    PubMed

    Miller, Tyler M; Geraci, Lisa

    2016-05-01

    People may change their memory predictions after retrieval practice using naïve theories of memory and/or subjective experience (analytic and non-analytic processes, respectively). The current studies disentangled the contributions of each process. In one condition, learners studied paired associates, made a memory prediction, completed a short run of retrieval practice, and made a second prediction. In another condition, judges read about a yoked learner's retrieval-practice performance but did not participate in retrieval practice themselves and therefore could not use non-analytic processes for the second prediction. In Study 1, learners reduced their predictions following moderately difficult retrieval practice, whereas judges increased their predictions. In Study 2, learners made lower adjusted predictions than judges following both easy and difficult retrieval practice. In Study 3, judge-like participants used analytic processes to report adjusted predictions. Overall, the results suggest that non-analytic processes play a key role in participants reducing their predictions after retrieval practice. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Analytical study of magnetohydrodynamic propulsion stability

    NASA Astrophysics Data System (ADS)

    Abdollahzadeh Jamalabadi, M. Y.

    2014-09-01

    In this paper an analytical solution for the stability of fully developed flow in a magnetohydrodynamic pump with pulsating transverse electromagnetic fields is presented. To this end, a theoretical model of the flow is developed and analytical results are obtained for both the cylindrical and Cartesian configurations, which are suitable for the propulsion of marine vessels. The governing parabolic momentum PDEs are transformed into an ordinary differential equation using an approximate velocity distribution. Numerical results are obtained, and asymptotic analyses are carried out to characterize the mathematical behavior of the solutions. The maximum velocity in a magnetohydrodynamic pump is presented versus time for various values of the Stuart number, electromagnetic interaction number, Reynolds number, aspect ratio, magnetic and electrical angular frequency, and phase-angle shift. Results show that for a high Stuart number there is a frequency limit for the stability of the fluid flow in a certain flow direction. This stability frequency depends on the geometric parameters of the channel.

  7. Do Premarital Education Programs Really Work? A Meta-Analytic Study

    ERIC Educational Resources Information Center

    Fawcett, Elizabeth B.; Hawkins, Alan J.; Blanchard, Victoria L.; Carroll, Jason S.

    2010-01-01

    Previous studies (J. S. Carroll & W. J. Doherty, 2003) have asserted that premarital education programs have a positive effect on program participants. Using meta-analytic methods of current best practices to look across the entire body of published and unpublished evaluation research on premarital education, we found a more complex pattern of…

  8. Dimensions of Early Speech Sound Disorders: A Factor Analytic Study

    ERIC Educational Resources Information Center

    Lewis, Barbara A.; Freebairn, Lisa A.; Hansen, Amy J.; Stein, Catherine M.; Shriberg, Lawrence D.; Iyengar, Sudha K.; Taylor, H. Gerry

    2006-01-01

    The goal of this study was to classify children with speech sound disorders (SSD) empirically, using factor analytic techniques. Participants were 3-7-year olds enrolled in speech/language therapy (N=185). Factor analysis of an extensive battery of speech and language measures provided support for two distinct factors, representing the skill…

  9. Combining Simulation and Optimization Models for Hardwood Lumber Production

    Treesearch

    G.A. Mendoza; R.J. Meimban; W.G. Luppold; Philip A. Araman

    1991-01-01

    Published literature contains a number of optimization and simulation models dealing with the primary processing of hardwood and softwood logs. Simulation models have been developed primarily as descriptive models for characterizing the general operations and performance of a sawmill. Optimization models, on the other hand, were developed mainly as analytical tools for...

  10. Analytical design and evaluation of an active control system for helicopter vibration reduction and gust response alleviation

    NASA Technical Reports Server (NTRS)

    Taylor, R. B.; Zwicke, P. E.; Gold, P.; Miao, W.

    1980-01-01

    An analytical study was conducted to define the basic configuration of an active control system for helicopter vibration and gust response alleviation. The study culminated in a control system design with two separate loops: a narrow-band loop for vibration reduction and a wider-band loop for gust response alleviation. The narrow-band vibration loop utilizes the standard swashplate control configuration for its control inputs. The controller for the vibration loop is based on adaptive optimal control theory and is designed to adapt to any flight condition, including maneuvers and transients. The prime characteristic of the vibration control system is its real-time capability. The gust alleviation control system studied consists of optimal sampled-data feedback gains together with an optimal one-step-ahead prediction. The prediction permits estimation of the gust disturbance, which can then be used to minimize the gust effects on the helicopter.

  11. Optimal time-domain technique for pulse width modulation in power electronics

    NASA Astrophysics Data System (ADS)

    Mayergoyz, I.; Tyagi, S.

    2018-05-01

    An optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of rectangular input voltage pulses. Two optimality criteria are discussed and illustrated by numerical examples.

  12. Study on bending behaviour of nickel–titanium rotary endodontic instruments by analytical and numerical analyses

    PubMed Central

    Tsao, C C; Liou, J U; Wen, P H; Peng, C C; Liu, T S

    2013-01-01

    Aim To develop analytical models and analyse the stress distribution and flexibility of nickel–titanium (NiTi) instruments subjected to bending forces. Methodology An analytical method was used to analyse the behaviour of NiTi instruments under bending forces. Two NiTi instruments (RaCe and Mani NRT) with different cross-sections and geometries were considered. Analytical results were derived using Euler–Bernoulli nonlinear differential equations that took into account the screw pitch variation of these NiTi instruments. In addition, nonlinear deformation analyses based on the analytical model and on nonlinear finite element analysis were carried out. Results According to the analytical results, the maximum curvature of the instrument occurs near the instrument tip. Results of the finite element analysis revealed that the position of maximum von Mises stress was also near the instrument tip. Therefore, the proposed analytical model can be used to predict the position of maximum curvature in the instrument, where fracture may occur. Finally, the results of the analytical and numerical models were compatible. Conclusion The proposed analytical model was validated by numerical results in analysing the bending deformation of NiTi instruments and is useful in the design and analysis of such instruments. Compared with the finite element method, the analytical model deals conveniently and effectively with the bending behaviour of rotary NiTi endodontic instruments. PMID:23173762
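    As background to the record above: in the linear Euler-Bernoulli limit (a simplification of the paper's nonlinear, pitch-dependent model), the tip deflection of an end-loaded cantilever is F·L³/(3·E·I), so flexibility scales with the inverse fourth power of the radius for a circular cross-section. A sketch with invented round-number values, not the paper's instrument data:

```python
import math

def tip_deflection(force, length, e_modulus, inertia):
    """Linear Euler-Bernoulli tip deflection of an end-loaded cantilever."""
    return force * length ** 3 / (3.0 * e_modulus * inertia)

def circular_inertia(radius):
    """Second moment of area of a solid circular cross-section: pi*r^4/4."""
    return math.pi * radius ** 4 / 4.0

# Halving the radius divides I by 16, so the deflection grows 16-fold,
# which is why slender instrument tips are so much more flexible.
d_thick = tip_deflection(1.0, 0.016, 40e9, circular_inertia(0.0004))
d_thin = tip_deflection(1.0, 0.016, 40e9, circular_inertia(0.0002))
print(d_thin / d_thick)  # 16.0
```

    Real rotary files taper and have fluted, non-circular cross-sections, which is precisely why the paper's analytical model tracks the cross-section and pitch along the length instead of using this constant-section formula.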

  13. Collisional evolution - an analytical study for the nonsteady-state mass distribution

    NASA Astrophysics Data System (ADS)

    Martins, R. Vieira

    1999-05-01

    To study the collisional evolution of asteroidal groups we can use an analytical solution for self-similar collision cascades. This solution is suitable for studying the steady-state mass distribution of collisional fragmentation. However, out of steady-state conditions, this solution is not satisfactory for some values of the collisional parameters. In fact, for some values of the exponent of the mass-distribution power law of an asteroidal group, and of its relation to the exponent of the function which describes how rocks break, we arrive at singular points of the equation which describes the collisional evolution. These singularities appear because some approximations are usually made in the laborious evaluation of the many integrals that appear in the analytical calculations; they concern the cutoff for the smallest and the largest bodies. These singularities set some restrictions on the study of the analytical solution of the collisional equation. To overcome these singularities we performed an algebraic computation considering the smallest and the largest bodies, and we obtained analytical expressions for the integrals that describe the collisional evolution without restriction on the parameters. However, the new distribution is more sensitive to the values of the collisional parameters. In particular, the steady-state solution for the differential mass distribution has exponents slightly different from 11/6 for the usual parameters in the Asteroid Belt. The sensitivity of this distribution with respect to the parameters is analyzed for the usual values in the asteroidal groups. With an expression for the mass distribution without singularities, we can also evaluate its time evolution. We arrive at an analytical expression given by a power series of terms, each consisting of a small parameter multiplied by the mass raised to an exponent which depends on the initial power-law distribution. This expression is a formal solution of the equation which describes the collisional evolution.

  14. An analytic modeling and system identification study of rotor/fuselage dynamics at hover

    NASA Technical Reports Server (NTRS)

    Hong, Steven W.; Curtiss, H. C., Jr.

    1993-01-01

    A combination of analytic modeling and system identification methods has been used to develop an improved dynamic model describing the response of articulated rotor helicopters to control inputs. A high-order linearized model of coupled rotor/body dynamics, including flap and lag degrees of freedom and inflow dynamics with literal coefficients, is compared to flight test data from single-rotor helicopters in the near-hover trim condition. The identification problem was formulated using the maximum likelihood function in the time domain. The dynamic model with literal coefficients was used to generate the model states, and the model was parametrized in terms of physical constants of the aircraft rather than the stability derivatives, resulting in a significant reduction in the number of quantities to be identified. The likelihood function was optimized using the genetic algorithm approach. This method proved highly effective in producing an estimated model from flight test data which included coupled fuselage/rotor dynamics. Using this approach it has been shown that blade flexibility is a significant contributing factor to the discrepancies between theory and experiment shown in previous studies. Addition of flexible modes, properly incorporating the constraint due to the lag dampers, results in excellent agreement between flight test and theory, especially in the high frequency range.

  16. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  17. Optimization of a coaxial electron cyclotron resonance plasma thruster with an analytical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannat, F., E-mail: felix.cannat@onera.fr, E-mail: felix.cannat@gmail.com; Lafleur, T.; Laboratoire de Physique des Plasmas, CNRS, Sorbonne Universites, UPMC Univ Paris 06, Univ Paris-Sud, Ecole Polytechnique, 91128 Palaiseau

    2015-05-15

    A new cathodeless plasma thruster currently under development at Onera is presented and characterized experimentally and analytically. The coaxial thruster consists of a microwave antenna immersed in a magnetic field, which allows electron heating via cyclotron resonance. The magnetic field diverges at the thruster exit and forms a nozzle that accelerates the quasi-neutral plasma to generate a thrust. Different thruster configurations are tested, and in particular, the influence of the source diameter on the thruster performance is investigated. At microwave powers of about 30 W and a xenon flow rate of 0.1 mg/s (1 SCCM), a mass utilization of 60% and a thrust of 1 mN are estimated based on angular electrostatic probe measurements performed downstream of the thruster in the exhaust plume. Results are found to be in fair agreement with a recent analytical helicon thruster model that has been adapted for the coaxial geometry used here.
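
    The quoted figures (about 30 W, 0.1 mg/s of xenon, roughly 1 mN) can be sanity-checked with elementary momentum and energy balances; the sketch below assumes only the numbers in the abstract.

```python
# Back-of-envelope check of the quoted ECR thruster figures: effective exhaust
# velocity from momentum balance, specific impulse, and jet (thrust) efficiency.
T = 1.0e-3        # thrust [N]
mdot = 0.1e-6     # propellant flow rate [kg/s] (0.1 mg/s)
P = 30.0          # microwave power [W]
g0 = 9.80665      # standard gravity [m/s^2]

v_e = T / mdot                 # effective exhaust velocity [m/s]
isp = v_e / g0                 # specific impulse [s]
eta = T**2 / (2 * mdot * P)    # jet efficiency T^2 / (2 * mdot * P)

print(f"v_e = {v_e:.0f} m/s, Isp = {isp:.0f} s, jet efficiency = {eta:.2f}")
```

    This gives an effective exhaust velocity of 10 km/s (Isp near 1000 s) and a jet efficiency of about 17%, consistent with an early-development electrodeless thruster.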

  18. Analytic studies of the hard dumbell fluid

    NASA Astrophysics Data System (ADS)

    Morriss, G. P.; Cummings, P. T.

    A closed-form analytic theory for the structure of the hard dumbell fluid is introduced and evaluated. It is found to be comparable in accuracy to the reference interaction site approximation (RISA) of Chandler and Andersen.

  19. Optimizing cosmological surveys in a crowded market

    NASA Astrophysics Data System (ADS)

    Bassett, Bruce A.

    2005-04-01

    Optimizing the major next-generation cosmological surveys (such as SNAP, KAOS, etc.) is a key problem given our ignorance of the physics underlying cosmic acceleration and the plethora of surveys planned. We propose a Bayesian design framework which (1) maximizes the discrimination power of a survey without assuming any underlying dark-energy model, (2) finds the best niche survey geometry given current data and future competing experiments, (3) maximizes the cross section for serendipitous discoveries and (4) can be adapted to answer specific questions (such as “is dark energy dynamical?”). Integrated parameter-space optimization (IPSO) is a design framework that integrates projected parameter errors over an entire dark energy parameter space and then extremizes a figure of merit (such as Shannon entropy gain which we show is stable to off-diagonal covariance matrix perturbations) as a function of survey parameters using analytical, grid or MCMC techniques. We discuss examples where the optimization can be performed analytically. IPSO is thus a general, model-independent and scalable framework that allows us to appropriately use prior information to design the best possible surveys.
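
    A toy sketch of the IPSO scoring step described above, assuming Gaussian posteriors so that the Shannon entropy gain reduces to a log-determinant ratio of Fisher matrices. The 2-parameter Fisher matrices here are invented illustrations, not real survey forecasts.

```python
import numpy as np

def entropy_gain(F_prior, F_survey):
    """Entropy decrease (nats) of a Gaussian posterior when survey info is added."""
    _, logdet_post = np.linalg.slogdet(F_prior + F_survey)
    _, logdet_prior = np.linalg.slogdet(F_prior)
    return 0.5 * (logdet_post - logdet_prior)

# Invented prior and candidate-survey Fisher matrices for two parameters.
F_prior = np.array([[4.0, 1.0], [1.0, 2.0]])
candidates = {
    "wide": np.array([[10.0, 2.0], [2.0, 1.0]]),
    "deep": np.array([[3.0, 0.5], [0.5, 6.0]]),
}
scores = {name: entropy_gain(F_prior, F) for name, F in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores)
```

    In a full IPSO run the survey Fisher matrix would itself be integrated over the dark-energy parameter space before the figure of merit is extremized over survey geometry.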

  20. Aeroelastic Optimization Study Based on X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley; Pak, Chan-Gi

    2014-01-01

    A design process that incorporates an object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and uses the aeroelastic effects of high-fidelity finite element models to characterize the design space was successfully developed. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center are presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize structural weight while meeting design requirements including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve the accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass-balancing optimization. The results provide guidance for modifying the fabricated flexible wing design and moving the design flutter speeds back into the flight envelope, so that the original objective of the X-56A flight test can be accomplished.

  1. Analytical modeling and experimental validation of a magnetorheological mount

    NASA Astrophysics Data System (ADS)

    Nguyen, The; Ciocanel, Constantin; Elahinia, Mohammad

    2009-03-01

    Magnetorheological (MR) fluid has been increasingly researched and applied in vibration isolation devices. To date, the suspension systems of several high-performance vehicles have been equipped with MR fluid based dampers, and research is ongoing to develop MR fluid based mounts for engine and powertrain isolation. MR fluid based devices have received attention due to the MR fluid's capability to change its properties in the presence of a magnetic field. This characteristic places MR mounts in the class of semiactive isolators, making them a desirable substitute for passive hydraulic mounts. In this research, an analytical model of a mixed-mode MR mount was constructed. The magnetorheological mount employs flow (valve) mode and squeeze mode; each mode is powered by an independent electromagnet, so one mode does not affect the operation of the other. The analytical model was used to predict the performance of the MR mount with different sets of parameters and, in order to produce the actual prototype, to identify the optimal geometry of the mount. The experimental phase of this research was carried out by fabricating and testing the actual MR mount. The manufactured mount was tested to evaluate the effectiveness of each mode individually and in combination. The experimental results were also used to validate the ability of the analytical model to predict the response of the MR mount. Based on the observed response of the mount, a suitable controller can be designed for it; however, the control scheme is not addressed in this study.

  2. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

    Aeroelastic optimization of a system essentially consists of determining the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. To reduce helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and the aeroelastic stability constraints. For this, the derivatives of the steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, the design sensitivity analysis and the constrained optimization code CONMIN.

  3. The Skinny on Big Data in Education: Learning Analytics Simplified

    ERIC Educational Resources Information Center

    Reyes, Jacqueleen A.

    2015-01-01

    This paper examines the current state of learning analytics (LA), its stakeholders and the benefits and challenges these stakeholders face. LA is a field of research that involves the gathering, analyzing and reporting of data related to learners and their environments with the purpose of optimizing the learning experience. Stakeholders in LA are…

  4. Trajectory optimization for the National Aerospace Plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1993-01-01

    The objective of this second phase research is to investigate the optimal ascent trajectory for the National Aerospace Plane (NASP) from runway take-off to orbital insertion and address the unique problems associated with hypersonic flight trajectory optimization. The trajectory optimization problem for an aerospace plane is a highly challenging one because of the complexity involved. Previous work has been successful in obtaining sub-optimal trajectories by using energy-state approximation and time-scale decomposition techniques, but it is known that the energy-state approximation is not valid in certain portions of the trajectory. This research aims at employing the full dynamics of the aerospace plane and emphasizing direct trajectory optimization methods. The major accomplishments of this research include the first-time development of an inverse dynamics approach to trajectory optimization, which enables optimal trajectories for the aerospace plane to be generated efficiently and reliably, and general analytical solutions to constrained hypersonic trajectories that have wide application in trajectory optimization as well as in guidance and flight dynamics. Optimal trajectories in abort landing and in ascent augmented with rocket propulsion and thrust-vectoring control were also investigated. Motivated by this study, a new global trajectory optimization tool using continuous simulated annealing and a nonlinear predictive feedback guidance law have been under investigation, and some promising results have been obtained which may well lead to more significant development and application in the near future.

  5. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    NASA Technical Reports Server (NTRS)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
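
    In the spirit of the staging balance described above, the sketch below minimizes lift-off mass over the delta-v split of a two-stage vehicle using the ideal rocket equation. The delta-v, Isp, structural fraction and payload values are illustrative assumptions, not the paper's numbers.

```python
import math

# Two-stage sizing: for a fixed payload and total delta-v (losses folded in),
# search over the stage-1/stage-2 delta-v split that minimizes lift-off mass.
DV_TOTAL = 4100.0   # m/s, assumed total delta-v to Mars orbit incl. losses
ISP = 290.0         # s, solid-motor-class specific impulse (assumption)
EPS = 0.12          # structural fraction of each stage (assumption)
M_PAY = 20.0        # kg, payload (assumption)
G0 = 9.80665

def stage_mass(m_above, dv):
    """Propellant + structure of one stage pushing m_above through dv."""
    r = math.exp(dv / (G0 * ISP))            # mass ratio m0/mf
    denom = 1.0 - EPS * r
    if denom <= 0:
        return float("inf")                  # dv unreachable with this EPS
    return m_above * (r - 1.0) / denom       # from m0 = m_above + m_stage

def liftoff_mass(split):
    dv1, dv2 = DV_TOTAL * split, DV_TOTAL * (1.0 - split)
    m2 = stage_mass(M_PAY, dv2)              # upper stage sized first
    m1 = stage_mass(M_PAY + m2, dv1)
    return M_PAY + m2 + m1

best = min((liftoff_mass(s / 1000.0), s / 1000.0) for s in range(1, 1000))
print(f"min lift-off mass {best[0]:.1f} kg at stage-1 dv fraction {best[1]:.3f}")
```

    With identical Isp and structural fraction on both stages, the optimum lands at an even delta-v split; unequal stage technologies shift it, which is exactly the trade the parametric MAV model exposes.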

  6. Optimal designs for population pharmacokinetic studies of the partner drugs co-administered with artemisinin derivatives in patients with uncomplicated falciparum malaria.

    PubMed

    Jamsen, Kris M; Duffull, Stephen B; Tarning, Joel; Lindegardh, Niklas; White, Nicholas J; Simpson, Julie A

    2012-07-11

    Artemisinin-based combination therapy (ACT) is currently recommended as first-line treatment for uncomplicated malaria but, of concern, the effectiveness of the main artemisinin derivative, artesunate, has been observed to be diminished due to parasite resistance. This reduction in effect highlights the importance of the partner drugs in ACT and provides motivation to gain more knowledge of their pharmacokinetic (PK) properties via population PK studies. Optimal design methodology has been developed for population PK studies; it analytically determines a sampling schedule that is clinically feasible and yields precise estimation of model parameters. In this work, optimal design methodology was used to determine sampling designs for typical future population PK studies of the partner drugs (mefloquine, lumefantrine, piperaquine and amodiaquine) co-administered with artemisinin derivatives. The optimal designs were determined using freely available software and were based on structural PK models from the literature and the key specifications of 100 patients with five samples per patient, with one sample taken on the seventh day of treatment. The derived optimal designs were then evaluated via a simulation-estimation procedure. For all partner drugs, designs consisting of two sampling schedules (50 patients per schedule) with five samples per patient resulted in acceptable precision of the model parameter estimates. The sampling schedules proposed in this paper should be considered in future population pharmacokinetic studies where intensive sampling over many days or weeks of follow-up is not possible for ethical, logistical or economic reasons.
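
    A minimal sketch of the D-optimal design idea used in such studies: choose the sampling times that maximize the determinant of the Fisher information for the model parameters. It assumes a one-compartment IV-bolus model with invented nominal values, not the malaria partner-drug models of the study.

```python
import itertools
import math
import numpy as np

# D-optimal choice of 3 sampling times for C(t) = (D/V) * exp(-(CL/V) * t),
# parameters theta = (CL, V). Nominal values below are illustrative only.
DOSE, CL, V, SIGMA = 100.0, 5.0, 50.0, 0.1

def sensitivities(t):
    k = CL / V
    c = (DOSE / V) * math.exp(-k * t)
    dc_dCL = c * (-t / V)                     # dC/dCL
    dc_dV = c * (-1.0 / V + CL * t / V**2)    # dC/dV
    return np.array([dc_dCL, dc_dV])

def log_det_fim(times):
    J = np.array([sensitivities(t) for t in times])
    fim = J.T @ J / SIGMA**2                  # Fisher information matrix
    sign, logdet = np.linalg.slogdet(fim)
    return logdet if sign > 0 else -np.inf

grid = [0.5, 1, 2, 4, 8, 12, 24, 48]          # clinically feasible times [h]
best = max(itertools.combinations(grid, 3), key=log_det_fim)
print("D-optimal times:", best)
```

    Real population-PK designs optimize over between-subject variability as well (the population Fisher information), which is what the freely available design software mentioned in the abstract computes.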

  7. Post-analytical stability of 23 common chemistry and immunochemistry analytes in incurred samples.

    PubMed

    Nielsen, Betina Klint; Frederiksen, Tina; Friis-Hansen, Lennart; Larsen, Pia Bükmann

    2017-12-01

    Storage of blood samples after centrifugation, decapping and initial sampling allows ordering of additional blood tests. The pre-analytical stability of biochemistry and immunochemistry analytes has been studied in detail, but little is known about the post-analytical stability in incurred samples. We examined the stability of 23 routine analytes on the Dimension Vista® (Siemens Healthineers, Denmark): 42-60 routine samples in lithium-heparin gel tubes (Vacutainer, BD, USA) were centrifuged at 3000×g for 10 min. Immediately after centrifugation, initial concentrations of the analytes were measured in duplicate (t=0). The tubes were stored decapped at room temperature and re-analyzed after 2, 4, 6, 8 and 10 h in singletons. The concentrations from reanalysis were normalized to the initial concentration (t=0). Internal acceptance criteria for bias and total error were used to determine the stability of each analyte. Additionally, evaporation from the decapped blood collection tubes and the residual platelet count in the plasma after centrifugation were quantified. We report a post-analytical stability of most routine analytes of ≥8 h and therefore - with few exceptions - suggest a standard 8-hour time limit for reordering and reanalysis of analytes in incurred samples. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
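
    The normalization-and-acceptance logic described above can be sketched as follows; the acceptance limit and the drifting analyte series are invented for illustration.

```python
# Each re-measurement is normalized to the mean of the t=0 duplicates and
# flagged against a per-analyte total-error limit; stability is the last
# time point before the limit is first exceeded.
def stable_until(t0_duplicates, remeasurements, total_error_pct):
    """Return the last time point [h] at which the analyte is still acceptable."""
    baseline = sum(t0_duplicates) / len(t0_duplicates)
    last_ok = 0
    for hours, value in sorted(remeasurements.items()):
        deviation_pct = abs(value - baseline) / baseline * 100.0
        if deviation_pct > total_error_pct:
            break
        last_ok = hours
    return last_ok

# Invented example: an analyte drifting upward in a decapped tube.
t0 = [4.10, 4.12]
series = {2: 4.13, 4: 4.16, 6: 4.20, 8: 4.25, 10: 4.60}
print(stable_until(t0, series, total_error_pct=5.0))
```

    With a 5% total-error limit this invented series is acceptable through 8 h but not at 10 h, mirroring the ≥8 h conclusion of the study; a tighter 2% limit would already fail at 6 h.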

  8. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Chen, Robert T. N.

    1996-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross-weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.

  9. NHEXAS PHASE I MARYLAND STUDY--PESTICIDE METABOLITES IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticide Metabolites in Urine data set contains analytical results for measurements of up to 9 pesticides in 345 urine samples over 79 households. Each sample was collected from the primary respondent within each household during the study and represented the first morning ...

  10. NHEXAS PHASE I ARIZONA STUDY--PESTICIDE METABOLITES IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticide Metabolites in Urine data set contains analytical results for measurements of up to 4 pesticide metabolites in 176 urine samples over 176 households. Each sample was collected from the primary respondent within each household during Stage III of the NHEXAS study. ...

  11. Rethinking Visual Analytics for Streaming Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

    In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as to supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis: in each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead of dividing the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources and to streamline the sensemaking process.

  12. Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.

    PubMed

    Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T

    2015-03-01

    It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine the optimal sample size, the optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse vs rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.

  13. Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred

    2015-01-01

    To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable-material-property couple and a transient couple. Using an analytic approach, as opposed to the commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as starting conditions for further numerical studies, or as design rules for lab-built couples. The analytic modeling considers two cases, accounting for 1) material properties which vary with temperature and 2) transient operation of a couple. The variable-material-property case was handled by means of an asymptotic expansion, which allows insight into the influence of temperature dependence on different material properties. The variable-property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies due to the temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady-state analysis. The work assists in designing couples for optimal performance and in material selection.
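
    For context, the constant-property baseline that this record generalizes is the textbook maximum-efficiency formula in terms of ZT at the mean temperature; only that baseline is reproduced below, not the asymptotic variable-property or transient results.

```python
import math

# Classic constant-property maximum conversion efficiency of a couple:
#   eta_max = eta_Carnot * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th),
# with ZT evaluated at the mean temperature (Th + Tc) / 2.
def max_efficiency(t_hot, t_cold, zt_mean):
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1.0 + zt_mean)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Illustrative operating point (temperatures in kelvin, ZT assumed).
eta = max_efficiency(t_hot=800.0, t_cold=400.0, zt_mean=1.0)
print(f"{eta:.3f}")
```

    The variable-property analysis in the record shows that two materials with the same average ZT can depart from this value in different directions, which is precisely what the constant-property formula cannot capture.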

  14. Multicentric Comparative Analytical Performance Study for Molecular Detection of Low Amounts of Toxoplasma gondii from Simulated Specimens▿ †

    PubMed Central

    Sterkers, Yvon; Varlet-Marie, Emmanuelle; Cassaing, Sophie; Brenier-Pinchart, Marie-Pierre; Brun, Sophie; Dalle, Frédéric; Delhaes, Laurence; Filisetti, Denis; Pelloux, Hervé; Yera, Hélène; Bastien, Patrick

    2010-01-01

    Although screening for maternal toxoplasmic seroconversion during pregnancy is based on immunodiagnostic assays, the diagnosis of clinically relevant toxoplasmosis greatly relies upon molecular methods. A problem is that this molecular diagnosis is subject to variation of performances, mainly due to a large diversity of PCR methods and primers and the lack of standardization. The present multicentric prospective study, involving eight laboratories proficient in the molecular prenatal diagnosis of toxoplasmosis, was a first step toward the harmonization of this diagnosis among university hospitals in France. Its aim was to compare the analytical performances of different PCR protocols used for Toxoplasma detection. Each center extracted the same concentrated Toxoplasma gondii suspension and tested serial dilutions of the DNA using its own assays. Differences in analytical sensitivities were observed between assays, particularly at low parasite concentrations (≤2 T. gondii genomes per reaction tube), with “performance scores” differing by a 20-fold factor among laboratories. Our data stress the fact that differences do exist in the performances of molecular assays in spite of expertise in the matter; we propose that laboratories work toward a detection threshold defined for a best sensitivity of this diagnosis. Moreover, on the one hand, intralaboratory comparisons confirmed previous studies showing that rep529 is a more adequate DNA target for this diagnosis than the widely used B1 gene. But, on the other hand, interlaboratory comparisons showed differences that appear independent of the target, primers, or technology and that hence rely essentially on proficiency and care in the optimization of PCR conditions. PMID:20610670

  15. A deterministic global optimization using smooth diagonal auxiliary functions

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.

    2015-04-01

    In many practical decision-making problems, the functions involved in the optimization process are black-box, with unknown analytical representations, and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
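
    A much-simplified 1-D relative of this family of methods is Piyavskii-Shubert minimization with a known Lipschitz constant K for f itself. The record's algorithm instead works on multidimensional diagonal partitions with gradient information and an unknown K; the sketch below only shows the core idea of a sawtooth lower bound guiding where to sample next.

```python
def piyavskii(f, a, b, K, iters=60):
    """Sequentially refine a K-Lipschitz sawtooth lower bound of f on [a, b]."""
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(iters):
        pts.sort()
        best_lb, best_x = float("inf"), None
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            # Apex of the two Lipschitz cones over [x1, x2]:
            x = 0.5 * (x1 + x2) + (y1 - y2) / (2.0 * K)
            lb = 0.5 * (y1 + y2) - 0.5 * K * (x2 - x1)
            if lb < best_lb:
                best_lb, best_x = lb, x
        pts.append((best_x, f(best_x)))   # sample where the bound is lowest
    return min(pts, key=lambda p: p[1])

# Toy run: K = 4 overestimates the true Lipschitz constant of f on [0, 2].
x_min, f_min = piyavskii(lambda t: (t - 0.7) ** 2, 0.0, 2.0, K=4.0)
print(f"x* ~ {x_min:.3f}, f(x*) ~ {f_min:.5f}")
```

    The 'Divide-the-Best' scheme generalizes this by partitioning along hyperinterval diagonals, using smooth (rather than piecewise-linear) auxiliary functions built from f and f′, and adaptively estimating K instead of assuming it.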

  16. What REALLY Works: Optimizing Classroom Discussions to Promote Comprehension and Critical-Analytic Thinking

    ERIC Educational Resources Information Center

    Murphy, P. Karen; Firetto, Carla M.; Wei, Liwei; Li, Mengyi; Croninger, Rachel M. V.

    2016-01-01

    Many American students struggle to perform even basic comprehension of text, such as locating information, determining the main idea, or supporting details of a story. Even more students are inadequately prepared to complete more complex tasks, such as critically or analytically interpreting information in text or making reasoned decisions from…

  17. Development and optimization of an energy-regenerative suspension system under stochastic road excitation

    NASA Astrophysics Data System (ADS)

    Huang, Bo; Hsieh, Chen-Yu; Golnaraghi, Farid; Moallem, Mehrdad

    2015-11-01

    In this paper a vehicle suspension system with energy-harvesting capability is developed, and an analytical methodology for the optimal design of the system is proposed. The optimization technique provides design guidelines for determining the stiffness and damping coefficients aimed at optimal performance in terms of ride comfort and energy regeneration. The corresponding performance metrics are the root-mean-square (RMS) of the sprung-mass acceleration and the expectation of the generated power. Actual road roughness is considered as the stochastic excitation, defined by the ISO 8608:1995 standard road profiles, and used in deriving the optimization method. An electronic circuit is proposed to provide variable damping in real time based on the optimization rule. A test-bed is utilized, and experiments under different driving conditions are conducted to verify the effectiveness of the proposed method. The test results suggest that the analytical approach is credible in determining the optimality of system performance.
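
    The two competing metrics can be sketched for a 1-DOF quarter-car under an ISO 8608-style road spectrum; all parameter values below are illustrative assumptions, not the authors' test-bed values, and the damper power computed is dissipated power, an upper bound on what a regenerative damper could recover.

```python
import numpy as np

# 1-DOF base-excited sprung mass: m*x'' + c*(x' - q') + k*(x - q) = 0, with
# road displacement PSD S_q(f) = Gq * n0^2 * v / f^2 (ISO 8608-style, speed v).
m, k, c = 400.0, 2.0e4, 1.5e3     # sprung mass [kg], stiffness [N/m], damping [N s/m]
Gq, n0, v = 64e-6, 0.1, 20.0      # roughness Gd(n0) [m^3], ref. spatial freq [1/m], speed [m/s]

f = np.linspace(0.1, 50.0, 20000)              # frequency grid [Hz]
w = 2.0 * np.pi * f
S_road = Gq * n0**2 * v / f**2                 # road displacement PSD [m^2/Hz]

H = (k + 1j * c * w) / (k - m * w**2 + 1j * c * w)   # road -> sprung displacement
acc_psd = (w**2 * np.abs(H))**2 * S_road             # sprung-mass acceleration PSD
relv_psd = (w * np.abs(H - 1.0))**2 * S_road         # damper relative-velocity PSD

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

rms_acc = np.sqrt(trapezoid(acc_psd, f))             # comfort metric
mean_power = c * trapezoid(relv_psd, f)              # E[P] = c * E[(x' - q')^2]
print(f"RMS acceleration {rms_acc:.2f} m/s^2, mean damper power {mean_power:.1f} W")
```

    Sweeping k and c in this frequency-domain calculation exposes the comfort-versus-harvesting trade-off that the paper's analytical optimization resolves.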

  18. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    PubMed

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as highly valued components of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers, and optimization of the planning scheme is a prerequisite for successful construction of urban rivers. Relevant studies on the optimization of schemes for natural ecology planning of rivers are therefore crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were taken as the objects for optimization. Fourteen factors influencing the natural ecology planning of urban rivers were selected from five aspects to establish the ANP model. The data were processed using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. The ANP method thus supports a scientific, reasonable, and accurate evaluation of schemes for natural ecology planning of urban rivers, and can provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for optimizing schemes for urban green-space planning and design.
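    At the heart of ANP (and the simpler AHP) is the extraction of a priority vector, the principal eigenvector of a pairwise-comparison matrix. A minimal power-iteration sketch with a hypothetical 3x3 matrix, not the paper's fourteen-factor network:

```python
def priority_vector(M, iters=100):
    """Return normalized priorities (the principal eigenvector) of a
    pairwise-comparison matrix M, computed by power iteration."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]  # renormalize so the weights sum to 1
    return w
```

    For a perfectly consistent matrix such as [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]], the priorities come out as 4/7, 2/7 and 1/7.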

  19. A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise.

    PubMed

    Jin, Qibing; Wang, Hehe; Su, Qixin; Jiang, Beiyan; Liu, Qie

    2018-01-01

    In this paper, we study the system identification of multi-input multi-output (MIMO) Hammerstein processes under typical heavy-tailed noise. To the best of our knowledge, there is no general analytical method for solving this identification problem. Motivated by this, we propose a general identification method based on a Gaussian-mixture distribution intelligent optimization algorithm (GMDA). The nonlinear part of the Hammerstein process is modeled by a radial basis function (RBF) neural network, and the identification problem is converted into an optimization problem. To overcome the drawbacks of analytical identification methods in the presence of heavy-tailed noise, a meta-heuristic optimizer, the cuckoo search (CS) algorithm, is used; the Gaussian-mixture distribution (GMD) and GMD sequences are introduced to improve the performance of the standard CS algorithm on this identification problem. Numerical simulations for different MIMO Hammerstein models are carried out, and the simulation results verify the effectiveness of the proposed GMDA. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
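    The skeleton of cuckoo search is simple; the sketch below uses Gaussian steps standing in for both the Lévy flights of the standard algorithm and the Gaussian-mixture steps of the paper's GMDA, with all parameter values illustrative:

```python
import random

def cuckoo_search(obj, dim, n_nests=15, iters=200, pa=0.25, lo=-5.0, hi=5.0):
    """Minimal cuckoo-search sketch for minimizing obj over [lo, hi]^dim.

    Gaussian steps replace the usual Levy flights (a simplification of
    both standard CS and the paper's Gaussian-mixture variant).
    """
    rng = random.Random(0)
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [obj(x) for x in nests]
    for _ in range(iters):
        for i in range(n_nests):
            # New cuckoo: random walk around an existing nest, clamped to bounds
            new = [min(hi, max(lo, x + rng.gauss(0.0, 0.5))) for x in nests[i]]
            f = obj(new)
            j = rng.randrange(n_nests)  # compare against a randomly chosen nest
            if f < fit[j]:
                nests[j], fit[j] = new, f
        # Abandon a fraction pa of the worst nests and rebuild them randomly
        worst = sorted(range(n_nests), key=lambda k: fit[k], reverse=True)
        for k in worst[:int(pa * n_nests)]:
            nests[k] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[k] = obj(nests[k])
    b = min(range(n_nests), key=lambda k: fit[k])
    return nests[b], fit[b]
```

    On a smooth test function such as the 2-D sphere, the best nest converges toward the global minimum.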

  20. Nine-analyte detection using an array-based biosensor

    NASA Technical Reports Server (NTRS)

    Taitt, Chris Rowe; Anderson, George P.; Lingerfelt, Brian M.; Feldstein, Mark J.; Ligler, Frances S.

    2002-01-01

    A fluorescence-based multianalyte immunosensor has been developed for simultaneous analysis of multiple samples. While the standard 6 x 6 format of the array sensor has been used to analyze six samples for six different analytes, this same format has the potential to allow a single sample to be tested for 36 different agents. The method described herein demonstrates proof of principle that the number of analytes detectable using a single array can be increased simply by using complementary mixtures of capture and tracer antibodies. Mixtures were optimized to allow detection of closely related analytes without significant cross-reactivity. Following this facile modification of patterning and assay procedures, the following nine targets could be detected in a single 3 x 3 array: Staphylococcal enterotoxin B, ricin, cholera toxin, Bacillus anthracis Sterne, Bacillus globigii, Francisella tularensis LVS, Yersinia pestis F1 antigen, MS2 coliphage, and Salmonella typhimurium. This work maximizes the efficiency and utility of the described array technology, increasing only reagent usage and cost; production and fabrication costs are not affected.

  1. NHEXAS PHASE I ARIZONA STUDY--PESTICIDES IN DERMAL WIPES ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Dermal Wipes data set contains analytical results for measurements of up to 3 pesticides in 177 dermal wipe samples over 177 households. Each sample was collected from the primary respondent within each household during Stage III of the NHEXAS study. The Derma...

  2. On optimization of energy harvesting from base-excited vibration

    NASA Astrophysics Data System (ADS)

    Tai, Wei-Che; Zuo, Lei

    2017-12-01

    This paper re-examines and clarifies the long-believed optimization conditions of electromagnetic and piezoelectric energy harvesting from base-excited vibration. In terms of electromagnetic energy harvesting, it is typically believed that the maximum power is achieved when the excitation frequency and electrical damping equal the natural frequency and mechanical damping of the mechanical system respectively. We will show that this optimization condition is only valid when the acceleration amplitude of base excitation is constant and an approximation for small mechanical damping when the excitation displacement amplitude is constant. To this end, a two-variable optimization analysis, involving the normalized excitation frequency and electrical damping ratio, is performed to derive the exact optimization condition of each case. When the excitation displacement amplitude is constant, we analytically show that, in contrast to the long-believed optimization condition, the optimal excitation frequency and electrical damping are always larger than the natural frequency and mechanical damping ratio respectively. In particular, when the mechanical damping ratio exceeds a critical value, the optimization condition is no longer valid. Instead, the average power generally increases as the excitation frequency and electrical damping ratio increase. Furthermore, the optimization analysis is extended to consider parasitic electrical losses, which also shows different results when compared with existing literature. When the excitation acceleration amplitude is constant, on the other hand, the exact optimization condition is identical to the long-believed one. In terms of piezoelectric energy harvesting, it is commonly believed that the optimal power efficiency is achieved when the excitation and the short or open circuit frequency of the harvester are equal. Via a similar two-variable optimization analysis, we analytically show that the optimal excitation frequency depends on the

  3. Optomechanical study and optimization of cantilever plate dynamics

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1995-06-01

    Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes, located at arbitrary positions on the plate, are studied computationally and experimentally. The objective of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of such design variables as the sizes and locations of the holes. The optimization process uses the finite element method and mathematical programming techniques to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resulting optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show good agreement between theory and test. The comparisons also show that the combined, or hybrid, use of experimental and computational techniques complements each other and proves to be a very efficient tool for performing optimization studies of mechanical components.

  4. Laser-induced fluorescence detection of lead atoms in a laser-induced plasma: An experimental analytical optimization study

    NASA Astrophysics Data System (ADS)

    Laville, Stéphane; Goueguel, Christian; Loudyi, Hakim; Vidal, François; Chaker, Mohamed; Sabsabi, Mohamad

    2009-04-01

    The combination of the laser-induced breakdown spectroscopy (LIBS) and laser-induced fluorescence (LIF) techniques was investigated to improve the limit of detection (LoD) of trace elements in solid matrices. The influence of the main experimental parameters on the LIF signal, namely the ablation fluence, the excitation energy, and the inter-pulse delay, was studied experimentally, and the results are discussed. For illustrative purposes, we considered the detection of lead in brass samples. The plasma was produced by a Q-switched Nd:YAG laser and then re-excited by a nanosecond optical parametric oscillator (OPO) laser. The experiments were performed in air at atmospheric pressure. We found that, for our experimental set-up, the optimal conditions were a relatively weak ablation fluence of 2-3 J/cm² and an inter-pulse delay of about 5-10 μs; a few tens of microjoules of excitation energy was typically required to maximize the LIF signal. Using the combined LIBS-LIF technique, a single-shot LoD for lead of about 1.5 parts per million (ppm) was obtained, improving to 0.2 ppm after accumulating over 100 shots. These values represent an improvement of about two orders of magnitude with respect to LIBS alone.

  5. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large-aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and suggested areas of collaboration. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments, including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities covers basic thermal and structural elements, nonlinear heat transfer solutions and processes, and optical mode generation.

  6. Analytical study of nano-scale logical operations

    NASA Astrophysics Data System (ADS)

    Patra, Moumita; Maiti, Santanu K.

    2018-07-01

    A complete analytical prescription is given for performing the three basic (OR, AND, NOT) and two universal (NAND, NOR) logic gates at the nano-scale using simple tailor-made geometries. Two different geometries, ring-like and chain-like, are considered; in each case the bridging conductor is coupled to a local atomic site through a dangling bond whose site energy can be controlled by an external gate electrode. The main idea is that when the energy of an injected electron matches the site energy of the local atomic site, the transmission probability drops exactly to zero, whereas the junction exhibits finite transmission at other energies. Utilizing this prescription we perform the logical operations, and we strongly believe that the proposed results can be verified in the laboratory. Finally, we numerically compute the two-terminal transmission probability for general models, and the numerical results match our analytical findings exactly.

  7. Parameter Optimization for Feature and Hit Generation in a General Unknown Screening Method-Proof of Concept Study Using a Design of Experiment Approach for a High Resolution Mass Spectrometry Procedure after Data Independent Acquisition.

    PubMed

    Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas

    2018-03-06

    High-resolution mass spectrometry and modern data-independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. Optimizing the parameters that regulate the feature-detection and hit-generation algorithms of the data-processing software can significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow simultaneous optimization of multiple parameters: in a first step, parameters are evaluated as crucial or noncrucial; in a second step, the crucial parameters are optimized. The aim of this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof-of-concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data-independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
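    The screen-then-optimize logic of a two-level DoE can be sketched in a few lines: estimate each parameter's main effect from a full-factorial run, then retain only the parameters with large effects as "crucial". The response function below is a made-up stand-in for a metric such as "number of hits", and the three coded factors are hypothetical:

```python
from itertools import product

def factorial_effects(response, n_factors=3):
    """Main effects from a two-level full-factorial design.

    Each factor is run at coded levels -1 and +1; the main effect of
    factor i is the mean response at +1 minus the mean response at -1.
    """
    runs = list(product([-1, 1], repeat=n_factors))
    y = {r: response(*r) for r in runs}
    half = len(runs) // 2
    effects = []
    for i in range(n_factors):
        hi = sum(y[r] for r in runs if r[i] == 1) / half
        lo = sum(y[r] for r in runs if r[i] == -1) / half
        effects.append(hi - lo)
    return effects
```

    A factor whose effect is near zero would be classed as noncrucial and dropped from the second optimization step.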

  8. NHEXAS PHASE I ARIZONA STUDY--METALS IN DERMAL WIPES ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Dermal Wipes data set contains analytical results for measurements of up to 11 metals in 179 dermal wipe samples over 179 households. Each sample was collected from the primary respondent within each household during Stage III of the NHEXAS study. The sampling per...

  9. A Spectrophotometric Study of the Permanganate-Oxalate Reaction: An Analytical Laboratory Experiment

    ERIC Educational Resources Information Center

    Kalbus, Gene E.; Lieu, Van T.; Kalbus, Lee H.

    2004-01-01

    The spectrophotometric method assists in the study of the potassium permanganate-oxalate reaction. Basic analytical techniques and rules are implemented in the experiment, which can also include the examination of other compounds oxidized by permanganate.

  10. Experimental and analytical studies of flow through a ventral and axial exhaust nozzle system for STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Esker, Barbara S.; Debonis, James R.

    1991-01-01

    Flow through a combined ventral and axial exhaust nozzle system was studied experimentally and analytically. The work is part of an ongoing propulsion technology effort at NASA Lewis Research Center for short takeoff, vertical landing (STOVL) aircraft. The experimental investigation was done on the NASA Lewis Powered Lift Facility. The experiment consisted of performance testing over a range of tailpipe pressure ratios from 1 to 3.2 and flow visualization. The analytical investigation consisted of modeling the same configuration and solving for the flow using the PARC3D computational fluid dynamics program. The comparison of experimental and analytical results was very good. The ventral nozzle performance coefficients obtained from both the experimental and analytical studies agreed within 1.2 percent. The net horizontal thrust of the nozzle system contained a significant reverse thrust component created by the flow overturning in the ventral duct. This component resulted in a low net horizontal thrust coefficient. The experimental and analytical studies showed very good agreement in the internal flow patterns.

  11. Implementation and application of moving average as continuous analytical quality control instrument demonstrated for 24 routine chemistry assays.

    PubMed

    Rossum, Huub H van; Kemperman, Hans

    2017-07-26

    General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed due to the lack of a simple method that allows optimization of MAs. A new method was applied to optimize MAs for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias-detection simulation procedure, graphically supported by bias-detection curves. Next, all optimal MA procedures that contributed to quality assurance were run for 100 consecutive days, and MA alarms generated during working hours were investigated. Optimized MA procedures were applied to 24 chemistry assays. During this evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours; 41 of these were investigated further and were caused by ion-selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (a significant difference with the other analyzer) (10), non-human materials analyzed (2), extreme result(s) of a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, a manageable number of MA alarms was generated, and the alarms that required follow-up proved valuable. For the management of MA alarms, several applications/requirements in the MA management software would simplify the use of MA procedures.
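    The underlying mechanism is easy to sketch: keep a rolling mean of patient results and raise an alarm whenever it drifts outside assay-specific control limits. The window, target, and limit below are illustrative numbers, not the paper's optimized settings:

```python
from collections import deque

def moving_average_qc(results, window=20, target=140.0, limit=2.0):
    """Continuous analytical QC sketch: flag an alarm whenever the moving
    average of the last `window` patient results drifts more than `limit`
    units from the assay target (numbers are illustrative, e.g. a
    sodium assay in mmol/L)."""
    buf = deque(maxlen=window)
    alarms = []
    for k, value in enumerate(results):
        buf.append(value)
        if len(buf) == window:
            ma = sum(buf) / window
            if abs(ma - target) > limit:
                alarms.append((k, ma))  # (result index, offending MA value)
    return alarms
```

    With a window of 20 and a limit of 2.0, a sustained +4.0 shift is flagged once 11 biased results have entered the window.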

  12. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    PubMed Central

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of its ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the substances most responsible for the acid-base equilibria of wine. The analytical concentrations of carboxylic acids and of other acid-base active substances were used as input, with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure the thermodynamic rigour of the study. Validation of the optimized chemical model is achieved by way of conductometric measurements and using a synthetic “wine” especially adapted for testing. PMID:22566762

  14. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made by optimizing a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of the uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function, and we analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  15. Enzyme Biosensors for Biomedical Applications: Strategies for Safeguarding Analytical Performances in Biological Fluids

    PubMed Central

    Rocchitta, Gaia; Spanu, Angela; Babudieri, Sergio; Latte, Gavinella; Madeddu, Giordano; Galleri, Grazia; Nuvoli, Susanna; Bagella, Paola; Demartis, Maria Ilaria; Fiore, Vito; Manetti, Roberto; Serra, Pier Andrea

    2016-01-01

    Enzyme-based chemical biosensors rely on biological recognition: to operate, the enzymes must be available to catalyze a specific biochemical reaction and be stable under the normal operating conditions of the biosensor. Biosensor design is based on knowledge about the target analyte, as well as the complexity of the matrix in which the analyte has to be quantified. This article reviews the problems resulting from the interaction of enzyme-based amperometric biosensors with complex biological matrices containing the target analyte(s). One of the most challenging disadvantages of amperometric enzyme-based biosensor detection is signal reduction from fouling agents and interference from chemicals present in the sample matrix. This article therefore examines the operating principles of enzymatic biosensors, their analytical performance over time, and the strategies used to optimize their performance. Moreover, the composition of biological fluids, as it bears on their interaction with biosensing, is presented. PMID:27249001

  16. Optimal Power Flow Pursuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea

    This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable on low-cost microcontrollers that accompany the power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method makes it possible to bypass traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.

  17. Piezoresistive Cantilever Performance—Part II: Optimization

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Rastegar, Ali J.; Pruitt, Beth L.

    2010-01-01

    Piezoresistive silicon cantilevers fabricated by ion implantation are frequently used for force, displacement, and chemical sensors due to their low cost and electronic readout. However, the design of piezoresistive cantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. We systematically analyzed the effect of design and process parameters on force resolution and then developed an optimization approach to improve force resolution while satisfying various design constraints using simulation results. The combined simulation and optimization approach is, in principle, extensible to doping methods other than ion implantation. The optimization results were validated by fabricating cantilevers under the optimized conditions and characterizing their performance. The measurement results demonstrate that the analytical model accurately predicts force and displacement resolution, as well as the sensitivity-noise tradeoff in optimal cantilever performance. We also compared our optimization technique with existing models and demonstrated an eight-fold improvement in force resolution over simplified models. PMID:20333323

  18. A New Analytic Framework for Moderation Analysis --- Moving Beyond Analytic Interactions

    PubMed Central

    Tang, Wan; Yu, Qin; Crits-Christoph, Paul; Tu, Xin M.

    2009-01-01

    Conceptually, a moderator is a variable that modifies the effect of a predictor on a response. Analytically, the common approach used in most moderation analyses is to add analytic interactions involving the predictor and moderator in the form of cross-variable products and to test the significance of such terms. The narrow scope of this procedure is inconsistent with the broader conceptual definition of moderation, leading to confusion in the interpretation of study findings. In this paper, we develop a new analytic procedure that is consistent with the concept of moderation. The proposed framework defines moderation as a process that modifies an existing relationship between the predictor and the outcome, rather than simply a test of a predictor-by-moderator interaction. The approach is illustrated with data from a real study. PMID:20161453

  19. Analytical study of the critical behavior of the nonlinear pendulum

    NASA Astrophysics Data System (ADS)

    Lima, F. M. S.

    2010-11-01

    The dynamics of a simple pendulum consisting of a small bob and a massless rigid rod has three possible regimes depending on its total energy E: oscillatory (when E is not enough for the pendulum to reach the top position), "perpetual ascent" (when E is exactly the energy needed to reach the top), and nonoscillatory (for greater energies). In the latter regime, the pendulum rotates periodically without velocity inversions. In contrast to the oscillatory regime, for which an exact analytic solution is known, the other two regimes are usually studied by solving the equation of motion numerically. By applying conservation of energy, I derive exact analytical solutions to both the perpetual-ascent and nonoscillatory regimes and an exact expression for the pendulum period in the nonoscillatory regime. Based on Cromer's approximation for the large-angle pendulum period, I find a simple approximate expression for the decrease of the period with the initial velocity in the nonoscillatory regime, valid near the critical velocity. This expression is used to study the critical slowing down, which is observed near the transition between the oscillatory and nonoscillatory regimes.
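    The energy-conservation route to the rotating-pendulum period can be followed numerically: with angular speed v0 at the bottom, conservation of energy gives θ̇(θ) = sqrt(v0² - (2g/L)(1 - cos θ)), so T = ∫₀^{2π} dθ / θ̇(θ). A quadrature sketch (Simpson's rule; this is a numerical stand-in, not the paper's closed-form result):

```python
import math

def rotation_period(v0, g=9.8, L=1.0, n=2000):
    """Period of the nonoscillatory (rotating) pendulum by quadrature of
    T = integral over [0, 2*pi] of d(theta) / thetadot(theta), where
    thetadot follows from conservation of energy with angular speed v0
    (rad/s) at the bottom. Requires v0 > v_c = sqrt(4*g/L)."""
    vc2 = 4.0 * g / L
    assert v0 * v0 > vc2, "below critical velocity: pendulum does not rotate"

    def integrand(th):
        return 1.0 / math.sqrt(v0 * v0 - (2.0 * g / L) * (1.0 - math.cos(th)))

    # Composite Simpson's rule on [0, 2*pi] (n must be even)
    h = 2.0 * math.pi / n
    s = integrand(0.0) + integrand(2.0 * math.pi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * integrand(k * h)
    return s * h / 3.0
```

    Since θ̇ ranges between sqrt(v0² - 4g/L) at the top and v0 at the bottom, T is bracketed by 2π/v0 and 2π/sqrt(v0² - 4g/L), and it lengthens as v0 approaches the critical velocity from above, consistent with the critical slowing down discussed in the abstract.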

  20. Study optimizes gas lift in Gulf of Suez field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Waly, A.A.; Darwish, T.A.; Osman Salama, A.

    1996-06-24

    A study using PVT data combined with fluid and multiphase flow correlations optimized gas lift in the Ramadan field, Nubia C, oil wells, in the Gulf of Suez. Selection of appropriate correlations followed by multiphase flow calculations at various points of injection (POI) were the first steps in the study. After determining the POI for each well from actual pressure and temperature surveys, the study constructed lift-gas performance curves for each well. Actual and optimum operating conditions were compared to determine the optimal gas lift. The study indicated a net 2,115 bo/d could be gained from implementing its recommendations; the actual net oil gained as a result of this optimization and injected-gas reallocation was 2,024 bo/d. The paper discusses the Ramadan field, fluid properties, multiphase flow, production optimization, and results.

  1. Spin bearing retainer design optimization

    NASA Technical Reports Server (NTRS)

    Boesiger, Edward A.; Warner, Mark H.

    1991-01-01

    The dynamic behavior of spin bearings for momentum wheels (control-moment gyroscopes, reaction wheel assemblies) is critical to satellite stability and life. Repeated bearing-retainer instabilities hasten lubricant deterioration and can lead to premature bearing failure and/or unacceptable vibration. These instabilities are typically distinguished by increases in torque, temperature, audible noise, and vibration induced within the bearing cartridge. Ball-retainer design can be optimized to minimize these occurrences. A retainer was designed using a previously successful smaller retainer as an example. Analytical methods were then employed to predict its behavior and optimize its configuration.

  2. Shape design of an optimal comfortable pillow based on the analytic hierarchy process method

    PubMed Central

    Liu, Shuo-Fang; Lee, Yann-Long; Liang, Jung-Chin

    2011-01-01

    Objective Few studies have analyzed the shapes of pillows. The purpose of this study was to investigate the relationship between pillow shape design and subjective comfort level for asymptomatic subjects. Methods Four basic pillow design factors were selected on the basis of a literature review and recombined into 8 configurations, which were ranked by degree of comfort. The data were analyzed by the analytic hierarchy process method to determine the most comfortable pillow. Results Pillow number 4 was the most comfortable pillow in terms of head, neck, shoulder, height, and overall comfort; its design combined features of standard, cervical, and shoulder pillows. A prototype of this pillow was developed on the basis of the study results for designing future pillow shapes. Conclusions This study investigated the comfort level of particular users and the redesign features of a pillow. A deconstruction analysis can simplify the process of determining the most comfortable pillow design and aid designers in designing pillows for groups. PMID:22654680

  3. A study of optical design and optimization of laser optics

    NASA Astrophysics Data System (ADS)

    Tsai, C.-M.; Fang, Yi-Chin

    2013-09-01

    This paper proposes an optical design study of laser beam-shaping optics with an aspheric surface, applying a genetic algorithm (GA) to find the optimal results. For an Nd:YAG 355 nm waveband flat-top laser optical system, this study employed the LightTools LDS (least damped square) optimizer and a GA, an artificial-intelligence optimization method, to determine the optimal aspheric coefficients and obtain the optimal solution. The GA-optimized aspheric lenses, used with collimated laser-beam light, were applied to flatten the laser beam and achieve the best results.
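    A GA of the kind used for such coefficient searches can be sketched generically: a population of coefficient vectors evolves by tournament selection, blend crossover, and Gaussian mutation. The merit function here is a toy stand-in for a real lens merit function, and all parameters are illustrative:

```python
import random

def genetic_minimize(merit, dim, pop=30, gens=80, lo=-1.0, hi=1.0):
    """Schematic GA over `dim` real coefficients: tournament selection,
    blend (arithmetic) crossover, and Gaussian mutation. `merit` is any
    scalar cost to minimize (a stand-in for a lens merit function)."""
    rng = random.Random(1)
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]

    def tournament():
        a, b = rng.sample(P, 2)
        return a if merit(a) < merit(b) else b

    for _ in range(gens):
        Q = []
        for _ in range(pop):
            p1, p2 = tournament(), tournament()
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]
            if rng.random() < 0.2:  # occasional Gaussian mutation
                k = rng.randrange(dim)
                child[k] += rng.gauss(0.0, 0.1)
            Q.append(child)
        P = Q
    return min(P, key=merit)
```

    On a convex toy merit such as a shifted quadratic, the population contracts onto the optimum within a few dozen generations.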

  4. Energy-optimal electrical excitation of nerve fibers.

    PubMed

    Jezernik, Saso; Morari, Manfred

    2005-04-01

    We derive, based on an analytical nerve membrane model and the optimal control theory of dynamical systems, an energy-optimal stimulation current waveform for electrical excitation of nerve fibers. Optimal stimulation waveforms for nonleaky and leaky membranes are calculated; the leaky membrane represents the realistic case. Finally, we compare the waveforms and energies necessary for excitation of a leaky membrane when the stimulation waveform is a square-wave current pulse and in the case of energy-optimal stimulation. The optimal stimulation waveform is an exponentially rising waveform and requires considerably less energy to excite the nerve than a square-wave pulse (especially for longer pulse durations). The described theoretical results can lead to drastically increased battery lifetime and/or decreased energy transmission requirements for implanted biomedical systems.
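
    The energy advantage of the exponentially rising waveform can be illustrated numerically on a leaky-integrator membrane: each candidate waveform is scaled so the membrane just reaches threshold at the end of the pulse, and the stimulation energies are then compared. This is a simplified sketch, not the paper's derivation; units and parameters are arbitrary.

```python
import math

def energy_to_threshold(shape, duration, tau=1.0, v_th=1.0, steps=2000):
    """Scale a current waveform shape(t) so that a leaky-integrator membrane
    dV/dt = (-V + i(t)) / tau just reaches v_th at t = duration, then return
    the stimulation energy, i.e. the integral of i(t)^2 dt (arbitrary units)."""
    dt = duration / steps
    v = 0.0
    for k in range(steps):                      # forward-Euler membrane march
        v += dt * (-v + shape(k * dt)) / tau
    scale = v_th / v                            # amplitude that just reaches threshold
    return sum((scale * shape(k * dt)) ** 2 * dt for k in range(steps))

duration, tau = 2.0, 1.0
e_square = energy_to_threshold(lambda t: 1.0, duration, tau)
e_rising = energy_to_threshold(lambda t: math.exp(t / tau), duration, tau)
# The exponentially rising waveform needs less energy than the square pulse.
```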

  5. Performance Optimization of Irreversible Air Heat Pumps Considering Size Effect

    NASA Astrophysics Data System (ADS)

    Bi, Yuehong; Chen, Lingen; Ding, Zemin; Sun, Fengrui

    2018-06-01

    Considering the size of an irreversible air heat pump (AHP), heating load density (HLD) is taken as the thermodynamic optimization objective using finite-time thermodynamics. Based on an irreversible AHP model with infinite reservoir thermal-capacitance rate, an expression for the HLD of the AHP is derived. The HLD optimization is studied analytically and numerically in two respects: (1) choice of pressure ratio and (2) distribution of heat-exchanger inventory. Heat reservoir temperatures, the heat transfer performance of the heat exchangers, and the irreversibility of the compression and expansion processes are important factors influencing the performance of an irreversible AHP; they are characterized by the temperature ratio, heat-exchanger inventory, and isentropic efficiencies, respectively. The impacts of these parameters on the maximum HLD are thoroughly studied. The results show that HLD optimization can reduce the size of the AHP system and improve its compactness.

  6. BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.

    PubMed

    White, B J; Amrine, D E; Larson, R L

    2018-04-14

    Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
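
    The partition-train-evaluate workflow described above can be sketched as follows, with synthetic stand-in data and a nearest-neighbour classifier as one of the many possible algorithms; accuracy is reported only on the held-out (naive) split, as the abstract describes.

```python
import random

random.seed(1)

# Synthetic stand-in for operational livestock records: two numeric
# features and a binary health outcome (hypothetical data).
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(100)] + \
       [((random.gauss(3, 1), random.gauss(3, 1)), 1) for _ in range(100)]
random.shuffle(data)

# Partition: build/refine on the training split, test on the naive split.
split = int(0.7 * len(data))
train, test = data[:split], data[split:]

def predict_1nn(x):
    """Nearest-neighbour classifier: label of the closest training record."""
    nearest = min(train, key=lambda d: (d[0][0] - x[0]) ** 2 + (d[0][1] - x[1]) ** 2)
    return nearest[1]

def accuracy(predict):
    return sum(predict(x) == y for x, y in test) / len(test)

majority = max((0, 1), key=lambda c: sum(y == c for _, y in train))
acc_nn = accuracy(predict_1nn)            # candidate classifier
acc_baseline = accuracy(lambda x: majority)  # always-majority baseline
```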

  7. Analytical study of acoustically perturbed Brillouin active magnetized semiconductor plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shukla, Arun, E-mail: arunshuklaujn@gmail.com; Jat, K. L.

    2015-07-31

    An analytical study of an acoustically perturbed Brillouin-active magnetized semiconductor plasma is reported. In the present investigation, the lattice displacement, acousto-optical polarization, susceptibility, and acousto-optical gain constant arising from the induced nonlinear current density and the acousto-optical process are deduced for an acoustically perturbed Brillouin-active magnetized semiconductor plasma using the hydrodynamical model of plasma and a coupled-mode scheme. The influence of wave number and magnetic field is explored. The analysis is applied to a centrosymmetric crystal, with numerical estimates for an n-type InSb crystal irradiated by a frequency-doubled 10.6 µm CO2 laser. It is found that lattice displacement, susceptibility, and acousto-optical gain increase linearly with the incident wave number and applied dc magnetic field, while decreasing with the scattering angle. The gain also increases with the electric amplitude of the incident laser beam. The results agree well with the available literature.

  8. Analytical study on the self-healing property of Bessel beam

    NASA Astrophysics Data System (ADS)

    Chu, X.

    2012-10-01

    With the help of the Babinet principle, an analytical expression for the self-healing of a Bessel beam is derived, using a Gaussian absorption function to describe the obstacle. Based on this expression, the self-healing properties of the Bessel beam are studied. It is shown that a Bessel beam can reconstruct its beam shape after being disturbed by an obstacle. However, during the self-healing process the obstruction affects not only the intensity of the beam directly behind the obstacle but also the rest of the beam. Meanwhile, a highlight spot, whose intensity is larger than it would be without the obstacle, appears; the size and strength of this spot are determined by the size of the obstacle. From the change of the Poynting vector and the Babinet principle, physical interpretations are given for the self-healing ability, the effects of the obstruction on the rest of the beam, and the appearance of the highlight spot.

  9. A European multicenter study on the analytical performance of the VERIS HBV assay.

    PubMed

    Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Izopet, Jacques; Lombardi, Alessandra; Mancon, Alessandro; Marcos, Maria Angeles; Sauné, Karine; O Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel

    Hepatitis B viral load monitoring is an essential part of managing patients with chronic Hepatitis B infection. Beckman Coulter has developed the VERIS HBV Assay for use on the fully automated Beckman Coulter DxN VERIS Molecular Diagnostics System. OBJECTIVES: To evaluate the analytical performance of the VERIS HBV Assay at multiple European virology laboratories. Precision, analytical sensitivity, negative sample performance, linearity, and performance with major HBV genotypes/subtypes were evaluated. Precision showed an SD of 0.15 log10 IU/mL or less at each level tested. Analytical sensitivity determined by probit analysis was between 6.8-8.0 IU/mL. Clinical specificity on 90 unique patient samples was 100.0%. Performance with 754 negative samples demonstrated 100.0% not-detected results, and a carryover study showed no cross-contamination. Linearity using clinical samples was shown from 1.23-8.23 log10 IU/mL, and the assay detected and showed linearity with major HBV genotypes/subtypes. The VERIS HBV Assay demonstrated analytical performance comparable to other currently marketed assays for HBV DNA monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Decision support for environmental management of industrial non-hazardous secondary materials: New analytical methods combined with simulation and optimization modeling.

    PubMed

    Little, Keith W; Koralegedara, Nadeesha H; Northeim, Coleen M; Al-Abed, Souhail R

    2017-07-01

    Non-hazardous solid materials from industrial processes, once regarded as waste and disposed in landfills, offer numerous environmental and economic advantages when put to beneficial uses (BUs). Proper management of these industrial non-hazardous secondary materials (INSM) requires estimates of their probable environmental impacts among disposal as well as BU options. The U.S. Environmental Protection Agency (EPA) has recently approved new analytical methods (EPA Methods 1313-1316) to assess leachability of constituents of potential concern in these materials. These new methods are more realistic for many disposal and BU options than historical methods, such as the toxicity characteristic leaching protocol. Experimental data from these new methods are used to parameterize a chemical fate and transport (F&T) model to simulate long-term environmental releases from flue gas desulfurization gypsum (FGDG) when disposed of in an industrial landfill or beneficially used as an agricultural soil amendment. The F&T model is also coupled with optimization algorithms, the Beneficial Use Decision Support System (BUDSS), under development by EPA to enhance INSM management. Published by Elsevier Ltd.

  11. Nuclear and atomic analytical techniques in environmental studies in South America.

    PubMed

    Paschoa, A S

    1990-01-01

    The use of nuclear analytical techniques for environmental studies in South America is selectively reviewed, from the early work of Lattes with cosmic rays to recent applications of the PIXE (particle-induced X-ray emission) technique to air pollution problems in large cities such as São Paulo and Rio de Janeiro. Studies on natural radioactivity and fallout from nuclear weapons in South America are briefly examined.

  12. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.
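
    The shooting idea behind the second VTOTS algorithm can be illustrated on a toy boundary-value problem (this is plain single shooting, not the package's multiple-shooting implementation): guess the unknown initial slope, integrate forward, and adjust the guess until the far boundary condition is met.

```python
import math

def shoot(slope0, steps=4000):
    """March y'' = -y from t = 0 to t = pi/2 with y(0) = 0 and a guessed
    initial slope, returning the terminal value y(pi/2) (midpoint/RK2)."""
    h = (math.pi / 2) / steps
    y, v = 0.0, slope0
    for _ in range(steps):
        ym = y + 0.5 * h * v
        vm = v + 0.5 * h * (-y)
        y += h * vm
        v += h * (-ym)
    return y

def shooting_solve(target=1.0, lo=0.0, hi=2.0, tol=1e-10):
    """Bisect on the unknown initial slope until y(pi/2) = target:
    the essence of a shooting algorithm for boundary-value problems."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

slope = shooting_solve()  # exact answer is y'(0) = 1, since y = sin t
```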

  13. Analytical Quality by Design Approach in RP-HPLC Method Development for the Assay of Etofenamate in Dosage Forms

    PubMed Central

    Peraman, R.; Bhadraya, K.; Reddy, Y. Padmanabha; Reddy, C. Surayaprakash; Lokesh, T.

    2015-01-01

    Considering current regulatory requirements for analytical method development, a reversed-phase high performance liquid chromatographic method for routine analysis of etofenamate in dosage forms has been optimized using an analytical quality by design approach. Unlike the routine approach, the present study began with an understanding of the quality target product profile, the analytical target profile, and a risk assessment of the method variables that affect the method response. A liquid chromatography system equipped with a C18 column (250×4.6 mm, 5 μm), a binary pump, and a photodiode array detector was used in this work. The experiments were planned by central composite design, which saves time, reagents, and other resources. Sigma Tech software was used to plan and analyze the experimental observations and to obtain a quadratic process model. The process model was used to predict the retention time; the predictions from the contour diagram were verified experimentally and agreed with the observed data. The optimized method used a flow rate of 1.2 ml/min with a mobile phase of methanol and 0.2% triethylamine in water at 85:15, % v/v, pH adjusted to 6.5. The method was validated and verified for the targeted method performance, robustness, and system suitability during method transfer. PMID:26997704
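
    A central composite design of the kind used to plan the experiments combines factorial corners, axial (star) points, and centre replicates in coded units. A minimal generator, with illustrative defaults:

```python
from itertools import product

def central_composite(k, alpha=1.414, center_pts=3):
    """Coded points of a central composite design: 2^k factorial corners,
    2k axial (star) points at +/-alpha, and replicated centre points.
    alpha ~ sqrt(2) gives a rotatable design for k = 2 factors."""
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            point = [0.0] * k
            point[i] = a
            axial.append(point)
    centres = [[0.0] * k for _ in range(center_pts)]
    return corners + axial + centres

# e.g. two hypothetical chromatographic variables (flow rate, %methanol)
design = central_composite(2)  # 4 corners + 4 star points + 3 centre runs
```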

  14. Design optimization for active twist rotor blades

    NASA Astrophysics Data System (ADS)

    Mok, Ji Won

    This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This framework allows exploration of a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient-based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with the locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and blade strength based on local three-dimensional strain fields under worst-case loading conditions. Through pre-processing, limitations of the proposed process were studied, and resolution strategies were proposed where limitations were detected. These include mesh overlapping, element distortion, trailing-edge tab modeling, electrode modeling and foam implementation in the mesh generator, and the initial-point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and showed a significant impact on vibration reduction, the proposed optimization process revealed that the design could still be improved significantly.
The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to

  15. A case study on topology optimized design for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Gebisa, A. W.; Lemu, H. G.

    2017-12-01

    Topology optimization is an optimization method that employs mathematical tools to optimize material distribution in a part to be designed. Earlier developments of topology optimization assumed conventional manufacturing techniques, which are limited in producing complex geometries; this hindered topology optimization from being fully realized. With the emergence of additive manufacturing (AM) technologies, which build a part layer by layer directly from its three-dimensional (3D) model data, producing complex geometry is no longer an issue. Realization of topology optimization through AM provides full design freedom for design engineers. The article focuses on a topologically optimized design approach for additive manufacturing, with a case study on the lightweight design of a jet engine bracket. The study shows that topology optimization is a powerful design technique for reducing the weight of a product while maintaining the design requirements, provided additive manufacturing is considered.

  16. Research on Collection System Optimal Design of Wind Farm with Obstacles

    NASA Astrophysics Data System (ADS)

    Huang, W.; Yan, B. Y.; Tan, R. S.; Liu, L. F.

    2017-05-01

    In the optimal design of an offshore wind farm collection system, the factors to consider include not only the reasonable configuration of cables and switches but also the influence of obstacles on the topology design. This paper presents a concrete topology optimization algorithm that accounts for obstacles. The minimal-area rectangular bounding box of each obstacle is obtained using the minimal-area bounding box method. An optimization algorithm combining the advantages of Dijkstra's algorithm and Prim's algorithm is then used to obtain an obstacle-avoiding path plan. Finally, a fuzzy comprehensive evaluation model based on the analytic hierarchy process is constructed to compare the performance of the different topologies. Case studies demonstrate the feasibility of the proposed algorithm and model.
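
    The shortest-path component of such a scheme can be sketched with Dijkstra's algorithm; in this hypothetical layout, an edge that would cross an obstacle's bounding box is simply omitted from the graph before the search.

```python
import heapq

def dijkstra(graph, src):
    """Shortest cable-path distances from src.
    graph maps node -> list of (neighbour, cable length)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical layout: substation S and turbines A-C; the direct S-C
# edge is assumed removed because it would cross an obstacle.
graph = {
    "S": [("A", 2.0), ("B", 5.0)],
    "A": [("S", 2.0), ("B", 1.0), ("C", 7.0)],
    "B": [("S", 5.0), ("A", 1.0), ("C", 3.0)],
    "C": [("A", 7.0), ("B", 3.0)],
}
dist = dijkstra(graph, "S")  # C is reached via S-A-B-C, total length 6
```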

  17. Case study on impact performance optimization of hydraulic breakers.

    PubMed

    Noh, Dae-Kyung; Kang, Young-Ky; Cho, Jae-Sang; Jang, Joo-Sup

    2016-01-01

    In order to expand the range of activities of an excavator, attachments, such as hydraulic breakers have been developed to be applied to buckets. However, it is very difficult to predict the dynamic behavior of hydraulic impact devices such as breakers because of high non-linearity. Thus, the purpose of this study is to optimize the impact performance of hydraulic breakers. The ultimate goal of the optimization is to increase the impact energy and impact frequency and to reduce the pressure pulsation of the supply and return lines. The optimization results indicated that the four parameters used to optimize the impact performance of the breaker showed considerable improvement over the results reported in the literature. A test was also conducted and the results were compared with those obtained through optimization in order to verify the optimization results. The comparison showed an average relative error of 8.24 %, which seems to be in good agreement. The results of this study can be used to optimize the impact performance of hydraulic impact devices such as breakers, thus facilitating its application to excavators and increasing the range of activities of an excavator.

  18. Analytical Study on Multi-Tier 5G Heterogeneous Small Cell Networks: Coverage Performance and Energy Efficiency.

    PubMed

    Xiao, Zhu; Liu, Hongjing; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-11-04

    In this paper, we investigate the coverage performance and energy efficiency of multi-tier heterogeneous cellular networks (HetNets) which are composed of macrocells and different types of small cells, i.e., picocells and femtocells. By virtue of stochastic geometry tools, we model the multi-tier HetNets based on a Poisson point process (PPP) and analyze the Signal to Interference Ratio (SIR) via studying the cumulative interference from pico-tier and femto-tier. We then derive the analytical expressions of coverage probabilities in order to evaluate coverage performance in different tiers and investigate how it varies with the small cells' deployment density. By taking the fairness and user experience into consideration, we propose a disjoint channel allocation scheme and derive the system channel throughput for various tiers. Further, we formulate the energy efficiency optimization problem for multi-tier HetNets in terms of throughput performance and resource allocation fairness. To solve this problem, we devise a linear programming based approach to obtain the available area of the feasible solutions. System-level simulations demonstrate that the small cells' deployment density has a significant effect on the coverage performance and energy efficiency. Simulation results also reveal that there exists an optimal small cell base station (SBS) density ratio between pico-tier and femto-tier which can be applied to maximize the energy efficiency and at the same time enhance the system performance. Our findings provide guidance for the design of multi-tier HetNets for improving the coverage performance as well as the energy efficiency.
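
    The PPP-based coverage analysis can be approximated by a small Monte Carlo experiment: drop base stations as a Poisson point process in a disc, attach the typical user at the origin to the nearest one, and count how often the SIR clears a threshold. This sketch omits fading and uses arbitrary parameters, so it is far simpler than the paper's multi-tier model.

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Sample a Poisson count via Knuth's product-of-uniforms method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def coverage_probability(density, sir_threshold_db, radius=20.0,
                         alpha=4.0, trials=2000):
    """Monte Carlo downlink coverage for the typical user at the origin:
    base stations form a PPP in a disc, the user attaches to the nearest
    one, all others interfere (path loss r^-alpha, no fading)."""
    threshold = 10 ** (sir_threshold_db / 10.0)
    area = math.pi * radius ** 2
    covered = 0
    for _ in range(trials):
        n = poisson(density * area)
        if n == 0:
            continue  # no base station in the window: counted as not covered
        # radii of uniform points in a disc are distributed as R * sqrt(U)
        radii = sorted(radius * math.sqrt(random.random()) for _ in range(n))
        signal = radii[0] ** -alpha
        interference = sum(r ** -alpha for r in radii[1:])
        if interference == 0 or signal / interference > threshold:
            covered += 1
    return covered / trials

p_easy = coverage_probability(0.02, sir_threshold_db=-10.0)
p_hard = coverage_probability(0.02, sir_threshold_db=10.0)
# Coverage decreases monotonically as the SIR threshold is raised.
```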

  20. Gradient design for liquid chromatography using multi-scale optimization.

    PubMed

    López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C

    2018-01-26

    In reversed-phase liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem in a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique of fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Determination of serum albumin, analytical challenges: a French multicenter study.

    PubMed

    Rossary, Adrien; Blondé-Cynober, Françoise; Bastard, Jean-Philippe; Beauvieux, Marie-Christine; Beyne, Pascale; Drai, Jocelyne; Lombard, Christine; Anglard, Ingrid; Aussel, Christian; Claeyssens, Sophie; Vasson, Marie-Paule

    2017-06-01

    Among the biological markers of morbidity and mortality, albumin holds a key place in the range of criteria used by the French High Authority for Health (HAS) for the assessment of malnutrition and for the coding of the hospital information system (PMSI). Although the principles of the quantification methods have not changed in recent years, the dispersion of external quality assessment (EEQ) data shows that standardization using certified reference material (CRM) 470 is not optimal. The aim of this multicenter study involving 7 sites, conducted by a working group of the French Society of Clinical Biology (SFBC), was to assess whether albumin values depend on the analytical system used. Albumin from plasma (n=30) and serum (n=8) pools was quantified by 5 different methods [bromocresol green (BCG) and bromocresol purple (BCP) colorimetry, immunoturbidimetry (IT), immunonephelometry (IN), and capillary electrophoresis (CE)] using 12 analyzers. The Bland and Altman test evaluated the difference between the results obtained by the different methods. For example, a difference as large as 13 g/L was observed for the same sample between methods (p<0.001) in the concentration range of 30 to 35 g/L. BCG overestimates albumin across the range of values tested compared to BCP (p<0.05), and the BCP method gives results similar to IN for values lower than 40 g/L. For the IT methods, one of the method/analyzer tandems underestimates the albumin values, inducing a performance difference between the immunoprecipitation methods (IT vs IN, p<0.05). Albumin results therefore depend on the method/analyzer tandem used. This variability is usually not taken into account by the clinician. Thus, clinicians and biologists must be aware of, and check, depending on the method used, the albumin thresholds identified as risk factors for complications related to malnutrition and for PMSI coding.
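
    The Bland and Altman comparison used in the study reduces to a mean bias and 95% limits of agreement over paired differences; a minimal sketch on hypothetical paired albumin results (not the study's data):

```python
import math

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics between two methods: the mean
    bias and the 95% limits of agreement (bias +/- 1.96 SD of differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired albumin results (g/L) from two method/analyzer tandems
bcg = [34.0, 31.5, 36.2, 29.8, 33.1, 35.6, 30.9, 32.4]
bcp = [31.1, 29.0, 33.4, 27.5, 30.6, 33.0, 28.2, 30.1]
bias, (lo, hi) = bland_altman(bcg, bcp)  # positive bias: first method reads higher
```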

  2. Predictive classification of self-paced upper-limb analytical movements with EEG.

    PubMed

    Ibáñez, Jaime; Serrano, J I; del Castillo, M D; Minguez, J; Pons, J L

    2015-11-01

    The extent to which electroencephalographic activity allows the characterization of movements with the upper limb is an open question. This paper describes the design and validation of a classifier of upper-limb analytical movements based on electroencephalographic activity extracted from intervals preceding self-initiated movement tasks. The features selected for classification are subject-specific and associated with the movement tasks. Further tests were performed to reject the hypothesis that information other than the task-related cortical activity was being used by the classifiers. Six healthy subjects were measured while performing self-initiated upper-limb analytical movements. A Bayesian classifier was used to classify among seven different kinds of movements. The features considered covered the alpha and beta bands. A genetic algorithm was used to optimally select a subset of features for classification. An average accuracy of 62.9 ± 7.5% was reached, above the baseline level observed with the proposed methodology (30.2 ± 4.3%). The study shows that electroencephalography carries information about the type of analytical movement performed with the upper limb and that this information can be decoded before the movement begins. In neurorehabilitation environments, this information could be used for monitoring and assisting purposes.
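
    A Bayesian classifier of the kind employed can be sketched as Gaussian naive Bayes; the band-power features and class labels below are hypothetical, and the genetic feature-selection step is omitted.

```python
import math

def fit(samples):
    """samples: list of (feature_vector, label). Returns per-class
    Gaussian parameters (means, variances) and priors."""
    by_class = {}
    for x, y in samples:
        by_class.setdefault(y, []).append(x)
    model = {}
    total = len(samples)
    for y, xs in by_class.items():
        n = len(xs)
        means = [sum(col) / n for col in zip(*xs)]
        variances = [max(sum((v - m) ** 2 for v in col) / n, 1e-6)
                     for col, m in zip(zip(*xs), means)]
        model[y] = (means, variances, n / total)
    return model

def predict(model, x):
    """Pick the class with the highest Gaussian log-posterior."""
    def log_post(params):
        means, variances, prior = params
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, variances):
            ll += -0.5 * (math.log(2 * math.pi * s2) + (v - m) ** 2 / s2)
        return ll
    return max(model, key=lambda y: log_post(model[y]))

# Hypothetical alpha/beta band-power features for two movement classes
train = [((1.0, 0.2), "flex"), ((1.1, 0.25), "flex"), ((0.9, 0.3), "flex"),
         ((0.3, 1.0), "ext"), ((0.35, 1.1), "ext"), ((0.25, 0.9), "ext")]
model = fit(train)
```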

  3. Geospatial Analytics in Retail Site Selection and Sales Prediction.

    PubMed

    Ting, Choo-Yee; Ho, Chiung Ching; Yee, Hui Jia; Matsah, Wan Razali

    2018-03-01

    Studies have shown that certain features from geography, demography, trade area, and environment can play a vital role in retail site selection, largely due to their impact on retail performance. Although the relevant features can be elicited by domain experts, determining the optimal feature set can be an intractable and labor-intensive exercise. The challenges center on (1) how to determine the features that are important to a particular retail business and (2) how to estimate retail sales performance at a new location. These challenges become apparent when the features vary across time. In this light, this study proposed a nonintervening approach employing feature selection algorithms followed by sales prediction through similarity-based methods. The prediction results were validated by domain experts. In this study, data sets from different sources were transformed and aggregated before an analytics data set ready for analysis could be obtained. The data sets included data about location features, population count, property type, education status, and monthly sales from 96 branches of a telecommunication company in Malaysia. The findings suggested that (1) optimal retail performance can only be achieved through fulfillment of specific location features together with the surrounding trade-area characteristics and (2) similarity-based methods can provide a solution to retail sales prediction.
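
    The similarity-based prediction step can be sketched as k-nearest-neighbour averaging over existing branches; the feature values and sales figures below are hypothetical.

```python
def predict_sales(features, sites, k=3):
    """Similarity-based prediction: average the sales of the k most
    similar existing branches (squared Euclidean distance on
    normalized location features)."""
    ranked = sorted(sites, key=lambda s: sum((a - b) ** 2
                                             for a, b in zip(s[0], features)))
    return sum(sales for _, sales in ranked[:k]) / k

# Hypothetical branch records: (population density, income index) -> monthly sales
sites = [
    ((0.90, 0.80), 120.0),
    ((0.80, 0.90), 110.0),
    ((0.85, 0.85), 115.0),
    ((0.20, 0.30), 40.0),
    ((0.10, 0.20), 35.0),
    ((0.15, 0.25), 30.0),
]
estimate = predict_sales((0.88, 0.86), sites)  # averages the three urban branches
```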

  4. Approximate approach for optimization space flights with a low thrust on the basis of sufficient optimality conditions

    NASA Astrophysics Data System (ADS)

    Salmin, Vadim V.

    2017-01-01

    Low-thrust flight mechanics is a recent branch of space flight mechanics, encompassing problems of trajectory optimization, motion control laws, and spacecraft design parameters. Tasks involving additional factors in the mathematical models of spacecraft motion are becoming increasingly important, as are additional restrictions on thrust vector control. The complication of these mathematical models of controlled motion leads to difficulties in solving the optimization problems. The author proposes methods for finding approximately optimal controls and for evaluating their optimality on the basis of analytical solutions. These methods rest on the principle of extending the class of admissible states and controls and on sufficient conditions for an absolute minimum. The estimation procedures developed make it possible to determine how close a found solution is to the optimum and indicate ways to improve it. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular low-thrust transfers between circular non-coplanar orbits, optimization of the control angle and trajectory during interorbital flights, and low-thrust transfers between arbitrary elliptical Earth satellite orbits.

  5. Variable fidelity robust optimization of pulsed laser orbital debris removal under epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan

    2016-04-01

    A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The method optimizes the performance while simultaneously maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm, using a steepest-descent-like strategy with proper orthogonal decomposition (POD), is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model: when solutions fall into the trust region, the analytical model is used instead. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.

  6. Can neutral analytes be concentrated by transient isotachophoresis in micellar electrokinetic chromatography and how much?

    PubMed

    Matczuk, Magdalena; Foteeva, Lidia S; Jarosz, Maciej; Galanski, Markus; Keppler, Bernhard K; Hirokawa, Takeshi; Timerbaev, Andrei R

    2014-06-06

    Transient isotachophoresis (tITP) is a versatile sample preconcentration technique that uses ITP to focus electrically charged analytes at the initial stage of CE analysis. However, according to the governing principle of tITP, uncharged analytes are beyond its capacity when they are separated and detected by micellar electrokinetic chromatography (MEKC). On the other hand, when it is the charged micelles that undergo tITP focusing, one can anticipate a concentration effect resulting from the formation of a transient micellar stack at the moving sample/background electrolyte (BGE) boundary, which progressively accumulates the analytes. This work expands the enrichment potential of tITP for MEKC by demonstrating the quantitative analysis of uncharged metal-based drugs in highly saline samples, introducing into the BGE solution anionic surfactants and buffer (terminating) co-ions of different mobility and concentration to optimize performance. Metallodrugs of assorted lipophilicity were chosen in order to explore whether their varying affinity toward micelles plays a role. In addition to altering the sample and BGE composition, the detection capability was optimized by fine-tuning operational variables such as sample volume, separation voltage, and pressure. The results of the optimization trials shed light on the mechanism of micellar tITP and enable effective determination of the selected drugs in human urine, with practical limits of detection using a conventional UV detector.

  7. Optimal consensus algorithm integrated with obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Wang, Jianan; Xin, Ming

    2013-01-01

    This article proposes a new consensus algorithm for networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control effort. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. Asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law requires only local information based on the communication topology, rather than information from all agents, to guarantee the proposed behaviours. The consensus and obstacle avoidance are validated through simulations.
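
    The single-integrator consensus dynamics described above can be sketched numerically. The toy simulation below is an illustrative assumption, not the paper's inverse-optimal control law: it combines a standard consensus term with a naive repulsive-potential obstacle term, and the gains `k_cons` and `k_avoid`, the safety radius, and the three-agent line topology are all hypothetical choices.

```python
import numpy as np

def simulate(adj, x0, obstacle, r_safe=0.6, dt=0.01, steps=4000,
             k_cons=1.0, k_avoid=2.0):
    """Euler simulation of single-integrator consensus with a simple
    repulsive obstacle term (illustrative only; not the paper's
    nonquadratic inverse-optimal cost design)."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(steps):
        u = np.zeros_like(x)
        for i in range(n):
            # consensus term: move toward communication neighbours
            for j in range(n):
                if adj[i][j]:
                    u[i] -= k_cons * (x[i] - x[j])
            # obstacle term: push radially away inside the safety radius
            d = x[i] - obstacle
            dist = np.linalg.norm(d)
            if dist < r_safe:
                u[i] += k_avoid * d / (dist ** 2 + 1e-9)
        x += dt * u
    return x

# three agents on a line graph whose meeting point lies near an obstacle
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
start = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
obstacle = np.array([1.0, 0.5])
final = simulate(adj, start, obstacle)
spread = max(np.linalg.norm(final[i] - final[j])
             for i in range(3) for j in range(3))
```

    The agents reach approximate consensus (small final spread) while settling outside the obstacle's safety radius, mirroring the combined consensus/avoidance behaviour validated in the article's simulations.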

  8. Incorporating Aptamers in the Multiple Analyte Profiling Assays (xMAP): Detection of C-Reactive Protein.

    PubMed

    Bernard, Elyse D; Nguyen, Kathy C; DeRosa, Maria C; Tayabali, Azam F; Aranda-Rodriguez, Rocio

    2017-01-01

    Aptamers are short oligonucleotide sequences used in detection systems because of their high-affinity binding to a variety of macromolecules. With the introduction of aptamers over 25 years ago came the exploration of their use in many different applications as a substitute for antibodies. Aptamers have several advantages: they are easy to synthesize, can bind to analytes for which it is difficult to obtain antibodies, and in some cases bind better than antibodies. As such, aptamer applications have significantly expanded as an adjunct to a variety of different immunoassay designs. The Multiple Analyte Profiling (xMAP) technology developed by Luminex Corporation commonly uses antibodies for the detection of analytes in small sample volumes through the use of fluorescently coded microbeads. This technology permits the simultaneous detection of multiple analytes in each sample tested and hence could be applied in many research fields. Although little work has been performed adapting this technology for use with aptamers, optimizing aptamer-based xMAP assays would dramatically increase the versatility of analyte detection. We report herein on the development of an xMAP bead-based aptamer/antibody sandwich assay for a biomarker of inflammation (C-reactive protein, or CRP). Protocols for the coupling of aptamers to xMAP beads, validation of coupling, and an aptamer/antibody sandwich-type assay for CRP are detailed. The optimized conditions, protocols, and findings described in this research could serve as a starting point for the development of new aptamer-based xMAP assays.

  9. Modelling a flows in supply chain with analytical models: Case of a chemical industry

    NASA Astrophysics Data System (ADS)

    Benhida, Khalid; Azougagh, Yassine; Elfezazi, Said

    2016-02-01

    This study addresses the modelling of logistics flows in a supply chain composed of production sites and a logistics platform. The contribution of this research is to develop an analytical model (an integrated linear programming model), based on a case study of a real company operating in the phosphate field, that considers the various constraints in this supply chain in order to resolve planning problems and support better decision-making. The objective of this model is to determine the optimal quantities of the different products to route to and from the various entities in the supply chain studied.

  10. Numerical optimization using flow equations.

    PubMed

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
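
    The core idea of homotopy continuation for optimization can be illustrated with a minimal sketch. This is not the paper's maximum-entropy flow with a continuously updated prior; it simply blends an easy convex problem into the target objective and tracks the minimizer as the blend parameter flows from 0 to 1. The step sizes, schedule, and double-well test function are all assumptions for illustration.

```python
import numpy as np

def homotopy_minimize(f_grad, x0, t_steps=50, inner=200, lr=0.05):
    """Generic homotopy continuation for minimization: blend the easy
    convex problem g(x) = |x - x0|^2 / 2 into the target f, tracking
    the minimizer as t flows from 0 to 1.  (Illustrative sketch only;
    the paper couples this idea with a maximum-entropy prior that is
    updated along the flow.)"""
    x = np.array(x0, dtype=float)
    for t in np.linspace(0.0, 1.0, t_steps):
        for _ in range(inner):
            # gradient of H(x, t) = (1 - t) * |x - x0|^2 / 2 + t * f(x)
            g = (1.0 - t) * (x - x0) + t * f_grad(x)
            x -= lr * g
    return x

# target: a 1D double well f(x) = (x^2 - 1)^2 with gradient 4x(x^2 - 1);
# starting the flow at x0 = 0.3 tracks the branch into the well at x = +1
f_grad = lambda x: 4.0 * x * (x ** 2 - 1.0)
xmin = homotopy_minimize(f_grad, x0=np.array([0.3]))
```

    The extremum reached at t = 1 is a fixed point of the final gradient flow, in the spirit of the fixed-point characterization described in the abstract.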

  11. Numerical optimization using flow equations

    NASA Astrophysics Data System (ADS)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  12. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1991-01-01

    Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.

  13. Converging Towards the Optimal Path to Extinction

    DTIC Science & Technology

    2011-01-01

    the reproductive rate R0 should be greater than but very close to 1. However, most real diseases have R0 larger than 1.5, which translates into a...can analytically find an expression for the action along the optimal path. The expression for the action is a function of k and the reproductive number...the optimal path for a range of values of the reproductive number R0. In contrast to the prior two examples, here the action must be computed

  14. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use.

  15. The Challenge of Developing a Universal Case Conceptualization for Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Bonow, Jordan T.; Maragakis, Alexandros; Follette, William C.

    2012-01-01

    Functional Analytic Psychotherapy (FAP) targets a client's interpersonal behavior for change with the goal of improving his or her quality of life. One question guiding FAP case conceptualization is, "What interpersonal behavioral repertoires will allow a specific client to function optimally?" Previous FAP writings have suggested that a therapist…

  16. Measuring myokines with cardiovascular functions: pre-analytical variables affecting the analytical output.

    PubMed

    Lombardi, Giovanni; Sansoni, Veronica; Banfi, Giuseppe

    2017-08-01

    In the last few years, a growing number of molecules have been associated with an endocrine function of the skeletal muscle. Circulating myokine levels, in turn, have been associated with several pathophysiological conditions, including cardiovascular ones. However, data from different studies are often not completely comparable or even discordant. This is due, at least in part, to the whole set of circumstances related to the preparation of the patient prior to blood sampling, the blood sampling procedure, and sample processing and/or storage. This entire process constitutes the pre-analytical phase. The importance of the pre-analytical phase is often overlooked; however, in routine diagnostics, about 70% of errors occur in this phase. Moreover, errors made during the pre-analytical phase carry over into the analytical phase and affect the final output. In research, for example, when samples are collected over a long time and by different laboratories, a standardized procedure for sample collection and correct procedures for sample storage are essential. In this review, we discuss the pre-analytical variables potentially affecting the measurement of myokines with cardiovascular functions.

  17. Web Analytics

    EPA Pesticide Factsheets

    EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.

  18. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons.

    PubMed

    Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A

    2016-02-21

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm(-1)) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  19. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  20. Directed transport by surface chemical potential gradients for enhancing analyte collection in nanoscale sensors.

    PubMed

    Sitt, Amit; Hess, Henry

    2015-05-13

    Nanoscale detectors hold great promise for single molecule detection and the analysis of small volumes of dilute samples. However, the probability of an analyte reaching the nanosensor in a dilute solution is extremely low due to the sensor's small size. Here, we examine the use of a chemical potential gradient along a surface to accelerate analyte capture by nanoscale sensors. Utilizing a simple model for transport induced by surface binding energy gradients, we study the effect of the gradient on the efficiency of collecting nanoparticles and single and double stranded DNA. The results indicate that chemical potential gradients along a surface can lead to an acceleration of analyte capture by several orders of magnitude compared to direct collection from the solution. The improvement in collection is limited to a relatively narrow window of gradient slopes, and its extent strongly depends on the size of the gradient patch. Our model allows the optimization of gradient layouts and sheds light on the fundamental characteristics of chemical potential gradient induced transport.

  1. Developing Guidelines for Assessing Visual Analytics Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean

    2011-07-01

    In this paper, we develop guidelines for evaluating visual analytic environments based on a synthesis of reviews for the entries to the 2009 Visual Analytics Science and Technology (VAST) Symposium Challenge and on a user study with professional intelligence analysts. By analyzing the 2009 VAST Challenge reviews we gained a better understanding of what is important to our reviewers, both visualization researchers and professional analysts. We also report on a small user study with professional analysts to determine the important factors that they use in evaluating visual analysis systems. We then looked at guidelines developed by researchers in various domains and synthesized these into an initial set for use by others in the community. In a second part of the user study, we looked at guidelines for a new aspect of visual analytic systems: the generation of reports. Future visual analytic systems have been challenged to help analysts generate their reports. In our study we worked with analysts to understand the criteria they used to evaluate the quality of analytic reports. We propose that this knowledge will be useful as researchers look at systems to automate some of the report generation. Based on these efforts, we produced some initial guidelines for evaluating visual analytic environments and for the evaluation of analytic reports. It is important to understand that these guidelines are initial drafts and are limited in scope because of the type of tasks for which the visual analytic systems used in these studies were designed. More research and refinement are needed by the visual analytics community to provide additional evaluation guidelines for different types of visual analytic environments.

  2. A semi-analytical study of positive corona discharge in wire-plane electrode configuration

    NASA Astrophysics Data System (ADS)

    Yanallah, K.; Pontiga, F.; Chen, J. H.

    2013-08-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in a negligible computational time, yet provide precise estimates of corona discharge variables.

  3. Analytic solution of magnetic induction distribution of ideal hollow spherical field sources

    NASA Astrophysics Data System (ADS)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-12-01

    The Halbach-type hollow spherical permanent magnet arrays (HSPMA) are volume-compacted, energy-efficient field sources capable of producing a multi-Tesla field in the cavity of the array, which has attracted intense interest in many practical applications. Here, we present analytical solutions of the magnetic induction of the ideal HSPMA in the entire space: outside the array, within the cavity of the array, and in the interior of the magnet. We obtain the solutions using the concept of magnetic charge to solve Poisson's and Laplace's equations for the HSPMA. Using these analytical field expressions inside the material, a scalar demagnetization function is defined to approximately indicate the regions of magnetization reversal, partial demagnetization, and inverse magnetic saturation. The analytical field solution provides deeper insight into the nature of the HSPMA and offers guidance in designing optimized ones.

  4. Optimally frugal foraging

    NASA Astrophysics Data System (ADS)

    Bénichou, O.; Bhat, U.; Krapivsky, P. L.; Redner, S.

    2018-02-01

    We introduce the frugal foraging model in which a forager performs a discrete-time random walk on a lattice in which each site initially contains S food units. The forager metabolizes one unit of food at each step and starves to death when it last ate S steps in the past. Whenever the forager eats, it consumes all food at its current site and this site remains empty forever (no food replenishment). The crucial property of the forager is that it is frugal and eats only when encountering food within at most k steps of starvation. We compute the average lifetime analytically as a function of the frugality threshold and show that there exists an optimal strategy, namely, an optimal frugality threshold k* that maximizes the forager lifetime.
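
    Alongside the analytical calculation, the frugal forager's average lifetime can be estimated by direct Monte Carlo simulation. The sketch below is a simplified 1D variant with one food unit per site (an assumption; the abstract states S units per site) and hypothetical parameter values: the forager dies after S consecutive steps without eating, and eats only when it is within k steps of starvation.

```python
import random

def lifetime(S=20, k=5, rng=random):
    """One realization of a frugal forager on a 1D lattice: every site
    starts with one food unit, eaten sites stay empty forever, and the
    forager eats only when hunger >= S - k (within k steps of starving)."""
    pos, hunger, t = 0, 0, 0
    eaten = set()
    while True:
        pos += rng.choice((-1, 1))   # unbiased nearest-neighbour step
        t += 1
        hunger += 1
        if hunger == S:
            return t                 # starved: last meal was S steps ago
        if pos not in eaten and hunger >= S - k:
            eaten.add(pos)           # frugal meal; site stays empty
            hunger = 0

def mean_lifetime(S=20, k=5, trials=300, seed=1):
    rng = random.Random(seed)
    return sum(lifetime(S, k, rng) for _ in range(trials)) / trials

avg = mean_lifetime(S=20, k=5)
```

    Scanning `mean_lifetime` over the frugality threshold k exhibits the interior optimum k* that the analytical calculation in the paper makes precise.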

  5. Strategies for global optimization in photonics design.

    PubMed

    Vukovic, Ana; Sewell, Phillip; Benson, Trevor M

    2010-10-01

    This paper reports on two important issues that arise in the context of the global optimization of photonic components where large problem spaces must be investigated. The first is the implementation of a fast simulation method and associated matrix solver for assessing particular designs and the second, the strategies that a designer can adopt to control the size of the problem design space to reduce runtimes without compromising the convergence of the global optimization tool. For this study an analytical simulation method based on Mie scattering and a fast matrix solver exploiting the fast multipole method are combined with genetic algorithms (GAs). The impact of the approximations of the simulation method on the accuracy and runtime of individual design assessments and the consequent effects on the GA are also examined. An investigation of optimization strategies for controlling the design space size is conducted on two illustrative examples, namely, 60° and 90° waveguide bends based on photonic microstructures, and their effectiveness is analyzed in terms of a GA's ability to converge to the best solution within an acceptable timeframe. Finally, the paper describes some particular optimized solutions found in the course of this work.
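
    The genetic-algorithm side of such a design loop can be sketched independently of the Mie-scattering solver. The toy below minimizes a cheap stand-in objective; in the paper's setting the objective would be the photonic figure of merit evaluated by the fast simulation method. The population size, truncation selection, crossover, and mutation scale are all hypothetical choices.

```python
import random

def genetic_minimize(f, bounds, pop_size=40, gens=60, mut=0.1, seed=3):
    """Minimal real-coded GA: truncation selection keeps the top quarter,
    children are midpoints of two elites with occasional Gaussian
    mutation clipped to the design-space bounds."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 4]          # elitism: best quarter survives
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # midpoint crossover
            for i, (lo, hi) in enumerate(bounds):          # rare mutation
                if rng.random() < mut:
                    step = rng.gauss(0, 0.1 * (hi - lo))
                    child[i] = min(hi, max(lo, child[i] + step))
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

# stand-in objective with a known optimum at (1.2, -0.5)
obj = lambda v: (v[0] - 1.2) ** 2 + (v[1] + 0.5) ** 2
best = genetic_minimize(obj, [(-2, 2), (-2, 2)])
```

    Restricting `bounds` is the code-level analogue of the design-space-size control strategies discussed in the paper: a tighter box means fewer generations for the same convergence.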

  6. Modeling of dispersed-drug delivery from planar polymeric systems: optimizing analytical solutions.

    PubMed

    Helbling, Ignacio M; Ibarra, Juan C D; Luna, Julio A; Cabrera, María I; Grau, Ricardo J A

    2010-11-15

    Analytical solutions for the case of controlled dispersed-drug release from planar non-erodible polymeric matrices, based on the Refined Integral Method, are presented. A new adjusting equation is used for the dissolved-drug concentration profile in the depletion zone. The set of equations matches the available exact solution. To illustrate the usefulness of this model, comparisons with experimental profiles reported in the literature are presented. The obtained results show that the model can be employed over a broad range of applicability.

  7. Improving Sample Distribution Homogeneity in Three-Dimensional Microfluidic Paper-Based Analytical Devices by Rational Device Design.

    PubMed

    Morbioli, Giorgio Gianini; Mazzu-Nascimento, Thiago; Milan, Luis Aparecido; Stockton, Amanda M; Carrilho, Emanuel

    2017-05-02

    Paper-based devices are a portable, user-friendly, and affordable technology that is one of the best analytical tools for inexpensive diagnostic devices. Three-dimensional microfluidic paper-based analytical devices (3D-μPADs) are an evolution of single layer devices and they permit effective sample dispersion, individual layer treatment, and multiplex analytical assays. Here, we present the rational design of a wax-printed 3D-μPAD that enables more homogeneous permeation of fluids along the cellulose matrix than other existing designs in the literature. Moreover, we show the importance of the rational design of channels on these devices using glucose oxidase, peroxidase, and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) reactions. We present an alternative method for layer stacking using a magnetic apparatus, which facilitates fluidic dispersion and improves the reproducibility of tests performed on 3D-μPADs. We also provide the optimized designs for printing, facilitating further studies using 3D-μPADs.

  8. Stochastic Optimal Control via Bellman's Principle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Sun, Jian Q.

    2003-01-01

    This paper presents a method for finding optimal controls of nonlinear systems subject to random excitations. The method is capable of generating global control solutions when state and control constraints are present. The solution is global in the sense that controls for all initial conditions in a region of the state space are obtained. The approach is based on Bellman's principle of optimality, the Gaussian closure, and the short-time Gaussian approximation. Examples include a system with a state-dependent diffusion term, a system in which the infinite hierarchy of moment equations cannot be analytically closed, and an impact system with an elastic boundary. The uncontrolled and controlled dynamics are studied by creating a Markov chain with a control-dependent transition probability matrix via the Generalized Cell Mapping method. In this fashion, both the transient and stationary controlled responses are evaluated. The results show excellent control performance.
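
    The cell-mapping idea, a controlled Markov chain over discretized state cells solved via Bellman's principle, can be illustrated on a toy finite chain. This sketch uses plain value iteration with assumed transition matrices and stage costs, not the paper's Gaussian-closure machinery.

```python
import numpy as np

def value_iteration(P, cost, gamma=0.95, tol=1e-8):
    """Bellman value iteration on a finite controlled Markov chain.
    P[a] is the transition matrix under action a; cost[a][s] is the
    stage cost in state s under action a.  Returns the optimal value
    function and a greedy policy (toy analogue of cell mapping)."""
    V = np.zeros(P[0].shape[0])
    while True:
        Q = np.array([cost[a] + gamma * (P[a] @ V) for a in range(len(P))])
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=0)
        V = V_new

# three cells; action 0 does nothing, action 1 drives toward cell 0
P0 = np.eye(3)
P1 = np.array([[1.0, 0.0, 0.0],
               [0.9, 0.1, 0.0],
               [0.0, 0.9, 0.1]])
cost = np.array([[0.0, 1.0, 1.0],    # being away from cell 0 costs 1
                 [0.1, 1.1, 1.1]])   # action 1 adds a control penalty
V, policy = value_iteration([P0, P1], cost)
```

    The resulting policy does nothing in the target cell and applies control elsewhere, the discrete analogue of the constrained global control solutions described in the abstract.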

  9. Design optimization of piezoresistive cantilevers for force sensing in air and water

    PubMed Central

    Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.

    2009-01-01

    Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as the minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to strongly depend on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material which can be modeled in a similar fashion, or extended to other microelectromechanical systems. PMID:19865512

  10. A framework for parallelized efficient global optimization with application to vehicle crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Hamza, Karim; Shalaby, Mohamed

    2014-09-01

    This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing on the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, whereby the generation of infill samples can become a difficult optimization problem in its own right, and also to allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
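
    A minimal EGO loop matching the description above: fit a kriging (noise-free Gaussian-process) surrogate to the samples, then place each infill sample at the maximizer of expected improvement over a candidate grid. The kernel length scale, candidate grid, and toy quadratic objective are assumptions for illustration; the sketch generates one infill point per iteration rather than the parallel batches the article develops.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, ls=0.3, jitter=1e-6):
    """Noise-free kriging predictor with an RBF kernel."""
    K = rbf(X, X, ls) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs, ls)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.array([erf(v / sqrt(2)) for v in z]))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * Phi + sigma * phi

def ego(f, n_init=4, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, n_init)       # initial design samples
    y = f(X)
    cand = np.linspace(0.0, 1.0, 401)       # candidate infill locations
    for _ in range(iters):
        mu, sigma = gp_posterior(X, y, cand)
        ei = expected_improvement(mu, sigma, y.min())
        x_new = cand[int(np.argmax(ei))]    # infill at the EI maximizer
        X = np.append(X, x_new)
        y = np.append(y, f(x_new))
    i = int(np.argmin(y))
    return X[i], y[i]

f = lambda x: (x - 0.654) ** 2              # toy "expensive" objective
best_x, best_y = ego(f)
```

    Maximizing EI over the candidate grid sidesteps the inner optimization difficulty the article highlights; in higher dimensions that inner search is itself nontrivial, which motivates the article's framework.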

  11. Time-optimal thermalization of single-mode Gaussian states

    NASA Astrophysics Data System (ADS)

    Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio

    2014-11-01

    We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state and which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacement of the mode can be performed instantaneously), we study how control can be optimized for speeding up the relaxation towards the fixed point of the dynamics and we analytically derive the optimal relaxation time. Our model has potential and interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.

  12. Structural Model Tuning Capability in an Object-Oriented Multidisciplinary Design, Analysis, and Optimization Tool

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2008-01-01

    Updating a finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the dynamic properties of the structure. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data while the mass matrix is orthogonalized.
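    As a minimal sketch of model tuning posed as an optimization problem (a hypothetical one-degree-of-freedom spring-mass model, not the MDAO tool itself), the code below adjusts a stiffness parameter until the analytical natural frequency matches a measured target:

```python
import math

def model_freq_hz(k, m=2.0):
    """Natural frequency of a hypothetical 1-DOF spring-mass model (Hz)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def tune_stiffness(f_measured, k_lo=1.0, k_hi=1e6, tol=1e-9):
    """Bisection on the (monotone) frequency error to match the measured value."""
    while k_hi - k_lo > tol * k_hi:
        k_mid = 0.5 * (k_lo + k_hi)
        if model_freq_hz(k_mid) < f_measured:
            k_lo = k_mid
        else:
            k_hi = k_mid
    return 0.5 * (k_lo + k_hi)

f_measured = 12.0                                    # Hz, hypothetical test result
k_tuned = tune_stiffness(f_measured)
k_exact = 2.0 * (2.0 * math.pi * f_measured) ** 2    # closed form: k = m (2*pi*f)^2
```

    Real model updating works the same way at scale: many frequencies and mode shapes in the objective, many structural parameters as design variables.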

  14. Performance of local optimization in single-plane fluoroscopic analysis for total knee arthroplasty.

    PubMed

    Prins, A H; Kaptein, B L; Stoel, B C; Lahaye, D J P; Valstar, E R

    2015-11-05

    Fluoroscopy-derived joint kinematics plays an important role in the evaluation of knee prostheses. Fluoroscopic analysis requires estimation of the 3D prosthesis pose from its 2D silhouette in the fluoroscopic image, by optimizing a dissimilarity measure. Currently, extensive user interaction is needed, which makes analysis labor-intensive and operator-dependent. The aim of this study was to review five optimization methods for 3D pose estimation and to assess their performance in finding the correct solution. Two derivative-free optimizers (DHSAnn and IIPM) and three gradient-based optimizers (LevMar, DoNLP2 and IpOpt) were evaluated. For the latter three optimizers two implementations were evaluated: one with a numerically approximated gradient and one with an analytically derived gradient for computational efficiency. On phantom data, all methods were able to find the 3D pose within 1 mm and 1° in more than 85% of cases. IpOpt had the highest success rate: 97%. On clinical data, the success rates were higher than 85% for the in-plane positions, but not for the rotations. IpOpt was the most computationally expensive method, and the use of analytically derived gradients accelerated the gradient-based methods by a factor of 3-4 without any difference in success rate. In conclusion, 85% of the frames in clinical data can be analyzed automatically and only 15% of the frames require manual supervision. The high success rate on phantom data (97% with IpOpt) indicates that even less supervision may become feasible. Copyright © 2015 Elsevier Ltd. All rights reserved.
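    The reported factor 3-4 speedup from analytically derived gradients can be illustrated on a toy dissimilarity measure (an assumed quadratic surface, not the study's silhouette metric): a central-difference gradient costs two extra function evaluations per parameter, while a closed-form gradient costs none.

```python
# Toy "dissimilarity" surface with a known minimum at (1.0, -2.0).
EVALS = {"n": 0}

def dissim(p):
    EVALS["n"] += 1
    x, y = p
    return (x - 1.0) ** 2 + 3.0 * (y + 2.0) ** 2

def grad_analytic(p):
    x, y = p
    return (2.0 * (x - 1.0), 6.0 * (y + 2.0))    # closed form, zero extra evaluations

def grad_numeric(p, h=1e-6):
    g = []
    for i in range(2):
        q1 = list(p); q1[i] += h
        q2 = list(p); q2[i] -= h
        g.append((dissim(q1) - dissim(q2)) / (2.0 * h))  # central diff: 2 evals/axis
    return tuple(g)

def descend(grad, p=(5.0, 5.0), lr=0.1, steps=200):
    for _ in range(steps):
        g = grad(p)
        p = (p[0] - lr * g[0], p[1] - lr * g[1])
    return p

EVALS["n"] = 0
p_num = descend(grad_numeric)
numeric_evals = EVALS["n"]       # 4 evaluations of dissim per gradient call

EVALS["n"] = 0
p_ana = descend(grad_analytic)
analytic_evals = EVALS["n"]      # none: the gradient never calls dissim
```

    With n parameters, central differences cost 2n function evaluations per iteration, which is consistent with the study's observed several-fold acceleration for low-dimensional pose parameters.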

  15. Performance enhancement of Pt/TiO2/Si UV-photodetector by optimizing light trapping capability and interdigitated electrodes geometry

    NASA Astrophysics Data System (ADS)

    Bencherif, H.; Djeffal, F.; Ferhati, H.

    2016-09-01

    This paper presents a hybrid approach based on an analytical and metaheuristic investigation to study the impact of interdigitated electrode engineering on both the speed and the optical performance of an Interdigitated Metal-Semiconductor-Metal Ultraviolet Photodetector (IMSM-UV-PD). In this context, analytical models of the speed and optical performance have been developed and validated against experimental results, where good agreement has been recorded. Moreover, the developed analytical models have been used as objective functions to determine the optimized design parameters, including the interdigit configuration effect, via a Multi-Objective Genetic Algorithm (MOGA). The ultimate goal of the proposed hybrid approach is to identify the optimal design parameters associated with maximum electrical and optical device performance. The optimized IMSM-PD not only reveals superior performance in terms of photocurrent and response time, but also exhibits higher optical reliability against the optical losses caused by active-area shadowing effects. The advantages offered by the proposed design methodology suggest the possibility of overcoming the most challenging problem with the communication speed and power requirements of the UV optical interconnect: high derived current and commutation speed in the UV receiver.

  16. Stochastic optimization for modeling physiological time series: application to the heart rate response to exercise

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Stirling, J. R.

    2007-01-01

    Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
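    A schematic correlation-based update in the spirit of ALOPEX may clarify the method's character (this is a simplified variant, not the exact ALOPEX IV algorithm of the study; the objective function and starting point are hypothetical, not heart rate data):

```python
import random

def alopex_like(f, x0, steps=4000, delta=0.05, seed=1):
    """Correlation-guided stochastic minimization (schematic ALOPEX-style rule):
    each parameter moves against the sign of (dx_i * df) from the previous step,
    plus annealed random noise to escape flat regions and bad cycles."""
    rng = random.Random(seed)
    x = list(x0)
    x_prev, f_prev = list(x), f(x)
    best_x, best_f = list(x), f_prev
    x = [xi + rng.uniform(-delta, delta) for xi in x]   # first move: pure noise
    for t in range(steps):
        fx = f(x)
        if fx < best_f:
            best_x, best_f = list(x), fx
        noise = delta * (1.0 - t / steps)               # annealed noise amplitude
        x_new = []
        for i in range(len(x)):
            corr = (x[i] - x_prev[i]) * (fx - f_prev)   # move/cost correlation
            step = -delta if corr > 0 else delta        # reverse if cost rose
            x_new.append(x[i] + step + rng.gauss(0.0, noise))
        x_prev, f_prev = x, fx
        x = x_new
    return best_x, best_f

# Hypothetical "model fit" objective with optimum at (1.0, -0.5):
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2
x_best, f_best = alopex_like(obj, [4.0, 3.0])
```

    The appeal for model fitting is that only objective values are needed: no gradients of the dynamical-system model with respect to its parameters.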

  17. NHEXAS PHASE I REGION 5 STUDY--METALS IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 1,906 dust samples. Dust samples were collected to assess potential residential sources of dermal and inhalation exposures and to examine relationships between analyte levels in dust and in personal and bioma...

  18. Analytical thermal model for end-pumped solid-state lasers

    NASA Astrophysics Data System (ADS)

    Cini, L.; Mackenzie, J. I.

    2017-12-01

    Fundamentally power-limited by thermal effects, the design challenge for end-pumped "bulk" solid-state lasers depends upon knowledge of the temperature gradients within the gain medium. We have developed analytical expressions that can be used to model the temperature distribution and thermal-lens power in end-pumped solid-state lasers. Enabled by the inclusion of a temperature-dependent thermal conductivity, applicable from cryogenic to elevated temperatures, typical pumping distributions are explored and the results compared with accepted models. Key insights are gained through these analytical expressions, such as the dependence of the peak temperature rise on the boundary thermal conductance to the heat sink. Our generalized expressions provide simple and time-efficient tools for parametric optimization of the heat distribution in the gain medium based upon the material and pumping constraints.

  19. A study of optimization techniques in HDR brachytherapy for the prostate

    NASA Astrophysics Data System (ADS)

    Pokharel, Ghana Shyam

    Several studies carried out thus far favor dose escalation to the prostate gland to achieve better local control of the disease. But the optimal way of delivering higher doses of radiation therapy to the prostate without hurting neighboring critical structures is still debatable. In this study, we proposed that real-time high dose rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precise delivery of such higher doses. This approach to delivery eliminates critical issues such as the treatment setup uncertainties and target localization found in external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema and potential source migration as in permanent interstitial implants. Moreover, recent reports of radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy in the management of prostate cancer. Firstly, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system. Automatic catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and a robust and fast optimization and evaluation engine are some of the essential requirements for such procedures. Moreover, in most of the cases we performed, treatment plan optimization took a significant amount of the overall procedure time. So, making treatment plan optimization automatic or semi-automatic with sufficient speed and accuracy was the goal of the remaining part of the project. Secondly, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We studied a gradient-based deterministic algorithm with dose volume histogram (DVH) and more conventional variance-based objective functions for optimization. In this optimization strategy, the relative weight of a particular objective in the aggregate objective function signifies its importance with respect to the other objectives.

  20. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    PubMed

    Huhn, Carolin; Pyell, Ute

    2008-07-11

    It is investigated whether the relationships derived within a previously developed scheme for optimizing separations in micellar electrokinetic chromatography can be used to model the effective electrophoretic mobilities of analytes that differ strongly in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated by comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict the effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients, provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.

  1. The Role of Nanoparticle Design in Determining Analytical Performance of Lateral Flow Immunoassays.

    PubMed

    Zhan, Li; Guo, Shuang-Zhuang; Song, Fayi; Gong, Yan; Xu, Feng; Boulware, David R; McAlpine, Michael C; Chan, Warren C W; Bischof, John C

    2017-12-13

    Rapid, simple, and cost-effective diagnostics are needed to improve healthcare at the point of care (POC). However, the most widely used POC diagnostic, the lateral flow immunoassay (LFA), is ∼1000-times less sensitive and has a smaller analytical range than laboratory tests, requiring a confirmatory test to establish truly negative results. Here, a rational and systematic strategy is used to design the LFA contrast label (i.e., gold nanoparticles) to improve the analytical sensitivity, analytical detection range, and antigen quantification of LFAs. Specifically, we discovered that the size (30, 60, or 100 nm) of the gold nanoparticles is a main contributor to the LFA analytical performance through both the degree of receptor interaction and the ultimate visual or thermal contrast signals. Using the optimal LFA design, we demonstrated the ability to improve the analytical sensitivity by 256-fold and expand the analytical detection range from 3 log10 to 6 log10 for diagnosing patients with inflammatory conditions by measuring C-reactive protein. This work demonstrates that, with appropriate design of the contrast label, a simple and commonly used diagnostic technology can compete with more expensive state-of-the-art laboratory tests.

  2. Pre-analytical and analytical aspects affecting clinical reliability of plasma glucose results.

    PubMed

    Pasqualetti, Sara; Braga, Federica; Panteghini, Mauro

    2017-07-01

    The measurement of plasma glucose (PG) plays a central role in recognizing disturbances in carbohydrate metabolism, with established decision limits that are globally accepted. This requires that PG results are reliable and unequivocally valid no matter where they are obtained. To control the pre-analytical variability of PG and prevent in vitro glycolysis, the use of citrate as a rapidly effective glycolysis inhibitor has been proposed. However, the commercial availability of several tubes with studies showing different performance has created confusion among users. Moreover, and more importantly, studies have shown that tubes promptly inhibiting glycolysis give PG results that are significantly higher than tubes containing sodium fluoride only, used in the majority of studies generating the current PG cut-points, with a different clinical classification of subjects. From the analytical point of view, to be equivalent among different measuring systems, PG results should be traceable to a recognized higher-order reference via the implementation of an unbroken metrological hierarchy. In doing this, it is important that manufacturers of measuring systems consider the uncertainty accumulated through the different steps of the selected traceability chain. In particular, PG results should fulfil analytical performance specifications defined to fit the intended clinical application. Since PG has tight homeostatic control, its biological variability may be used to define these limits. Alternatively, given the central diagnostic role of the analyte, an outcome model showing the impact of the analytical performance of the test on clinical classifications of subjects can be used. Using these specifications, performance assessment studies employing commutable control materials with values assigned by a reference procedure have shown that the quality of PG measurements is often far from desirable and that problems are exacerbated using point-of-care devices.

  3. Analytic materials

    PubMed Central

    2016-01-01

    The theory of inhomogeneous analytic materials is developed. These are materials where the coefficients entering the equations involve analytic functions. Three types of analytic materials are identified. The first two types involve an integer p. If p takes its maximum value, then we have a complete analytic material. Otherwise, it is an incomplete analytic material of rank p. For two-dimensional materials, further progress can be made in the identification of analytic materials by using the well-known fact that a 90° rotation applied to a divergence-free field in a simply connected domain yields a curl-free field, which can then be expressed as the gradient of a potential. Other exact results for the fields in inhomogeneous media are reviewed. Also reviewed is the subject of metamaterials, as these materials provide a way of realizing desirable coefficients in the equations. PMID:27956882
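    The 90° rotation fact invoked above can be stated compactly in standard vector-calculus notation:

```latex
% Rotating a divergence-free planar field F = (F_1, F_2) by 90 degrees
% gives G = (-F_2, F_1), whose scalar curl vanishes:
\nabla \cdot \mathbf{F} = \partial_x F_1 + \partial_y F_2 = 0
\quad\Longrightarrow\quad
\partial_x G_2 - \partial_y G_1 = \partial_x F_1 + \partial_y F_2 = 0 .
% Hence, in a simply connected domain, G is the gradient of a potential:
\mathbf{G} = \nabla \varphi .
```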

  4. RLV Turbine Performance Optimization

    NASA Technical Reports Server (NTRS)

    Griffin, Lisa W.; Dorney, Daniel J.

    2001-01-01

    A task was developed at NASA/Marshall Space Flight Center (MSFC) to improve turbine aerodynamic performance through the application of advanced design and analysis tools. There are four major objectives of this task: 1) to develop, enhance, and integrate advanced turbine aerodynamic design and analysis tools; 2) to develop the methodology for application of the analytical techniques; 3) to demonstrate the benefits of the advanced turbine design procedure through its application to a relevant turbine design point; and 4) to verify the optimized design and analysis with testing. Final results of the preliminary design and the results of the two-dimensional (2D) detailed design of the first-stage vane of a supersonic turbine suitable for a reusable launch vehicle (RLV) are presented. Analytical techniques for obtaining the results are also discussed.

  5. Influence versus intent for predictive analytics in situation awareness

    NASA Astrophysics Data System (ADS)

    Cui, Biru; Yang, Shanchieh J.; Kadar, Ivan

    2013-05-01

    Predictive analytics in situation awareness requires an element to comprehend and anticipate potential adversary activities that might occur in the future. Most work in high-level fusion or predictive analytics utilizes machine learning, pattern mining, Bayesian inference, and decision tree techniques to predict future actions or states. The emergence of social computing in broader contexts has drawn interest in bringing the hypotheses and techniques from social theory to algorithmic and computational settings for predictive analytics. This paper aims at answering the question of how the influence and attitude (sometimes interpreted as intent) of adversarial actors can be formulated and computed algorithmically, as a higher-level fusion process to provide predictions of future actions. The challenges in this interdisciplinary endeavor include drawing on existing understanding of influence and attitude in both the social science and computing fields, as well as the mathematical and computational formulation for the specific context of the situation to be analyzed. The study of 'influence' has resurfaced in recent years due to the emergence of social networks in the virtualized cyber world. Theoretical analysis and techniques developed in this area are discussed in this paper in the context of predictive analysis. Meanwhile, the notion of intent, or 'attitude' in social theory terminology, is a relatively uncharted area in the computing field. Note that a key objective of predictive analytics is to identify impending/planned attacks so their 'impact' and 'threat' can be prevented. In this spirit, indirect and direct observables are drawn and derived to infer the influence network and attitude to predict future threats. This work proposes an integrated framework that jointly assesses adversarial actors' influence network and their attitudes as a function of past actions and action outcomes. A preliminary set of algorithms are developed and tested using the Global Terrorism

  6. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the values of the model parameters. For this reason, the results should be controlled by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
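    A minimal global-best particle swarm optimizer may help fix ideas (this is the standard inertia-weight form, not the authors' code; the two-parameter misfit below is a toy stand-in for the resistivity inversion, with illustrative values rather than field data):

```python
import random

def pso_minimize(f, dim=2, n=20, iters=150, bounds=(-5.0, 5.0), seed=7):
    """Global-best PSO with inertia weight w and acceleration coefficients c1, c2."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.72, 1.5, 1.5
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]                      # per-particle best positions
    pbest_f = [f(x) for x in xs]
    g = list(pbest[min(range(n), key=pbest_f.__getitem__)])
    g_f = min(pbest_f)                                 # swarm (global) best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (g[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fx
                if fx < g_f:
                    g, g_f = list(xs[i]), fx
    return g, g_f

# Toy misfit between observed and modeled responses, minimized at
# (rho_h, rho_v) = (2.0, 3.0); purely illustrative values:
misfit = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 3.0) ** 2
params, err = pso_minimize(misfit)
```

    In the inversion setting, f would evaluate the forward model (e.g., the digital-filter solution) and return the data misfit for a candidate set of resistivities and thicknesses.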

  7. Cryogenic parallel, single phase flows: an analytical approach

    NASA Astrophysics Data System (ADS)

    Eichhorn, R.

    2017-02-01

    Managing the cryogenic flows inside a state-of-the-art accelerator cryomodule has become a demanding endeavour: In order to build highly efficient modules, all heat transfers are usually intercepted at various temperatures. For a multi-cavity module, operated at 1.8 K, this requires intercepts at 4 K and at 80 K at different locations with sometimes strongly varying heat loads which for simplicity reasons are operated in parallel. This contribution will describe an analytical approach, based on optimization theories.

  8. Layer-switching cost and optimality in information spreading on multiplex networks

    PubMed Central

    Min, Byungjoon; Gwak, Sang-Hwan; Lee, Nanoom; Goh, K. -I.

    2016-01-01

    We study a model of information spreading on multiplex networks, in which agents interact through multiple interaction channels (layers), say online vs. offline communication layers, subject to a layer-switching cost for transmissions across different interaction layers. The model is characterized by a layer-wise path-dependent transmissibility over a contact, determined dynamically by both the incoming and outgoing transmission layers. We formulate an analytical framework to deal with such path-dependent transmissibility and demonstrate the nontrivial interplay between multiplexity and spreading dynamics, including optimality. It is shown that the epidemic threshold and prevalence respond to the layer-switching cost non-monotonically and that the optimal conditions can change in abrupt non-analytic ways, depending also on the densities of the network layers and the type of seed infections. Our results elucidate the essential role of multiplexity: its explicit consideration is crucial for realistic modeling and prediction of spreading phenomena on multiplex social networks in an era of ever-diversifying social interaction layers. PMID:26887527

  9. Analytic solution of field distribution and demagnetization function of ideal hollow cylindrical field source

    NASA Astrophysics Data System (ADS)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-09-01

    The Halbach type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart Law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in HCPMAs by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
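    For orientation, a widely quoted special case of such analytic field expressions (a standard textbook result, assumed here rather than taken from this paper) is the flux density inside an ideal dipole Halbach cylinder with remanence B_r, inner radius r_i, and outer radius r_o:

```latex
% Uniform interior field of an ideal dipole Halbach cylinder:
B = B_r \,\ln\!\left(\frac{r_o}{r_i}\right).
```

    Notably, the interior field can exceed the remanence of the magnet material when r_o/r_i > e, which is part of what makes these arrays attractive as compact field sources.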

  10. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with the analytical model of a composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first order differential equations which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived by using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy called a linearization method is used to optimally design the cure cycle, subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized. Various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  11. An Analytical Study of Prostate-Specific Antigen Dynamics.

    PubMed

    Esteban, Ernesto P; Deliz, Giovanni; Rivera-Rodriguez, Jaileen; Laureano, Stephanie M

    2016-01-01

    The purpose of this research is to carry out a quantitative study of prostate-specific antigen dynamics for patients with prostatic diseases, such as benign prostatic hyperplasia (BPH) and localized prostate cancer (LPC). The proposed PSA mathematical model was implemented using clinical data of 218 Japanese patients with histologically proven BPH and 147 Japanese patients with LPC (stages T2a and T2b). For prostatic diseases (BPH and LPC) a nonlinear equation was obtained and solved in closed form to predict PSA progression with patients' age. The general solution describes PSA dynamics for patients with both diseases, LPC and BPH. Particular solutions allow studying PSA dynamics for patients with BPH or LPC. Analytical solutions have been obtained and solved in closed form to develop nomograms for a better understanding of PSA dynamics in patients with BPH and LPC. This study may be useful to improve the diagnosis and prognosis of prostatic diseases.

  12. Analytical transmission cross-coefficients for pink beam X-ray microscopy based on compound refractive lenses.

    PubMed

    Falch, Ken Vidar; Detlefs, Carsten; Snigirev, Anatoly; Mathiesen, Ragnvald H

    2018-01-01

    Analytical expressions for the transmission cross-coefficients for x-ray microscopes based on compound refractive lenses are derived based on Gaussian approximations of the source shape and energy spectrum. The effects of partial coherence, defocus, beam convergence, as well as lateral and longitudinal chromatic aberrations are accounted for and discussed. Taking the incoherent limit of the transmission cross-coefficients, a compact analytical expression for the modulation transfer function of the system is obtained, and the resulting point, line and edge spread functions are presented. Finally, analytical expressions for optimal numerical aperture, coherence ratio, and bandwidth are given. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Optimism and Cause-Specific Mortality: A Prospective Cohort Study

    PubMed Central

    Kim, Eric S.; Hagan, Kaitlin A.; Grodstein, Francine; DeMeo, Dawn L.; De Vivo, Immaculata; Kubzansky, Laura D.

    2017-01-01

    Growing evidence has linked positive psychological attributes like optimism to a lower risk of poor health outcomes, especially cardiovascular disease. It has been demonstrated in randomized trials that optimism can be learned. If associations between optimism and broader health outcomes are established, it may lead to novel interventions that improve public health and longevity. In the present study, we evaluated the association between optimism and cause-specific mortality in women after considering the role of potential confounding (sociodemographic characteristics, depression) and intermediary (health behaviors, health conditions) variables. We used prospective data from the Nurses’ Health Study (n = 70,021). Dispositional optimism was measured in 2004; all-cause and cause-specific mortality rates were assessed from 2006 to 2012. Using Cox proportional hazard models, we found that a higher degree of optimism was associated with a lower mortality risk. After adjustment for sociodemographic confounders, compared with women in the lowest quartile of optimism, women in the highest quartile had a hazard ratio of 0.71 (95% confidence interval: 0.66, 0.76) for all-cause mortality. Adding health behaviors, health conditions, and depression attenuated but did not eliminate the associations (hazard ratio = 0.91, 95% confidence interval: 0.85, 0.97). Associations were maintained for various causes of death, including cancer, heart disease, stroke, respiratory disease, and infection. Given that optimism was associated with numerous causes of mortality, it may provide a valuable target for new research on strategies to improve health. PMID:27927621
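    The quoted hazard ratios follow the standard Cox proportional hazards form (written here with generic covariates, not the study's exact covariate set):

```latex
h(t \mid \mathbf{x}) = h_0(t)\,\exp\!\left(\boldsymbol{\beta}^{\top}\mathbf{x}\right),
\qquad
\mathrm{HR} = \frac{h(t \mid \mathbf{x}_1)}{h(t \mid \mathbf{x}_0)}
            = \exp\!\left\{\boldsymbol{\beta}^{\top}(\mathbf{x}_1 - \mathbf{x}_0)\right\}.
```

    The baseline hazard h_0(t) cancels in the ratio, which is why a single hazard ratio (e.g., 0.71 for the top vs. bottom optimism quartile) summarizes the comparison under the proportional-hazards assumption.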

  14. Advanced analytical electron microscopy for alkali-ion batteries

    DOE PAGES

    Qian, Danna; Ma, Cheng; Meng, Ying Shirley; ...

    2015-06-26

    Lithium-ion batteries are a leading candidate for electric vehicle and smart grid applications. However, further optimization of the energy/power density, coulombic efficiency and cycle life is still needed, and this requires a thorough understanding of the dynamic evolution of each component and their synergistic behaviors during battery operation. With the capability of resolving structure and chemistry at atomic resolution, advanced analytical transmission electron microscopy (AEM) is an ideal technique for this task. The present review paper focuses on recent contributions of this important technique to the fundamental understanding of the electrochemical processes of battery materials. A detailed review of both static (ex situ) and real-time (in situ) studies will be given, and issues that still need to be addressed will be discussed.

  15. Analytic thinking promotes religious disbelief.

    PubMed

    Gervais, Will M; Norenzayan, Ara

    2012-04-27

    Scientific interest in the cognitive underpinnings of religious belief has grown in recent years. However, to date, little experimental research has focused on the cognitive processes that may promote religious disbelief. The present studies apply a dual-process model of cognitive processing to this problem, testing the hypothesis that analytic processing promotes religious disbelief. Individual differences in the tendency to analytically override initially flawed intuitions in reasoning were associated with increased religious disbelief. Four additional experiments provided evidence of causation, as subtle manipulations known to trigger analytic processing also encouraged religious disbelief. Combined, these studies indicate that analytic processing is one factor (presumably among several) that promotes religious disbelief. Although these findings do not speak directly to conversations about the inherent rationality, value, or truth of religious beliefs, they illuminate one cognitive factor that may influence such discussions.

  16. Particle Swarm Optimization Toolbox

    NASA Technical Reports Server (NTRS)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on the SOPSO and MOPSO; a GA was included mainly for comparison purposes, as the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns, searching the trade space for the optimal solution or the optimal trade among competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both parents, and the algorithm relies on this combination of parental traits to produce solutions that improve on either parent. As the algorithm progresses, individuals that hold these optimal traits emerge as the optimal solutions. All of the optimization algorithms are generically designed to interface with a user-supplied objective function, which serves as a "black box" to the optimizer: its only purpose is to evaluate candidate solutions supplied by the optimizer. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since its specific details are of no concern to the optimizer. These algorithms were originally developed to support entry
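
    The global-best update rule underlying such swarm optimizers can be sketched compactly. The following is a generic, minimal single-objective PSO in Python, not the toolbox's MATLAB implementation; the parameter values (inertia `w`, cognitive/social coefficients `c1`/`c2`) are conventional defaults chosen purely for illustration:

```python
import random

def sopso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds [(lo, hi), ...] with a basic global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull toward pbest + social pull toward gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere test function: known minimum of 0 at the origin
best, best_val = sopso(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

    The "black-box" contract described above is visible here: the optimizer only ever calls `f` on candidate positions, so `f` could equally be a numerical simulation.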

  17. Swarm intelligence metaheuristics for enhanced data analysis and optimization.

    PubMed

    Hanrahan, Grady

    2011-09-21

    The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.

  18. Optimization of supersonic axisymmetric nozzles with a center body for aerospace propulsion

    NASA Astrophysics Data System (ADS)

    Davidenko, D. M.; Eude, Y.; Falempin, F.

    2011-10-01

    This study is aimed at optimization of axisymmetric nozzles with a center body, which are suitable for thrust engines having an annular duct. To determine the flow conditions and nozzle dimensions, the Vinci rocket engine is chosen as a prototype. The nozzle contours are described by 2nd and 3rd order analytical functions and specified by a set of geometrical parameters. A direct optimization method is used to design maximum thrust nozzle contours. During optimization, the flow of multispecies reactive gas is simulated by an Euler code. Several optimized contours have been obtained for the center body diameter ranging from 0.2 to 0.4 m. For these contours, Navier-Stokes (NS) simulations have been performed to take into account viscous effects assuming adiabatic and cooled wall conditions. The paper presents an analysis of factors influencing the nozzle thrust.

  19. Probabilistic Cloning of Three Real States with Optimal Success Probabilities

    NASA Astrophysics Data System (ADS)

    Rui, Pin-shu

    2017-06-01

    We investigate the probabilistic quantum cloning (PQC) of three real states with average probability distribution. To get the analytic forms of the optimal success probabilities we assume that the three states have only two pairwise inner products. Based on the optimal success probabilities, we derive the explicit form of 1 → 2 PQC for cloning three real states. The unitary operation needed in the PQC process is worked out too. The optimal success probabilities are also generalized to the M → N PQC case.

  20. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    PubMed

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

    River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis, and Euclidean distance, and the Hun River was taken as a method validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH₄⁺-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized network could correctly represent the original one. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the method was identified as feasible, efficient, and economical.
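
    As an illustration of how Euclidean distance can expose redundant sampling sites, the sketch below greedily drops one member of the most similar pair of sites until a target count remains. This is a simplification of the paper's optimal partition analysis, and the site names and water-quality vectors are hypothetical:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prune_redundant_sites(profiles, keep):
    """Drop one member of the most similar pair of sites until `keep` remain."""
    sites = dict(profiles)
    while len(sites) > keep:
        names = list(sites)
        pairs = [(p, q) for i, p in enumerate(names) for q in names[i + 1:]]
        a, b = min(pairs, key=lambda pq: euclidean(sites[pq[0]], sites[pq[1]]))
        del sites[b]  # b largely duplicates the information provided by a
    return sorted(sites)

# Hypothetical mean (NH4+-N, COD, BOD) vectors for seven sampling sites
profiles = {
    "S1": (0.5, 20.0, 4.0), "S2": (0.6, 21.0, 4.1), "S3": (2.0, 45.0, 9.0),
    "S4": (2.1, 44.0, 9.2), "S5": (5.0, 80.0, 15.0), "S6": (5.2, 81.0, 15.3),
    "S7": (0.55, 20.5, 4.05),
}
kept = prune_redundant_sites(profiles, keep=3)
```

    On these synthetic profiles the three near-duplicate clusters collapse to one representative each, mirroring the seven-to-three reduction reported above.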

  1. Development of a Suite of Analytical Tools for Energy and Water Infrastructure Knowledge Discovery

    NASA Astrophysics Data System (ADS)

    Morton, A.; Piburn, J.; Stewart, R.; Chandola, V.

    2017-12-01

    Energy and water generation and delivery systems are inherently interconnected. With demand for energy growing, the energy sector is experiencing increasing competition for water. With increasing population and changing environmental, socioeconomic, and demographic scenarios, new technology and investment decisions must be made for optimized and sustainable energy-water resource management. This also requires novel scientific insights into the complex interdependencies of energy-water infrastructures across multiple space and time scales. To address this need, we have developed a suite of analytical tools to support an integrated, data-driven modeling, analysis, and visualization capability for understanding, designing, and developing efficient local and regional practices related to the energy-water nexus. This work reviews the available analytical capabilities along with a series of case studies designed to demonstrate the potential of these tools for illuminating energy-water nexus solutions and supporting strategic (federal) policy decisions.

  2. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating the parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. The approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation, since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way, and interval arithmetic to derive the feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model with several normal distribution functions, including the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that L1-norm minimization provides a more accurate and reliable solution than L2-norm minimization.
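
    The robustness argument for L1-norm over L2-norm minimization can be seen in a toy one-parameter case: when fitting a constant to data, least-squares yields the mean (pulled by outliers) while least-absolute-deviation yields the median. This sketch only illustrates the choice of norm, not the paper's branch-and-bound algorithm:

```python
def l2_fit_constant(ys):
    """Least-squares (L2) constant fit: the minimizer is the mean."""
    return sum(ys) / len(ys)

def l1_fit_constant(ys):
    """Least-absolute-deviation (L1) constant fit: the minimizer is the median."""
    s = sorted(ys)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

data = [1.0, 1.1, 0.9, 1.05, 10.0]  # one gross outlier
l2, l1 = l2_fit_constant(data), l1_fit_constant(data)
```

    The L2 fit is dragged toward the outlier, while the L1 fit stays with the bulk of the data, which is the behavior the BRDF experiments above exploit.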

  3. Structural Design Optimization of Doubly-Fed Induction Generators Using GeneratorSE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Fingersh, Lee J; Dykes, Katherine L

    2017-11-13

    A wind turbine with a larger rotor swept area can generate more electricity; however, this increases costs disproportionately for manufacturing, transportation, and installation. This poster presents analytical models for optimizing doubly-fed induction generators (DFIGs), with the objective of reducing the costs and mass of wind turbine drivetrains. The structural design for the induction machine includes models for the casing, stator, rotor, and high-speed shaft developed within the DFIG module in the National Renewable Energy Laboratory's wind turbine sizing tool, GeneratorSE. The mechanical integrity of the machine is verified by examining stresses, structural deflections, and modal properties. The optimization results are then validated using finite element analysis (FEA). The results suggest that our analytical model correlates with the FEA in some areas, such as radial deflection, differing by less than 20 percent, but the analytical model requires further development for axial deflections, torsional deflections, and stress calculations.

  4. Optimizing the Compressive Strength of Strain-Hardenable Stretch-Formed Microtruss Architectures

    NASA Astrophysics Data System (ADS)

    Yu, Bosco; Abu Samk, Khaled; Hibbard, Glenn D.

    2015-05-01

    The mechanical performance of stretch-formed microtrusses is determined by both the internal strut architecture and the accumulated plastic strain during fabrication. The current study addresses the question of optimization, by taking into consideration the interdependency between fabrication path, material properties and architecture. Low carbon steel (AISI1006) and aluminum (AA3003) material systems were investigated experimentally, with good agreement between measured values and the analytical model. The compressive performance of the microtrusses was then optimized on a minimum weight basis under design constraints such as fixed starting sheet thickness and final microtruss height by satisfying the Karush-Kuhn-Tucker condition. The optimization results were summarized as carpet plots in order to meaningfully visualize the interdependency between architecture, microstructural state, and mechanical performance, enabling material and processing path selection.

  5. Analytic cognitive style, not delusional ideation, predicts data gathering in a large beads task study.

    PubMed

    Ross, Robert M; Pennycook, Gordon; McKay, Ryan; Gervais, Will M; Langdon, Robyn; Coltheart, Max

    2016-07-01

    It has been proposed that deluded and delusion-prone individuals gather less evidence before forming beliefs than those who are not deluded or delusion-prone. The primary source of evidence for this "jumping to conclusions" (JTC) bias is provided by research that utilises the "beads task" data-gathering paradigm. However, the cognitive mechanisms subserving data gathering in this task are poorly understood. In the largest published beads task study to date (n = 558), we examined data gathering in the context of influential dual-process theories of reasoning. Analytic cognitive style (the willingness or disposition to critically evaluate outputs from intuitive processing and engage in effortful analytic processing) predicted data gathering in a non-clinical sample, but delusional ideation did not. The relationship between data gathering and analytic cognitive style suggests that dual-process theories of reasoning can contribute to our understanding of the beads task. It is not clear why delusional ideation was not found to be associated with data gathering or analytic cognitive style.

  6. Phase Transitions in Combinatorial Optimization Problems: Basics, Algorithms and Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2005-10-01

    A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.

  7. Perspectives on Using Video Recordings in Conversation Analytical Studies on Learning in Interaction

    ERIC Educational Resources Information Center

    Rusk, Fredrik; Pörn, Michaela; Sahlström, Fritjof; Slotte-Lüttge, Anna

    2015-01-01

    Video is currently used in many studies to document the interaction in conversation analytical (CA) studies on learning. The discussion on the method used in these studies has primarily focused on the analysis or the data construction, whereas the relation between data construction and analysis is rarely brought to attention. The aim of this…

  8. A thermal vacuum test optimization procedure

    NASA Technical Reports Server (NTRS)

    Kruger, R.; Norris, H. P.

    1979-01-01

    An analytical model was developed that can be used to establish certain parameters of a thermal vacuum environmental test program based on an optimization of program costs. This model takes the form of a computer program that interacts with the user for the input of certain parameters. The program provides the user with a list of pertinent information regarding an optimized test program and graphs of some of the parameters. The model is a first attempt in this area and includes numerous simplifications. It appears useful as a general guide and provides a way of extrapolating past performance to future missions.

  9. Optimization of magnet end-winding geometry

    NASA Astrophysics Data System (ADS)

    Reusch, Michael F.; Weissenburger, Donald W.; Nearing, James C.

    1994-03-01

    A simple, almost entirely analytic, method for the optimization of stress-reduced magnet-end winding paths for ribbon-like superconducting cable is presented. This technique is based on characterization of these paths as developable surfaces, i.e., surfaces whose intrinsic geometry is flat. The method is applicable to winding mandrels of arbitrary geometry. Computational searches for optimal winding paths are easily implemented via the technique. Its application to the end configuration of cylindrical Superconducting Super Collider (SSC)-type magnets is discussed. The method may be useful for other engineering problems involving the placement of thin sheets of material.

  10. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  11. Optimizing piezoelectric receivers for acoustic power transfer applications

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2018-07-01

    In this paper, we aim to optimize piezoelectric plate receivers for acoustic power transfer applications by analyzing the influence of the losses and of the acoustic boundary conditions. We derive the analytic expressions of the efficiency of the receiver with the optimal electric loads attached, and analyze the maximum efficiency value and its frequency with different loss and acoustic boundary conditions. To validate the analytical expressions that we have derived, we perform experiments in water with composite transducers of different filling fractions, and see that a lower acoustic impedance mismatch can compensate the influence of large dielectric and acoustic losses to achieve a good performance. Finally, we briefly compare the advantages and drawbacks of composite transducers and pure PZT (lead zirconate titanate) plates as acoustic power receivers, and conclude that 1–3 composites can achieve similar efficiency values in low power applications due to their adjustable acoustic impedance.

  12. Optimizing liquid effluent monitoring at a large nuclear complex.

    PubMed

    Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M

    2003-12-01

    Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US$223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
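
    A simplified sketch of the kind of exceedance screening described: fit a normal distribution to baseline results and compute the upper-tail probability of crossing a permit limit. This assumes approximate normality and uses synthetic data; the study's actual statistical treatment is more involved:

```python
import math

def exceedance_probability(samples, limit):
    """P(result > permit limit) under a normal distribution fitted to baseline data."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    z = (limit - mean) / sd
    # upper-tail probability of the standard normal at z
    return 0.5 * math.erfc(z / math.sqrt(2))

# Synthetic baseline: 48 monthly results fluctuating far below a limit of 50
baseline = [10 + 0.5 * math.sin(i) + (i % 5) for i in range(48)]
p = exceedance_probability(baseline, limit=50.0)
```

    A vanishingly small tail probability, as here, is the quantitative justification for reducing sampling frequency or dropping an analyte from the routine list.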

  13. Multimedia Analysis plus Visual Analytics = Multimedia Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinchor, Nancy; Thomas, James J.; Wong, Pak C.

    2010-10-01

    Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.

  14. Experimental and analytical studies for the NASA carbon fiber risk assessment

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Various experimental and analytical studies performed for the NASA carbon fiber risk assessment program are described, with emphasis on carbon fiber characteristics, the sensitivity of electrical equipment and components to shorting or arcing by carbon fibers, the attenuation effect of carbon fibers on aircraft landing aids, and the impact of carbon fibers on industrial facilities. A simple method of estimating damage from airborne carbon fibers is presented.

  15. Adequacy of surface analytical tools for studying the tribology of ceramics

    NASA Technical Reports Server (NTRS)

    Sliney, H. E.

    1986-01-01

    Surface analytical tools are very beneficial in tribological studies of ceramics. Traditional methods of optical microscopy, XRD, XRF, and SEM should be combined with newer surface-sensitive techniques, especially AES and XPS. ISS and SIMS can also be useful in providing additional composition details. Tunneling microscopy and electron energy loss spectroscopy are less well-known techniques that may also prove useful.

  16. Analytical Studies of Boundary Layer Generated Aircraft Interior Noise

    NASA Technical Reports Server (NTRS)

    Howe, M. S.; Shah, P. L.

    1997-01-01

    An analysis is made of the "interior noise" produced by high, subsonic turbulent flow over a thin elastic plate partitioned into "panels" by straight edges transverse to the mean flow direction. This configuration models a section of an aircraft fuselage that may be regarded as locally flat. The analytical problem can be solved in closed form to represent the acoustic radiation in terms of prescribed turbulent boundary layer pressure fluctuations. Two cases are considered: (i) the production of sound at an isolated panel edge (i.e., in the approximation in which the correlation between sound and vibrations generated at neighboring edges is neglected), and (ii) the sound generated by a periodic arrangement of identical panels. The latter problem is amenable to exact analytical treatment provided the panel edge conditions are the same for all panels. Detailed predictions of the interior noise depend on a knowledge of the turbulent boundary layer wall pressure spectrum, and are given here in terms of an empirical spectrum proposed by Laganelli and Wolfe. It is expected that these analytical representations of the sound generated by simplified models of fluid-structure interactions can be used to validate more general numerical schemes.

  17. Optimal design application on the advanced aeroelastic rotor blade

    NASA Technical Reports Server (NTRS)

    Wei, F. S.; Jones, R.

    1985-01-01

    The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases obtained from the rotorcraft flight simulation program C81 and the Myklestad mode shape program are analytically determined as a function of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. The method can also be utilized to ascertain, without any additional effort, the effect of a particular cost function composed of several objective functions with different weighting factors for various mission requirements.

  18. Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Köcher, S. S.; Institute of Energy and Climate Research; Heydenreich, T.

    Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.

  19. Experimental and analytical studies of high heat flux components for fusion experimental reactor

    NASA Astrophysics Data System (ADS)

    Araki, Masanori

    1993-03-01

    This report describes experimental and analytical results concerning the development of plasma-facing components for ITER. With respect to developing high-heat-removal structures for the divertor plates, an externally-finned swirl tube was developed based on the results of critical heat flux (CHF) experiments on various tube structures. A burnout heat flux, which also indicates the incident CHF, of 41 ± 1 MW/m² was achieved in the externally-finned swirl tube. The applicability of existing CHF correlations, which are based on uniform heating conditions, was evaluated by comparing CHF experimental data for the smooth and externally-finned tubes under one-sided heating. The experimentally determined CHF data for the straight tube show good agreement with the correlations, whereas no existing correlation is available to predict the CHF of the externally-finned tube. With respect to the evaluation of the bonds between carbon-based material and the heat-sink metal, results of brazing tests were compared with analytical results from a three-dimensional model with temperature-dependent thermal and mechanical properties. The analysis showed that residual stresses from brazing can be estimated from the three directional stress components instead of the equivalent stress value. In the analytical study of separatrix sweeping for effectively reducing surface heat fluxes on the divertor plate, the thermal response of the divertor plate was analyzed under ITER-relevant heat flux conditions and then tested. The results demonstrate that the sweeping technique is very effective in improving the power-handling capability of the divertor plate and that the divertor mock-up withstood a large number of additional cyclic heat loads.

  20. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
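
    A generic firefly algorithm, of the kind applied here to data parameterization, can be sketched as follows. This is a minimal textbook variant run on a test function, not the authors' spline-fitting implementation; `beta0`, `gamma`, and `alpha` are conventional settings chosen for illustration:

```python
import math, random

def firefly_minimize(f, bounds, n=20, iters=100, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=1):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter
    (lower-cost) ones, with attractiveness decaying as exp(-gamma * r^2)."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    cost = [f(x) for x in X]
    for t in range(iters):
        step = alpha * (0.97 ** t)  # slowly shrink the random walk
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # j is brighter, so i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        lo, hi = bounds[d]
                        X[i][d] += (beta * (X[j][d] - X[i][d])
                                    + step * (rng.random() - 0.5) * (hi - lo))
                        X[i][d] = min(max(X[i][d], lo), hi)
                    cost[i] = f(X[i])
    k = min(range(n), key=lambda i: cost[i])
    return X[k], cost[k]

best, best_val = firefly_minimize(lambda x: sum(v * v for v in x),
                                  [(-2.0, 2.0)] * 2)
```

    The distance-dependent attractiveness lets the swarm split across multiple basins, which is one reason the metaheuristic copes with the highly multimodal parameterization landscape described above.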


  2. Performance optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1991-01-01

As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general purpose optimization program CONMIN and approximate analyses. Sensitivity analyses, consisting of derivatives of the objective function and constraints, are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
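The forward-finite-difference sensitivities mentioned above can be sketched as follows; the quadratic `power` objective is a hypothetical stand-in for the hover-power analysis, not the HOVT model:

```python
import numpy as np

def forward_diff_gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x with forward finite differences,
    the kind of sensitivity fed to a gradient-based optimizer like CONMIN."""
    f0 = f(x)
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h          # perturb one design variable at a time
        grad[i] = (f(xp) - f0) / h
    return grad

# Hypothetical smooth objective standing in for the hover-power analysis.
power = lambda x: x[0]**2 + 3.0 * x[1]**2
g = forward_diff_gradient(power, np.array([1.0, 2.0]))
```

Forward differences need one extra function evaluation per design variable, which is why approximate analyses are paired with them in the procedure.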

  3. Analytical method for nitroaromatic explosives in radiologically contaminated soil for ISO/IEC 17025 accreditation

    DOE PAGES

    Boggess, Andrew; Crump, Stephen; Gregory, Clint; ...

    2017-12-06

Here, unique hazards are presented in the analysis of radiologically contaminated samples. Strenuous safety and security precautions must be in place to protect the analyst, laboratory, and instrumentation used to perform analyses. A validated method has been optimized for the analysis of select nitroaromatic explosives and degradative products using gas chromatography/mass spectrometry via sonication extraction of radiologically contaminated soils, for samples requiring ISO/IEC 17025 laboratory conformance. Target analytes included 2-nitrotoluene, 4-nitrotoluene, 2,6-dinitrotoluene, and 2,4,6-trinitrotoluene, as well as the degradative product 4-amino-2,6-dinitrotoluene. Analytes were extracted from soil in methylene chloride by sonication. Administrative and engineering controls, as well as instrument automation and quality control measures, were utilized to minimize potential human exposure to radiation at all times and at all stages of analysis, from receiving through disposition. Though thermal instability increased uncertainties of these selected compounds, a mean lower quantitative limit of 2.37 µg/mL and mean accuracy of 2.3% relative error and 3.1% relative standard deviation were achieved. Quadratic regression was found to be optimal for calibration of all analytes, with compounds of lower hydrophobicity displaying greater parabolic curvature. Blind proficiency testing (PT) of spiked soil samples demonstrated a mean relative error of 9.8%. Matrix spiked analyses of PT samples demonstrated that 99% recovery of target analytes was achieved. To the knowledge of the authors, this represents the first safe, accurate, and reproducible quantitative method for nitroaromatic explosives in soil for specific use on radiologically contaminated samples within the constraints of a nuclear analytical lab.
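The quadratic calibration the authors found optimal can be sketched with numpy; the concentrations and responses below are hypothetical synthetic values, not the paper's data:

```python
import numpy as np

# Hypothetical calibration points (concentration in µg/mL vs. detector
# response); the study found quadratic regression optimal for its analytes.
conc = np.array([2.5, 5.0, 10.0, 20.0, 40.0])
resp = 0.02 * conc**2 + 1.5 * conc + 0.3   # synthetic quadratic response

coeffs = np.polyfit(conc, resp, deg=2)      # a, b, c of a*x^2 + b*x + c

def predict_conc(response, coeffs):
    """Invert the quadratic calibration to report a concentration."""
    a, b, c = coeffs
    roots = np.roots([a, b, c - response])
    real = roots[np.isreal(roots)].real
    return real[real >= 0].min()            # smallest non-negative root
```

Inverting a quadratic calibration requires picking the physically meaningful root, which a linear calibration avoids; that is the trade-off behind the "greater parabolic curvature" remark.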

  4. Analytical method for nitroaromatic explosives in radiologically contaminated soil for ISO/IEC 17025 accreditation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggess, Andrew; Crump, Stephen; Gregory, Clint

Here, unique hazards are presented in the analysis of radiologically contaminated samples. Strenuous safety and security precautions must be in place to protect the analyst, laboratory, and instrumentation used to perform analyses. A validated method has been optimized for the analysis of select nitroaromatic explosives and degradative products using gas chromatography/mass spectrometry via sonication extraction of radiologically contaminated soils, for samples requiring ISO/IEC 17025 laboratory conformance. Target analytes included 2-nitrotoluene, 4-nitrotoluene, 2,6-dinitrotoluene, and 2,4,6-trinitrotoluene, as well as the degradative product 4-amino-2,6-dinitrotoluene. Analytes were extracted from soil in methylene chloride by sonication. Administrative and engineering controls, as well as instrument automation and quality control measures, were utilized to minimize potential human exposure to radiation at all times and at all stages of analysis, from receiving through disposition. Though thermal instability increased uncertainties of these selected compounds, a mean lower quantitative limit of 2.37 µg/mL and mean accuracy of 2.3% relative error and 3.1% relative standard deviation were achieved. Quadratic regression was found to be optimal for calibration of all analytes, with compounds of lower hydrophobicity displaying greater parabolic curvature. Blind proficiency testing (PT) of spiked soil samples demonstrated a mean relative error of 9.8%. Matrix spiked analyses of PT samples demonstrated that 99% recovery of target analytes was achieved. To the knowledge of the authors, this represents the first safe, accurate, and reproducible quantitative method for nitroaromatic explosives in soil for specific use on radiologically contaminated samples within the constraints of a nuclear analytical lab.

  5. Optimization of dual energy contrast enhanced breast tomosynthesis for improved mammographic lesion detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Saunders, R.; Samei, E.; Badea, C.; Yuan, H.; Ghaghada, K.; Qi, Y.; Hedlund, L. W.; Mukundan, S.

    2008-03-01

    Dual-energy contrast-enhanced breast tomosynthesis has been proposed as a technique to improve the detection of early-stage cancer in young, high-risk women. This study focused on optimizing this technique using computer simulations. The computer simulation used analytical calculations to optimize the signal difference to noise ratio (SdNR) of resulting images from such a technique at constant dose. The optimization included the optimal radiographic technique, optimal distribution of dose between the two single-energy projection images, and the optimal weighting factor for the dual energy subtraction. Importantly, the SdNR included both anatomical and quantum noise sources, as dual energy imaging reduces anatomical noise at the expense of increases in quantum noise. Assuming a tungsten anode, the maximum SdNR at constant dose was achieved for a high energy beam at 49 kVp with 92.5 μm copper filtration and a low energy beam at 49 kVp with 95 μm tin filtration. These analytical calculations were followed by Monte Carlo simulations that included the effects of scattered radiation and detector properties. Finally, the feasibility of this technique was tested in a small animal imaging experiment using a novel iodinated liposomal contrast agent. The results illustrated the utility of dual energy imaging and determined the optimal acquisition parameters for this technique. This work was supported in part by grants from the Komen Foundation (PDF55806), the Cancer Research and Prevention Foundation, and the NIH (NCI R21 CA124584-01). CIVM is a NCRR/NCI National Resource under P41-05959/U24-CA092656.
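The role of the dual-energy weighting factor can be illustrated with a toy two-material model: choosing the weight that cancels the tissue term in the log-subtracted signal suppresses anatomical background while leaving an iodine signal. All attenuation coefficients and thicknesses below are hypothetical, not values from the study:

```python
# Toy two-material model: each log-signal is a weighted sum of tissue and
# iodine thickness (coefficients are hypothetical, not from the paper).
mu = {"tissue": (0.50, 0.30),   # (low-energy, high-energy) attenuation
      "iodine": (2.00, 0.80)}

def dual_energy_weight(mu_tissue):
    """Weight w that cancels the tissue (anatomical) background in
    s_DE = s_high - w * s_low."""
    low, high = mu_tissue
    return high / low

w = dual_energy_weight(mu["tissue"])
t_tissue, t_iodine = 3.0, 0.1           # hypothetical thicknesses
s_low  = mu["tissue"][0] * t_tissue + mu["iodine"][0] * t_iodine
s_high = mu["tissue"][1] * t_tissue + mu["iodine"][1] * t_iodine
s_de = s_high - w * s_low               # tissue term cancels, iodine remains
```

The residual signal depends only on the iodine thickness; the quantum-noise penalty of the subtraction is what the SdNR optimization in the abstract balances against this anatomical-noise cancellation.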

  6. Study of monopropellants for electrothermal thrusters: Analytical task summary report

    NASA Technical Reports Server (NTRS)

    Kuenzly, J. D.; Grabbi, R.

    1973-01-01

The feasibility of operating small-thrust-level electrothermal thrusters with monopropellants other than MIL-grade hydrazine is determined. The work scope includes an analytical study, design and fabrication of demonstration thrusters, and an evaluation test program in which monopropellants with freezing points lower than that of MIL-grade hydrazine are evaluated and characterized to determine their applicability to electrothermal thrusters for spacecraft attitude control. Results of propellant chemistry studies and performance analyses indicated that the most promising candidate monopropellants to be investigated are monomethylhydrazine, Aerozine-50, 77% hydrazine-23% hydrazine azide blend, and TRW formulated mixed hydrazine monopropellant (MHM) consisting of 35% hydrazine-50% monomethylhydrazine-15% ammonia.

  7. Exploration Opportunity Search of Near-earth Objects Based on Analytical Gradients

    NASA Astrophysics Data System (ADS)

    Ren, Yuan; Cui, Ping-Yuan; Luan, En-Jie

    2008-07-01

The problem of searching for opportunities for the exploration of near-earth minor objects is investigated. For rendezvous missions, the analytical gradients of the performance index with respect to the free parameters are derived using the variational calculus and the theory of the state-transition matrix. After randomly generating initial guesses in the search space, the performance index is optimized, guided by the analytical gradients, leading to the local minimum points representing the potential launch opportunities. This method not only keeps the global-search property of the traditional method, but also avoids the blind search of the latter, thereby greatly increasing computing speed. Furthermore, with this method the search precision can be controlled effectively.
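The strategy of random initial guesses refined by gradient information can be sketched as follows; the objective, bounds, and step size are illustrative assumptions, not the paper's performance index:

```python
import numpy as np

def multi_start_minimize(f, grad, bounds, n_starts=20, lr=0.05, iters=300, seed=0):
    """Random initial guesses refined by gradient descent; the distinct local
    minima are the candidate 'opportunities' (mirrors the abstract's strategy
    of a randomized global search guided by analytical gradients)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    candidates = []
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        for _ in range(iters):
            x = np.clip(x - lr * grad(x), lo, hi)   # gradient-guided refinement
        candidates.append((f(x), x))
    return sorted(candidates, key=lambda p: p[0])   # best minima first

# Toy multimodal index with known minima near x = +/-1 (illustrative only).
f = lambda x: float((x[0]**2 - 1.0)**2)
g = lambda x: np.array([4.0 * x[0] * (x[0]**2 - 1.0)])
results = multi_start_minimize(f, g, bounds=([-2.0], [2.0]))
```

Each start converges to a nearby local minimum, so the sorted list enumerates candidate opportunities rather than a single global answer.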

  8. Development of an Analytical Method for Dibutyl Phthalate Determination Using Surrogate Analyte Approach

    PubMed Central

    Farzanehfar, Vahid; Faizi, Mehrdad; Naderi, Nima; Kobarfard, Farzad

    2017-01-01

    Dibutyl phthalate (DBP) is a phthalic acid ester and is widely used in polymeric products to make them more flexible. DBP is found in almost every plastic material and is believed to be persistent in the environment. Various analytical methods have been used to measure DBP in different matrices. Considering the ubiquitous nature of DBP, the most important challenge in DBP analyses is the contamination of even analytical grade organic solvents with this compound and lack of availability of a true blank matrix to construct the calibration line. Standard addition method or using artificial matrices reduce the precision and accuracy of the results. In this study a surrogate analyte approach that is based on using deuterium labeled analyte (DBP-d4) to construct the calibration line was applied to determine DBP in hexane samples. PMID:28496469

  9. Analyzing Security Breaches in the U.S.: A Business Analytics Case-Study

    ERIC Educational Resources Information Center

    Parks, Rachida F.; Adams, Lascelles

    2016-01-01

    This is a real-world applicable case-study and includes background information, functional organization requirements, and real data. Business analytics has been defined as the technologies, skills, and practices needed to iteratively investigate historical performance to gain insight or spot trends. You are asked to utilize/apply critical thinking…

  10. Cogeneration Technology Alternatives Study (CTAS). Volume 2: Analytical approach

    NASA Technical Reports Server (NTRS)

    Gerlaugh, H. E.; Hall, E. W.; Brown, D. H.; Priestley, R. R.; Knightly, W. F.

    1980-01-01

    The use of various advanced energy conversion systems were compared with each other and with current technology systems for their savings in fuel energy, costs, and emissions in individual plants and on a national level. The ground rules established by NASA and assumptions made by the General Electric Company in performing this cogeneration technology alternatives study are presented. The analytical methodology employed is described in detail and is illustrated with numerical examples together with a description of the computer program used in calculating over 7000 energy conversion system-industrial process applications. For Vol. 1, see 80N24797.

  11. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains the preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme for reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
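The AHP step of the analytical module can be sketched as a principal-eigenvector computation over a pairwise-comparison matrix; the judgement below (flood depth three times as important as flood duration) is a hypothetical example, not the paper's weighting:

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights for an AHP pairwise-comparison matrix,
    of the kind used to rank flooding nodes by depth and duration."""
    M = np.asarray(pairwise, float)
    vals, vecs = np.linalg.eig(M)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)  # dominant eigenvector
    return w / w.sum()                              # normalize to weights

# Hypothetical judgement: flood depth 3x as important as flood duration.
w = ahp_weights([[1.0, 3.0],
                 [1.0 / 3.0, 1.0]])
```

For a consistent 2x2 judgement matrix the weights follow the stated ratio exactly, here 0.75 versus 0.25; the ranked node scores then seed the preliminary scheme that the GPS iteration refines.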

  12. Multivariate optimization and validation of an analytical methodology by RP-HPLC for the determination of losartan potassium in capsules.

    PubMed

    Bonfilio, Rudy; Tarley, César Ricardo Teixeira; Pereira, Gislaine Ribeiro; Salgado, Hérida Regina Nunes; de Araújo, Magali Benjamim

    2009-11-15

This paper describes the optimization and validation of an analytical methodology for the determination of losartan potassium in capsules by HPLC using 2^(5-1) fractional factorial and Doehlert designs. This multivariate approach allows a considerable improvement in chromatographic performance using fewer experiments, without additional cost for columns or other equipment. The HPLC method utilized potassium phosphate buffer (pH 6.2; 58 mmol L^-1)-acetonitrile (65:35, v/v) as the mobile phase, pumped at a flow rate of 1.0 mL min^-1. An octylsilane column (100 mm × 4.6 mm i.d., 5 µm) maintained at 35 °C was used as the stationary phase. UV detection was performed at 254 nm. The method was validated according to the ICH guidelines, showing accuracy, precision (intra-day and inter-day relative standard deviation (R.S.D.) values < 2.0%), selectivity, robustness and linearity (r = 0.9998) over a concentration range from 30 to 70 mg L^-1 of losartan potassium. The limits of detection and quantification were 0.114 and 0.420 mg L^-1, respectively. The validated method may be used to quantify losartan potassium in capsules and to determine the stability of this drug.
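A 2^(5-1) fractional factorial design of the kind used here can be generated from its defining relation; this sketch assumes the common generator E = ABCD, which the abstract does not specify:

```python
from itertools import product

def fractional_factorial_2_5_1():
    """Generate the 16 runs of a 2^(5-1) screening design with the assumed
    generator E = ABCD (defining relation I = ABCDE). Each run is a tuple
    of coded factor levels (-1 = low, +1 = high)."""
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        runs.append((a, b, c, d, a * b * c * d))   # fifth factor aliased to ABCD
    return runs

design = fractional_factorial_2_5_1()
```

Sixteen runs screen five chromatographic factors instead of the 32 a full factorial would need, which is the "fewer experiments" saving the abstract claims before the Doehlert design refines the surviving factors.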

  13. Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.

    PubMed

    Stolper, Charles D; Perer, Adam; Gotz, David

    2014-12-01

As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
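The core idea of exposing meaningful partial results can be sketched with a generator that yields running estimates at checkpoints; this is a deliberately simple stand-in for the adapted algorithms in Progressive Insights, not the system's actual code:

```python
import random

def progressive_mean(stream, checkpoint=1000):
    """Yield running estimates so an analyst can inspect partial results
    and stop or steer the computation early (progressive paradigm sketch)."""
    total, n = 0.0, 0
    for x in stream:
        total += x
        n += 1
        if n % checkpoint == 0:
            yield n, total / n          # partial result at each checkpoint
    if n % checkpoint:
        yield n, total / n              # final result if not on a checkpoint

rng = random.Random(0)
estimates = list(progressive_mean(rng.uniform(0, 1) for _ in range(5000)))
```

Because the generator yields at checkpoints instead of returning once at the end, a visualization layer can redraw after each yield and the analyst can abandon or re-parameterize the run without waiting for completion.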

  14. Optimizing the wireless power transfer over MIMO Channels

    NASA Astrophysics Data System (ADS)

    Wiedmann, Karsten; Weber, Tobias

    2017-09-01

    In this paper, the optimization of the power transfer over wireless channels having multiple-inputs and multiple-outputs (MIMO) is studied. Therefore, the transmitter, the receiver and the MIMO channel are modeled as multiports. The power transfer efficiency is described by a Rayleigh quotient, which is a function of the channel's scattering parameters and the incident waves from both transmitter and receiver side. This way, the power transfer efficiency can be maximized analytically by solving a generalized eigenvalue problem, which is deduced from the Rayleigh quotient. As a result, the maximum power transfer efficiency achievable over a given MIMO channel is obtained. This maximum can be used as a performance bound in order to benchmark wireless power transfer systems. Furthermore, the optimal operating point which achieves this maximum will be obtained. The optimal operating point will be described by the complex amplitudes of the optimal incident and reflected waves of the MIMO channel. This supports the design of the optimal transmitter and receiver multiports. The proposed method applies for arbitrary MIMO channels, taking transmitter-side and/or receiver-side cross-couplings in both near- and farfield scenarios into consideration. Special cases are briefly discussed in this paper in order to illustrate the method.
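Maximizing a Rayleigh quotient x^H A x / x^H B x reduces to the generalized eigenvalue problem the abstract deduces. A numpy-only sketch using a whitening transform in place of a dedicated generalized eigensolver; the 2x2 matrices are hypothetical, not a real channel's scattering data:

```python
import numpy as np

def max_rayleigh(A, B):
    """Maximize x^H A x / x^H B x by whitening with B^{-1/2} and taking the
    top eigenpair (a numpy-only stand-in for a generalized eigensolver;
    A Hermitian, B Hermitian positive definite)."""
    vals_b, vecs_b = np.linalg.eigh(B)
    B_inv_half = vecs_b @ np.diag(1.0 / np.sqrt(vals_b)) @ vecs_b.conj().T
    vals, vecs = np.linalg.eigh(B_inv_half @ A @ B_inv_half)
    x = B_inv_half @ vecs[:, -1]        # back-transform the top eigenvector
    return vals[-1], x                  # (max quotient, optimal excitation)

# Hypothetical 2x2 'efficiency' matrices (Hermitian, B positive definite).
A = np.array([[0.5, 0.2], [0.2, 0.1]])
B = np.eye(2)
eta_max, x_opt = max_rayleigh(A, B)
```

The returned eigenvalue is the maximum transfer efficiency achievable over the channel, and the eigenvector gives the optimal incident-wave amplitudes, matching the "performance bound plus operating point" reading of the abstract.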

  15. Optimization of Multiple Pathogen Detection Using the TaqMan Array Card: Application for a Population-Based Study of Neonatal Infection

    PubMed Central

    Diaz, Maureen H.; Waller, Jessica L.; Napoliello, Rebecca A.; Islam, Md. Shahidul; Wolff, Bernard J.; Burken, Daniel J.; Holden, Rhiannon L.; Srinivasan, Velusamy; Arvay, Melissa; McGee, Lesley; Oberste, M. Steven; Whitney, Cynthia G.; Schrag, Stephanie J.; Winchell, Jonas M.; Saha, Samir K.

    2013-01-01

    Identification of etiology remains a significant challenge in the diagnosis of infectious diseases, particularly in resource-poor settings. Viral, bacterial, and fungal pathogens, as well as parasites, play a role for many syndromes, and optimizing a single diagnostic system to detect a range of pathogens is challenging. The TaqMan Array Card (TAC) is a multiple-pathogen detection method that has previously been identified as a valuable technique for determining etiology of infections and holds promise for expanded use in clinical microbiology laboratories and surveillance studies. We selected TAC for use in the Aetiology of Neonatal Infection in South Asia (ANISA) study for identifying etiologies of severe disease in neonates in Bangladesh, India, and Pakistan. Here we report optimization of TAC to improve pathogen detection and overcome technical challenges associated with use of this technology in a large-scale surveillance study. Specifically, we increased the number of assay replicates, implemented a more robust RT-qPCR enzyme formulation, and adopted a more efficient method for extraction of total nucleic acid from blood specimens. We also report the development and analytical validation of ten new assays for use in the ANISA study. Based on these data, we revised the study-specific TACs for detection of 22 pathogens in NP/OP swabs and 12 pathogens in blood specimens as well as two control reactions (internal positive control and human nucleic acid control) for each specimen type. The cumulative improvements realized through these optimization studies will benefit ANISA and perhaps other studies utilizing multiple-pathogen detection approaches. These lessons may also contribute to the expansion of TAC technology to the clinical setting. PMID:23805203

  16. Singular perturbation analysis of AOTV-related trajectory optimization problems

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1990-01-01

    The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date is found to be analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. A first order state variable inequality

  17. Direct Multiple Shooting Optimization with Variable Problem Parameters

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan J.; Ocampo, Cesar A.

    2009-01-01

    Taking advantage of a novel approach to the design of the orbital transfer optimization problem and advanced non-linear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. This method treats the fixed known gravitational constants as optimization variables in order to reduce the need for an advanced initial guess. Complex periodic orbits are targeted with very simple guesses and the ability to find optimal transfers in spite of these bad guesses is successfully demonstrated. Impulsive transfers are considered for orbits in both the 2-body frame as well as the circular restricted three-body problem (CRTBP). The results with this new approach demonstrate the potential for increasing robustness for all types of orbit transfer problems.

  18. Analytical and experimental study of the acoustics and the flow field characteristics of cavitating self-resonating water jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chahine, G.L.; Genoux, P.F.; Johnson, V.E. Jr.

    1984-09-01

Waterjet nozzles (STRATOJETS) have been developed which achieve passive structuring of cavitating submerged jets into discrete ring vortices, and which possess cavitation incipient numbers six times higher than obtained with conventional cavitating jet nozzles. In this study we developed analytical and numerical techniques and conducted experimental work to gain an understanding of the basic phenomena involved. The achievements are: (1) a thorough analysis of the acoustic dynamics of the feed pipe to the nozzle; (2) a theory for bubble ring growth and collapse; (3) a numerical model for jet simulation; (4) an experimental observation and analysis of candidate second-generation low-sigma STRATOJETS. From this study we can conclude that intensification of bubble ring collapse and design of highly resonant feed tubes can lead to improved drilling rates. The models here described are excellent tools to analyze the various parameters needed for STRATOJET optimizations. Further analysis is needed to introduce such important factors as viscosity, nozzle-jet interaction, and ring-target interaction, and to develop the jet simulation model to describe the important fine details of the flow field at the nozzle exit.

  19. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small-dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation of a vertically launched 2-stage heavy-lift capacity vehicle ascending to a low earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  20. Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention

    PubMed Central

    Fisher, Brian; Smith, Jennifer; Pike, Ian

    2017-01-01

    Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology—group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders’ observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of ‘common ground’ among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders’ verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve ‘common ground’ among diverse stakeholders about health data and their implications. PMID:28895928

  1. Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention.

    PubMed

    Al-Hajj, Samar; Fisher, Brian; Smith, Jennifer; Pike, Ian

    2017-09-12

Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology, group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders' observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of 'common ground' among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders' verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve 'common ground' among diverse stakeholders about health data and their implications.

  2. A novel high-performance self-powered ultraviolet photodetector: Concept, analytical modeling and analysis

    NASA Astrophysics Data System (ADS)

    Ferhati, H.; Djeffal, F.

    2017-12-01

In this paper, a new MSM-UV-photodetector (PD) based on a dual wide band-gap material (DM) engineering approach is proposed to achieve a high-performance self-powered device. Comprehensive analytical models for the proposed sensor photocurrent and the device properties are developed incorporating the impact of the DM aspect on the device photoelectrical behavior. The obtained results are validated against numerical data from commercial TCAD software. Our investigation demonstrates that the adopted design amendment modulates the electric field in the device, which provides the possibility to drive appropriate photo-generated carriers without an externally applied voltage. This phenomenon achieves the dual role of effective carrier separation and an efficient reduction of the dark current. Moreover, a new hybrid approach based on analytical modeling and Particle Swarm Optimization (PSO) is proposed to achieve improved photoelectric behavior at zero bias that can ensure a favorable self-powered MSM-based UV-PD. It is found that the proposed design methodology has succeeded in identifying an optimized design that offers a self-powered device with high responsivity (98 mA/W) and a superior I_ON/I_OFF ratio (480 dB). These results make the optimized MSM-UV-DM-PD suitable for providing low cost self-powered devices for high-performance optical communication and monitoring applications.

  3. Structural degradation of Thar lignite using MW1 fungal isolate: optimization studies

    USGS Publications Warehouse

    Haider, Rizwan; Ghauri, Muhammad A.; Jones, Elizabeth J.; Orem, William H.; SanFilipo, John R.

    2015-01-01

Biological degradation of low-rank coals, particularly degradation mediated by fungi, can play an important role in helping us to utilize neglected lignite resources for both fuel and non-fuel applications. Fungal degradation of low-rank coals has already been investigated for the extraction of soil-conditioning agents and of substrates that could be subjected to subsequent processing for the generation of alternative fuel options, like methane. However, to achieve an efficient degradation process, the fungal isolates must originate from an appropriate coal environment and the degradation process must be optimized. With this in mind, a representative sample from the Thar coalfield (the largest lignite resource of Pakistan) was treated with a fungal strain, MW1, which was previously isolated from a drilled core coal sample. The treatment caused the liberation of organic fractions from the structural matrix of the coal. Fungal degradation was optimized, showing significant release of organics at a 0.1% glucose concentration and a 1% coal loading ratio after an incubation time of 7 days. Analytical investigations revealed the release of complex organic moieties, pertaining to polyaromatic hydrocarbons, and also helped in predicting structural units present within the structure of the coal. Such isolates, with enhanced degradation capabilities, can help in exploiting coal's potential as a chemical feedstock.

  4. An optimization framework for measuring spatial access over healthcare networks.

    PubMed

    Li, Zihao; Serban, Nicoleta; Swann, Julie L

    2015-07-17

    Measurement of healthcare spatial access over a network involves accounting for demand, supply, and network structure. Popular approaches are based on floating catchment areas; however, these methods can overestimate demand over the network and fail to capture cascading effects across the system. Optimization is presented as a framework to measure spatial access, and questions about when and why optimization should be used are addressed. The accuracy of the optimization models relative to the two-step floating catchment area method and its variations is demonstrated analytically, and a case study of specialty care for Cystic Fibrosis over the continental United States is used to compare the approaches. The optimization models capture a patient's experience rather than their opportunities and avoid overestimating patient demand. They can also capture system effects due to congestion. Furthermore, the optimization models provide more elements of access than traditional catchment methods. Optimization models can incorporate user choice and other variations, and they can be useful for targeting interventions to improve access. They can easily be adapted to measure access for different types of patients, over different provider types, or under capacity constraints in the network. Moreover, optimization models can capture differences in access between rural and urban areas.
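
For reference, the two-step floating catchment area (2SFCA) baseline that the optimization models are compared against can be sketched as follows; the provider sites, populations, and catchment sets are invented toy data:

```python
def two_step_fca(providers, demands, reachable):
    """Two-step floating catchment area (2SFCA) accessibility scores.

    providers: {site: capacity}; demands: {location: population};
    reachable: set of (location, site) pairs within the travel-time
    threshold that defines each catchment.
    """
    # Step 1: provider-to-population ratio within each site's catchment.
    # Populations inside several catchments are counted once per site,
    # which is the demand overestimation the paper criticizes.
    ratio = {}
    for j, cap in providers.items():
        pop = sum(p for i, p in demands.items() if (i, j) in reachable)
        ratio[j] = cap / pop if pop else 0.0
    # Step 2: each demand location sums the ratios of all reachable sites.
    return {i: sum(r for j, r in ratio.items() if (i, j) in reachable)
            for i in demands}

providers = {"A": 10, "B": 5}          # e.g., physicians at each site
demands = {"x": 100, "y": 50, "z": 50}  # population at each location
reachable = {("x", "A"), ("y", "A"), ("y", "B"), ("z", "B")}
access = two_step_fca(providers, demands, reachable)
```

An optimization-based measure would instead assign patients to providers subject to capacity, so each person is counted once; the catchment scores above are opportunity-based, not assignment-based.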

  5. Rational selection of substrates to improve color intensity and uniformity on microfluidic paper-based analytical devices.

    PubMed

    Evans, Elizabeth; Gabriel, Ellen Flávia Moreira; Coltro, Wendell Karlos Tomazelli; Garcia, Carlos D

    2014-05-07

    A systematic investigation was conducted to study the effect of paper type on the analytical performance of a series of microfluidic paper-based analytical devices (μPADs) fabricated using a CO2 laser engraver. Samples included three different grades of Whatman chromatography paper, and three grades of Whatman filter paper. According to the data collected and the characterization performed, different papers offer a wide range of flow rate, thickness, and pore size. After optimizing the channel widths on the μPAD, the focus of this study was directed towards the color intensity and color uniformity formed during a colorimetric enzymatic reaction. According to the results herein described, the type of paper and the volume of reagents dispensed in each detection zone can determine the color intensity and uniformity. Therefore, the objective of this communication is to provide rational guidelines for the selection of paper substrates for the fabrication of μPADs.

  6. Experimental and Analytical Study of Erosive Burning of Solid Propellants

    DTIC Science & Technology

    1981-06-01

    Experimental and analytical modeling studies of the erosive burning of solid propellants were conducted at Atlantic Research. Approved for public release IAW AFR 190-12; distribution is unlimited. Stated tasks include extending the erosive burning model from flat-plate geometry to axisymmetric flow, and validating the 2-D model of erosive burning by experiment.

  7. Optimal thrust level for orbit insertion

    NASA Astrophysics Data System (ADS)

    Cerf, Max

    2017-07-01

    The minimum-fuel orbital transfer is analyzed in the case of a launcher upper stage using a continuously thrusting engine. The thrust level is assumed to be constant, and its value is optimized together with the thrust direction. A closed-loop solution for the thrust direction is derived from the extremal analysis for a planar orbital transfer. The optimal control problem reduces to two unknowns, namely the thrust level and the final time. Guessing and propagating the costates is no longer necessary, and the optimal trajectory is easily found from a rough initialization. On the other hand, the initial costates are assessed analytically from the initial conditions, and they can be used as an initial guess for transfers at different thrust levels. The method is exemplified on a launcher upper stage targeting a geostationary transfer orbit.

  8. Optimal Control of Connected and Automated Vehicles at Roundabouts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Liuhui; Malikopoulos, Andreas; Rios-Torres, Jackeline

    Connectivity and automation in vehicles provide an intriguing opportunity for enabling users to better monitor transportation network conditions and make better operating decisions to improve safety and reduce pollution, energy consumption, and travel delays. This study investigates the implications of optimally coordinating vehicles that are wirelessly connected to each other and to infrastructure in roundabouts, to achieve smooth traffic flow without stop-and-go driving. We apply an optimization framework and an analytical solution that allows optimal coordination of vehicles for merging in such traffic scenarios. The effectiveness of the proposed approach is validated through simulation, and it is shown that coordination of vehicles can reduce total travel time by 3-49% and fuel consumption by 2-27% across different traffic levels. In addition, network throughput is improved by up to 25% due to the elimination of stop-and-go driving behavior.

  9. A multi-band semi-analytical algorithm for estimating chlorophyll-a concentration in the Yellow River Estuary, China.

    PubMed

    Chen, Jun; Quan, Wenting; Cui, Tingwei

    2015-01-01

    In this study, two existing semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three algorithms, the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated against a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. A comparison of their accuracies showed that the UMSA algorithm outperformed both TSA and FSA. Retrieving Chla concentration in the Yellow River Estuary with the UMSA algorithm reduced the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm, a substantial improvement on previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely specific forms of the UMSA algorithm. Owing to this, if the same bands were used for the TSA and UMSA algorithms, or for the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be expected if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
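
The three-band form underlying such algorithms (after Gitelson et al.) can be sketched as below; the band roles are standard for turbid waters, but the reflectance values and the linear calibration are illustrative, not the paper's calibrated coefficients:

```python
def three_band_index(r1, r2, r3):
    """Three-band semi-analytical index: (1/R(l1) - 1/R(l2)) * R(l3).

    R(l1): band maximally sensitive to Chla absorption (~660-670 nm),
    R(l2): band minimizing absorption by other constituents (~700 nm),
    R(l3): NIR band accounting for backscattering (~740-750 nm).
    """
    return (1.0 / r1 - 1.0 / r2) * r3

def calibrate(indices, chla):
    """Ordinary least-squares line chla ~ a * index + b (illustrative)."""
    n = len(indices)
    mx = sum(indices) / n
    my = sum(chla) / n
    a = sum((x - mx) * (y - my) for x, y in zip(indices, chla)) / \
        sum((x - mx) ** 2 for x in indices)
    return a, my - a * mx

idx = three_band_index(0.02, 0.05, 0.04)          # toy reflectances
a, b = calibrate([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])  # toy pairs
```

A "unified" multi-band form generalizes this by letting the set of bands (and their number) vary, which is why TSA and FSA fall out as special cases.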

  10. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with sampling and segmentation may be significant factors in the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), with a focus on assessing both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV ≤ 20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7 fold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in setting the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
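
The subtraction of the analytical component from the duplicate-based total variation follows the usual quadrature rule for independent variance components; a sketch with numbers in the reported ranges (not the study's actual data):

```python
import math

def pre_analytical_cv(cv_total, cv_analytical):
    """Pre-analytical CV obtained by subtracting the analytical component
    in quadrature, assuming independence: CV_T^2 = CV_A^2 + CV_P^2."""
    return math.sqrt(cv_total ** 2 - cv_analytical ** 2)

def expanded_interval_halfwidth(cv_total, k=2.0):
    """Half-width of the ~95% uncertainty interval, +/- k * CV_T (k ~ 2)."""
    return k * cv_total

# Illustrative values within the ranges quoted in the abstract:
cv_p = pre_analytical_cv(cv_total=0.30, cv_analytical=0.10)
halfwidth = expanded_interval_halfwidth(0.30)
```

Because the components add in quadrature, a pre-analytical CV several times the analytical CV dominates the budget almost entirely, which is the study's central point.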

  11. Optimism and Cause-Specific Mortality: A Prospective Cohort Study.

    PubMed

    Kim, Eric S; Hagan, Kaitlin A; Grodstein, Francine; DeMeo, Dawn L; De Vivo, Immaculata; Kubzansky, Laura D

    2017-01-01

    Growing evidence has linked positive psychological attributes like optimism to a lower risk of poor health outcomes, especially cardiovascular disease. It has been demonstrated in randomized trials that optimism can be learned. If associations between optimism and broader health outcomes are established, it may lead to novel interventions that improve public health and longevity. In the present study, we evaluated the association between optimism and cause-specific mortality in women after considering the role of potential confounding (sociodemographic characteristics, depression) and intermediary (health behaviors, health conditions) variables. We used prospective data from the Nurses' Health Study (n = 70,021). Dispositional optimism was measured in 2004; all-cause and cause-specific mortality rates were assessed from 2006 to 2012. Using Cox proportional hazards models, we found that a higher degree of optimism was associated with a lower mortality risk. After adjustment for sociodemographic confounders, compared with women in the lowest quartile of optimism, women in the highest quartile had a hazard ratio of 0.71 (95% confidence interval: 0.66, 0.76) for all-cause mortality. Adding health behaviors, health conditions, and depression attenuated but did not eliminate the associations (hazard ratio = 0.91, 95% confidence interval: 0.85, 0.97). Associations were maintained for various causes of death, including cancer, heart disease, stroke, respiratory disease, and infection. Given that optimism was associated with numerous causes of mortality, it may provide a valuable target for new research on strategies to improve health. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Let's Talk... Analytics

    ERIC Educational Resources Information Center

    Oblinger, Diana G.

    2012-01-01

    Talk about analytics seems to be everywhere. Everyone is talking about analytics. Yet even with all the talk, many in higher education have questions about--and objections to--using analytics in colleges and universities. In this article, the author explores the use of analytics in, and all around, higher education. (Contains 1 note.)

  13. Parametric Study and Multi-Criteria Optimization in Laser Cladding by a High Power Direct Diode Laser

    NASA Astrophysics Data System (ADS)

    Farahmand, Parisa; Kovacevic, Radovan

    2014-12-01

    In laser cladding, the performance of deposited layers subjected to severe working conditions (e.g., wear and high temperatures) depends on the mechanical properties, the metallurgical bond to the substrate, and the percentage of dilution. The clad geometry and mechanical characteristics of the deposited layer are influenced greatly by the type of laser used as a heat source and by the process parameters. The quality of coatings fabricated by laser cladding and the efficiency of the process have improved thanks to the development of high-power diode lasers, with power up to 10 kW. In this study, laser cladding by a high-power direct diode laser (HPDDL) as a new heat source was investigated in detail. A high-alloy tool steel (AISI H13) feedstock was deposited on mild steel (ASTM A36) by an HPDDL with power up to 8 kW and a newly designed lateral feeding nozzle. The influences of the main process parameters (laser power, powder flow rate, and scanning speed) on the clad-bead geometry (specifically layer height and depth of the heat-affected zone) and on clad microhardness were studied. Multiple regression analysis was used to develop analytical models for the desired output properties as functions of the input process parameters. Analysis of variance (ANOVA) was applied to check the accuracy of the developed models. The response surface methodology (RSM) and desirability function were used for multi-criteria optimization of the cladding process. In order to investigate the effect of process parameters on the molten pool evolution, in-situ monitoring was utilized. Finally, validation results for the optimized process conditions show that the predictions were in good agreement with measured values. The multi-criteria optimization makes it possible to acquire an efficient process for combined control of clad geometrical and mechanical characteristics.
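
The desirability-function step of such a multi-criteria optimization can be sketched as below; the responses, targets, and limits are invented for illustration (Derringer-Suich-style one-sided desirabilities combined by a geometric mean):

```python
def desirability_max(y, low, high):
    """Larger-the-better desirability: 0 at/below `low`, 1 at/above `high`,
    linear in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; any d = 0 vetoes the
    whole design point, which is the point of the geometric mean."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Toy responses (assumed, not the paper's data): clad height in mm and
# microhardness in HV, each mapped to [0, 1] and combined.
D = overall_desirability([desirability_max(1.2, low=0.5, high=1.5),
                          desirability_max(620, low=500, high=700)])
```

In RSM practice, the fitted regression models predict each response over the parameter space, and the settings maximizing D are taken as the multi-criteria optimum.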

  14. Application of the Taguchi analytical method for optimization of effective parameters of the chemical vapor deposition process controlling the production of nanotubes/nanobeads.

    PubMed

    Sharon, Maheshwar; Apte, P R; Purandare, S C; Zacharia, Renju

    2005-02-01

    Seven variable parameters of the chemical vapor deposition system were optimized with the help of the Taguchi analytical method to obtain a desired product, e.g., carbon nanotubes or carbon nanobeads. Almost all of the selected parameters influence the growth of carbon nanotubes; among them, the nature of the precursor (racemic, R, or technical-grade camphor) and the carrier gas (hydrogen, argon, or an argon/hydrogen mixture) appear to be the most important parameters affecting nanotube growth. For the growth of nanobeads, in contrast, only two of the seven parameters, the catalyst (powder of iron, cobalt, or nickel) and the temperature (1023 K, 1123 K, or 1273 K), are the most influential. Systematic defects or islands on the substrate surface enhance the nucleation of novel carbon materials. Quantitative contributions of the process parameters and the optimum factor levels are obtained by performing analysis of variance (ANOVA) and analysis of means (ANOM), respectively.
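
The ANOM step, computing the mean response at each factor level across the orthogonal-array runs, can be sketched as follows; the factor names echo the abstract, but the array and responses are toy values, not the paper's data:

```python
def analysis_of_means(runs, responses, factor):
    """Mean response at each level of one factor across orthogonal-array
    runs (ANOM).  runs: list of dicts mapping factor name -> level."""
    totals, counts = {}, {}
    for run, y in zip(runs, responses):
        lvl = run[factor]
        totals[lvl] = totals.get(lvl, 0.0) + y
        counts[lvl] = counts.get(lvl, 0) + 1
    return {lvl: totals[lvl] / counts[lvl] for lvl in totals}

# Toy 4-run fragment of an orthogonal array over two of the factors
# named in the abstract (catalyst, temperature); responses are invented.
runs = [{"catalyst": "Fe", "T": 1023}, {"catalyst": "Fe", "T": 1123},
        {"catalyst": "Ni", "T": 1023}, {"catalyst": "Ni", "T": 1123}]
yield_pct = [40.0, 60.0, 20.0, 30.0]

means = analysis_of_means(runs, yield_pct, "catalyst")
best_catalyst = max(means, key=means.get)  # optimum factor level by ANOM
```

ANOVA then apportions the total variation among the factors to rank their quantitative contributions, while ANOM, as above, picks the best level of each factor.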

  15. Analytical study of shimmy of airplane wheels

    NASA Technical Reports Server (NTRS)

    Bourcier De Carbon, Christian

    1952-01-01

    The problem of shimmy of a castering wheel, such as the nose wheel of a tricycle gear airplane, is treated analytically. The flexibility of the tire is considered to be the primary cause of shimmy. The rather simple theory developed agrees rather well with previous experimental results. The author suggests that shimmy may be eliminated through a suitable choice of landing gear dimensions in lieu of a damper.

  16. Analytics for Education

    ERIC Educational Resources Information Center

    MacNeill, Sheila; Campbell, Lorna M.; Hawksey, Martin

    2014-01-01

    This article presents an overview of the development and use of analytics in the context of education. Using Buckingham Shum's three levels of analytics, the authors present a critical analysis of current developments in the domain of learning analytics, and contrast the potential value of analytics research and development with real world…

  17. Analytical and Experimental Study of Near-Threshold Interactions Between Crack Closure Mechanisms

    NASA Technical Reports Server (NTRS)

    Newman, John A.; Riddell, William T.; Piascik, Robert S.

    2003-01-01

    The results of an analytical closure model that considers contributions and interactions between plasticity-, roughness-, and oxide-induced crack closure mechanisms are presented and compared with experimental data. The analytical model is shown to provide a good description of the combined influences of crack roughness, oxide debris, and plasticity in the near-threshold regime. Furthermore, analytical results indicate that closure mechanisms interact in a non-linear manner such that the total amount of closure is not the sum of closure contributions for each mechanism.

  18. Using constraints and their value for optimization of large ODE systems

    PubMed Central

    Domijan, Mirela; Rand, David A.

    2015-01-01

    We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300

  19. Analytical stability and simulation response study for a coupled two-body system

    NASA Technical Reports Server (NTRS)

    Tao, K. M.; Roberts, J. R.

    1975-01-01

    An analytical stability study and a digital simulation response study of two connected rigid bodies are documented. Relative rotation of the bodies at the connection is allowed, thereby providing a model suitable for studying system stability and response during a soft-dock regime. Provisions are made for a docking-port axis-alignment torque and a despin torque capability for encountering spinning payloads. Although the stability analysis is based on linearized equations, the digital simulation is based on nonlinear models.

  20. The Effect of Brain Based Learning on Academic Achievement: A Meta-Analytical Study

    ERIC Educational Resources Information Center

    Gozuyesil, Eda; Dikici, Ayhan

    2014-01-01

    This study's aim is to measure the effect sizes of the quantitative studies that examined the effectiveness of brain-based learning on students' academic achievement, and to examine with the meta-analytical method whether there is a significant difference in effect in terms of the factors of education level, subject matter, sampling size, and the…
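
A minimal sketch of the effect-size machinery such a meta-analysis rests on (standardized mean difference and an inverse-variance pooled mean); the numbers are illustrative, not the study's data:

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference between treatment and control groups,
    using the pooled standard deviation."""
    sp = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                   / (n_t + n_c - 2))
    return (mean_t - mean_c) / sp

def fixed_effect_mean(effects, variances):
    """Inverse-variance weighted mean effect size (fixed-effect model)."""
    w = [1.0 / v for v in variances]
    return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Toy example: two hypothetical primary studies.
d = cohens_d(75.0, 10.0, 30, 70.0, 10.0, 30)
pooled = fixed_effect_mean([0.5, 0.3], [0.04, 0.04])
```

Moderator analyses of the kind the abstract describes (education level, subject matter, sample size) then compare pooled effects between subgroups of studies.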

  1. Optimal design of a thermally stable composite optical bench

    NASA Technical Reports Server (NTRS)

    Gray, C. E., Jr.

    1985-01-01

    The Lidar Atmospheric Sensing Experiment will be performed aboard an ER-2 aircraft; the lidar system used will be mounted on a lightweight, thermally stable graphite/epoxy optical bench whose design is presently subjected to analytical study and experimental validation. Attention is given to analytical methods for the selection of such expected laminate properties as the thermal expansion coefficient, the apparent in-plane moduli, and ultimate strength. For a symmetric laminate in which one of the lamina angles remains variable, an optimal lamina angle is selected to produce a design laminate with a near-zero coefficient of thermal expansion. Finite elements are used to model the structural concept of the design, with a view to the optical bench's thermal structural response as well as the determination of the degree of success in meeting the experiment's alignment tolerances.
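
The selection of a lamina angle for near-zero thermal expansion can be illustrated with a single-ply CTE transformation; a real bench design would use full classical laminate theory, and the graphite/epoxy CTE values below are assumed for illustration:

```python
import math

# Illustrative graphite/epoxy lamina CTEs (assumed values, per K):
ALPHA_1 = -0.3e-6   # fiber direction (slightly negative for carbon fiber)
ALPHA_2 = 28.0e-6   # transverse direction

def alpha_x(theta_deg):
    """Axial CTE of a single lamina rotated by theta:
    alpha_x = alpha_1 cos^2(theta) + alpha_2 sin^2(theta)."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    return ALPHA_1 * c * c + ALPHA_2 * s * s

def zero_cte_angle(lo=0.0, hi=45.0, tol=1e-6):
    """Bisection for the ply angle where the axial CTE crosses zero
    (possible only because alpha_1 < 0 < alpha_2)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if alpha_x(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta0 = zero_cte_angle()
```

In the laminate problem the paper studies, the same idea is applied at the stack level: one lamina angle is left free and tuned so the laminate's effective CTE is near zero while the in-plane moduli and strength remain acceptable.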

  2. The "Journal of Learning Analytics": Supporting and Promoting Learning Analytics Research

    ERIC Educational Resources Information Center

    Siemens, George

    2014-01-01

    The paper gives a brief overview of the main activities for the development of the emerging field of learning analytics led by the Society for Learning Analytics Research (SoLAR). The place of the "Journal of Learning Analytics" is identified; the journal is the most significant new initiative of SoLAR.

  3. Integrative energy-systems design: System structure from thermodynamic optimization

    NASA Astrophysics Data System (ADS)

    Ordonez, Juan Carlos

    This thesis deals with the application of thermodynamic optimization to find optimal structure and operation conditions of energy systems. Chapter 1 outlines the thermodynamic optimization of a combined power and refrigeration system subject to constraints. It is shown that the thermodynamic optimum is reached by distributing optimally the heat exchanger inventory. Chapter 2 considers the maximization of power extraction from a hot stream in the presence of phase change. It shows that when the receiving (cold) stream boils in a counterflow heat exchanger, the thermodynamic optimization consists of locating the optimal capacity rate of the cold stream. Chapter 3 shows that the main architectural features of a counterflow heat exchanger can be determined based on thermodynamic optimization subject to volume constraint. Chapter 4 addresses two basic issues in the thermodynamic optimization of environmental control systems (ECS) for aircraft: realistic limits for the minimal power requirement, and design features that facilitate operation at minimal power consumption. Several models of the ECS-Cabin interaction are considered and it is shown that in all the models the temperature of the air stream that the ECS delivers to the cabin can be optimized for operation at minimal power. In chapter 5 it is shown that the sizes (weights) of heat and fluid flow systems that function on board vehicles such as aircraft can be derived from the maximization of overall (system level) performance. Chapter 6 develops analytically the optimal sizes (hydraulic diameters) of parallel channels that penetrate and cool a volume with uniformly distributed internal heat generation and Chapter 7 shows analytically and numerically how an originally uniform flow structure transforms itself into a nonuniform one when the objective is to minimize global flow losses. It is shown that flow maldistribution and the abandonment of symmetry are necessary for the development of flow structures with

  4. Extended analytical formulas for the perturbed Keplerian motion under a constant control acceleration

    NASA Astrophysics Data System (ADS)

    Zuiani, Federico; Vasile, Massimiliano

    2015-03-01

    This paper presents a set of analytical formulae for the perturbed Keplerian motion of a spacecraft under the effect of a constant control acceleration. The proposed set of formulae can treat control accelerations that are fixed in either a rotating or an inertial reference frame. Moreover, the contribution of the zonal harmonic is included in the analytical formulae. It will be shown that the proposed analytical theory allows for the fast computation of long, multi-revolution spirals while maintaining good accuracy. The combined effect of different perturbations and of the shadow regions due to solar eclipse is also included. Furthermore, a simplified control parameterisation is introduced to optimise thrusting patterns with two thrust arcs and two coast arcs per revolution. This simple parameterisation is shown to ensure enough flexibility to describe complex low-thrust spirals. The accuracy and speed of the proposed analytical formulae are compared against a full numerical integration with different integration schemes. An averaging technique is then proposed as an application of the analytical formulae. Finally, the paper presents an example of the design of an optimal low-thrust spiral to transfer a spacecraft from an elliptical to a circular orbit around the Earth.

  5. Development and application of accurate analytical models for single active electron potentials

    NASA Astrophysics Data System (ADS)

    Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas

    2015-05-01

    The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction for highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).

  6. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications, where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can easily be attached to these structures to convert vibrational energy to electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimating the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, have not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.

  7. Curriculum Innovation for Marketing Analytics

    ERIC Educational Resources Information Center

    Wilson, Elizabeth J.; McCabe, Catherine; Smith, Robert S.

    2018-01-01

    College graduates need better preparation for and experience in data analytics for higher-quality problem solving. Using the curriculum innovation framework of Borin, Metcalf, and Tietje (2007) and case study research methods, we offer rich insights about one higher education institution's work to address the marketing analytics skills gap.…

  8. Electro-Analytical Study of Material Interfaces Relevant for Chemical Mechanical Planarization and Lithium Ion Batteries

    NASA Astrophysics Data System (ADS)

    Turk, Michael C.

    galvanic corrosions in chemically controlled low-pressure CMP. The CMP specific functions of the slurry components are characterized in the tribo-electro-analytical approach by using voltammetry, open circuit potential (OCP) measurements and electrochemical impedance spectroscopy (EIS) in the presence as well as in the absence of surface abrasion, both with and without the inclusion of colloidal silica (SiO2) abrasives. The results are used to understand the reaction mechanisms responsible for supporting material removal and corrosion suppression. The project carried out in the area of Li ion batteries (LIBs) uses electro-analytical techniques to probe electrolyte characteristics as well as electrode material performance. The investigation concentrates on optimizing a tactically chosen set of electrolyte compositions for low-to-moderate temperature applications of lithium titanium oxide (LTO), a relatively new anode material for such batteries. For this application, mixtures of non-aqueous carbonate based solvents are studied in combination with lithium perchlorate. The temperature dependent conductivities of the electrolytes are rigorously measured and analyzed using EIS. The experimental considerations and the working principle of this EIS based approach are carefully examined and standardized in the course of this study. These experiments also investigate the effects of temperature variations (below room temperature) on the solid electrolyte interphase (SEI) formation characteristics of LTO in the given electrolytes. This dissertation is organized as follows: Each experimental system and its relevance for practical applications are briefly introduced in each chapter. The experimental approach and the motivation for carrying out the investigation are also noted in that context. The experimental details specific to the particular study are described. 
This is followed by the results and their discussion, and subsequently, by the specific conclusions drawn from the given

  9. Molecular motion in cell membranes: Analytic study of fence-hindered random walks

    NASA Astrophysics Data System (ADS)

    Kenkre, V. M.; Giuggioli, L.; Kalay, Z.

    2008-05-01

    A theoretical calculation is presented to describe the confined motion of transmembrane molecules in cell membranes. The study is analytic, based on Master equations for the probability of the molecules moving as random walkers, and leads to explicit usable solutions, including expressions for the molecular mean square displacement and effective diffusion constants. One outcome is a detailed understanding of the dependence of the time variation of the mean square displacement on the initial placement of the molecule within the confined region. The use of the calculations is illustrated by extracting (confinement) compartment sizes from published single-particle tracking observations of the diffusion of gold-tagged G-protein-coupled μ-opioid receptors in the normal rat kidney cell membrane, and by further comparing the analytical results to observations on the diffusion of phospholipids, also in normal rat kidney cells.
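
The qualitative behavior analyzed in the paper, diffusive growth of the mean square displacement at short times and saturation at a value set by the compartment size, can be reproduced with a toy 1D fence-hindered random walk (a simulation sketch, not the paper's Master-equation solution):

```python
import random

def msd_confined(n_steps, box_half, n_walkers=2000, seed=0):
    """Mean square displacement of 1D lattice random walkers confined to
    sites [-box_half, box_half]; steps across the fence are blocked.
    All walkers start at the center of the compartment."""
    rng = random.Random(seed)
    pos = [0] * n_walkers
    msd = []
    for _ in range(n_steps):
        for i in range(n_walkers):
            new = pos[i] + rng.choice((-1, 1))
            if -box_half <= new <= box_half:   # fence blocks the step
                pos[i] = new
        msd.append(sum(x * x for x in pos) / n_walkers)
    return msd

msd = msd_confined(n_steps=200, box_half=5)
# Early times: msd grows ~linearly in time (free diffusion).
# Long times: msd saturates near the value fixed by the compartment size.
```

Extracting a compartment size from data, as the paper does for receptor trajectories, amounts to reading off this saturation level (and the crossover time) from the measured MSD curve.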

  10. Study of a vibrating plate: comparison between experimental (ESPI) and analytical results

    NASA Astrophysics Data System (ADS)

    Romero, G.; Alvarez, L.; Alanís, E.; Nallim, L.; Grossi, R.

    2003-07-01

    Real-time electronic speckle pattern interferometry (ESPI) was used for tuning and visualization of the natural frequencies of a trapezoidal plate. The plate was excited to resonant vibration by a sinusoidal acoustical source, which provided a continuous range of audio frequencies. Fringe patterns produced during time-average recording of the vibrating plate, corresponding to several resonant frequencies, were registered. From these interferograms, calculations of vibrational amplitudes by means of zero-order Bessel functions were performed in some particular cases. The system was also studied analytically. The analytical approach developed is based on the Rayleigh-Ritz method and on the use of non-orthogonal right triangular co-ordinates. The deflection of the plate is approximated by a set of beam characteristic orthogonal polynomials generated using the Gram-Schmidt procedure. A high degree of correlation between the computational analysis and the experimental results was observed.

  11. An analytical study of electric vehicle handling dynamics

    NASA Technical Reports Server (NTRS)

    Greene, J. E.; Segal, D. J.

    1979-01-01

    Hypothetical electric vehicle configurations were studied by applying available analytical methods. Elementary linearized models were used in addition to a highly sophisticated vehicle dynamics computer simulation technique. Physical properties of specific EVs were defined for various battery and powertrain packaging approaches applied to a range of weight distribution and inertial properties which characterize a generic class of EVs. Computer simulations of structured maneuvers were performed to predict handling qualities in the normal driving range and during various extreme conditions related to accident avoidance. Results indicate that an EV with forward weight bias will possess handling qualities superior to a comparable EV that is rear-heavy or equally balanced. The importance of properly matching tires, suspension systems, and brake system front/rear torque proportioning to a given EV configuration during the design stage is demonstrated.

  12. Quantum dot laser optimization: selectively doped layers

    NASA Astrophysics Data System (ADS)

    Korenev, Vladimir V.; Konoplev, Sergey S.; Savelyev, Artem V.; Shernyakov, Yurii M.; Maximov, Mikhail V.; Zhukov, Alexey E.

    2016-08-01

    Edge-emitting quantum dot (QD) lasers are discussed. It has recently been proposed to use modulation p-doping of the layers adjacent to the QD layers in order to control the QDs' charge state. Experimentally, this approach has proven useful for enhancing ground-state lasing and suppressing the onset of excited-state lasing at high injection. These results have also been confirmed by numerical calculations involving solution of the drift-diffusion equations. However, a deep understanding of the physical reasons for this behavior, and laser optimization, require analytical approaches to the problem. In this paper, under a set of assumptions, we provide an analytical model that explains the major effects of selective p-doping. Capture rates of electrons and holes can be calculated by solving the Poisson equations for electrons and holes around the charged QD layer. The charge itself is governed by the capture rates and the selective doping concentration. We analyze this self-consistent set of equations and show that it can be used to optimize QD laser performance and to explain the underlying physics.

  13. Evaluation of a reduced centrifugation time and higher centrifugal force on various general chemistry and immunochemistry analytes in plasma and serum.

    PubMed

    Møller, Mette F; Søndergaard, Tove R; Kristensen, Helle T; Münster, Anna-Marie B

    2017-09-01

    Background: Centrifugation of blood samples is an essential preanalytical step in the clinical biochemistry laboratory. Centrifugation settings are often altered to optimize sample flow and turnaround time. Few studies have addressed the effect of altering centrifugation settings on analytical quality, and almost all studies have been done using collection tubes with gel separator. Methods: In this study, we compared a centrifugation time of 5 min at 3000 × g to a standard protocol of 10 min at 2200 × g. Nine selected general chemistry and immunochemistry analytes and interference indices were studied in lithium heparin plasma tubes and serum tubes without gel separator. Results were evaluated using mean bias, difference plots and coefficient of variation, compared with the maximum allowable bias and coefficient of variation used in laboratory routine quality control. Results: For all analytes except lactate dehydrogenase, the results were within the predefined acceptance criteria, indicating that the analytical quality was not compromised. Lactate dehydrogenase showed higher values after centrifugation for 5 min at 3000 × g; the mean bias was 6.3 ± 2.2% and the coefficient of variation was 5%. Conclusions: We found that a centrifugation protocol of 5 min at 3000 × g can be used for the general chemistry and immunochemistry analytes studied, with the possible exception of lactate dehydrogenase, which requires further assessment.
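    The acceptance check described (mean percentage bias and coefficient of variation compared against predefined limits) can be sketched with a hypothetical helper. The function name and the exact CV definition below are illustrative assumptions, not taken from the study:

```python
def bias_and_cv(reference, test):
    """Mean percentage bias of test vs. reference results, and CV% of the
    test-protocol results relative to their mean."""
    n = len(reference)
    # mean of the paired percentage differences
    mean_bias = sum(100.0 * (t - r) / r for r, t in zip(reference, test)) / n
    # coefficient of variation of the test results (sample SD / mean, in %)
    mean_t = sum(test) / n
    sd = (sum((t - mean_t) ** 2 for t in test) / (n - 1)) ** 0.5
    return mean_bias, 100.0 * sd / mean_t
```

    For example, test results uniformly 6% above reference, as with the lactate dehydrogenase case above, give a mean bias of 6.0%, which would then be checked against the maximum allowable bias from routine quality control.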

  14. Optimized open-flow mixing: insights from microbubble streaming

    NASA Astrophysics Data System (ADS)

    Rallabandi, Bhargav; Wang, Cheng; Guo, Lin; Hilgenfeldt, Sascha

    2015-11-01

    Microbubble streaming has been developed into a robust and powerful flow actuation technique in microfluidics. Here, we study it as a paradigmatic system for microfluidic mixing under a continuous throughput of fluid (open-flow mixing), providing a systematic optimization of the device parameters in this practically important situation. Focusing on two-dimensional advective stirring (neglecting diffusion), we show through numerical simulation and analytical theory that mixing in steady streaming vortices becomes ineffective beyond a characteristic time scale, necessitating the introduction of unsteadiness. By duty cycling the streaming, such unsteadiness is introduced in a controlled fashion, leading to exponential refinement of the advection structures. The rate of refinement is then optimized for particular parameters of the time modulation, i.e., a particular combination of times for which the streaming is turned "on" and "off". The optimized protocol can be understood theoretically using the properties of the streaming vortices and the throughput Poiseuille flow. We can thus infer simple design principles for practical open-flow micromixing applications, consistent with experiments.

  15. Torsional Ultrasound Sensor Optimization for Soft Tissue Characterization

    PubMed Central

    Melchor, Juan; Muñoz, Rafael; Rus, Guillermo

    2017-01-01

    Torsional mechanical waves have the capability to characterize the shear stiffness moduli of soft tissue. Under this hypothesis, a computational methodology is proposed to design and optimize a piezoelectric-based transmitter and receiver to generate and measure the response of torsional ultrasonic waves. The procedure is divided into two steps: (i) a finite element method (FEM) is developed to obtain the transmitted and received waveforms as well as the resonance frequency of a preliminary geometry, validated against a semi-analytical simplified model, and (ii) a probabilistic optimality criterion for the design, based on an inverse problem estimating the robust probability of detection (RPOD), is used to maximize detection of the pathology, defined in terms of changes of shear stiffness. The study compares design options in two separate models, for transmission and contact, respectively. The main contribution of this work is a framework establishing the forward, inverse and optimization procedures needed to choose an appropriate set of transducer parameters. This methodological framework may be generalizable to other applications. PMID:28617353

  16. Digital robust control law synthesis using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivekananda

    1989-01-01

    Development of digital robust control laws for active control of high performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large order state space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to be implementable on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and a constrained optimization technique. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can also be used for sensitivity studies and may be integrated into a simultaneous structure and control optimization scheme.
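    The role of analytical gradients in such a constrained minimization can be illustrated on a toy problem: a quadratic cost with one inequality constraint handled by a quadratic penalty. This is a sketch, not the LQG aircraft problem; the penalty weight, step size, and iteration count are arbitrary choices:

```python
def minimize_with_penalty(mu=50.0, lr=0.008, iters=4000):
    """Minimize x1^2 + x2^2 subject to x1 + x2 >= 1 by gradient descent
    on the penalized cost  x1^2 + x2^2 + mu * max(0, 1 - x1 - x2)^2."""
    x1 = x2 = 0.0
    for _ in range(iters):
        viol = max(0.0, 1.0 - x1 - x2)     # constraint violation, if any
        # analytical gradient of the penalized cost (no finite differencing)
        g1 = 2.0 * x1 - 2.0 * mu * viol
        g2 = 2.0 * x2 - 2.0 * mu * viol
        x1 -= lr * g1
        x2 -= lr * g2
    return x1, x2
```

    The exact constrained optimum is (0.5, 0.5); the quadratic penalty converges to mu/(1 + 2·mu) ≈ 0.495 per coordinate, approaching the true optimum as mu grows. Having the gradient in closed form is what makes each descent step cheap, which is the same reason the abstract's synthesis procedure uses analytical gradient expressions.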

  17. Singular-Arc Time-Optimal Trajectory of Aircraft in Two-Dimensional Wind Field

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2006-01-01

    This paper presents a study of a minimum time-to-climb trajectory analysis for aircraft flying in a two-dimensional, altitude-dependent wind field. The time optimal control problem possesses a singular control structure when the lift coefficient is taken as a control variable. A singular arc analysis is performed to obtain an optimal control solution on the singular arc. Using a time-scale separation with the flight path angle treated as a fast state, the dimensionality of the optimal control solution is reduced by eliminating the lift coefficient control. A further singular arc analysis is used to decompose the original optimal control solution into the flight path angle solution and a trajectory solution as a function of the airspeed and altitude. The optimal control solutions for the initial and final climb segments are computed using a shooting method with known starting values on the singular arc. The numerical results of the shooting method show that the optimal flight path angles on the initial and final climb segments are constant. The analytical approach provides a rapid means for analyzing a time optimal trajectory for aircraft performance.
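    A shooting method of the kind mentioned can be sketched on a generic two-point boundary value problem. Here y'' = -y with y(0) = 0 and y(π/2) = 1 is solved for the unknown initial slope by bisection; this is a toy BVP standing in for the climb-trajectory equations, whose exact solution y = sin(x) has initial slope 1:

```python
import math

def integrate(slope, n=1000):
    """RK4 integration of y'' = -y from x = 0 to pi/2,
    with y(0) = 0 and y'(0) = slope; returns y(pi/2)."""
    h = (math.pi / 2) / n
    y, v = 0.0, slope
    for _ in range(n):
        # classic RK4 on the first-order system (y, v)' = (v, -y)
        k1y, k1v = v, -y
        k2y, k2v = v + h / 2 * k1v, -(y + h / 2 * k1y)
        k3y, k3v = v + h / 2 * k2v, -(y + h / 2 * k2y)
        k4y, k4v = v + h * k3v, -(y + h * k3y)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

def shoot(target=1.0, lo=0.0, hi=2.0):
    """Bisect on the unknown initial slope until y(pi/2) hits the target."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The same structure applies to the climb problem: guess the unknown initial conditions off the singular arc, integrate the state equations, and adjust the guess until the terminal boundary conditions are met.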

  18. Green analytical chemistry introduction to chloropropanols determination at no economic and analytical performance costs?

    PubMed

    Jędrkiewicz, Renata; Orłowski, Aleksander; Namieśnik, Jacek; Tobiszewski, Marek

    2016-01-15

    In this study we rank analytical procedures for 3-monochloropropane-1,2-diol determination in soy sauces by the PROMETHEE method. Multicriteria decision analysis was performed for three different scenarios - metrological, economic and environmental - by applying different weights to the decision-making criteria. All three scenarios indicate a capillary electrophoresis-based procedure as the most preferable. Apart from that, the details of the ranking results differ among the three scenarios. A second run of rankings was done for scenarios that each include only the metrological, economic or environmental criteria, neglecting the others. These results show that the green analytical chemistry-based selection correlates with the economic criteria, while there is no correlation with the metrological ones. This implies that green analytical chemistry can be brought into laboratories without analytical performance costs, and is even supported by economic reasons. Copyright © 2015 Elsevier B.V. All rights reserved.
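    A PROMETHEE-style ranking of the kind used here can be sketched as follows. This is PROMETHEE II net outranking flows with the simple "usual" preference function and no indifference or preference thresholds; the study's actual criteria, weights, and preference functions are not reproduced:

```python
def promethee_ii(scores, weights, maximize):
    """Net outranking flows for alternatives given per-criterion scores.

    scores[a][k] is alternative a's score on criterion k; weights[k] sums
    to 1; maximize[k] says whether higher is better on criterion k.
    Alternatives are ranked by descending net flow.
    """
    n = len(scores)

    def pref(a, b, k):
        # "usual" preference function: 1 if a strictly beats b, else 0
        d = scores[a][k] - scores[b][k]
        if not maximize[k]:
            d = -d
        return 1.0 if d > 0 else 0.0

    def pi(a, b):
        # weighted aggregate preference of a over b
        return sum(w * pref(a, b, k) for k, w in enumerate(weights))

    flows = []
    for a in range(n):
        plus = sum(pi(a, b) for b in range(n) if b != a) / (n - 1)
        minus = sum(pi(b, a) for b in range(n) if b != a) / (n - 1)
        flows.append(plus - minus)   # net flow = positive - negative
    return flows
```

    Re-running this with different weight vectors is exactly how the metrological, economic, and environmental scenarios above produce (potentially) different rankings from the same score table.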

  19. Analytical performance of a bronchial genomic classifier.

    PubMed

    Hu, Zhanzhi; Whitney, Duncan; Anderson, Jessica R; Cao, Manqiu; Ho, Christine; Choi, Yoonha; Huang, Jing; Frink, Robert; Smith, Kate Porta; Monroe, Robert; Kennedy, Giulia C; Walsh, P Sean

    2016-02-26

    The current standard practice of lung lesion diagnosis often leads to inconclusive results, requiring additional diagnostic follow-up procedures that are invasive and often unnecessary due to the high benign rate in such lesions (Chest 143:e78S-e92, 2013). The Percepta bronchial genomic classifier was developed and clinically validated to provide more accurate classification of lung nodules and lesions that are inconclusive by bronchoscopy, using bronchial brushing specimens (N Engl J Med 373:243-51, 2015, BMC Med Genomics 8:18, 2015). The analytical performance of the Percepta test is reported here. Analytical performance studies were designed to characterize the stability of RNA in bronchial brushing specimens during collection and shipment; analytical sensitivity, defined as input RNA mass; analytical specificity (i.e. potentially interfering substances), as tested on blood and genomic DNA; and assay performance, including intra-run, inter-run, and inter-laboratory reproducibility. RNA content within bronchial brushing specimens preserved in RNAprotect is stable for up to 20 days at 4 °C with no changes in RNA yield or integrity. Analytical sensitivity studies demonstrated tolerance to variation in RNA input (157 ng to 243 ng). Analytical specificity studies utilizing cancer positive and cancer negative samples mixed with either blood (up to 10 % input mass) or genomic DNA (up to 10 % input mass) demonstrated no assay interference. The test is reproducible from RNA extraction through to the Percepta test result, including variation across operators, runs, reagent lots, and laboratories (standard deviation of 0.26 for scores on a > 6 unit scale). Analytical sensitivity, analytical specificity and robustness of the Percepta test were successfully verified, supporting its suitability for clinical use.

  20. Analytical modeling and numerical simulation of the short-wave infrared electron-injection detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Movassaghi, Yashar; Fathipour, Morteza; Fathipour, Vala

    2016-03-21

    This paper describes comprehensive analytical and simulation models for the design and optimization of electron-injection based detectors. The electron-injection detectors evaluated here operate in the short-wave infrared range and utilize a type-II band alignment in the InP/GaAsSb/InGaAs material system. The unique geometry of the detectors, along with an inherent negative-feedback mechanism in the device, allows high internal avalanche-free amplification to be achieved without any excess noise. Physics-based closed-form analytical models are derived for the detector rise time and dark current. Our optical gain model takes into account the drop in the optical gain at high optical power levels. Furthermore, numerical simulation studies of the electrical characteristics of the device show good agreement with our analytical models as well as with experimental data. Performance comparison between devices with different injector sizes shows that enhancement in gain and speed is anticipated by reducing the injector size. A sensitivity analysis for the key detector parameters shows the relative importance of each parameter. The results of this study may provide useful information and guidelines for the development of future electron-injection based detectors as well as other heterojunction photodetectors.