NASA Astrophysics Data System (ADS)
Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui
2018-02-01
Due to its weakness in maintaining diversity and reaching the global optimum, standard particle swarm optimization has not performed well in reservoir optimal operation. To address this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this approach to the optimal operation of the Goupitan reservoir shows that the improved method has better accuracy and higher reliability with a small investment.
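The abstract gives no implementation details, so the following is a minimal Python sketch of one common way to hybridize the two methods: run a standard PSO and then polish the swarm's best point with the downhill simplex (Nelder-Mead) routine from SciPy. The constants, the `pso_simplex` name, and the test function are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a PSO + downhill-simplex hybrid on a generic objective.
# The coupling shown (polish the swarm's best point with Nelder-Mead) is an
# assumption; the paper does not specify how the two methods interact.
import numpy as np
from scipy.optimize import minimize

def pso_simplex(f, bounds, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))           # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()                  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                             # typical PSO constants
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    # downhill simplex (Nelder-Mead) refinement of the swarm's best point
    res = minimize(f, g, method="Nelder-Mead")
    return res.x, res.fun

# toy usage on a 2-D Rosenbrock function
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
print(pso_simplex(rosen, [(-2, 2), (-2, 2)]))
```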
Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.
ERIC Educational Resources Information Center
Raymond, Margaret; And Others
1983-01-01
Describes an experiment on the simultaneous determination of chromium and magnesium by spectrophotometry modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
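As a rough illustration of how FIM-based criteria differ, the hedged sketch below builds a Fisher information matrix for the Verhulst-Pearl logistic model from finite-difference sensitivities and evaluates D-, E-, and SE-type criteria for one candidate set of sampling times. The parameter values, noise level, and the exact form of the SE criterion are assumptions for illustration; the paper's Prohorov-metric framework and the optimization over distributions are not reproduced.

```python
# Hedged sketch: D-, E-, and SE-type design criteria evaluated for one
# candidate sampling schedule of the logistic (Verhulst-Pearl) model.
import numpy as np

def logistic(t, K, r, x0):
    return K * x0 / (x0 + (K - x0) * np.exp(-r * t))

def fim(times, theta, sigma=1.0, h=1e-6):
    """Fisher information matrix from finite-difference sensitivities."""
    theta = np.asarray(theta, float)
    S = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h; tm[j] -= h
        S[:, j] = (logistic(times, *tp) - logistic(times, *tm)) / (2 * h)
    return S.T @ S / sigma**2

def criteria(F, theta):
    cov = np.linalg.inv(F)                       # asymptotic parameter covariance
    return {"D": np.linalg.det(F),               # D-optimal maximizes det(FIM)
            "E": np.linalg.eigvalsh(F)[0],       # E-optimal maximizes the smallest eigenvalue
            # SE-type: summed squared relative standard errors (my reading of
            # "standard error optimal"; to be minimized)
            "SE": float(np.sum(np.diag(cov) / np.asarray(theta)**2))}

theta = (17.5, 0.7, 0.1)          # (K, r, x0) -- illustrative values, not the paper's
times = np.linspace(0, 25, 15)    # one candidate sampling distribution
print(criteria(fim(times, theta), theta))
```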
Steering Quantum Dynamics of a Two-Qubit System via Optimal Bang-Bang Control
NASA Astrophysics Data System (ADS)
Hu, Juju; Ke, Qiang; Ji, Yinghua
2018-02-01
The optimization of control time for quantum systems has long been an important topic in control science, since shorter control times improve efficiency and suppress decoherence caused by the environment. After analyzing the advantages and disadvantages of existing Lyapunov control, we use a bang-bang optimal control technique to investigate fast state control in a closed two-qubit quantum system and give three optimized control field design methods. Numerical simulation experiments indicate the effectiveness of the methods. Compared to the standard Lyapunov control or the standard bang-bang control method, the optimized control field design methods effectively shorten the state control time and avoid the high-frequency oscillation that occurs in bang-bang control.
Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł
2014-01-01
Aim The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background 3D-CRT is usually realized with forward planning based on a trial and error method. Several authors have published a few methods of beam weight optimization applicable to 3D-CRT. Still, none of these methods is in common use. Materials and methods Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields and maximum dose to the femoral heads. The method was tested for 10 patients with prostate cancer, treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. Dose standard deviation in the target volume, and minimum and maximum doses, were analyzed. Results The quality of plans obtained with the proposed optimization algorithms was comparable to that prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners for optimization of beam weights and wedge angles. Introducing a correction factor for the patient body outline for the dose gradient at the ICRU point improved dose distribution homogeneity. On average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in mean dose–volume histogram for the rectum was observed. Conclusions Optimization greatly shortens planning time. The average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411
Standardized Methods for Electronic Shearography
NASA Technical Reports Server (NTRS)
Lansing, Matthew D.
1997-01-01
Research was conducted in the development of operating procedures and standard methods to evaluate fiber reinforced composite materials, bonded or sprayed insulation, coatings, and laminated structures with MSFC electronic shearography systems. Optimal operating procedures were developed for the Pratt and Whitney Electronic Holography/Shearography Inspection System (EH/SIS) operating in shearography mode, as well as the Laser Technology, Inc. (LTI) SC-4000 and Ettemeyer SHS-94 ISTRA shearography systems. Operating practices for exciting the components being inspected were studied, including optimal methods for transient heating with heat lamps and other methods as appropriate to enhance inspection capability.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
A reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of the reference standard during method development based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, and rosmarinic acid, with about 79.8% of its priority, is the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to comprehensively consider the benefits and risks of the alternatives. It is an effective and practical tool for the optimization of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
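The abstract refers to the standard AHP analysis; a minimal sketch of that calculation (principal-eigenvector priorities plus Saaty's consistency ratio) is shown below. The pairwise comparison matrix and the `ahp_priorities` name are invented for illustration and are not the authors' actual judgments.

```python
# Hedged sketch of the standard AHP priority calculation.
import numpy as np

def ahp_priorities(A):
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                      # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                  # normalized priority vector
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random indices
    return w, ci / ri                             # priorities, consistency ratio

# illustrative 3x3 pairwise comparison matrix (not the paper's data)
A = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]], float)
w, cr = ahp_priorities(A)
print(w, cr)   # CR < 0.1 is the usual acceptability threshold
```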
Comparison of Optimal Design Methods in Inverse Problems
2011-05-11
…the corresponding FIM can be estimated by $\hat{F}(\tau) = \hat{F}(\tau, \hat{\theta}_{\mathrm{OLS}}) = (\hat{\Sigma}^N(\hat{\theta}_{\mathrm{OLS}}))^{-1}$ (13). The asymptotic standard errors are given by $\mathrm{SE}_k(\theta_0) = \sqrt{(\Sigma^N_0)_{kk}}$, $k = 1, \ldots, p$ (14). These standard errors are estimated in practice (when $\theta_0$ and $\sigma_0$ are not known) by $\mathrm{SE}_k(\hat{\theta}_{\mathrm{OLS}}) = \sqrt{(\hat{\Sigma}^N(\hat{\theta}_{\mathrm{OLS}}))_{kk}}$, $k = 1, \ldots, p$, and from bootstrapping by $\mathrm{SE}_k(\hat{\theta}_{\mathrm{boot}}) = \sqrt{\mathrm{Cov}(\hat{\theta}_{\mathrm{boot}})_{kk}}$. We will compare the optimal design methods using the standard errors resulting from the optimal time points each…
NASA Astrophysics Data System (ADS)
Ouyang, Bo; Shang, Weiwei
2016-03-01
The tension distribution of cable-driven parallel manipulators (CDPMs) with redundant cables has infinitely many solutions. A rapid optimization method for determining the optimal tension distribution is presented. The new optimization method is primarily based on the geometric properties of a polyhedron and convex analysis. The computational efficiency of the optimization method is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two of the lines intersect at the optimal point. Moreover, a method for avoiding operating points on the lower tension limit is developed. Simulation experiments are implemented on a six degree-of-freedom (6-DOF) CDPM with eight cables, and the results indicate that the new method is one order of magnitude faster than the standard simplex method. The optimal tension distribution is thus rapidly established in real time by the proposed method.
Ye, Bixiong; E, Xueli; Zhang, Lan
2015-01-01
To optimize the non-regular drinking water quality indices (except Giardia and Cryptosporidium) of urban drinking water, several methods were applied, including the rate of exceeding the standard, the risk of exceeding the standard, the frequency of detecting concentrations below the detection limit, a comprehensive water quality index evaluation method, and the attribute reduction algorithm of rough set theory. Redundant water quality indicators were eliminated, and the control factors that play a leading role in drinking water safety were identified. The optimization results showed that, of 62 unconventional water quality monitoring indicators of urban drinking water, 42 indicators could be removed through comprehensive evaluation combined with rough set attribute reduction. Optimizing the water quality monitoring indicators and reducing the number of monitored indicators and the monitoring frequency can ensure the safety of drinking water quality while lowering monitoring costs and reducing the monitoring burden on the sanitation supervision departments.
A three-dimensional topology optimization model for tooth-root morphology.
Seitz, K-F; Grabe, J; Köhne, T
2018-02-01
To obtain the root of a lower incisor through structural optimization, we used two methods: optimization with Solid Isotropic Material with Penalization (SIMP) and the Soft-Kill Option (SKO). The optimization was carried out in combination with a finite element analysis in Abaqus/Standard. The model geometry was based on cone-beam tomography scans of 10 adult males with a healthy bone-tooth interface. Our results demonstrate that the optimization method using SIMP for minimum compliance could not adequately predict the actual root shape. The SKO method, however, provided optimization results that were comparable to the natural root form and is therefore suitable for setting up the basic topology of a dental root.
A modified form of conjugate gradient method for unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa
2016-06-01
Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems, owing to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient on standard test problems compared to other existing CG methods.
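For readers unfamiliar with nonlinear CG, a minimal sketch of the generic iteration follows. The Fletcher-Reeves coefficient is used only as a placeholder; the modified coefficient proposed in the paper would replace the `beta` line, and the exact line search is approximated here with SciPy's scalar minimizer.

```python
# Minimal nonlinear CG sketch with an exact-line-search stand-in.
import numpy as np
from scipy.optimize import minimize_scalar

def cg(f, grad, x0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = minimize_scalar(lambda a: f(x + a * d)).x   # "exact" line search
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)                    # Fletcher-Reeves (placeholder)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# standard test problem: 2-D Rosenbrock function
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
rosen_grad = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                                 200 * (z[1] - z[0]**2)])
print(cg(rosen, rosen_grad, [-1.2, 1.0]))
```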
Optimization of a chondrogenic medium through the use of factorial design of experiments.
Enochson, Lars; Brittberg, Mats; Lindahl, Anders
2012-12-01
The standard culture system for in vitro cartilage research is based on cells in a three-dimensional micromass culture and a defined medium containing the chondrogenic key growth factor, transforming growth factor (TGF)-β1. The aim of this study was to optimize the medium for chondrocyte micromass culture. Human chondrocytes were cultured in different media formulations, designed with a factorial design of experiments (DoE) approach and based on the standard medium for redifferentiation. The significant factors for the redifferentiation of the chondrocytes were determined and optimized in a two-step process through the use of response surface methodology. TGF-β1, dexamethasone, and glucose were significant factors for differentiating the chondrocytes. Compared to the standard medium, TGF-β1 was increased by 30%, dexamethasone reduced by 50%, and glucose increased by 22%. The potency of the optimized medium was validated in a comparative study against the standard medium. The optimized medium resulted in micromass cultures with increased expression of genes important for the articular chondrocyte phenotype and in cultures with increased glycosaminoglycan/DNA content. By optimizing the standard medium with the efficient DoE method, a new medium that gave better redifferentiation of articular chondrocytes was determined.
Budczies, Jan; Klauschen, Frederick; Sinn, Bruno V.; Győrffy, Balázs; Schmitt, Wolfgang D.; Darb-Esfahani, Silvia; Denkert, Carsten
2012-01-01
Gene or protein expression data are usually represented by metric or at least ordinal variables. In order to translate a continuous variable into a clinical decision, it is necessary to determine a cutoff point and to stratify patients into two groups each requiring a different kind of treatment. Currently, there is no standard method or standard software for biomarker cutoff determination. Therefore, we developed Cutoff Finder, a bundle of optimization and visualization methods for cutoff determination that is accessible online. While one of the methods for cutoff optimization is based solely on the distribution of the marker under investigation, other methods optimize the correlation of the dichotomization with respect to an outcome or survival variable. We illustrate the functionality of Cutoff Finder by the analysis of the gene expression of estrogen receptor (ER) and progesterone receptor (PgR) in breast cancer tissues. The distribution of these important markers is analyzed and correlated with immunohistologically determined ER status and distant metastasis-free survival. Cutoff Finder is expected to fill a relevant gap in the available biometric software repertoire and will enable faster optimization of new diagnostic biomarkers. The tool can be accessed at http://molpath.charite.de/cutoff. PMID:23251644
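As a hedged illustration of the kind of outcome-based strategy such a tool can offer, the sketch below scans candidate cutoffs and keeps the one whose dichotomization is most significantly associated with a binary outcome (Fisher's exact test). The data are synthetic, and Cutoff Finder's distribution-based and survival-based methods are not reproduced here.

```python
# Hedged sketch of outcome-based cutoff optimization on synthetic data.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
marker = np.concatenate([rng.normal(1.0, 0.5, 80), rng.normal(2.5, 0.7, 60)])
outcome = np.concatenate([np.zeros(80, int), np.ones(60, int)])   # e.g. ER status

best_cut, best_p = None, 1.0
for cut in np.quantile(marker, np.linspace(0.1, 0.9, 81)):        # candidate cutoffs
    high = marker > cut
    table = [[np.sum(high & (outcome == 1)), np.sum(high & (outcome == 0))],
             [np.sum(~high & (outcome == 1)), np.sum(~high & (outcome == 0))]]
    p = fisher_exact(table)[1]
    if p < best_p:
        best_cut, best_p = cut, p

print("optimal cutoff:", best_cut, "p-value:", best_p)
```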
Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang
2015-01-01
A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviations of the 9 outgoing beam energies in the optimized gratings were 2.3 and 3.4 times lower than that of the optimal-efficiency grating. PMID:25969268
UAV Mission Planning under Uncertainty
2006-06-01
[Figure caption excerpts] …algorithm, adapted from [13]; 4-5 Robust Optimization considers only a subset of the feasible region; 5-1 Overview of simulation with parameter… [Text excerpts] …incorporates the robust optimization method suggested by Bertsimas and Sim [12], and is solved with a standard Branch-and-Cut algorithm. The chapter… algorithms, and the heuristic methods of Local Search and Simulated Annealing. With each method, we attempt to give a review of research that has…
An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan
2008-01-01
This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well-known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints are equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.
A three-term conjugate gradient method under the strong-Wolfe line search
NASA Astrophysics Data System (ADS)
Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa
2017-08-01
Recently, numerous studies have been concerned with conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method is proposed for unconstrained optimization which always satisfies the sufficient descent condition, named Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL). Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong-Wolfe line search. Finally, numerical results are provided for the purpose of comparison.
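For reference, the strong-Wolfe conditions used in such convergence analyses can be checked with a few lines of code; the sketch below is generic, and the constants c1 and c2 are typical CG-style choices rather than values taken from the paper.

```python
# A small helper testing whether a step size satisfies the strong-Wolfe
# conditions (sufficient decrease plus strong curvature condition).
import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    phi0, dphi0 = f(x), grad(x) @ d            # value and slope at alpha = 0
    phi_a = f(x + alpha * d)
    dphi_a = grad(x + alpha * d) @ d           # slope at the trial step
    sufficient_decrease = phi_a <= phi0 + c1 * alpha * dphi0
    curvature = abs(dphi_a) <= c2 * abs(dphi0)
    return sufficient_decrease and curvature
```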
Optimization of light quality from color mixing light-emitting diode systems for general lighting
NASA Astrophysics Data System (ADS)
Thorseth, Anders
2012-03-01
Given the problem of metamerism inherent in color mixing in light-emitting diode (LED) systems with more than three distinct colors, a method has been developed for optimizing the spectral output of a multicolor LED system with regard to standardized light quality parameters. The composite spectral power distribution from the LEDs is simulated using spectral radiometric measurements of single commercially available LEDs at varying input power, to account for the efficiency droop and other non-linear effects in electrical power vs. light output. The method uses electrical input powers as input parameters in a randomized steepest descent optimization. The resulting spectral power distributions are evaluated with regard to light quality using the standard characteristics: CIE color rendering index, correlated color temperature and chromaticity distance. The results indicate Pareto optimal boundaries for each system, mapping the capabilities of the simulated lighting systems with regard to the light quality characteristics.
Determining the optimal load for jump squats: a review of methods and calculations.
Dugan, Eric L; Doyle, Tim L A; Humphries, Brendan; Hasson, Christopher J; Newton, Robert U
2004-08-01
There has been an increasing volume of research focused on the load that elicits maximum power output during jump squats. Because of a lack of standardization for data collection and analysis protocols, results of much of this research are contradictory. The purpose of this paper is to examine why differing methods of data collection and analysis can lead to conflicting results for maximum power and the associated optimal load. Six topics relevant to measurement and reporting of maximum power and optimal load are addressed: (a) data collection equipment, (b) inclusion or exclusion of body weight force in calculations of power, (c) free weight versus Smith machine jump squats, (d) reporting of average versus peak power, (e) reporting of load intensity, and (f) instructions given to athletes/participants. Based on this information, a standardized protocol for data collection and reporting of jump squat power and optimal load is presented.
Gradient Optimization for Analytic conTrols - GOAT
NASA Astrophysics Data System (ADS)
Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
Quantum optimal control has become a necessary step in a number of studies in the quantum realm. Recent experimental advances have shown that superconducting qubits can be controlled with an impressive accuracy. However, most of the standard optimal control algorithms are not designed to manage such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: the Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise constant approximation of the control pulse used by standard algorithms. This allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g. the absence of backpropagation or a natural route to optimizing the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.
A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM
NASA Astrophysics Data System (ADS)
Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan
2018-03-01
In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The support vector machine is extended to a nonlinear, multi-class SVM, particle swarm optimization is used to optimize the multi-class SVM model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard support vector machine and the genetic-algorithm-optimized support vector machine, which proves that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
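A minimal sketch of the PSO-plus-SVM idea is given below, assuming scikit-learn is available: PSO searches over (C, gamma) in log space and the fitness is cross-validated accuracy. The iris dataset is only a stand-in so the sketch runs; the actual model would use the three DGA ratios as features and transformer fault classes as labels, and the PSO constants are conventional, not the paper's.

```python
# Hedged sketch: PSO tuning of SVM hyperparameters with cross-validation.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)          # stand-in for DGA-ratio features
rng = np.random.default_rng(0)

def fitness(p):                             # p = [log10(C), log10(gamma)]
    clf = SVC(C=10.0**p[0], gamma=10.0**p[1])
    return -cross_val_score(clf, X, y, cv=5).mean()   # minimize negative CV accuracy

lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])  # search bounds in log space
n, iters = 20, 30
x = rng.uniform(lo, hi, (n, 2)); v = np.zeros_like(x)
pbest = x.copy(); pval = np.array([fitness(p) for p in x])
g = pbest[pval.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    val = np.array([fitness(p) for p in x])
    improved = val < pval
    pbest[improved], pval[improved] = x[improved], val[improved]
    g = pbest[pval.argmin()].copy()

print("best C, gamma:", 10.0**g, "CV accuracy:", -pval.min())
```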
Numerical Optimization Using Computer Experiments
NASA Technical Reports Server (NTRS)
Trosset, Michael W.; Torczon, Virginia
1997-01-01
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
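A hedged sketch of the surrogate-guided idea follows: a kriging (Gaussian-process) model is fitted to the points evaluated so far, and its predictions steer where on a grid the next expensive evaluation is spent. The kernel, the lower-confidence-bound infill rule, and the toy objective are illustrative choices, not the authors' exact procedure.

```python
# Hedged sketch of kriging-surrogate-guided grid search.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive(x):                               # stand-in for the costly objective
    return np.sin(3 * x) + 0.5 * x**2

X = np.array([[-2.0], [-0.5], [1.0], [2.0]])    # initial design
y = expensive(X.ravel())
grid = np.linspace(-2.5, 2.5, 201).reshape(-1, 1)

for _ in range(10):                             # sequential surrogate-guided search
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(),
                                  alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_next = grid[np.argmin(mu - 1.0 * sd)]     # simple lower-confidence-bound infill
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive(x_next[0]))

print("best point found:", X[np.argmin(y)], "value:", y.min())
```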
Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu
2017-01-01
The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration in the striatum to the non-specific binding concentration in the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we sometimes could not define the ROI of the non-specific binding concentration (reference region) and calculate the SBR appropriately. Therefore, we sought a new method for determining the reference region when calculating the SBR. Using data from 20 patients who had undergone DAT imaging in our hospital, we calculated the non-specific binding concentration by the following methods: the threshold defining the reference region was fixed at specific values (the fixing method), or the reference region was visually optimized by an examiner for every examination (the visual optimization method). First, we assessed the reference region of each method visually, and afterward we quantitatively compared the SBRs calculated by each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as the baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
Joint-layer encoder optimization for HEVC scalable extensions
NASA Astrophysics Data System (ADS)
Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong
2014-09-01
Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to make more proper resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
Karmarkar, S; Yang, X; Garber, R; Szajkovics, A; Koberda, M
2014-11-01
The USP monograph describes an HPLC method for seven impurities in the amiodarone drug substance using an L1 column, 4.6 mm × 150 mm, 5 μm packing (PF-listed ODS2 GL-Science Inertsil column) at 30°C with detection at 240 nm. The standard contains 0.01 mg/mL of amiodarone and the USP-specified impurities D and E, with a resolution requirement of NLT 3.5 between peaks D and E. Impurities in a 5 mg/mL sample are quantitated against the standard. The impurity A peak elutes just before peak D. We observed two problems with the method: column lot-to-lot variability resulted in unresolved A, D, and E peaks, and peak D in the sample preparation eluted much later than in the standard solution. Therefore, optimization experiments were conducted on the USP method following the QbD approach with Fusion AE™ software (S-Matrix Corporation). The resulting optimized conditions were within the allowable changes per USP 〈621〉. Lot-to-lot variability was negligible with the Atlantis T3 (Waters Corporation) L1 column. The peak D retention time remained constant from standard to sample. The optimized method was validated in terms of accuracy, precision, linearity, range, LOQ/LOD, specificity, robustness, equivalency to the USP method, and solution stability. The QbD-based development helped in generating a design space and operating space with knowledge of all method performance characteristics and limitations, and confirmed successful method robustness within the operating space. Copyright © 2014 Elsevier B.V. All rights reserved.
Simulation and Modeling Capability for Standard Modular Hydropower Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Kevin M.; Smith, Brennan T.; Witt, Adam M.
Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.
Research on numerical method for multiple pollution source discharge and optimal reduction program
NASA Astrophysics Data System (ADS)
Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin
2018-03-01
In this paper, an optimal method for a pollutant reduction program is proposed based on a nonlinear optimization algorithm, the genetic algorithm. The four main rivers in Jiangsu Province, China, are selected for reducing environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status and standard of the nearshore district are used to determine the reduction of the discharge of multiple river pollutants. The research results of the reduction program form the basis of marine environmental management.
Detecting glaucomatous change in visual fields: Analysis with an optimization framework.
Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2015-12-01
Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.
[Determination of trace gallium by graphite furnace atomic absorption spectrometry in urine].
Zhou, L Z; Fu, S; Gao, S Q; He, G W
2016-06-20
To establish a method for the determination of trace gallium in urine by graphite furnace atomic absorption spectrometry (GFAAS). Ammonium dihydrogen phosphate was used as the matrix modifier. The pyrolysis (Tpyr) and atomization temperatures were optimized for the determination of trace gallium. The within-run and between-run precision and the recoveries of standard additions were evaluated. The method showed a linear relationship within the range of 0.20~80.00 μg/L (r=0.998). The within-run and between-run relative standard deviations (RSD) of repeated measurements at the 5.0, 10.0 and 20.0 μg/L concentration levels were 2.1%~5.5% and 2.3%~3.0%, respectively. The detection limit was 0.06 μg/L. The recoveries of gallium were 98.2%~101.1%. This method is simple, accurate, reliable and reproducible, with a low detection limit. It has been applied to the determination of trace gallium in urine samples from those who need an occupational health examination or poisoning diagnosis.
Relaxation-optimized transfer of spin order in Ising spin chains
NASA Astrophysics Data System (ADS)
Stefanatos, Dionisis; Glaser, Steffen J.; Khaneja, Navin
2005-12-01
In this paper, we present relaxation optimized methods for the transfer of bilinear spin correlations along Ising spin chains. These relaxation optimized methods can be used as a building block for the transfer of polarization between distant spins on a spin chain, a problem that is ubiquitous in multidimensional nuclear magnetic resonance spectroscopy of proteins. Compared to standard techniques, significant reduction in relaxation losses is achieved by these optimized methods when transverse relaxation rates are much larger than the longitudinal relaxation rates and comparable to couplings between spins. We derive an upper bound on the efficiency of the transfer of the spin order along a chain of spins in the presence of relaxation and show that this bound can be approached by the relaxation optimized pulse sequences presented in the paper.
Computational alternatives to obtain time optimal jet engine control. M.S. Thesis
NASA Technical Reports Server (NTRS)
Basso, R. J.; Leake, R. J.
1976-01-01
Two computational methods are described for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine modeled by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms which can solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems considered here have fixed right ends and free final time. Dynamic programming is defined on a standard problem and yields a successive-approximation solution to the time-optimal problem of interest. A feedback control law is obtained and is then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state-variable and control constraints.
Dual-mode nested search method for categorical uncertain multi-objective optimization
NASA Astrophysics Data System (ADS)
Tang, Long; Wang, Hu
2016-10-01
Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
The industrial applications of the Coupled Tank System (CTS) are widespread, especially in the chemical process industries. The overall process requires liquids to be pumped, stored in a tank and pumped again to another tank. Nevertheless, the level of liquid in the tank needs to be controlled and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two methods of the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness Scheme in Particle Swarm Optimization (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). It is demonstrated that implementation of PSO via the Priority-based Fitness Scheme (PFPSO) is a potential technique for controlling the desired liquid level and improving system performance compared with standard PSO.
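The sketch below illustrates, under stated assumptions, how such a fitness might be built: a simplified linear coupled-tank model is stepped forward, and settling time, steady-state error and overshoot are combined into a single priority-weighted fitness. The plant parameters, pump limits and priority weights are invented for illustration; any PSO loop (such as those sketched earlier in this collection) could minimize this fitness over (Kp, Ki, Kd).

```python
# Hedged sketch: step-response metrics and a priority-style fitness for PID
# tuning of an illustrative linear coupled-tank model.
import numpy as np

def step_response(gains, dt=0.1, T=200.0):
    Kp, Ki, Kd = gains
    h1 = h2 = integ = 0.0; e_prev = 1.0; out = []
    for _ in range(int(T / dt)):
        e = 1.0 - h2                                   # unit setpoint on tank 2 level
        integ += e * dt
        u = np.clip(Kp * e + Ki * integ + Kd * (e - e_prev) / dt, 0.0, 5.0)  # pump limits
        h1 += dt * (u - 0.5 * h1)                      # tank 1 dynamics (assumed)
        h2 += dt * (0.5 * h1 - 0.4 * h2)               # tank 2 dynamics (assumed)
        e_prev = e; out.append(h2)
    return np.array(out), dt

def settling_time(y, dt, tol=0.02):
    outside = np.abs(y - 1.0) > tol                    # outside the 2% band
    if outside[-1]:
        return len(y) * dt                             # never settles within the horizon
    idx = np.where(outside)[0]
    return (idx[-1] + 1) * dt if idx.size else 0.0

def priority_fitness(gains, weights=(1.0, 50.0, 20.0)):
    y, dt = step_response(gains)
    ts = settling_time(y, dt)
    sse = abs(1.0 - y[-1])                             # steady-state error
    ov = max(y.max() - 1.0, 0.0)                       # overshoot
    w_ts, w_sse, w_os = weights                        # priority ordering is an assumption
    return w_ts * ts + w_sse * sse + w_os * ov

print(priority_fitness((2.0, 0.3, 1.0)))               # example evaluation for one gain set
```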
Metafitting: Weight optimization for least-squares fitting of PTTI data
NASA Technical Reports Server (NTRS)
Douglas, Rob J.; Boulanger, J.-S.
1995-01-01
For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
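A minimal sketch of the standard-uncertainty calculation for a general weighted least-squares fit is shown below: with design matrix A and weight matrix W taken as the inverse noise covariance, the parameter covariance is (A^T W A)^{-1}, and the uncertainty of an interpolated or extrapolated value follows by propagation. The calibration data and the linear-drift model are invented for illustration, not the paper's maser data or optimized weights.

```python
# Hedged sketch of weighted least-squares standard uncertainty.
import numpy as np

def wls_uncertainty(A, y, W):
    cov_theta = np.linalg.inv(A.T @ W @ A)           # parameter covariance
    theta = cov_theta @ A.T @ W @ y                  # weighted LS estimate
    return theta, cov_theta

# linear drift fit to intermittent calibrations: y = a + b * t
t = np.array([0.0, 1.0, 2.0, 5.0, 9.0])              # days of calibration runs
y = np.array([0.2, 0.9, 2.1, 5.2, 9.1])              # time offsets (arbitrary units)
A = np.column_stack([np.ones_like(t), t])
W = np.diag(1.0 / np.array([0.1, 0.1, 0.2, 0.2, 0.1])**2)   # inverse variances

theta, cov = wls_uncertainty(A, y, W)
t_pred = 10.0                                        # extrapolate one day past the last run
a_row = np.array([1.0, t_pred])
u_pred = np.sqrt(a_row @ cov @ a_row)                # standard uncertainty of the prediction
print(theta, u_pred)
```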
NASA Astrophysics Data System (ADS)
Zhiying, Chen; Ping, Zhou
2017-11-01
Considering the computational precision and efficiency of robust optimization for a complex mechanical assembly relationship such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model of the overall parameters and the blade-tip clearance, and then a set of samples of design parameters and objective response mean and/or standard deviation is generated by using the system approximation model and the design-of-experiments method. Finally, a new response surface approximation model is constructed from those samples, and this approximation model is used in the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way for the robust optimization design of turbine blade-tip radial running clearance.
Planning Under Uncertainty: Methods and Applications
2010-06-09
…begun research into fundamental algorithms for optimization and re-optimization of continuous optimization problems (such as linear and quadratic… …algorithm yields a 14.3% improvement over the original design while saving 68.2% of the simulation evaluations compared to standard sample-path… They provide tools for building and justifying computational algorithms for such problems.
Gao, Chen-chen; Li, Feng-min; Lu, Lun; Sun, Yue
2015-10-01
For the determination of trace amounts of phthalic acid esters (PAEs) in a complex seawater matrix, a stir bar sorptive extraction gas chromatography mass spectrometry (SBSE-GC-MS) method was established. Dimethyl phthalate (DMP), diethyl phthalate (DEP), dibutyl phthalate (DBP), butyl benzyl phthalate (BBP), di(2-ethylhexyl) phthalate (DEHP) and dioctyl phthalate (DOP) were selected as the study objects. The effects of extraction time, amount of methanol, amount of sodium chloride, desorption time and desorption solvent were optimized. The SBSE-GC-MS method was validated through recoveries and relative standard deviations. The optimal extraction time was 2 h. The optimal methanol content was 10%. The optimal sodium chloride content was 5%. The optimal desorption time was 50 min. The optimal desorption solvent was a mixture of methanol and acetonitrile (4:1, volume:volume). The peak area and the concentration of PAEs showed a good linear relationship; the correlation coefficients were greater than 0.997. The detection limits were between 0.25 and 174.42 ng·L(-1). The recoveries at different concentrations were between 56.97% and 124.22%. The relative standard deviations were between 0.41% and 14.39%. On the basis of this method, several estuarine water samples from Jiaozhou Bay were analyzed. DEP was detected in all samples, and the concentrations of BBP, DEHP and DOP were much higher than those of the rest.
Ge, Meili; Shao, Yingqi; Huang, Jinbo; Huang, Zhendong; Zhang, Jing; Nie, Neng; Zheng, Yizhou
2013-01-01
Background Previous reports showed that the outcome of rabbit antithymocyte globulin (rATG) was not satisfactory as first-line therapy for severe aplastic anemia (SAA). We explored a modified schedule of administration of rATG. Design and Methods Outcomes of a cohort of 175 SAA patients, including 51 patients administered the standard protocol (3.55 mg/kg/d for 5 days) and 124 cases administered the optimized protocol (1.97 mg/kg/d for 9 days) of rATG plus cyclosporine (CSA), were analyzed retrospectively. Results Of all 175 patients, response rates at 3 and 6 months were 36.6% and 56.0%, respectively. The 51 cases who received the standard protocol had poor responses at 3 (25.5%) and 6 months (41.2%). However, the 124 patients who received the optimized protocol had better responses at 3 (41.1%, P = 0.14) and 6 months (62.1%, P = 0.01). Higher incidences of infection (57.1% versus 37.9%, P = 0.02) and early mortality (17.9% versus 0.8%, P<0.001) occurred in patients who received the standard protocol compared with the optimized protocol. A 5-year overall survival in favor of the optimized over the standard rATG protocol (76.0% versus 50.3%, P<0.001) was observed. By multivariate analysis, the optimized protocol (RR = 2.21, P = 0.04), response at 3 months (RR = 10.31, P = 0.03) and a shorter interval (<23 days) between diagnosis and the initial dose of rATG (RR = 5.35, P = 0.002) were independent favorable predictors of overall survival. Conclusions The optimized instead of the standard rATG protocol in combination with CSA remained efficacious as a first-line immunosuppressive regimen for SAA. PMID:23554855
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.
Baranwal, Vipul K; Pandey, Ram K; Singh, Om P
2014-01-01
We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2,… and auxiliary functions H0(x), H1(x), H2(x),… are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.
High-precision method of binocular camera calibration with a distortion model.
Li, Weimin; Shan, Siyu; Liu, Hui
2017-03-10
A high-precision camera calibration method for binocular stereo vision system based on a multi-view template and alternative bundle adjustment is presented in this paper. The proposed method could be achieved by taking several photos on a specially designed calibration template that has diverse encoded points in different orientations. In this paper, the method utilized the existing algorithm used for monocular camera calibration to obtain the initialization, which involves a camera model, including radial lens distortion and tangential distortion. We created a reference coordinate system based on the left camera coordinate to optimize the intrinsic parameters of left camera through alternative bundle adjustment to obtain optimal values. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment when we create a reference coordinate system based on the right camera coordinate. We also used all intrinsic parameters that were acquired to optimize extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels with the relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.
NASA Astrophysics Data System (ADS)
Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue
2017-08-01
The on-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods to estimate the MTF, such as the pinhole method, the slit method and so on. Among them, the knife-edge method is quite efficient, easy to use and recommended in the ISO 12233 standard for whole-frequency MTF curve acquisition. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. So in this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of its parameters. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit accurate parameters adaptively, with insensitivity to the initial parameters. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction and leaning angle conditions. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard knife-edge method of ISO 12233 in MTF estimation.
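A hedged sketch of the ESF-fitting step is given below: a Fermi-function edge model is fitted by minimizing the squared residual with SciPy's Powell method, which needs no gradients and is relatively tolerant of crude initial parameters. The synthetic edge data and the final MTF step are illustrative; the paper's sub-pixel edge registration and the full ISO 12233 procedure are not reproduced.

```python
# Hedged sketch: Fermi-function ESF fit via Powell's method, then LSF/MTF.
import numpy as np
from scipy.optimize import minimize

def fermi_esf(x, a, b, c, d):
    return a / (1.0 + np.exp(-(x - b) / c)) + d

# synthetic noisy edge profile (illustrative, not satellite data)
x = np.linspace(-10, 10, 101)
edge = fermi_esf(x, 1.0, 0.3, 1.2, 0.05) + 0.01 * np.random.default_rng(0).normal(size=x.size)

def residual(p):
    return np.sum((fermi_esf(x, *p) - edge)**2)

p0 = [edge.max() - edge.min(), 0.0, 1.0, edge.min()]   # crude initial guess
fit = minimize(residual, p0, method="Powell").x

# the line-spread function is the derivative of the fitted ESF; its FFT
# magnitude (normalized at zero frequency) gives the MTF curve
lsf = np.gradient(fermi_esf(x, *fit), x)
mtf = np.abs(np.fft.rfft(lsf)); mtf /= mtf[0]
print(fit, mtf[:5])
```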
Application of particle swarm optimization in path planning of mobile robot
NASA Astrophysics Data System (ADS)
Wang, Yong; Cai, Feng; Wang, Ying
2017-08-01
In order to realize optimal path planning of a mobile robot in an unknown environment, a particle swarm optimization algorithm with path length as the fitness function is proposed. The location of the global optimal particle is determined by the minimum fitness value, and the robot moves along the points of the optimal particles to the target position. The process of moving to the target point is simulated in MATLAB R2014a. Compared with the standard particle swarm optimization algorithm, the simulation results show that this method can effectively avoid all obstacles and obtain the optimal path.
Ramos, Susie Medeiros Oliveira; Glavam, Adriana Pereira; Kubo, Tadeu Takao Almodovar; de Sá, Lidia Vasconcellos
2014-01-01
Objective To develop a study aiming at optimizing myocardial perfusion imaging. Materials and Methods Imaging of an anthropomorphic thorax phantom with a GE SPECT Ventri gamma camera, with varied activities and acquisition times, in order to evaluate the influence of these parameters on the quality of the reconstructed medical images. The 99mTc-sestamibi radiotracer was utilized, and then the images were clinically evaluated on the basis of data such as summed stress score, and on the technical image quality and perfusion. The software ImageJ was utilized in the data quantification. Results The results demonstrated that for the standard acquisition time utilized in the procedure (15 seconds per angle), the injected activity could be reduced by 33.34%. Additionally, even if the standard scan time is reduced by 53.34% (7 seconds per angle), the standard injected activity could still be reduced by 16.67%, without impairing the image quality and the diagnostic reliability. Conclusion The described method and respective results provide a basis for the development of a clinical trial of patients in an optimized protocol. PMID:25741088
NASA Astrophysics Data System (ADS)
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe
2010-03-31
GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms. Used as the line-search procedure with Newton and Quasi-Newton optimization and nonlinear equation solver methods. These are standard published 1-D line search algorithms such as are described in the book Nocedal and Wright, Numerical Optimization: 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever to any specific application. You cannot find more general mathematical software.
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
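To make the "regular versus severe patients" illustration concrete, the sketch below poses it as a small linear program with scipy. The benefit, time, and cost coefficients are invented for illustration and are not taken from the report; only the structure (maximize benefit subject to time and budget limits) follows the example described above.

```python
# A minimal sketch of the regular/severe patient allocation problem, with assumed coefficients.
from scipy.optimize import linprog

# Decision variables: x = [regular patients treated, severe patients treated]
benefit = [-2.0, -5.0]          # negated: linprog minimizes, we want to maximize benefit
A_ub = [[1.0, 3.0],             # clinician time per patient (hours, assumed)
        [100.0, 400.0]]         # cost per patient (currency units, assumed)
b_ub = [40.0, 6000.0]           # available time and budget (assumed)

res = linprog(c=benefit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)          # optimal patient mix and total health benefit
```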
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
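For readers unfamiliar with the BB-BC metaheuristic, the following minimal continuous sketch shows the basic cycle of a big bang (random scatter) followed by a big crunch (fitness-weighted center of mass). It omits the discrete member sizing, code checks, and the refinements proposed in the article; the population size and shrinkage schedule are illustrative assumptions, and `objective` stands in for any cost function.

```python
# A minimal big bang-big crunch sketch under stated assumptions (not the refined algorithm).
import numpy as np

def bb_bc(objective, lb, ub, pop=50, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    center = rng.uniform(lb, ub)                     # initial "center of mass"
    best_x, best_f = center, objective(center)
    for k in range(1, iters + 1):
        # Big bang: scatter candidates around the center, radius shrinking with k
        sigma = (ub - lb) / k
        cands = np.clip(center + sigma * rng.standard_normal((pop, lb.size)), lb, ub)
        f = np.array([objective(x) for x in cands])
        # Big crunch: fitness-weighted center of mass (smaller f is better)
        w = 1.0 / (f - f.min() + 1e-12)
        center = (w[:, None] * cands).sum(0) / w.sum()
        if f.min() < best_f:
            best_f, best_x = f.min(), cands[f.argmin()]
    return best_x, best_f
```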
An Optimized Method for the Measurement of Acetaldehyde by High-Performance Liquid Chromatography
Guan, Xiangying; Rubin, Emanuel; Anni, Helen
2011-01-01
Background Acetaldehyde is produced during ethanol metabolism predominantly in the liver by alcohol dehydrogenase, and rapidly eliminated by oxidation to acetate via aldehyde dehydrogenase. Assessment of circulating acetaldehyde levels in biological matrices is performed by headspace gas chromatography and reverse phase high-performance liquid chromatography (RP-HPLC). Methods We have developed an optimized method for the measurement of acetaldehyde by RP-HPLC in hepatoma cell culture medium, blood and plasma. After sample deproteinization, acetaldehyde was derivatized with 2,4-dinitrophenylhydrazine (DNPH). The reaction was optimized for pH, amount of derivatization reagent, time and temperature. Extraction methods of the acetaldehyde-hydrazone (AcH-DNP) stable derivative and product stability studies were carried out. Acetaldehyde was identified by its retention time in comparison to the AcH-DNP standard, using a new chromatography gradient program, and quantitated based on external reference standards and standard addition calibration curves in the presence and absence of ethanol. Results Derivatization of acetaldehyde was performed at pH 4.0 with an 80-fold molar excess of DNPH. The reaction was completed in 40 min at ambient temperature, and the product was stable for 2 days. A clear separation of AcH-DNP from DNPH was obtained with a new 11-min chromatography program. Acetaldehyde detection was linear up to 80 μM. The recovery of acetaldehyde was >88% in culture media, and >78% in plasma. We quantitatively determined the ethanol-derived acetaldehyde in hepatoma cells, rat blood and plasma with a detection limit around 3 μM. The accuracy of the method was <9% for intraday and <15% for interday measurements, in small volume (70 μl) plasma sampling. Conclusions An optimized method for the quantitative determination of acetaldehyde in biological systems was developed using derivatization with DNPH, followed by a short RP-HPLC separation of AcH-DNP. The method has an extended linear range, is reproducible and applicable to small volume sampling of culture media and biological fluids. PMID:21895715
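As a side note on the quantitation step, standard addition extrapolates the analyte concentration from the x-intercept of a response-versus-spike line. The sketch below shows the arithmetic with invented spike levels and peak areas; these numbers are assumptions for illustration, not data from the study.

```python
# Quantitation by standard addition: unknown = |x-intercept| of the calibration line.
import numpy as np

added = np.array([0.0, 10.0, 20.0, 40.0])   # spiked acetaldehyde, uM (assumed values)
area = np.array([1.8, 3.1, 4.3, 6.9])       # detector response (assumed values)
slope, intercept = np.polyfit(added, area, 1)
print(intercept / slope)                     # estimated sample concentration, uM
```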
Optimization of cDNA microarrays procedures using criteria that do not rely on external standards.
Bruland, Torunn; Anderssen, Endre; Doseth, Berit; Bergum, Hallgeir; Beisvag, Vidar; Laegreid, Astrid
2007-10-18
The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis, it may be a problem to evaluate whether changes made to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process, both in laboratory practices and in data processing, using criteria that do not rely on external standards. We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression termed "high contrasts" (rat cell lines AR42J and NRK52E) compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background correction methodologies were investigated. In our experiments a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction and weight based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards. Our optimizing method is highly applicable to both long oligo-probe microarrays which have become commonly used for well characterized organisms such as man, mouse and rat, as well as to cDNA microarrays which are still of importance for organisms with incomplete genome sequence information such as many bacteria, plants and fish.
A multilevel control system for the large space telescope. [numerical analysis/optimal control
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.
1975-01-01
A multilevel scheme was proposed for control of Large Space Telescope (LST) modeled by a three-axis-six-order nonlinear equation. Local controllers were used on the subsystem level to stabilize motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed on the subsystem level, and global control was again used to reduce (nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general tools for design and then used in the design of the LST Control System. The methods are entirely computerized, so that they can accommodate higher order LST models with both conceptual and numerical advantages over standard straightforward design techniques.
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
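A bare-bones PSO loop of the kind the paper applies is sketched below. Here `neg_log_like` stands in for the CMB likelihood evaluation, and the swarm size and inertia/acceleration coefficients are generic textbook values, not the settings used with the WMAP data.

```python
# A minimal particle swarm optimizer for minimizing a negative log-likelihood (illustrative).
import numpy as np

def pso(neg_log_like, lb, ub, n_particles=40, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, (n_particles, lb.size))          # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest, pbest_f = x.copy(), np.array([neg_log_like(p) for p in x])
    g = pbest[pbest_f.argmin()]                              # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([neg_log_like(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()]
    return g, pbest_f.min()
```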
Methods for Large-Scale Nonlinear Optimization.
1980-05-01
STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that where B is an m X m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library
Pan, Qing; Yao, Jialiang; Wang, Ruofan; Cao, Ping; Ning, Gangmin; Fang, Luping
2017-08-01
The vessels in the microcirculation keep adjusting their structure to meet the functional requirements of the different tissues. A previously developed theoretical model can reproduce the process of vascular structural adaptation to help the study of microcirculatory physiology. However, until now, such a model has lacked appropriate methods for setting its parameters, which has limited its further application. This study proposed an improved quantum-behaved particle swarm optimization (QPSO) algorithm for setting the parameter values in this model. The optimization was performed on a real mesenteric microvascular network of the rat. The results showed that the improved QPSO was superior to the standard particle swarm optimization, the standard QPSO and the previously reported Downhill algorithm. We conclude that the improved QPSO leads to a better agreement between mathematical simulation and animal experiment, rendering the model more reliable in future physiological studies.
Multidisciplinary optimization of a controlled space structure using 150 design variables
NASA Technical Reports Server (NTRS)
James, Benjamin B.
1993-01-01
A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.
Liu, Shu-Yu; Hu, Chang-Qin
2007-10-17
This study introduces the general method of quantitative nuclear magnetic resonance (qNMR) for the calibration of reference standards of macrolide antibiotics. Several qNMR experimental conditions were optimized including delay, which is an important parameter of quantification. Three kinds of macrolide antibiotics were used to validate the accuracy of the qNMR method by comparison with the results obtained by the high performance liquid chromatography (HPLC) method. The purities of five common reference standards of macrolide antibiotics were measured by the 1H qNMR method and the mass balance method, respectively. The analysis results of the two methods were compared. The qNMR is quick and simple to use. In a new medicine research and development process, qNMR provides a new and reliable method for purity analysis of the reference standard.
Adaptive particle swarm optimization for optimal orbital elements of binary stars
NASA Astrophysics Data System (ADS)
Attia, Abdel-Fattah
2016-12-01
The paper presents an adaptive particle swarm optimization (APSO) as an alternative method to determine the optimal orbital elements of the star η Bootis of MK type G0 IV. The proposed algorithm transforms the problem of finding periodic orbits into the problem of detecting global minimizers of a function, to obtain the best fit of the Keplerian and phase curves. The experimental results demonstrate that the proposed APSO approach is generally more accurate than the standard particle swarm optimization (PSO) and other published optimization algorithms, in terms of solution accuracy, convergence speed and algorithm reliability.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.
Application of Modern Fortran to Spacecraft Trajectory Design and Optimization
NASA Technical Reports Server (NTRS)
Williams, Jacob; Falck, Robert D.; Beekman, Izaak B.
2018-01-01
In this paper, applications of the modern Fortran programming language to the field of spacecraft trajectory optimization and design are examined. Modern object-oriented Fortran has many advantages for scientific programming, although many legacy Fortran aerospace codes have not been upgraded to use the newer standards (or have been rewritten in other languages perceived to be more modern). NASA's Copernicus spacecraft trajectory optimization program, originally a combination of Fortran 77 and Fortran 95, has attempted to keep up with modern standards and makes significant use of the new language features. Various algorithms and methods are presented from trajectory tools such as Copernicus, as well as modern Fortran open source libraries and other projects.
SEEK: A FORTRAN optimization program using a feasible directions gradient search
NASA Technical Reports Server (NTRS)
Savage, M.
1995-01-01
This report describes the use of computer program 'SEEK' which works in conjunction with two user-written subroutines and an input data file to perform an optimization procedure on a user's problem. The optimization method uses a modified feasible directions gradient technique. SEEK is written in ANSI standard Fortran 77, has an object size of about 46K bytes, and can be used on a personal computer running DOS. This report describes the use of the program and discusses the optimizing method. The program use is illustrated with four example problems: a bushing design, a helical coil spring design, a gear mesh design, and a two-parameter Weibull life-reliability curve fit.
An effective hybrid firefly algorithm with harmony search for global numerical optimization.
Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan
2013-01-01
A hybrid metaheuristic approach by hybridizing harmony search (HS) and firefly algorithm (FA), namely, HS/FA, is proposed to solve function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than HS and FA. Also, top fireflies scheme is introduced to reduce running time, and HS is utilized to mutate between fireflies when updating fireflies. The HS/FA method is verified by various benchmarks. From the experiments, the implementation of HS/FA is better than the standard FA and other eight optimization methods.
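The exploitation step that HS/FA borrows from the firefly algorithm can be summarized by the standard attractiveness-based move shown below. The constants are textbook defaults, `fitness` is any objective to be minimized, and the harmony-search mutation and top-fireflies scheme of HS/FA are not reproduced here.

```python
# One firefly-algorithm move step (standard formulation, illustrative constants).
import numpy as np

def firefly_step(X, fitness, alpha=0.2, beta0=1.0, gamma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    f = np.array([fitness(x) for x in X])
    X_new = X.copy()
    for i in range(len(X)):
        for j in range(len(X)):
            if f[j] < f[i]:                               # firefly j is brighter (minimization)
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)        # attractiveness decays with distance
                X_new[i] = X_new[i] + beta * (X[j] - X_new[i]) \
                           + alpha * (rng.random(X.shape[1]) - 0.5)
    return X_new
```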
Multidisciplinary optimization of controlled space structures with global sensitivity equations
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.
1991-01-01
A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.
An improved genetic algorithm for designing optimal temporal patterns of neural stimulation
NASA Astrophysics Data System (ADS)
Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.
2017-12-01
Objective. Electrical neuromodulation therapies typically apply constant frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended for designing optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
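The skeleton that such a GA modifies is sketched below for binary stimulation patterns. Here `score` stands in for the biophysical model evaluation (higher is better), and the tournament selection, one-point crossover, and bit-flip mutation shown are generic choices, not the specific modifications proposed in the paper.

```python
# A minimal GA over binary stimulation patterns (generic operators, illustrative settings).
import numpy as np

def ga_patterns(score, n_bits=200, pop=64, gens=100, p_mut=0.01, seed=0):
    rng = np.random.default_rng(seed)
    popu = rng.integers(0, 2, (pop, n_bits))
    for _ in range(gens):
        fit = np.array([score(ind) for ind in popu])
        # Tournament selection between random pairs
        idx = rng.integers(0, pop, (pop, 2))
        parents = popu[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # One-point crossover between consecutive parent pairs
        cut = rng.integers(1, n_bits, pop // 2)
        children = parents.copy()
        for i, c in enumerate(cut):
            a, b = 2 * i, 2 * i + 1
            children[a, c:], children[b, c:] = parents[b, c:], parents[a, c:]
        # Bit-flip mutation, with elitism for the current best individual
        flips = rng.random(children.shape) < p_mut
        children = np.where(flips, 1 - children, children)
        children[0] = popu[fit.argmax()]
        popu = children
    fit = np.array([score(ind) for ind in popu])
    return popu[fit.argmax()], fit.max()
```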
Weighted mining of massive collections of p-values by convex optimization.
Dobriban, Edgar
2018-06-01
Researchers in data-rich disciplines, such as computational genomics and observational cosmology, often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
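For orientation, the classical weighted Benjamini-Hochberg rule that weighted multiple-testing methods of this kind generalize can be written in a few lines. The sketch below is not the Princessp algorithm itself; the weights are assumed to be user-supplied priorities that average to one.

```python
# Weighted Benjamini-Hochberg step-up procedure (classical rule, not Princessp).
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    p = np.asarray(pvals, float)
    w = np.asarray(weights, float)
    w = w * len(w) / w.sum()                   # normalize weights to mean 1
    q = p / w                                  # weighted p-values
    order = np.argsort(q)
    thresh = alpha * np.arange(1, len(q) + 1) / len(q)
    passed = q[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    rejected = np.zeros(len(q), bool)
    rejected[order[:k]] = True                 # reject the k smallest weighted p-values
    return rejected
```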
NASA Astrophysics Data System (ADS)
Huang, Bo; Hsieh, Chen-Yu; Golnaraghi, Farid; Moallem, Mehrdad
2015-11-01
In this paper a vehicle suspension system with energy harvesting capability is developed, and an analytical methodology for the optimal design of the system is proposed. The optimization technique provides design guidelines for determining the stiffness and damping coefficients aimed at the optimal performance in terms of ride comfort and energy regeneration. The corresponding performance metrics are selected as root-mean-square (RMS) of sprung mass acceleration and expectation of generated power. The actual road roughness is considered as the stochastic excitation defined by ISO 8608:1995 standard road profiles and used in deriving the optimization method. An electronic circuit is proposed to provide variable damping in the real-time based on the optimization rule. A test-bed is utilized and the experiments under different driving conditions are conducted to verify the effectiveness of the proposed method. The test results suggest that the analytical approach is credible in determining the optimality of system performance.
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2013-07-01
Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wei, Ke; Fan, Xiaoguang; Zhan, Mei; Meng, Miao
2018-03-01
Billet optimization can greatly improve the forming quality of the transitional region in the isothermal local loading forming (ILLF) of large-scale Ti-alloy rib-web components. However, the final quality of the transitional region may be deteriorated by uncontrollable factors, such as the manufacturing tolerance of the preforming billet, fluctuation of the stroke length, and the friction factor. Thus, a dual-response surface method (RSM)-based robust optimization of the billet was proposed to address the uncontrollable factors in the transitional region of the ILLF. Given that die underfilling and the folding defect are two key factors that influence the forming quality of the transitional region, minimizing the mean and standard deviation of the die underfilling rate and avoiding the folding defect were defined as the objective function and constraint condition in the robust optimization. Then, a cross array design was constructed, and a dual-RSM model was established for the mean and standard deviation of the die underfilling rate by considering the size parameters of the billet and the uncontrollable factors. Subsequently, an optimum solution was derived to achieve the robust optimization of the billet. A case study on robust optimization was conducted. Good results were attained for improving the die filling and avoiding the folding defect, suggesting that the robust optimization of the billet in the transitional region of the ILLF is efficient and reliable.
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient based method and will converge to a solution nearby an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
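A minimal monotonic basin hopping loop is sketched below, with scipy's general-purpose local optimizer standing in for the HDDP inner solve. The perturbation scale and hop budget are illustrative assumptions; "monotonic" refers to accepting a hop only when it improves on the incumbent.

```python
# Monotonic basin hopping around a local optimizer (illustrative stand-in for the inner solve).
import numpy as np
from scipy.optimize import minimize

def mbh(objective, x0, n_hops=100, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    best = minimize(objective, x0)                 # local solve from the initial guess
    for _ in range(n_hops):
        trial_x0 = best.x + scale * rng.standard_normal(best.x.size)
        trial = minimize(objective, trial_x0)      # re-converge from the perturbed point
        if trial.fun < best.fun:                   # monotonic: accept only improvements
            best = trial
    return best
```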
Evolutionary optimization methods for accelerator design
NASA Astrophysics Data System (ADS)
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite growing efficiency of the optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed optimization methods family. They possess many attractive features such as: ease of the implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe the GATool, evolutionary algorithm and the software package, used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on the problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EA with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EA. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for the constrained optimization with EA. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of the normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential.
The developed optimization scenarios and tools can be used to approach similar problems.
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.
2011-01-01
A stochastic design optimization methodology (SDO) has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, like a failure mode, becomes a function of reliability. The primitive variables like thermomechanical loads, material properties, and failure theories, as well as variables like depth of beam or thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
NASA Astrophysics Data System (ADS)
Yang, Jia Sheng
2018-06-01
In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce the control consumption as well as protect the actuator while satisfying the requirements on system performance. First, we introduce a dynamic model of the offshore platform with low-order main modes based on a mode reduction method in numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model with standard optimization algorithms, we use a relaxation method with matrix operations to transform it into a convex optimization model. Thus, it can be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
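Once a problem has been relaxed to a convex program, it can be handed to an off-the-shelf solver. The toy sketch below minimizes control energy for a discrete double integrator with an input bound using cvxpy, purely to illustrate that workflow; the dynamics, horizon, bound, and weighting are assumptions and the offshore-platform model and its matrix-inequality constraints are not reproduced.

```python
# Toy convex input-energy minimization for a discrete double integrator (illustrative only).
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed discrete-time dynamics
B = np.array([[0.005], [0.1]])
N, x0, u_max = 50, np.array([1.0, 0.0]), 0.5

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
constraints = [x[:, 0] == x0, cp.abs(u) <= u_max]
constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] for k in range(N)]
prob = cp.Problem(cp.Minimize(cp.sum_squares(u) + 100 * cp.sum_squares(x[:, N])), constraints)
prob.solve()
print(prob.value, np.round(x.value[:, N], 4))   # optimal cost and terminal state
```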
Topology optimization in acoustics and elasto-acoustics via a level-set method
NASA Astrophysics Data System (ADS)
Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.
2018-04-01
Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.
An optimized method for the measurement of acetaldehyde by high-performance liquid chromatography.
Guan, Xiangying; Rubin, Emanuel; Anni, Helen
2012-03-01
Acetaldehyde is produced during ethanol metabolism predominantly in the liver by alcohol dehydrogenase and rapidly eliminated by oxidation to acetate via aldehyde dehydrogenase. Assessment of circulating acetaldehyde levels in biological matrices is performed by headspace gas chromatography and reverse phase high-performance liquid chromatography (RP-HPLC). We have developed an optimized method for the measurement of acetaldehyde by RP-HPLC in hepatoma cell culture medium, blood, and plasma. After sample deproteinization, acetaldehyde was derivatized with 2,4-dinitrophenylhydrazine (DNPH). The reaction was optimized for pH, amount of derivatization reagent, time, and temperature. Extraction methods of the acetaldehyde-hydrazone (AcH-DNP) stable derivative and product stability studies were carried out. Acetaldehyde was identified by its retention time in comparison with AcH-DNP standard, using a new chromatography gradient program, and quantitated based on external reference standards and standard addition calibration curves in the presence and absence of ethanol. Derivatization of acetaldehyde was performed at pH 4.0 with an 80-fold molar excess of DNPH. The reaction was completed in 40 minutes at ambient temperature, and the product was stable for 2 days. A clear separation of AcH-DNP from DNPH was obtained with a new 11-minute chromatography program. Acetaldehyde detection was linear up to 80 μM. The recovery of acetaldehyde was >88% in culture media and >78% in plasma. We quantitatively determined the ethanol-derived acetaldehyde in hepatoma cells, rat blood and plasma with a detection limit around 3 μM. The accuracy of the method was <9% for intraday and <15% for interday measurements, in small volume (70 μl) plasma sampling. An optimized method for the quantitative determination of acetaldehyde in biological systems was developed using derivatization with DNPH, followed by a short RP-HPLC separation of AcH-DNP. The method has an extended linear range, is reproducible and applicable to small-volume sampling of culture media and biological fluids. Copyright © 2011 by the Research Society on Alcoholism.
Vrdoljak, Ivica; Panjkota Krbavčić, Ines; Bituh, Martina; Vrdoljak, Tea; Dujmić, Zoran
2015-05-01
To analyze how different thermal processing methods affect the protein, calcium, and phosphorus content of hospital food served to dialysis patients and to generate recommendations for preparing menus that optimize nutritional content while minimizing the risk of hyperphosphatemia. Standard Official Methods of Analysis (AOAC) methods were used to determine dry matter, protein, calcium, and phosphorus content in potatoes, fresh and frozen carrots, frozen green beans, chicken, beef and pork, frozen hake, pasta, and rice. These levels were determined both before and after boiling in water, steaming, stewing in oil or water, or roasting. Most of the thermal processing methods did not significantly reduce protein content. Boiling increased calcium content in all foodstuffs because of calcium absorption from the hard water. In contrast, stewing in oil containing a small amount of water decreased the calcium content of vegetables by 8% to 35% and of chicken meat by 12% to 40% on a dry weight basis. Some types of thermal processing significantly reduced the phosphorus content of the various foodstuffs, with levels decreasing by 27% to 43% for fresh and frozen vegetables, 10% to 49% for meat, 7% for pasta, and 22.8% for rice on a dry weight basis. On the basis of these results, we modified the thermal processing methods used to prepare a standard hospital menu for dialysis patients. Foodstuffs prepared according to the optimized menu were similar in protein content, higher in calcium, and significantly lower in phosphorus than foodstuffs prepared according to the standard menu. Boiling in water and stewing in oil containing some water significantly reduced phosphorus content without affecting protein content. Soaking meat in cold water for 1 h before thermal processing reduced phosphorus content even more. These results may help optimize the design of menus for dialysis patients. Copyright © 2015 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is one of the iterative techniques prominently used in solving unconstrained optimization problems due to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under the exact and inexact line searches in given conditions. Theoretically, proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. On the other hand, the NRM1 method performs better under the inexact line search compared with the exact line search.
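The framework such hybrid CG methods plug into is a short loop. The sketch below uses an Armijo backtracking (inexact) line search and a common hybrid beta (a clipped combination of the Hestenes-Stiefel and Dai-Yuan formulas) for illustration; it is not the NRM1 formula itself.

```python
# A minimal nonlinear conjugate gradient loop with an inexact line search (illustrative beta).
import numpy as np

def nonlinear_cg(f, grad, x0, iters=500, tol=1e-6):
    x, g = np.asarray(x0, float), grad(x0)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                               # safeguard: fall back to steepest descent
            d = -g
        # Armijo backtracking (inexact line search)
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        beta_hs = (g_new @ y) / (d @ y + 1e-16)      # Hestenes-Stiefel
        beta_dy = (g_new @ g_new) / (d @ y + 1e-16)  # Dai-Yuan
        beta = max(0.0, min(beta_hs, beta_dy))       # simple hybrid choice
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```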
Optimal Collision Avoidance Trajectories for Unmanned/Remotely Piloted Aircraft
2014-12-26
projected operational tempos (OPTEMPOs)” [15]. The Office of the Secretary of Defense (OSD) Unmanned Systems Roadmap [15] goes on to say that the airspace...methods [63]. In an indirect method, the researcher derives the first-order necessary conditions for optimality “via the calculus of variations and...region around the ownship using a variation of a superquadric. From [116], the standard equation for a superellipsoid appears as: $\left(\frac{x}{a_1}\right)^{2/\epsilon_2} + \cdots$
Standardized Method for High-throughput Sterilization of Arabidopsis Seeds.
Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena
2017-10-17
Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and inexpensive seed sterilization for a large number of Arabidopsis lines.
Optimal control penalty finite elements - Applications to integrodifferential equations
NASA Astrophysics Data System (ADS)
Chung, T. J.
The application of the optimal-control/penalty finite-element method to the solution of integrodifferential equations in radiative-heat-transfer problems (Chung et al.; Chung and Kim, 1982) is discussed and illustrated. The non-self-adjointness of the convective terms in the governing equations is treated by utilizing optimal-control cost functions and employing penalty functions to constrain auxiliary equations which permit the reduction of second-order derivatives to first order. The OCPFE method is applied to combined-mode heat transfer by conduction, convection, and radiation, both without and with scattering and viscous dissipation; the results are presented graphically and compared to those obtained by other methods. The OCPFE method is shown to give good results in cases where the standard Galerkin FE method fails, and to facilitate the investigation of scattering and dissipation effects.
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest descent like strategy with proper orthogonal decomposition (POD) is used to search robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust region based fidelity management is designed to reduce the computational cost caused by the expensive model. When the solutions fall into the trust region, the analytical model is used to reduce the computational cost. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It will be shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
Marschner, Karel; Musil, Stanislav; Dědina, Jiří
2016-04-05
An experimental setup consisting of a flow injection hydride generator coupled to an atomic fluorescence spectrometer was optimized in order to generate arsanes from tri- and pentavalent inorganic arsenic species (iAs(III), iAs(V)), monomethylarsonic acid (MAs(V)), and dimethylarsinic acid (DMAs(V)) with 100% efficiency with the use of only HCl and NaBH4 as the reagents. The optimal concentration of HCl was 2 mol L(-1); the optimal concentration of NaBH4 was 2.5% (m/v), and the volume of the reaction coil was 8.9 mL. To prevent excessive signal noise due to fluctuations of hydride supply to an atomizer, a new design of a gas-liquid separator was implemented. The optimized experimental setup was subsequently interfaced to HPLC and employed for speciation analysis of arsenic. Two chromatography columns were tested: (i) ion-pair chromatography and (ii) ion exchange chromatography. The latter offered much better results for human urine samples without a need for sample dilution. Due to the equal hydride generation efficiency (and thus the sensitivities) of all As species, a single species standardization by DMAs(V) standard was feasible. The limits of detection for iAs(III), iAs(V), MAs(V), and DMAs(V) were 40, 97, 57, and 55 pg mL(-1), respectively. Accuracy of the method was tested by the analysis of the standard reference material (human urine NIST 2669), and the method was also verified by the comparative analyses of human urine samples collected from five individuals with an independent reference method.
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
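One standard way to allocate bits optimally across independent slices is Lagrangian rate-distortion optimization, conceptually similar in spirit to the optimal procedure described above though not the paper's exact algorithm. The sketch below assumes each slice exposes a set of operational rate-distortion points; for a given multiplier each slice independently picks the point minimizing D + lambda * R, and lambda is bisected to meet the total rate budget.

```python
# Lagrangian bit allocation across slices via bisection on the multiplier (illustrative).
import numpy as np

def allocate(rates, dists, budget, lo=1e-6, hi=1e6, iters=60):
    # rates[i][j], dists[i][j]: rate and distortion of operating point j for slice i
    def pick(lmbda):
        idx = [int(np.argmin(np.asarray(d) + lmbda * np.asarray(r)))
               for r, d in zip(rates, dists)]
        total = sum(r[i] for r, i in zip(rates, idx))
        return idx, total
    for _ in range(iters):                     # geometric bisection on lambda > 0
        mid = np.sqrt(lo * hi)
        idx, total = pick(mid)
        if total > budget:
            lo = mid                           # too many bits spent: penalize rate more
        else:
            hi = mid
    return pick(hi)[0]                         # per-slice operating point indices
```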
Sert, Şenol
2013-07-01
A comparison of methods for the determination (without sample pre-concentration) of uranium in ore by inductively coupled plasma optical emission spectrometry (ICP-OES) has been performed. The experiments were conducted using three procedures: matrix matching, plasma optimization, and internal standardization for three emission lines of uranium. Three wavelengths of Sm were tested as internal standards for the internal standardization method. The robust conditions were evaluated using applied radiofrequency power, nebulizer argon gas flow rate, and sample uptake flow rate by considering the intensity ratio of the Mg(II) 280.270 nm and Mg(I) 285.213 nm lines. Analytical characterization of the method was assessed by limit of detection and relative standard deviation values. The certified reference soil sample IAEA S-8 was analyzed, and uranium determination at 367.007 nm with internal standardization using Sm at 359.260 nm was shown to improve accuracy compared with the other methods. The developed method was used for real uranium ore sample analysis.
Using multi-criteria analysis of simulation models to understand complex biological systems
Maureen C. Kennedy; E. David Ford
2011-01-01
Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.
2012-03-01
Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, and crossing with other vessels. In this study, we developed a new vessel refinement method utilizing curved planar reformation (CPR) technique combined with optimal path finding method (MHES-CROP). The MHES segmented vessels straightened in the CPR volume was refined using adaptive gray level thresholding where the local threshold was obtained from least-square estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9+/-10.2% using the MHES method to 9.9+/-7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
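The optimal path finding step rests on Dijkstra's algorithm. A generic implementation over an explicit weighted graph is sketched below; the construction of edge costs from the gray levels of the straightened CPR volume is not shown, and the graph representation and reachability of the target are assumptions of the sketch.

```python
# Generic Dijkstra shortest path on a weighted graph (building block for optimal path finding).
import heapq

def dijkstra(adj, source, target):
    # adj: dict mapping node -> list of (neighbor, nonnegative edge cost); target assumed reachable
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                            # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:                       # backtrack from target to source
        path.append(node)
        node = prev[node]
    return [source] + path[::-1]
```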
Soydaş, Emine; Bozkaya, Uğur
2015-04-14
An assessment of orbital-optimized MP2.5 (OMP2.5) [Bozkaya, U.; Sherrill, C. D. J. Chem. Phys. 2014, 141, 204105] for thermochemistry and kinetics is presented. The OMP2.5 method is applied to closed- and open-shell reaction energies, barrier heights, and aromatic bond dissociation energies. The performance of OMP2.5 is compared with that of the MP2, OMP2, MP2.5, MP3, OMP3, CCSD, and CCSD(T) methods. For most of the test sets, the OMP2.5 method performs better than MP2.5 and CCSD, and provides accurate results. For barrier heights of radical reactions and aromatic bond dissociation energies, the OMP2.5-MP2.5, OMP2-MP2, and OMP3-MP3 differences become obvious. In particular, for aromatic bond dissociation energies, standard perturbation theory (MP) approaches fail dramatically, providing mean absolute errors (MAEs) of 22.5 (MP2), 17.7 (MP2.5), and 12.8 (MP3) kcal mol(-1), while the MAE values of the orbital-optimized counterparts are 2.7, 2.4, and 2.4 kcal mol(-1), respectively. Hence, there are 5- to 8-fold reductions in errors when optimized orbitals are employed. Our results demonstrate that standard MP approaches fail dramatically when the reference wave function suffers from the spin-contamination problem. On the other hand, the OMP2.5 method can reduce spin contamination in the unrestricted Hartree-Fock (UHF) initial guess orbitals. For an overall evaluation, we conclude that the OMP2.5 method is very helpful not only for challenging open-shell systems and transition states but also for closed-shell molecules. Hence, one may prefer OMP2.5 over MP2.5 and CCSD as an O(N^6) method, where N is the number of basis functions, for thermochemistry and kinetics. The cost of the OMP2.5 method is comparable with that of CCSD for energy computations. However, for analytic gradient computations, the OMP2.5 method is only half as expensive as CCSD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1999-02-10
Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and these results suggest that EPSAs more robustly converge to near-optimal solutions than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the Autodock docking software. We compare the performance of these methods on a suite of docking test problems. Our results confirm that EPSAs and EPs have comparable performance, and they suggest that EPSAs may be more robust on larger, more complex problems.
Optimum SNR data compression in hardware using an Eigencoil array.
King, Scott B; Varosi, Steve M; Duensing, G Randy
2010-05-01
With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
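The following sketch illustrates the underlying idea of SNR-preserving channel combination: eigen-decomposing a measured noise covariance, transforming ("whitening") the channels, and then applying a plain sum-of-squares. It is only a software analogue using assumed synthetic data; the Eigencoil itself performs the combination in hardware with a fixed RF combiner.

```python
import numpy as np

def eigen_combine(channel_images, noise_cov):
    """Whiten coil channels with the eigen-decomposition of the noise
    covariance so that a plain sum-of-squares of the transformed ("eigen")
    channels approaches the optimal-SNR combination."""
    n_ch = channel_images.shape[0]
    w, v = np.linalg.eigh(noise_cov)           # noise_cov = v @ diag(w) @ v^H
    whiten = v @ np.diag(1.0 / np.sqrt(w)) @ v.conj().T
    flat = channel_images.reshape(n_ch, -1)
    eigen_channels = whiten @ flat             # decorrelated, unit-noise channels
    sos = np.sqrt(np.sum(np.abs(eigen_channels) ** 2, axis=0))
    return sos.reshape(channel_images.shape[1:])

# Hypothetical 8-channel data: (channels, ny, nx) complex images plus a noise
# covariance estimated from noise-only samples.
rng = np.random.default_rng(0)
imgs = rng.standard_normal((8, 64, 64)) + 1j * rng.standard_normal((8, 64, 64))
noise = rng.standard_normal((8, 4096)) + 1j * rng.standard_normal((8, 4096))
cov = noise @ noise.conj().T / noise.shape[1]
combined = eigen_combine(imgs, cov)
```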
Kurihara, Miki; Ikeda, Koji; Izawa, Yoshinori; Deguchi, Yoshihiro; Tarui, Hitoshi
2003-10-20
A laser-induced breakdown spectroscopy (LIBS) technique has been applied for detection of unburned carbon in fly ash, and an automated LIBS unit has been developed and applied in a 1000-MW pulverized-coal-fired power plant for real-time measurement, specifically of unburned carbon in fly ash. Good agreement was found between measurement results from the LIBS method and those from the conventional method (Japanese Industrial Standard 8815), with a standard deviation of 0.27%. This result confirms that the measurement of unburned carbon in fly ash by use of LIBS is sufficiently accurate for boiler control. Measurements taken by this apparatus were also integrated into a boiler-control system with the objective of achieving optimal and stable combustion. By control of the rotating speed of a mill rotary separator relative to measured unburned-carbon content, it has been demonstrated that boiler control is possible in an optimized manner by use of the value of the unburned-carbon content of fly ash.
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob A.
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, most notably with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software [1]. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP) [2, 3], is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation by augmenting the HDDP algorithm for a wider search of the solution space.
Deep Learning Methods for Improved Decoding of Linear Codes
NASA Astrophysics Data System (ADS)
Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair
2018-02-01
The problem of low-complexity, close to optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
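As background for the decoder being improved, a minimal, unweighted min-sum decoder is sketched below for an arbitrary parity-check matrix; the paper's contribution is to attach learnable weights to these messages, which is not reproduced here. The matrix and LLR values are hypothetical.

```python
import numpy as np

def min_sum_decode(H, llr, iterations=10):
    """Plain (unweighted) min-sum decoding for a binary code with parity-check
    matrix H and channel log-likelihood ratios llr (positive favours bit 0)."""
    m, n = H.shape
    V = H * llr                      # variable-to-check messages, init with LLRs
    C = np.zeros_like(V)             # check-to-variable messages
    post = llr.astype(float).copy()
    for _ in range(iterations):
        for i in range(m):           # check-node update
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = idx[idx != j]
                C[i, j] = np.prod(np.sign(V[i, others])) * np.min(np.abs(V[i, others]))
        post = llr + C.sum(axis=0)   # posterior LLRs
        for j in range(n):           # variable-node update (extrinsic only)
            for i in np.flatnonzero(H[:, j]):
                V[i, j] = post[j] - C[i, j]
    return (post < 0).astype(int)

# Toy example with a small (7,4) Hamming-style parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.8, 3.0, 1.2, 0.7, 2.5])   # hypothetical channel LLRs
decoded = min_sum_decode(H, llr)
```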
An Effective Hybrid Firefly Algorithm with Harmony Search for Global Numerical Optimization
Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan
2013-01-01
A hybrid metaheuristic approach that hybridizes harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than HS and FA. Also, a top-fireflies scheme is introduced to reduce running time, and HS is utilized to mutate fireflies during the update step. The HS/FA method is verified on various benchmarks. The experiments show that HS/FA performs better than the standard FA and eight other optimization methods. PMID:24348137
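A minimal sketch of the firefly attraction move and a "top fireflies" loop is given below to clarify the FA component; it is a generic FA illustration on the sphere function, not the HS/FA hybrid itself, and all parameter values are assumptions.

```python
import numpy as np

def firefly_move(x_i, x_j, alpha=0.2, beta0=1.0, gamma=1.0, rng=None):
    """One firefly attraction step: firefly i moves toward brighter firefly j
    with attractiveness decaying with distance, plus a random perturbation."""
    rng = rng or np.random.default_rng()
    r2 = np.sum((x_i - x_j) ** 2)
    beta = beta0 * np.exp(-gamma * r2)
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)

# Minimal loop on the sphere function f(x) = sum(x^2).
rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(20, 2))
f = lambda x: np.sum(x ** 2)
for _ in range(100):
    fitness = np.array([f(x) for x in pop])
    order = np.argsort(fitness)          # brighter = lower objective value
    for a in range(len(pop)):
        for b in order[:5]:              # "top fireflies" idea: only the best attract
            if fitness[b] < fitness[a]:
                pop[a] = firefly_move(pop[a], pop[b], rng=rng)
best = pop[np.argmin([f(x) for x in pop])]
```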
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong
2014-10-01
In order to detect the oil yield of oil shale in situ with portable near-infrared spectroscopy, modeling and analysis methods for in-situ detection were investigated using 66 rock core samples from the No. 2 well drilling of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired. Using 4 different modeling-data optimization methods, namely principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD, together with 2 modeling methods, partial least squares (PLS) and back-propagation artificial neural network (BPANN), and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the modeling-data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or K-M function is the proper spectrum format of the modeling database for the two modeling methods. With the two modeling methods and the four data optimization methods, the model precisions obtained from the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of the database using the K-M function spectrum data format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of the database using any of the 3 spectrum data formats. In addition to the case of reflectance spectra with the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision, with a correlation coefficient (Rp) of 0.92 and a standard error of prediction (SEP) of 0.69%.
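To make the modeling step concrete, the sketch below fits a PLS model to synthetic reflectance spectra and reports Rp and SEP as defined above; the data, component count, and split are assumptions, and the UVE/PCA-MD preprocessing and BPANN model are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: rows are reflectance spectra, y is oil yield (%).
rng = np.random.default_rng(0)
X = rng.random((66, 256))                                    # 66 cores x 256 wavelengths
y = X[:, 40] * 8 + X[:, 120] * 4 + rng.normal(0, 0.3, 66)    # synthetic yield

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=6)
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rp = np.corrcoef(y_te, y_hat)[0, 1]              # correlation coefficient (Rp)
sep = np.sqrt(np.mean((y_te - y_hat) ** 2))      # standard error of prediction (SEP)
print(f"Rp = {rp:.2f}, SEP = {sep:.2f}")
```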
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems
Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.
2016-01-01
Purpose The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques making it possible to design patient‐specific sequences online. Theory and Methods The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences in the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383
Bare-Bones Teaching-Learning-Based Optimization
Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye
2014-01-01
Teaching-learning-based optimization (TLBO), which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, which is a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of the approach, 20 benchmark functions and two real-world problems are utilized. The conducted experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms. PMID:25013844
Yang, Xi; Zhou, Tao; Yu, Lei; Tan, Wenwen; Zhou, Rui; Hu, Yonggang
2015-03-01
A competitive chemiluminescence enzyme immunoassay (CLEIA) method for porcine β-defensin-2 (pBD-2) detection in transgenic mice was established. Several factors that affect detection, including the luminol, p-iodophenol and hydrogen peroxide concentrations, as well as pH, were studied and optimized. The linear range of the proposed method for pBD-2 detection under optimal conditions was 0.05-80 ng/mL, with a correlation coefficient of 0.9960. Eleven determinations of a 30 ng/mL pBD-2 standard sample were performed, and reproducible results were obtained with a relative standard deviation of 3.94%. The limit of detection of the method for pBD-2 was 3.5 pg/mL (3σ). The proposed method was applied to determine pBD-2 expression levels in the tissues of pBD-2 transgenic mice and was compared with LC-MS/MS and quantitative real-time reverse-transcriptase polymerase chain reaction. The results suggest that the CLEIA can be used as a valuable method to detect and quantify pBD-2. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick
2017-12-01
In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency-range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms) an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.
Potential of spark ignition engine for increased fuel efficiency. Final report, January-October 1978
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, T. Jr.; Cole, D.; Bolt, J.A.
The objective of this study was to assess the potential of the spark ignition engine to deliver maximum fuel efficiency at 1981 Statutory Emission Standards in the 1983-1984 timeframe and beyond that to 1990. Based on the results of an extensive literature search, manufacturers' known product plans, and fuel economies of 1978 engines as a baseline, proposed methods of attaining fuel economy while complying with the future standards were ascertained. Methods of engine control optimization and engine design optimization, as well as methods of varying engine parameters, were considered. The potential improvements in fuel economy associated with these methods, singly and in combination, were determined and are expressed as percentage changes relative to the fuel economy of the baseline engines. A summary of the principal conclusions is presented, followed by a description of the engine baseline reference, analysis and projection of fuel economy improvements, and a preliminary assessment of the impact of fuel economy benefits on manufacturing cost.
Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration
NASA Astrophysics Data System (ADS)
Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut
2017-04-01
Numerical modelling requires calibration to predict future stages. A standard method for calibration is inverse calibration, where multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm which combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance was evaluated using observed and simulated soil moisture at two soil layers in Southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors; however, the VGM parameters were similar to those from previous studies. Both methods are equally computationally efficient, and a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and is a potential tool for calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model, with more soil layers and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
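The two performance metrics quoted above can be computed as in the short sketch below (the observed and simulated soil-moisture values are hypothetical).

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical soil-moisture observations and model output
obs = [0.21, 0.24, 0.26, 0.23, 0.20, 0.19]
sim = [0.22, 0.25, 0.25, 0.22, 0.21, 0.18]
print(rmse(obs, sim), nse(obs, sim))
```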
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground in engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and aims to harness their best features and simplify the process so that it is applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes, due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
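A minimal sketch of the recommended Monte Carlo step on a load-resistance model follows; the distributions and parameter values are illustrative assumptions, not results from the paper.

```python
import numpy as np

def failure_probability(n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of P(load > resistance) for a load-resistance model.
    Distributions and parameters below are illustrative assumptions only."""
    rng = np.random.default_rng(seed)
    load = rng.normal(loc=300.0, scale=40.0, size=n_samples)          # e.g. stress [MPa]
    resistance = rng.lognormal(mean=np.log(450.0), sigma=0.10, size=n_samples)
    failures = np.count_nonzero(load > resistance)
    p = failures / n_samples
    se = np.sqrt(p * (1 - p) / n_samples)      # standard error of the estimate
    return p, se

p_fail, std_err = failure_probability()
print(f"P(failure) ~ {p_fail:.2e} +/- {std_err:.1e}")
```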
ERIC Educational Resources Information Center
Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…
Preparation method and quality control of multigamma volume sources with different matrices.
Listkowska, A; Lech, E; Saganowski, P; Tymiński, Z; Dziel, T; Cacko, D; Ziemek, T; Kołakowska, E; Broda, R
2018-04-01
The aim of this work was to develop new radioactive standard sources based on epoxy resins. The optimal proportions of the components and the homogeneity of the matrices were determined. The activity of multigamma sources prepared in Marinelli beakers was determined with reference to the National Standard of Radionuclides Activity in Poland. The differences between the radionuclide activity values determined using a calibrated gamma spectrometer and the activities of the standard solutions used are, in most cases, significantly lower than the measurement uncertainty limits. A source production method and a quality control procedure have been developed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Germovsek, Eva; Barker, Charlotte I S; Sharland, Mike; Standing, Joseph F
2018-04-19
Pharmacokinetic/pharmacodynamic (PKPD) modeling is important in the design and conduct of clinical pharmacology research in children. During drug development, PKPD modeling and simulation should underpin rational trial design and facilitate extrapolation to investigate efficacy and safety. The application of PKPD modeling to optimize dosing recommendations and therapeutic drug monitoring is also increasing, and PKPD model-based dose individualization will become a core feature of personalized medicine. Following extensive progress on pediatric PK modeling, a greater emphasis now needs to be placed on PD modeling to understand age-related changes in drug effects. This paper discusses the principles of PKPD modeling in the context of pediatric drug development, summarizing how important PK parameters, such as clearance (CL), are scaled with size and age, and highlights a standardized method for CL scaling in children. One standard scaling method would facilitate comparison of PK parameters across multiple studies, thus increasing the utility of existing PK models and facilitating optimal design of new studies.
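One widely used form of such standardized clearance scaling combines allometric weight scaling with a sigmoidal maturation function; the sketch below is a generic illustration with assumed parameter values, not necessarily the specific method recommended in the paper.

```python
def scaled_clearance(cl_adult, weight_kg, postmenstrual_age_wk,
                     tm50=47.7, hill=3.4):
    """Allometric weight scaling of clearance with a sigmoidal maturation term,
    a commonly used standardization (parameter values here are illustrative).

    CL = CL_adult * (WT / 70)^0.75 * PMA^hill / (PMA^hill + TM50^hill)
    """
    size = (weight_kg / 70.0) ** 0.75
    maturation = postmenstrual_age_wk ** hill / (
        postmenstrual_age_wk ** hill + tm50 ** hill)
    return cl_adult * size * maturation

# Example: a drug with adult clearance 10 L/h, scaled to a 4 kg term neonate
# (postmenstrual age ~42 weeks).
print(scaled_clearance(10.0, 4.0, 42.0))
```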
Evaluation of subset matching methods and forms of covariate balance.
de Los Angeles Resa, María; Zubizarreta, José R
2016-11-30
This paper conducts a Monte Carlo simulation study to evaluate the performance of multivariate matching methods that select a subset of treatment and control observations. The matching methods studied are the widely used nearest neighbor matching with propensity score calipers and the more recently proposed methods, optimal matching of an optimally chosen subset and optimal cardinality matching. The main findings are: (i) covariate balance, as measured by differences in means, variance ratios, Kolmogorov-Smirnov distances, and cross-match test statistics, is better with cardinality matching because by construction it satisfies balance requirements; (ii) for given levels of covariate balance, the matched samples are larger with cardinality matching than with the other methods; (iii) in terms of covariate distances, optimal subset matching performs best; (iv) treatment effect estimates from cardinality matching have lower root-mean-square errors, provided strong requirements for balance, specifically, fine balance, or strength-k balance, plus close mean balance. In standard practice, a matched sample is considered to be balanced if the absolute differences in means of the covariates across treatment groups are smaller than 0.1 standard deviations. However, the simulation results suggest that stronger forms of balance should be pursued in order to remove systematic biases due to observed covariates when a difference in means treatment effect estimator is used. In particular, if the true outcome model is additive, then marginal distributions should be balanced, and if the true outcome model is additive with interactions, then low-dimensional joints should be balanced. Copyright © 2016 John Wiley & Sons, Ltd.
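The balance criterion mentioned above (absolute standardized difference in means below 0.1) can be checked per covariate as in the following sketch, using assumed data.

```python
import numpy as np

def std_mean_difference(x_treated, x_control):
    """Absolute standardized difference in means for one covariate,
    using the pooled standard deviation."""
    x_t, x_c = np.asarray(x_treated, float), np.asarray(x_control, float)
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2.0)
    return abs(x_t.mean() - x_c.mean()) / pooled_sd

# Conventional check: flag covariates with SMD >= 0.1 as imbalanced.
rng = np.random.default_rng(0)
treated = rng.normal(0.2, 1.0, 200)
control = rng.normal(0.0, 1.0, 400)
smd = std_mean_difference(treated, control)
print(smd, "imbalanced" if smd >= 0.1 else "balanced")
```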
Optimizing 4DCBCT projection allocation to respiratory bins.
O'Brien, Ricky T; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J
2014-10-07
4D cone beam computed tomography (4DCBCT) is an emerging image guidance strategy used in radiotherapy where projections acquired during a scan are sorted into respiratory bins based on the respiratory phase or displacement. 4DCBCT reduces the motion blur caused by respiratory motion but increases streaking artefacts due to projection under-sampling as a result of the irregular nature of patient breathing and the binning algorithms used. For displacement binning the streak artefacts are so severe that displacement binning is rarely used clinically. The purpose of this study is to investigate if sharing projections between respiratory bins and adjusting the location of respiratory bins in an optimal manner can reduce or eliminate streak artefacts in 4DCBCT images. We introduce a mathematical optimization framework and a heuristic solution method, which we will call the optimized projection allocation algorithm, to determine where to position the respiratory bins and which projections to source from neighbouring respiratory bins. Five 4DCBCT datasets from three patients were used to reconstruct 4DCBCT images. Projections were sorted into respiratory bins using equispaced, equal density and optimized projection allocation. The standard deviation of the angular separation between projections was used to assess streaking and the consistency of the segmented volume of a fiducial gold marker was used to assess motion blur. The standard deviation of the angular separation between projections using displacement binning and optimized projection allocation was 30%-50% smaller than conventional phase based binning and 59%-76% smaller than conventional displacement binning indicating more uniformly spaced projections and fewer streaking artefacts. The standard deviation in the marker volume was 20%-90% smaller when using optimized projection allocation than using conventional phase based binning suggesting more uniform marker segmentation and less motion blur. Images reconstructed using displacement binning and the optimized projection allocation algorithm were clearer, contained visibly fewer streak artefacts and produced more consistent marker segmentation than those reconstructed with either equispaced or equal-density binning. The optimized projection allocation algorithm significantly improves image quality in 4DCBCT images and provides, for the first time, a method to consistently generate high quality displacement binned 4DCBCT images in clinical applications.
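The streaking metric used in the study, the standard deviation of the angular separation between projections in a bin, can be computed as in the sketch below; the projection angles shown are hypothetical.

```python
import numpy as np

def angular_gap_std(angles_deg):
    """Standard deviation of the angular separation between consecutive
    projections assigned to one respiratory bin (angles in degrees)."""
    a = np.sort(np.asarray(angles_deg, float) % 360.0)
    gaps = np.diff(np.concatenate([a, [a[0] + 360.0]]))   # include wrap-around gap
    return gaps.std()

# Hypothetical projection angles for one displacement bin: clustered angles
# (large gaps) give a high score, i.e. more streaking is expected.
uniform_bin = np.linspace(0, 360, 60, endpoint=False)
clustered_bin = np.concatenate([np.linspace(0, 120, 40), np.linspace(200, 250, 20)])
print(angular_gap_std(uniform_bin), angular_gap_std(clustered_bin))
```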
NASA Astrophysics Data System (ADS)
Tsai, Cheng-Mu; Fang, Yi-Chin; Chen, Zhen Hsiang
2011-10-01
This study used aspheric lenses to realize laser flat-top beam optimization and applied a genetic algorithm (GA) to find the optimal design. Using the characteristics of aspheric lenses to obtain an optimized, high-quality Nd:YAG 355 nm waveband laser flat-top optical system, this study employed the LightTools LDS (least damped square) method and the GA, an artificial intelligence optimization method, to determine the optimal aspheric coefficients and obtain the optimal solution. Applying the aspheric lenses with the GA for flattening of laser beams, with two aspheric lenses in the optical system, 80% spot narrowing was achieved with a standard deviation of 0.6142.
Moschet, Christoph; Piazzoli, Alessandro; Singer, Heinz; Hollender, Juliane
2013-11-05
In this study, the efficiency of a suspect screening strategy using liquid chromatography-high resolution mass spectrometry (LC-HRMS), without the prior purchase of reference standards, was systematically optimized and evaluated for assessing exposure to rarely investigated pesticides and their transformation products (TPs) in 76 surface water samples. Water-soluble and readily ionizable (electrospray ionization) substances, 185 in total, were selected from a list of all insecticides and fungicides registered in Switzerland and their major TPs. Initially, a solid phase extraction LC-HRMS method was established using 45 known, persistent, high-sales-volume pesticides. Seventy percent of these target substances had limits of quantitation (LOQ) < 5 ng L(-1). This compound set was then used to develop and optimize an HRMS suspect screening method using only the exact mass as a priori information. Thresholds for blank subtraction, peak area, peak shape, signal-to-noise, and isotopic pattern were applied to automatically filter the initially picked peaks. The success rate was 70%; false negatives mainly resulted from low-intensity peaks. The optimized approach was applied to the remaining 140 substances. Nineteen additional substances were detected in environmental samples, two TPs for the first time in the environment. Sixteen substances were confirmed with reference standards purchased subsequently, while three TP standards could be obtained from industry or other laboratories. Overall, this screening approach was fast and very successful, and it can easily be expanded to other micropollutant classes for which reference standards are not readily accessible, such as TPs of household chemicals.
Dos Anjos, Shirlei L; Alves, Jeferson C; Rocha Soares, Sarah A; Araujo, Rennan G O; de Oliveira, Olivia M C; Queiroz, Antonio F S; Ferreira, Sergio L C
2018-02-01
This work presents the optimization of a sample preparation procedure using microwave-assisted digestion for the determination of nickel and vanadium in crude oil by inductively coupled plasma optical emission spectrometry (ICP OES). The optimization step was performed using a two-level full factorial design involving the following factors: concentrated nitric acid volume, hydrogen peroxide volume, and microwave-assisted digestion temperature. Nickel and vanadium concentrations were used as responses. Additionally, a multiple response based on normalization of the concentrations by the highest values was built to establish a compromise condition between the two analytes. A Doehlert matrix was used to optimize the instrumental conditions of the ICP OES spectrometer, with plasma robustness as the chemometric response. The experiments were performed using a digested oil sample solution spiked with magnesium(II) ions, as well as a standard magnesium solution. The optimized method allows the determination of nickel and vanadium with quantification limits of 0.79 and 0.20 μg g(-1), respectively, for a digested sample mass of 0.1 g. The precision (expressed as relative standard deviation) was determined using five replicates of two oil samples, and the results obtained were 1.63% and 3.67% for nickel and 0.42% and 4.64% for vanadium. Bismuth and yttrium were also tested as internal standards, and the results demonstrate that yttrium gives better precision for the method. The accuracy was confirmed by analysis of the certified reference material Trace Elements in Fuel Oil (NIST 1634c). The proposed method was applied to the determination of nickel and vanadium in five crude oil samples from Brazilian basins. The metal concentrations found varied from 7.30 to 33.21 μg g(-1) for nickel and from 0.63 to 19.42 μg g(-1) for vanadium. Copyright © 2017. Published by Elsevier B.V.
Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD)
Rhoades, Seth D.
2017-01-01
Introduction Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses, however HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and array of tunable analytical parameters. Objective Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). Methods We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. Results LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05 respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. Conclusions The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method. PMID:28348510
Horsetail matching: a flexible approach to optimization under uncertainty
NASA Astrophysics Data System (ADS)
Cook, L. W.; Jarrett, J. P.
2018-04-01
It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
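A bare-bones version of the underlying objective, the mismatch between a design's empirical CDF and a target CDF, is sketched below; it omits the kernel smoothing used in the paper for differentiability, and the target and sample distributions are assumptions.

```python
import numpy as np

def cdf_mismatch(samples, target_cdf, grid):
    """Mean squared mismatch between the empirical CDF of a design's sampled
    quantity of interest and a target CDF, evaluated on a fixed grid."""
    samples = np.sort(np.asarray(samples, float))
    ecdf = np.searchsorted(samples, grid, side="right") / samples.size
    return np.mean((ecdf - target_cdf(grid)) ** 2)

# Illustrative target: a steep CDF centred on a low objective value, so designs
# whose output distribution sits low and tight score well.
target = lambda q: 1.0 / (1.0 + np.exp(-(q - 1.0) / 0.05))
grid = np.linspace(0, 3, 200)
design_a = np.random.default_rng(0).normal(1.1, 0.10, 500)   # candidate design A outputs
design_b = np.random.default_rng(1).normal(1.0, 0.40, 500)   # candidate design B outputs
print(cdf_mismatch(design_a, target, grid), cdf_mismatch(design_b, target, grid))
```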
NASA Astrophysics Data System (ADS)
Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi
2017-10-01
This paper deals with the robust safety design optimization of a rail vehicle system moving on short-radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety considers simultaneously the derailment angle and its standard deviation, where the uncertainties of the design parameters are taken into account. The obtained results show that the robust design significantly reduces the sensitivity of rail vehicle safety to design parameter uncertainties compared to the deterministic design and to results from the literature.
Standardless quantification by parameter optimization in electron probe microanalysis
NASA Astrophysics Data System (ADS)
Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.
2012-11-01
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively.
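The core idea, fitting an analytical spectrum model to measured data by least-squares parameter optimization, can be illustrated as below; the toy model (linear background plus one Gaussian peak) and its parameters are assumptions and do not represent the POEMA spectrum description.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, energy):
    """Toy analytical spectrum: linear background plus one Gaussian peak."""
    b0, b1, area, centre, sigma = params
    peak = area * np.exp(-0.5 * ((energy - centre) / sigma) ** 2)
    return b0 + b1 * energy + peak

def residuals(params, energy, counts):
    return model(params, energy) - counts

# Hypothetical measured spectrum; in a real standardless scheme the optimized
# parameters would then feed the concentration estimates.
energy = np.linspace(1.0, 3.0, 400)
true = model([50, -5, 800, 1.74, 0.03], energy)
counts = np.random.default_rng(0).poisson(np.clip(true, 0, None)).astype(float)
fit = least_squares(residuals, x0=[40, 0, 500, 1.7, 0.05], args=(energy, counts))
print(fit.x)
```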
Sampling bee communities using pan traps: alternative methods increase sample size
USDA-ARS?s Scientific Manuscript database
Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...
Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD).
Rhoades, Seth D; Weljie, Aalim M
2016-12-01
Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses, however HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and array of tunable analytical parameters. Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05 respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method.
Optimization of Statistical Methods Impact on Quantitative Proteomics Data.
Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L
2015-10-02
As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data using both controlled experiments with known quantitative differences for specific proteins used as standards as well as "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility-optimization can consistently produce reliable differential expression rankings for label-free proteome tools and are straightforward in their application.
Control theory based airfoil design for potential flow and a finite volume discretization
NASA Technical Reports Server (NTRS)
Reuther, J.; Jameson, A.
1994-01-01
This paper describes the implementation of optimization techniques based on control theory for airfoil design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. The goal of our present work is to develop a method which does not depend on conformal mapping, so that it can be extended to treat three-dimensional problems. Therefore, we have developed a method which can address arbitrary geometric shapes through the use of a finite volume method to discretize the potential flow equation. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented, where both target speed distributions and minimum drag are used as objective functions.
Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
NASA Astrophysics Data System (ADS)
Alimorad D., H.; Fakharzadeh J., A.
2017-07-01
In this paper, a new approach is proposed for designing nearly optimal three-dimensional symmetric shapes with a desired physical center of mass. The main goal is to find a shape whose image in the (r, θ)-plane is a region divided into a fixed part and a variable part. The nearly optimal shape is characterized in two stages. First, for each given domain, the nearly optimal surface is determined by converting the problem into a measure-theoretical one, replacing it with an equivalent infinite-dimensional linear programming problem, and applying approximation schemes; then, a suitable function is defined that gives the optimal value of the objective function for any admissible given domain. In the second stage, by applying a standard optimization method, the global minimizer surface and its related domain are obtained, and their smoothness is addressed by applying outlier detection and smooth fitting methods. Finally, numerical examples are presented and the results are compared to show the advantages of the proposed approach.
Increasing the technical level of mining haul trucks
NASA Astrophysics Data System (ADS)
Voronov, Yuri; Voronov, Artyom; Grishin, Sergey; Bujankin, Alexey
2017-11-01
Theoretical and methodological fundamentals of the optimal design of mining haul trucks are articulated. Methods based on the systems approach to integrated assessment of the truck technical level, and methods for the optimization of truck parameters depending on performance standards, are provided, along with the results of using these methods. The developed method allows not only assessing truck technical levels but also choosing the most promising models and providing quantitative evaluations of the decisions to be made at the design stage. These areas are closely connected with the problem of improving industrial output quality, which, as part of the "total quality control" ideology widespread in the Western world, is one of the major issues for the Russian economy.
Zhou, Jinhui; Xue, Xiaofeng; Li, Yi; Zhang, Jinzhen; Zhao, Jing
2007-01-01
An optimized reversed-phase high-performance liquid chromatography method was developed to detect the trans-10-hydroxy-2-decenoic acid (10-HDA) content in royal jelly cream and lyophilized powder. The sample was extracted using absolute ethanol. Chromatographic separation of 10-HDA and methyl 4-hydroxybenzoate as the internal standard was performed on a Nova-pak C18 column. The average recoveries were 95.0-99.2% (n = 5) with relative standard deviation (RSD) values of 1.3-2.1% for royal jelly cream and 98.0-100.0% (n = 5) with RSD values of 1.6-3.0% for lyophilized powder, respectively. The limits of detection and quantitation were 0.5 and 1.5 mg/kg, respectively, for both royal jelly cream and lyophilized powder. The method was validated for the determination of practical royal jelly products. The concentration of 10-HDA ranged from 1.26 to 2.21% for pure royal jelly cream samples and 3.01 to 6.19% for royal jelly lyophilized powder samples. For 30 royal jelly products, the 10-HDA content varied from not detectable to 0.98%.
Analytical sizing methods for behind-the-meter battery storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Kintner-Meyer, Michael; Yang, Tao
In behind-the-meter applications, a battery storage system (BSS) is utilized to reduce a commercial or industrial customer's payment for electricity use, including the energy charge and the demand charge. The potential value of a BSS in payment reduction and the most economic size can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large-scale programming problem is then solved by optimization solvers to obtain numerical solutions. This method cannot directly link the obtained optimal battery sizes to input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of the costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative and offer engineering insights on how the optimal battery size varies with system characteristics. We illustrate the proposed methods using a practical building load profile and utility rate. The obtained results are compared with those of mathematical programming based methods for validation.
Optimization Methods in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.
2009-09-01
Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or a Poisson likelihood (e.g., Cash), is minimized in the fitting process to obtain the set of best-fit model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g., an absorbed power law). Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. However, Sherpa, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: the Levenberg-Marquardt algorithm was obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and a Nelder-Mead simplex method was implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their use cases, focusing on the application to Chandra data with both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
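For orientation, the three classes of optimizers mentioned (Levenberg-Marquardt, Nelder-Mead simplex, and differential evolution) can be exercised on a toy forward-fitting problem with SciPy as sketched below; this is a generic illustration, not Sherpa's API, and the model and data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares, minimize, differential_evolution

# Toy forward-fitting problem: an absorbed-power-law-like model on an energy grid.
energy = np.logspace(0, 1, 50)
def model(p, e):
    norm, index, nh = p
    return norm * e ** (-index) * np.exp(-nh / e)

rng = np.random.default_rng(0)
data = model([10.0, 1.7, 0.8], energy) * rng.normal(1.0, 0.05, energy.size)
sigma = 0.05 * data
chi = lambda p: (model(p, energy) - data) / sigma           # weighted residuals
chi2 = lambda p: np.sum(chi(p) ** 2)

lm = least_squares(chi, x0=[5.0, 1.0, 0.5], method="lm")    # local, MINPACK-based
nm = minimize(chi2, x0=[5.0, 1.0, 0.5], method="Nelder-Mead")  # local, simplex
de = differential_evolution(chi2, bounds=[(0.1, 50), (0.1, 4), (0.0, 5)], seed=0)  # global
print(lm.x, nm.x, de.x)
```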
Improved NSGA model for multi objective operation scheduling and its evaluation
NASA Astrophysics Data System (ADS)
Li, Weining; Wang, Fuyu
2017-09-01
Reasonable operation scheduling can increase hospital income and improve patient satisfaction. In this paper, a multi-objective operation scheduling method with an improved NSGA algorithm is used to shorten operation time, reduce operation cost and lower operation risk. A multi-objective optimization model is established for flexible operation scheduling, the Pareto solution set is obtained through MATLAB simulation, and the data are standardized. The optimal scheduling scheme is selected using the combined entropy weight-TOPSIS method. The results show that the algorithm is feasible for solving the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.
Yarazavi, Mina; Noroozian, Ebrahim
2018-02-13
A novel sol-gel coating on a stainless-steel fiber was developed for the first time for the headspace solid-phase microextraction and determination of α-bisabolol with gas chromatography and flame ionization detection. The parameters influencing the efficiency of solid-phase microextraction process, such as extraction time and temperature, pH, and ionic strength, were optimized by the experimental design method. Under optimized conditions, the linear range was between 0.0027 and 100 μg/mL. The relative standard deviations determined at 0.01 and 1.0 μg/mL concentration levels (n = 3), respectively, were as follows: intraday relative standard deviations 3.4 and 3.3%; interday relative standard deviations 5.0 and 4.3%; and fiber-to-fiber relative standard deviations 6.0 and 3.5%. The relative recovery values were 90.3 and 101.4% at 0.01 and 1.0 μg/mL spiking levels, respectively. The proposed method was successfully applied to various real samples containing α-bisabolol. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A comparison of automated dispensing cabinet optimization methods.
O'Neil, Daniel P; Miller, Adam; Cronin, Daniel; Hatfield, Chad J
2016-07-01
Results of a study comparing two methods of optimizing automated dispensing cabinets (ADCs) are reported. Eight nonprofiled ADCs were optimized over six months. Optimization of each cabinet involved three steps: (1) removal of medications that had not been dispensed for at least 180 days, (2) movement of ADC stock to better suit end-user needs and available space, and (3) adjustment of par levels (desired on-hand inventory levels). The par levels of four ADCs (the Day Supply group) were adjusted according to average daily usage; the par levels of the other four ADCs (the Formula group) were adjusted using a standard inventory formula. The primary outcome was the vend:fill ratio, while secondary outcomes included total inventory, inventory cost, quantity of expired medications, and ADC stockout percentage. The total number of medications stocked in the eight machines was reduced from 1,273 in a designated two-month preoptimization period to 1,182 in a designated two-month postoptimization period, yielding a carrying cost savings of $44,981. The mean vend:fill ratios before and after optimization were 4.43 and 4.46, respectively. The vend:fill ratio for ADCs in the Formula group increased from 4.33 before optimization to 5.2 after optimization; in the Day Supply group, the ratio declined (from 4.52 to 3.90). The postoptimization interaction difference between the Formula and Day Supply groups was found to be significant (p = 0.0477). ADC optimization via a standard inventory formula had a positive impact on inventory costs, refills, vend:fill ratios, and stockout percentages. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Illumination system development using design and analysis of computer experiments
NASA Astrophysics Data System (ADS)
Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter
2015-09-01
Computer-assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often converge to a local minimum, so the solution depends on the starting point, especially when dealing with high-dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space-filling design. Ray-tracing simulations are then performed at the design points, and a merit function quantifies the homogeneity of the irradiance at the target for each configuration. The homogeneities obtained at the design points are then used as input to a Gaussian process (GP), which provides a preliminary model of the expected merit space. Global optimization is then performed on the GP, which is more likely to find the optimal parameters. Next, the light-positioning case study is further investigated by varying the radius of the arc and by adding two sources symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, DACE was six times faster than the standard simplex method at an equal uniformity of 97%. The results were successfully validated experimentally, with 10% relative error, using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources.
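The DACE workflow (space-filling design, expensive merit evaluations, Gaussian process surrogate, global optimization on the surrogate) can be sketched as follows; the merit function here is a cheap stand-in for the ray-tracing simulations, and all parameter ranges are illustrative assumptions.

    # Sketch of the DACE workflow: space-filling design, expensive merit
    # evaluations (a stand-in function here instead of ray tracing), a
    # Gaussian-process surrogate, and optimization on the surrogate.
    import numpy as np
    from scipy.stats import qmc
    from scipy.optimize import differential_evolution
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def merit(angles):                    # placeholder for ray-traced uniformity
        return np.sum(np.cos(angles) ** 2) + 0.1 * np.sum(angles ** 2)

    dim, n_design = 4, 20                 # four source angles
    sampler = qmc.LatinHypercube(d=dim, seed=0)
    X = qmc.scale(sampler.random(n_design), [-np.pi / 2] * dim, [np.pi / 2] * dim)
    y = np.array([merit(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

    # Global search on the cheap surrogate instead of the expensive simulator
    res = differential_evolution(lambda x: gp.predict(x.reshape(1, -1))[0],
                                 bounds=[(-np.pi / 2, np.pi / 2)] * dim, seed=1)
    print(res.x, res.fun)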
Quality assurance and management in microelectronics companies: ISO 9000 versus Six Sigma
NASA Astrophysics Data System (ADS)
Lupan, Razvan; Kobi, Abdessamad; Robledo, Christian; Bacivarov, Ioan; Bacivarov, Angelica
2009-01-01
A strategy for implementing the Six Sigma method as an improvement solution for the ISO 9000:2000 Quality Standard is proposed. Our approach focuses on integrating the DMAIC cycle of the Six Sigma method with the PDCA process approach highly recommended by the ISO 9000:2000 standard. The Six Sigma steps applied to each part of the PDCA cycle are presented in detail, giving some tools and training examples. Based on this analysis, the authors conclude that applying the Six Sigma philosophy to the Quality Standard implementation process is the best way to achieve optimal results in quality progress and therefore in customer satisfaction.
An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization
Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.
2017-04-17
We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.
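For orientation, a generic augmented-Lagrangian loop for an equality-constrained toy problem is sketched below; it conveys the flavor of the penalty/multiplier mechanics but is not the authors' filter line-search algorithm.

    # Generic augmented-Lagrangian loop for min f(x) s.t. c(x) = 0; a toy
    # stand-in for the class of method discussed, not the authors' filter
    # line-search algorithm.
    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2
    c = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0])   # unit-circle constraint

    x, lam, rho = np.array([0.0, 0.0]), np.zeros(1), 10.0
    for _ in range(20):
        L = lambda x: f(x) - lam @ c(x) + 0.5 * rho * np.sum(c(x) ** 2)
        x = minimize(L, x, method="BFGS").x      # inner unconstrained solve
        viol = c(x)
        lam -= rho * viol                        # first-order multiplier update
        if np.linalg.norm(viol) > 1e-4:
            rho *= 2.0                           # tighten penalty if still infeasible
        else:
            break
    print(x, lam)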
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robotic fish tracking method based on suboptimal interval Kalman filter
NASA Astrophysics Data System (ADS)
Tong, Xiaohong; Tang, Chao
2017-11-01
Autonomous underwater vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock, and other fields. Robotic fish, as a class of AUV, have become a popular application in intelligent education as well as in civil and military settings. In nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the interval is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes a suboptimal interval Kalman filter algorithm. The suboptimal scheme replaces the interval matrix inverse with its worst-case inverse; it approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the trajectory estimated by the suboptimal interval Kalman filter algorithm is better than that obtained with the interval Kalman filter and with the standard Kalman filter.
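The interval variants build on the standard predict/update cycle, which the following minimal point-valued Kalman filter sketch illustrates; the comment marks the matrix inverse that the suboptimal interval scheme replaces with a worst-case inverse. The constant-velocity model and all numbers are assumptions.

    # Minimal point-valued Kalman filter (the baseline that interval variants
    # extend); the suboptimal interval scheme replaces the interval matrix
    # inverse marked below with a worst-case inverse.
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # interval KF: interval inverse here
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Example: constant-velocity tracking of a robotic fish position in 1D
    dt = 0.1
    F = np.array([[1, dt], [0, 1]])
    H = np.array([[1.0, 0.0]])
    Q, R = 1e-3 * np.eye(2), np.array([[0.05]])
    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_step(x, P, z=np.array([0.12]), F=F, H=H, Q=Q, R=R)
    print(x)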
NASA Astrophysics Data System (ADS)
Wang, Geng; Zhou, Kexin; Zhang, Yeming
2018-04-01
The widely used Bouc-Wen hysteresis model can accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is introduced to help balance local search ability and global exploration capability, and the formula the scout bees use to search for a food source is modified to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agrees well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, showing that the IABC algorithm converges faster than standard PSO.
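A minimal sketch of the identification setup is given below: a simple Bouc-Wen forward model and a least-squares objective that a population-based search can minimize. The parameterization, the discretization, and the use of SciPy's differential evolution in place of the paper's improved ABC are all assumptions made for illustration.

    # Illustrative Bouc-Wen forward model and identification objective; the
    # parameterization and the use of differential evolution in place of the
    # paper's improved ABC are assumptions for this sketch.
    import numpy as np
    from scipy.optimize import differential_evolution

    def bouc_wen(u, params, dt=1e-3):
        """Displacement response to voltage u under a simple Bouc-Wen hysteresis."""
        d, alpha, beta, gamma = params
        h = np.zeros_like(u)
        for k in range(1, len(u)):
            du = (u[k] - u[k - 1]) / dt
            dh = alpha * du - beta * abs(du) * h[k - 1] - gamma * du * abs(h[k - 1])
            h[k] = h[k - 1] + dh * dt
        return d * u - h              # displacement = linear part minus hysteresis

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1000)
    u = 50 * np.sin(2 * np.pi * 5 * t)        # driving voltage
    measured = bouc_wen(u, (0.8, 0.5, 2.0, 1.5)) + rng.normal(0, 1e-3, t.size)

    def cost(p):                              # objective a PSO/ABC/DE search minimizes
        return np.mean((bouc_wen(u, p) - measured) ** 2)

    fit = differential_evolution(cost, bounds=[(0, 2), (0, 2), (0, 5), (0, 5)],
                                 maxiter=50, seed=0)
    print(fit.x)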
Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2013-10-01
We discuss the use of multi-level Monte Carlo (MLMC) schemes--originally introduced by Giles for financial applications--for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ɛ from O(ɛ^-3)--for standard Langevin methods and binary collision algorithms--to the theoretically optimal scaling O(ɛ^-2) for the Milstein discretization, and to O(ɛ^-2 (log ɛ)^2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
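The MLMC structure can be sketched on a toy Langevin SDE: each level adds a correction estimated from coupled fine/coarse Euler-Maruyama paths, with sample counts decreasing on the finer (more expensive) levels. This is a generic illustration, not the collision-specific scheme.

    # Generic coupled MLMC estimator with Euler-Maruyama for the toy Langevin
    # SDE dX = -X dt + sigma dW; level ell uses 2**ell time steps. This is a
    # sketch of the MLMC structure only, not the collision algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    T, sigma, x0 = 1.0, 0.5, 1.0

    def euler_path(x, dW, dt):
        for inc in dW:
            x += -x * dt + sigma * inc
        return x

    def level_estimator(level, n_samples):
        nf = 2 ** level                       # fine steps; coarse path uses nf // 2
        dtf = T / nf
        diffs = np.empty(n_samples)
        for i in range(n_samples):
            dWf = rng.normal(0.0, np.sqrt(dtf), nf)
            fine = euler_path(x0, dWf, dtf)
            if level == 0:
                diffs[i] = fine
            else:                             # coarse path driven by summed fine noise
                dWc = dWf.reshape(-1, 2).sum(axis=1)
                diffs[i] = fine - euler_path(x0, dWc, 2 * dtf)
        return diffs.mean()

    # MLMC estimate: sum of level corrections, fewer samples on finer levels
    samples_per_level = [4000, 2000, 1000, 500]
    estimate = sum(level_estimator(l, n) for l, n in enumerate(samples_per_level))
    print(estimate)    # exact answer is x0 * exp(-T), roughly 0.368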
Determination of cadmium in seawater by chelate vapor generation atomic fluorescence spectrometry
NASA Astrophysics Data System (ADS)
Sun, Rui; Ma, Guopeng; Duan, Xuchuan; Sun, Jinsheng
2018-03-01
A method for the determination of cadmium in seawater by chelate vapor generation (Che-VG) atomic fluorescence spectrometry is described. Several commercially available chelating agents, including ammonium pyrrolidine dithiocarbamate (APDC), sodium dimethyl dithiocarbamate (DMDTC), ammonium dibutyl dithiophosphate (DBDTP) and sodium O,O-diethyl dithiophosphate (DEDTP), were compared with sodium diethyldithiocarbamate (DDTC) for the Che-VG of cadmium, and the results showed that DDTC and DEDTP gave very good cadmium signal intensity. The effect of the Che-VG conditions with DDTC on the cadmium signal intensity was investigated. Under the optimal conditions, a Che-VG efficiency of 85 ± 3% was obtained for cadmium. The detection limit (3σ) obtained under the optimal conditions was 0.19 ng ml-1. The relative standard deviation (RSD, %) for ten replicate determinations at 2 ng ml-1 Cd was 3.42%. The proposed method was successfully applied to the ultratrace determination of cadmium in seawater samples by the standard addition method.
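For reference, the standard addition evaluation reduces to a linear regression whose x-intercept magnitude gives the sample concentration, as in this small sketch with made-up numbers.

    # Standard-addition evaluation sketch: the unknown concentration is the
    # magnitude of the x-intercept of signal vs. added analyte. Values are
    # illustrative, not the paper's data.
    import numpy as np

    added = np.array([0.0, 1.0, 2.0, 4.0])           # ng/ml Cd added to aliquots
    signal = np.array([0.210, 0.305, 0.401, 0.597])  # fluorescence intensity (a.u.)

    slope, intercept = np.polyfit(added, signal, 1)
    c_sample = intercept / slope                     # ng/ml in the measured aliquot
    print(f"sample concentration ~ {c_sample:.2f} ng/ml")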
A computational fluid dynamics simulation framework for ventricular catheter design optimization.
Weisenberg, Sofy H; TerMaath, Stephanie C; Barbier, Charlotte N; Hill, Judith C; Killeffer, James A
2017-11-10
OBJECTIVE Cerebrospinal fluid (CSF) shunts are the primary treatment for patients suffering from hydrocephalus. While proven effective in symptom relief, these shunt systems are plagued by high failure rates and often require repeated revision surgeries to replace malfunctioning components. One of the leading causes of CSF shunt failure is obstruction of the ventricular catheter by aggregations of cells, proteins, blood clots, or fronds of choroid plexus that occlude the catheter's small inlet holes or even the full internal catheter lumen. Such obstructions can disrupt CSF diversion out of the ventricular system or impede it entirely. Previous studies have suggested that altering the catheter's fluid dynamics may help to reduce the likelihood of complete ventricular catheter failure caused by obstruction. However, systematic correlation between a ventricular catheter's design parameters and its performance, specifically its likelihood to become occluded, still remains unknown. Therefore, an automated, open-source computational fluid dynamics (CFD) simulation framework was developed for use in the medical community to determine optimized ventricular catheter designs and to rapidly explore parameter influence for a given flow objective. METHODS The computational framework was developed by coupling a 3D CFD solver and an iterative optimization algorithm and was implemented in a high-performance computing environment. The capabilities of the framework were demonstrated by computing an optimized ventricular catheter design that provides uniform flow rates through the catheter's inlet holes, a common design objective in the literature. The baseline computational model was validated using 3D nuclear imaging to provide flow velocities at the inlet holes and through the catheter. RESULTS The optimized catheter design achieved through use of the automated simulation framework improved significantly on previous attempts to reach a uniform inlet flow rate distribution using the standard catheter hole configuration as a baseline. While the standard ventricular catheter design featuring uniform inlet hole diameters and hole spacing has a standard deviation of 14.27% for the inlet flow rates, the optimized design has a standard deviation of 0.30%. CONCLUSIONS This customizable framework, paired with high-performance computing, provides a rapid method of design testing to solve complex flow problems. While a relatively simplified ventricular catheter model was used to demonstrate the framework, the computational approach is applicable to any baseline catheter model, and it is easily adapted to optimize catheters for the unique needs of different patients as well as for other fluid-based medical devices.
Trujillo, William A.; Sorenson, Wendy R.; La Luzerne, Paul; Austad, John W.; Sullivan, Darryl
2008-01-01
The presence of aristolochic acid in some dietary supplements is a concern to regulators and consumers. A method has been developed, by initially using a reference method as a guide, during single laboratory validation (SLV) for the determination of aristolochic acid I, also known as aristolochic acid A, in botanical species and dietary supplements at concentrations of approximately 2 to 32 μg/g. Higher levels were determined by dilution to fit the standard curve. Through the SLV, the method was optimized for quantification by liquid chromatography with ultraviolet detection (LC-UV) and LC/mass spectrometry (MS) confirmation. The test samples were extracted with organic solvent and water, then injected on a reverse phase LC column. Quantification was achieved with linear regression using a laboratory automation system. The SLV study included systematically optimizing the LC-UV method with regard to test sample size, fine grinding of solids, and solvent extraction efficiency. These parameters were varied in increments (and in separate optimization studies), in order to ensure that each parameter was individually studied; the test results include corresponding tables of parameter variations. In addition, the chromatographic conditions were optimized with respect to injection volume and detection wavelength. Precision studies produced overall relative standard deviation values from 2.44 up to 8.26% for aristolochic acid I. Mean recoveries were between 100 and 103% at the 2 μg/g level, between 102 and 103% at the 10 μg/g level, and 104% at the 30 μg/g level. PMID:16915829
Yolcu, Şükran Melda; Fırat, Merve; Chormey, Dotse Selali; Büyükpınar, Çağdaş; Turak, Fatma; Bakırdere, Sezgin
2018-05-01
In this study, dispersive liquid-liquid microextraction was systematically optimized for the preconcentration of nickel after forming a complex with diphenylcarbazone. The measurement output of the flame atomic absorption spectrometer was further enhanced by fitting a custom-cut slotted quartz tube to the flame burner head. The extraction method increased the amount of nickel reaching the flame, and the slotted quartz tube increased the residence time of nickel atoms in the flame to record higher absorbance. The two methods combined gave about 90-fold enhancement in sensitivity over conventional flame atomic absorption spectrometry. The optimized method was applicable over a wide linear concentration range, and it gave a detection limit of 2.1 µg L-1. Low relative standard deviations at the lowest concentration in the linear calibration plot indicated high precision for both the extraction process and the instrumental measurements. A coal fly ash standard reference material (SRM 1633c) was used to determine the accuracy of the method, and the experimental results were compatible with the certified value. Spiked recovery tests were also used to validate the applicability of the method.
Saito, Atsushi; Nawano, Shigeru; Shimizu, Akinobu
2017-05-01
This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts-in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
Strong stabilization servo controller with optimization of performance criteria.
Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor
2011-07-01
Synthesis of a simple robust controller with a pole placement technique and an H∞ metric is the method used for control of a servo mechanism with BLDC and BDC electric motors. The method includes solving a polynomial equation on the basis of the chosen characteristic polynomial using the Manabe standard polynomial form and parametric solutions. Parametric solutions are introduced directly into the structure of the servo controller. On the basis of the chosen parametric solutions, the robustness of the closed-loop system is assessed through uncertainty models and assessment of the H∞ norm. The design procedure and the optimization are performed with a genetic algorithm, differential evolution (DE). The DE optimization method determines a suboptimal solution during the optimization on the basis of a spectrally square polynomial and Šiljak's absolute stability test. The stability of the designed controller during the optimization is checked with Lipatov's stability condition. Both approaches, Šiljak's test and Lipatov's condition, check the robustness and stability characteristics on the basis of the polynomial's coefficients, and are very convenient for automated design of closed-loop control and for application in optimization algorithms such as DE. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
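For context, the baseline the paper improves on is the classical projected gradient flow; the sketch below uses the standard tangent-space projector (with a pseudo-inverse so the toy code tolerates dependent constraint gradients), not the new projection matrix proposed in the paper.

    # Standard projected gradient flow for min f(x) s.t. g(x) = 0 (the baseline
    # the paper improves on); a pseudo-inverse is used here so the toy code
    # does not fail if constraint gradients become linearly dependent.
    import numpy as np

    f_grad = lambda x: np.array([2 * (x[0] - 2), 2 * x[1]])
    g = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0])
    g_jac = lambda x: np.array([[2 * x[0], 2 * x[1]]])

    x, dt = np.array([0.0, 1.0]), 1e-2          # start on the constraint
    for _ in range(2000):
        J = g_jac(x)
        P = np.eye(2) - J.T @ np.linalg.pinv(J @ J.T) @ J   # tangent-space projector
        x = x + dt * (-P @ f_grad(x) - J.T @ g(x))           # descend and pull back
    print(x, g(x))     # expect x close to (1, 0)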
Two-Method Planned Missing Designs for Longitudinal Research
ERIC Educational Resources Information Center
Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.
2014-01-01
We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…
Color image enhancement based on particle swarm optimization with Gaussian mixture
NASA Astrophysics Data System (ADS)
Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho
2015-01-01
This paper proposes a Gaussian mixture based image enhancement method that uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to the appropriate output interval according to a transformation function that depends on the PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce the washed-out appearance. Experimental results show that the proposed method produces a better enhanced image compared to traditional methods. Moreover, the enhanced image is free from several side effects such as a washed-out appearance, information loss, and gradation artifacts.
Liu, W; Mohan, R
2012-06-01
Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on the per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans so that they are less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed: the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest of the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the tradeoff between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD Anderson Cancer Center, and MD Anderson's cancer center support grant CA016672. © 2012 American Association of Physicists in Medicine.
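The SVH construction itself is straightforward to sketch: compute the per-voxel SD over the nine perturbed dose distributions and tabulate the fraction of CTV voxels whose SD exceeds each threshold. The arrays below are synthetic stand-ins for treatment-planning data.

    # Sketch of the SVH idea: per-voxel SD over the nine perturbed dose
    # distributions, then the fraction of CTV voxels whose SD exceeds each
    # threshold. Arrays are synthetic stand-ins for planning-system data.
    import numpy as np

    rng = np.random.default_rng(0)
    doses = rng.normal(60.0, 1.5, size=(9, 40, 40, 40))   # nominal + 8 perturbed, Gy
    ctv_mask = np.zeros((40, 40, 40), dtype=bool)
    ctv_mask[15:25, 15:25, 15:25] = True

    per_voxel_sd = doses.std(axis=0)                       # SD map over scenarios
    sd_in_ctv = per_voxel_sd[ctv_mask]

    thresholds = np.linspace(0.0, sd_in_ctv.max(), 50)
    svh = [(sd_in_ctv > t).mean() * 100 for t in thresholds]   # % volume vs SD
    area_under_svh = np.trapz(svh, thresholds)             # scalar robustness score
    print(area_under_svh)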
NASA Astrophysics Data System (ADS)
Bryson, Dean Edward
A model's level of fidelity may be defined as its accuracy in faithfully reproducing a quantity or behavior of interest of a real system. Increasing the fidelity of a model often goes hand in hand with increasing its cost in terms of time, money, or computing resources. The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result either in costly changes or the acceptance of a configuration that does not meet expectations. The unknown opportunity cost is the discovery of superior vehicles that leverage phenomena unknown to the designer and not illuminated by low-fidelity tools. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. In building surrogate models, where mathematical expressions are used to cheaply approximate the behavior of costly data, low-fidelity models may be sampled extensively to resolve the underlying trend, while high-fidelity data are reserved to correct inaccuracies at key locations. Similarly, in design optimization a low-fidelity model may be queried many times in the search for new, better designs, with a high-fidelity model being exercised only once per iteration to evaluate the candidate design. In this dissertation, a new multifidelity, gradient-based optimization algorithm is proposed. It differs from the standard trust region approach in several ways, stemming from the new method maintaining an approximation of the inverse Hessian, that is the underlying curvature of the design problem. Whereas the typical trust region approach performs a full sub-optimization using the low-fidelity model at every iteration, the new technique finds a suitable descent direction and focuses the search along it, reducing the number of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously-computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines. 
The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. The efficacy of both multifidelity methods compared to single-fidelity optimization depends significantly on the behavior of the high-fidelity model and the quality of the low-fidelity approximation, though savings are realized in a large number of cases. The multifidelity algorithm is also compared to the single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations. However, the enforcement of a contracting trust region slows the multifidelity progress. Even so, leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produced better designs in all four cases. Investigating the return on investment in terms of design improvement per computational hour confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to high-fidelity optimization is critical.
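The inverse-Hessian bookkeeping that both passages emphasize rests on the standard BFGS update; a minimal sketch of that generic formula is given below (it is not the dissertation's full multifidelity logic).

    # Minimal BFGS inverse-Hessian update, the generic ingredient maintained
    # across fidelity switches; this is the textbook formula, not the
    # dissertation's complete multifidelity bookkeeping.
    import numpy as np

    def bfgs_inverse_hessian_update(H, s, y):
        """Update inverse Hessian H given step s = x_new - x_old and
        gradient change y = grad_new - grad_old."""
        rho = 1.0 / (y @ s)
        I = np.eye(len(s))
        V = I - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)

    # One illustrative update on a 2-variable problem
    H = np.eye(2)
    s = np.array([0.1, -0.05])
    y = np.array([0.4, -0.1])
    H = bfgs_inverse_hessian_update(H, s, y)
    print(H)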
Genetic algorithms and their use in geophysical problems
NASA Astrophysics Data System (ADS)
Parker, Paul Bradley
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
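A compact sketch of such a GA, using tournament selection and a mutation rate of about half the inverse of the population size as recommended above, is shown below on a toy four-parameter misfit; everything apart from those two settings is an illustrative assumption.

    # Compact real-coded GA with tournament selection and a low mutation rate
    # (about half the inverse of the population size), applied to a toy
    # 4-parameter misfit; a sketch, not the geophysical inversion itself.
    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([1.0, -2.0, 0.5, 3.0])
    fitness = lambda m: -np.sum((m - target) ** 2)        # higher is better

    pop_size, n_params, n_gen = 40, 4, 200
    mut_rate = 0.5 / pop_size                             # ~ half of 1/pop_size
    pop = rng.uniform(-5, 5, (pop_size, n_params))

    for _ in range(n_gen):
        fit = np.array([fitness(m) for m in pop])
        # Tournament selection: each parent is the better of two random models
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[a] > fit[b], a, b)]
        # One-point crossover between consecutive parent pairs
        children = parents.copy()
        cuts = rng.integers(1, n_params, pop_size // 2)
        for i, c in enumerate(cuts):
            tail_a = parents[2 * i + 1, c:].copy()
            tail_b = parents[2 * i, c:].copy()
            children[2 * i, c:] = tail_a
            children[2 * i + 1, c:] = tail_b
        # Mutation: rare random resets of individual parameters
        mask = rng.random(children.shape) < mut_rate
        children[mask] = rng.uniform(-5, 5, mask.sum())
        pop = children

    print(pop[np.argmax([fitness(m) for m in pop])])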
Data Assimilation by delay-coordinate nudging
NASA Astrophysics Data System (ADS)
Pazo, Diego; Lopez, Juan Manuel; Carrassi, Alberto
2016-04-01
A new nudging method for data assimilation, delay-coordinate nudging, is presented. Delay-coordinate nudging makes explicit use of present and past observations in the formulation of the forcing driving the model evolution at each time-step. Numerical experiments with a low order chaotic system show that the new method systematically outperforms standard nudging in different model and observational scenarios, also when using an un-optimized formulation of the delay-nudging coefficients. A connection between the optimal delay and the dominant Lyapunov exponent of the dynamics is found based on heuristic arguments and is confirmed by the numerical results, providing a guideline for the practical implementation of the algorithm. Delay-coordinate nudging preserves the easiness of implementation, the intuitive functioning and the reduced computational cost of the standard nudging, making it a potential alternative especially in the field of seasonal-to-decadal predictions with large Earth system models that limit the use of more sophisticated data assimilation procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Jianping; Wang, Li; Zhang, Yu
Wheel quality is especially important for the safety of high speed railways. In this paper, a new ultrasonic array inspection method, Full Matrix Capture (FMC), is studied and applied to high speed railway wheel inspection, in particular inspection of the wheel web from the tread. Firstly, the principles of FMC and the total focusing method (TFM) algorithm are discussed, and a new optimization is applied to standard FMC. Secondly, the fundamentals of the optimization are described in detail and its performance is analyzed. Finally, an experiment is built with a standard phased array block and a railway wheel, and the testing results are discussed and analyzed. It is demonstrated that this change to the ultrasonic data acquisition and image reconstruction has higher efficiency and lower cost compared to the standard FMC procedure.
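The image-reconstruction step referred to above is commonly the delay-and-sum total focusing method applied to the full matrix; a generic sketch follows, with synthetic FMC data and an assumed array geometry (it does not reproduce the paper's optimization).

    # Delay-and-sum Total Focusing Method (TFM) over FMC data: for each pixel,
    # sum the A-scan sample at the transmit + receive times of flight. Array
    # geometry, sampling, and data here are synthetic placeholders.
    import numpy as np

    n_el, fs, c = 32, 50e6, 5900.0                      # elements, sample rate, m/s
    pitch = 0.6e-3
    elem_x = (np.arange(n_el) - (n_el - 1) / 2) * pitch
    n_samples = 2000
    fmc = np.random.rand(n_el, n_el, n_samples)         # fmc[tx, rx, t] (placeholder)

    xs = np.linspace(-10e-3, 10e-3, 81)
    zs = np.linspace(1e-3, 40e-3, 120)
    image = np.zeros((zs.size, xs.size))

    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            # one-way times of flight from every element to this pixel
            tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c
            idx = np.round((tof[:, None] + tof[None, :]) * fs).astype(int)
            idx = np.clip(idx, 0, n_samples - 1)
            image[iz, ix] = np.abs(fmc[np.arange(n_el)[:, None],
                                       np.arange(n_el)[None, :], idx].sum())

    print(image.shape)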
Empirical optimization of DFT + U and HSE for the band structure of ZnO.
Bashyal, Keshab; Pyles, Christopher K; Afroosheh, Sajjad; Lamichhane, Aneer; Zayak, Alexey T
2018-02-14
ZnO is a well-known wide band gap semiconductor with promising potential for applications in optoelectronics, transparent electronics, and spintronics. Computational simulations based on the density functional theory (DFT) play an important role in the research of ZnO, but the standard functionals, like Perdew-Burke-Ernzerhof, result in largely underestimated values of the band gap and the binding energies of the Zn 3d electrons. Methods like DFT + U and hybrid functionals are meant to remedy the weaknesses of plain DFT. However, both methods are not parameter-free. Direct comparison with experimental data is the best way to optimize the computational parameters. X-ray photoemission spectroscopy (XPS) is commonly considered as a benchmark for the computed electronic densities of states. In this work, both DFT + U and HSE methods were parametrized to fit almost exactly the binding energies of electrons in ZnO obtained by XPS. The optimized parameterizations of DFT + U and HSE lead to significantly worse results in reproducing the ion-clamped static dielectric tensor, compared to standard high-level calculations, including GW, which in turn yield a perfect match for the dielectric tensor. The failure of our XPS-based optimization reveals the fact that XPS does not report the ground state electronic structure for ZnO and should not be used for benchmarking ground state electronic structure calculations.
DESIGN NOTE: New apparatus for haze measurement for transparent media
NASA Astrophysics Data System (ADS)
Yu, H. L.; Hsiao, C. C.; Liu, W. C.
2006-08-01
Precise measurement of luminous transmittance and haze of transparent media is increasingly important to the LCD industry. Currently there are at least three documentary standards for measuring transmission haze. Unfortunately, none of those standard methods by itself can obtain the precise values for the diffuse transmittance (DT), total transmittance (TT) and haze. This note presents a new apparatus capable of precisely measuring all three variables simultaneously. Compared with current structures, the proposed design contains one more compensatory port. For optimal design, the light trap absorbs the beam completely, light scattered by the instrument is zero and the interior surface of the integrating sphere, baffle, as well as the reflectance standard, are of equal characteristic. The accurate values of the TT, DT and haze can be obtained using the new apparatus. Even if the design is not optimal, the measurement errors of the new apparatus are smaller than those of other methods especially for high sphere reflectance. Therefore, the sphere can be made of a high reflectance material for the new apparatus to increase the signal-to-noise ratio.
Comparison of genetic algorithms with conjugate gradient methods
NASA Technical Reports Server (NTRS)
Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.
1972-01-01
Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.
Computing correct truncated excited state wavefunctions
NASA Astrophysics Data System (ADS)
Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.
2016-12-01
We demonstrate that, if a wave function's truncated expansion is small, then the standard excited states computational method, of optimizing one "root" of a secular equation, may lead to an incorrect wave function - despite the correct energy according to the theorem of Hylleraas, Undheim and McDonald - whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower lying approximants) leads to correct reliable small truncated wave functions. The demonstration is done in He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.
NASA Astrophysics Data System (ADS)
Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei
2018-05-01
A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving the particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, this choice is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
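A generic Tikhonov solution with a second-order difference regularization matrix, the choice the study found suitable for most PSD types, looks as follows; the kernel and data here are placeholders rather than a light-scattering model.

    # Tikhonov regularization with a second-order difference matrix L, the
    # regularizer recommended for most PSD types; the kernel A and data b
    # below are placeholders, not a scattering model.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = np.exp(-0.1 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))  # toy kernel
    x_true = np.exp(-((np.arange(n) - 20) ** 2) / 40.0)
    b = A @ x_true + rng.normal(0, 1e-3, n)

    # Second-order difference operator (discrete curvature penalty)
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]

    lam = 1e-2
    x_reg = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
    print(np.linalg.norm(x_reg - x_true))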
Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com
2016-06-15
The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometer has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on the statistical learning theory, is successfully used here to get the relationship between the radiation of a standard source and the response of an infrared radiometer. Main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in a kernel parameter setting of SVR. Numerical examples and applications to the calibration of infrared radiometer are performed to verify the performance of PSO-ASVR-based method compared to conventional data fitting methods.
A graph decomposition-based approach for water distribution network optimization
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.
2013-04-01
A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
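The decomposition step can be sketched with standard graph routines: bridges of the network graph act as the directed links of the augmented tree, and the biconnected blocks they separate are the subnetworks optimized in sequence. The toy graph below stands in for a real pipe network, and the per-subnetwork DE optimization is not shown.

    # Sketch of the decomposition step: split a looped pipe network into
    # subnetworks at its bridges, leaving blocks that can be optimized one
    # at a time. The toy graph stands in for a real network; the DE
    # optimization of each subnetwork is not shown.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([          # two looped blocks joined by a single bridge pipe
        ("R", "A"), ("A", "B"), ("B", "C"), ("C", "A"),   # block 1 (R = source node)
        ("C", "D"),                                        # bridge
        ("D", "E"), ("E", "F"), ("F", "D"),                # block 2
    ])

    bridges = list(nx.bridges(G))
    blocks = [sorted(c) for c in nx.biconnected_components(G) if len(c) > 2]
    print("bridges:", bridges)
    print("subnetworks to optimize in sequence:", blocks)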
Practical approach to subject-specific estimation of knee joint contact force.
Knarr, Brian A; Higginson, Jill S
2015-08-20
Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data; however this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models' predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
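The paper derives its own variance formulas, which are not reproduced in the abstract; the general flavor of a variance-minimizing two-phase design can still be illustrated with a generic Neyman-type allocation, in which phase-two sampling fractions are proportional to stratum size times within-stratum standard deviation. This is only a sketch of the idea, not the paper's design, and the strata, counts, and standard deviations below are hypothetical placeholders.

```python
# Hedged sketch: generic Neyman allocation across strata defined by the
# cross-classification of the two classifiers' results (hypothetical numbers).
def neyman_allocation(strata, n_phase2):
    """strata: dict name -> (N_h, S_h); returns phase-two sample size per stratum."""
    total = sum(N * S for N, S in strata.values())
    return {h: round(n_phase2 * N * S / total) for h, (N, S) in strata.items()}

# Discordant strata (the classifiers disagree) tend to have larger within-stratum
# variability of the gold standard, so they are over-sampled relative to their size.
strata = {
    "both_positive": (400, 0.20),   # (phase-one count N_h, assumed SD S_h)
    "both_negative": (9000, 0.05),
    "pos_neg":       (300, 0.50),
    "neg_pos":       (300, 0.50),
}
print(neyman_allocation(strata, n_phase2=500))
```

With these hypothetical inputs, roughly 30% of each discordant stratum is sampled versus about 3% of the concordant-negative stratum, mirroring the over-/under-sampling pattern the paper reports.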
TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, J; Culberson, W; Bender, E
2016-06-15
Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution's orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.
Co-state initialization for the minimum-time low-thrust trajectory optimization
NASA Astrophysics Data System (ADS)
Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya
2017-05-01
This paper presents an approach for co-state initialization, which is a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining the optimal space trajectories typically result in two-point boundary-value problems and are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions from Earth to Mars and from Earth to asteroid Dionysus is compared against three other approaches which, respectively, exploit random initialization of co-states, adjoint-control transformation and a standard genetic algorithm. The results indicate that with our proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory search in the preliminary phase of mission design.
Fast and Efficient Stochastic Optimization for Analytic Continuation
Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...
2016-09-28
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is good, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is applied around the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
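The abstract does not give the exact update rule of the variant; the sketch below is one plausible reading in which the velocity update mixes the best personal best among each particle's nearest neighbors with the global best of the current iteration. The neighborhood size, coefficients, and the use of the Rastrigin function as a stand-in benchmark are assumptions for illustration, and the abandonment mechanism and chaotic search are omitted.

```python
import numpy as np

def rastrigin(x):  # a standard multimodal test function (assumed benchmark)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
n, dim, iters = 30, 10, 200
pos = rng.uniform(-5.12, 5.12, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([rastrigin(p) for p in pos])

for _ in range(iters):
    g = pbest[np.argmin(pbest_f)].copy()               # best particle this iteration
    for i in range(n):
        idx = np.argsort(np.linalg.norm(pos - pos[i], axis=1))[1:6]  # 5 nearest neighbors
        nb = pbest[idx[np.argmin(pbest_f[idx])]]        # best personal best among them
        r1, r2 = rng.random(dim), rng.random(dim)
        vel[i] = 0.7 * vel[i] + 1.5 * r1 * (nb - pos[i]) + 1.5 * r2 * (g - pos[i])
        pos[i] = pos[i] + vel[i]
        f = rastrigin(pos[i])
        if f < pbest_f[i]:                              # update personal best
            pbest_f[i] = f
            pbest[i] = pos[i].copy()

print("best objective:", pbest_f.min())
```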
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
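The first-order statistical moment method referenced here approximates an output's expected value by evaluating the code at the input means and its variance by propagating the (independent) input variances through first-order sensitivity derivatives. A minimal sketch of that propagation step, with a stand-in algebraic function in place of a CFD output and finite-difference sensitivities as an assumption:

```python
import numpy as np

def output(x):
    # Stand-in for a CFD output quantity (e.g., lift or drag); purely illustrative.
    return x[0] ** 2 + 3.0 * np.sin(x[1]) + x[0] * x[1]

mu = np.array([1.0, 0.5])       # input mean values (hypothetical)
sigma = np.array([0.05, 0.02])  # input standard deviations (independent, normal)

# First-order (mean value) approximation:
#   E[f] ~= f(mu),   Var[f] ~= sum_i (df/dx_i)^2 * sigma_i^2
h = 1e-6
grad = np.array([(output(mu + h * e) - output(mu - h * e)) / (2 * h)
                 for e in np.eye(len(mu))])
mean_f = output(mu)
var_f = np.sum((grad * sigma) ** 2)
print(f"E[f] ~ {mean_f:.4f}, SD[f] ~ {np.sqrt(var_f):.4f}")
```

The same expected values and variances can then be inserted into probabilistic constraints of the form P(g <= 0) >= target, as described in the abstract.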
Ramos, Susie Medeiros Oliveira; Glavam, Adriana Pereira; Kubo, Tadeu Takao Almodovar; de Sá, Lidia Vasconcellos
2014-01-01
To develop a study aiming at optimizing myocardial perfusion imaging. Imaging of an anthropomorphic thorax phantom with a GE SPECT Ventri gamma camera, with varied activities and acquisition times, in order to evaluate the influence of these parameters on the quality of the reconstructed medical images. The (99m)Tc-sestamibi radiotracer was utilized, and then the images were clinically evaluated on the basis of data such as summed stress score, and on the technical image quality and perfusion. The software ImageJ was utilized in the data quantification. The results demonstrated that for the standard acquisition time utilized in the procedure (15 seconds per angle), the injected activity could be reduced by 33.34%. Additionally, even if the standard scan time is reduced by 53.34% (7 seconds per angle), the standard injected activity could still be reduced by 16.67%, without impairing the image quality and the diagnostic reliability. The described method and respective results provide a basis for the development of a clinical trial of patients in an optimized protocol.
NASA Astrophysics Data System (ADS)
Li, Yuanyuan; Gao, Guanjun; Zhang, Jie; Zhang, Kai; Chen, Sai; Yu, Xiaosong; Gu, Wanyi
2015-06-01
A simplex-method based optimizing (SMO) strategy is proposed to improve the transmission performance for dispersion uncompensated (DU) coherent optical systems with non-identical spans. Through analytical expression of quality of transmission (QoT), this strategy improves the Q factors effectively, while minimizing the number of erbium-doped optical fiber amplifier (EDFA) that needs to be optimized. Numerical simulations are performed for 100 Gb/s polarization-division multiplexed quadrature phase shift keying (PDM-QPSK) channels over 10-span standard single mode fiber (SSMF) with randomly distributed span-lengths. Compared to the EDFA configurations with complete span loss compensation, the Q factor of the SMO strategy is improved by approximately 1 dB at the optimal transmitter launch power. Moreover, instead of adjusting the gains of all the EDFAs to their optimal value, the number of EDFA that needs to be adjusted for SMO is reduced from 8 to 2, showing much less tuning costs and almost negligible performance degradation.
Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford
2018-04-01
Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimization of probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, required to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes would always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
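The 29-flaw convention follows directly from the binomial model: if the true detection probability at the demonstrated size were only 0.90, the chance of detecting all 29 flaws would be 0.9^29 ≈ 0.047, so a 29-out-of-29 result demonstrates 90% POD with at least 95% confidence. A short numerical check, plus the probability of passing the demonstration (PPD) for a range of true PODs at the tested flaw size:

```python
# Binomial point-estimate demonstration: passing requires detecting all n flaws.
n = 29
print("confidence that POD >= 0.90 given 29/29 hits:", 1 - 0.90 ** n)  # ~0.953

# PPD as a function of the true POD at the tested flaw size; this illustrates
# why the flaw sizes must sit where the technique's POD is comfortably high.
for pod in (0.90, 0.95, 0.99, 0.999):
    print(f"true POD = {pod:<5}  PPD = {pod ** n:.3f}")
```

The trade-off the paper optimizes is visible here: shrinking the flaw sizes toward α90 lowers the true POD at those sizes and therefore lowers PPD, while oversized flaws pass easily but give a less useful (less conservative) α90/95PE.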
Optimizing Standard Sequential Extraction Protocol With Lake And Ocean Sediments
The environmental mobility/availability behavior of radionuclides in soils and sediments depends on their speciation. Experiments have been carried out to develop a simple but robust radionuclide sequential extraction method for identification of radionuclide partitioning in sediments...
Naser, Fuad J; Mahieu, Nathaniel G; Wang, Lingjue; Spalding, Jonathan L; Johnson, Stephen L; Patti, Gary J
2018-02-01
Although it is common in untargeted metabolomics to apply reversed-phase liquid chromatography (RPLC) and hydrophilic interaction liquid chromatography (HILIC) methods that have been systematically optimized for lipids and central carbon metabolites, here we show that these established protocols provide poor coverage of semipolar metabolites because of inadequate retention. Our objective was to develop an RPLC approach that improved detection of these metabolites without sacrificing lipid coverage. We initially evaluated columns recently released by Waters under the CORTECS line by analyzing 47 small-molecule standards that evenly span the nonpolar and semipolar ranges. An RPLC method commonly used in untargeted metabolomics was considered a benchmarking reference. We found that highly nonpolar and semipolar metabolites cannot be reliably profiled with any single method because of retention and solubility limitations of the injection solvent. Instead, we optimized a multiplexed approach using the CORTECS T3 column to analyze semipolar compounds and the CORTECS C8 column to analyze lipids. Strikingly, we determined that combining these methods allowed detection of 41 of the total 47 standards, whereas our reference RPLC method detected only 10 of the 47 standards. We then applied credentialing to compare method performance at the comprehensive scale. The tandem method showed more than a fivefold increase in credentialing coverage relative to our RPLC benchmark. Our results demonstrate that comprehensive coverage of metabolites amenable to reversed-phase separation necessitates two reconstitution solvents and chromatographic methods. Thus, we suggest complementing HILIC methods with a dual T3 and C8 RPLC approach to increase coverage of semipolar metabolites and lipids for untargeted metabolomics. Graphical abstract: Analysis of semipolar and nonpolar metabolites necessitates two reversed-phase chromatography (RPLC) methods, which extend metabolome coverage more than fivefold for untargeted profiling. HILIC, hydrophilic interaction liquid chromatography.
Control theory based airfoil design using the Euler equations
NASA Technical Reports Server (NTRS)
Jameson, Antony; Reuther, James
1994-01-01
This paper describes the implementation of optimization techniques based on control theory for airfoil design. In our previous work it was shown that control theory could be employed to devise effective optimization procedures for two-dimensional profiles by using the potential flow equation with either a conformal mapping or a general coordinate system. The goal of our present work is to extend the development to treat the Euler equations in two-dimensions by procedures that can readily be generalized to treat complex shapes in three-dimensions. Therefore, we have developed methods which can address airfoil design through either an analytic mapping or an arbitrary grid perturbation method applied to a finite volume discretization of the Euler equations. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented for both the inverse problem and drag minimization problem.
Relabeling exchange method (REM) for learning in neural networks
NASA Astrophysics Data System (ADS)
Wu, Wen; Mammone, Richard J.
1994-02-01
The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when 'optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
Further developments in the controlled growth approach for optimal structural synthesis
NASA Technical Reports Server (NTRS)
Hajela, P.
1982-01-01
It is pointed out that the use of nonlinear programming methods in conjunction with finite element and other discrete analysis techniques has provided a powerful tool in the domain of optimal structural synthesis. The present investigation is concerned with new strategies which comprise an extension to the controlled growth method considered by Hajela and Sobieski-Sobieszczanski (1981). This method proposed an approach wherein the standard nonlinear programming (NLP) methodology of working with a very large number of design variables was replaced by a sequence of smaller optimization cycles, each involving a single 'dominant' variable. The current investigation outlines some new features. Attention is given to a modified cumulative constraint representation which is defined in both the feasible and infeasible domains of the design space. Other new features are related to the evaluation of the 'effectiveness measure' on which the choice of the dominant variable and the linking strategy is based.
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
Effects of inclination and eccentricity on optimal trajectories between earth and Venus
NASA Technical Reports Server (NTRS)
Gravier, J.-P.; Marchal, C.; Culp, R. D.
1973-01-01
The true optimal transfers, including the effects of the inclination and eccentricity of the planets' orbits, between earth and Venus are presented as functions of the corresponding idealized Hohmann transfers. The method of determining the optimal transfers using the calculus of variations is presented. For every possible Hohmann window, specified as a continuous function of the longitude of perihelion of the Hohmann trajectory, the corresponding numerically exact optimal two-impulse transfers are given in graphical form. The cases for which the optimal two-impulse transfer is the absolute optimal, and those for which a three-impulse transfer provides the absolute optimal transfer are indicated. This information furnishes everything necessary for quick and accurate orbit calculations for preliminary Venus mission analysis. This makes it possible to use the actual optimal transfers for advanced planning in place of the standard Hohmann transfers.
Data-optimized source modeling with the Backwards Liouville Test–Kinetic method
Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.; ...
2017-09-14
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
Generalized t-statistic for two-group classification.
Komori, Osamu; Eguchi, Shinto; Copas, John B
2015-06-01
In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
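The entropy-based sorting that KECA performs, and that OKECA refines with an extra rotation, can be sketched from the Renyi quadratic entropy estimator: each kernel eigenpair contributes (sqrt(lambda_i) * e_i^T 1)^2 to the estimate, and components are kept in order of that contribution rather than by eigenvalue. Below is a minimal sketch of the plain KECA selection step only, with an RBF kernel and a hypothetical length scale; the OKECA rotation, its gradient-ascent optimization, and the kernel-parameter selection rules discussed in the brief are not shown.

```python
import numpy as np

def keca_features(X, n_components=2, length_scale=1.0):
    # RBF (Gaussian) kernel matrix
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * length_scale**2))
    lam, E = np.linalg.eigh(K)                  # eigenvalues ascending
    lam, E = lam[::-1], E[:, ::-1]
    # Entropy contribution of each eigenpair: (sqrt(lambda_i) * sum(e_i))^2
    contrib = (np.sqrt(np.clip(lam, 0, None)) * E.sum(axis=0)) ** 2
    order = np.argsort(contrib)[::-1][:n_components]   # sort by entropy, not variance
    # Project onto the selected eigen-directions (kernel PCA-style training projection)
    return E[:, order] * np.sqrt(np.clip(lam[order], 0, None))

X = np.random.default_rng(1).normal(size=(100, 5))
Z = keca_features(X, n_components=2)
print(Z.shape)  # (100, 2)
```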
NASA Astrophysics Data System (ADS)
Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei
2018-07-01
Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulse component of the signal. The filter coefficients that greatly influence the performance of minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's performance for enhancing the impulses in fault diagnosis (namely, for faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated signals and on experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
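Classical minimum entropy deconvolution finds an FIR filter that maximizes the impulsiveness of the filtered signal (commonly measured by kurtosis) through an iterative normal-equations update; the abstract replaces that iteration with a particle swarm search over the filter coefficients. The sketch below assumes kurtosis as the objective and a plain PSO on a toy bearing-like signal; the generalized spherical coordinate transformation used in the paper is omitted.

```python
import numpy as np

def kurtosis(y):
    y = y - y.mean()
    return np.mean(y**4) / (np.mean(y**2) ** 2 + 1e-12)

def pso_med(x, filt_len=16, n_particles=30, iters=100, seed=0):
    """Search FIR filter coefficients maximizing kurtosis of the filtered signal."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(n_particles, filt_len))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([kurtosis(np.convolve(x, p, mode="valid")) for p in pos])
    for _ in range(iters):
        g = pbest[np.argmax(pbest_f)]                       # global best filter so far
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        f = np.array([kurtosis(np.convolve(x, p, mode="valid")) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
    return pbest[np.argmax(pbest_f)]

# Toy signal: periodic impulses (a crude bearing-fault surrogate) smeared by an
# exponential path response and buried in noise; the filter should sharpen them.
rng = np.random.default_rng(2)
impulses = np.zeros(2000); impulses[::100] = 1.0
x = np.convolve(impulses, np.exp(-np.arange(30) / 5.0), mode="same") + 0.2 * rng.normal(size=2000)
h = pso_med(x)
print("kurtosis before:", kurtosis(x), "after:", kurtosis(np.convolve(x, h, mode="valid")))
```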
Standardization of Solar Mirror Reflectance Measurements - Round Robin Test: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyen, S.; Lupfert, E.; Fernandez-Garcia, A.
2010-10-01
Within the SolarPaces Task III standardization activities, DLR, CIEMAT, and NREL have concentrated on optimizing the procedure to measure the reflectance of solar mirrors. From this work, the laboratories have developed a clear definition of the method and requirements needed of commercial instruments for reliable reflectance results. A round robin test was performed between the three laboratories with samples that represent all of the commercial solar mirrors currently available for concentrating solar power (CSP) applications. The results show surprisingly large differences in hemispherical reflectance (sh) of 0.007 and specular reflectance (ss) of 0.004 between the laboratories. These differences indicate the importance of minimum instrument requirements and standardized procedures. Based on these results, the optimal procedure will be formulated and validated with a new round robin test in which a better accuracy is expected. Improved instruments and reference standards are needed to reach the necessary accuracy for cost and efficiency calculations.
Nutescu, Edith A; Wittkowsky, Ann K; Burnett, Allison; Merli, Geno J; Ansell, Jack E; Garcia, David A
2013-05-01
To provide recommendations for optimized anticoagulant therapy in the inpatient setting and outline broad elements that need to be in place for effective management of anticoagulant therapy in hospitalized patients; the guidelines are designed to promote optimization of patient clinical outcomes while minimizing the risks for potential anticoagulation-related errors and adverse events. The medical literature was reviewed using MEDLINE (1946-January 2013), EMBASE (1980-January 2013), and PubMed (1947-January 2013) for topics and key words including, but not limited to, standards of practice, national guidelines, patient safety initiatives, and regulatory requirements pertaining to anticoagulant use in the inpatient setting. Non-English-language publications were excluded. Specific MeSH terms used include algorithms, anticoagulants/administration and dosage/adverse effects/therapeutic use, clinical protocols/standards, decision support systems, drug monitoring/methods, humans, inpatients, efficiency/ organizational, outcome and process assessment (health care), patient care team/organization and administration, program development/standards, quality improvement/organization and administration, thrombosis/ drug therapy, thrombosis/prevention and control, risk assessment/standards, patient safety/standards, and risk management/methods. Because of this document's scope, the medical literature was searched using a variety of strategies. When possible, recommendations are supported by available evidence; however, because this paper deals with processes and systems of care, high-quality evidence (eg, controlled trials) is unavailable. In these cases, recommendations represent the consensus opinion of all authors and are endorsed by the Board of Directors of the Anticoagulation Forum, an organization dedicated to optimizing anticoagulation care. The board is composed of physicians, pharmacists, and nurses with demonstrated expertise and experience in the management of patients receiving anticoagulation therapy. Recommendations for delivering optimized inpatient anticoagulation therapy were developed collaboratively by the authors and are summarized in 8 key areas: (1) process, (2) accountability, (3) integration, (4) standards of practice, (5) provider education and competency, (6) patient education, (7) care transitions, and (8) outcomes. Recommendations are intended to inform the development of coordinated care systems containing elements with demonstrated benefit in improvement of anticoagulation therapy outcomes. Recommendations for delivering optimized inpatient anticoagulation therapy are intended to apply to all clinicians involved in the care of hospitalized patients receiving anticoagulation therapy. Anticoagulants are high-risk medications associated with a significant rate of medication errors among hospitalized patients. Several national organizations have introduced initiatives to reduce the likelihood of patient harm associated with the use of anticoagulants. Health care organizations are under increasing pressure to develop systems to ensure the safe and effective use of anticoagulants in the inpatient setting. This document provides consensus guidelines for anticoagulant therapy in the inpatient setting and serves as a companion document to prior guidelines relevant for outpatients.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Tian, Y. M.; Wang, K. Y.; Li, G.; Zou, X. W.; Chai, Y. S.
2017-09-01
This study focused on an optimization method for a ceramic proppant material with both low cost and high performance that met the requirements of the Chinese Petroleum and Gas Industry Standard (SY/T 5108-2006). An orthogonal experimental design, L9(3^4), was employed to study the significance ranking of three factors: the weight ratio of white clay to bauxite, the dolomite content, and the sintering temperature. For the crush resistance, both the range analysis and the variance analysis indicated that the optimal experimental condition was a white clay to bauxite weight ratio of 3/7, a dolomite content of 3 wt.%, and a sintering temperature of 1350°C. For the bulk density, the most important factor was the sintering temperature, followed by the dolomite content, and then the ratio of white clay to bauxite.
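Range analysis of an L9(3^4) orthogonal experiment reduces to computing, for each factor, the mean response at each of its three levels and ranking the factors by the spread (range) of those level means; the level with the best mean is the recommended setting. A generic sketch with a standard L9 column assignment but hypothetical responses, not the paper's data:

```python
import numpy as np

# Standard L9 orthogonal-array columns (rows = runs, entries = level index 0..2)
# and a hypothetical response (e.g., crush resistance) for each run.
levels = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
response = np.array([6.1, 5.8, 5.2, 6.5, 6.9, 6.0, 7.2, 7.0, 6.6])
factors = ["clay/bauxite ratio", "dolomite content", "temperature"]

for j, name in enumerate(factors):
    level_means = [response[levels[:, j] == k].mean() for k in range(3)]
    spread = max(level_means) - min(level_means)   # larger range -> more influential factor
    best = int(np.argmax(level_means))             # best level if larger response is better
    print(f"{name:20s} level means={np.round(level_means, 2)}  range={spread:.2f}  best level={best}")
```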
Research on illumination uniformity of high-power LED array light source
NASA Astrophysics Data System (ADS)
Yu, Xiaolong; Wei, Xueye; Zhang, Ou; Zhang, Xinwei
2018-06-01
Uniform illumination is one of the most important problems that must be solved in the application of high-power LED arrays. A numerical optimization algorithm is applied to obtain the best LED array arrangement so that the light intensity on the target surface is evenly distributed. An evaluation function is set up from the standard deviation of the illuminance distribution; the particle swarm optimization algorithm is then utilized to optimize different arrays. Furthermore, the light intensity distribution is obtained by the optical ray tracing method. Finally, a hybrid array is designed and the optical ray tracing method is applied to simulate the array. The simulation results, which are consistent with the traditional theoretical calculation, show that the algorithm introduced in this paper is reasonable and effective.
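An evaluation function of the kind described, the standard deviation of illuminance over the target plane (normalized here by the mean), can be sketched with a Lambertian-type LED model E proportional to cos^m(theta)/r^2 before handing it to a particle swarm optimizer. The emission order m, target distance, and candidate layouts below are assumptions for illustration; the optimizer itself is not shown.

```python
import numpy as np

def uniformity_cost(led_xy, m=30.0, z=0.1, half_size=0.05, n=41):
    """Std/mean of illuminance on a square target plane at height z (meters)."""
    xs = np.linspace(-half_size, half_size, n)
    X, Y = np.meshgrid(xs, xs)
    E = np.zeros_like(X)
    for (lx, ly) in led_xy:                       # sum contributions of each LED
        r2 = (X - lx) ** 2 + (Y - ly) ** 2 + z ** 2
        cos_t = z / np.sqrt(r2)
        E += cos_t ** m * cos_t / r2              # order-m Lambertian emission, cosine incidence
    return E.std() / E.mean()                     # smaller value = more uniform illumination

# Compare a tight cluster against a spread-out 3x3 arrangement (illustrative only).
cluster = [(0.0, 0.0)] * 9
grid = [(dx, dy) for dx in (-0.03, 0.0, 0.03) for dy in (-0.03, 0.0, 0.03)]
print("cluster:", round(uniformity_cost(cluster), 3), "grid:", round(uniformity_cost(grid), 3))
```

A swarm optimizer would treat the LED coordinates (or spacings of a parameterized layout) as the particle position and minimize this cost.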
Spectrally optimal illuminations for diabetic retinopathy detection in retinal imaging
NASA Astrophysics Data System (ADS)
Bartczak, Piotr; Fält, Pauli; Penttinen, Niko; Ylitepsa, Pasi; Laaksonen, Lauri; Lensu, Lasse; Hauta-Kasari, Markku; Uusitalo, Hannu
2017-04-01
Retinal photography is a standard method for recording retinal diseases for subsequent analysis and diagnosis. However, the currently used white light or red-free retinal imaging does not necessarily provide the best possible visibility of different types of retinal lesions, important when developing diagnostic tools for handheld devices, such as smartphones. Using specifically designed illumination, the visibility and contrast of retinal lesions could be improved. In this study, spectrally optimal illuminations for diabetic retinopathy lesion visualization are implemented using a spectrally tunable light source based on digital micromirror device. The applicability of this method was tested in vivo by taking retinal monochrome images from the eyes of five diabetic volunteers and two non-diabetic control subjects. For comparison to existing methods, we evaluated the contrast of retinal images taken with our method and red-free illumination. The preliminary results show that the use of optimal illuminations improved the contrast of diabetic lesions in retinal images by 30-70%, compared to the traditional red-free illumination imaging.
Zeric Stosic, Marina Z; Jaksic, Sandra M; Stojanov, Igor M; Apic, Jelena B; Ratajac, Radomir D
2016-11-01
A high-performance liquid chromatography (HPLC) method with diode array detection (DAD) was optimized and validated for separation and determination of tetramethrin in an antiparasitic human shampoo. In order to optimize separation conditions, two different columns, different column oven temperatures, as well as mobile phase composition and ratio, were tested. The best separation was achieved on the Supelcosil LC-18-DB column (4.6 x 250 mm, 5 μm particle size), with a mobile phase of methanol:water (78:22, v/v) at a flow rate of 0.8 mL/min and at a temperature of 30°C. The detection wavelength of the detector was set at 220 nm. Under the optimum chromatographic conditions, the standard calibration curve showed good linearity (r² = 0.9997). The accuracy of the method, defined as the mean recovery of tetramethrin from the shampoo matrix, was 100.09%. The advantages of this method are that it can easily be used for the routine analysis of tetramethrin in pharmaceutical formulations and in pharmaceutical research involving tetramethrin.
Uauy, R; Casanello, P; Krause, B; Kuzanovic, J P; Corvalan, C
2013-09-01
Healthy growth in utero and after birth is fundamental for lifelong health and wellbeing. The World Health Organization (WHO) recently published standards for healthy growth from birth to 6 years of age; analogous standards for healthy fetal growth are not currently available. Current fetal growth charts in use are not true standards, since they are based on cross-sectional measurements of attained size under conditions that do not accurately reflect normal growth. In most cases, the pregnant populations and environments studied are far from ideal; thus the data are unlikely to reflect optimal fetal growth. A true standard should reflect how fetuses and newborns 'should' grow under ideal environmental conditions. The development of prescriptive intrauterine and newborn growth standards derived from the INTERGROWTH-21(st) Project provides the data that will allow us for the first time to establish what is 'normal' fetal growth. The INTERGROWTH-21(st) study centres provide the data set obtained under pre-established standardised criteria, and details of the methods used are also published. Multicentre study with sites in all major geographical regions of the world using a standard evaluation protocol. These standards will assess risk of abnormal size at birth and serve to evaluate potentially effective interventions to promote optimal growth beyond securing survival. The new normative standards have the potential to impact perinatal and neonatal survival and beyond, particularly in developing countries where fetal growth restriction is most prevalent. They will help us identify intrauterine growth restriction at earlier stages of development, when preventive or corrective strategies might be more effective than at present. These growth standards will take us one step closer to effective action in preventing and potentially reversing abnormal intrauterine growth. Achieving 'optimal' fetal growth requires that we act not only during pregnancy but that we optimize the maternal uterine environment from the time before conception, through embryonic development until fetal growth is complete. The remaining challenge is how 'early' will we be able to act, now that we can better monitor fetal growth. © 2013 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2013 RCOG.
Ahmed, Sameh; Alqurshi, Abdulmalik; Mohamed, Abdel-Maaboud Ismail
2018-07-01
A new robust and reliable high-performance liquid chromatography (HPLC) method with multi-criteria decision making (MCDM) approach was developed to allow simultaneous quantification of atenolol (ATN) and nifedipine (NFD) in content uniformity testing. Felodipine (FLD) was used as an internal standard (I.S.) in this study. A novel marriage between a new interactive response optimizer and a HPLC method was suggested for multiple response optimizations of target responses. An interactive response optimizer was used as a decision and prediction tool for the optimal settings of target responses, according to specified criteria, based on Derringer's desirability. Four independent variables were considered in this study: Acetonitrile%, buffer pH and concentration along with column temperature. Eight responses were optimized: retention times of ATN, NFD, and FLD, resolutions between ATN/NFD and NFD/FLD, and plate numbers for ATN, NFD, and FLD. Multiple regression analysis was applied in order to scan the influences of the most significant variables for the regression models. The experimental design was set to give minimum retention times, maximum resolution and plate numbers. The interactive response optimizer allowed prediction of optimum conditions according to these criteria with a good composite desirability value of 0.98156. The developed method was validated according to the International Conference on Harmonization (ICH) guidelines with the aid of the experimental design. The developed MCDM-HPLC method showed superior robustness and resolution in short analysis time allowing successful simultaneous content uniformity testing of ATN and NFD in marketed capsules. The current work presents an interactive response optimizer as an efficient platform to optimize, predict responses, and validate HPLC methodology with tolerable design space for assay in quality control laboratories. Copyright © 2018 Elsevier B.V. All rights reserved.
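Derringer's desirability, on which the reported composite value of 0.98156 is based, maps each response onto a 0 to 1 desirability (smaller-is-better for retention times, larger-is-better for resolutions and plate numbers) and combines them through a geometric mean. The bounds and response values below are hypothetical placeholders, not the paper's optimized settings.

```python
import numpy as np

def d_smaller_is_better(y, target, upper, s=1.0):
    # Desirability is 1 at/below the target and 0 at/above the upper bound.
    return float(np.clip((upper - y) / (upper - target), 0.0, 1.0)) ** s

def d_larger_is_better(y, lower, target, s=1.0):
    # Desirability is 0 at/below the lower bound and 1 at/above the target.
    return float(np.clip((y - lower) / (target - lower), 0.0, 1.0)) ** s

# Hypothetical responses with assumed acceptance bounds:
d = [
    d_smaller_is_better(3.2, target=2.0, upper=6.0),     # retention time, min
    d_smaller_is_better(5.1, target=3.0, upper=8.0),
    d_larger_is_better(4.8, lower=1.5, target=5.0),       # resolution
    d_larger_is_better(9500, lower=2000, target=10000),   # plate number
]
composite = float(np.prod(d)) ** (1.0 / len(d))            # geometric mean (Derringer)
print("individual:", [round(x, 3) for x in d], "composite:", round(composite, 3))
```

An interactive response optimizer of the kind described would search the factor space (acetonitrile %, buffer pH and concentration, column temperature) for the settings whose predicted responses maximize this composite desirability.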
A new look at the simultaneous analysis and design of structures
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1994-01-01
The minimum weight optimization of structural systems, subject to strength and displacement constraints as well as size side constraints, was investigated by the Simultaneous ANalysis and Design (SAND) approach. As an optimizer, the code NPSOL, which is based on a sequential quadratic programming (SQP) algorithm, was used. The structures were modeled by the finite element method. The finite element related input to NPSOL was automatically generated from the input decks of such standard FEM/optimization codes as NASTRAN or ASTROS, with the stiffness matrices, at present, extracted from the FEM code ANALYZE. In order to avoid ill-conditioned matrices that can be encountered when the global stiffness equations are used as additional nonlinear equality constraints in the SAND approach (with the displacements as additional variables), the matrix displacement method was applied. In this approach, the element stiffness equations are used as constraints instead of the global stiffness equations, in conjunction with the nodal force equilibrium equations. This approach adds the element forces as variables to the system. Since, for complex structures and the associated large and very sparse matrices, the execution times of the optimization code became excessive due to the large number of required constraint gradient evaluations, the Kreisselmeier-Steinhauser function approach was used to decrease the computational effort by reducing the nonlinear equality constraint system to essentially a single combined constraint equation. As the linear equality and inequality constraints require much less computational effort to evaluate, they were kept in their previous form to limit the complexity of the KS function evaluation. To date, the standard three-bar, ten-bar, and 72-bar trusses have been tested. For the standard SAND approach, correct results were obtained for all three trusses although convergence became slower for the 72-bar truss. When the matrix displacement method was used, correct results were still obtained, but the execution times became excessive due to the large number of constraint gradient evaluations required. Using the KS function, the computational effort dropped, but the optimization seemed to become less robust. The investigation of this phenomenon is continuing. As an alternate approach, the code MINOS for the optimization of sparse matrices can be applied to the problem in lieu of the Kreisselmeier-Steinhauser function. This investigation is underway.
Matteson, Brent S; Hanson, Susan K; Miller, Jeffrey L; Oldham, Warren J
2015-04-01
An optimized method was developed to analyze environmental soil and sediment samples for (237)Np, (239)Pu, and (240)Pu by ICP-MS using a (242)Pu isotope dilution standard. The high yield, short time frame required for analysis, and the commercial availability of the (242)Pu tracer are significant advantages of the method. Control experiments designed to assess method uncertainty, including variation in inter-element fractionation that occurs during the purification protocol, suggest that the overall precision for measurements of (237)Np is typically on the order of ± 5%. Measurements of the (237)Np concentration in a Peruvian Soil blank (NIST SRM 4355) spiked with a known concentration of (237)Np tracer confirmed the accuracy of the method, agreeing well with the expected value. The method has been used to determine neptunium and plutonium concentrations in several environmental matrix standard reference materials available from NIST: SRM 4357 (Radioactivity Standard), SRM 1646a (Estuarine Sediment) and SRM 2702 (Inorganics in Marine Sediment). Copyright © 2015 Elsevier Ltd. All rights reserved.
Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula
2015-01-01
Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567
Vertzoni, M V; Reppas, C; Archontaki, H A
2006-07-24
An isocratic high-performance liquid chromatographic method with detection at 240 nm was developed, optimized and validated for the determination of ketoconazole in canine plasma. 9-Acetylanthracene was used as internal standard. A Hypersil BDS RP-C18 column (250 mm x 4.6 mm, 5 microm particle size), was equilibrated with a mobile phase composed of methanol, water and diethylamine 74:26:0.1 (v/v/v). Its flow rate was 1 ml/min. The elution time for ketoconazole and 9-acetylanthracene was approximately 9 and 8 min, respectively. Calibration curves of ketoconazole in plasma were linear in the concentration range of 0.015-10 microg/ml. Limits of detection and quantification in plasma were 5 and 15 ng/ml, respectively. Recovery was greater than 95%. Intra- and inter-day relative standard deviation for ketoconazole in plasma was less than 3.1 and 4.7%, respectively. This method was applied to the determination of ketoconazole plasma levels after administration of a commercially available tablet to dogs.
León, Ileana R.; Schwämmle, Veit; Jensen, Ole N.; Sprenger, Richard R.
2013-01-01
The majority of mass spectrometry-based protein quantification studies uses peptide-centric analytical methods and thus strongly relies on efficient and unbiased protein digestion protocols for sample preparation. We present a novel objective approach to assess protein digestion efficiency using a combination of qualitative and quantitative liquid chromatography-tandem MS methods and statistical data analysis. In contrast to previous studies we employed both standard qualitative as well as data-independent quantitative workflows to systematically assess trypsin digestion efficiency and bias using mitochondrial protein fractions. We evaluated nine trypsin-based digestion protocols, based on standard in-solution or on spin filter-aided digestion, including new optimized protocols. We investigated various reagents for protein solubilization and denaturation (dodecyl sulfate, deoxycholate, urea), several trypsin digestion conditions (buffer, RapiGest, deoxycholate, urea), and two methods for removal of detergents before analysis of peptides (acid precipitation or phase separation with ethyl acetate). Our data-independent quantitative liquid chromatography-tandem MS workflow quantified over 3700 distinct peptides with 96% completeness between all protocols and replicates, with an average 40% protein sequence coverage and an average of 11 peptides identified per protein. Systematic quantitative and statistical analysis of physicochemical parameters demonstrated that deoxycholate-assisted in-solution digestion combined with phase transfer allows for efficient, unbiased generation and recovery of peptides from all protein classes, including membrane proteins. This deoxycholate-assisted protocol was also optimal for spin filter-aided digestions as compared with existing methods. PMID:23792921
Boone, J Scott; Guan, Bing; Vigo, Craig; Boone, Tripp; Byrne, Christian; Ferrario, Joseph
2014-06-06
A trace analytical method was developed for the determination of seventeen specific perfluorinated chemicals (PFCs) in environmental and drinking waters. The objectives were to optimize an isotope-dilution method to increase the precision and accuracy of the analysis of the PFCs and to eliminate the need for matrix-matched standards. A 250 mL sample of environmental or drinking water was buffered to a pH of 4, spiked with labeled surrogate standards, extracted through solid phase extraction cartridges, and eluted with ammonium hydroxide in methyl tert-butyl ether: methanol solution. The sample eluents were concentrated to volume and analyzed by liquid chromatography/tandem mass spectrometry (LC-MS/MS). The lowest concentration minimal reporting levels (LCMRLs) for the seventeen PFCs were calculated and ranged from 0.034 to 0.600 ng/L for surface water and from 0.033 to 0.640 ng/L for drinking water. The relative standard deviations (RSDs) for all compounds were <20% for all concentrations above the LCMRL. The method proved effective and cost efficient and addressed the problems with the recovery of perfluorobutanoic acid (PFBA) and other short chain PFCs. Various surface water and drinking water samples were used during method development to optimize this method. The method was used to evaluate samples from the Mississippi River at New Orleans and drinking water samples from a private residence in that same city. The method was also used to determine PFC contamination in well water samples from a fire training area where perfluorinated foams were used in training to extinguish fires. Published by Elsevier B.V.
Blessy, S A Praylin Selva; Sulochana, C Helen
2015-01-01
Segmentation of brain tumors from Magnetic Resonance Imaging (MRI) is very complicated due to the structural complexity of the human brain and the presence of intensity inhomogeneities. The aim of this work was to propose a method that effectively segments brain tumors from MR images and to evaluate the performance of the unsupervised optimal fuzzy clustering (UOFC) algorithm for this task. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities, followed by feature extraction, feature fusion and clustering. Different validation measures are used to evaluate the performance of the proposed method with different clustering algorithms. The proposed method using the UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with the UOFC algorithm effectively segments brain tumors from MR images.
A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise.
Jin, Qibing; Wang, Hehe; Su, Qixin; Jiang, Beiyan; Liu, Qie
2018-01-01
In this paper, we study the system identification of multi-input multi-output (MIMO) Hammerstein processes under the typical heavy-tailed noise. To the best of our knowledge, there is no general analytical method to solve this identification problem. Motivated by this, we propose a general identification method to solve this problem based on a Gaussian-Mixture Distribution intelligent optimization algorithm (GMDA). The nonlinear part of Hammerstein process is modeled by a Radial Basis Function (RBF) neural network, and the identification problem is converted to an optimization problem. To overcome the drawbacks of analytical identification method in the presence of heavy-tailed noise, a meta-heuristic optimization algorithm, Cuckoo search (CS) algorithm is used. To improve its performance for this identification problem, the Gaussian-mixture Distribution (GMD) and the GMD sequences are introduced to improve the performance of the standard CS algorithm. Numerical simulations for different MIMO Hammerstein models are carried out, and the simulation results verify the effectiveness of the proposed GMDA. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementation. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
Optimized universal color palette design for error diffusion
NASA Astrophysics Data System (ADS)
Kolpatzik, Bernd W.; Bouman, Charles A.
1995-04-01
Currently, many low-cost computers can only display a palette of 256 colors simultaneously. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
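For readers unfamiliar with the rendering side, the sketch below shows classic Floyd-Steinberg error diffusion onto a fixed palette, the stage at which a universal palette would be used. The paper's actual contribution, designing that palette with sequential scalar quantization in an opponent color space and implementing it with lookup tables, is not reproduced; the small hand-built RGB palette and brute-force nearest-color search here are illustrative assumptions.

```python
# Minimal Floyd-Steinberg error diffusion with a fixed (universal) palette.
import numpy as np

def error_diffuse(image, palette):
    """image: float array (H, W, 3) in [0, 1]; palette: (K, 3) in [0, 1]."""
    img = image.astype(float).copy()
    h, w, _ = img.shape
    out = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            # Quantize the current pixel to its nearest palette color.
            k = np.argmin(np.sum((palette - old) ** 2, axis=1))
            out[y, x] = k
            err = old - palette[k]
            # Diffuse the quantization error to unprocessed neighbors
            # with the classic Floyd-Steinberg weights (7, 3, 5, 1)/16.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out  # palette indices per pixel

# Toy usage: dither a random image onto an 8-color palette.
palette = np.array([[r, g, b] for r in (0, 1) for g in (0, 1) for b in (0, 1)], float)
indices = error_diffuse(np.random.rand(32, 32, 3), palette)
```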
NASA Astrophysics Data System (ADS)
Zawadowicz, M. A.; Del Negro, L. A.
2010-12-01
Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv levels, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that is often prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of the ambient gaseous matrix, a cost-effective technique for selective VOC extraction that is accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.
Orthoclinostatic test as one of the methods for evaluating the human functional state
NASA Technical Reports Server (NTRS)
Doskin, V. A.; Gissen, L. D.; Bomshteyn, O. Z.; Merkin, E. N.; Sarychev, S. B.
1980-01-01
The possible use of different methods to evaluate autonomic regulation in hygienic studies was examined. The simplest and most objective tests were selected. It is shown that the use of the optimized standards not only makes it possible to detect unfavorable shifts earlier, but also permits a quantitative characterization of the degree of impairment in the state of the organism. Precise interpretation of the observed shifts is possible. Results indicate that the standards can serve as one of the criteria for evaluating the state and can be widely used in hygienic practice.
The freshwater amphipod Hyalella azteca is a common organism used for sediment toxicity testing in the United States and elsewhere. Standard methods for 10-d and 42-d toxicity tests with H. azteca were last revised and published by USEPA/ASTM in 2000. Under the methods in the man...
The freshwater amphipod, Hyalella azteca, is a common organism used for sediment toxicity testing. Standard methods for 10-d and 42-d sediment toxicity tests with H. azteca were last revised and published by USEPA/ASTM in 2000. While Hyalella azteca methods exist for sediment tox...
ERIC Educational Resources Information Center
Parker, Patrick D.; Beers, Brandon; Vergne, Matthew J.
2017-01-01
Laboratory experiments were developed to introduce students to the quantitation of drugs of abuse by high performance liquid chromatography-tandem mass spectrometry (LC-MS/MS). Undergraduate students were introduced to internal standard quantitation and the LC-MS/MS method optimization for cocaine. Cocaine extracted from paper currency was…
Chi, Shuyao; Wu, Dike; Sun, Jinhong; Ye, Ruhan; Wang, Xiaoyan
2014-05-01
A headspace gas chromatography (HS-GC) method was developed for the simultaneous determination of seven residual solvents (petroleum ether (60-90 °C), acetone, ethyl acetate, methanol, methylene chloride, ethanol and butyl acetate) in bovis calculus artifactus. The DB-WAX capillary column and flame ionization detector (FID) were used for the separation and detection of the residual solvents, and the internal standard method was used for the quantification. The chromatographic conditions, such as equilibrium temperature and equilibrium time, were optimized. Under the optimized conditions, all of the seven residual solvents showed good linear relationships with good correlation coefficients (not less than 0.9993) in the prescribed concentration range. At three spiked levels, the recoveries for the seven residual solvents were 94.7%-105.2% with the relative standard deviations (RSDs) less than 3.5%. The limits of detection (LODs) of the method were 0.43-5.23 mg/L, and the limits of quantification (LOQs) were 1.25-16.67 mg/L. The method is simple, rapid, sensitive and accurate, and is suitable for the simultaneous determination of the seven residual solvents in bovis calculus artifactus.
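For reference, internal-standard quantification of the kind used here conventionally rests on a response factor determined from calibration standards; the textbook relation is sketched below (the notation is generic and is not taken from this paper).

```latex
% Generic internal-standard relations (assumed textbook form, not quoted from
% the paper): A = peak area, C = concentration, subscripts a = analyte,
% is = internal standard; F is the response factor from calibration.
F = \frac{A_a / C_a}{A_{is} / C_{is}}, \qquad
C_{a,\text{sample}} = \frac{A_a}{A_{is}} \cdot \frac{C_{is}}{F}
```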
Measuring Thermal Conductivity at LH2 Temperatures
NASA Technical Reports Server (NTRS)
Selvidge, Shawn; Watwood, Michael C.
2004-01-01
For many years, the National Institute of Standards and Technology (NIST) produced reference materials for materials testing. One such reference material was intended for use with a guarded hot plate apparatus designed to meet the requirements of ASTM C177-97, "Standard Test Method for Steady-State Heat Flux Measurements and Thermal Transmission Properties by Means of the Guarded-Hot-Plate Apparatus." This apparatus can be used to test materials in various gaseous environments from atmospheric pressure to a vacuum. It allows the thermal transmission properties of insulating materials to be measured from just above ambient temperature down to temperatures below that of liquid hydrogen. However, NIST did not generate data below 77 K for the reference material in question. This paper describes a test method used at NASA's Marshall Space Flight Center (MSFC) to optimize thermal conductivity measurements during the development of thermal protection systems. The test method extends the usability range of this reference material by generating data at temperatures lower than 77 K. Information provided by this test is discussed, as are the capabilities of the MSFC Hydrogen Test Facility, where advanced methods for materials testing are routinely developed and optimized in support of aerospace applications.
Design of optimal groundwater remediation systems under flexible environmental-standard constraints.
Fan, Xing; He, Li; Lu, Hong-Wei; Li, Jing
2015-01-01
In developing optimal groundwater remediation strategies, limited effort has been devoted to handling uncertainty in environmental quality standards. When such uncertainty is not considered, either overly optimistic or overly pessimistic optimization strategies may be developed, likely leading to rigid remediation strategies. This study advances a mathematical programming modeling approach for optimizing groundwater remediation design. The approach not only prevents the formulation of overly optimistic or overly pessimistic strategies but also provides a satisfaction level indicating the degree to which the environmental quality standard is satisfied. It may therefore be expected to be significantly more acceptable to decision makers than approaches that do not consider standard uncertainty. The proposed approach is applied to a petroleum-contaminated site in western Canada. Results from the case study show that (1) the peak benzene concentrations can always satisfy the environmental standard under the optimal strategy, (2) the pumping rates of all wells decrease under a relaxed standard or long-term remediation approach, (3) the pumping rates are less affected by environmental quality constraints under short-term remediation, and (4) increased flexibility in environmental standards has a reduced effect on the optimal remediation strategy.
NASA Astrophysics Data System (ADS)
Zheng, Qingyu; Zhang, Guoqiang; Che, Kai; Shao, Shikuan; Li, Yanfei
2017-08-01
Taking the denitration system of a 660 MW generator unit as the study object, an optimization and adjustment method is designed to control ammonia slip: the ammonia injection system is adjusted according to the NO concentration distribution at the inlet and outlet of the denitration system so that the injected ammonia is distributed evenly. The results show that this method can effectively improve the NO concentration distribution at the outlet of the denitration system and decrease both the ammonia injection amount and the ammonia slip concentration, reducing the adverse impact of the SCR denitration process on the air preheater and ensuring that NO emissions meet the standard for safe production.
Path Planning Method in Multi-obstacle Marine Environment
NASA Astrophysics Data System (ADS)
Zhang, Jinpeng; Sun, Hanxv
2017-12-01
In this paper, an improved particle swarm optimization algorithm is proposed for the application of an underwater robot in a complex marine environment. The path planning not only considers obstacle avoidance but also accounts for the effects of current direction and magnitude on the robot's dynamic performance. The algorithm uses a trunk binary tree structure to construct the path search space, and an A* heuristic search is used within that space to find a reference path for evaluation. The particle swarm algorithm then optimizes the path by adjusting the evaluation function, which makes the underwater robot easier to control while navigating in the current and reduces its energy consumption.
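A bare-bones particle swarm loop over waypoint coordinates is sketched below to illustrate the "PSO refines a candidate path" step. The trunk-binary-tree search space, the A*-found reference path, and the current and obstacle terms of the real evaluation function are not reproduced; the `path_cost` placeholder simply sums segment lengths between an assumed start and goal.

```python
# Minimal particle swarm optimization over intermediate path waypoints.
import numpy as np

START, GOAL = np.array([0.0, 0.0]), np.array([10.0, 10.0])

def path_cost(flat_waypoints):
    """Placeholder evaluation function: total length of the polyline path."""
    pts = np.vstack([START, flat_waypoints.reshape(-1, 2), GOAL])
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def pso(obj, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 10, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia plus pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([obj(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, obj(gbest)

best_waypoints, best_len = pso(path_cost, dim=3 * 2)  # 3 intermediate waypoints
```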
Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
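The optimization ingredients named in the abstract, gradient descent on fluence weights from very noisy few-history gradient estimates stabilized by gradient rescaling and momentum, can be illustrated in isolation as below. The Monte Carlo transport, beamlet dose deposition, and clinical objectives are not modeled; the toy quadratic dose-mismatch objective and noise level are assumptions for illustration only.

```python
# Momentum gradient descent on noisy gradient estimates, with rescaling.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, 0.5, 0.0, 0.8])   # toy "prescribed dose" per voxel
A = rng.random((4, 6))                     # toy fluence-to-dose mapping

def noisy_gradient(w, noise=2.0):
    """Gradient of 0.5*||A w - target||^2, corrupted by stochastic noise."""
    g = A.T @ (A @ w - target)
    return g + rng.normal(0.0, noise, size=g.shape)

def momentum_descent(w0, lr=0.05, beta=0.9, n_iter=500):
    w, velocity = w0.copy(), np.zeros_like(w0)
    for _ in range(n_iter):
        g = noisy_gradient(w)
        # Rescale the stochastic gradient to unit norm so a single noisy
        # estimate cannot produce an arbitrarily large step.
        g = g / (np.linalg.norm(g) + 1e-12)
        velocity = beta * velocity + (1.0 - beta) * g  # momentum accumulation
        w = np.maximum(w - lr * velocity, 0.0)         # keep fluences non-negative
    return w

w_opt = momentum_descent(np.ones(6))
```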
Moreno-Vilet, Lorena; Bostyn, Stéphane; Flores-Montaño, Jose-Luis; Camacho-Ruiz, Rosa-María
2017-12-15
Agave fructans are increasingly important in the food industry and nutrition sciences as a potential ingredient of functional foods, so practical analysis tools to characterize them are needed. In view of the importance of molecular weight for the functional properties of agave fructans, the purpose of this study is to optimize a method to determine their molecular weight distribution by HPLC-SEC for industrial application. The optimization was carried out using a simplex method. The optimum conditions were a column temperature of 61.7 °C, tri-distilled water without salt adjusted to pH 5.4 as the mobile phase, and a flow rate of 0.36 mL/min. The exclusion range covers degrees of polymerization from 1 to 49 (180-7966 Da). The proposed method represents an accurate and fast alternative to standard methods involving multiple detection or hydrolysis of fructans. Industrial applications of this technique might include quality control, the study of fractionation processes and determination of purity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Automated geometric optimization for robotic HIFU treatment of liver tumors.
Williamson, Tom; Everitt, Scott; Chauhan, Sunita
2018-05-01
High intensity focused ultrasound (HIFU) represents a non-invasive method for the destruction of cancerous tissue within the body. Heating of targeted tissue by focused ultrasound transducers results in the creation of ellipsoidal lesions at the target site, the locations of which can have a significant impact on treatment outcomes. To this end, this work describes a method for the optimization of lesion positions within arbitrary tumors, under specific anatomical constraints. A force-based optimization framework was extended to the case of arbitrary tumor position and constrained orientation. Analysis of the approximate reachable treatment volume for the specific case of treatment of liver tumors was performed based on four transducer configurations, and constraint conditions were derived. Evaluation was completed utilizing simplified spherical and ellipsoidal tumor models and randomly generated tumor volumes. The total volume treated, lesion overlap and healthy tissue ablated were evaluated. Two evaluation scenarios were defined and optimized treatment plans assessed. The optimization framework resulted in improvements of up to 10% in tumor volume treated, and reductions of up to 20% in healthy tissue ablated as compared to the standard lesion rastering approach. Generation of optimized plans proved feasible for both sub- and intercostally located tumors. This work describes an optimized method for the planning of lesion positions during HIFU treatment of liver tumors. The approach allows the determination of optimal lesion locations and orientations, and can be applied to arbitrary tumor shapes and sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Lock, Martin; Alvira, Mauricio R; Chen, Shu-Jen; Wilson, James M
2014-04-01
Accurate titration of adeno-associated viral (AAV) vector genome copies is critical for ensuring correct and reproducible dosing in both preclinical and clinical settings. Quantitative PCR (qPCR) is the current method of choice for titrating AAV genomes because of the simplicity, accuracy, and robustness of the assay. However, issues with qPCR-based determination of self-complementary AAV vector genome titers, due to primer-probe exclusion through genome self-annealing or through packaging of prematurely terminated defective interfering (DI) genomes, have been reported. Alternative qPCR, gel-based, or Southern blotting titering methods have been designed to overcome these issues but may represent a backward step from standard qPCR methods in terms of simplicity, robustness, and precision. Droplet digital PCR (ddPCR) is a new PCR technique that directly quantifies DNA copies with an unparalleled degree of precision and without the need for a standard curve or for a high degree of amplification efficiency; all properties that lend themselves to the accurate quantification of both single-stranded and self-complementary AAV genomes. Here we compare a ddPCR-based AAV genome titer assay with a standard and an optimized qPCR assay for the titration of both single-stranded and self-complementary AAV genomes. We demonstrate absolute quantification of single-stranded AAV vector genomes by ddPCR with up to 4-fold increases in titer over a standard qPCR titration but with equivalent readout to an optimized qPCR assay. In the case of self-complementary vectors, ddPCR titers were on average 5-, 1.9-, and 2.3-fold higher than those determined by standard qPCR, optimized qPCR, and agarose gel assays, respectively. Droplet digital PCR-based genome titering was superior to qPCR in terms of both intra- and interassay precision and is more resistant to PCR inhibitors, a desirable feature for in-process monitoring of early-stage vector production and for vector genome biodistribution analysis in inhibitory tissues.
Pezo Nikolić, Borka; Lovrić, Daniel; Ljubas Maček, Jana; Rešković Lukšić, Vlatka; Matasić, Richard; Šeparović Hanževački, Jadranka
2017-12-01
Some manufacturers do not provide automated intracardiac electrogram method (IEGM) systems for atrioventricular (AV) and interventricular (VV) delay optimization in cardiac resynchronization therapy (CRT). We aimed to evaluate the accuracy of the manual IEGM method in 48 patients previously implanted with a Medtronic Syncra CRT. All patients underwent standard device interrogation followed by CRT optimization by the IEGM method and by echocardiography one month after implantation. The mean patient age was 60.7±11.8 years and there were 33 (68.8%) males. After CRT implantation, the left ventricular ejection fraction increased from 28.0±7.9% to 39.1±11.0% (p<0.001). Optimal aortic flow Velocity Time Integral (aVTI) was obtained when VV was set to 20-50 ms left ventricular pre-activation. There was a strong correlation between VV values determined by echocardiography and IEGM (R=0.823, p<0.001). We found no significant difference in AV, VV and aVTI values between the echocardiography and IEGM methods. However, IEGM was significantly less time-consuming than echocardiography [20 (10-28) vs. 40 (35-60) minutes, p<0.001]. The manual IEGM method may be a good alternative to echocardiography and the automated IEGM method. It also emphasizes the need for implementation of automated IEGM systems in as many CRT devices as possible.
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's Quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
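A short sketch of Broyden's rank-one update applied to the gradient of a scalar limit-state function, the ingredient BFORM substitutes for repeated finite differencing, is given below. The FORM iteration itself, probability transformations, and convergence logic are not reproduced; the toy limit-state function and step are assumptions for illustration.

```python
# Broyden rank-one update of a gradient estimate for a scalar function g(x).
import numpy as np

def g(x):                                   # toy limit-state function
    return 3.0 - x[0] ** 2 - 0.5 * x[1]

def finite_difference_grad(f, x, h=1e-6):   # the expensive baseline
    fx = f(x)
    return np.array([(f(x + h * e) - fx) / h for e in np.eye(len(x))])

def broyden_grad_update(grad_prev, x_prev, g_prev, x_new, g_new):
    """Rank-one update: correct grad_prev so it reproduces the observed
    change in g along the step dx, leaving other directions untouched."""
    dx = x_new - x_prev
    dg = g_new - g_prev
    return grad_prev + ((dg - grad_prev @ dx) / (dx @ dx)) * dx

# Usage: start from one finite-difference gradient, then refresh it with
# single function evaluations as the design point moves.
x0 = np.array([1.0, 1.0])
grad = finite_difference_grad(g, x0)
x1 = x0 + np.array([0.1, -0.05])
grad = broyden_grad_update(grad, x0, g(x0), x1, g(x1))
```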
Fontana, Ariel R; Patil, Sangram H; Banerjee, Kaushik; Altamirano, Jorgelina C
2010-04-28
A fast and effective microextraction technique is proposed for preconcentration of 2,4,6-trichloroanisole (2,4,6-TCA) from wine samples prior to gas chromatography-tandem mass spectrometry (GC-MS/MS) analysis. The proposed technique is based on ultrasonication (US) to promote emulsification during the extraction stage. Several variables influencing the relative response of the target analyte were studied and optimized. Under optimal experimental conditions, 2,4,6-TCA was quantitatively extracted, achieving enhancement factors (EF) ≥ 400 and limits of detection (LODs) of 0.6-0.7 ng L⁻¹ with relative standard deviations (RSDs) ≤ 11.3% when a 10 ng L⁻¹ 2,4,6-TCA standard-wine sample blend was analyzed. The calibration graphs for white and red wine were linear within the range of 5-1000 ng L⁻¹, and the coefficients of determination (r²) were ≥ 0.9995. Validation of the methodology was carried out by the standard addition method at two concentrations (10 and 50 ng L⁻¹), achieving recoveries >80% and indicating satisfactory robustness of the method. The methodology was successfully applied to the determination of 2,4,6-TCA in different wine samples.
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class comprises standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems is also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used, but these can be ameliorated using parallelization and problem dimension reduction strategies.
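The Pareto-dominance bookkeeping at the heart of any PMOGO method can be shown compactly: given candidate models scored on two objectives (for example, data misfit and a regularization term), keep the non-dominated set rather than a single weighted-sum minimizer. The sketch below does only that; the geophysical forward modeling, mesh or wireframe parameterizations, and the actual global optimizer are not reproduced, and the objective values are random placeholders.

```python
# Extracting the Pareto (non-dominated) front from a set of scored models.
import numpy as np

def dominates(a, b):
    """True if objective vector a is no worse than b everywhere and strictly
    better somewhere (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(objectives):
    """objectives: (n_models, n_objectives) array -> indices of the front."""
    n = objectives.shape[0]
    keep = []
    for i in range(n):
        if not any(dominates(objectives[j], objectives[i]) for j in range(n) if j != i):
            keep.append(i)
    return keep

# Toy usage: 50 candidate models scored on (data misfit, model roughness).
scores = np.random.rand(50, 2)
front = pareto_front(scores)   # suite of trade-off solutions to inspect
```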
David, Sophia; Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R; Afshar, Baharak; Underwood, Anthony; Fry, Norman K; Parkhill, Julian; Harrison, Timothy G
2016-08-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current "gold standard" typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard "typing panel," previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. Copyright © 2016 David et al.
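The "indices of discrimination" quoted above are conventionally Simpson's index of diversity in the Hunter-Gaston form; a sketch of that calculation from a list of type assignments follows. The labels below are made up, and it is assumed rather than quoted from the paper that this is the exact formula used.

```python
# Hunter-Gaston discriminatory index from a list of type assignments.
from collections import Counter

def discriminatory_index(type_labels):
    """Probability that two isolates drawn at random (without replacement)
    from the panel receive different type designations."""
    n = len(type_labels)
    counts = Counter(type_labels).values()
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Toy usage: 10 isolates assigned to sequence types (illustrative labels only).
labels = ["ST1", "ST1", "ST1", "ST47", "ST47", "ST62", "ST23", "ST23", "ST37", "ST42"]
print(round(discriminatory_index(labels), 3))
```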
Method optimization for fathead minnow (Pimephales promelas) liver S9 isolation
Standard protocols have been proposed to assess metabolic stability in rainbow trout liver S9 fractions. Using in vitro substrate depletion assays, in vitro intrinsic clearance rates can be calculated for a variety of study compounds. Existing protocols suggest potential adaptati...
Improving Physician-Patient Communication through Coaching of Simulated Encounters
ERIC Educational Resources Information Center
Ravitz, Paula; Lancee, William J.; Lawson, Andrea; Maunder, Robert; Hunter, Jonathan J.; Leszcz, Molyn; McNaughton, Nancy; Pain, Clare
2013-01-01
Objective: Effective communication between physicians and their patients is important in optimizing patient care. This project tested a brief, intensive, interactive medical education intervention using coaching and standardized psychiatric patients to teach physician-patient communication to family medicine trainees. Methods: Twenty-six family…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn
2015-03-28
The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase computational efficiency and thus expand the application of REMD simulation to larger protein systems.
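For orientation, the replica-exchange step whose replica count the hybrid method aims to reduce is the standard Metropolis swap between neighboring temperatures, sketched below. Only the exchange criterion is shown; the MD propagation, the explicit/implicit solvent switching, and the velocity-scaling trick of the paper are not modeled, and the energies and temperatures are illustrative values.

```python
# Metropolis acceptance test for swapping two temperature replicas.
import math
import random

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def attempt_swap(E_i, T_i, E_j, T_j):
    """Standard T-REMD criterion for exchanging configurations between
    replicas at temperatures T_i and T_j with potential energies E_i, E_j."""
    delta = (1.0 / (K_B * T_i) - 1.0 / (K_B * T_j)) * (E_j - E_i)
    # Accept if the swap lowers the combined weight, otherwise accept with
    # probability exp(-delta).
    return delta <= 0.0 or random.random() < math.exp(-delta)

# Toy usage: decide a swap between a 300 K and a 310 K replica.
random.seed(0)
swap = attempt_swap(E_i=-1200.0, T_i=300.0, E_j=-1195.0, T_j=310.0)
```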
NASA Astrophysics Data System (ADS)
Aad, G.; Abbott, B.; Abdinov, O.; Abdallah, J.; Abeloos, B.; Aben, R.; Abolins, M.; Aben, R.; Abolins, M.; AbouZeid, O. S.; Abraham, N. L.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Affolder, A. A.; Agatonovic-Jovin, T.; Agricola, J.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Verzini, M. J. Alconada; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allen, B. W.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Gonzalez, B. Alvarez; Piqueras, D. Álvarez; Alviggi, M. G.; Amadio, B. T.; Amako, K.; Coutinho, Y. Amaral; Amelung, C.; Amidei, D.; Santos, S. P. Amor Dos; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Bella, L. Aperio; Arabidze, G.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Armitage, L. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baak, M. A.; Baas, A. E.; Baca, M. J.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Baines, J. T.; Baker, O. K.; Baldin, E. M.; Balek, P.; Balestri, T.; Balli, F.; Balunas, W. K.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Navarro, L. Barranco; Barreiro, F.; da Costa, J. Barreiro Guimarães; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Basye, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, M.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bedognetti, M.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, J. K.; Belanger-Champagne, C.; Bell, A. S.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Belyaev, N. L.; Benary, O.; Benchekroun, D.; Bender, M.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Noccioli, E. Benhar; Benitez, J.; Garcia, J. A. Benitez; Benjamin, D. P.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Kuutmann, E. Bergeaas; Berger, N.; Berghaus, F.; Beringer, J.; Berlendis, S.; Bernard, N. R.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertram, I. A.; Bertsche, C.; Bertsche, D.; Besjes, G. J.; Bylund, O. Bessidskaia; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bevan, A. J.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Biedermann, D.; Bielski, R.; Biesuz, N. V.; Biglietti, M.; De Mendizabal, J. 
Bilbao; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biondi, S.; Bjergaard, D. M.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blanco, J. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Boerner, D.; Bogaerts, J. A.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Bortfeldt, J.; Bortoletto, D.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Sola, J. D. Bossio; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Madden, W. D. Breaden; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Broughton, J. H.; de Renstrom, P. A. Bruckman; Bruncko, D.; Bruneliere, R.; Bruni, A.; Bruni, G.; Brunt, B. H.; Bruschi, M.; Bruscino, N.; Bryant, P.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burckhart, H.; Burdin, S.; Burgard, C. D.; Burghgrave, B.; Burka, K.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Urbán, S. Cabrera; Caforio, D.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Caloba, L. P.; Calvet, D.; Calvet, S.; Calvet, T. P.; Toro, R. Camacho; Camarda, S.; Camarri, P.; Cameron, D.; Armadans, R. Caminal; Camincher, C.; Campana, S.; Campanelli, M.; Campoverde, A.; Canale, V.; Canepa, A.; Bret, M. Cano; Cantero, J.; Cantrill, R.; Cao, T.; Garrido, M. D. M. Capeans; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Casper, D. W.; Castaneda-Miranda, E.; Castelli, A.; Gimenez, V. Castillo; Castro, N. F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavallaro, E.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Alberich, L. Cerda; Cerio, B. C.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chalupkova, I.; Chan, S. K.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chatterjee, A.; Chau, C. C.; Barajas, C. A. Chavez; Che, S.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, H. J.; Cheng, Y.; Cheplakov, A.; Cheremushkina, E.; Moursli, R. Cherkaoui El; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chitan, A.; Chizhov, M. V.; Choi, K.; Chomont, A. R.; Chouridou, S.; Chow, B. K. B.; Christodoulou, V.; Chromek-Burckhart, D.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. K.; Cinca, D.; Cindro, V.; Cioara, I. 
A.; Ciocio, A.; Cirotto, F.; Citron, Z. H.; Ciubancan, M.; Clark, A.; Clark, B. L.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Colasurdo, L.; Cole, B.; Cole, S.; Colijn, A. P.; Collot, J.; Colombo, T.; Compostella, G.; Muiño, P. Conde; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Crawley, S. J.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Ortuzar, M. Crispin; Cristinziani, M.; Croft, V.; Crosetti, G.; Donszelmann, T. Cuhadar; Cummings, J.; Curatolo, M.; Cúth, J.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; D'Auria, S.; D'Onofrio, M.; De Sousa, M. J. Da Cunha Sargedas; Via, C. Da; Dabrowski, W.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Dang, N. P.; Daniells, A. C.; Dann, N. S.; Danninger, M.; Hoffmann, M. Dano; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, M.; Davison, P.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Benedetti, A.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Regie, J. B. De Vivie; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Deigaard, I.; Del Peso, J.; Del Prete, T.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Denysiuk, D.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Clemente, W. K.; Di Domenico, A.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Mattia, A.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Dobre, M.; Doglioni, C.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Du, Y.; Duarte-Campderros, J.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Yildiz, H. Duran; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Dyndal, M.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Edson, W.; Edwards, N. C.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; Kacimi, M. El; Ellajosyula, V.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Ennis, J. S.; Erdmann, J.; Ereditato, A.; Ernis, G.; Ernst, J.; Ernst, M.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, F.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. 
J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farina, C.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Giannelli, M. Faucci; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Feremenga, L.; Martinez, P. Fernandez; Perez, S. Fernandez; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; de Lima, D. E. Ferreira; Ferrer, A.; Ferrere, D.; Ferretti, C.; Parodi, A. Ferretto; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, G. T.; Fletcher, G.; Fletcher, R. R. M.; Flick, T.; Floderus, A.; Castillo, L. R. Flores; Flowerdew, M. J.; Forcolin, G. T.; Formica, A.; Forti, A.; Foster, A. G.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; Fressard-Batraneanu, S. M.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Torregrosa, E. Fullana; Fusayasu, T.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, L. G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gao, J.; Gao, Y.; Gao, Y. S.; Walls, F. M. Garay; García, C.; Navarro, J. E. García; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Bravo, A. Gascon; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Gecse, Z.; Gee, C. N. P.; Geich-Gimbel, Ch.; Geisler, M. P.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; George, S.; Gerbaudo, D.; Gershon, A.; Ghasemi, S.; Ghazlane, H.; Ghneimat, M.; Giacobbe, B.; Giagu, S.; Giannetti, P.; Gibbard, B.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugni, D.; Giuli, F.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Costa, J. Goncalves Pinto Firmino Da; Gonella, L.; Gongadze, A.; de la Hoz, S. González; Parra, G. Gonzalez; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Goudet, C. R.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Graber, L.; Grabowska-Bold, I.; Gradin, P. O. J.; Grafström, P.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gray, H. M.; Graziani, E.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Grevtsov, K.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Grohs, J. P.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. J.; Guan, L.; Guan, W.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, Y.; Gupta, S.; Gustavino, G.; Gutierrez, P.; Ortiz, N. G. Gutierrez; Gutschow, C.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. 
K.; Haddad, N.; Hadef, A.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Haley, J.; Hall, D.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Haney, B.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrington, R. D.; Harrison, P. F.; Hartjes, F.; Hasegawa, M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, J. J.; Heinrich, L.; Heinz, C.; Hejbal, J.; Helary, L.; Hellman, S.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Henkelmann, S.; Correia, A. M. Henriques; Henrot-Versille, S.; Herbert, G. H.; Jiménez, Y. Hernández; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hinman, R. R.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohlfeld, M.; Hohn, D.; Holmes, T. R.; Homann, M.; Hong, T. M.; Hooberman, B. H.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, Q.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Ince, T.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Quiles, A. Irles; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Ito, F.; Ponce, J. M. Iturbe; Iuppa, R.; Ivarsson, J.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, B.; Jackson, M.; Jackson, P.; Jain, V.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansky, R.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanneau, F.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, H.; Jiang, Y.; Jiggins, S.; Pena, J. Jimenez; Jin, S.; Jinaru, A.; Jinnouchi, O.; Johansson, P.; Johns, K. A.; Johnson, W. J.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, S.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Jovicevic, J.; Ju, X.; Rozas, A. Juste; Köhler, M. K.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kajomovitz, E.; Kalderon, C. W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneti, S.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kapliy, A.; Kar, D.; Karakostas, K.; Karamaoun, A.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawade, K.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. 
F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kempster, J. J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Kharlamov, A. G.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; King, M.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kiuchi, K.; Kivernyk, O.; Kladiva, E.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Knapik, J.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Kohriki, T.; Koi, T.; Kolanoski, H.; Kolb, M.; Koletsou, I.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotov, V. M.; Kotwal, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewska, A. B.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuechler, J. T.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunigo, T.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; Rosa, A. La; Navarro, J. L. La Rosa; Rotonda, L. La; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lammers, S.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lange, J. C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Manghi, F. Lasagni; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Lazzaroni, M.; Dortz, O. Le; Guirriec, E. Le; Menedeu, E. Le; Quilleuc, E. P. Le; LeBlanc, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Miotto, G. Lehmann; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Leontsinis, S.; Lerner, G.; Leroy, C.; Lesage, A. A. J.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Leyko, A. M.; Leyton, M.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, Q.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liao, H.; Liberti, B.; Liblong, A.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Lin, S. C.; Lin, T. H.; Lindquist, B. E.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, H.; Liu, H.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y. 
L.; Liu, Y.; Livan, M.; Lleres, A.; Merino, J. Llorente; Lloyd, S. L.; Sterzo, F. Lo; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loew, K. M.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Longo, L.; Looper, K. A.; Lopes, L.; Mateos, D. Lopez; Paredes, B. Lopez; Paz, I. Lopez; Solis, A. Lopez; Lorenz, J.; Martinez, N. Lorenzo; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lynn, D.; Lysak, R.; Lytken, E.; Lyubushkin, V.; Ma, H.; Ma, L. L.; Ma, Y.; Maccarrone, G.; Macchiolo, A.; Macdonald, C. M.; Maček, B.; Miguens, J. Machado; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A.; Magradze, E.; Mahlstedt, J.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maier, T.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyshev, V. M.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandelli, B.; Mandelli, L.; Mandić, I.; Maneira, J.; Andrade Filho, L. Manhaes de; Ramos, J. Manjarres; Mann, A.; Mansoulie, B.; Mantifel, R.; Mantoani, M.; Manzoni, S.; Mapelli, L.; Marceca, G.; March, L.; Marchiori, G.; Marcisovsky, M.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, T. A.; Martin, V. J.; Latour, B. Martin dit; Martinez, M.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazza, S. M.; Fadden, N. C. Mc; Goldrick, G. Mc; Kee, S. P. Mc; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McClymont, L. I.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Garcia, B. R. Mellado; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. M.; Mergelmeyer, S.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Theenhausen, H. Meyer Zu; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Monden, R.; Mondragon, M. C.; Mönig, K.; Monk, J.; Monnier, E.; Montalbano, A.; Berlingen, J. Montejo; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Llácer, M. Moreno; Morettini, P.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Mortensen, S. S.; Morvaj, L.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Mueller, T.; Muenstermann, D.; Mullen, P.; Mullier, G. 
A.; Sanchez, F. J. Munoz; Quijada, J. A. Murillo; Murray, W. J.; Murrone, A.; Musheghyan, H.; Muskinja, M.; Myagkov, A. G.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagano, K.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Garcia, R. F. Naranjo; Narayan, R.; Villar, D. I. Narrias; Naryshkin, I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Nef, P. D.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nickerson, R. B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, J. K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nooney, T.; Norberg, S.; Nordberg, M.; Norjoharuddeen, N.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nurse, E.; Nuti, F.; O'grady, F.; O'Neil, D. C.; O'Rourke, A. A.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Seabra, L. F. Oleiro; Pino, S. A. Olivares; Damazio, D. Oliveira; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Orr, R. S.; Osculati, B.; Ospanov, R.; Garzon, G. Otero y.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pages, A. Pacheco; Aranda, C. Padilla; Pagáčová, M.; Griso, S. Pagan; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Panagiotopoulou, E. St.; Pandini, C. E.; Vazquez, J. G. Panduro; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Hernandez, D. Paredes; Parker, A. J.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pascuzzi, V.; Pasqualucci, E.; Passaggio, S.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Pauly, T.; Pearce, J.; Pearson, B.; Pedersen, L. E.; Pedersen, M.; Lopez, S. Pedraza; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Penc, O.; Peng, C.; Peng, H.; Penwell, J.; Peralva, B. S.; Perego, M. M.; Perepelitsa, D. V.; Codina, E. Perez; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrov, M.; Petrucci, F.; Pettersson, N. E.; Peyaud, A.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Pickering, M. A.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. D.; Pin, A. W. J.; Pina, J.; Pinamonti, M.; Pinfold, J. L.; Pingel, A.; Pires, S.; Pirumov, H.; Pitt, M.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Pluth, D.; Poettgen, R.; Poggioli, L.; Pohl, D.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. 
T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Astigarraga, M. E. Pozo; Pralavorio, P.; Pranko, A.; Prell, S.; Price, D.; Price, L. E.; Primavera, M.; Prince, S.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Puddu, D.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Raine, J. A.; Rajagopalan, S.; Rammensee, M.; Rangel-Smith, C.; Ratti, M. G.; Rauscher, F.; Rave, S.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reisin, H.; Rembser, C.; Ren, H.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Rizzi, C.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Rodina, Y.; Perez, A. Rodriguez; Rodriguez, D. Rodriguez; Roe, S.; Rogan, C. S.; Røhne, O.; Romaniouk, A.; Romano, M.; Saez, S. M. Romano; Adam, E. Romero; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, P.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryu, S.; Ryzhov, A.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Sadrozinski, H. F.-W.; Sadykov, R.; Tehrani, F. Safai; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Loyola, J. E. Salazar; Salek, D.; De Bruin, P. H. Sales; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Martinez, V. Sanchez; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sannino, M.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Castillo, I. Santoyo; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sasaki, Y.; Sato, K.; Sauvage, G.; Sauvan, E.; Savage, G.; Savard, P.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitz, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schorlemmer, A. L. S.; Schott, M.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwarz, T. 
A.; Schwegler, Ph.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Seema, P.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Seliverstov, D. M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shaikh, N. W.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Saadi, D. Shoaleh; Shochet, M. J.; Shojaii, S.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Sicho, P.; Sidebo, P. E.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, D.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinner, M. B.; Skottowe, H. P.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Slovak, R.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Sokhrannyi, G.; Sanchez, C. A. Solans; Solar, M.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Son, H.; Song, H. Y.; Sood, A.; Sopczak, A.; Sopko, V.; Sorin, V.; Sosa, D.; Sotiropoulou, C. L.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Sperlich, D.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; Denis, R. D. St.; Stabile, A.; Staerz, S.; Stahlman, J.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, G. H.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tannenwald, B. B.; Araya, S. Tapia; Tapprogge, S.; Tarem, S.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Delgado, A. Tavares; Tayalati, Y.; Taylor, A. C.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teischinger, F. A.; Teixeira-Dias, P.; Temming, K. K.; Temple, D.; Kate, H. Ten; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. 
D.; Thompson, R. J.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Tibbetts, M. J.; Torres, R. E. Ticse; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tipton, P.; Tisserant, S.; Todome, K.; Todorov, T.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Tong, B.; Torrence, E.; Torres, H.; Pastor, E. Torró; Toth, J.; Touchard, F.; Tovey, D. R.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Trofymov, A.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsui, K. M.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turgeman, D.; Turra, R.; Turvey, A. J.; Tuts, P. M.; Tyndel, M.; Ucchielli, G.; Ueda, I.; Ueno, R.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valderanis, C.; Santurio, E. Valdes; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Vallecorsa, S.; Ferrer, J. A. Valls; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vankov, P.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasquez, J. G.; Vazeille, F.; Schroeder, T. Vazquez; Veatch, J.; Veloce, L. M.; Veloso, F.; Veneziano, S.; Ventura, A.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Boeriu, O. E. Vickey; Viehhauser, G. H. A.; Viel, S.; Vigani, L.; Vigne, R.; Villa, M.; Perez, M. Villaplana; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vittori, C.; Vivarelli, I.; Vlachos, S.; Vlasak, M.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Milosavljevic, M. Vranjes; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wallangen, V.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; Whallon, N. L.; Wharton, A. M.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilk, F.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winston, O. J.; Winter, B. T.; Wittgen, M.; Wittkowski, J.; Wollstadt, S. 
J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wu, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Wong, K. H. Yau; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yuen, S. P. Y.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zakharchuk, N.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zeng, J. C.; Zeng, Q.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, R.; Zhang, R.; Zhang, X.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, L.; Zhou, M.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Nedden, M. zur; Zurzolo, G.; Zwalinski, L.
2016-12-01
A test of CP invariance in Higgs boson production via vector-boson fusion using the method of the Optimal Observable is presented. The analysis exploits the decay mode of the Higgs boson into a pair of τ leptons and is based on $20.3~\mathrm{fb}^{-1}$ of proton-proton collision data at $\sqrt{s} = 8$ TeV collected by the ATLAS experiment at the LHC. Contributions from CP-violating interactions between the Higgs boson and electroweak gauge bosons are described in an effective field theory framework, in which the strength of CP violation is governed by a single parameter $\tilde{d}$. The mean values and distributions of CP-odd observables agree with the expectation in the Standard Model and show no sign of CP violation. The CP-mixing parameter $\tilde{d}$ is constrained to the interval (-0.11, 0.05) at 68% confidence level, consistent with the Standard Model expectation of $\tilde{d} = 0$.
Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1994-01-01
The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For an actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a 0.9-mean actuation material volume ratio, the minimum cost was obtained.
Stucke, Kathrin; Kieser, Meinhard
2012-12-10
In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
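As a rough illustration of the savings such a calculation quantifies, the following Python sketch computes the total sample size for a continuous endpoint under the retention-of-effect hypothesis with a normal approximation and known common variance; the allocation fractions, effect sizes, and the helper name total_sample_size are illustrative assumptions and do not reproduce the paper's derivation, which also covers binary and Poisson outcomes and exploits the correlation structure of the test statistics.

```python
# Hedged sketch: total sample size for the three-arm "gold standard"
# non-inferiority design with a continuous endpoint, assuming the
# retention-of-effect hypothesis
#   H0: mu_E - delta*mu_R - (1 - delta)*mu_P <= 0
# and a normal approximation with known common variance. All numbers are
# illustrative assumptions, not values from the paper.
from scipy.stats import norm

def total_sample_size(mu_e, mu_r, mu_p, sigma, delta, w_e, w_r, w_p,
                      alpha=0.025, power=0.8):
    """Total N for allocation fractions (w_e, w_r, w_p) summing to one."""
    theta = mu_e - delta * mu_r - (1.0 - delta) * mu_p   # effect under H1 (must be > 0)
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    var_factor = 1.0 / w_e + delta**2 / w_r + (1.0 - delta)**2 / w_p
    return z**2 * sigma**2 * var_factor / theta**2

# Equal allocation versus a placebo-sparing allocation (illustrative numbers,
# assuming larger endpoint values are better). The second call gives the
# smaller total N, in line with the abstract's point about optimal allocation.
print(total_sample_size(2.0, 2.0, 0.0, 4.0, 0.5, 1/3, 1/3, 1/3))
print(total_sample_size(2.0, 2.0, 0.0, 4.0, 0.5, 0.4, 0.4, 0.2))
```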
Ichikawa, Kazuki; Morishita, Shinichi
2014-01-01
K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
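The equivalence the authors rely on can be checked numerically: after each vector is standardized to zero mean and unit variance, the squared Euclidean distance is an affine function of the Pearson correlation, so both distances induce the same k-means assignments. The sketch below is a minimal Python check on synthetic data, not part of the BoostKCP implementation.

```python
# Hedged sketch: for row-standardized vectors of length m (zero mean, unit
# variance), the squared Euclidean distance equals 2*m*(1 - r), where r is the
# Pearson correlation. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))

def standardize(v):
    return (v - v.mean()) / v.std()

a, b = standardize(X[0]), standardize(X[1])
r = np.corrcoef(X[0], X[1])[0, 1]            # Pearson correlation
d2 = np.sum((a - b) ** 2)                    # squared Euclidean on z-scored rows
print(d2, 2 * len(a) * (1 - r))              # the two numbers agree
```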
NASA Technical Reports Server (NTRS)
Bok, L. D.
1973-01-01
The study included material selection and trade-off for the structural components of the wheel and brake optimizing weight vs cost and feasibility for the space shuttle type application. Analytical methods were used to determine section thickness for various materials, and a table was constructed showing weight vs. cost trade-off. The wheel and brake were further optimized by considering design philosophies that deviate from standard aircraft specifications, and designs that best utilize the materials being considered.
Optimizing area under the ROC curve using semi-supervised learning.
Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M
2015-01-01
Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results.
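A minimal Python sketch of the objective these algorithms target is given below: the AUC equals the Wilcoxon-Mann-Whitney statistic, i.e. the fraction of (positive, negative) pairs ranked correctly. The scores are illustrative, and the sketch does not reproduce the SSLROC semi-definite programming formulation or its unlabeled-sample constraints.

```python
# Hedged sketch of the quantity AUC-optimization methods maximize: the
# probability that a randomly chosen positive sample scores above a randomly
# chosen negative one (ties count one half). Scores below are illustrative.
import numpy as np

def auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs ranked correctly."""
    s_p = np.asarray(scores_pos, dtype=float)[:, None]
    s_n = np.asarray(scores_neg, dtype=float)[None, :]
    return np.mean((s_p > s_n) + 0.5 * (s_p == s_n))

print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2, 0.1]))   # 11 of 12 pairs ranked correctly
```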
Reynolds, Penny S; Tamariz, Francisco J; Barbee, Robert Wayne
2010-04-01
Exploratory pilot studies are crucial to best practice in research but are frequently conducted without a systematic method for maximizing the amount and quality of information obtained. We describe the use of response surface regression models and simultaneous optimization methods to develop a rat model of hemorrhagic shock in the context of chronic hypertension, a clinically relevant comorbidity. A response surface regression model was applied to determine optimal levels of two inputs--dietary NaCl concentration (0.49%, 4%, and 8%) and time on the diet (4, 6, 8 weeks)--to achieve clinically realistic and stable target measures of systolic blood pressure while simultaneously maximizing critical oxygen delivery (a measure of vulnerability to hemorrhagic shock) and body mass M. Simultaneous optimization of the three response variables was performed through a dimensionality reduction strategy involving calculation of a single aggregate measure, the "desirability" function. Optimal conditions for inducing systolic blood pressure of 208 mmHg, critical oxygen delivery of 4.03 mL/min, and M of 290 g were determined to be 4% [NaCl] for 5 weeks. Rats on the 8% diet did not survive past 7 weeks. Response surface regression and simultaneous optimization techniques are commonly used in process engineering but have found little application to date in animal pilot studies. These methods will ensure both the scientific and ethical integrity of experimental trials involving animals and provide powerful tools for the development of novel models of clinically interacting comorbidities with shock.
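A hedged Python sketch of the simultaneous-optimization step is shown below: each response is mapped to a desirability in [0, 1] and the geometric mean of the individual desirabilities is maximized over the design grid. The target windows and the placeholder response surfaces are illustrative assumptions, not the fitted models from the study.

```python
# Hedged sketch of desirability-based simultaneous optimization. Response
# surfaces, target windows, and the grid are illustrative placeholders.
import numpy as np

def desirability_target(y, low, target, high):
    """1 at the target, falling linearly to 0 at the low/high limits."""
    y = np.asarray(y, dtype=float)
    d = np.where(y <= target, (y - low) / (target - low), (high - y) / (high - target))
    return np.clip(d, 0.0, 1.0)

nacl = np.array([0.49, 4.0, 8.0])                      # dietary NaCl, %
weeks = np.array([4, 6, 8])                            # time on diet
grid = np.array([(s, w) for s in nacl for w in weeks])

sbp  = 150 + 10 * grid[:, 0] + 2 * grid[:, 1]          # placeholder response surfaces
do2  = 5.0 - 0.15 * grid[:, 0] + 0.05 * grid[:, 1]
mass = 320 - 5 * grid[:, 0]

D = (desirability_target(sbp, 160, 208, 240)
     * desirability_target(do2, 3.0, 4.0, 5.5)
     * desirability_target(mass, 250, 290, 330)) ** (1 / 3)   # overall desirability
best = grid[np.argmax(D)]
print("best (NaCl %, weeks):", best, "overall desirability:", round(float(D.max()), 3))
```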
Solving TSP problem with improved genetic algorithm
NASA Astrophysics Data System (ADS)
Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying
2018-05-01
The TSP is a typical NP-hard problem. The vehicle routing problem (VRP) and city pipeline layout problems can be reduced to the TSP, so efficient methods for solving the TSP are of considerable practical importance. The genetic algorithm (GA) is one of the most suitable methods for this task, but the standard genetic algorithm has some limitations. Improving the selection operator and introducing an elite retention strategy ensure the quality of the selection step. In the mutation operation, adaptive selection of the mutation scheme improves the quality of both the search and the variation. After a chromosome has evolved, a one-way reverse-evolution operation is added, which gives the offspring more opportunities to inherit high-quality parental genes and improves the algorithm's ability to find the optimal solution.
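The sketch below illustrates, in Python, the elite-retention idea on a small random TSP instance; the crossover operator, adaptive mutation schedule, and reverse-evolution step described in the paper are not reproduced, and the city coordinates are synthetic.

```python
# Hedged sketch: a plain evolutionary loop for the TSP with elite retention
# (best tours copied unchanged into the next generation) and segment-reversal
# mutation. Illustrative only; not the paper's full improved GA.
import random
random.seed(1)

CITIES = [(random.random(), random.random()) for _ in range(20)]

def tour_length(tour):
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2
                + (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def mutate(tour):
    i, j = sorted(random.sample(range(len(tour)), 2))
    child = tour[:]
    child[i:j + 1] = reversed(child[i:j + 1])   # 2-opt style segment reversal
    return child

def evolve(pop_size=60, generations=300, elite=4):
    pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        nxt = pop[:elite]                                 # elite retention
        while len(nxt) < pop_size:
            parent = random.choice(pop[:pop_size // 2])   # truncation selection
            nxt.append(mutate(parent))
        pop = nxt
    return min(pop, key=tour_length)

best = evolve()
print("best tour length:", round(tour_length(best), 3))
```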
NASA Astrophysics Data System (ADS)
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
Ibrahim, Shewkar E; Sayed, Tarek; Ismail, Karim
2012-11-01
Several earlier studies have noted the shortcomings with existing geometric design guides which provide deterministic standards. In these standards the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from the standards. To mitigate these shortcomings, probabilistic geometric design has been advocated where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a mechanism for risk measurement to evaluate the safety impact of deviations from design standards. This paper applies reliability analysis for optimizing the safety of highway cross-sections. The paper presents an original methodology to select a suitable combination of cross-section elements with restricted sight distance to result in reduced collisions and consistent risk levels. The purpose of this optimization method is to provide designers with a proactive approach to the design of cross-section elements in order to (i) minimize the risk associated with restricted sight distance, (ii) balance the risk across the two carriageways of the highway, and (iii) reduce the expected collision frequency. A case study involving nine cross-sections that are parts of two major highway developments in British Columbia, Canada, was presented. The results showed that an additional reduction in collisions can be realized by incorporating the reliability component, P(nc) (denoting the probability of non-compliance), in the optimization process. The proposed approach results in reduced and consistent risk levels for both travel directions in addition to further collision reductions. Copyright © 2012 Elsevier Ltd. All rights reserved.
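As a rough illustration of the reliability component, the following Python sketch estimates a probability of non-compliance P(nc) by Monte Carlo as the probability that the required stopping sight distance exceeds the sight distance supplied by the cross-section; the input distributions and the stopping-sight-distance relation are illustrative assumptions, and Monte Carlo stands in for whatever reliability method the paper actually uses.

```python
# Hedged sketch: Monte Carlo estimate of the probability of non-compliance,
# P(nc) = P(required stopping sight distance > available sight distance).
# All distributions and the available sight distance are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
speed = rng.normal(90, 8, n) / 3.6                    # operating speed, m/s
t_pr  = rng.lognormal(np.log(1.5), 0.3, n)            # perception-reaction time, s
decel = rng.normal(3.4, 0.4, n)                       # deceleration, m/s^2

ssd_required  = speed * t_pr + speed**2 / (2 * decel) # stopping sight distance, m
ssd_available = 110.0                                 # supplied by the candidate design, m

p_nc = np.mean(ssd_required > ssd_available)
print(f"P(nc) = {p_nc:.4f}")
```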
Chen, Zhi; Yuan, Yuan; Zhang, Shu-Shen; Chen, Yu; Yang, Feng-Lin
2013-01-01
Critical environmental and human health concerns are associated with the rapidly growing fields of nanotechnology and manufactured nanomaterials (MNMs). The main risk arises from occupational exposure via chronic inhalation of nanoparticles. This research presents a chance-constrained nonlinear programming (CCNLP) optimization approach, which is developed to maximize nanomaterial production and minimize the risks of workplace exposure to MNMs. The CCNLP method integrates nonlinear programming (NLP) and chance-constrained programming (CCP), and handles uncertainties associated with both nanomaterial production and workplace exposure control. The CCNLP method was examined through a single-walled carbon nanotube (SWNT) manufacturing process. The study results provide optimal production strategies and alternatives. They reveal that a high control level guarantees that environmental health and safety (EHS) regulations are met, while a lower control level leads to increased risk of violating EHS regulations. The CCNLP optimization approach is a decision support tool for optimizing the growing manufacture of MNMs subject to workplace safety constraints under uncertainty. PMID:23531490
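A minimal Python sketch of the chance-constrained ingredient follows: a workplace-exposure constraint that must hold with probability at least p is replaced, for a normally distributed exposure coefficient, by its deterministic equivalent before the production objective is optimized. The toy objective, the exposure model, and all numbers are assumptions, not the SWNT process data.

```python
# Hedged sketch of a chance constraint and its deterministic equivalent:
# require P(emission_rate * x <= limit) >= p with emission_rate ~ N(mu, sd),
# which for x >= 0 is equivalent to (mu + z_p * sd) * x <= limit.
from scipy.stats import norm
from scipy.optimize import minimize_scalar

mu_emission, sd_emission = 0.8, 0.2     # exposure per unit production (assumed)
exposure_limit = 10.0                   # workplace exposure limit (assumed)
p = 0.95                                # required confidence level
z = norm.ppf(p)

def neg_profit(x):                      # toy production objective to maximize
    return -(5.0 * x - 0.1 * x**2)

x_max = exposure_limit / (mu_emission + z * sd_emission)   # deterministic equivalent
res = minimize_scalar(neg_profit, bounds=(0.0, x_max), method="bounded")
print("optimal production:", round(res.x, 3),
      "upper bound from chance constraint:", round(x_max, 3))
```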
O'Connell, Steven G; McCartney, Melissa A; Paulik, L Blair; Allan, Sarah E; Tidwell, Lane G; Wilson, Glenn; Anderson, Kim A
2014-10-01
Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be more efficiently absorbed using silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the most optimal silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated-PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties to compare polymers (log Kow 0.2-5.3), and transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring. Copyright © 2014 Elsevier Ltd. All rights reserved.
Yu, Chen; Zhang, Qian; Xu, Peng-Yao; Bai, Yin; Shen, Wen-Bin; Di, Bin; Su, Meng-Xiang
2018-01-01
Quantitative nuclear magnetic resonance (qNMR) is a well-established technique in quantitative analysis. We present a validated 1H-qNMR method for the assay of octreotide acetate, a cyclic octapeptide. Deuterium oxide was used to remove the undesired exchangeable peaks, a step referred to as proton exchange, in order to isolate the quantitative signals in the crowded spectrum of the peptide and ensure precise quantitative analysis. Gemcitabine hydrochloride was chosen as the internal standard. Experimental conditions, including relaxation delay, number of scans, and pulse angle, were optimized first. Method validation was then carried out in terms of selectivity, stability, linearity, precision, and robustness. The assay result was compared with that obtained by high performance liquid chromatography, the method given in the Chinese Pharmacopoeia. The statistical F test, Student's t test, and a nonparametric test at the 95% confidence level indicate that there was no significant difference between the two methods. qNMR is a simple and accurate quantitative tool with no need for specific corresponding reference standards. It has potential for the quantitative analysis of other peptide drugs and for the standardization of the corresponding reference standards. Copyright © 2017 John Wiley & Sons, Ltd.
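For orientation, the internal-standard qNMR calculation that underlies such an assay can be sketched in a few lines of Python; the formula is the standard ratio of integrated areas scaled by proton counts, molar masses, and weighed amounts, and every numeric input below is an illustrative placeholder rather than the octreotide/gemcitabine data reported in the paper.

```python
# Hedged sketch of internal-standard qNMR quantification:
#   P_a = (I_a/I_is) * (N_is/N_a) * (M_a/M_is) * (W_is/W_a) * P_is
# All inputs below are illustrative placeholders, not the paper's values.
def qnmr_purity(area_a, area_is, n_a, n_is, m_a, m_is, w_a, w_is, purity_is):
    """Mass fraction of analyte from integrated areas, proton counts, molar
    masses (g/mol), weighed masses, and internal-standard purity."""
    return (area_a / area_is) * (n_is / n_a) * (m_a / m_is) * (w_is / w_a) * purity_is

print(qnmr_purity(area_a=0.95, area_is=1.00, n_a=2, n_is=2,
                  m_a=1019.2, m_is=299.7, w_a=10.0, w_is=3.0, purity_is=0.998))
```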
Li, Yongtao; Whitaker, Joshua S; McCarty, Christina L
2012-07-06
A large volume direct aqueous injection method was developed for the analysis of iodinated haloacetic acids in drinking water by using reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry in the negative ion mode. Both the external and internal standard calibration methods were studied for the analysis of monoiodoacetic acid, chloroiodoacetic acid, bromoiodoacetic acid, and diiodoacetic acid in drinking water. The use of a divert valve technique for the mobile phase solvent delay, along with isotopically labeled analogs used as internal standards, effectively reduced and compensated for the ionization suppression typically caused by coexisting common inorganic anions. Under the optimized method conditions, the mean absolute and relative recoveries resulting from the replicate fortified deionized water and chlorinated drinking water analyses were 83-107% with a relative standard deviation of 0.7-11.7% and 84-111% with a relative standard deviation of 0.8-12.1%, respectively. The method detection limits resulting from the external and internal standard calibrations, based on seven fortified deionized water replicates, were 0.7-2.3 ng/L and 0.5-1.9 ng/L, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
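A minimal Python sketch of the baseline DE step (rand/1 mutation with binomial crossover and greedy selection) on a standard test function is given below; the surrogate-assisted and parallelization improvements discussed in the paper are not shown.

```python
# Hedged sketch of basic differential evolution (rand/1/bin) on the sphere
# test function. Parameters and the test problem are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return float(np.sum(x**2))

def de(f, dim=5, pop_size=30, F=0.5, CR=0.9, iters=200, lo=-5.0, hi=5.0):
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = a + F * (b - c)                      # rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True               # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])       # binomial crossover
            if (ft := f(trial)) < fit[i]:                 # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

x_best, f_best = de(sphere)
print("best objective:", f_best)
```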
Optimal GHZ Paradox for Three Qubits
NASA Astrophysics Data System (ADS)
Ren, Changliang; Su, Hong-Yi; Xu, Zhen-Peng; Wu, Chunfeng; Chen, Jing-Ling
2015-08-01
Quantum nonlocality as a valuable resource is of vital importance in quantum information processing. The characterization of this resource has been extensively investigated mainly for pure states, while relatively little is known for mixed states. Here we prove the existence of the optimal GHZ paradox by using a novel and simple method to extract an optimal state that saturates the tradeoff relation between quantum nonlocality and state purity. In this paradox, the logical inequality formulated from the GHZ-type event probabilities is violated maximally by the optimal state for any fixed amount of purity (or mixedness). Moreover, the optimal state can be described as a standard GHZ state subjected to flipped color noise. The maximal amount of noise that the optimal state can resist is 50%. We suggest our result to be a step toward deeper understanding of the role played by the AVN proof of quantum nonlocality as a useful physical resource.
NASA Astrophysics Data System (ADS)
Luo, Yangjun; Niu, Yanzhuang; Li, Ming; Kang, Zhan
2017-06-01
In order to eliminate stress-related wrinkles in cable-suspended membrane structures and to provide simple and reliable deployment, this study presents a multi-material topology optimization model and an effective solution procedure for generating optimal connected layouts for membranes and cables. On the basis of the principal stress criterion of membrane wrinkling behavior and the density-based interpolation of multi-phase materials, the optimization objective is to maximize the total structural stiffness while satisfying principal stress constraints and specified material volume requirements. By adopting the cosine-type relaxation scheme to avoid the stress singularity phenomenon, the optimization model is successfully solved through a standard gradient-based algorithm. Four-corner tensioned membrane structures with different loading cases were investigated to demonstrate the effectiveness of the proposed method in automatically finding the optimal design composed of curved boundary cables and wrinkle-free membranes.
NASA Astrophysics Data System (ADS)
Tian, Lunfu; Wang, Lili; Gao, Wei; Weng, Xiaodong; Liu, Jianhui; Zou, Deshuang; Dai, Yichun; Huang, Shuke
2018-03-01
For the quantitative analysis of the principal elements in lead-antimony-tin alloys, the direct X-ray fluorescence (XRF) method using solid metal disks introduces considerable errors due to microstructure inhomogeneity. To solve this problem, an aqueous solution XRF method is proposed for determining major amounts of Sb, Sn, and Pb in lead-based bearing alloys. The alloy samples were dissolved in a mixture of nitric acid and tartaric acid to eliminate the effects of the alloy microstructure on the XRF analysis. Rh Compton scattering was used as the internal standard for Sb and Sn, and Bi was added as the internal standard for Pb, to correct for matrix effects and for instrumental and operational variations. High-purity lead, antimony, and tin were used to prepare synthetic standards. Using these standards, calibration curves were constructed for the three elements after optimizing the spectrometer parameters. The method has been successfully applied to the analysis of lead-based bearing alloys and is more rapid than the classical titration methods normally used. The determination results are consistent with certified values or with those obtained by titration.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples, a supersonic mixed-flow turbofan engine and a subsonic wave-rotor-topped engine, to illustrate the results, and it discusses insights gained from the improved version of the NEPP code.
Homann, Stefanie; Hofmann, Christian; Gorin, Aleksandr M.; Nguyen, Huy Cong Xuan; Huynh, Diana; Hamid, Phillip; Maithel, Neil; Yacoubian, Vahe; Mu, Wenli; Kossyvakis, Athanasios; Sen Roy, Shubhendu; Yang, Otto Orlean
2017-01-01
Transfection is one of the most frequently used techniques in molecular biology that is also applicable for gene therapy studies in humans. One of the biggest challenges to investigate the protein function and interaction in gene therapy studies is to have reliable monospecific detection reagents, particularly antibodies, for all human gene products. Thus, a reliable method that can optimize transfection efficiency based on not only expression of the target protein of interest but also the uptake of the nucleic acid plasmid, can be an important tool in molecular biology. Here, we present a simple, rapid and robust flow cytometric method that can be used as a tool to optimize transfection efficiency at the single cell level while overcoming limitations of prior established methods that quantify transfection efficiency. By using optimized ratios of transfection reagent and a nucleic acid (DNA or RNA) vector directly labeled with a fluorochrome, this method can be used as a tool to simultaneously quantify cellular toxicity of different transfection reagents, the amount of nucleic acid plasmid that cells have taken up during transfection as well as the amount of the encoded expressed protein. Finally, we demonstrate that this method is reproducible, can be standardized and can reliably and rapidly quantify transfection efficiency, reducing assay costs and increasing throughput while increasing data robustness. PMID:28863132
Optimized Signaling Method for High-Speed Transmission Channels with Higher Order Transfer Function
NASA Astrophysics Data System (ADS)
Ševčík, Břetislav; Brančík, Lubomír; Kubíček, Michal
2017-08-01
In this paper, selected results from testing of an optimized CMOS-friendly signaling method for high-speed communications over cables and printed circuit boards (PCBs) are presented and discussed. The proposed signaling scheme uses a modified concept of the pulse width modulated (PWM) signal, which enables better equalization of significant channel losses during high-speed data transmission. Thus, this very effective signaling method for overcoming losses in transmission channels with a higher order transfer function, typical for long cables and multilayer PCBs, is analyzed in the time and frequency domains. Experimental measurement results include a performance comparison with the conventional PWM scheme and clearly show the great potential of the modified signaling method for use in low-power CMOS-friendly equalization circuits, commonly considered in modern communication standards such as PCI-Express and SATA and in multi-gigabit SerDes interconnects.
Wells, David B; Bhattacharya, Swati; Carr, Rogan; Maffeo, Christopher; Ho, Anthony; Comer, Jeffrey; Aksimentiev, Aleksei
2012-01-01
Molecular dynamics (MD) simulations have become a standard method for the rational design and interpretation of experimental studies of DNA translocation through nanopores. The MD method, however, offers a multitude of algorithms, parameters, and other protocol choices that can affect the accuracy of the resulting data as well as computational efficiency. In this chapter, we examine the most popular choices offered by the MD method, seeking an optimal set of parameters that enable the most computationally efficient and accurate simulations of DNA and ion transport through biological nanopores. In particular, we examine the influence of short-range cutoff, integration timestep and force field parameters on the temperature and concentration dependence of bulk ion conductivity, ion pairing, ion solvation energy, DNA structure, DNA-ion interactions, and the ionic current through a nanopore.
Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny
NASA Astrophysics Data System (ADS)
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice are fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
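The design indices above can be evaluated without any field data, because the kriging standard error depends only on the sample geometry and the covariance model. The Python sketch below does this for simple kriging with an assumed exponential covariance; universal kriging with drift terms, as used in the paper, would add drift functions to the kriging system.

```python
# Hedged sketch: average and maximum kriging standard error over a grid of
# estimation points for a candidate sampling pattern. Simple kriging with an
# assumed exponential covariance stands in for universal kriging.
import numpy as np

def cov(h, sill=1.0, corr_range=30.0):
    return sill * np.exp(-np.asarray(h) / corr_range)

def kriging_std(samples, targets):
    d_ss = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    C = cov(d_ss)                                  # sample-to-sample covariances
    errs = []
    for t in targets:
        c0 = cov(np.linalg.norm(samples - t, axis=-1))   # sample-to-target covariances
        w = np.linalg.solve(C, c0)                       # simple kriging weights
        errs.append(np.sqrt(max(cov(0.0) - w @ c0, 0.0)))
    return np.array(errs)

samples = np.array([[10.0, 10.0], [10.0, 90.0], [90.0, 10.0], [90.0, 90.0], [50.0, 50.0]])
gx, gy = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])
se = kriging_std(samples, grid)
print("average / maximum standard error:", se.mean().round(3), se.max().round(3))
```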
Artifacts in digital coincidence timing
Moses, W. W.; Peng, Q.
2014-10-16
Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time to digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e. the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the 'optimal' method. In conclusion, the purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization.
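A hedged Python sketch of the artifact follows: a compressive (nonlinear) transfer function applied before quantization shrinks the measured spread of time differences, so the apparent coincidence resolution looks better than the true one. The jitter magnitude and the tanh-shaped transfer function are illustrative assumptions only.

```python
# Hedged sketch: a nonlinear TDC transfer function artificially narrows the
# measured time-difference distribution. Waveform/jitter numbers are illustrative.
import numpy as np

rng = np.random.default_rng(7)
true_dt = rng.normal(0.0, 100.0, 100_000)            # true time differences, ps

def linear_tdc(t, lsb=25.0):
    return np.round(t / lsb) * lsb                   # ideal uniform quantizer

def nonlinear_tdc(t, lsb=25.0):
    compressed = np.tanh(t / 300.0) * 300.0          # compressive transfer function
    return np.round(compressed / lsb) * lsb

print("true sigma    :", true_dt.std().round(1), "ps")
print("linear TDC    :", linear_tdc(true_dt).std().round(1), "ps")
print("nonlinear TDC :", nonlinear_tdc(true_dt).std().round(1), "ps (artificially narrow)")
```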
Toward the Standardization of Biochar Analysis: The COST Action TD1107 Interlaboratory Comparison.
Bachmann, Hans Jörg; Bucheli, Thomas D; Dieguez-Alonso, Alba; Fabbri, Daniele; Knicker, Heike; Schmidt, Hans-Peter; Ulbricht, Axel; Becker, Roland; Buscaroli, Alessandro; Buerge, Diane; Cross, Andrew; Dickinson, Dane; Enders, Akio; Esteves, Valdemar I; Evangelou, Michael W H; Fellet, Guido; Friedrich, Kevin; Gasco Guerrero, Gabriel; Glaser, Bruno; Hanke, Ulrich M; Hanley, Kelly; Hilber, Isabel; Kalderis, Dimitrios; Leifeld, Jens; Masek, Ondrej; Mumme, Jan; Carmona, Marina Paneque; Calvelo Pereira, Roberto; Rees, Frederic; Rombolà, Alessandro G; de la Rosa, José Maria; Sakrabani, Ruben; Sohi, Saran; Soja, Gerhard; Valagussa, Massimo; Verheijen, Frank; Zehetner, Franz
2016-01-20
Biochar produced by pyrolysis of organic residues is increasingly used for soil amendment and many other applications. However, analytical methods for its physical and chemical characterization are yet far from being specifically adapted, optimized, and standardized. Therefore, COST Action TD1107 conducted an interlaboratory comparison in which 22 laboratories from 12 countries analyzed three different types of biochar for 38 physical-chemical parameters (macro- and microelements, heavy metals, polycyclic aromatic hydrocarbons, pH, electrical conductivity, and specific surface area) with their preferential methods. The data were evaluated in detail using professional interlaboratory testing software. Whereas intralaboratory repeatability was generally good or at least acceptable, interlaboratory reproducibility was mostly not (20% < mean reproducibility standard deviation < 460%). This paper contributes to better comparability of biochar data published already and provides recommendations to improve and harmonize specific methods for biochar analysis in the future.
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.
2000-01-01
The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
Olivieri, Laura J; Cross, Russell R; O'Brien, Kendall E; Ratnayaka, Kanishka; Hansen, Michael S
2015-09-01
Cardiac magnetic resonance (MR) imaging is a valuable tool in congenital heart disease; however patients frequently have metal devices in the chest from the treatment of their disease that complicate imaging. Methods are needed to improve imaging around metal implants near the heart. Basic sequence parameter manipulations have the potential to minimize artifact while limiting effects on image resolution and quality. Our objective was to design cine and static cardiac imaging sequences to minimize metal artifact while maintaining image quality. Using systematic variation of standard imaging parameters on a fluid-filled phantom containing commonly used metal cardiac devices, we developed optimized sequences for steady-state free precession (SSFP), gradient recalled echo (GRE) cine imaging, and turbo spin-echo (TSE) black-blood imaging. We imaged 17 consecutive patients undergoing routine cardiac MR with 25 metal implants of various origins using both standard and optimized imaging protocols for a given slice position. We rated images for quality and metal artifact size by measuring metal artifact in two orthogonal planes within the image. All metal artifacts were reduced with optimized imaging. The average metal artifact reduction for the optimized SSFP cine was 1.5+/-1.8 mm, and for the optimized GRE cine the reduction was 4.6+/-4.5 mm (P < 0.05). Quality ratings favored the optimized GRE cine. Similarly, the average metal artifact reduction for the optimized TSE images was 1.6+/-1.7 mm (P < 0.05), and quality ratings favored the optimized TSE imaging. Imaging sequences tailored to minimize metal artifact are easily created by modifying basic sequence parameters, and images are superior to standard imaging sequences in both quality and artifact size. Specifically, for optimized cine imaging a GRE sequence should be used with settings that favor short echo time, i.e. flow compensation off, weak asymmetrical echo and a relatively high receiver bandwidth. For static black-blood imaging, a TSE sequence should be used with fat saturation turned off and high receiver bandwidth.
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
Investigation of interaction between magnetic silica particles and lambda phage DNA fragment.
Smerkova, Kristyna; Dostalova, Simona; Vaculovicova, Marketa; Kynicky, Jindrich; Trnkova, Libuse; Kralik, Miroslav; Adam, Vojtech; Hubalek, Jaromir; Provaznik, Ivo; Kizek, Rene
2013-12-01
Nucleic acids belong to the most important molecules and therefore the understanding of their properties, function and behavior is crucial. Even though a range of analytical and biochemical methods have been developed for this purpose, one common step is essential for all of them - isolation of the nucleic acid from the complex sample matrix. The use of magnetic particles for the separation of nucleic acids has many advantages over other isolation methods. In this study, an isolation procedure for extraction of DNA was optimized. Each step of the isolation process, including washing, immobilization and elution, was optimized; as a result, the efficiency was increased from 1.7% to 28.7% and the total time was shortened from 75 to 30 min compared to the previously described method. Quantification of the influence of each parameter was performed by square-wave voltammetry using a hanging drop mercury electrode. Further, we compared the optimized method with standard chloroform extraction and applied it to the isolation of DNA from Staphylococcus aureus and Escherichia coli. Copyright © 2013 Elsevier B.V. All rights reserved.
Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio; Ross, Kevin
2009-01-01
Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
Hyperopt: a Python library for model selection and hyperparameter optimization
NASA Astrophysics Data System (ADS)
Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.
2015-01-01
Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
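A minimal usage sketch of the library's core fmin call follows; the search space, objective, and evaluation budget are illustrative placeholders rather than the configurations used in the paper.

```python
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

# Illustrative mixed search space: a continuous, a categorical and a log-scaled parameter.
space = {
    "x": hp.uniform("x", -5, 5),
    "kernel": hp.choice("kernel", ["linear", "rbf"]),
    "C": hp.loguniform("C", -3, 3),
}

def objective(params):
    # Stand-in for an expensive model-training/validation run.
    loss = (params["x"] - 1.0) ** 2 + (0.1 if params["kernel"] == "linear" else 0.0)
    return {"loss": loss, "status": STATUS_OK}

trials = Trials()                      # keeps the full history of evaluations
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=trials)
print(best)                            # note: hp.choice entries are reported as indices
```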
Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of the initial centroids. Optimization algorithms have the advantage of guiding the iterative computation to search for global optima while avoiding local optima. These algorithms help speed up the clustering process by converging to a global optimum early, with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms, including Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms, mimic swarming behavior that allows them to cooperatively steer towards an optimal objective within a reasonable time. These so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with the K-means clustering mechanism to enhance its clustering quality by avoiding local optima and finding global optima, the resulting hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to using standard metrics for evaluating clustering quality, the extended K-means algorithms empowered by nature-inspired optimization methods are applied to image segmentation as a case-study application scenario.
Deb, Suash; Yang, Xin-She
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of the initial centroids. Optimization algorithms have the advantage of guiding the iterative computation to search for global optima while avoiding local optima. These algorithms help speed up the clustering process by converging to a global optimum early, with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms, including Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms, mimic swarming behavior that allows them to cooperatively steer towards an optimal objective within a reasonable time. These so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with the K-means clustering mechanism to enhance its clustering quality by avoiding local optima and finding global optima, the resulting hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to using standard metrics for evaluating clustering quality, the extended K-means algorithms empowered by nature-inspired optimization methods are applied to image segmentation as a case-study application scenario. PMID:25202730
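The hybrid recipe described in these abstracts — a swarm-style search over candidate centroid sets to choose starting centroids, followed by standard Lloyd iterations — can be sketched as follows. The update rule below is a generic particle-swarm stand-in, not the specific Bat, Cuckoo, Firefly, or Wolf variants evaluated in the papers, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def sse(X, centers):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def kmeans(X, centers, iters=20):
    """Plain Lloyd iterations starting from the given centers."""
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            pts = X[labels == k]
            if len(pts):
                centers[k] = pts.mean(axis=0)
    return centers, labels

def swarm_init(X, k, n_agents=15, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Generic swarm search over centroid sets, minimizing within-cluster SSE."""
    lo, hi = X.min(0), X.max(0)
    pos = rng.uniform(lo, hi, size=(n_agents, k, X.shape[1]))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([sse(X, p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([sse(X, p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Demo on synthetic data: swarm-chosen centroids, then standard K-means refinement.
X = np.vstack([rng.normal(m, 0.4, size=(100, 2)) for m in ([0, 0], [3, 3], [0, 3])])
centers, labels = kmeans(X, swarm_init(X, k=3))
print("final SSE:", round(sse(X, centers), 2))
```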
Yin, Shan; Guo, Pan; Hai, Dafu; Xu, Li; Shu, Jiale; Zhang, Wenjin; Khan, Muhammad Idrees; Kurland, Irwin J; Qiu, Yunping; Liu, Yumin
2017-12-01
In this paper, an optimized method based on a gas chromatography/time-of-flight mass spectrometry (GC-TOFMS) platform has been developed for the analysis of gut microbial-host related co-metabolites in fecal samples. The optimization was performed over the proportions of chloroform (C), methanol (M) and water (W) for the extraction of specific metabolic pathways of interest. Loading bi-plots from the PLS regression model revealed that a high concentration of chloroform emphasized the extraction of short-chain fatty acids and TCA intermediates, while a higher concentration of methanol emphasized indole and phenyl derivatives. A low level of organic solvent emphasized some TCA intermediates but not indole and phenyl species. The highest sum of peak areas and the best distribution of metabolites corresponded to a methanol/chloroform/water extraction ratio of 225:75:300 (v/v/v), which was then selected for method validation and utilized in our application. Excellent linearity was obtained with 62 reference standards representing different classes of gut microbial-host related co-metabolites, with correlation coefficients (r²) higher than 0.99. Limits of detection (LODs) and limits of quantification (LOQs) for these standards were below 0.9 nmol and 1.6 nmol, respectively. The reproducibility and repeatability of the majority of tested metabolites in fecal samples were observed with RSDs lower than 15%. Chinese rhubarb-treated rats had elevated indole and phenyl species, and decreased levels of polyamines such as putrescine, and of several amino acids. Our optimized method has revealed host-microbe relationships of potential importance for intestinal microbial metabolite receptors such as pregnane X receptor (PXR) and aryl hydrocarbon receptor (AHR), and for enzymes such as ornithine decarboxylase (ODC). Copyright © 2017 Elsevier B.V. All rights reserved.
Tavakoli, Paniz; Campbell, Kenneth
2016-10-01
A rarely occurring and highly relevant auditory stimulus occurring outside of the current focus of attention can cause a switching of attention. Such attention capture is often studied in oddball paradigms consisting of a frequently occurring "standard" stimulus which is changed at odd times to form a "deviant". The deviant may result in the capturing of attention. An auditory ERP, the P3a, is often associated with this process. Collecting a sufficient amount of data is, however, very time-consuming. A multi-feature "optimal" paradigm has been proposed, but it is not known whether it is appropriate for the study of attention capture. An optimal paradigm was run in which 6 different rare deviants (p=.08) were separated by a standard stimulus (p=.50), and the results were compared to those from 4 separately run oddball paradigms. A large P3a was elicited by some of the deviants in the optimal paradigm but not by others. However, very similar results were observed when the separate oddball paradigms were run. The present study indicates that the optimal paradigm provides a very time-saving method to study attention capture and the P3a. Copyright © 2016 Elsevier B.V. All rights reserved.
Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model
NASA Technical Reports Server (NTRS)
Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.
2010-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to as the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (v) is added to the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.
Wirtz, M A; Strohmer, J
2016-06-01
In order to develop and evaluate interventions in rehabilitation research, a wide range of empirical research methods may be adopted. Qualitative research methods emphasize the relevance of an open research focus and a natural proximity to the research objects. Accordingly, qualitative methods offer special benefits when researchers strive to identify and organize unknown information aspects (inductive purpose). Quantitative research methods, in particular, require a high degree of standardization and transparency of the research process; furthermore, a clear definition of efficacy and effectiveness exists (deductive purpose). These paradigmatic approaches are characterized by almost opposite key characteristics, application standards, purposes and quality criteria. Hence, specific aspects have to be considered if researchers aim to select or combine these approaches in order to ensure an optimal gain in knowledge. © Georg Thieme Verlag KG Stuttgart · New York.
Optimal Hotspots of Dynamic Surface-Enhanced Raman Spectroscopy for Quantitative Detection of Drugs.
Yan, Xiunan; Li, Pan; Zhou, Binbin; Tang, Xianghu; Li, Xiaoyun; Weng, Shizhuang; Yang, Liangbao; Liu, Jinhuai
2017-05-02
Surface-enhanced Raman spectroscopy (SERS) is a powerful qualitative analysis method that has been widely applied in many fields. However, SERS for quantitative analysis still suffers from several challenges, partially because of the absence of a stable and credible analytical strategy. Here, we demonstrate that the optimal hotspots created by dynamic surface-enhanced Raman spectroscopy (D-SERS) can be used for quantitative SERS measurements. In situ small-angle X-ray scattering was carried out to monitor, in real time, the formation of the optimal hotspots: the most efficient hotspots were generated during evaporation of the monodisperse Au sol. Importantly, the natural evaporation of the Au sol avoids the salt-induced instability of the nanoparticles, and the formation of ordered three-dimensional hotspots allows SERS detection with excellent reproducibility. To account for SERS signal variability in the D-SERS process, 4-mercaptopyridine (4-mpy) was used as an internal standard to correct the signals, improving their stability and reducing their fluctuation. The strongest SERS spectra at the optimal hotspots of D-SERS were extracted for statistical analysis. Using the SERS signal of 4-mpy as a stable internal calibration standard, the relative SERS intensity of the target molecules showed a linear response versus the negative logarithm of concentration at the point of strongest SERS signal, which illustrates the great potential for quantitative analysis. The drugs 3,4-methylenedioxymethamphetamine and α-methyltryptamine hydrochloride were analyzed precisely with the internal-standard D-SERS strategy. Consequently, our approach is a promising route to addressing quantitative problems in conventional SERS analysis.
NASA Astrophysics Data System (ADS)
Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo
2016-11-01
We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
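A compact sketch of the string-averaging idea on a toy non-smooth problem follows (an L1 data-fitting objective; the splitting, step sizes, and problem are illustrative and not the tomographic setting of the paper). Each string is processed independently by incremental subgradient steps and the end-points are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-smooth problem: minimize f(x) = sum_i |a_i . x - b_i|  (an L1 data fit).
m, n = 400, 20
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true

def subgrad(i, x):
    """A subgradient of |a_i . x - b_i| at x."""
    return np.sign(A[i] @ x - b[i]) * A[i]

def string_averaging_ism(x0, n_strings=8, epochs=50, step0=0.5):
    x = x0.copy()
    strings = np.array_split(rng.permutation(m), n_strings)
    for t in range(1, epochs + 1):
        step = step0 / t                      # diminishing step size
        endpoints = []
        for s in strings:                     # each string could run in parallel
            y = x.copy()
            for i in s:                       # incremental subgradient pass
                y -= step * subgrad(i, y)
            endpoints.append(y)
        x = np.mean(endpoints, axis=0)        # average the string end-points
    return x

x = string_averaging_ism(np.zeros(n))
print("objective:", np.abs(A @ x - b).sum())
```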
Optimal color coding for compression of true color images
NASA Astrophysics Data System (ADS)
Musatenko, Yurij S.; Kurashov, Vitalij N.
1998-11-01
In this paper we present a method that improves lossy compression of true color or other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do so, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain from 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
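The channel-decorrelation step can be sketched as a generic eigendecomposition of the inter-plane covariance; the fast low-memory basis construction and the optimal loss allocation proposed in the paper are not reproduced here, and the quantization step below is a simple stand-in for the actual compressor.

```python
import numpy as np

def kl_transform(planes):
    """Decorrelate color planes with the Karhunen-Loeve (principal component) basis.
    planes: array of shape (C, H, W)."""
    C = planes.shape[0]
    X = planes.reshape(C, -1).astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    cov = np.cov(X - mean)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = evals.argsort()[::-1]
    basis = evecs[:, order].T                   # rows = KL basis vectors
    kl_planes = (basis @ (X - mean)).reshape(planes.shape)
    return kl_planes, basis, mean, evals[order]

def kl_inverse(kl_planes, basis, mean):
    C = kl_planes.shape[0]
    X = basis.T @ kl_planes.reshape(C, -1) + mean
    return X.reshape(kl_planes.shape)

# Demo with a synthetic, strongly correlated 4-plane (CMYK-like) image.
rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))
img = np.stack([base + 0.05 * rng.normal(size=base.shape) for _ in range(4)])

kl, basis, mean, evals = kl_transform(img)
# Stand-in for the lossy step: quantize low-variance KL planes more coarsely.
steps = 0.01 + 0.2 * (evals.max() - evals) / evals.max()
kl_q = np.round(kl / steps[:, None, None]) * steps[:, None, None]
rec = kl_inverse(kl_q, basis, mean)
print("reconstruction RMSE:", float(np.sqrt(((rec - img) ** 2).mean())))
```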
Reichert, Bárbara; de Kok, André; Pizzutti, Ionara Regina; Scholten, Jos; Cardoso, Carmem Dickow; Spanjer, Martien
2018-04-03
This paper describes the optimization and validation of an acetonitrile-based method for simultaneous extraction of multiple pesticides and mycotoxins from raw coffee beans followed by LC-ESI-MS/MS determination. Before extraction, the raw coffee samples were milled and then slurried with water. The slurried samples were spiked with two separate standard solutions, one containing 131 pesticides and a second with 35 mycotoxins, which were divided into 3 groups of different relative concentration levels. Optimization of the QuEChERS approach included performance tests with acetonitrile acidified with acetic acid or formic acid, with or without buffer and with or without clean-up of the extracts before LC-ESI-MS/MS analysis. For the clean-up step, seven d-SPE sorbents and their various mixtures were evaluated. After method optimization a complete validation study was carried out to ensure adequate performance of the extraction and chromatographic methods. The samples were spiked at 3 concentration levels with both mycotoxins and pesticides (with 6 replicates at each level, n = 6) and then submitted to the extraction procedure. Before LC-ESI-MS/MS analysis, the acetonitrile extracts were diluted 2-fold with methanol, in order to improve the chromatographic performance of the early-eluting polar analytes. Calibration standard solutions were prepared in organic solvent and in blank coffee extract at 7 concentration levels and analyzed 6 times each. The method was assessed for accuracy (recovery %), precision (RSD%), selectivity, linearity (r²), limit of quantification (LOQ) and matrix effects (%). Copyright © 2017 Elsevier B.V. All rights reserved.
Does unbelted safety requirement affect protection for belted occupants?
Hu, Jingwen; Klinich, Kathleen D; Manary, Miriam A; Flannagan, Carol A C; Narayanaswamy, Prabha; Reed, Matthew P; Andreen, Margaret; Neal, Mark; Lin, Chin-Hsu
2017-05-29
Federal regulations in the United States require vehicles to meet occupant performance requirements with unbelted test dummies. Removing the test requirements with unbelted occupants might encourage the deployment of seat belt interlocks and allow restraint optimization to focus on belted occupants. The objective of this study is to compare the performance of restraint systems optimized for belted-only occupants with those optimized for both belted and unbelted occupants using computer simulations and field crash data analyses. In this study, 2 validated finite element (FE) vehicle/occupant models (a midsize sedan and a midsize SUV) were selected. Restraint design optimizations under standardized crash conditions (U.S.-NCAP and FMVSS 208) with and without unbelted requirements were conducted using Hybrid III (HIII) small female and midsize male anthropomorphic test devices (ATDs) in both vehicles on both driver and right front passenger positions. A total of 10 to 12 design parameters were varied in each optimization using a combination of response surface method (RSM) and genetic algorithm. To evaluate the field performance of restraints optimized with and without unbelted requirements, 55 frontal crash conditions covering a greater variety of crash types than those in the standardized crashes were selected. A total of 1,760 FE simulations were conducted for the field performance evaluation. Frontal crashes in the NASS-CDS database from 2002 to 2012 were used to develop injury risk curves and to provide the baseline performance of current restraint system and estimate the injury risk change by removing the unbelted requirement. Unbelted requirements do not affect the optimal seat belt and airbag design parameters in 3 out of 4 vehicle/occupant position conditions, except for the SUV passenger side. Overall, compared to the optimal designs with unbelted requirements, optimal designs without unbelted requirements generated the same or lower total injury risks for belted occupants depending on statistical methods used for the analysis, but they could also increase the total injury risks for unbelted occupants. This study demonstrated potential for reducing injury risks to belted occupants if the unbelted requirements are eliminated. Further investigations are necessary to confirm these findings.
Zheng, Hong; Clausen, Morten Rahr; Dalsgaard, Trine Kastrup; Mortensen, Grith; Bertram, Hanne Christine
2013-08-06
We describe a time-saving protocol for the processing of LC-MS-based metabolomics data by optimizing parameter settings in XCMS and threshold settings for removing noisy and low-intensity peaks using design of experiment (DoE) approaches including Plackett-Burman design (PBD) for screening and central composite design (CCD) for optimization. A reliability index, which is based on evaluation of the linear response to a dilution series, was used as a parameter for the assessment of data quality. After identifying the significant parameters in the XCMS software by PBD, CCD was applied to determine their values by maximizing the reliability and group indexes. Optimal settings by DoE resulted in improvements of 19.4% and 54.7% in the reliability index for a standard mixture and human urine, respectively, as compared with the default setting, and a total of 38 h was required to complete the optimization. Moreover, threshold settings were optimized by using CCD for further improvement. The approach combining optimal parameter setting and the threshold method improved the reliability index about 9.5 times for a standards mixture and 14.5 times for human urine data, which required a total of 41 h. Validation results also showed improvements in the reliability index of about 5-7 times even for urine samples from different subjects. It is concluded that the proposed methodology can be used as a time-saving approach for improving the processing of LC-MS-based metabolomics data.
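A hedged illustration of a reliability-index-style criterion is sketched below. The exact definition used in the paper is not reproduced; this sketch simply scores each detected feature by the r² of a linear fit against the dilution factor and reports the fraction of features that pass a threshold.

```python
import numpy as np

def reliability_score(intensities, dilution, r2_threshold=0.9):
    """Approximate reliability-index-style score (an assumption, not the paper's
    exact definition): fraction of features whose response is linear in the
    dilution factor, judged by r^2 of a least-squares fit.

    intensities: (n_features, n_dilutions) peak areas after XCMS processing
    dilution:    (n_dilutions,) dilution factors of the pooled sample
    """
    x = np.asarray(dilution, dtype=float)
    r2 = np.empty(len(intensities))
    for j, y in enumerate(intensities):
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        ss_tot = ((y - y.mean()) ** 2).sum()
        r2[j] = 1.0 - resid @ resid / ss_tot if ss_tot > 0 else 0.0
    return float((r2 >= r2_threshold).mean()), r2

# Toy usage: 3 linear features, 1 noisy feature, measured at 5 dilutions.
rng = np.random.default_rng(0)
dil = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
feats = np.vstack([k * dil + rng.normal(0, 0.02, dil.size) for k in (1.0, 2.0, 0.5)]
                  + [rng.normal(1, 0.5, dil.size)])
score, _ = reliability_score(feats, dil)
print("fraction of reliable features:", score)
```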
Balest, Lydia; Murgolo, Sapia; Sciancalepore, Lucia; Montemurro, Patrizia; Abis, Pier Paolo; Pastore, Carlo; Mascolo, Giuseppe
2016-06-01
An on-line solid phase extraction coupled with high-performance liquid chromatography in tandem with mass spectrometry (on-line SPE/HPLC/MS-MS) method for the determination of five microcystins and nodularin in surface waters at submicrogram per liter concentrations has been optimized. Maximum recoveries were achieved by carefully optimizing the extraction sample volume, loading solvent, wash solvent, and pH of the sample. The developed method was also validated according to both UNI EN ISO IEC 17025 and UNICHIM guidelines. Specifically, ten analytical runs were performed at three different concentration levels using a reference mix solution containing the six analytes. The method was applied for monitoring the concentrations of microcystins and nodularin in real surface water during a sampling campaign of 9 months in which the ELISA method was used as standard official method. The results of the two methods were compared showing good agreement when the highest concentration values of MCs were found. Graphical abstract An on-line SPE/HPLC/MS-MS method for the determination of five microcystins and nodularin in surface waters at sub μg L(-1) was optimized and compared with ELISA assay method for real samples.
Selecting Items for Criterion-Referenced Tests.
ERIC Educational Resources Information Center
Mellenbergh, Gideon J.; van der Linden, Wim J.
1982-01-01
Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)
Solving Rational Expectations Models Using Excel
ERIC Educational Resources Information Center
Strulik, Holger
2004-01-01
Simple problems of discrete-time optimal control can be solved using standard spreadsheet software. The employed solution method of backward iteration is intuitively understandable, does not require any programming skills, and is easy to implement, so that it is suitable for classroom exercises with rational-expectations models. The author…
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1992-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
A novel neural network for the synthesis of antennas and microwave devices.
Delgado, Heriberto Jose; Thursby, Michael H; Ham, Fredric M
2005-11-01
A novel artificial neural network (SYNTHESIS-ANN) is presented, which has been designed for computationally intensive problems and applied to the optimization of antennas and microwave devices. The antenna example presented is optimized with respect to voltage standing-wave ratio, bandwidth, and frequency of operation. A simple microstrip transmission line problem is used to further describe the ANN effectiveness, in which microstrip line width is optimized with respect to line impedance. The ANNs exploit a unique number representation of input and output data in conjunction with a more standard neural network architecture. An ANN consisting of a heteroassociative memory provided a very efficient method of computing necessary geometrical values for the antenna when used in conjunction with a new randomization process. The number representation used provides significant insight into this new method of fault-tolerant computing. Further work is needed to evaluate the potential of this new paradigm.
Generalized rules for the optimization of elastic network models
NASA Astrophysics Data System (ADS)
Lezon, Timothy; Eyal, Eran; Bahar, Ivet
2009-03-01
Elastic network models (ENMs) are widely employed for approximating the coarse-grained equilibrium dynamics of proteins using only a few parameters. An area of current focus is improving the predictive accuracy of ENMs by fine-tuning their force constants to fit specific systems. Here we introduce a set of general rules for assigning ENM force constants to residue pairs. Using a novel method, we construct ENMs that optimally reproduce experimental residue covariances from NMR models of 68 proteins. We analyze the optimal interactions in terms of amino acid types, pair distances and local protein structures to identify key factors in determining the effective spring constants. When applied to several unrelated globular proteins, our method shows an improved correlation with experiment over a standard ENM. We discuss the physical interpretation of our findings as well as its implications in the fields of protein folding and dynamics.
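For orientation, the baseline that such optimized models are compared against can be sketched as a standard Gaussian network model with a uniform spring constant and a distance cutoff (illustrative values; not the pair-specific force constants derived in the paper).

```python
import numpy as np

def gnm_covariance(coords, cutoff=7.3, gamma=1.0):
    """Residue fluctuation covariance from a standard Gaussian network model:
    a uniform spring constant for all residue pairs within the cutoff (the
    baseline that optimized, pair-specific constants aim to improve on)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    kirchhoff = -(d < cutoff).astype(float) * gamma
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))
    return np.linalg.pinv(kirchhoff)          # covariance up to a kT/gamma factor

# Toy usage with random "C-alpha" coordinates; with real data the model covariance
# would be correlated against covariances computed from an NMR ensemble.
rng = np.random.default_rng(0)
ca = np.cumsum(rng.normal(scale=3.8 / np.sqrt(3), size=(60, 3)), axis=0)
cov = gnm_covariance(ca)
print("mean-square fluctuation of first residues:", np.round(np.diag(cov)[:5], 3))
```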
The quasi-optimality criterion in the linear functional strategy
NASA Astrophysics Data System (ADS)
Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey
2018-07-01
The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications including the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and stochastic setup and verify that for mildly-ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, where on the contrary, in the severely-ill-posed case and in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
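The plain quasi-optimality rule underlying the proposal can be sketched on a toy Tikhonov problem; this shows only the generic heuristic (pick the parameter minimizing the jump between consecutive regularized solutions on a geometric grid), not the linear-functional-strategy variant or the aggregation method of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem: operator A with rapidly decaying singular values.
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U @ np.diag(0.9 ** np.arange(n)) @ V.T
x_true = V[:, 0] + 0.5 * V[:, 1]
y = A @ x_true + 1e-3 * rng.normal(size=n)

def tikhonov(alpha):
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Quasi-optimality: over a geometric grid alpha_k = a0 * q**k, pick the k that
# minimizes ||x_{alpha_{k+1}} - x_{alpha_k}||; no knowledge of the noise level needed.
alphas = 1e-1 * 0.7 ** np.arange(40)
xs = [tikhonov(a) for a in alphas]
jumps = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
k_star = int(np.argmin(jumps))
print("chosen alpha:", alphas[k_star],
      "error:", np.linalg.norm(xs[k_star] - x_true))
```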
NASA Astrophysics Data System (ADS)
Fragkoulis, Alexandros; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2015-03-01
We propose a method for the fair and efficient allocation of wireless resources over a cognitive radio system network to transmit multiple scalable video streams to multiple users. The method exploits the dynamic architecture of the Scalable Video Coding extension of the H.264 standard, along with the diversity that OFDMA networks provide. We use a game-theoretic Nash Bargaining Solution (NBS) framework to ensure that each user receives the minimum video quality requirements, while maintaining fairness over the cognitive radio system. An optimization problem is formulated, where the objective is the maximization of the Nash product while minimizing the waste of resources. The problem is solved by using a Swarm Intelligence optimizer, namely Particle Swarm Optimization. Due to the high dimensionality of the problem, we also introduce a dimension-reduction technique. Our experimental results demonstrate the fairness imposed by the employed NBS framework.
Speed and convergence properties of gradient algorithms for optimization of IMRT.
Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe
2004-05-01
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented. For both dose-volume- and EUD-based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
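A toy sketch of the diagonal-Hessian Newton update compared with steepest descent on a simplified quadratic dose objective is given below (illustrative matrices and weights; no dose-volume or EUD terms, and not the paper's clinical cases).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy IMRT-like quadratic objective: beamlet weights w, dose matrix D,
# prescription p, per-voxel importance weights lam.
n_vox, n_beam = 300, 40
D = rng.random((n_vox, n_beam)) * (rng.random((n_vox, n_beam)) < 0.2)
p = rng.random(n_vox)
lam = np.ones(n_vox)

def grad(w):
    return 2 * D.T @ (lam * (D @ w - p))

diag_hess = 2 * (lam[:, None] * D ** 2).sum(axis=0)      # diagonal of 2 D^T diag(lam) D

def run(method, iters=200, sd_step=None):
    w = np.zeros(n_beam)
    if sd_step is None:                                    # safe step for steepest descent
        sd_step = 1.0 / np.linalg.norm(2 * D.T @ (lam[:, None] * D), 2)
    for _ in range(iters):
        g = grad(w)
        step = g / diag_hess if method == "newton_diag" else sd_step * g
        w = np.maximum(w - step, 0.0)                      # keep intensities non-negative
    return float(lam @ (D @ w - p) ** 2)

for m in ("steepest_descent", "newton_diag"):
    print(m, run(m))
```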
Zhou, Dong; Zhang, Hui; Ye, Peiqing
2016-01-01
The lateral penumbra of a multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributed to the leaf end shape. In our study, an analytical method for modelling the leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is used to approximate the Pareto frontier. Results show that for a circular arc leaf end the objective function is convex and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves the minimal standard deviation, while with a B-spline the minimum of the penumbra mean is obtained. For treatment modalities in clinical application, the optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design for multileaf collimators.
Detection of fatigue cracks by nondestructive testing methods
NASA Technical Reports Server (NTRS)
Anderson, R. T.; Delacy, T. J.; Stewart, R. C.
1973-01-01
The effectiveness of various NDT methods in detecting small, tight cracks was assessed by randomly introducing fatigue cracks into aluminum sheets. The study included optimizing the NDT methods, calibrating NDT equipment with fatigue-cracked standards, and evaluating a number of cracked specimens by the optimized NDT methods. The evaluations were conducted by highly trained personnel, provided with detailed procedures, in order to minimize the effects of human variability. These personnel performed the NDT on the test specimens without knowledge of the flaw locations and reported on the flaws detected. The performance of these tests was measured by comparing the flaws detected against the flaws present. The principal NDT methods utilized were radiographic, ultrasonic, penetrant, and eddy current. Holographic interferometry, acoustic emission monitoring, and replication methods were also applied on a reduced number of specimens. Generally, the best performance was shown by eddy current, ultrasonic, penetrant and holographic tests. Etching provided no measurable improvement, while proof loading improved flaw detectability. Data are shown that quantify the performances of the NDT methods applied.
SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazareth, D; Spaans, J
Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods: simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
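For reference, the conventional simulated-annealing baseline on a discretized beamlet problem can be sketched as follows (a toy objective and cooling schedule; not the clinical cases, the CERR dose matrices, or the D-Wave QA hardware used in the study).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretized beamlet problem: each beamlet takes one of a few intensity
# levels; the objective penalizes underdose to the target and overdose elsewhere.
n_vox, n_beam = 200, 60
D = rng.random((n_vox, n_beam)) * (rng.random((n_vox, n_beam)) < 0.25)
target = rng.random(n_vox) < 0.4
presc, levels = 1.0, np.linspace(0.0, 1.0, 8)

def objective(x):
    dose = D @ levels[x]
    under = np.clip(presc - dose[target], 0, None)
    over = np.clip(dose[~target] - 0.3 * presc, 0, None)
    return (under ** 2).sum() + (over ** 2).sum()

def simulated_annealing(iters=20000, T0=1.0, Tend=1e-3):
    x = rng.integers(len(levels), size=n_beam)
    f = objective(x)
    for t in range(iters):
        T = T0 * (Tend / T0) ** (t / iters)           # geometric cooling schedule
        j = rng.integers(n_beam)                      # perturb one beamlet level
        cand = x.copy()
        cand[j] = rng.integers(len(levels))
        fc = objective(cand)
        if fc < f or rng.random() < np.exp((f - fc) / T):
            x, f = cand, fc
    return x, f

x_best, f_best = simulated_annealing()
print("final objective:", round(f_best, 3))
```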
NASA Astrophysics Data System (ADS)
Wisnuadi, Alief Regyan; Damayanti, Retno Wulan; Pujiyanto, Eko
2018-02-01
Bearings are among the most widely used parts in the automotive industry, and SKF Indonesia is one of the leading bearing manufacturing companies in the world. The company must produce bearings that meet international standards and must pursue continuous improvement in order to stay competitive. Until now, SKF Indonesia has performed quality control only within its Quality Assurance department; in other words, quality improvement has not been carried out comprehensively. The purpose of this research is to improve the quality of the outer ring product at SKF Indonesia by conducting an internal grinding process experiment on the settings of speed ratio, fine position, and spark-out grinding time. The specific purpose of this experiment is to optimize quality responses such as roughness, roundness, and cycle time. All responses in this experiment were of the smaller-the-better type. The Taguchi method and PCR-TOPSIS are used for the optimization process. The results show that, using the Taguchi method and PCR-TOPSIS, the optimum condition occurs at a speed ratio of 36, a fine position of 18 µm/s, and a spark-out time of 0.5 s. Under these conditions the roughness was 0.398 µm, the roundness 1.78 µm, and the cycle time 8.1 s. These results are better than the previous ones and meet the standards: the roughness of 0.523 µm decreased to 0.398 µm and the average cycle time of 8.5 s decreased to 8.1 s.
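A plain TOPSIS ranking of experimental runs can be sketched as below. The paper couples TOPSIS with process capability ratios (PCR-TOPSIS), which adds a capability-based normalization not shown here, and the response values in this example are hypothetical.

```python
import numpy as np

def topsis(responses, weights=None):
    """Rank experimental runs with TOPSIS for smaller-the-better responses.
    responses: (n_runs, n_responses); returns closeness coefficients (higher = better)."""
    R = np.asarray(responses, dtype=float)
    w = np.ones(R.shape[1]) / R.shape[1] if weights is None else np.asarray(weights)
    V = w * R / np.sqrt((R ** 2).sum(axis=0))          # vector-normalized, weighted
    ideal, anti = V.min(axis=0), V.max(axis=0)         # smaller-the-better ideal points
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical runs: columns are roughness (um), roundness (um), cycle time (s).
runs = np.array([[0.52, 2.10, 8.5],
                 [0.44, 1.95, 8.4],
                 [0.40, 1.80, 8.1],
                 [0.47, 2.30, 8.0]])
cc = topsis(runs)
print("best run (0-indexed):", int(cc.argmax()), "closeness:", np.round(cc, 3))
```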
Optimization and standardization of pavement management processes.
DOT National Transportation Integrated Search
2004-08-01
This report addresses issues related to optimization and standardization of current pavement management processes in Kentucky. Historical pavement management records were analyzed, which indicates that standardization is necessary in future pavement ...
Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Pinto, Terezinha de Jesus Andreoli; Lourenço, Felipe Rebello
2014-08-01
Microbiological assays have been used to evaluate antimicrobial activity since the discovery of the first antibiotics. Despite their limitations, microbiological assays are widely employed to determine the antibiotic potency of pharmaceutical dosage forms, since they provide a measure of biological activity. The aim of this work is to develop, optimize and validate a rapid colorimetric microplate bioassay for the potency of neomycin in pharmaceutical drug products. Factorial and response surface methodologies were used in the development and optimization of the choice of microorganism, culture medium composition, amount of inoculum, triphenyltetrazolium chloride (TTC) concentration and neomycin concentration. The optimized bioassay method was validated by the assessment of linearity (range 3.0 to 5.0 μg/mL, r=0.998 and 0.994 for standard and sample curves, respectively), precision (relative standard deviation (RSD) of 2.8% and 4.0% for repeatability and intermediate precision, respectively), accuracy (mean recovery=100.2%) and robustness. Statistical analysis showed equivalency between the agar diffusion microbiological assay and the rapid colorimetric microplate bioassay. In addition, the microplate bioassay had advantages concerning the sensitivity of response, time of incubation, and amount of culture medium and solutions required. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Böning, Guido; Todica, Andrei; Vai, Alessandro; Lehner, Sebastian; Xiong, Guoming; Mille, Erik; Ilhan, Harun; la Fougère, Christian; Bartenstein, Peter; Hacker, Marcus
2013-11-01
The assessment of left ventricular function, wall motion and myocardial viability using electrocardiogram (ECG)-gated [18F]-FDG positron emission tomography (PET) is widely accepted in human and in preclinical small animal studies. The nonterminal and noninvasive approach permits repeated in vivo evaluations of the same animal, facilitating the assessment of temporal changes in disease or therapy response. Although well established, gated small animal PET studies can contain erroneous gating information, which may yield blurred images and false estimates of functional parameters. In this work, we present quantitative and visual quality control (QC) methods to evaluate the accuracy of trigger events in PET list-mode and physiological data. Left ventricular functional analysis is performed to quantify the effect of gating errors on the end-systolic and end-diastolic volumes, and on the ejection fraction (EF). We aim to recover the cardiac functional parameters by the application of the commonly established heart rate filter approach using fixed ranges based on a standardized population. In addition, we propose a fully reprocessing approach which retrospectively replaces the gating information of the PET list-mode file with appropriate list-mode decoding and encoding software. The signal of a simultaneously acquired ECG is processed using standard MATLAB vector functions, which can be individually adapted to reliably detect the R-peaks. Finally, the new trigger events are inserted into the PET list-mode file. A population of 30 mice with various health statuses was analyzed, and standard cardiac parameters such as mean heart rate (119 ms ± 11.8 ms) and mean heart rate variability (1.7 ms ± 3.4 ms) were derived. These standard parameter ranges were taken into account in the QC methods to select a group of nine optimal gated and a group of eight sub-optimal gated [18F]-FDG PET scans of mice from our archive. From the list-mode files of the optimal gated group, we randomly deleted various fractions (5% to 60%) of contained trigger events to generate a corrupted group. The filter approach was capable of correcting the corrupted group and yielded functional parameters with no significant difference to the optimal gated group. We successfully demonstrated the potential of the fully reprocessing approach by applying it to the sub-optimal group, where the functional parameters were significantly improved after reprocessing (mean EF from 41% ± 16% to 60% ± 13%). When applied to the optimal gated group the fully reprocessing approach did not alter the functional parameters significantly (mean EF from 64% ± 8% to 64% ± 7%). This work presents methods to determine and quantify erroneous gating in small animal gated [18F]-FDG PET scans. We demonstrate the importance of a quality check for cardiac triggering contained in PET list-mode data and the benefit of optionally reprocessing the fully recorded physiological information to retrospectively modify or fully replace the cardiac triggering in PET list-mode data. We aim to provide a preliminary guideline of how to proceed in the presence of errors and demonstrate that offline reprocessing by filtering erroneous trigger events and retrospective gating by ECG processing is feasible. Future work will focus on the extension by additional QC methods, which may exploit the amplitude of trigger events and ECG signal by means of pattern recognition. Furthermore, we aim to transfer the proposed QC methods and the fully reprocessing approach to human myocardial PET/CT.
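An analogous R-peak detection step can be sketched in Python as below. The study used custom MATLAB vector functions; the baseline-removal, threshold, and refractory settings here are illustrative assumptions, and the synthetic ECG is a toy stand-in for a recorded mouse signal.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs, min_rr_ms=80.0):
    """Simple R-peak detector sketch. ecg: raw samples, fs: sampling rate in Hz.
    Returns sample indices usable as replacement trigger events."""
    # Remove the slow baseline with a 200 ms moving average.
    win = int(0.2 * fs)
    x = ecg - np.convolve(ecg, np.ones(win) / win, mode="same")
    thresh = 0.5 * np.percentile(np.abs(x), 99)        # adaptive amplitude threshold
    peaks, _ = find_peaks(x, height=thresh, distance=int(min_rr_ms / 1000 * fs))
    return peaks

# Toy usage with a synthetic mouse-like ECG (~120 ms cycle length, fs = 1 kHz).
fs, rr = 1000, 0.120
t = np.arange(0, 10, 1 / fs)
ecg = sum(np.exp(-((t - k * rr) ** 2) / (2 * 0.002 ** 2)) for k in range(int(10 / rr)))
ecg += 0.05 * np.random.default_rng(0).normal(size=t.size)
r = detect_r_peaks(ecg, fs)
print("detected beats:", len(r), "mean RR (ms):", round(float(np.diff(r).mean()), 1))
```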
Wang, Hongbin; Hu, Gaofei; Zhang, Yongqian; Yuan, Zheng; Zhao, Xuan; Zhu, Yong; Cai, De; Li, Yujuan; Xiao, Shengyuan; Deng, Yulin
2010-07-15
The post-digestion (18)O labeling method decouples protein digestion and peptide labeling. This method allows labeling conditions to be optimized separately and increases labeling efficiency. A common method for protein denaturation in proteomics is the use of urea. Though some previous studies have used urea-based protein denaturation before post-digestion (18)O labeling, the optimal (18)O labeling conditions in this case have not yet been reported. The present study investigated the effects of urea concentration and pH on the labeling efficiency and obtained an optimized protocol. It was demonstrated that urea inhibited (18)O incorporation in a concentration-dependent manner; however, a urea concentration between 1 and 2 M had minimal effects on labeling. It was also demonstrated that the use of formic acid (FA) to quench the digestion reaction severely affected the labeling efficiency. This study revealed the reason why previous studies gave different optimal pH values for labeling: they neglected the effects of different digestion conditions on the labeling conditions. Excellent labeling quality was obtained at the optimized conditions using urea 1-2 M and pH 4.5, 98.4+/-1.9% for a standard protein mixture and 97.2+/-6.2% for a complex biological sample. For a 1:1 mixture analysis of the (16)O- and (18)O-labeled peptides from the same protein sample, the average abundance ratios reached 1.05+/-0.31, demonstrating a good quantitation quality at the optimized conditions. This work will benefit other researchers who pair urea-based protein denaturation with a post-digestion (18)O labeling method. 2010 Elsevier B.V. All rights reserved.
Bashiry, Moein; Mohammadi, Abdorreza; Hosseini, Hedayat; Kamankesh, Marzieh; Aeenehvand, Saeed; Mohammadi, Zaniar
2016-01-01
A novel method based on microwave-assisted extraction and dispersive liquid-liquid microextraction (MAE-DLLME) followed by high-performance liquid chromatography (HPLC) was developed for the determination of three polyamines from turkey breast meat samples. Response surface methodology (RSM) based on central composite design (CCD) was used to optimize the effective factors in the DLLME process. The optimum microextraction efficiency was obtained under the optimized conditions. The calibration graphs of the proposed method were linear in the range of 20-200 ng g(-1), with coefficients of determination (R²) higher than 0.9914. The relative standard deviations were 6.72-7.30% (n = 7). The limits of detection were in the range of 0.8-1.4 ng g(-1). The recoveries of these compounds in spiked turkey breast meat samples were from 95% to 105%. The increased sensitivity of the MAE-DLLME-HPLC-UV method was demonstrated. Compared with previous methods, the proposed method is an accurate, rapid and reliable sample-pretreatment method. Copyright © 2015 Elsevier Ltd. All rights reserved.
Jones, Siana; Shun-Shin, Matthew J; Cole, Graham D; Sau, Arunashis; March, Katherine; Williams, Suzanne; Kyriacou, Andreas; Hughes, Alun D; Mayet, Jamil; Frenneaux, Michael; Manisty, Charlotte H; Whinnett, Zachary I; Francis, Darrel P
2014-04-01
This full-disclosure study describes Doppler patterns during iterative atrioventricular delay (AVD) optimization of biventricular pacemakers (cardiac resynchronization therapy, CRT). Doppler traces of the first 50 eligible patients undergoing iterative Doppler AVD optimization in the BRAVO trial were examined. Three experienced observers classified conformity to guideline-described patterns. Each observer then selected the optimum AVD on two separate occasions: blinded and unblinded to AVD. Four Doppler E-A patterns occurred: A (always merged, 18% of patients), B (incrementally less fusion at short AVDs, 12%), C (full separation at short AVDs, as described by the guidelines, 28%), and D (always separated, 42%). In Groups A and D (60%), the iterative guidelines therefore cannot specify one single AVD. On the kappa scale (0 = chance alone; 1 = perfect agreement), observer agreement for the ideal AVD in Classes B and C was poor (0.32) and appeared worse in Groups A and D (0.22). Blinding caused the scatter of the AVD selected as optimal to widen (standard deviation rising from 37 to 49 ms, P < 0.001). With blinding, 28% of the selected optimum AVDs were ≤60 or ≥200 ms. All 50 Doppler datasets are presented, to support future methodological testing. In most patients, the iterative method does not clearly specify one AVD. In all the patients, agreement on the ideal AVD between skilled observers viewing identical images is poor. The iterative protocol may successfully exclude some extremely unsuitable AVDs, but so might simply accepting the factory default. Irreproducibility of the gold standard also prevents alternative physiological optimization methods from being validated honestly.
Optimizing Illumina next-generation sequencing library preparation for extremely AT-biased genomes.
Oyola, Samuel O; Otto, Thomas D; Gu, Yong; Maslen, Gareth; Manske, Magnus; Campino, Susana; Turner, Daniel J; Macinnis, Bronwyn; Kwiatkowski, Dominic P; Swerdlow, Harold P; Quail, Michael A
2012-01-03
Massively parallel sequencing technology is revolutionizing approaches to genomic and genetic research. Since its advent, the scale and efficiency of Next-Generation Sequencing (NGS) has rapidly improved. In spite of this success, sequencing genomes or genomic regions with extremely biased base composition is still a great challenge to the currently available NGS platforms. The genomes of some important pathogenic organisms like Plasmodium falciparum (high AT content) and Mycobacterium tuberculosis (high GC content) display extremes of base composition. The standard library preparation procedures that employ PCR amplification have been shown to cause uneven read coverage, particularly across AT- and GC-rich regions, leading to problems in genome assembly and variation analyses. Alternative library-preparation approaches that omit PCR amplification require large quantities of starting material and hence are not suitable for small amounts of DNA/RNA such as those from clinical isolates. We have developed and optimized library-preparation procedures suitable for low-quantity starting material and tolerant to extremely high-AT-content sequences. We used our optimized conditions in parallel with standard methods to prepare Illumina sequencing libraries from a non-clinical and a clinical isolate (containing ~53% host contamination). By analyzing and comparing the quality of the sequence data generated, we show that our optimized conditions, which involve a PCR additive (TMAC), produce amplified libraries with improved coverage of extremely AT-rich regions and reduced bias toward GC-neutral templates. We have developed a robust and optimized Next-Generation Sequencing library amplification method suitable for extremely AT-rich genomes. The new amplification conditions significantly reduce bias and retain the complexity of either extreme of base composition. This development will greatly benefit the sequencing of clinical samples, which often require amplification due to the low mass of starting DNA.
On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2011-01-01
This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Pade approximation, the Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, the sigma-modification adaptive law, and the optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin since it is dependent on the length of a time window and on parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, then the closed-loop input delay system tends to an LTI system. A lower bound of the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.
Superfast maximum-likelihood reconstruction for quantum tomography
NASA Astrophysics Data System (ADS)
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
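To make the projected-gradient idea concrete, here is a minimal sketch, assuming a POVM given as a list of NumPy matrices and measured frequencies: plain (non-accelerated) projected-gradient ascent on the multinomial log-likelihood, with the projection onto the density-matrix set done by projecting the eigenvalues onto the probability simplex. It is illustrative only and is not the accelerated scheme or code used in the paper.

```python
import numpy as np

def project_to_density_matrix(H):
    """Project a Hermitian matrix onto {rho >= 0, tr(rho) = 1} via simplex projection of eigenvalues."""
    w, V = np.linalg.eigh((H + H.conj().T) / 2)
    u = np.sort(w)[::-1]                               # eigenvalues, descending
    css = np.cumsum(u)
    k = np.nonzero(u + (1 - css) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[k] - 1) / (k + 1)
    lam = np.maximum(w - tau, 0)                       # projected eigenvalues (sum to 1)
    return (V * lam) @ V.conj().T

def mle_projected_gradient(povm, freqs, dim, steps=500, lr=0.1):
    """Plain projected-gradient ascent on sum_k f_k * log tr(rho E_k)."""
    rho = np.eye(dim, dtype=complex) / dim
    for _ in range(steps):
        probs = np.array([np.real(np.trace(rho @ E)) for E in povm])
        grad = sum(f / max(p, 1e-12) * E for f, p, E in zip(freqs, probs, povm))
        rho = project_to_density_matrix(rho + lr * grad)
    return rho

# Hypothetical single-qubit example: computational-basis measurement with frequencies 0.7/0.3
E0 = np.array([[1, 0], [0, 0]], complex)
E1 = np.array([[0, 0], [0, 1]], complex)
rho_hat = mle_projected_gradient([E0, E1], [0.7, 0.3], dim=2)
```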
Roszkowska, Anna; Tascon, Marcos; Bojko, Barbara; Goryński, Krzysztof; Dos Santos, Pedro Reck; Cypel, Marcelo; Pawliszyn, Janusz
2018-06-01
The fast and sensitive determination of concentrations of anticancer drugs in specific organs can improve the efficacy of chemotherapy and minimize its adverse effects. In this paper, ex vivo solid-phase microextraction (SPME) coupled to LC-MS/MS as a method for rapidly quantitating doxorubicin (DOX) in lung tissue was optimized. Furthermore, the theoretical and practical challenges related to the real-time monitoring of DOX levels in the lung tissue of a living organism (in vivo SPME) are presented. In addition, several parameters for ex vivo/in vivo SPME studies, such as extraction efficiency of autoclaved fibers, intact/homogenized tissue differences, critical tissue amount, and the absence of an internal standard are thoroughly examined. To both accurately quantify DOX in solid tissue and minimize the error related to the lack of an internal standard, a calibration method at equilibrium conditions was chosen. In optimized ex vivo SPME conditions, the targeted compound was extracted by directly introducing a 15 mm (45 µm thickness) mixed-mode fiber into 15 g of homogenized tissue for 20 min, followed by a desorption step in an optimal solvent mixture. The detection limit for DOX was 2.5 µg g -1 of tissue. The optimized ex vivo SPME method was successfully applied for the analysis of DOX in real pig lung biopsies, providing an averaged accuracy and precision of 103.2% and 12.3%, respectively. Additionally, a comparison between SPME and solid-liquid extraction revealed good agreement. The results presented herein demonstrate that the developed SPME method radically simplifies the sample preparation step and eliminates the need for tissue biopsies. These results suggest that SPME can accurately quantify DOX in different tissue compartments and can be potentially useful for monitoring and adjusting drug dosages during chemotherapy in order to achieve effective and safe concentrations of doxorubicin. Copyright © 2018 Elsevier B.V. All rights reserved.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
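The quantity being maximized here is Otsu's between-class variance evaluated at a set of thresholds. Below is a minimal sketch of that objective on a grayscale histogram, with a plain random search standing in for the paper's flower pollination algorithm with randomized location modification; the threshold count, iteration budget, and histogram are placeholders.

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance for a grayscale histogram split by the given thresholds."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    edges = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    sigma_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            sigma_b += w * (mu - mu_total) ** 2
    return sigma_b

def random_search_thresholds(hist, k, iters=5000, seed=0):
    """Stand-in optimizer: random search over k thresholds (the paper uses a flower pollination algorithm)."""
    rng = np.random.default_rng(seed)
    best_t, best_val = None, -np.inf
    for _ in range(iters):
        t = np.sort(rng.choice(np.arange(1, len(hist) - 1), size=k, replace=False))
        val = otsu_objective(hist, t)
        if val > best_val:
            best_t, best_val = t, val
    return best_t, best_val

# Hypothetical 8-bit histogram from a medical grayscale image
hist = np.random.default_rng(1).integers(0, 500, size=256).astype(float)
thresholds, value = random_search_thresholds(hist, k=3)
```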
Kalinina, Elizabeth A
2013-08-01
The explicit Euler's method is known to be very easy and effective in implementation for many applications. This article extends results previously obtained for the systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. Optimal (providing minimum total error) step size is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
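For orientation, the sketch below shows explicit Euler with a simple per-step size control based on step doubling; the article derives its own formula for the error-minimizing step size, which is not reproduced here, and the test problem is hypothetical.

```python
import numpy as np

def euler_adaptive(f, t0, y0, t_end, tol=1e-4, h0=1e-2):
    """Explicit Euler with step-doubling error control (illustrative only;
    the cited article derives its own optimal step-size formula)."""
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    ts, ys = [t], [y.copy()]
    while t < t_end:
        h = min(h, t_end - t)
        full = y + h * f(t, y)                          # one Euler step of size h
        half = y + (h / 2) * f(t, y)
        two_half = half + (h / 2) * f(t + h / 2, half)  # two steps of size h/2
        err = np.linalg.norm(two_half - full)
        if err <= tol or h < 1e-12:
            t, y = t + h, two_half
            ts.append(t); ys.append(y.copy())
            h *= 1.5 if err < tol / 4 else 1.0          # grow when comfortably accurate
        else:
            h *= 0.5                                    # retry with a smaller step
    return np.array(ts), np.array(ys)

# Hypothetical mildly stiff test problem: y' = -50 * (y - cos(t))
ts, ys = euler_adaptive(lambda t, y: -50.0 * (y - np.cos(t)), 0.0, np.array([0.0]), 1.0)
```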
Functionality limit of classical simulated annealing
NASA Astrophysics Data System (ADS)
Hasegawa, M.
2015-09-01
By analyzing the system dynamics in the landscape paradigm, optimization function of classical simulated annealing is reviewed on the random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement that the algorithm maintains its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to the optimization algorithm research.
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2016-07-01
This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combination model can reduce the prediction risk of a single model and improve prediction precision. A map of the geographical distribution of the reference value of QT dispersion in Chinese adults was produced using kriging methods. Once the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combination model, and the reference value anywhere in China can be read from the geographical distribution map.
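One standard way to form such a combination is to choose model weights that sum to one and minimize squared error on a reference set. The sketch below solves that equality-constrained least-squares problem directly; it is a generic formulation with made-up numbers, not necessarily the exact weighting scheme used in the cited study.

```python
import numpy as np

def optimal_combination_weights(preds, y):
    """Weights (summing to 1) that minimize squared error of the combined forecast.
    preds: (n_samples, n_models) matrix of individual model predictions."""
    P = np.asarray(preds, float)
    y = np.asarray(y, float)
    n_models = P.shape[1]
    # KKT system for: minimize ||y - P w||^2  subject to  sum(w) = 1
    A = np.zeros((n_models + 1, n_models + 1))
    A[:n_models, :n_models] = 2 * P.T @ P
    A[:n_models, -1] = 1
    A[-1, :n_models] = 1
    b = np.concatenate([2 * P.T @ y, [1.0]])
    return np.linalg.solve(A, b)[:n_models]

# Hypothetical example: regression, principal-component and ANN predictions of QT dispersion
preds = np.array([[41.0, 43.0, 40.0], [38.0, 37.5, 39.0], [45.0, 44.0, 46.0]])
y = np.array([42.0, 38.0, 45.0])
w = optimal_combination_weights(preds, y)
combined = preds @ w
```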
Optimal Control Modification Adaptive Law for Time-Scale Separated Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time-scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. A model matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.
Optimal Control Modification for Time-Scale Separated Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time-scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. A model matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.
Gao, Qun; Liu, Yan; Li, Hao; Chen, Hui; Chai, Yifeng; Lu, Feng
2014-06-01
Some expired drugs are difficult to detect by conventional means. If they are repackaged and sold back into the market, they will constitute a new public health challenge. For the detection of repackaged expired drugs that remain within specification, paracetamol tablets from one manufacturer were used as a model drug in this study to compare Raman spectra-based library verification and classification methods. Raman spectra of different batches of paracetamol tablets were collected and a library including standard spectra of unexpired batches was established. The Raman spectrum of each sample was identified by cosine and correlation similarity with the standard spectrum, and the average HQI between the suspicious samples and the standard spectrum was calculated. The optimum threshold values, determined by ROC analysis and four evaluations, were 0.997 and 0.998, respectively, for which the accuracy was up to 97%. Three supervised classifiers, PLS-DA, SVM and k-NN, were chosen to establish two-class classification models (expired batches versus an unexpired batch), used to predict the suspect samples, and compared; the average accuracies were 90.12%, 96.80% and 89.37%, respectively. Different pre-processing techniques were compared; the first derivative was optimal for the library methods and max-min normalization was optimal for the classifiers. The results indicated that both library and classifier methods could detect the expired drugs effectively, and that they should be used complementarily in fast screening. Copyright © 2014 Elsevier B.V. All rights reserved.
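A minimal sketch of the library-verification step, assuming a correlation-based hit quality index (one common definition; the paper uses cosine and correlation measures) and the 0.997 threshold mentioned above; the library and spectra here are placeholders.

```python
import numpy as np

def hit_quality_index(sample, reference):
    """Squared correlation between two spectra -- one common form of the hit quality index (HQI)."""
    s = sample - sample.mean()
    r = reference - reference.mean()
    return float((s @ r) ** 2 / ((s @ s) * (r @ r)))

def verify_against_library(sample, library, threshold=0.997):
    """Accept the sample if its best HQI against the unexpired-batch library exceeds the threshold."""
    best = max(hit_quality_index(sample, ref) for ref in library)
    return best >= threshold, best

# Hypothetical spectra (intensity vectors on a common wavenumber axis)
rng = np.random.default_rng(0)
library = [rng.random(500) for _ in range(3)]
sample = library[0] + 0.01 * rng.standard_normal(500)
print(verify_against_library(sample, library))
```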
Quantitative assessment in thermal image segmentation for artistic objects
NASA Astrophysics Data System (ADS)
Yousefi, Bardia; Sfarra, Stefano; Maldague, Xavier P. V.
2017-07-01
The application of thermal and infrared technology in different areas of research is increasing considerably. These applications include Non-destructive Testing (NDT), medical analysis (Computer Aided Diagnosis/Detection, CAD), and arts and archaeology, among many others. In the arts and archaeology field, infrared technology provides significant contributions in terms of finding defects in possibly impaired regions. This has been done through a wide range of thermographic experiments and infrared methods. The approach proposed here focuses on the application of known factor analysis methods, such as standard Non-Negative Matrix Factorization (NMF) optimized by gradient-descent-based multiplicative rules (SNMF1) and standard NMF optimized by the Non-negative Least Squares (NNLS) active-set algorithm (SNMF2), and eigen-decomposition approaches such as Principal Component Thermography (PCT) and Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT), to obtain the thermal features. On one hand, these methods are usually applied as preprocessing before clustering for the purpose of segmenting possible defects. On the other hand, a wavelet-based data fusion combines the data of each method with PCT to increase the accuracy of the algorithm. The quantitative assessment of these approaches indicates considerable segmentation capability at reasonable computational complexity, showing promising performance and confirming the outlined properties. In particular, a polychromatic wooden statue and a fresco were analyzed using the above-mentioned methods and interesting results were obtained.
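As an illustration of the factor-analysis step, the sketch below implements standard NMF with Lee-Seung multiplicative updates (the SNMF1-style variant named above) on a matrix whose columns would be vectorized thermal frames; the rank, iteration count, and data layout are assumptions, and the paper's exact preprocessing is not reproduced.

```python
import numpy as np

def nmf_multiplicative(X, rank, iters=200, seed=0, eps=1e-9):
    """Standard NMF with Lee-Seung multiplicative updates (Frobenius loss), X ~ W @ H, X >= 0.
    For thermography, X would be the (pixels x frames) matrix of a thermal image sequence."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

# Hypothetical thermal sequence: 64x64 pixels over 40 frames, flattened to (4096, 40)
X = np.abs(np.random.default_rng(2).standard_normal((4096, 40)))
W, H = nmf_multiplicative(X, rank=5)
```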
NASA Astrophysics Data System (ADS)
Lily; Laila, L.; Prasetyo, B. E.
2018-03-01
A selective, reproducible, effective, sensitive, simple and fast High-Performance Liquid Chromatography (HPLC) method was developed, optimized and validated to analyze 25-Desacetyl Rifampicin (25-DR) in human urine from tuberculosis patients. The separation was performed on an HPLC Agilent Technologies system with an Agilent Eclipse XDB-C18 column and a mobile phase of 65:35 v/v methanol : 0.01 M sodium phosphate buffer pH 5.2, at 254 nm and a flow rate of 0.8 ml/min. The mean retention time was 3.016 minutes. The method was linear from 2-10 μg/ml 25-DR with a correlation coefficient of 0.9978. The standard deviation, relative standard deviation and coefficient of variation for 2, 6 and 10 μg/ml 25-DR were 0-0.0829, 03.1752 and 0-0.0317%, respectively. The recovery of 5, 7 and 9 μg/ml 25-DR was 80.8661, 91.3480 and 111.1457%, respectively. Limits of detection (LoD) and quantification (LoQ) were 0.51 and 1.7 μg/ml, respectively. The method fulfilled the validity guidelines of the International Conference on Harmonization (ICH) for bioanalytical methods, including the parameters of specificity, linearity, precision, accuracy, LoD, and LoQ. The developed method is suitable for pharmacokinetic analysis of various concentrations of 25-DR in human urine.
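Since the validation reports LoD and LoQ alongside a calibration line, the following is a minimal sketch of one common ICH-style calculation (LoD = 3.3·σ/S and LoQ = 10·σ/S, with σ taken as the residual standard deviation of the calibration fit and S the slope); the concentrations and responses in the example are invented and do not reproduce the study's data.

```python
import numpy as np

def calibration_stats(conc, response):
    """Least-squares calibration line plus ICH-style LoD/LoQ estimates,
    using the residual standard deviation of the fit as sigma."""
    conc = np.asarray(conc, float)
    response = np.asarray(response, float)
    slope, intercept = np.polyfit(conc, response, 1)
    fitted = slope * conc + intercept
    resid_sd = np.sqrt(np.sum((response - fitted) ** 2) / (len(conc) - 2))
    r = np.corrcoef(conc, response)[0, 1]
    return {"slope": slope, "intercept": intercept, "r": r,
            "LoD": 3.3 * resid_sd / slope, "LoQ": 10 * resid_sd / slope}

# Hypothetical calibration data (concentration in ug/ml, instrument response in a.u.)
print(calibration_stats([2, 4, 6, 8, 10], [0.101, 0.198, 0.305, 0.402, 0.497]))
```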
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Sornette, Didier
2007-07-01
We have recently introduced the “thermal optimal path” (TOP) method to investigate the real-time lead-lag structure between two time series. The TOP method consists in searching for a robust noise-averaged optimal path of the distance matrix along which the two time series have the greatest similarity. Here, we generalize the TOP method by introducing a more general definition of distance which takes into account possible regime shifts between positive and negative correlations. This generalization to track possible changes of correlation signs is able to identify possible transitions from one convention (or consensus) to another. Numerical simulations on synthetic time series verify that the new TOP method performs as expected even in the presence of substantial noise. We then apply it to investigate changes of convention in the dependence structure between the historical volatilities of the USA inflation rate and economic growth rate. Several measures show that the new TOP method significantly outperforms standard cross-correlation methods.
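For intuition about the optimal-path idea, the sketch below builds the distance matrix between two series and extracts a minimum-cost monotone path by dynamic programming; this is a zero-temperature simplification for illustration, whereas the TOP method described above averages over paths thermally (at finite temperature) and uses a generalized signed distance, neither of which is implemented here.

```python
import numpy as np

def minimal_cost_path(x, y):
    """Minimum-cost monotone path through e[i, j] = |x[i] - y[j]|;
    the local lead-lag along the path is j - i."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    e = np.abs(x[:, None] - y[None, :])
    n, m = e.shape
    cost = np.full((n, m), np.inf)
    cost[0, 0] = e[0, 0]
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            prev = min(cost[i - 1, j] if i else np.inf,
                       cost[i, j - 1] if j else np.inf,
                       cost[i - 1, j - 1] if i and j else np.inf)
            cost[i, j] = e[i, j] + prev
    # Backtrack to recover the path
    path = [(n - 1, m - 1)]
    while path[-1] != (0, 0):
        i, j = path[-1]
        candidates = [(a, b) for a, b in [(i - 1, j), (i, j - 1), (i - 1, j - 1)] if a >= 0 and b >= 0]
        path.append(min(candidates, key=lambda ij: cost[ij]))
    return cost[-1, -1], path[::-1]

# Hypothetical pair of series where y lags x by a constant shift
x = np.sin(np.linspace(0, 6, 60))
y = np.sin(np.linspace(0, 6, 60) - 0.5)
total_cost, path = minimal_cost_path(x, y)
```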
Targeted methods for quantitative analysis of protein glycosylation
Goldman, Radoslav; Sanda, Miloslav
2018-01-01
Quantification of proteins by LC-MS/MS-MRM has become a standard method with broad projected clinical applicability. MRM quantification of protein modifications is, however, far less utilized, especially in the case of glycoproteins. This review summarizes current methods for quantitative analysis of protein glycosylation with a focus on MRM methods. We describe advantages of this quantitative approach, analytical parameters that need to be optimized to achieve reliable measurements, and point out the limitations. Differences between major classes of N- and O-glycopeptides are described and class-specific glycopeptide assays are demonstrated. PMID:25522218
Time delayed Ensemble Nudging Method
NASA Astrophysics Data System (ADS)
An, Zhe; Abarbanel, Henry
Optimal nudging methods based on time-delay embedding theory have shown potential for analysis and data assimilation in the previous literature. To extend the application and promote practical implementation, a new nudging assimilation method based on the time-delay embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the observations needed to preserve the quality of numerical prediction, making it a potential alternative in the field of data assimilation for large geophysical models.
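For context, a minimal sketch of plain (instantaneous) nudging toward observations of a single component of the Lorenz-63 system is given below; the gain, time step, and observation setup are placeholders, and the time-delay-embedding extension described in the abstract is not implemented.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudged_run(obs_x, dt=0.01, k=5.0):
    """Standard (instantaneous) nudging toward observations of the x component only."""
    model = np.array([1.0, 1.0, 1.0])
    traj = []
    for xo in obs_x:
        drift = lorenz63(model)
        drift[0] += k * (xo - model[0])     # nudge only the observed component
        model = model + dt * drift          # simple Euler step for the sketch
        traj.append(model.copy())
    return np.array(traj)

# Synthetic "observations": x component of a free reference run plus noise
ref = nudged_run(np.zeros(1000), k=0.0)     # k=0 gives an un-nudged free run
obs = ref[:, 0] + 0.1 * np.random.default_rng(3).standard_normal(len(ref))
assimilated = nudged_run(obs, k=5.0)
```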
Optimal short-range trajectories for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, G.L.; Erzberger, H.
1982-12-01
An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.
Kim, Hyun Gi; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang
2015-01-01
Purpose: To investigate the optimal blending percentage of adaptive statistical iterative reconstruction (ASIR) at a reduced radiation dose while preserving a degree of image quality and texture similar to that of standard-dose computed tomography (CT). Materials and Methods: The CT performance phantom was scanned with standard and dose-reduction protocols including reduced mAs or kVp. Image quality parameters including noise, spatial resolution, and low-contrast resolution, as well as image texture, were quantitatively evaluated after applying various blending percentages of ASIR. The optimal blending percentage of ASIR that preserved image quality and texture compared to standard-dose CT was investigated for each radiation dose reduction protocol. Results: As the percentage of ASIR increased, noise and spatial resolution decreased, whereas low-contrast resolution increased. In the texture analysis, an increasing percentage of ASIR resulted in an increase of angular second moment, inverse difference moment, and correlation and in a decrease of contrast and entropy. The 20% and 40% dose reduction protocols with 20% and 40% ASIR blending, respectively, resulted in an optimal quality of images with preservation of the image texture. Conclusion: Blending 40% ASIR with the 40% reduced tube-current product can maximize radiation dose reduction and preserve adequate image quality and texture. PMID:25510772
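The texture terms listed in the results (angular second moment, contrast, entropy, etc.) are Haralick-style features of a gray-level co-occurrence matrix. Below is a minimal, self-contained sketch of computing a GLCM and a few such features in NumPy; the quantization level and pixel offset are arbitrary choices, and the study's exact texture-analysis software is not reproduced.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix and a few Haralick-style texture features."""
    q = np.floor(img.astype(float) / img.max() * (levels - 1e-9)).astype(int)  # quantize to `levels` bins
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1       # co-occurrence for the (dx, dy) offset
    p = glcm / glcm.sum()
    idx_i, idx_j = np.indices(p.shape)
    return {
        "ASM": float((p ** 2).sum()),                                    # angular second moment
        "contrast": float((p * (idx_i - idx_j) ** 2).sum()),
        "entropy": float(-(p[p > 0] * np.log(p[p > 0])).sum()),
    }

# Hypothetical 12-bit CT-like image patch
img = np.random.default_rng(4).integers(0, 4096, size=(128, 128))
print(glcm_features(img))
```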
Robbins, Rebecca J; Leonczak, Jadwiga; Johnson, J Christopher; Li, Julia; Kwik-Uribe, Catherine; Prior, Ronald L; Gu, Liwei
2009-06-12
The quantitative parameters and method performance for a normal-phase HPLC separation of flavanols and procyanidins in chocolate and cocoa-containing food products were optimized and assessed. Single-laboratory method performance was examined over three months using three separate secondary standards. RSD(r) values were 1.9%, 4.5% and 9.0% for cocoa powder, liquor and chocolate samples containing 74.39, 15.47 and 1.87 mg/g flavanols and procyanidins, respectively. Accuracy was determined by comparison to the NIST Standard Reference Material 2384. Inter-laboratory assessment indicated that variability was quite low for seven different cocoa-containing samples, with an RSD(R) of less than 10% for the range of samples analyzed.
Layout design-based research on optimization and assessment method for shipbuilding workshop
NASA Astrophysics Data System (ADS)
Liu, Yang; Meng, Mei; Liu, Shuang
2013-06-01
This study examines a three-dimensional visualization approach to the layout design of a standard, discrete shipbuilding workshop, with emphasis on improving a genetic algorithm for the layout optimization. Using a steel processing workshop as an example, the principle of minimum logistics cost is applied to obtain an ideal equipment layout and a mathematical model whose objective is to minimize the total distance traveled between machines. An improved control operator is implemented to improve the iterative efficiency of the genetic algorithm and to yield the relevant parameters. The Computer Aided Tri-Dimensional Interface Application (CATIA) software is applied to establish the manufacturing resource base and a parametric model of the steel processing workshop. Based on the results of the optimized planar logistics, a visual parametric model of the steel processing workshop is constructed, and qualitative and quantitative adjustments are then applied to the model. A method for evaluating the layout results is subsequently established using AHP. The optimized discrete production workshop thus provides a practical reference for the optimization and layout of digitalized production workshops.
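The minimum-logistics-cost objective described above is essentially a sum of flow-weighted distances between machine positions. The sketch below evaluates that objective and improves a layout by random pairwise swaps of machines between candidate slots; the swap search merely stands in for the paper's improved genetic algorithm, and the flow matrix and slot coordinates are placeholders.

```python
import numpy as np

def logistics_cost(positions, flow):
    """Total material-handling cost: sum over machine pairs of flow * rectilinear distance."""
    positions = np.asarray(positions, float)
    d = np.abs(positions[:, None, :] - positions[None, :, :]).sum(axis=2)
    return float((flow * d).sum() / 2)

def improve_layout(slots, flow, iters=20000, seed=0):
    """Stand-in optimizer: random pairwise swaps of machines between candidate slots
    (the paper uses an improved genetic algorithm for the same objective)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(slots))             # order[k] = slot assigned to machine k
    best = logistics_cost(slots[order], flow)
    for _ in range(iters):
        i, j = rng.choice(len(slots), size=2, replace=False)
        trial = order.copy()
        trial[[i, j]] = trial[[j, i]]
        cost = logistics_cost(slots[trial], flow)
        if cost < best:
            order, best = trial, cost
    return order, best

# Hypothetical data: 5 machines, candidate slot coordinates (x, y) in metres, symmetric flow matrix
slots = np.array([[0, 0], [10, 0], [20, 0], [0, 8], [10, 8]], float)
rng = np.random.default_rng(5)
flow = rng.integers(0, 20, size=(5, 5))
flow = (flow + flow.T) // 2
np.fill_diagonal(flow, 0)
order, cost = improve_layout(slots, flow)
```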
Cui, Jian; Zhao, Xue-Hong; Wang, Yan; Xiao, Ya-Bing; Jiang, Xue-Hui; Dai, Li
2014-01-01
Flow injection-hydride generation-atomic fluorescence spectrometry is widely used in the health, environmental, geological and metallurgical fields owing to its high sensitivity, wide measurement range and fast analytical speed. However, optimization of this method is difficult because many parameters affect the sensitivity and broadening; generally, the optimal conditions have been sought through a series of experiments. The present paper proposes a mathematical model relating the parameters to the sensitivity and broadening coefficients, built from the law of conservation of mass according to the characteristics of the hydride chemical reaction and the composition of the system. The model was shown to be accurate by comparing theoretical simulations with experimental results for an arsanilic acid standard solution. Finally, this paper presents a relation map between the parameters and the sensitivity/broadening coefficients, and finds that GLS volume, carrier solution flow rate and sample loop volume are the factors that most affect sensitivity and broadening. By optimizing these three factors with this relation map, the relative sensitivity was improved 2.9-fold and the relative broadening was reduced to 0.76 times its original value. This model can provide theoretical guidance for the optimization of the experimental conditions.
Numerical integration of discontinuous functions: moment fitting and smart octree
NASA Astrophysics Data System (ADS)
Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander
2017-11-01
A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements and, therefore, standard quadrature rules do not perform well anymore. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the position and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules, resulting in less integration points and in high accuracy.
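A minimal sketch of the moment-fitting idea, assuming fixed quadrature points on a cut unit square: the weights are chosen by least squares so that the rule reproduces the moments of all monomials up to a given degree over the trimmed domain. The reference moments are obtained here by Monte Carlo for simplicity, and the point-position optimization described in the paper is omitted.

```python
import numpy as np

def moment_fitting_weights(points, domain_indicator, degree=3, n_mc=200000, seed=0):
    """Least-squares moment fitting on the unit square: choose weights so the rule integrates
    all monomials x^a * y^b (a + b <= degree) over the (possibly trimmed) domain."""
    rng = np.random.default_rng(seed)
    mc = rng.random((n_mc, 2))
    inside = domain_indicator(mc[:, 0], mc[:, 1])
    exps = [(a, b) for a in range(degree + 1) for b in range(degree + 1) if a + b <= degree]
    # Reference moments of the trimmed domain, estimated by Monte Carlo
    moments = np.array([np.mean(inside * mc[:, 0] ** a * mc[:, 1] ** b) for a, b in exps])
    # Moment-fitting system: rows are monomials evaluated at the quadrature points
    A = np.array([[x ** a * y ** b for x, y in points] for a, b in exps])
    weights, *_ = np.linalg.lstsq(A, moments, rcond=None)
    return weights

# Hypothetical cut element: the part of the unit square below the line x + y = 1.2
pts = [(x, y) for x in np.linspace(0.1, 0.9, 4) for y in np.linspace(0.1, 0.9, 4)]
w = moment_fitting_weights(pts, lambda x, y: (x + y < 1.2).astype(float))
```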
Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng
2010-01-01
This paper first applies the sequential cluster method to set up a classification standard for the infectious disease incidence state, based on the fact that the incidence course has many uncertain characteristics. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. This method takes the standardized autocorrelation coefficients as weights, based on the special characteristic of infectious disease incidence being a dependent stochastic variable. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. Our method is successfully validated using existing incidence data for infectious diseases in Jiangsu Province. In summation, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology. PMID:23554632
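A common formulation of the weighted Markov chain prediction is sketched below: lag-k transition matrices estimated from the state history are combined with weights proportional to the absolute autocorrelation at each lag. The state sequence, number of states, maximum lag, and add-one smoothing are all illustrative assumptions and may differ from the cited paper's exact procedure.

```python
import numpy as np

def weighted_markov_predict(states, n_states, max_lag=3):
    """Weighted Markov chain prediction of the next state's probability distribution."""
    s = np.asarray(states, int)
    # Lag-k transition matrices with add-one smoothing to avoid empty rows
    trans = []
    for k in range(1, max_lag + 1):
        T = np.ones((n_states, n_states))
        for a, b in zip(s[:-k], s[k:]):
            T[a, b] += 1
        trans.append(T / T.sum(axis=1, keepdims=True))
    # Weights from normalized absolute autocorrelation coefficients of the state series
    x = s - s.mean()
    ac = np.array([abs(np.sum(x[:-k] * x[k:]) / np.sum(x * x)) for k in range(1, max_lag + 1)])
    w = ac / ac.sum()
    # Combine the rows selected by the states observed 1..max_lag steps ago
    probs = sum(w[k - 1] * trans[k - 1][s[-k]] for k in range(1, max_lag + 1))
    return probs / probs.sum()

# Hypothetical incidence-state history (states 0..3 from a sequential-cluster classification)
print(weighted_markov_predict([0, 1, 1, 2, 3, 2, 2, 1, 1, 2, 2, 3], 4))
```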
Hameda, A Ben; Elosta, S; Havel, J
2005-08-19
Huperzine A, a natural product from Huperzia serrata, is an important compound used as a food supplement to treat Alzheimer's disease and has also been proposed as a prospective prophylactic antidote against organophosphate poisoning. In this work, a simple and fast capillary electrophoresis (CE) procedure with UV detection (at 230 nm) for the determination of Huperzine A was developed and optimized. The capillary electrophoresis determination of Huperzine A was optimized using a combination of experimental design (ED) and artificial neural networks (ANN). In the first stage of optimization, the experiments were performed according to the appropriate ED. Data evaluated by the ANN allowed the optimal values of several analytical parameters (peak area, peak height, and analysis time) to be found. The optimal conditions were 50 mM acetate buffer, pH 4.6, separation voltage 10 kV, hydrodynamic injection time 10 s and temperature 25 degrees C. The developed method shows good repeatability (relative standard deviation, R.S.D. = 0.9%) and has been applied to the determination of Huperzine A in various pharmaceutical products and in biological fluids. The limit of detection (LOD) was 0.226 ng/ml in aqueous media and 0.233 ng/ml in serum.
NASA Astrophysics Data System (ADS)
Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang
2017-10-01
The purpose of the present work is to study the buckling problem in plate/shell topology optimization of orthotropic material. A model of buckling topology optimization is established based on the independent, continuous, and mapping method, which takes structural mass as the objective and buckling critical loads as constraints. Firstly, a composite exponential function (CEF) and a power function (PF) are introduced as filter functions to recognize the element mass, the element stiffness matrix, and the element geometric stiffness matrix, and the filter functions of the orthotropic material stiffness are deduced. These filter functions are then introduced into the buckling topology optimization differential equation to analyze the design sensitivity. Furthermore, the buckling constraints are approximately expressed as explicit functions of the design variables based on a first-order Taylor expansion, and the objective function is standardized based on a second-order Taylor expansion. The optimization model is thereby translated into a quadratic program. Finally, the dual sequence quadratic programming (DSQP) algorithm and the globally convergent method of moving asymptotes, each with the two different filter functions (CEF and PF), are applied to solve the optimization model. Three numerical results show that DSQP&CEF has the best performance in terms of structural mass and discreteness.
Gkionis, Konstantinos; Kruse, Holger; Šponer, Jiří
2016-04-12
Modern dispersion-corrected DFT methods have made it possible to perform reliable QM studies on complete nucleic acid (NA) building blocks having hundreds of atoms. Such calculations, although still limited to investigations of potential energy surfaces, enhance the portfolio of computational methods applicable to NAs and offer considerably more accurate intrinsic descriptions of NAs than standard MM. However, in practice such calculations are hampered by the use of implicit solvent environments and truncation of the systems. Conventional QM optimizations are spoiled by spurious intramolecular interactions and severe structural deformations. Here we compare two approaches designed to suppress such artifacts: partially restrained continuum solvent QM and explicit solvent QM/MM optimizations. We report geometry relaxations of a set of diverse double-quartet guanine quadruplex (GQ) DNA stems. Both methods provide neat structures without major artifacts. However, each one also has distinct weaknesses. In restrained optimizations, all errors in the target geometries (i.e., low-resolution X-ray and NMR structures) are transferred to the optimized geometries. In QM/MM, the initial solvent configuration causes some heterogeneity in the geometries. Nevertheless, both approaches represent a decisive step forward compared to conventional optimizations. We refine earlier computations that revealed sizable differences in the relative energies of GQ stems computed with AMBER MM and QM. We also explore the dependence of the QM/MM results on the applied computational protocol.
Lightness modification of color image for protanopia and deuteranopia
NASA Astrophysics Data System (ADS)
Tanaka, Go; Suetake, Noriaki; Uchino, Eiji
2010-01-01
In multimedia content, colors play important roles in conveying visual information. However, color information cannot always be perceived uniformly by all people. People with a color vision deficiency, such as dichromacy, cannot recognize and distinguish certain color combinations. In this paper, an effective lightness modification method, which enables barrier-free color vision for people with dichromacy, especially protanopia or deuteranopia, while preserving the color information in the original image for people with standard color vision, is proposed. In the proposed method, an optimization problem concerning lightness components is first defined by considering color differences in an input image. Then a perceptible and comprehensible color image for both protanopes and viewers with no color vision deficiency or both deuteranopes and viewers with no color vision deficiency is obtained by solving the optimization problem. Through experiments, the effectiveness of the proposed method is illustrated.
Optimization of droplets for UV-NIL using coarse-grain simulation of resist flow
NASA Astrophysics Data System (ADS)
Sirotkin, Vadim; Svintsov, Alexander; Zaitsev, Sergey
2009-03-01
A mathematical model and numerical method are described which make it possible to simulate the ultraviolet ("step and flash") nanoimprint lithography (UV-NIL) process adequately, even on standard personal computers. The model is derived from the 3D Navier-Stokes equations with the understanding that the resist motion is largely directed along the substrate surface and characterized by ultra-low values of the Reynolds number. For the numerical approximation of the model, a special finite difference (coarse-grain) method is applied. A coarse-grain modeling tool for detailed analysis of resist spreading in UV-NIL at the structure-scale level is tested. The obtained results demonstrate the high ability of the tool to calculate the optimal dispensing for a given stamp design and process parameters. This dispensing provides uniformly filled areas and a homogeneous residual layer thickness in UV-NIL.
[Optimization of a HPLC determination method for Evodia rutaecarpa].
Huang, Zhifang; Yi, Jinhai; Wu, Yan; Liu, Yunhua; Chen, Yan; Liu, Yuhong
2011-02-01
An HPLC method for the determination of limonin, evodiamine and rutaecarpine in Evodia rutaecarpa was optimized. The mobile phase was [acetonitrile-tetrahydrofuran (25:15)]-0.02% H3PO4 (35:65), the detection wavelength was 220 nm and the flow rate was 1.0 mL x min(-1). Limonin, evodiamine and rutaecarpine were all well separated from other substances and their UV spectra were essentially the same as those of the standards. The linear ranges of limonin, evodiamine and rutaecarpine were 0.1968-3.936, 0.1536-3.072 and 0.0974-1.948 microg. The average recoveries were 97.8%, 100.7% and 98.4%, and the RSDs were 1.7%, 1.3% and 1.1% (n = 6). The method described in this article is accurate and reproducible and can be used to enhance the quality control of E. rutaecarpa.
Numerical solution of a conspicuous consumption model with constant control delay
Huschto, Tony; Feichtinger, Gustav; Hartl, Richard F.; Kort, Peter M.; Sager, Sebastian; Seidl, Andrea
2011-01-01
We derive optimal pricing strategies for conspicuous consumption products in periods of recession. To that end, we formulate and investigate a two-stage economic optimal control problem that takes uncertainty of the recession period length and delay effects of the pricing strategy into account. This non-standard optimal control problem is difficult to solve analytically, and solutions depend on the variable model parameters. Therefore, we use a numerical result-driven approach. We propose a structure-exploiting direct method for optimal control to solve this challenging optimization problem. In particular, we discretize the uncertainties in the model formulation by using scenario trees and target the control delays by introduction of slack control functions. Numerical results illustrate the validity of our approach and show the impact of uncertainties and delay effects on optimal economic strategies. During the recession, delayed optimal prices are higher than the non-delayed ones. In the normal economic period, however, this effect is reversed and optimal prices with a delayed impact are smaller compared to the non-delayed case. PMID:22267871
NASA Astrophysics Data System (ADS)
Kwon, O.; Kim, W.; Kim, J.
2017-12-01
Recently, the construction of subsea tunnels has increased globally. For the safe construction of a subsea tunnel, identifying geological structures, including faults, at the design and construction stages is critically important. Unlike tunnels on land, however, it is very difficult to obtain data on geological structure because of the limits of geological surveying at sea. This study addresses these difficulties by developing a technology to identify the geological structure of the seabed automatically using echo sounding data. When investigating a potential site for a deep subsea tunnel, boreholes and geophysical investigations face technical and economic limits; echo sounding data, by contrast, are easily obtainable and their reliability is higher than that of the above approaches. This study aims to develop an algorithm that identifies large-scale geological structures of the seabed using a geostatistical approach, building on the structural-geology principle that topographic features indicate geological structure. The basic concept of the algorithm is as follows: (1) convert the seabed topography to grid data using echo sounding data, (2) apply a moving window of optimal size to the grid data, (3) estimate the spatial statistics of the grid data in the window area, (4) set a percentile standard for the spatial statistics, (5) display the values satisfying the standard on the map, and (6) visualize the geological structure on the map. The important elements in this study include the optimal size of the moving window, the kinds of optimal spatial statistics, and the determination of the optimal percentile standard. To determine these optimal elements, numerous simulations were implemented. A user program based on R was eventually developed using the optimal analysis algorithm. The program was designed to identify the variations of various spatial statistics, allowing easy analysis of geological structure depending on the variation of spatial statistics by letting the user designate the type of spatial statistic and the percentile standard. This research was supported by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure and Transport of the Korean government. (Project Number: 13 Construction Research T01)
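A minimal sketch of steps (2)-(5), assuming the bathymetry has already been gridded: slide a square window over the grid, record a spatial statistic (here the local standard deviation), and flag cells exceeding a chosen percentile. The window size, statistic, and percentile are placeholders for the optimal values the study determines by simulation, and the sketch is in Python rather than the R program the authors describe.

```python
import numpy as np

def window_statistic(grid, window=5, stat=np.nanstd):
    """Slide a square window over a bathymetry grid and record a spatial statistic per cell."""
    half = window // 2
    padded = np.pad(grid.astype(float), half, mode="edge")
    out = np.empty_like(grid, dtype=float)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            out[i, j] = stat(padded[i:i + window, j:j + window])
    return out

def flag_structures(grid, window=5, percentile=95):
    """Mark cells whose local statistic exceeds the chosen percentile threshold;
    clusters of flagged cells are candidate lineaments/faults in the seabed topography."""
    stats = window_statistic(grid, window)
    return stats >= np.percentile(stats, percentile)
```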
ERIC Educational Resources Information Center
Kane, Michael T.; Mroch, Andrew A.
2010-01-01
In evaluating the relationship between two measures across different groups (i.e., in evaluating "differential validity") it is necessary to examine differences in correlation coefficients and in regression lines. Ordinary least squares (OLS) regression is the standard method for fitting lines to data, but its criterion for optimal fit…
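As a small illustration of the comparison the abstract refers to, the sketch below fits an ordinary least squares line within each group so that slopes and intercepts (not just correlations) can be compared across groups; the grouping variable and data are placeholders.

```python
import numpy as np

def group_regressions(x, y, group):
    """Fit an OLS line within each group and report slope, intercept, and correlation,
    so regression lines can be compared across groups."""
    x, y, group = map(np.asarray, (x, y, group))
    lines = {}
    for g in np.unique(group):
        mask = group == g
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        r = np.corrcoef(x[mask], y[mask])[0, 1]
        lines[g] = {"slope": slope, "intercept": intercept, "r": r}
    return lines

# Hypothetical data: predictor score (x) versus criterion (y) for two groups
x = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
y = [2.1, 2.9, 4.2, 5.1, 5.8, 1.0, 2.2, 2.8, 4.1, 4.9]
g = ["A"] * 5 + ["B"] * 5
print(group_regressions(x, y, g))
```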
Order, topology and preference
NASA Technical Reports Server (NTRS)
Sertel, M. R.
1971-01-01
Some standard order-related and topological notions, facts, and methods are brought to bear on central topics in the theory of preference and the theory of optimization. Consequences of connectivity are considered, especially from the viewpoint of normally preordered spaces. Examples are given showing how the theory of preference, or utility theory, can be applied to social analysis.
Aad, G.; Abbott, B.; Abdinov, O.; ...
2016-11-28
A test of CP invariance in Higgs boson production via vector-boson fusion using the method of the Optimal Observable is presented. The analysis exploits the decay mode of the Higgs boson into a pair of τ leptons and is based on 20.3 fb⁻¹ of proton-proton collision data at √s = 8 TeV collected by the ATLAS experiment at the LHC. Contributions from CP-violating interactions between the Higgs boson and electroweak gauge bosons are described in an effective field theory framework, in which the strength of CP violation is governed by a single parameter d̃. The mean values and distributions of CP-odd observables agree with the expectation in the Standard Model and show no sign of CP violation. The CP-mixing parameter d̃ is constrained to the interval (−0.11, 0.05) at 68% confidence level, consistent with the Standard Model expectation of d̃ = 0.
Tuzen, Mustafa; Pekiner, Ozlem Zeynep
2015-12-01
A rapid and environmentally friendly ultrasound-assisted ionic liquid dispersive liquid-liquid microextraction (USA-IL-DLLME) method was developed for the speciation of inorganic selenium in beverages and the determination of total selenium in food samples by graphite furnace atomic absorption spectrometry. Several analytical parameters, including pH, amount of complexing agent, extraction time, volume of ionic liquid and sample volume, were optimized, and matrix effects were investigated. The enhancement factor (EF) and limit of detection (LOD) for Se(IV) were found to be 150 and 12 ng L(-1), respectively, and the relative standard deviation (RSD) was 4.2%. The accuracy of the method was confirmed by analysis of the LGC 6010 Hard drinking water and NIST SRM 1573a Tomato leaves standard reference materials. The optimized method was applied to ice tea, soda and mineral water for the speciation of Se(IV) and Se(VI), and to food samples including beer, cow's milk, red wine, mixed fruit juice, date, apple, orange, grapefruit, egg and honey for the determination of total selenium. Copyright © 2015 Elsevier Ltd. All rights reserved.
Aad, G; Abbott, B; Abdinov, O; Abdallah, J; Abeloos, B; Aben, R; Abolins, M; Aben, R; Abolins, M; AbouZeid, O S; Abraham, N L; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, B S; Adamczyk, L; Adams, D L; Adelman, J; Adomeit, S; Adye, T; Affolder, A A; Agatonovic-Jovin, T; Agricola, J; Aguilar-Saavedra, J A; Ahlen, S P; Ahmadov, F; Aielli, G; Akerstedt, H; Åkesson, T P A; Akimov, A V; Alberghi, G L; Albert, J; Albrand, S; Verzini, M J Alconada; Aleksa, M; Aleksandrov, I N; Alexa, C; Alexander, G; Alexopoulos, T; Alhroob, M; Alimonti, G; Alison, J; Alkire, S P; Allbrooke, B M M; Allen, B W; Allport, P P; Aloisio, A; Alonso, A; Alonso, F; Alpigiani, C; Gonzalez, B Alvarez; Piqueras, D Álvarez; Alviggi, M G; Amadio, B T; Amako, K; Coutinho, Y Amaral; Amelung, C; Amidei, D; Santos, S P Amor Dos; Amorim, A; Amoroso, S; Amram, N; Amundsen, G; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anders, J K; Anderson, K J; Andreazza, A; Andrei, V; Angelidakis, S; Angelozzi, I; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A V; Anjos, N; Annovi, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Bella, L Aperio; Arabidze, G; Arai, Y; Araque, J P; Arce, A T H; Arduh, F A; Arguin, J-F; Argyropoulos, S; Arik, M; Armbruster, A J; Armitage, L J; Arnaez, O; Arnold, H; Arratia, M; Arslan, O; Artamonov, A; Artoni, G; Artz, S; Asai, S; Asbah, N; Ashkenazi, A; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Atkinson, M; Atlay, N B; Augsten, K; Avolio, G; Axen, B; Ayoub, M K; Azuelos, G; Baak, M A; Baas, A E; Baca, M J; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Bagiacchi, P; Bagnaia, P; Bai, Y; Baines, J T; Baker, O K; Baldin, E M; Balek, P; Balestri, T; Balli, F; Balunas, W K; Banas, E; Banerjee, Sw; Bannoura, A A E; Barak, L; Barberio, E L; Barberis, D; Barbero, M; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnes, S L; Barnett, B M; Barnett, R M; Barnovska, Z; Baroncelli, A; Barone, G; Barr, A J; Navarro, L Barranco; Barreiro, F; da Costa, J Barreiro Guimarães; Bartoldus, R; Barton, A E; Bartos, P; Basalaev, A; Bassalat, A; Basye, A; Bates, R L; Batista, S J; Batley, J R; Battaglia, M; Bauce, M; Bauer, F; Bawa, H S; Beacham, J B; Beattie, M D; Beau, T; Beauchemin, P H; Bechtle, P; Beck, H P; Becker, K; Becker, M; Beckingham, M; Becot, C; Beddall, A J; Beddall, A; Bednyakov, V A; Bedognetti, M; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behr, J K; Belanger-Champagne, C; Bell, A S; Bell, W H; Bella, G; Bellagamba, L; Bellerive, A; Bellomo, M; Belotskiy, K; Beltramello, O; Belyaev, N L; Benary, O; Benchekroun, D; Bender, M; Bendtz, K; Benekos, N; Benhammou, Y; Noccioli, E Benhar; Benitez, J; Garcia, J A Benitez; Benjamin, D P; Bensinger, J R; Bentvelsen, S; Beresford, L; Beretta, M; Berge, D; Kuutmann, E Bergeaas; Berger, N; Berghaus, F; Beringer, J; Berlendis, S; Bernard, N R; Bernius, C; Bernlochner, F U; Berry, T; Berta, P; Bertella, C; Bertoli, G; Bertolucci, F; Bertram, I A; Bertsche, C; Bertsche, D; Besjes, G J; Bylund, O Bessidskaia; Bessner, M; Besson, N; Betancourt, C; Bethke, S; Bevan, A J; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Biedermann, D; Bielski, R; Biesuz, N V; Biglietti, M; De Mendizabal, J Bilbao; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Biondi, S; Bjergaard, D M; Black, C W; Black, J E; Black, K M; Blackburn, D; Blair, R E; Blanchard, J-B; Blanco, J E; Blazek, T; Bloch, I; Blocker, C; Blum, W; Blumenschein, U; Blunier, S; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Bock, C; 
Boehler, M; Boerner, D; Bogaerts, J A; Bogavac, D; Bogdanchikov, A G; Bohm, C; Boisvert, V; Bold, T; Boldea, V; Boldyrev, A S; Bomben, M; Bona, M; Boonekamp, M; Borisov, A; Borissov, G; Bortfeldt, J; Bortoletto, D; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Sola, J D Bossio; Boudreau, J; Bouffard, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Boutle, S K; Boveia, A; Boyd, J; Boyko, I R; Bracinik, J; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Madden, W D Breaden; Brendlinger, K; Brennan, A J; Brenner, L; Brenner, R; Bressler, S; Bristow, T M; Britton, D; Britzger, D; Brochu, F M; Brock, I; Brock, R; Brooijmans, G; Brooks, T; Brooks, W K; Brosamer, J; Brost, E; Broughton, J H; de Renstrom, P A Bruckman; Bruncko, D; Bruneliere, R; Bruni, A; Bruni, G; Brunt, B H; Bruschi, M; Bruscino, N; Bryant, P; Bryngemark, L; Buanes, T; Buat, Q; Buchholz, P; Buckley, A G; Budagov, I A; Buehrer, F; Bugge, M K; Bulekov, O; Bullock, D; Burckhart, H; Burdin, S; Burgard, C D; Burghgrave, B; Burka, K; Burke, S; Burmeister, I; Busato, E; Büscher, D; Büscher, V; Bussey, P; Butler, J M; Butt, A I; Buttar, C M; Butterworth, J M; Butti, P; Buttinger, W; Buzatu, A; Buzykaev, A R; Urbán, S Cabrera; Caforio, D; Cairo, V M; Cakir, O; Calace, N; Calafiura, P; Calandri, A; Calderini, G; Calfayan, P; Caloba, L P; Calvet, D; Calvet, S; Calvet, T P; Toro, R Camacho; Camarda, S; Camarri, P; Cameron, D; Armadans, R Caminal; Camincher, C; Campana, S; Campanelli, M; Campoverde, A; Canale, V; Canepa, A; Bret, M Cano; Cantero, J; Cantrill, R; Cao, T; Garrido, M D M Capeans; Caprini, I; Caprini, M; Capua, M; Caputo, R; Carbone, R M; Cardarelli, R; Cardillo, F; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Casolino, M; Casper, D W; Castaneda-Miranda, E; Castelli, A; Gimenez, V Castillo; Castro, N F; Catinaccio, A; Catmore, J R; Cattai, A; Caudron, J; Cavaliere, V; Cavallaro, E; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Alberich, L Cerda; Cerio, B C; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cerv, M; Cervelli, A; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chan, S K; Chan, Y L; Chang, P; Chapman, J D; Charlton, D G; Chatterjee, A; Chau, C C; Barajas, C A Chavez; Che, S; Cheatham, S; Chegwidden, A; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, K; Chen, S; Chen, S; Chen, X; Chen, Y; Cheng, H C; Cheng, H J; Cheng, Y; Cheplakov, A; Cheremushkina, E; Moursli, R Cherkaoui El; Chernyatin, V; Cheu, E; Chevalier, L; Chiarella, V; Chiarelli, G; Chiodini, G; Chisholm, A S; Chitan, A; Chizhov, M V; Choi, K; Chomont, A R; Chouridou, S; Chow, B K B; Christodoulou, V; Chromek-Burckhart, D; Chudoba, J; Chuinard, A J; Chwastowski, J J; Chytka, L; Ciapetti, G; Ciftci, A K; Cinca, D; Cindro, V; Cioara, I A; Ciocio, A; Cirotto, F; Citron, Z H; Ciubancan, M; Clark, A; Clark, B L; Clark, P J; Clarke, R N; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Colasurdo, L; Cole, B; Cole, S; Colijn, A P; Collot, J; Colombo, T; Compostella, G; Muiño, P Conde; Coniavitis, E; Connell, S H; Connelly, I A; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cornelissen, T; Corradi, M; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Cottin, G; Cowan, G; Cox, B E; Cranmer, K; Crawley, S J; Cree, G; Crépé-Renaudin, S; Crescioli, F; Cribbs, W A; 
Ortuzar, M Crispin; Cristinziani, M; Croft, V; Crosetti, G; Donszelmann, T Cuhadar; Cummings, J; Curatolo, M; Cúth, J; Cuthbert, C; Czirr, H; Czodrowski, P; D'Auria, S; D'Onofrio, M; De Sousa, M J Da Cunha Sargedas; Via, C Da; Dabrowski, W; Dai, T; Dale, O; Dallaire, F; Dallapiccola, C; Dam, M; Dandoy, J R; Dang, N P; Daniells, A C; Dann, N S; Danninger, M; Hoffmann, M Dano; Dao, V; Darbo, G; Darmora, S; Dassoulas, J; Dattagupta, A; Davey, W; David, C; Davidek, T; Davies, M; Davison, P; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Benedetti, A; De Castro, S; De Cecco, S; De Groot, N; de Jong, P; De la Torre, H; De Lorenzi, F; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Regie, J B De Vivie; Dearnaley, W J; Debbe, R; Debenedetti, C; Dedovich, D V; Deigaard, I; Del Peso, J; Del Prete, T; Delgove, D; Deliot, F; Delitzsch, C M; Deliyergiyev, M; Dell'Acqua, A; Dell'Asta, L; Dell'Orso, M; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; DeMarco, D A; Demers, S; Demichev, M; Demilly, A; Denisov, S P; Denysiuk, D; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deterre, C; Dette, K; Deviveiros, P O; Dewhurst, A; Dhaliwal, S; Di Ciaccio, A; Di Ciaccio, L; Di Clemente, W K; Di Domenico, A; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Di Valentino, D; Diaconu, C; Diamond, M; Dias, F A; Diaz, M A; Diehl, E B; Dietrich, J; Diglio, S; Dimitrievska, A; Dingfelder, J; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; Djuvsland, J I; do Vale, M A B; Dobos, D; Dobre, M; Doglioni, C; Dohmae, T; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donati, S; Dondero, P; Donini, J; Dopke, J; Doria, A; Dova, M T; Doyle, A T; Drechsler, E; Dris, M; Du, Y; Duarte-Campderros, J; Duchovni, E; Duckeck, G; Ducu, O A; Duda, D; Dudarev, A; Duflot, L; Duguid, L; Dührssen, M; Dunford, M; Yildiz, H Duran; Düren, M; Durglishvili, A; Duschinger, D; Dutta, B; Dyndal, M; Eckardt, C; Ecker, K M; Edgar, R C; Edson, W; Edwards, N C; Eifert, T; Eigen, G; Einsweiler, K; Ekelof, T; Kacimi, M El; Ellajosyula, V; Ellert, M; Elles, S; Ellinghaus, F; Elliot, A A; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Endner, O C; Endo, M; Ennis, J S; Erdmann, J; Ereditato, A; Ernis, G; Ernst, J; Ernst, M; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Esposito, B; Etienvre, A I; Etzion, E; Evans, H; Ezhilov, A; Fabbri, F; Fabbri, L; Facini, G; Fakhrutdinov, R M; Falciano, S; Falla, R J; Faltova, J; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farina, C; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Giannelli, M Faucci; Favareto, A; Fawcett, W J; Fayard, L; Fedin, O L; Fedorko, W; Feigl, S; Feligioni, L; Feng, C; Feng, E J; Feng, H; Fenyuk, A B; Feremenga, L; Martinez, P Fernandez; Perez, S Fernandez; Ferrando, J; Ferrari, A; Ferrari, P; Ferrari, R; de Lima, D E Ferreira; Ferrer, A; Ferrere, D; Ferretti, C; Parodi, A Ferretto; Fiedler, F; Filipčič, A; Filipuzzi, M; Filthaut, F; Fincke-Keeler, M; Finelli, K D; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, A; Fischer, C; Fischer, J; Fisher, W C; Flaschel, N; Fleck, I; Fleischmann, P; Fletcher, G T; Fletcher, G; Fletcher, R R M; Flick, T; Floderus, A; Castillo, L R Flores; Flowerdew, M J; Forcolin, G T; Formica, A; Forti, A; Foster, A G; Fournier, D; Fox, H; Fracchia, S; Francavilla, P; Franchini, M; Francis, D; Franconi, L; Franklin, M; Frate, M; Fraternali, M; Freeborn, 
D; Fressard-Batraneanu, S M; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Torregrosa, E Fullana; Fusayasu, T; Fuster, J; Gabaldon, C; Gabizon, O; Gabrielli, A; Gabrielli, A; Gach, G P; Gadatsch, S; Gadomski, S; Gagliardi, G; Gagnon, L G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallop, B J; Gallus, P; Galster, G; Gan, K K; Gao, J; Gao, Y; Gao, Y S; Walls, F M Garay; García, C; Navarro, J E García; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Bravo, A Gascon; Gatti, C; Gaudiello, A; Gaudio, G; Gaur, B; Gauthier, L; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Gecse, Z; Gee, C N P; Geich-Gimbel, Ch; Geisler, M P; Gemme, C; Genest, M H; Geng, C; Gentile, S; George, S; Gerbaudo, D; Gershon, A; Ghasemi, S; Ghazlane, H; Ghneimat, M; Giacobbe, B; Giagu, S; Giannetti, P; Gibbard, B; Gibson, S M; Gignac, M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gilles, G; Gingrich, D M; Giokaris, N; Giordani, M P; Giorgi, F M; Giorgi, F M; Giraud, P F; Giromini, P; Giugni, D; Giuli, F; Giuliani, C; Giulini, M; Gjelsten, B K; Gkaitatzis, S; Gkialas, I; Gkougkousis, E L; Gladilin, L K; Glasman, C; Glatzer, J; Glaysher, P C F; Glazov, A; Goblirsch-Kolb, M; Godlewski, J; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gonçalo, R; Costa, J Goncalves Pinto Firmino Da; Gonella, L; Gongadze, A; de la Hoz, S González; Parra, G Gonzalez; Gonzalez-Sevilla, S; Goossens, L; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Goudet, C R; Goujdami, D; Goussiou, A G; Govender, N; Gozani, E; Graber, L; Grabowska-Bold, I; Gradin, P O J; Grafström, P; Gramling, J; Gramstad, E; Grancagnolo, S; Gratchev, V; Gray, H M; Graziani, E; Greenwood, Z D; Grefe, C; Gregersen, K; Gregor, I M; Grenier, P; Grevtsov, K; Griffiths, J; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grivaz, J-F; Groh, S; Grohs, J P; Gross, E; Grosse-Knetter, J; Grossi, G C; Grout, Z J; Guan, L; Guan, W; Guenther, J; Guescini, F; Guest, D; Gueta, O; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gumpert, C; Guo, J; Guo, Y; Gupta, S; Gustavino, G; Gutierrez, P; Ortiz, N G Gutierrez; Gutschow, C; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haber, C; Hadavand, H K; Haddad, N; Hadef, A; Haefner, P; Hageböck, S; Hajduk, Z; Hakobyan, H; Haleem, M; Haley, J; Hall, D; Halladjian, G; Hallewell, G D; Hamacher, K; Hamal, P; Hamano, K; Hamilton, A; Hamity, G N; Hamnett, P G; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Haney, B; Hanke, P; Hanna, R; Hansen, J B; Hansen, J D; Hansen, M C; Hansen, P H; Hara, K; Hard, A S; Harenberg, T; Hariri, F; Harkusha, S; Harrington, R D; Harrison, P F; Hartjes, F; Hasegawa, M; Hasegawa, Y; Hasib, A; Hassani, S; Haug, S; Hauser, R; Hauswald, L; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayden, D; Hays, C P; Hays, J M; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heim, T; Heinemann, B; Heinrich, J J; Heinrich, L; Heinz, C; Hejbal, J; Helary, L; Hellman, S; Helsens, C; Henderson, J; Henderson, R C W; Heng, Y; Henkelmann, S; Correia, A M Henriques; Henrot-Versille, S; Herbert, G H; Jiménez, Y Hernández; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hetherly, J W; Hickling, R; Higón-Rodriguez, E; Hill, E; Hill, J C; Hiller, K H; Hillier, S J; Hinchliffe, I; Hines, E; Hinman, R R; Hirose, M; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoenig, F; Hohlfeld, M; Hohn, D; Holmes, T R; Homann, M; Hong, T M; Hooberman, B H; Hopkins, W 
H; Horii, Y; Horton, A J; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hrynevich, A; Hsu, C; Hsu, P J; Hsu, S-C; Hu, D; Hu, Q; Huang, Y; Hubacek, Z; Hubaut, F; Huegging, F; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibragimov, I; Iconomidou-Fayard, L; Ideal, E; Idrissi, Z; Iengo, P; Igonkina, O; Iizawa, T; Ikegami, Y; Ikeno, M; Ilchenko, Y; Iliadis, D; Ilic, N; Ince, T; Introzzi, G; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Quiles, A Irles; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Ito, F; Ponce, J M Iturbe; Iuppa, R; Ivarsson, J; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jabbar, S; Jackson, B; Jackson, M; Jackson, P; Jain, V; Jakobi, K B; Jakobs, K; Jakobsen, S; Jakoubek, T; Jamin, D O; Jana, D K; Jansen, E; Jansky, R; Janssen, J; Janus, M; Jarlskog, G; Javadov, N; Javůrek, T; Jeanneau, F; Jeanty, L; Jejelava, J; Jeng, G-Y; Jennens, D; Jenni, P; Jentzsch, J; Jeske, C; Jézéquel, S; Ji, H; Jia, J; Jiang, H; Jiang, Y; Jiggins, S; Pena, J Jimenez; Jin, S; Jinaru, A; Jinnouchi, O; Johansson, P; Johns, K A; Johnson, W J; Jon-And, K; Jones, G; Jones, R W L; Jones, S; Jones, T J; Jongmanns, J; Jorge, P M; Jovicevic, J; Ju, X; Rozas, A Juste; Köhler, M K; Kaczmarska, A; Kado, M; Kagan, H; Kagan, M; Kahn, S J; Kajomovitz, E; Kalderon, C W; Kaluza, A; Kama, S; Kamenshchikov, A; Kanaya, N; Kaneti, S; Kantserov, V A; Kanzaki, J; Kaplan, B; Kaplan, L S; Kapliy, A; Kar, D; Karakostas, K; Karamaoun, A; Karastathis, N; Kareem, M J; Karentzos, E; Karnevskiy, M; Karpov, S N; Karpova, Z M; Karthik, K; Kartvelishvili, V; Karyukhin, A N; Kasahara, K; Kashif, L; Kass, R D; Kastanas, A; Kataoka, Y; Kato, C; Katre, A; Katzy, J; Kawade, K; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Keeler, R; Kehoe, R; Keller, J S; Kempster, J J; Keoshkerian, H; Kepka, O; Kerševan, B P; Kersten, S; Keyes, R A; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharlamov, A G; Khoo, T J; Khovanskiy, V; Khramov, E; Khubua, J; Kido, S; Kim, H Y; Kim, S H; Kim, Y K; Kimura, N; Kind, O M; King, B T; King, M; King, S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kiss, F; Kiuchi, K; Kivernyk, O; Kladiva, E; Klein, M H; Klein, M; Klein, U; Kleinknecht, K; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klioutchnikova, T; Kluge, E-E; Kluit, P; Kluth, S; Knapik, J; Kneringer, E; Knoops, E B F G; Knue, A; Kobayashi, A; Kobayashi, D; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koffas, T; Koffeman, E; Kogan, L A; Kohriki, T; Koi, T; Kolanoski, H; Kolb, M; Koletsou, I; Komar, A A; Komori, Y; Kondo, T; Kondrashova, N; Köneke, K; König, A C; Kono, T; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A A; Korolkov, I; Korolkova, E V; Kortner, O; Kortner, S; Kosek, T; Kostyukhin, V V; Kotov, V M; Kotwal, A; Kourkoumeli-Charalampidi, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewska, A B; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kramarenko, V A; Kramberger, G; Krasnopevtsev, D; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kretz, M; Kretzschmar, J; Kreutzfeldt, K; Krieger, P; Krizka, K; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Krumnack, N; Kruse, A; Kruse, M C; Kruskal, M; Kubota, T; Kucuk, H; Kuday, S; Kuechler, J T; Kuehn, S; Kugel, A; Kuger, F; Kuhl, A; Kuhl, T; Kukhtin, V; 
Kukla, R; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunigo, T; Kupco, A; Kurashige, H; Kurochkin, Y A; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwan, T; Kyriazopoulos, D; Rosa, A La; Navarro, J L La Rosa; Rotonda, L La; Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Lammers, S; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, J C; Lankford, A J; Lanni, F; Lantzsch, K; Lanza, A; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Manghi, F Lasagni; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Lazovich, T; Lazzaroni, M; Dortz, O Le; Guirriec, E Le; Menedeu, E Le; Quilleuc, E P Le; LeBlanc, M; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, S C; Lee, L; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Miotto, G Lehmann; Lei, X; Leight, W A; Leisos, A; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzi, B; Leone, R; Leone, S; Leonidopoulos, C; Leontsinis, S; Lerner, G; Leroy, C; Lesage, A A J; Lester, C G; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Leyko, A M; Leyton, M; Li, B; Li, H; Li, H L; Li, L; Li, L; Li, Q; Li, S; Li, X; Li, Y; Liang, Z; Liao, H; Liberti, B; Liblong, A; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Lin, S C; Lin, T H; Lindquist, B E; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, H; Liu, H; Liu, J; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, M; Liu, Y L; Liu, Y; Livan, M; Lleres, A; Merino, J Llorente; Lloyd, S L; Sterzo, F Lo; Lobodzinska, E; Loch, P; Lockman, W S; Loebinger, F K; Loevschall-Jensen, A E; Loew, K M; Loginov, A; Lohse, T; Lohwasser, K; Lokajicek, M; Long, B A; Long, J D; Long, R E; Longo, L; Looper, K A; Lopes, L; Mateos, D Lopez; Paredes, B Lopez; Paz, I Lopez; Solis, A Lopez; Lorenz, J; Martinez, N Lorenzo; Losada, M; Lösel, P J; Lou, X; Lounis, A; Love, J; Love, P A; Lu, H; Lu, N; Lubatti, H J; Luci, C; Lucotte, A; Luedtke, C; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Lynn, D; Lysak, R; Lytken, E; Lyubushkin, V; Ma, H; Ma, L L; Ma, Y; Maccarrone, G; Macchiolo, A; Macdonald, C M; Maček, B; Miguens, J Machado; Madaffari, D; Madar, R; Maddocks, H J; Mader, W F; Madsen, A; Maeda, J; Maeland, S; Maeno, T; Maevskiy, A; Magradze, E; Mahlstedt, J; Maiani, C; Maidantchik, C; Maier, A A; Maier, T; Maio, A; Majewski, S; Makida, Y; Makovec, N; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V M; Malyukov, S; Mamuzic, J; Mancini, G; Mandelli, B; Mandelli, L; Mandić, I; Maneira, J; Andrade Filho, L Manhaes de; Ramos, J Manjarres; Mann, A; Mansoulie, B; Mantifel, R; Mantoani, M; Manzoni, S; Mapelli, L; Marceca, G; March, L; Marchiori, G; Marcisovsky, M; Marjanovic, M; Marley, D E; Marroquim, F; Marsden, S P; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, T A; Martin, V J; Latour, B Martin Dit; Martinez, M; Martin-Haugh, S; Martoiu, V S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massa, L; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Mättig, P; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazza, S M; Fadden, N C Mc; Goldrick, G Mc; Kee, S P Mc; McCarn, A; McCarthy, R L; McCarthy, T G; McClymont, L I; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; McMahon, S J; McPherson, R A; Medinnis, M; Meehan, S; Mehlhase, S; Mehta, A; Meier, 
K; Meineck, C; Meirose, B; Garcia, B R Mellado; Meloni, F; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mergelmeyer, S; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Theenhausen, H Meyer Zu; Middleton, R P; Miglioranzi, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Milesi, M; Milic, A; Miller, D W; Mills, C; Milov, A; Milstead, D A; Minaenko, A A; Minami, Y; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mistry, K P; Mitani, T; Mitrevski, J; Mitsou, V A; Miucci, A; Miyagawa, P S; Mjörnmark, J U; Moa, T; Mochizuki, K; Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Monden, R; Mondragon, M C; Mönig, K; Monk, J; Monnier, E; Montalbano, A; Berlingen, J Montejo; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Llácer, M Moreno; Morettini, P; Mori, D; Mori, T; Morii, M; Morinaga, M; Morisbak, V; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Mortensen, S S; Morvaj, L; Mosidze, M; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, R S P; Mueller, T; Muenstermann, D; Mullen, P; Mullier, G A; Sanchez, F J Munoz; Quijada, J A Murillo; Murray, W J; Murrone, A; Musheghyan, H; Muskinja, M; Myagkov, A G; Myska, M; Nachman, B P; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagano, K; Nagasaka, Y; Nagata, K; Nagel, M; Nagy, E; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Garcia, R F Naranjo; Narayan, R; Villar, D I Narrias; Naryshkin, I; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Nef, P D; Negri, A; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolopoulos, K; Nilsen, J K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Nooney, T; Norberg, S; Nordberg, M; Norjoharuddeen, N; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nurse, E; Nuti, F; O'grady, F; O'Neil, D C; O'Rourke, A A; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, I; Ochoa-Ricoux, J P; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Oide, H; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Seabra, L F Oleiro; Pino, S A Olivares; Damazio, D Oliveira; Olszewski, A; Olszowska, J; Onofre, A; Onogi, K; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Orr, R S; Osculati, B; Ospanov, R; Garzon, G Otero Y; Otono, H; Ouchrif, M; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Owen, R E; Ozcan, V E; Ozturk, N; Pachal, K; Pages, A Pacheco; Aranda, C Padilla; Pagáčová, M; Griso, S Pagan; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Palka, M; Pallin, D; Palma, A; Panagiotopoulou, E St; Pandini, C E; Vazquez, J G Panduro; Pani, P; Panitkin, S; Pantea, D; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Hernandez, D Paredes; Parker, A J; Parker, M A; Parker, K A; Parodi, F; Parsons, J A; Parzefall, U; Pascuzzi, V; Pasqualucci, E; Passaggio, S; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Pauly, T; Pearce, J; Pearson, B; Pedersen, L E; Pedersen, M; Lopez, S Pedraza; Pedro, R; Peleganchuk, S V; Pelikan, D; Penc, O; Peng, C; Peng, H; Penwell, J; 
Peralva, B S; Perego, M M; Perepelitsa, D V; Codina, E Perez; Perini, L; Pernegger, H; Perrella, S; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petroff, P; Petrolo, E; Petrov, M; Petrucci, F; Pettersson, N E; Peyaud, A; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Pickering, M A; Piegaia, R; Pilcher, J E; Pilkington, A D; Pin, A W J; Pina, J; Pinamonti, M; Pinfold, J L; Pingel, A; Pires, S; Pirumov, H; Pitt, M; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Pluth, D; Poettgen, R; Poggioli, L; Pohl, D; Polesello, G; Poley, A; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Astigarraga, M E Pozo; Pralavorio, P; Pranko, A; Prell, S; Price, D; Price, L E; Primavera, M; Prince, S; Proissl, M; Prokofiev, K; Prokoshin, F; Protopopescu, S; Proudfoot, J; Przybycien, M; Puddu, D; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Raddum, S; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Raine, J A; Rajagopalan, S; Rammensee, M; Rangel-Smith, C; Ratti, M G; Rauscher, F; Rave, S; Ravenscroft, T; Raymond, M; Read, A L; Readioff, N P; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reichert, J; Reisin, H; Rembser, C; Ren, H; Rescigno, M; Resconi, S; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Richter, S; Richter-Was, E; Ricken, O; Ridel, M; Rieck, P; Riegel, C J; Rieger, J; Rifki, O; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ristić, B; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Rizzi, C; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Rodina, Y; Perez, A Rodriguez; Rodriguez, D Rodriguez; Roe, S; Rogan, C S; Røhne, O; Romaniouk, A; Romano, M; Saez, S M Romano; Adam, E Romero; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, P; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, J H N; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Russell, H L; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryu, S; Ryzhov, A; Saavedra, A F; Sabato, G; Sacerdoti, S; Sadrozinski, H F-W; Sadykov, R; Tehrani, F Safai; Saha, P; Sahinsoy, M; Saimpert, M; Saito, T; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Loyola, J E Salazar; Salek, D; De Bruin, P H Sales; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sammel, D; Sampsonidis, D; Sanchez, A; Sánchez, J; Martinez, V Sanchez; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, C; Sandstroem, R; Sankey, D P C; Sannino, M; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Castillo, I Santoyo; Sapp, K; Sapronov, A; Saraiva, J G; Sarrazin, B; Sasaki, O; Sasaki, Y; Sato, K; Sauvage, G; Sauvan, E; Savage, G; Savard, P; Sawyer, C; Sawyer, L; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaeffer, J; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R 
D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Schiavi, C; Schillo, C; Schioppa, M; Schlenker, S; Schmieden, K; Schmitt, C; Schmitt, S; Schmitz, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schopf, E; Schorlemmer, A L S; Schott, M; Schovancova, J; Schramm, S; Schreyer, M; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schweiger, H; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Sciolla, G; Scuri, F; Scutti, F; Searcy, J; Seema, P; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekhon, K; Sekula, S J; Seliverstov, D M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Sessa, M; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shaikh, N W; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shaw, S M; Shcherbakova, A; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Saadi, D Shoaleh; Shochet, M J; Shojaii, S; Shrestha, S; Shulga, E; Shupe, M A; Sicho, P; Sidebo, P E; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simon, D; Simon, M; Sinervo, P; Sinev, N B; Sioli, M; Siragusa, G; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinner, M B; Skottowe, H P; Skubic, P; Slater, M; Slavicek, T; Slawinska, M; Sliwa, K; Slovak, R; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, M N K; Smith, R W; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Sokhrannyi, G; Sanchez, C A Solans; Solar, M; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Son, H; Song, H Y; Sood, A; Sopczak, A; Sopko, V; Sorin, V; Sosa, D; Sotiropoulou, C L; Soualah, R; Soukharev, A M; South, D; Sowden, B C; Spagnolo, S; Spalla, M; Spangenberg, M; Spanò, F; Sperlich, D; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Denis, R D St; Stabile, A; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, G H; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Suchek, S; Sugaya, Y; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, S; Svatos, M; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tannenwald, B B; Araya, S Tapia; Tapprogge, S; Tarem, S; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Delgado, A Tavares; Tayalati, Y; Taylor, A C; Taylor, G N; Taylor, P T E; Taylor, W; Teischinger, F A; Teixeira-Dias, P; Temming, K K; Temple, D; Kate, H Ten; Teng, P K; Teoh, J J; Tepel, F; Terada, 
S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Tibbetts, M J; Torres, R E Ticse; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tipton, P; Tisserant, S; Todome, K; Todorov, T; Todorova-Nova, S; Tojo, J; Tokár, S; Tokushuku, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Tong, B; Torrence, E; Torres, H; Pastor, E Torró; Toth, J; Touchard, F; Tovey, D R; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Trofymov, A; Troncon, C; Trottier-McDonald, M; Trovatelli, M; Truong, L; Trzebinski, M; Trzupek, A; Tseng, J C-L; Tsiareshka, P V; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsui, K M; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turgeman, D; Turra, R; Turvey, A J; Tuts, P M; Tyndel, M; Ucchielli, G; Ueda, I; Ueno, R; Ughetto, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urban, J; Urquijo, P; Urrejola, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valderanis, C; Santurio, E Valdes; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Vallecorsa, S; Ferrer, J A Valls; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vankov, P; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vasquez, J G; Vazeille, F; Schroeder, T Vazquez; Veatch, J; Veloce, L M; Veloso, F; Veneziano, S; Ventura, A; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Boeriu, O E Vickey; Viehhauser, G H A; Viel, S; Vigani, L; Vigne, R; Villa, M; Perez, M Villaplana; Vilucchi, E; Vincter, M G; Vinogradov, V B; Vittori, C; Vivarelli, I; Vlachos, S; Vlasak, M; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Milosavljevic, M Vranjes; Vrba, V; Vreeswijk, M; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wallangen, V; Wang, C; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Washbrook, A; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; Whallon, N L; Wharton, A M; White, A; White, M J; White, R; White, S; Whiteson, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wildauer, A; Wilk, F; Wilkens, H G; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winston, O J; Winter, B T; Wittgen, M; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wu, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wyatt, T R; Wynne, B M; Xella, S; Xu, D; Xu, L; 
Yabsley, B; Yacoob, S; Yakabe, R; Yamaguchi, D; Yamaguchi, Y; Yamamoto, A; Yamamoto, S; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, Y; Yang, Z; Yao, W-M; Yap, Y C; Yasu, Y; Yatsenko, E; Wong, K H Yau; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yuen, S P Y; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zakharchuk, N; Zalieckas, J; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zeng, J C; Zeng, Q; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zhang, D; Zhang, F; Zhang, G; Zhang, H; Zhang, J; Zhang, L; Zhang, R; Zhang, R; Zhang, X; Zhang, Z; Zhao, X; Zhao, Y; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, C; Zhou, L; Zhou, L; Zhou, M; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, S; Zinonos, Z; Zinser, M; Ziolkowski, M; Živković, L; Zobernig, G; Zoccoli, A; Nedden, M Zur; Zurzolo, G; Zwalinski, L
2016-01-01
A test of CP invariance in Higgs boson production via vector-boson fusion using the method of the Optimal Observable is presented. The analysis exploits the decay mode of the Higgs boson into a pair of τ leptons and is based on 20.3 fb⁻¹ of proton-proton collision data at √s = 8 TeV collected by the ATLAS experiment at the LHC. Contributions from CP-violating interactions between the Higgs boson and electroweak gauge bosons are described in an effective field theory framework, in which the strength of CP violation is governed by a single parameter, d̃. The mean values and distributions of CP-odd observables agree with the expectation in the Standard Model and show no sign of CP violation. The CP-mixing parameter d̃ is constrained to the interval [Formula: see text] at 68% confidence level, consistent with the Standard Model expectation of d̃ = 0.
Smartphone Assessment of Knee Flexion Compared to Radiographic Standards
Dietz, Matthew J.; Sprando, Daniel; Hanselman, Andrew E.; Regier, Michael D.; Frye, Benjamin M.
2017-01-01
Purpose: Measuring knee range of motion (ROM) is an important assessment for the outcomes of total knee arthroplasty. Recent technological advances have led to the development and use of accelerometer-based smartphone applications to measure knee ROM. The purpose of this study was to develop, standardize, and validate methods of utilizing smartphone accelerometer technology compared to radiographic standards, visual estimation, and goniometric evaluation. Methods: Participants used visual estimation, a long-arm goniometer, and a smartphone accelerometer to determine range of motion of a cadaveric lower extremity; these results were compared to radiographs taken at the same angles. Results: The optimal smartphone position was determined to be on top of the leg at the distal femur and proximal tibia location. Between methods, it was found that the smartphone and goniometer were comparably reliable in measuring knee flexion (ICC = 0.94; 95% CI: 0.91–0.96). Visual estimation was found to be the least reliable method of measurement. Conclusions: The results suggested that the smartphone accelerometer was non-inferior when compared to the other measurement techniques, demonstrated similar deviations from radiographic standards, and did not appear to be influenced by the person performing the measurements or the girth of the extremity. PMID:28179062
Quantum approach to classical statistical mechanics.
Somma, R D; Batista, C D; Ortiz, G
2007-07-20
We present a new approach to study the thermodynamic properties of d-dimensional classical systems by reducing the problem to the computation of ground state properties of a d-dimensional quantum model. This classical-to-quantum mapping allows us to extend the scope of standard optimization methods by unifying them under a general framework. The quantum annealing method is naturally extended to simulate classical systems at finite temperatures. We derive the rates to assure convergence to the optimal thermodynamic state using the adiabatic theorem of quantum mechanics. For simulated and quantum annealing, we obtain the asymptotic rates T(t) ≈ pN/(k_B log t) and γ(t) ≈ (Nt)^(−c/N) for the temperature and magnetic field, respectively. Other annealing strategies are also discussed.
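As a rough illustration of the logarithmic annealing schedule quoted above, the following sketch anneals a small one-dimensional Ising chain with Metropolis updates and T(t) ∝ 1/log t. The chain size, coupling constant, and schedule prefactor are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch: simulated annealing of a 1D Ising chain with a
# logarithmic cooling schedule T(t) ~ c / log(t), the asymptotic form
# quoted above.  The chain size, coupling and constant c are illustrative
# assumptions, not values from the paper.
rng = np.random.default_rng(0)
N, J, c = 64, 1.0, 2.0
spins = rng.choice([-1, 1], size=N)

def energy(s):
    # Nearest-neighbour ferromagnetic coupling with open boundaries.
    return -J * np.sum(s[:-1] * s[1:])

for t in range(2, 20001):
    T = c / np.log(t)                    # logarithmic annealing schedule
    i = rng.integers(N)
    trial = spins.copy()
    trial[i] *= -1                       # single spin-flip proposal
    dE = energy(trial) - energy(spins)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins = trial                    # Metropolis acceptance

print("final energy:", energy(spins), "ground state:", -J * (N - 1))
```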
Optimal solutions for the evolution of a social obesity epidemic model
NASA Astrophysics Data System (ADS)
Sikander, Waseem; Khan, Umar; Mohyud-Din, Syed Tauseef
2017-06-01
In this work, a novel modification of the traditional homotopy perturbation method (HPM) is proposed by embedding an auxiliary parameter in the boundary condition. The scheme is used to carry out a mathematical evaluation of a social obesity epidemic model: the incidence of excess weight and obesity in the adult population is analyzed and its evolution over the coming years is predicted with the modified algorithm. The proposed method improves the convergence of the approximate analytical solution over the domain of the problem. Furthermore, a convenient way of choosing an optimal value of the auxiliary parameter, by minimizing the total residual error, is presented. Graphical comparison of the obtained results with the standard HPM clearly demonstrates the accuracy and efficiency of the developed scheme.
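The idea of selecting the auxiliary parameter by minimizing the total residual error can be sketched on a toy problem. The example below uses an assumed trial family u(t; c) = 1 − t + c·t² for the equation u′ = −u², u(0) = 1, and picks c by minimizing the integrated squared residual; it is not the authors' obesity model or their HPM series.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative sketch of picking an auxiliary parameter by minimizing the
# total (integrated squared) residual.  The toy problem u' = -u^2, u(0) = 1
# and the two-term trial solution u(t; c) = 1 - t + c*t**2 are assumptions
# for demonstration only; they are not the obesity model from the paper.
t = np.linspace(0.0, 0.5, 201)

def residual_norm(c):
    u = 1.0 - t + c * t**2              # trial approximate solution
    du = -1.0 + 2.0 * c * t             # its derivative
    r = du + u**2                       # residual of u' + u^2 = 0
    return np.trapz(r**2, t)            # total squared residual

opt = minimize_scalar(residual_norm, bounds=(0.0, 3.0), method="bounded")
print("optimal auxiliary parameter c ≈", round(opt.x, 4))
print("exact Taylor coefficient is 1.0 (u = 1/(1+t) = 1 - t + t^2 - ...)")
```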
A sequential quadratic programming algorithm using an incomplete solution of the subproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, W.; Prieto, F.J.
1993-05-01
We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.
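For reference, a standard SQP solve (the baseline that the report generalizes) can be run with SciPy's SLSQP implementation; the small test problem below is an illustrative assumption, and the incomplete-subproblem variant itself is not implemented here.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of a standard SQP solve for reference (SciPy's SLSQP), applied to
# a small illustrative problem; the flexible/incomplete-subproblem variant
# analyzed in the report is not implemented here.
def f(x):                                 # objective
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},   # g(x) >= 0
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},     # h(x) = 0
]

res = minimize(f, x0=np.array([2.0, 0.0]), method="SLSQP",
               constraints=constraints, bounds=[(0, None), (0, None)])
print("x* =", res.x, "f(x*) =", res.fun, "converged:", res.success)
```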
Malavera, Alejandra; Vasquez, Alejandra; Fregni, Felipe
2015-01-01
Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that has been extensively studied. While there have been initial positive results in some clinical trials, there is still variability in tDCS results. The aim of this article is to review and discuss patents assessing novel methods to optimize the use of tDCS. A systematic review was performed using the Google Patents database with tDCS as the main technique, including patents with filing dates between 2010 and 2015. Twenty-two patents met our inclusion criteria. These patents attempt to address current tDCS limitations. Only a few of them have been investigated in clinical trials (i.e., high-definition tDCS), and most have not previously been tested in human trials. Further clinical testing is required to assess which patents are more likely to optimize the effects of tDCS. We discuss the potential optimization of tDCS based on these patents and the current experience with standard tDCS.
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization
NASA Astrophysics Data System (ADS)
Adhikari, Sam
2007-11-01
Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of an imperfectly expanded jet produces shock cells and expansion or compression waves at the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of cylindrical-coordinate-based full Navier-Stokes equations and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters to the shock cell patterns, screech frequency, and distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedra provides the optimal result. Various industry-standard methods, such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming, are used in the quadratic optimization.
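A minimal sketch of the kind of quadratic program described, a convex quadratic objective minimized over a polyhedron, is shown below using CVXPY; the problem matrices are random placeholders rather than quantities from the jet-noise model, and CVXPY itself is an assumed dependency.

```python
import numpy as np
import cvxpy as cp

# Minimal sketch of a convex QP with affine constraints (minimization of a
# quadratic objective over a polyhedron), as described above.  The problem
# data are random placeholders, not quantities from the jet-noise model.
rng = np.random.default_rng(1)
n, m = 8, 12
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)                  # symmetric positive definite
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1.0

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
constraints = [A @ x <= b]               # polyhedral feasible set
prob = cp.Problem(objective, constraints)
prob.solve()
print("status:", prob.status, "optimal value:", prob.value)
```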
Solid phase microextraction Arrow for the sampling of volatile amines in wastewater and atmosphere.
Helin, Aku; Rönkkö, Tuukka; Parshintsev, Jevgeni; Hartonen, Kari; Schilling, Beat; Läubli, Thomas; Riekkola, Marja-Liisa
2015-12-24
A new method is introduced for the sampling of volatile low molecular weight alkylamines in ambient air and wastewater by utilizing a novel SPME Arrow system, which contains a larger volume of sorbent compared to a standard SPME fiber. Parameters affecting the extraction, such as coating material, need for preconcentration, sample volume, pH, stirring rate, salt addition, extraction time and temperature, were carefully optimized. In addition, analysis conditions, including desorption temperature and time as well as gas chromatographic parameters, were optimized. Compared to a conventional SPME fiber, the SPME Arrow had better robustness and sensitivity. Average intermediate reproducibility of the method, expressed as relative standard deviation, was 12% for dimethylamine and 14% for trimethylamine, and their limits of quantification were 10 μg/L and 0.13 μg/L, respectively. The working range extended from the limits of quantification to 500 μg/L for dimethylamine and to 130 μg/L for trimethylamine. Several alkylamines were qualitatively analyzed in real samples, while the target compounds dimethyl- and trimethylamine were quantified. The concentrations in influent and effluent wastewater samples were almost the same (∼80 μg/L for dimethylamine, 120 μg/L for trimethylamine), meaning that the amines either pass through the water purification process unchanged or are produced at the same rate as they are removed. For the air samples, preconcentration with a phosphoric acid coated denuder was required, and the concentration of trimethylamine was found to be around 1 ng/m³. The developed method was compared with an optimized method based on conventional SPME, and the advantages and disadvantages of both approaches are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.
Chang, Joshua; Paydarfar, David
2014-12-01
Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.
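A stripped-down sketch of the stochastically-seeded, gradient-based search is given below for the FitzHugh-Nagumo model: a piecewise-constant stimulus is optimized from several random initial waveforms to minimize stimulus energy plus a soft penalty when no spike is elicited. The model parameters, stimulus parameterization, penalty weight, and spike criterion are assumptions for illustration and are simpler than the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of a stochastically-seeded, gradient-based search for a low-energy
# stimulus that triggers a FitzHugh-Nagumo spike.  Model parameters, the
# piecewise-constant stimulus parameterization and the penalty weight are
# illustrative assumptions; the paper's algorithm and outcome conditions
# are more elaborate.
a, b, eps, dt, T = 0.7, 0.8, 0.08, 0.1, 30.0
steps = int(T / dt)
K = 10                                    # number of stimulus segments

def simulate(I_segments):
    v, w = -1.20, -0.62                   # approximate resting state
    I = np.repeat(I_segments, steps // K)
    vmax = v
    for k in range(steps):
        dv = v - v**3 / 3.0 - w + I[k]
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        vmax = max(vmax, v)
    return vmax

def cost(I_segments):
    energy = np.sum(I_segments**2) * (T / K)      # stimulus "energy"
    vmax = simulate(I_segments)
    penalty = 100.0 * max(0.0, 1.0 - vmax)**2     # soft spike requirement
    return energy + penalty

rng = np.random.default_rng(0)
best = None
for seed in range(5):                      # stochastically generated seeds
    x0 = rng.uniform(0.0, 0.8, size=K)
    res = minimize(cost, x0, method="L-BFGS-B", bounds=[(0.0, 1.5)] * K)
    if best is None or res.fun < best.fun:
        best = res

print("best cost:", round(best.fun, 4),
      "spike reached:", simulate(best.x) > 1.0)
```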
Hamedi, Raheleh; Hadjmohammadi, Mohammad Reza
2017-09-01
A novel design of hollow-fiber liquid-phase microextraction containing multiwalled carbon nanotubes as a solid sorbent, immobilized in the pores and lumen of the hollow fiber by the sol-gel technique, was developed for the pre-concentration and determination of polycyclic aromatic hydrocarbons in environmental water samples. The proposed method utilized both solid- and liquid-phase microextraction media. Parameters that affect the extraction of polycyclic aromatic hydrocarbons were optimized in two successive steps as follows. Firstly, a methodology based on a quarter factorial design was used to choose the significant variables. Then, these significant factors were optimized utilizing a central composite design. Under the optimized conditions (extraction time = 25 min, amount of multiwalled carbon nanotubes = 78 mg, sample volume = 8 mL, and desorption time = 5 min), the calibration curves showed high linearity (R² = 0.99) in the range of 0.01-500 ng/mL and the limits of detection were in the range of 0.007-1.47 ng/mL. The obtained extraction recoveries for a 10 ng/mL polycyclic aromatic hydrocarbon standard solution were in the range of 85-92%. Replicating the experiment under these conditions five times gave relative standard deviations lower than 6%. Finally, the method was successfully applied for the pre-concentration and determination of polycyclic aromatic hydrocarbons in environmental water samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the differences between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model from the spectra of the samples measured on the two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. As a result, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not required, the method may be more convenient in practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An
2018-05-01
In recent years, with the significant increase in urban development, it has become necessary to optimize the current air monitoring stations so that they reflect the quality of air in the environment. To examine the spatial representativeness of the air monitoring stations, Beijing's regional air monitoring station data from 2012 to 2014 were used to calculate the monthly mean particulate matter (PM10) concentration in the region, and the spatial distribution of PM10 concentration over the whole region was derived with the IDW interpolation method and a spatial grid statistical method in GIS. The spatial distribution and its variation across Beijing's districts were analyzed with a gridding model (1.5 km × 1.5 km cell resolution), and the three-year spatial analysis of PM10 concentration, including its variation and spatial overlay, showed that the frequency with which the total PM10 concentration exceeded the standard varied across the region. The results indicate that it is important to optimize the layout of the existing air monitoring stations by combining the spatial distribution of air pollutant concentrations with GIS.
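A minimal sketch of the IDW interpolation step onto a regular grid is given below; the station coordinates, PM10 values, and the 1.5 km cell size used here are placeholders rather than the Beijing data.

```python
import numpy as np

# Minimal sketch of inverse-distance-weighted (IDW) interpolation of
# monthly mean PM10 values from monitoring stations onto a regular grid.
# Station coordinates/values and the 1.5 km cell size are placeholders,
# not the Beijing data set.
rng = np.random.default_rng(2)
stations = rng.uniform(0, 30, size=(12, 2))        # x, y in km
pm10 = rng.uniform(60, 160, size=12)               # monthly mean PM10

def idw(points, values, grid_xy, power=2.0):
    d = np.linalg.norm(grid_xy[:, None, :] - points[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                         # avoid division by zero
    w = 1.0 / d**power
    return (w @ values) / w.sum(axis=1)

gx, gy = np.meshgrid(np.arange(0, 30, 1.5), np.arange(0, 30, 1.5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = idw(stations, pm10, grid).reshape(gx.shape)
print("grid cells above a 150 ug/m3 standard:", int((surface > 150).sum()))
```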
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises the collection of drill core data, karst cave stochastic model generation, SLIDE simulation, and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the karst cave lengths fit a negative exponential distribution model, whereas the carbonatite lengths do not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and the carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
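The two sampling steps mentioned, inverse-transform sampling for the negative-exponential cave lengths and acceptance-rejection sampling for the non-standard carbonatite lengths, can be sketched as follows; the mean cave length and the example target density are assumptions for illustration, not values fitted to the Chengmenshan boreholes.

```python
import numpy as np

# Sketch of the two sampling steps mentioned above: inverse-transform
# sampling for the negative-exponential karst-cave lengths, and
# acceptance-rejection for a non-standard length distribution.  The mean
# cave length and the example target density are assumptions for
# illustration, not fitted values from the Chengmenshan boreholes.
rng = np.random.default_rng(3)
mean_cave_len = 2.5                                # metres (assumed)

# Inverse transform: if U ~ Uniform(0,1), then -mean*ln(1-U) ~ Exp(1/mean).
u = rng.random(10000)
cave_lengths = -mean_cave_len * np.log(1.0 - u)

# Acceptance-rejection for an arbitrary density f on [0, 10] using a
# uniform proposal; f here is an un-normalized, made-up carbonatite-length
# shape standing in for the non-standard empirical distribution.
def f(x):
    return np.exp(-((x - 3.0) ** 2) / 2.0) + 0.3 * np.exp(-x / 4.0)

M = 1.4                                            # bound with f(x) <= M on [0, 10]
samples = []
while len(samples) < 10000:
    x = rng.uniform(0.0, 10.0)
    if rng.random() * M <= f(x):                   # accept with prob f(x)/M
        samples.append(x)

print("mean cave length ≈", round(cave_lengths.mean(), 2))
print("mean carbonatite length ≈", round(np.mean(samples), 2))
```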
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linear constrained quadratic optimization model without considering any dose-volume constraints, and then the dose constraints for the voxels violating the dose-volume constraints are gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linear constrained quadratic programming. For choosing the proper candidate voxels for the current dose constraint adding, a so-called geometric distance defined in the transformed standard quadratic form of the fluence map optimization model was used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value caused inevitably by the constraint adding. It can be regarded as an upgrading to the traditional dose sorting technique. The geometry explanation for the proposed method is also given and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure a stable iteration convergence. The new algorithm is tested on four cases including head-neck, a prostate, a lung and an oropharyngeal, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for the cases whose target regions are in non-convex shapes. It is a more efficient optimization technique to some extent for choosing constraints than the dose sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
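A highly simplified sketch of the iterative constraint-adding framework is given below, using dose sorting (the traditional criterion the paper improves upon) to pick which violating voxels receive hard constraints; the geometric-distance sorting, the interior-point solver, and the clinical dose matrices are not reproduced, and all problem data are random placeholders.

```python
import numpy as np
import cvxpy as cp

# Highly simplified sketch of the iterative framework described above:
# start from a plain quadratic fluence-map problem, then add hard dose
# constraints for violating voxels until a dose-volume constraint holds.
# Voxels to constrain are chosen here by simple dose sorting (the
# "traditional" criterion); the paper's geometric-distance sorting and the
# interior-point solver details are not reproduced.  All problem data are
# random placeholders.
rng = np.random.default_rng(4)
n_beamlets, n_target, n_oar = 20, 40, 40
Dt = rng.uniform(0.0, 1.0, (n_target, n_beamlets))   # target dose matrix
Do = rng.uniform(0.0, 0.6, (n_oar, n_beamlets))      # OAR dose matrix
p, limit, max_frac = 60.0, 25.0, 0.3                 # prescription / DV constraint

x = cp.Variable(n_beamlets, nonneg=True)
constrained = set()                                   # voxels given hard constraints
for it in range(20):
    cons = [Do[i] @ x <= limit for i in constrained]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(Dt @ x - p)), cons)
    prob.solve()
    oar_dose = Do @ x.value
    violating = np.where(oar_dose > limit)[0]
    if len(violating) <= max_frac * n_oar:            # DV constraint satisfied
        break
    # add hard constraints for the highest-dose violating voxels
    worst = violating[np.argsort(oar_dose[violating])[::-1][:3]]
    constrained.update(worst.tolist())

print("iterations:", it + 1, "objective:", round(prob.value, 1),
      "OAR voxels above limit:", int((Do @ x.value > limit).sum()))
```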
Productivity analysis to overcome the limited availability of production time in SME FBS
NASA Astrophysics Data System (ADS)
Nurhasanah, N.; Jingga; Aribowo, B.; Gayatri, AM; Mardhika, DA; Tanjung, WN; Suri, QA; Safitri, R.; Supriyanto, A.
2017-12-01
Good industrial development should pay attention to the human factor as the main driver. The condition of work procedures, the work area, and the environment can affect production results: if they are not optimal, production runs slowly. If the work system is suboptimal, productivity suffers, operators work uncomfortably and tire easily, and work accidents may even occur. Thus, an optimal and ergonomic arrangement of the overall work system and work environment design is required for workers to work well, regularly, safely, and comfortably, with the aim of improving work productivity. This research measures the performance of a textile SME (Small and Medium Enterprise) located in Sukabumi, SME FBS, which produces children's clothing. The performance measurement is aimed at improving the competitiveness of this textile SME so that it can compete with other SMEs and with textile industries that are already established in the market. Based on standard-time and TOC (Theory of Constraints) calculations at two FBS CMT (Cut-Make-Trim) sites in Sukabumi, CMT Margaluyu Village and CMT Purabaya Village, the standard time for shirt work at CMT Margaluyu Village is less than that at CMT Purabaya Village. This indicates that production at SME FBS is more effective when organized by the process method.
Brown, Roger B; Madrid, Nathaniel J; Suzuki, Hideaki; Ness, Scott A
2017-01-01
RNA-sequencing (RNA-seq) has become the standard method for unbiased analysis of gene expression but also provides access to more complex transcriptome features, including alternative RNA splicing, RNA editing, and even detection of fusion transcripts formed through chromosomal translocations. However, differences in library methods can adversely affect the ability to recover these different types of transcriptome data. For example, some methods have bias for one end of transcripts or rely on low-efficiency steps that limit the complexity of the resulting library, making detection of rare transcripts less likely. We tested several commonly used methods of RNA-seq library preparation and found vast differences in the detection of advanced transcriptome features, such as alternatively spliced isoforms and RNA editing sites. By comparing several different protocols available for the Ion Proton sequencer and by utilizing detailed bioinformatics analysis tools, we were able to develop an optimized random primer based RNA-seq technique that is reliable at uncovering rare transcript isoforms and RNA editing features, as well as fusion reads from oncogenic chromosome rearrangements. The combination of optimized libraries and rapid Ion Proton sequencing provides a powerful platform for the transcriptome analysis of research and clinical samples.
Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model
Seo, Dong Gi; Weiss, David J.
2015-01-01
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848
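A compact sketch of D-optimal item selection for a two-dimensional MIRT model is shown below: at each step the unadministered item that maximizes the determinant of the accumulated Fisher information matrix is chosen. The item parameters and provisional ability estimate are random placeholders, and the bifactor structure and Ds-optimality variant from the study are not reproduced.

```python
import numpy as np

# Sketch of D-optimality item selection for a two-dimensional MIRT model:
# pick the unadministered item whose Fisher information increment
# maximizes the determinant of the accumulated information matrix.  Item
# parameters and the provisional theta are random placeholders; the
# bifactor structure and Ds-optimality variant are not reproduced here.
rng = np.random.default_rng(5)
n_items, dim = 50, 2
a = rng.uniform(0.5, 2.0, size=(n_items, dim))      # discrimination vectors
d = rng.normal(0.0, 1.0, size=n_items)              # intercepts
theta = np.zeros(dim)                               # provisional ability estimate

def item_information(i, theta):
    # 2PL MIRT information matrix: P(1-P) * a_i a_i^T
    p = 1.0 / (1.0 + np.exp(-(a[i] @ theta + d[i])))
    return p * (1.0 - p) * np.outer(a[i], a[i])

administered = []
fim = np.eye(dim) * 1e-3                            # small prior keeps FIM invertible
for step in range(10):
    remaining = [i for i in range(n_items) if i not in administered]
    # D-optimality: maximize det(FIM + I_i(theta))
    best = max(remaining,
               key=lambda i: np.linalg.det(fim + item_information(i, theta)))
    administered.append(best)
    fim = fim + item_information(best, theta)

print("selected items:", administered)
print("final log-det FIM:", round(np.linalg.slogdet(fim)[1], 3))
```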
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
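RODEo itself is proprietary, but the general pattern of replacing a full factorial DOE with sequential, model-guided experiment selection can be sketched with off-the-shelf Bayesian optimization; the example below assumes scikit-optimize is available and uses a made-up synthetic etch-rate response in place of real plasma experiments.

```python
import numpy as np
from skopt import gp_minimize          # assumes scikit-optimize is installed

# Generic sketch of sequential, model-guided experiment selection for etch
# conditions.  The "experiment" below is a synthetic etch-rate function of
# pressure and RF power; it and the parameter ranges are made-up
# placeholders, and this is not the RODEo software itself.
def negative_etch_rate(params):
    pressure, power = params                       # mTorr, W (assumed units)
    rate = (power / 300.0) * np.exp(-((pressure - 40.0) / 25.0) ** 2)
    noise = np.random.normal(0.0, 0.01)            # measurement noise
    return -(rate + noise)                         # minimize the negative rate

result = gp_minimize(negative_etch_rate,
                     dimensions=[(10.0, 100.0),    # pressure range
                                 (50.0, 300.0)],   # power range
                     n_calls=20, random_state=0)

print("best conditions (pressure, power):", result.x)
print("predicted maximum etch rate:", round(-result.fun, 3))
```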
Metrani, Rita; Jayaprakasha, G K; Patil, Bhimanagouda S
2018-03-01
The present study describes a rapid microplate method to determine pyruvic acid content in different varieties of onions. Onion juice was treated with 2,4-dinitrophenylhydrazine to obtain the hydrazone, which was further treated with potassium hydroxide to give a stable colored complex. The stability of the potassium complex was extended to two hours, and the structures of the hydrazones were confirmed by LC-MS for the first time. The method was optimized by testing different bases and acids with varying concentrations of dinitrophenylhydrazine to obtain a stable color, and the results were comparable to those of the established method. Repeatability and precision showed <9% relative standard deviation. Moreover, sweet onion juice was stored for four weeks at different temperatures to assess stability; the pyruvate remained stable at all temperatures except 25°C. Thus, the developed method has good potential for determining pungency in a large number of onions in a short time using a minimal amount of reagents. Copyright © 2017 Elsevier Ltd. All rights reserved.
Li, Yan-Liang; Fang, Zhi-Xiang; You, Jing
2013-02-20
A validated method for analyzing Cry proteins is a prerequisite for studying the fate and ecological effects of contaminants associated with genetically engineered Bacillus thuringiensis crops. The current study optimized the extraction method for analyzing Cry1Ac protein in soil using response surface methodology with a three-level, three-factor Box-Behnken experimental design (BBD). The optimum extraction conditions were 21 °C and 630 rpm for 2 h. Regression analysis showed a good fit of the experimental data to the second-order polynomial model, with a coefficient of determination of 0.96. The method was sensitive and precise, with a method detection limit of 0.8 ng/g dry weight and relative standard deviations of 7.3%. Finally, the established method was applied to analyze Cry1Ac protein residues in field-collected soil samples. Trace amounts of Cry1Ac protein were detected in soils where transgenic crops had been planted for 8 and 12 years.
Space Radiation Transport Methods Development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2002-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method optimized design.
FFT-enhanced IHS transform method for fusing high-resolution satellite images
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2007-01-01
Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
Whinnett, Zachary I; Sohaib, S M Afzal; Jones, Siana; Kyriacou, Andreas; March, Katherine; Coady, Emma; Mayet, Jamil; Hughes, Alun D; Frenneaux, Michael; Francis, Darrel P
2014-04-03
Echocardiographic optimization of pacemaker settings is the current standard of care for patients treated with cardiac resynchronization therapy. However, the process requires considerable time of expert staff. The BRAVO study is a non-inferiority trial comparing echocardiographic optimization of atrioventricular (AV) and interventricular (VV) delay with an alternative method using non-invasive blood pressure monitoring that can be automated to consume less staff resources. BRAVO is a multi-centre, randomized, cross-over, non-inferiority trial of 400 patients with a previously implanted cardiac resynchronization device. Patients are randomly allocated to six months in each arm. In the echocardiographic arm, AV delay is optimized using the iterative method and VV delay by maximizing LVOT VTI. In the haemodynamic arm AV and VV delay are optimized using non-invasive blood pressure measured using finger photoplethysmography. At the end of each six month arm, patients undergo the primary outcome measure of objective exercise capacity, quantified as peak oxygen uptake (VO2) on a cardiopulmonary exercise test. Secondary outcome measures are echocardiographic measurement of left ventricular remodelling, quality of life score and N-terminal pro B-type Natriuretic Peptide (NT-pro BNP). The study is scheduled to complete recruitment in December 2013 and to complete follow up in December 2014. If exercise capacity is non-inferior with haemodynamic optimization compared with echocardiographic optimization, it would be proof of concept that haemodynamic optimization is an acceptable alternative which has the potential to be more easily implemented. Clinicaltrials.gov NCT01258829.
A global optimization algorithm inspired by the behavior of selfish herds.
Fausto, Fernando; Cuevas, Erik; Valdivia, Arturo; González, Adrián
2017-10-01
In this paper, a novel swarm optimization algorithm called the Selfish Herd Optimizer (SHO) is proposed for solving global optimization problems. SHO is based on the simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. In SHO, individuals emulate the predatory interactions between groups of prey and predators by two types of search agents: the members of a selfish herd (the prey) and a pack of hungry predators. Depending on their classification as either prey or predator, each individual is guided by a set of unique evolutionary operators inspired by this prey-predator relationship. These unique traits allow SHO to improve the balance between exploration and exploitation without altering the population size. To illustrate the proficiency and robustness of the proposed method, it is compared to other well-known evolutionary optimization approaches such as Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Firefly Algorithm (FA), Differential Evolution (DE), Genetic Algorithms (GA), Crow Search Algorithm (CSA), Dragonfly Algorithm (DA), Moth-flame Optimization Algorithm (MOA) and Sine Cosine Algorithm (SCA). The comparison examines several standard benchmark functions commonly considered within the literature on evolutionary algorithms. The experimental results show the remarkable performance of our proposed approach against those of the other compared methods, and as such SHO is proven to be an excellent alternative for solving global optimization problems. Copyright © 2017 Elsevier B.V. All rights reserved.
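The abstract describes the prey/predator interplay only at a high level; the toy update below illustrates how such a scheme can balance exploitation (prey following the herd leader) and exploration (prey fleeing the nearest predator, predators chasing weak prey). The specific operators, coefficients, and population handling here are illustrative assumptions, not the authors' SHO operators.

```python
import numpy as np

def toy_selfish_herd_step(prey, predators, fitness, bounds, rng):
    """One illustrative prey/predator update for a minimization problem.
    prey: (n, d), predators: (m, d), fitness(prey) -> (n,), bounds = (lo, hi)."""
    f = fitness(prey)
    leader = prey[np.argmin(f)]                       # best prey acts as the herd leader
    for i in range(len(prey)):
        nearest = predators[np.argmin(np.linalg.norm(predators - prey[i], axis=1))]
        # move toward the leader (exploitation) and away from the nearest predator (exploration)
        prey[i] += rng.uniform() * (leader - prey[i]) + rng.uniform() * (prey[i] - nearest)
    # predators pursue the poorest-fitness ("weak") prey
    weak = prey[np.argsort(f)[-len(predators):]]
    predators += rng.uniform(0, 1, predators.shape) * (weak - predators)
    np.clip(prey, bounds[0], bounds[1], out=prey)
    np.clip(predators, bounds[0], bounds[1], out=predators)
    return prey, predators
```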
Global optimization of minority game by intelligent agents
NASA Astrophysics Data System (ADS)
Xie, Yan-Bo; Wang, Bing-Hong; Hu, Chin-Kun; Zhou, Tao
2005-10-01
We propose a new model of the minority game with intelligent agents who use a trial-and-error method to make choices, such that the standard deviation σ² and the total loss in this model reach the theoretical minimum values in the long-time limit and the global optimization of the system is reached. This suggests that economic systems can self-organize into a highly optimized state through agents who make decisions based on inductive thinking, limited knowledge, and capabilities. When other kinds of agents are also present, the simulation results and analytic calculations show that the intelligent agents can gain profits from producers and are much more competent than the noise traders and conventional agents in the original minority game proposed by Challet and Zhang.
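For readers unfamiliar with the minority game, the sketch below reproduces the standard Challet-Zhang setup against which the intelligent agents are compared: agents keep virtual scores for fixed random strategies and, by trial and error, play whichever has performed best, while σ²/N measures how far the population is from the optimized state. The intelligent-agent rule introduced in the paper itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, S, T = 201, 3, 2, 2000                         # agents, memory, strategies/agent, rounds
strategies = rng.choice([-1, 1], size=(N, S, 2 ** m))  # fixed random lookup tables
scores = np.zeros((N, S))
history = int(rng.integers(0, 2 ** m))                 # encoded recent winning sides
attendance = []

for _ in range(T):
    best = scores.argmax(axis=1)                       # each agent's best strategy so far
    actions = strategies[np.arange(N), best, history]
    A = actions.sum()                                  # aggregate choice
    attendance.append(A)
    winning_side = -np.sign(A)                         # the minority side wins
    scores += strategies[:, :, history] * winning_side  # virtual (trial-and-error) scoring
    history = ((history << 1) | (1 if winning_side > 0 else 0)) % (2 ** m)

print("sigma^2 / N =", np.var(attendance) / N)
```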
A Standard Platform for Testing and Comparison of MDAO Architectures
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.
2012-01-01
The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
NASA Astrophysics Data System (ADS)
Meier, Walter Neil
This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to demonstrate the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
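As a sketch of the optimal interpolation step used to blend modeled and SSM/I-derived ice motions, the following function applies the standard analysis equation; the background and observation error covariances B and R, which in the thesis are derived from the error statistics against buoys, are assumed to be given here.

```python
import numpy as np

def optimal_interpolation(model_motion, obs_motion, obs_idx, B, R):
    """Blend modeled ice motion with observed motion (illustrative OI step).

    model_motion : (n,) background (model) motion component on the analysis grid
    obs_motion   : (p,) observed motion at a subset of grid points
    obs_idx      : (p,) indices of the observed points in the grid vector
    B, R         : background and observation error covariance matrices (assumed known)
    """
    n, p = model_motion.size, obs_motion.size
    H = np.zeros((p, n))
    H[np.arange(p), obs_idx] = 1.0                  # observation operator: point sampling
    innovation = obs_motion - H @ model_motion
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # optimal (Kalman) gain
    return model_motion + K @ innovation            # analysis = background + K * innovation
```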
Development of duplex real-time PCR for the detection of WSSV and PstDV1 in cultivated shrimp.
Leal, Carlos A G; Carvalho, Alex F; Leite, Rômulo C; Figueiredo, Henrique C P
2014-07-05
The White spot syndrome virus (WSSV) and Penaeus stylirostris penstyldensovirus 1 (previously named Infectious hypodermal and hematopoietic necrosis virus, IHHNV) are two of the most important viral pathogens of penaeid shrimp. Different methods have been applied for the diagnosis of these viruses, including real-time PCR (qPCR) assays. A duplex qPCR method allows the simultaneous detection of two viruses in the same sample, which is more cost-effective than assaying for each virus separately. Currently, an assay for the simultaneous detection of the WSSV and the PstDV1 in shrimp is unavailable. The aim of this study was to develop and standardize a duplex qPCR assay for the simultaneous detection of the WSSV and the PstDV1 in clinical samples of diseased L. vannamei, and additionally to evaluate the performance of two qPCR master mixes with regard to the clinical sensitivity of the assay, as well as different methods for evaluating qPCR results. The duplex qPCR assay for detecting WSSV and PstDV1 in clinical samples was successfully standardized. No difference in the amplification of the standard curves was observed between the duplex and singleplex assays. Specificities and sensitivities similar to those of the singleplex assays were obtained using the optimized duplex qPCR. The analytical sensitivities of the duplex qPCR were two copies of the WSSV control plasmid and 20 copies of the PstDV1 control plasmid. The standardized duplex qPCR confirmed the presence of viral DNA in 28 of the 43 samples tested. There was no difference in WSSV detection between the two kits or between the distinct methods for evaluating qPCR results. High clinical sensitivity for PstDV1 was obtained with the TaqMan Universal Master Mix associated with relative threshold evaluation. Three cases of simultaneous infection by the WSSV and the PstDV1 were identified with the duplex qPCR. The standardized duplex qPCR was shown to be a robust, highly sensitive, and feasible diagnostic tool for the simultaneous detection of the WSSV and the PstDV1 in whiteleg shrimp. The use of the TaqMan Universal Master Mix and the relative threshold method of data analysis in our duplex qPCR method provided optimal levels of sensitivity and specificity.
Zhou, Dong; Zhang, Hui; Ye, Peiqing
2016-01-01
Lateral penumbra of the multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributed to the leaf end shape. In our study, an analytical method for modelling the leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out for approximating the Pareto frontier. Results show that for the circular-arc leaf end the objective function is convex, and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that an optimal leaf end in the shape of a Bézier curve achieves the minimal standard deviation, while the minimum of the penumbra mean is obtained using a B-spline. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design of multileaf collimators. PMID:27110274
Optimal harvesting of a stochastic delay tri-trophic food-chain model with Lévy jumps
NASA Astrophysics Data System (ADS)
Qiu, Hong; Deng, Wenmin
2018-02-01
In this paper, the optimal harvesting of a stochastic delay tri-trophic food-chain model with Lévy jumps is considered. We introduce two kinds of environmental perturbations into this model. One is called white noise, which is continuous and is described by a stochastic integral with respect to the standard Brownian motion. The other is jumping noise, which is modeled by a Lévy process. Under some mild assumptions, the critical values between extinction and persistence in the mean of each species are established. The sufficient and necessary criteria for the existence of an optimal harvesting policy are established, and the optimal harvesting effort and the maximum sustainable yield are also obtained. We utilize the ergodic method to discuss the optimal harvesting problem. The results show that white noises and Lévy noises significantly affect the optimal harvesting policy, while time delays are harmless for the optimal harvesting strategy in some cases. At last, some numerical examples are introduced to show the validity of our results.
Qualitative and quantitative measurement of cannabinoids in cannabis using modified HPLC/DAD method.
Patel, Bhupendra; Wene, Daniel; Fan, Zhihua Tina
2017-11-30
This study presents an accurate and high-throughput method for the quantitative determination of various cannabinoids in cannabis plant material using high pressure liquid chromatography (HPLC) with a diode array detector (DAD). Sample extraction and chromatographic analysis conditions for the measurement of cannabinoids in the complex cannabis plant material matrix were optimized. The Agilent Poroshell 120 SB-C18 column provided high resolution for all target analytes with a short run time (10 minutes) owing to the core shell technology. The aqueous buffer mobile phase was optimized with ammonium acetate at pH 4.75. The change in the mobile phase and the new column ensured separation between cannabidiol (CBD) and cannabigerol (CBG), as well as between cannabigerol and tetrahydrocannabinolic acid (THCA), which were not well separated in previous publications; it also improved buffering capacity and provided analytical performance stability. Moreover, baseline drifting was significantly minimized by the use of a low concentration buffer solution (25 mM ammonium acetate). In addition, evaporation and reconstitution of the sample residue with a methanol-organic pure (OP) water solution (65:35) significantly reduced the matrix interference. The modified extraction produced good recoveries (>91%) for each of the eight cannabinoids. The optimized method was validated for specificity, linearity, sensitivity, precision, accuracy, and stability. The combined relative standard deviation (%RSD) for intra-day and inter-day precision for all eight analytes varied from 2.5% to 5.2% and 0.28% to 5.5%, respectively. The %RSD for the repeatability study varied from 1.1% to 5.5%. The recoveries from spiked cannabis matrix samples were greater than 90% for all analytes, except delta-8-tetrahydrocannabinol (Δ8-THC), which was 80%. The recoveries varied from 81% to 107% with a precision of 0.7-8.1% RSD. Delta-9-tetrahydrocannabinol (Δ9-THC) in all of the cannabis samples (n=635) was less than 10%, which is in compliance with the NJ Medicinal Marijuana regulation. Analysis of samples from two cultivars, which included ten individual samples, four composite samples, seven calibration standards, and four quality control standards, can be performed within 24 hours by this high-throughput method. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Medvedeva, Maria F.; Doubrovski, Valery A.
2017-03-01
The resolution of the acousto-optical method for blood typing was estimated experimentally by means of two types of reagents: monoclonal antibodies and standard hemagglutinating sera. The peculiarity of this work is the application of digital photo image processing by pixel analysis, previously proposed by the authors. The influence of the concentrations of the reagents and of the blood sample to be tested, as well as of the duration of the ultrasonic action on the biological object, upon the resolution of the acousto-optical method was investigated. The optimal experimental conditions for obtaining the maximum resolution of the acousto-optical method were found; this creates the prerequisites for reliable blood typing. The present paper is a further step in the development of the acousto-optical method for determining human blood groups.
Place, Benjamin J
2017-05-01
To address community needs, the National Institute of Standards and Technology has developed a candidate Standard Reference Material (SRM) for infant/adult nutritional formula based on milk and whey protein concentrates with isolated soy protein called SRM 1869 Infant/Adult Nutritional Formula. One major component of this candidate SRM is the fatty acid content. In this study, multiple extraction techniques were evaluated to quantify the fatty acids in this new material. Extraction methods that were based on lipid extraction followed by transesterification resulted in lower mass fraction values for all fatty acids than the values measured by methods utilizing in situ transesterification followed by fatty acid methyl ester extraction (ISTE). An ISTE method, based on the identified optimal parameters, was used to determine the fatty acid content of the new infant/adult nutritional formula reference material.
Chavez, Pierre-François; Meeus, Joke; Robin, Florent; Schubert, Martin Alexander; Somville, Pascal
2018-01-01
The evaluation of drug–polymer miscibility in the early phase of drug development is essential to ensure successful amorphous solid dispersion (ASD) manufacturing. This work investigates the comparison of thermodynamic models, conventional experimental screening methods (solvent casting, quench cooling), and a novel atomization screening device based on their ability to predict drug–polymer miscibility, solid state properties (Tg value and width), and adequate polymer selection during the development of spray-dried amorphous solid dispersions (SDASDs). Binary ASDs of four drugs and seven polymers were produced at 20:80, 40:60, 60:40, and 80:20 (w/w). Samples were systematically analyzed using modulated differential scanning calorimetry (mDSC) and X-ray powder diffraction (XRPD). Principal component analysis (PCA) was used to qualitatively assess the predictability of screening methods with regards to SDASD development. Poor correlation was found between theoretical models and experimentally-obtained results. Additionally, the limited ability of usual screening methods to predict the miscibility of SDASDs did not guarantee the appropriate selection of lead excipient for the manufacturing of robust SDASDs. Contrary to standard approaches, our novel screening device allowed the selection of optimal polymer and drug loading and established insight into the final properties and performance of SDASDs at an early stage, therefore enabling the optimization of the scaled-up late-stage development. PMID:29518936
Mazumder, Avik; Gupta, Hemendra K; Garg, Prabhat; Jain, Rajeev; Dubey, Devendra K
2009-07-03
This paper details an on-flow liquid chromatography-ultraviolet-nuclear magnetic resonance (LC-UV-NMR) method for the retrospective detection and identification of alkyl alkylphosphonic acids (AAPAs) and alkylphosphonic acids (APAs), the markers of the toxic nerve agents, for verification of the Chemical Weapons Convention (CWC). Initially, the LC-UV-NMR parameters were optimized for benzyl derivatives of the APAs and AAPAs. The optimized parameters include a C18 stationary phase, a methanol:water 78:22 (v/v) mobile phase, UV detection at 268 nm and the 1H NMR acquisition conditions. The protocol described herein allowed the detection of analytes through acquisition of high-quality NMR spectra from aqueous solutions of the APAs and AAPAs containing high concentrations of interfering background chemicals, which were removed by the preceding sample preparation. The reported standard deviation for the quantification is related to the UV detector, which showed relative standard deviations (RSDs) for quantification within ±1.1%, while the lower limit of detection was up to 16 µg (absolute) for the NMR detector. Finally, the developed LC-UV-NMR method was applied to identify the APAs and AAPAs in real water samples, following solid phase extraction and derivatization. The method is fast (total experiment time approximately 2 h), sensitive, rugged and efficient.
Schropp, Lars; Stavropoulos, Andreas; Spin-Neto, Rubens; Wenzel, Ann
2012-01-01
To compare a customized imaging guide and a standard film holder for obtaining optimally projected intraoral radiographs of dental implants. Intraoral radiographs of four screw-type implants with different inclination placed in an upper or lower dental phantom model were recorded by 32 groups of examiners after a short instruction in the use of the RB-RB/LB-LB mnemonic rule. Half of the examiners recorded the images using a standard film holder and the other half used a customized imaging guide. Each radiograph was assessed under blinded conditions with regard to rendering of the implant threads and was assigned to one of four quality categories: (1) perfect, (2) not perfect, but clinically acceptable, (3) not acceptable, and (4) hopeless. For the upper jaw, the same number of exposures per implant were made to achieve an acceptable image (P=0.86) by the standard film holder method (median=2) and the imaging guide method (median=2). For the lower jaw, medians for the imaging guide method and the film holder method were 1 and 2, respectively (P=0.004). For the imaging guide method, the first exposure was rated as perfect/acceptable in 62% of the cases and for the film holder method in 41% of the cases (P=0.013). After ≤ 2 exposures, 78% (imaging guide method) and 69% (film holder method) of the implant images were perfect/acceptable (P=0.23). The implant inclination did not have a major influence on the outcomes. Perfect or acceptable images were achieved after two exposures with the same frequency either using a customized imaging guide method or a standard film holder method. However, the use of a customized imaging guide method was overall significantly superior to a standard film holder method in terms of obtaining perfect or acceptable images with only one exposure. © 2011 John Wiley & Sons A/S.
DoE optimization of a mercury isotope ratio determination method for environmental studies.
Berni, Alex; Baschieri, Carlo; Covelli, Stefano; Emili, Andrea; Marchetti, Andrea; Manzini, Daniela; Berto, Daniela; Rampazzo, Federico
2016-05-15
By using the experimental design (DoE) technique, we optimized an analytical method for the determination of mercury isotope ratios by means of cold-vapor multicollector ICP-MS (CV-MC-ICP-MS) to provide absolute Hg isotopic ratio measurements with a suitable internal precision. By running 32 experiments, the influence of the mercury and thallium internal standard concentrations, total measuring time and sample flow rate was evaluated. The method was optimized by varying the Hg concentration between 2 and 20 ng g(-1). The model reveals correlations among the parameters that affect measurement precision and predicts suitable sample measurement precisions for Hg concentrations from 5 ng g(-1) upwards. The method was successfully applied to samples of Manila clams (Ruditapes philippinarum) coming from the Marano and Grado lagoon (NE Italy), a coastal environment affected by long-term mercury contamination mainly due to mining activity. Results show different extents of both mass dependent fractionation (MDF) and mass independent fractionation (MIF) phenomena in clams according to their size and sampling sites in the lagoon. The method is fit for determinations on real samples, allowing for the use of Hg isotopic ratios to study mercury biogeochemical cycles in complex ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong
2015-08-01
Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ~33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits min⁻¹. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
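A compact sketch of the FBCCA idea follows: the EEG is decomposed into several sub-bands, a standard CCA correlation with sine-cosine references is computed in each, and the squared correlations are combined with weights that decay across sub-bands. The sub-band edges, filter order, and weight constants (a, b) are illustrative values drawn from common practice, not necessarily those of the filter bank designs M1-M3.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

def fbcca_detect(eeg, freqs, fs, n_bands=5, n_harm=4, a=1.25, b=0.25):
    """Illustrative FBCCA target identification for SSVEP data.
    eeg: (n_channels, n_samples); freqs: candidate stimulation frequencies (Hz).
    Assumes fs > 180 Hz for the 90 Hz sub-band upper edge used below."""
    t = np.arange(eeg.shape[1]) / fs
    weights = np.array([(k + 1) ** (-a) + b for k in range(n_bands)])
    scores = []
    for f in freqs:
        # sine-cosine reference signals at the fundamental and its harmonics
        ref = np.vstack([fn(2 * np.pi * h * f * t)
                         for h in range(1, n_harm + 1) for fn in (np.sin, np.cos)])
        rho = np.zeros(n_bands)
        for k in range(n_bands):
            low = 8.0 * (k + 1)                       # assumed sub-band lower edges
            bcoef, acoef = butter(4, [low, 90.0], btype="bandpass", fs=fs)
            xk = filtfilt(bcoef, acoef, eeg, axis=1)
            u, v = CCA(n_components=1).fit_transform(xk.T, ref.T)
            rho[k] = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
        scores.append(np.sum(weights * rho ** 2))
    return freqs[int(np.argmax(scores))]              # detected target frequency
```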
A stochastic visco-hyperelastic model of human placenta tissue for finite element crash simulations.
Hu, Jingwen; Klinich, Kathleen D; Miller, Carl S; Rupp, Jonathan D; Nazmi, Giseli; Pearlman, Mark D; Schneider, Lawrence W
2011-03-01
Placental abruption is the most common cause of fetal deaths in motor-vehicle crashes, but studies on the mechanical properties of human placenta are rare. This study presents a new method of developing a stochastic visco-hyperelastic material model of human placenta tissue using a combination of uniaxial tensile testing, specimen-specific finite element (FE) modeling, and stochastic optimization techniques. In our previous study, uniaxial tensile tests of 21 placenta specimens have been performed using a strain rate of 12/s. In this study, additional uniaxial tensile tests were performed using strain rates of 1/s and 0.1/s on 25 placenta specimens. Response corridors for the three loading rates were developed based on the normalized data achieved by test reconstructions of each specimen using specimen-specific FE models. Material parameters of a visco-hyperelastic model and their associated standard deviations were tuned to match both the means and standard deviations of all three response corridors using a stochastic optimization method. The results show a very good agreement between the tested and simulated response corridors, indicating that stochastic analysis can improve estimation of variability in material model parameters. The proposed method can be applied to develop stochastic material models of other biological soft tissues.
Interdisciplinary Distinguished Seminar Series
2014-08-29
The seminar series addresses estimation and optimization techniques, image and color standards, efficient programming methods, and efficient ASIC designs.
Optimal solutions for a bio mathematical model for the evolution of smoking habit
NASA Astrophysics Data System (ADS)
Sikander, Waseem; Khan, Umar; Ahmed, Naveed; Mohyud-Din, Syed Tauseef
In this study, we apply the Variation of Parameter Method (VPM) coupled with an auxiliary parameter to obtain approximate solutions for the epidemic model of the evolution of the smoking habit in a constant population. Convergence of the developed algorithm, namely VPM with an auxiliary parameter, is studied. Furthermore, a simple way of obtaining an optimal value of the auxiliary parameter by minimizing the total residual error over the problem domain is considered. Comparison of the obtained results with the standard VPM shows that the auxiliary parameter is very feasible and reliable in controlling the convergence of the approximate solutions.
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.
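As an illustration of the kind of calibration-with-rejection-rules the paper argues for, the sketch below fits a log-linear (COCOMO-style) effort model and keeps an effort multiplier only if it lowers the leave-one-out error. The model form, selection rule, and variable ordering are assumptions for demonstration, not the paper's documented method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

def calibrate_effort_model(size, multipliers, effort, max_vars=3):
    """Fit effort ~ a * size^b * prod(EM_i) in log space, pruning multipliers
    that do not reduce leave-one-out mean absolute error."""
    def loo_error(X, y):
        return -cross_val_score(LinearRegression(), X, y, cv=LeaveOneOut(),
                                scoring="neg_mean_absolute_error").mean()

    X, y = np.log(size).reshape(-1, 1), np.log(effort)
    best, selected = loo_error(X, y), []
    # consider the most variable multipliers first (a simple heuristic ordering)
    for j in np.argsort(multipliers.std(axis=0))[::-1][:max_vars]:
        X_try = np.hstack([X, np.log(multipliers[:, [j]])])
        err = loo_error(X_try, y)
        if err < best:                               # rejection rule: keep only if it helps
            best, X, selected = err, X_try, selected + [j]
    return LinearRegression().fit(X, y), selected
```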
Operating wind turbines in strong wind conditions by using feedforward-feedback control
NASA Astrophysics Data System (ADS)
Feng, Ju; Sheng, Wen Zhong
2014-12-01
Due to the increasing penetration of wind energy into power systems, it becomes critical to reduce the impact of wind energy on the stability and reliability of the overall power system. In previous works, Shen and his co-workers developed a re-designed operation schema to run wind turbines in strong wind conditions, based on an optimization method and standard PI feedback control, which can prevent the typical shutdowns of wind turbines when reaching the cut-out wind speed. In this paper, a new control strategy combining the standard PI feedback control with feedforward controls using the optimization results is investigated for the operation of variable-speed pitch-regulated wind turbines in strong wind conditions. It is shown that the developed control strategy is capable of smoothing the power output of the wind turbine and avoiding its sudden shutdown at high wind speeds without worsening the loads on the rotor and blades.
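The sketch below shows the basic structure of such a controller: a standard PI feedback loop on the rotor-speed error augmented with a feedforward pitch command interpolated from a pre-computed (optimization-derived) schedule. The gains, rated speed, and feedforward table are placeholder values, not the paper's tuning.

```python
import numpy as np

# Hypothetical feedforward schedule from offline optimization:
# pitch angle (deg) versus wind speed (m/s) above the normal cut-out speed.
FF_WIND = np.array([25.0, 30.0, 35.0, 40.0])
FF_PITCH = np.array([20.0, 25.0, 30.0, 35.0])

def pitch_command(rotor_speed, wind_speed, err_int, dt,
                  rated_speed=1.267, kp=2.0, ki=0.5):
    """One control update: PI feedback on the rotor-speed error (rad/s) plus a
    feedforward pitch term from the optimization-derived schedule."""
    err = rotor_speed - rated_speed
    err_int = err_int + err * dt                      # integral of the speed error
    feedback = kp * err + ki * err_int
    feedforward = np.interp(wind_speed, FF_WIND, FF_PITCH)
    pitch = float(np.clip(feedforward + feedback, 0.0, 90.0))  # pitch limits (deg)
    return pitch, err_int
```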
NASA Astrophysics Data System (ADS)
Hicks-Jalali, Shannon; Sica, R. J.; Haefele, Alexander; Martucci, Giovanni
2018-04-01
With only 50% downtime from 2007-2016, the RALMO lidar in Payerne, Switzerland, has one of the largest continuous lidar data sets available. These measurements will be used to produce an extensive lidar water vapour climatology using the Optimal Estimation Method introduced by Sica and Haefele (2016). We will compare our improved technique for external calibration using radiosonde trajectories with the standard external methods, and present the evolution of the lidar constant from 2007 to 2016.
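For context, the Optimal Estimation Method combines the lidar measurements with an a priori profile through the standard Rodgers equations; a single linearized retrieval step looks like the sketch below. The full retrieval of Sica and Haefele is nonlinear and iterative, and the covariances here are assumed known.

```python
import numpy as np

def oem_step(y, F, K, x_a, S_a, S_e):
    """Single linear optimal-estimation (MAP) step, Rodgers formulation.

    y   : measurement vector          F   : forward model evaluated at x_a
    K   : Jacobian dF/dx at x_a       x_a : a priori state (e.g. water vapour profile)
    S_a : a priori covariance         S_e : measurement-error covariance
    """
    Se_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(S_a))  # retrieval covariance
    x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - F)                  # retrieved state
    A = S_hat @ K.T @ Se_inv @ K                                  # averaging kernel matrix
    return x_hat, S_hat, A
```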
Evaluation of digestion methods for analysis of trace metals in mammalian tissues and NIST 1577c.
Binder, Grace A; Metcalf, Rainer; Atlas, Zachary; Daniel, Kenyon G
2018-02-15
Digestion techniques for ICP analysis have been poorly studied for biological samples. This report describes an optimized method for analysis of trace metals that can be used across a variety of sample types. Digestion methods were tested and optimized with the analysis of trace metals in cancerous as compared to normal tissue as the end goal. Anthropological, forensic, oncological and environmental research groups can employ this method reasonably cheaply and safely whilst still being able to compare between laboratories. We examined combined HNO3 and H2O2 digestion at 170 °C for human, porcine and bovine samples whether they are frozen, fresh or lyophilized powder. Little discrepancy is found between microwave digestion and PFA Teflon pressure vessels. The elements of interest (Cu, Zn, Fe and Ni) yielded consistently higher and more accurate values on standard reference material than samples heated to 75 °C or samples that utilized HNO3 alone. Use of H2SO4 does not improve homogeneity of the sample and lowers precision during ICP analysis. High temperature digestions (>165 °C) using a combination of HNO3 and H2O2 as outlined are proposed as a standard technique for all mammalian tissues, specifically, human tissues and yield greater than 300% higher values than samples digested at 75 °C regardless of the acid or acid combinations used. The proposed standardized technique is designed to accurately quantify potential discrepancies in metal loads between cancerous and healthy tissues and applies to numerous tissue studies requiring quick, effective and safe digestions. Copyright © 2017 Elsevier Inc. All rights reserved.
El Ati-Hellal, Myriam; Hellal, Fayçal; Hedhili, Abderrazek
2014-10-01
The aim of this study was the optimization of selenium determination in plasma samples with electrothermal atomic absorption spectrometry using experimental design methodology. Eleven variables that can influence selenium analysis in human blood plasma by electrothermal atomic absorption spectrometry (ETAAS) were evaluated with a Plackett-Burman experimental design. These factors were selected from the sample preparation, furnace program and chemical modification steps. Both absorbance and background signals were chosen as responses in the screening approach. A Doehlert design was used for method optimization. Results showed that only the ashing temperature has a statistically significant effect on the selected responses. Optimization with the Doehlert design allowed the development of a reliable method for selenium analysis with ETAAS. Samples were diluted 1/10 with a 0.05% (v/v) Triton X-100 + 2.5% (v/v) HNO3 solution. Optimized ashing and atomization temperatures for the nickel modifier were 1070°C and 2270°C, respectively. A detection limit of 2.1 μg L(-1) Se was obtained. Accuracy of the method was checked by the analysis of selenium in Seronorm™ Trace element quality control serum level 1. The developed procedure was applied for the analysis of total selenium in fifteen plasma samples using the standard addition method. Concentrations ranged between 24.4 and 64.6 μg L(-1), with a mean of 42.6 ± 4.9 μg L(-1). The use of experimental designs allowed the development of a cheap and accurate method for selenium analysis in plasma that could be applied routinely in clinical laboratories. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Martín, Julia; Rodríguez-Gómez, Rocío; Zafra-Gómez, Alberto; Alonso, Esteban; Vílchez, José L; Navalón, Alberto
2016-04-01
A new method for the determination of four perfluoroalkyl carboxylic acids (from C5 to C8) and perfluorooctane sulfonate in human milk samples using stir-bar sorptive extraction with ultra-HPLC-MS/MS has been carefully optimized and validated. Polydimethylsiloxane and polyethyleneglycol-modified silicone materials were evaluated. Overall, polyethyleneglycol led to a better sensitivity. After optimizing the experimental variables, the method was validated, reaching detection limits in the range of 0.05-0.20 ng ml(-1), recovery rates from 81 to 105%, and relative standard deviations lower than 13% in all cases. The method was applied to milk samples from five randomly selected women. All samples were positive for at least one of the target compounds, with concentrations ranging between 0.8 and 6.6 ng ml(-1), perfluorooctane sulfonate being the most abundant.
Methods for Optimizing CRISPR-Cas9 Genome Editing Specificity
Tycko, Josh; Myer, Vic E.; Hsu, Patrick D.
2016-01-01
Summary Advances in the development of delivery, repair, and specificity strategies for the CRISPR-Cas9 genome engineering toolbox are helping researchers understand gene function with unprecedented precision and sensitivity. CRISPR-Cas9 also holds enormous therapeutic potential for the treatment of genetic disorders by directly correcting disease-causing mutations. Although the Cas9 protein has been shown to bind and cleave DNA at off-target sites, the field of Cas9 specificity is rapidly progressing with marked improvements in guide RNA selection, protein and guide engineering, novel enzymes, and off-target detection methods. We review important challenges and breakthroughs in the field as a comprehensive practical guide to interested users of genome editing technologies, highlighting key tools and strategies for optimizing specificity. The genome editing community should now strive to standardize such methods for measuring and reporting off-target activity, while keeping in mind that the goal for specificity should be continued improvement and vigilance. PMID:27494557
Numerical noise prediction in fluid machinery
NASA Astrophysics Data System (ADS)
Pantle, Iris; Magagnato, Franco; Gabi, Martin
2005-09-01
Numerical methods have become increasingly important in the design and optimization of fluid machinery. However, as far as noise emission is concerned, one can hardly find standardized prediction methods combining flow and acoustical optimization. Several numerical field methods for sound calculations have been developed. Due to the complexity of the considered flows, approaches must be chosen that avoid exhaustive computing. In this contribution the noise of a simple propeller is investigated. The configurations of the calculations comply with an existing experimental setup chosen for evaluation. The in-house CFD solver SPARC contains an acoustic module based on the Ffowcs Williams-Hawkings acoustic analogy. From the flow results of the time-dependent Large Eddy Simulation, the time-dependent acoustic sources are extracted and passed to the acoustic module, where the relevant sound pressure levels are calculated. The difficulties that arise when proceeding from open to closed rotors and from gas to liquid are discussed.
NASA Astrophysics Data System (ADS)
Yu, Fei; Wu, Yongjun; Yu, Songcheng; Zhang, Huili; Zhang, Hongquan; Qu, Lingbo; Harrington, Peter de B.
With the alkaline phosphatase (ALP)-adamantane (AMPPD) system as the chemiluminescence (CL) detection system, a highly sensitive, specific and simple competitive chemiluminescence enzyme immunoassay (CLEIA) was developed for the measurement of enrofloxacin (ENR). The physicochemical parameters, such as the chemiluminescent assay medium, the dilution buffer of ENR-McAb, the volume of dilution buffer, the monoclonal antibody concentration, the incubation time, and other relevant variables of the immunoassay, were optimized. Under the optimal conditions, the proposed method provided a linear detection range of 350-1000 pg/mL and a detection limit of 0.24 ng/mL. The relative standard deviations were less than 15% for both intra- and inter-assay precision. This method has been successfully applied to determine ENR in spiked samples with recoveries of 96%-103%. The results show that CLEIA is a promising method for the analysis of residues of veterinary drugs after treatment of related diseases.
Daily sodium and potassium excretion can be estimated by scheduled spot urine collections.
Doenyas-Barak, Keren; Beberashvili, Ilia; Bar-Chaim, Adina; Averbukh, Zhan; Vogel, Ofir; Efrati, Shai
2015-01-01
The evaluation of sodium and potassium intake is part of the optimal management of hypertension, metabolic syndrome, renal stones, and other conditions. To date, no convenient method for its evaluation exists, as the gold standard method of 24-hour urine collection is cumbersome and often incorrectly performed, and methods that use spot or shorter collections are not accurate enough to replace the gold standard. The aim of this study was to evaluate the correlation and agreement between a new method that uses multiple-scheduled spot urine collection and the gold standard method of 24-hour urine collection. The urine sodium or potassium to creatinine ratios were determined for four scheduled spot urine samples. The mean ratios of the four spot samples and the ratios of each of the single spot samples were corrected for estimated creatinine excretion and compared to the gold standard. A significant linear correlation was demonstrated between the 24-hour urinary solute excretions and estimated excretion evaluated by any of the scheduled spot urine samples. The correlation of the mean of the four spots was better than for any of the single spots. Bland-Altman plots showed that the differences between these measurements were within the limits of agreement. Four scheduled spot urine samples can be used as a convenient method for estimation of 24-hour sodium or potassium excretion. © 2015 S. Karger AG, Basel.
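The underlying arithmetic is simple: each spot sodium (or potassium) to creatinine ratio is scaled by an estimate of the subject's daily creatinine excretion, and the four scheduled spots are averaged. The sketch below shows this principle with hypothetical numbers; the study's exact creatinine-excretion correction is not reproduced here.

```python
def estimate_24h_excretion(spot_solute_mmol_l, spot_creatinine_mmol_l, daily_creatinine_mmol):
    """Scale the spot solute/creatinine ratio by the estimated daily creatinine excretion."""
    return (spot_solute_mmol_l / spot_creatinine_mmol_l) * daily_creatinine_mmol

# Hypothetical sodium/creatinine pairs (mmol/L) from four scheduled spot samples:
spots = [(120, 9.0), (95, 7.5), (110, 8.2), (105, 10.1)]
mean_ratio = sum(na / cr for na, cr in spots) / len(spots)

# Assuming an estimated daily creatinine excretion of ~11 mmol for this subject:
print("Estimated 24-h sodium excretion (mmol):", round(mean_ratio * 11.0, 1))
```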
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the advantages of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating, accurately and efficiently, the day-to-day time-variant natural frequency of structures under the intrinsic creep effect of concrete with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples with a progressive relationship in terms of both structure type and uncertainty variables are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
Eriksen, Jane N; Madsen, Pia L; Dragsted, Lars O; Arrigoni, Eva
2017-02-01
An improved UHPLC-DAD-based method was developed and validated for quantification of major carotenoids present in spinach, serum, chylomicrons, and feces. Separation was achieved with gradient elution within 12.5 min for six dietary carotenoids and the internal standard, echinenone. The proposed method provides, for all standard components, resolution > 1.1, linearity covering the target range (R > 0.99), LOQ < 0.035 mg/L, and intraday and interday RSDs < 2 and 10%, respectively. Suitability of the method was tested on biological matrices. Method precision (RSD%) for carotenoid quantification in serum, chylomicrons, and feces was below 10% for intra- and interday analysis, except for lycopene. Method accuracy was consistent with mean recoveries ranging from 78.8 to 96.9% and from 57.2 to 96.9% for all carotenoids, except for lycopene, in serum and feces, respectively. Additionally, an interlaboratory validation study on spinach at two institutions showed no significant differences in lutein or β-carotene content, when evaluated on four occasions.
Optimizing fish sampling for fish - mercury bioaccumulation factors
Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste A.; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.
2015-01-01
Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
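A minimal sketch of length-standardized BAF estimation is given below: fish Hg is regressed on length, each fish is adjusted to a common standard length, and the BAF and coefficient of variation are computed from the adjusted values. The linear adjustment is an illustrative assumption; the study's fixed protocol may differ.

```python
import numpy as np

def length_standardized_baf(hg_fish, length, hg_water, std_length):
    """Standardize fish Hg to a common length before forming the BAF.

    hg_fish  : (n,) Hg concentrations in individual fish
    length   : (n,) corresponding fish lengths
    hg_water : scalar Hg concentration in water
    """
    slope, _ = np.polyfit(length, hg_fish, 1)          # Hg-length relationship
    hg_std = hg_fish + slope * (std_length - length)   # adjust each fish to std_length
    baf = hg_std.mean() / hg_water
    cv = hg_std.std(ddof=1) / hg_std.mean()            # coefficient of variation of Hgfish
    return baf, cv
```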
NASA Astrophysics Data System (ADS)
Ctvrtnickova, T.; Mateo, M. P.; Yañez, A.; Nicolas, G.
2011-04-01
The presented work reports results of Laser-Induced Breakdown Spectroscopy (LIBS) and Thermo-Mechanical Analysis (TMA) of coals and coal blends used in coal-fired power plants all over Spain. Several coal specimens, their blends and the corresponding laboratory ash were analyzed by the mentioned techniques, and the results were compared to standard laboratory methods. The indices of slagging, which predict the tendency of coal ash deposition on the boiler walls, were determined by means of standard chemical analysis, LIBS and TMA. The optimal coal suitable for blending with the problematic national lignite coal was suggested in order to diminish the slagging problems. The techniques used were evaluated based on the precision, acquisition time, and the extent and quality of information they could provide. Finally, the applicability of LIBS and TMA to the successful calculation of slagging indices is discussed, and their substitution for time-consuming and instrumentally difficult standard methods is considered.
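As an example of a slagging index computable from either LIBS or standard ash analysis, the base-to-acid ratio of the major ash oxides is shown below with a hypothetical composition; the threshold quoted in the comment is a commonly used rule of thumb rather than a value from this work.

```python
def base_to_acid_ratio(ash_oxides):
    """One widely used slagging index: the base/acid ratio of the ash
    (oxide mass fractions from LIBS or standard chemical analysis)."""
    basic = sum(ash_oxides[k] for k in ("Fe2O3", "CaO", "MgO", "Na2O", "K2O"))
    acidic = sum(ash_oxides[k] for k in ("SiO2", "Al2O3", "TiO2"))
    return basic / acidic

# Hypothetical ash composition (wt%); a B/A ratio below ~0.5 is usually read as
# a low slagging tendency, with higher values indicating increasing risk.
ash = {"Fe2O3": 6.1, "CaO": 4.2, "MgO": 1.5, "Na2O": 0.6, "K2O": 1.8,
       "SiO2": 55.0, "Al2O3": 24.0, "TiO2": 1.1}
print("B/A =", round(base_to_acid_ratio(ash), 3))
```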
NASA Astrophysics Data System (ADS)
Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.
2013-10-01
A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) and optimizes the structural cost under deterministic and probabilistic constraints. The Monte-Carlo simulation (MCS) method is considered the most reliable method for estimating the reliability probabilities. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
Kashuba, Corinna M; Benson, James D; Critser, John K
2014-04-01
The post-thaw recovery of mouse embryonic stem cells (mESCs) is often assumed to be adequate with current methods. However, as this publication will show, the recovery of viable cells actually varies significantly by genetic background. Therefore, there is a need to improve the efficiency and reduce the variability of current mESC cryopreservation methods. To address this need, we employed the principles of fundamental cryobiology to improve the cryopreservation protocol of four mESC lines from different genetic backgrounds (BALB/c, CBA, FVB, and 129R1 mESCs) through a comparative study characterizing the membrane permeability characteristics and membrane-integrity osmotic tolerance limits of each cell line. In the companion paper, these values were used to predict optimal cryoprotectants, cooling rates, warming rates, and plunge temperatures, and these predicted optimal protocols were then validated against standard freezing protocols. Copyright © 2014 Elsevier Inc. All rights reserved.
Data Mining of Macromolecular Structures.
van Beusekom, Bart; Perrakis, Anastassis; Joosten, Robbie P
2016-01-01
The use of macromolecular structures is widespread for a variety of applications, from teaching protein structure principles all the way to ligand optimization in drug development. Applying data mining techniques on these experimentally determined structures requires a highly uniform, standardized structural data source. The Protein Data Bank (PDB) has evolved over the years toward becoming the standard resource for macromolecular structures. However, the process of selecting the data most suitable for specific applications is still very much based on personal preferences and understanding of the experimental techniques used to obtain these models. In this chapter, we will first explain the challenges with data standardization, annotation, and uniformity in the PDB entries determined by X-ray crystallography. We then discuss the specific effect that crystallographic data quality and model optimization methods have on structural models and how validation tools can be used to make informed choices. We also discuss specific advantages of using the PDB_REDO databank as a resource for structural data. Finally, we will provide guidelines on how to select the most suitable protein structure models for detailed analysis and how to select a set of structure models suitable for data mining.
NASA Astrophysics Data System (ADS)
Addison, Paul S.; Watson, James N.
2004-11-01
We present a novel time-frequency method for the measurement of oxygen saturation using the photoplethysmogram (PPG) signals from a standard pulse oximeter machine. The method utilizes the time-frequency transformation of the red and infrared PPGs to derive a 3D Lissajous figure. By selecting the optimal Lissajous, the method provides an inherently robust basis for the determination of oxygen saturation as regions of the time-frequency plane where high- and low-frequency signal artefacts are to be found are automatically avoided.
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim in this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
Spectral optimized asymmetric segmented phase-only correlation filter.
Leonard, I; Alfalou, A; Brosseau, C
2012-05-10
We suggest a new type of optimized composite filter, i.e., the asymmetric segmented phase-only filter (ASPOF), for improving the effectiveness of a VanderLugt correlator (VLC) when used for face identification. Basically, it consists of merging several reference images after application of a specific spectral optimization method. After segmentation of the spectral filter plane into several areas, each area is assigned to a single winner reference according to a new optimized criterion. The point of the paper is to show that this method offers a significant performance improvement over standard composite filters for face identification. We first briefly revisit composite filters [adapted, phase-only, inverse, compromise optimal, segmented, minimum average correlation energy, optimal trade-off maximum average correlation, and amplitude-modulated phase-only (AMPOF)], which are tools of choice for face recognition based on correlation techniques, and compare their performances with those of the ASPOF. We illustrate some of the drawbacks of current filters for several binary and grayscale image identifications. Next, we describe the optimization steps and introduce the ASPOF, which can overcome these technical issues to improve the quality and the reliability of the correlation-based decision. We derive performance measures, i.e., PCE values and receiver operating characteristic curves, to confirm consistency of the results. We numerically find that this filter increases the recognition rate and decreases the false alarm rate. The results show that the discrimination of the ASPOF is comparable to that of the AMPOF, but the ASPOF is more robust than the trade-off maximum average correlation height against rotation and various types of noise sources. Our method has several features that make it amenable to experimental implementation using a VLC.
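For reference, the single-reference phase-only filter that the ASPOF generalizes can be sketched in a few lines: the filter keeps only the phase of the reference spectrum, and the correlation plane is scored with the peak-to-correlation-energy (PCE) metric. The segmentation and winner-assignment steps of the ASPOF itself are not reproduced here.

```python
import numpy as np

def pof_correlation(scene, reference):
    """VanderLugt-style correlation with a phase-only filter (POF).
    scene and reference are 2-D arrays of the same shape."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference)
    pof = np.conj(R) / (np.abs(R) + 1e-12)          # keep only the phase of the reference
    corr = np.abs(np.fft.fftshift(np.fft.ifft2(S * pof))) ** 2
    pce = corr.max() / corr.sum()                    # sharper correlation peaks give larger PCE
    return corr, pce
```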
Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J
2016-01-01
A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer, with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.
Wang, J; Duan, Y F; Pang, X H; Jiang, S; Yin, S A; Yang, Z Y; Lai, J Q
2018-01-06
Objective: To analyze the status of gestational weight gain (GWG) among Chinese mothers who gave singleton and full-term births, and to explore optimal GWG ranges. Methods: In 2013, using the multi-stage stratified and population proportional cluster sampling method, we investigated 8 323 mother-child pairs at 0-24 months postpartum from 55 counties (cities/districts) of 30 provinces (except Tibet) in mainland China. A questionnaire was used to collect data on body weight before pregnancy and delivery, diseases during gestation, hemorrhage or not at postpartum, child birth weight and length, and other information about pregnancy outcomes. We measured each mother's body weight and height, and each child's body weight and length. Based on the 'Chinese Adult Body Weight Standard', we divided mothers into four groups according to their body weight before pregnancy: low weight (BMI<18.5 kg/m(2)), normal weight (BMI 18.5-23.9 kg/m(2)), overweight (BMI 24.0-27.9 kg/m(2)) and obesity (BMI≥28.0 kg/m(2)). The status of GWG was assessed by the IOM optimal GWG guidelines. Chinese optimal GWG ranges were calculated according to the association of GWG with pregnancy outcomes and anthropometry of mothers and children, and according to the P25-P75 of GWG among mothers who had good pregnancy outcomes and good anthropometry, and whose children had good anthropometry. The status of GWG was then assessed by the new optimal ranges. Results: The P50 (P25-P75) of GWG among the 8 323 mothers was 15.0 (10.0-19.0) kg. According to the proposed optimal GWG ranges of the IOM, the proportions of inadequate, optimal and excessive GWG accounted for 27.2% (2 263 mothers), 36.2% (3 016 mothers) and 36.6% (3 044 mothers). The optimal GWG ranges for low weight, normal weight, overweight and obesity were 11.5-18.0, 10.0-15.0, 8.0-14.0 and 5.0-11.5 kg. Based on these optimal GWG ranges established in this study, the rates of inadequate, optimal and excessive GWG were 15.7% (1 303 mothers), 45.0% (3 744 mothers) and 39.3% (3 276 mothers), and these rates were significantly different from those defined by the IOM standards (χ²=345.36, P<0.001). Conclusion: The median GWG among Chinese mothers is 15.0 kg, which is at a relatively high level. This study suggests optimal GWG ranges for Chinese women who give singleton and full-term births, which appear lower than the IOM's.
CFD Analysis and Design Optimization Using Parallel Computers
NASA Technical Reports Server (NTRS)
Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James
1997-01-01
A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.
Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha
2015-04-30
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
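As an illustration of the kind of subproblem that arises in the alternating scheme, the sketch below applies a bound-constrained Newton iteration to a single row subproblem of the Poisson (KL) objective. It is a minimal sketch under simplifying assumptions (no damping or line search, dense arithmetic) and is not the authors' production solver.

```python
import numpy as np

def kl_row_subproblem(A, x, iters=20, eps=1e-10):
    """Newton iteration with projection onto b >= 0 for one KL/Poisson
    row subproblem: minimize sum(A @ b) - sum(x * log(A @ b)).
    A is the (m x r) fixed factor matrix, x the nonnegative data row."""
    _, r = A.shape
    b = np.full(r, x.mean() / max(A.mean() * r, eps))  # crude starting point
    for _ in range(iters):
        Ab = np.maximum(A @ b, eps)
        grad = A.T @ (1.0 - x / Ab)
        hess = A.T @ (A * (x / Ab ** 2)[:, None])
        step = np.linalg.solve(hess + 1e-8 * np.eye(r), grad)
        b = np.maximum(b - step, 0.0)                  # projected Newton step
    return b
```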
Selvi, Emine Kılıçkaya; Şahin, Uğur; Şahan, Serkan
2017-01-01
This method was developed for the determination of trace amounts of aluminum(III) in dialysis concentrates using atomic absorption spectrometry after coprecipitation with lanthanum phosphate. The analytical parameters that influenced the quantitative coprecipitation of the analyte, including the amount of lanthanum, the amount of phosphate, pH, and duration, were optimized. The recoveries of the analyte ion were in the range of 95-105 %, with a limit of detection (3s) of 0.5 µg L(-1). The preconcentration factor was found to be 1000, and the relative standard deviation (RSD) obtained from model solutions was 2.5% for 0.02 mg L(-1). The accuracy of the method was evaluated with a standard reference material (CWW-TMD Waste Water). The method was also applied to the most concentrated acidic and basic dialysis concentrates, with satisfactory results.
Sampling methods for the study of pneumococcal carriage: a systematic review.
Gladstone, R A; Jefferies, J M; Faust, S N; Clarke, S C
2012-11-06
Streptococcus pneumoniae is an important pathogen worldwide. Accurate sampling of S. pneumoniae carriage is central to surveillance studies before and following conjugate vaccination programmes to combat pneumococcal disease. Any bias introduced during sampling will affect downstream recovery and typing. Many variables exist for the method of collection and initial processing, which can make inter-laboratory or international comparisons of data complex. In February 2003, a World Health Organisation working group published a standard method for the detection of pneumococcal carriage for vaccine trials to reduce or eliminate variability. We sought to describe the variables associated with the sampling of S. pneumoniae from collection to storage in the context of the methods recommended by the WHO and those used in pneumococcal carriage studies since its publication. A search of published literature in the online PubMed database was performed on 1 June 2012 to identify published studies, conducted after the publication of the WHO standard method, that collected pneumococcal carriage isolates. After undertaking a systematic analysis of the literature, we show that a number of differences in pneumococcal sampling protocol continue to exist between studies since the WHO publication. The majority of studies sample from the nasopharynx, but the choice of swab and swab transport media is more variable between studies. At present there are insufficient experimental data to support the optimal sensitivity of any standard method. This may have contributed to incomplete adoption of the primary stages of the WHO detection protocol, alongside pragmatic or logistical issues associated with study design. Consequently, studies may not provide a true estimate of pneumococcal carriage. Optimal sampling of carriage could lead to improvements in downstream analysis, in the evaluation of pneumococcal vaccine impact, and in extrapolation to pneumococcal disease control; therefore, further in-depth comparisons would be of value. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, M; Rockhill, J; Phillips, M
Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46 Gy in 23 fractions to GTV1+2 cm margin and an additional 14 Gy in 7 fractions to GTV2+2 cm margin. We simulated tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for GTV2 with a radius of 1 and 2 cm. For each tumor, the margin around GTV1 and GTV2 was varied over 0–6 cm and 1–3 cm, respectively. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals the EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: The optimal prescription for the slow-move tumors was to use 3.0 (small)-3.5 (large) cm margins to GTV1 and a 1.5 cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0 cm margin for GTV1, suggesting that whole brain therapy is optimal, and then 1.5 cm (average-move) and 1.5–3.0 cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001–0.465% of that resulting from the standard prescription, and increased tumor EUD by 25.3–49.3% and the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on the individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to normal brain.
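For readers unfamiliar with the PI model, the following minimal sketch advances a one-dimensional proliferation-invasion (Fisher-KPP type) tumor cell density by one explicit time step; the diffusion and proliferation values are illustrative, not the patient-specific parameters used in the abstract.

```python
import numpy as np

def pi_model_step(c, dx, dt, D=0.01, rho=0.05, K=1.0):
    """One explicit finite-difference step of dc/dt = D*d2c/dx2 + rho*c*(1 - c/K),
    the proliferation-invasion (PI) glioma model in 1-D with periodic
    boundaries (an assumption made here for brevity)."""
    lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx ** 2
    return c + dt * (D * lap + rho * c * (1.0 - c / K))
```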
Aerosol delivery and humidification with the Boussignac continuous positive airway pressure device.
Thille, Arnaud W; Bertholon, Jean-François; Becquemin, Marie-Hélène; Roy, Monique; Lyazidi, Aissam; Lellouche, François; Pertusini, Esther; Boussignac, Georges; Maître, Bernard; Brochard, Laurent
2011-10-01
A simple method for effective bronchodilator aerosol delivery while continuing to administer continuous positive airway pressure (CPAP) would be useful in patients with severe bronchial obstruction. To assess the effectiveness of bronchodilator aerosol delivery during CPAP generated by the Boussignac CPAP system and its optimal humidification system. First, we assessed the relationship between flow and pressure generated in the mask with the Boussignac CPAP system. Next, we measured the inspired-gas humidity during CPAP, with several humidification strategies, in 9 healthy volunteers. We then measured the bronchodilator aerosol particle size during CPAP, with and without a heat-and-moisture exchanger, in a bench study. Finally, in 7 patients with acute respiratory failure and airway obstruction, we measured work of breathing and gas exchange after a β(2)-agonist bronchodilator aerosol (terbutaline) delivered during CPAP or via standard nebulization. Optimal humidity was obtained only with the heat-and-moisture exchanger or a heated humidifier. The heat-and-moisture exchanger had no influence on bronchodilator aerosol particle size. Work of breathing decreased similarly after bronchodilator via either standard nebulization or CPAP, but P(aO(2)) increased significantly only after CPAP aerosol delivery. CPAP bronchodilator delivery decreases the work of breathing as effectively as does standard nebulization, but produces a greater oxygenation improvement in patients with airway obstruction. To optimize airway humidification, a heat-and-moisture exchanger could be used with the Boussignac CPAP system without modifying aerosol delivery.
Best practice in forensic entomology--standards and guidelines.
Amendt, Jens; Campobasso, Carlo P; Gaudry, Emmanuel; Reiter, Christian; LeBlanc, Hélène N; Hall, Martin J R
2007-03-01
Forensic entomology, the use of insects and other arthropods in forensic investigations, is becoming increasingly important in such investigations. To ensure its optimal use by a diverse group of professionals, including pathologists, entomologists, and police officers, a common frame of guidelines and standards is essential. Therefore, the European Association for Forensic Entomology has developed a protocol document for best practice in forensic entomology, which includes an overview of equipment used for collection of entomological evidence and a detailed description of the methods applied. Together with the definitions of key terms and a short introduction to the most important methods for the estimation of the minimum postmortem interval, the present paper aims to encourage a high level of competency in the field of forensic entomology.
Implementation of the force decomposition machine for molecular dynamics simulations.
Borštnik, Urban; Miller, Benjamin T; Brooks, Bernard R; Janežič, Dušanka
2012-09-01
We present the design and implementation of the force decomposition machine (FDM), a cluster of personal computers (PCs) that is tailored to running molecular dynamics (MD) simulations using the distributed diagonal force decomposition (DDFD) parallelization method. The cluster interconnect architecture is optimized for the communication pattern of the DDFD method. Our implementation of the FDM relies on standard commodity components even for networking. Although the cluster is meant for DDFD MD simulations, it remains general enough for other parallel computations. An analysis of several MD simulation runs on both the FDM and a standard PC cluster demonstrates that the FDM's interconnect architecture provides greater performance than a more general cluster interconnect. Copyright © 2012 Elsevier Inc. All rights reserved.
Monjure, C J; Tatum, C D; Panganiban, A T; Arainga, M; Traina-Dorge, V; Marx, P A; Didier, E S
2014-02-01
Quantification of plasma viral load (PVL) is used to monitor disease progression in SIV-infected macaques. This study was aimed at optimizing the performance characteristics of the quantitative PCR (qPCR) PVL assay. The PVL quantification procedure was optimized by inclusion of an exogenous control hepatitis C virus armored RNA (aRNA), a plasma concentration step, extended digestion with proteinase K, and a second RNA elution step. Efficiency of viral RNA (vRNA) extraction was compared using several commercial vRNA extraction kits. Various parameters of qPCR targeting the gag region of SIVmac239, SIVsmE660, and the LTR region of SIVagmSAB were also optimized. Modifications of the SIV PVL qPCR procedure increased vRNA recovery, reduced inhibition, and improved analytical sensitivity. The PVL values determined by this SIV PVL qPCR correlated with quantification results of SIV RNA in the same samples using the 'industry standard' method of branched-DNA (bDNA) signal amplification. Quantification of SIV genomic RNA in plasma of rhesus macaques using this optimized SIV PVL qPCR is equivalent to the bDNA signal amplification method, less costly, and more versatile. Use of heterologous aRNA as an internal control is useful for optimizing performance characteristics of PVL qPCRs. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk
2015-06-01
The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the 'randn' function in the MATLAB program and was used. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for the given patient might be different. The PDF of the rectal NTCP was obtained automatically for each group, except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. We showed that during the prostate IMRT optimization, the patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the IMRT plan quality.
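The density-estimation step can be written in a few lines. The sketch below builds a Gaussian kernel density estimate of rectal NTCP samples on a grid; the bandwidth plays the role of the 'window width' mentioned above, and the sample values are whatever clinical, calculated, or mixed data one supplies.

```python
import numpy as np

def gaussian_kde_pdf(samples, grid, bandwidth):
    """Gaussian kernel density estimate of the rectal-NTCP distribution
    evaluated on `grid`.  `samples` are NTCP values and `bandwidth` the
    kernel window width; a minimal sketch, not the MATLAB workflow."""
    samples = np.asarray(samples, dtype=float)
    grid = np.asarray(grid, dtype=float)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / bandwidth
```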
Resonator reset in circuit QED by optimal control for large open quantum systems
NASA Astrophysics Data System (ADS)
Boutin, Samuel; Andersen, Christian Kraglund; Venkatraman, Jayameenakshi; Ferris, Andrew J.; Blais, Alexandre
2017-10-01
We study an implementation of the open GRAPE (gradient ascent pulse engineering) algorithm well suited for large open quantum systems. While typical implementations of optimal control algorithms for open quantum systems rely on explicit matrix exponential calculations, our implementation avoids these operations, leading to a polynomial speedup of the open GRAPE algorithm in cases of interest. This speedup, as well as the reduced memory requirements of our implementation, are illustrated by comparison to a standard implementation of open GRAPE. As a practical example, we apply this open-system optimization method to active reset of a readout resonator in circuit QED. In this problem, the shape of a microwave pulse is optimized so as to empty the cavity of measurement photons as quickly as possible. Using our open GRAPE implementation, we obtain pulse shapes that lead to a reset time over 4 times faster than passive reset.
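As a schematic of the optimal-control loop only (not the authors' analytic-gradient implementation, which avoids matrix exponentials), the following generic pulse-shaping sketch performs gradient ascent on a user-supplied fidelity function using finite-difference gradients; `fidelity` is a hypothetical callable that would encapsulate the open-system propagation of the cavity state.

```python
import numpy as np

def pulse_gradient_ascent(fidelity, pulse, lr=0.1, iters=100, h=1e-4):
    """Generic GRAPE-style loop: nudge each pulse amplitude along the
    finite-difference gradient of fidelity(pulse).  Purely illustrative;
    open GRAPE computes these gradients analytically and far faster."""
    pulse = np.asarray(pulse, dtype=float).copy()
    for _ in range(iters):
        grad = np.zeros_like(pulse)
        base = fidelity(pulse)
        for k in range(pulse.size):
            bumped = pulse.copy()
            bumped[k] += h
            grad[k] = (fidelity(bumped) - base) / h
        pulse += lr * grad
    return pulse
```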
A novel adaptive Cuckoo search for optimal query plan generation.
Gomathi, Ramalingam; Sharmila, Dhandapani
2014-01-01
The daily emergence of new web pages drives the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the Resource Description Framework (RDF). To improve execution time when querying large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization for semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS), for querying and generating optimal query plans for large RDF graphs, is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant improvements in query execution time. The extent to which the algorithm is efficient is tested and the results are documented.
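For orientation, a basic continuous Cuckoo search loop is sketched below (heavy-tailed Levy-like steps approximated by Cauchy draws, abandonment of the worst nests). It is only a generic illustration; the paper adapts the algorithm to discrete RDF query-plan spaces.

```python
import numpy as np

def cuckoo_search(cost, dim, n_nests=15, pa=0.25, iters=200, alpha=0.01, seed=0):
    """Basic Cuckoo search over a continuous search space.  `cost` maps a
    candidate vector to a scalar to minimize; all settings are illustrative."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-1.0, 1.0, (n_nests, dim))
    costs = np.array([cost(x) for x in nests])
    for _ in range(iters):
        for i in range(n_nests):
            cand = nests[i] + alpha * rng.standard_cauchy(dim)  # Levy-like move
            j = rng.integers(n_nests)
            c = cost(cand)
            if c < costs[j]:                                    # replace a random nest
                nests[j], costs[j] = cand, c
        worst = np.argsort(costs)[-max(1, int(pa * n_nests)):]  # abandon worst nests
        nests[worst] = rng.uniform(-1.0, 1.0, (len(worst), dim))
        costs[worst] = [cost(x) for x in nests[worst]]
    best = int(np.argmin(costs))
    return nests[best], costs[best]
```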
NASA Astrophysics Data System (ADS)
Yoon, Kyung-Beom; Park, Won-Hee
2015-04-01
The convective heat transfer coefficient and surface emissivity before and after flame occurrence on a wood specimen surface and the flame heat flux were estimated using the repulsive particle swarm optimization algorithm and cone heater test results. The cone heater specified in the ISO 5660 standards was used, and six cone heater heat fluxes were tested. Preservative-treated Douglas fir 21 mm in thickness was used as the wood specimen in the tests. This study confirmed that the surface temperature of the specimen, which was calculated using the convective heat transfer coefficient, surface emissivity and flame heat flux on the wood specimen by a repulsive particle swarm optimization algorithm, was consistent with the measured temperature. Considering the measurement errors in the surface temperature of the specimen, the applicability of the optimization method considered in this study was evaluated.
Optimal design and use of retry in fault tolerant real-time computer systems
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations that minimize the total task completion time, was derived. A combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, was also carried out. Two solution approaches were developed, one based on point estimation and the other on Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull, and gamma distributions; these are standard distributions commonly used for fault modeling and analysis.
Frequency optimization in the eddy current test for high purity niobium
NASA Astrophysics Data System (ADS)
Joung, Mijoung; Jung, Yoochul; Kim, Hyungjin
2017-01-01
The eddy current test (ECT) is frequently used as a non-destructive method to check for the defects of high purity niobium (RRR300, Residual Resistivity Ratio) in a superconducting radio frequency (SRF) cavity. Determining an optimal frequency corresponding to specific material properties and probe specification is a very important step. The ECT experiments for high purity Nb were performed to determine the optimal frequency using the standard sample of high purity Nb having artificial defects. The target depth was considered with the treatment step that the niobium receives as the SRF cavity material. The results were analysed via the selectivity that led to a specific result, depending on the size of the defects. According to the results, the optimal frequency was determined to be 200 kHz, and a few features of the ECT for the high purity Nb were observed.
Application of da Vinci(®) Robot in simple or radical hysterectomy: Tips and tricks.
Iavazzo, Christos; Gkegkes, Ioannis D
2016-01-01
The first robotic simple hysterectomy was performed more than 10 years ago. These days, robotic-assisted hysterectomy is accepted as an alternative surgical approach and is applied in both benign and malignant surgical entities. Two important points should be taken into account to optimize postoperative outcomes in the early period of a surgeon's training: how to achieve optimal oncological results and how to achieve optimal functional results. Overcoming any technical challenge, as with any innovative surgical method, leads to improvements in both operative time and patient safety. The standardization of the technique and the recognition of critical anatomical landmarks are essential for optimal oncological and clinical outcomes in both simple and radical robotic-assisted hysterectomy. Based on our experience, our intention is to present user-friendly tips and tricks to optimize the application of a da Vinci® robot in simple or radical hysterectomies.
Gao, Xiaoli; Zhang, Qibin; Meng, Da; Issac, Giorgis; Zhao, Rui; Fillmore, Thomas L.; Chu, Rosey K.; Zhou, Jianying; Tang, Keqi; Hu, Zeping; Moore, Ronald J.; Smith, Richard D.; Katze, Michael G.; Metz, Thomas O.
2012-01-01
Lipidomics is a critical part of metabolomics and aims to study all the lipids within a living system. We present here the development and evaluation of a sensitive capillary UPLC-MS method for comprehensive top-down/bottom-up lipid profiling. Three different stationary phases were evaluated in terms of peak capacity, linearity, reproducibility, and limit of quantification (LOQ) using a mixture of lipid standards representative of the lipidome. The relative standard deviations of the retention times and peak abundances of the lipid standards were 0.29% and 7.7%, respectively, when using the optimized method. The linearity was acceptable at >0.99 over 3 orders of magnitude, and the LOQs were sub-fmol. To demonstrate the performance of the method in the analysis of complex samples, we analyzed lipids extracted from a human cell line, rat plasma, and a model human skin tissue, identifying 446, 444, and 370 unique lipids, respectively. Overall, the method provided either higher coverage of the lipidome, greater measurement sensitivity, or both, when compared to other approaches of global, untargeted lipid profiling based on chromatography coupled with MS. PMID:22354571
Point-based warping with optimized weighting factors of displacement vectors
NASA Astrophysics Data System (ADS)
Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte-Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
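The warping function itself is simple to state. The sketch below shifts every point by a distance-weighted sum of landmark displacement vectors with an exponential kernel and per-landmark weighting factors, which are the quantities the evolution strategy optimizes; the kernel width used here is an illustrative value.

```python
import numpy as np

def warp_points(points, landmarks, displacements, weights, sigma=10.0):
    """Landmark-based warp: each point (rows of `points`, shape (N, 3))
    receives a weighted sum of landmark displacement vectors, attenuated by
    exp(-distance/sigma) and scaled by a landmark-specific weighting factor.
    A minimal sketch of the method described above (values illustrative)."""
    warped = points.astype(float).copy()
    for lm, disp, w in zip(landmarks, displacements, weights):
        dist = np.linalg.norm(points - lm, axis=1)
        warped += (w * np.exp(-dist / sigma))[:, None] * disp
    return warped
```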
A random optimization approach for inherent optic properties of nearshore waters
NASA Astrophysics Data System (ADS)
Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng
2016-10-01
Traditional water quality sampling is time-consuming and costly, and cannot meet the needs of social development. Hyperspectral remote sensing technology offers good temporal resolution, good spatial coverage, and rich spectral information, and therefore has good potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model inside the water is established to analyse the features of the water. Using the stochastic optimization algorithm Threshold Acceptance, a global optimum of the unknown model parameters can be found to obtain the distribution of chlorophyll, dissolved organic matter, and suspended particles in the water. Improving the search step of the optimization algorithm markedly reduces processing time and creates room to increase the number of parameters. A revised definition of the optimization steps and acceptance criterion makes the whole inversion process more targeted, thereby improving the accuracy of the inversion. Applied to simulated data provided by the IOCCG and field data provided by NASA, the model is continuously improved and refined. Finally, a low-cost, effective retrieval model of water quality from hyperspectral remote sensing can be achieved.
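The threshold acceptance scheme mentioned above can be sketched generically: a random perturbation is accepted whenever it is no worse than the current solution by more than a threshold that shrinks to zero. The step size, schedule, and cost function below are illustrative stand-ins, not the paper's inherent-optical-property parameterization.

```python
import numpy as np

def threshold_acceptance(cost, x0, step=0.05, t0=1.0, iters=5000, seed=0):
    """Threshold acceptance: like simulated annealing, but a candidate is
    accepted if cost(candidate) - cost(current) <= threshold, with the
    threshold decreasing linearly to zero.  Generic sketch only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    fx = cost(x)
    for k in range(iters):
        threshold = t0 * (1.0 - k / iters)
        cand = x + step * rng.standard_normal(x.size)
        fc = cost(cand)
        if fc - fx <= threshold:
            x, fx = cand, fc
    return x, fx
```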
An efficient auto TPT stitch guidance generation for optimized standard cell design
NASA Astrophysics Data System (ADS)
Samboju, Nagaraj C.; Choi, Soo-Han; Arikati, Srini; Cilingir, Erdem
2015-03-01
As technology continues to shrink below 14 nm, triple patterning lithography (TPT) is a worthwhile lithography methodology for printing dense layers such as Metal1. However, this increases the complexity of standard cell design, as it is very difficult to develop a TPT-compliant layout without compromising on area. Hence, it is important to have an accurate stitch generation methodology to meet the standard cell area requirement as defined by the technology shrink factor. In this paper, we present an efficient auto TPT stitch guidance generation technique for optimized standard cell design. The basic idea is to first identify the conflicting polygons based on the Fix Guidance [1] solution developed by Synopsys. Fix Guidance is a reduced sub-graph containing a minimum set of edges along with the connecting polygons; by eliminating these edges in a design, 3-color conflicts can be resolved. Once the conflicting polygons are identified using this method, they are categorized into four types [2] (Type 1 to 4). The categorization is based on the number of interactions a polygon has with the coloring links and the triangle loops of the fix guidance. For each type, a criterion for the keep-out region is defined, based on which the final stitch guidance locations are generated. This technique provides various possible stitch locations and helps the user select the best stitch location considering both design flexibility (max. pin access/small area) and process preferences. Based on this technique, a standard cell library for place and route (P and R) can be developed with colorless data and a stitch marker defined by the designer using our proposed method. After P and R, the full chip (block) would contain only the colorless data and standard cell stitch markers. These stitch markers are considered "must be stitch" candidates. Hence, during full-chip decomposition it is not necessary to generate and select the stitch markers again for the complete data; therefore, the proposed method reduces the decomposition time significantly.
Designing and optimizing a healthcare kiosk for the community.
Lyu, Yongqiang; Vincent, Christopher James; Chen, Yu; Shi, Yuanchun; Tang, Yida; Wang, Wenyao; Liu, Wei; Zhang, Shuangshuang; Fang, Ke; Ding, Ji
2015-03-01
Investigating new ways to deliver care, such as the use of self-service kiosks to collect and monitor signs of wellness, supports healthcare efficiency and inclusivity. Self-service kiosks offer this potential, but there is a need for solutions to meet acceptable standards, e.g. provision of accurate measurements. This study investigates the design and optimization of a prototype healthcare kiosk to collect vital signs measures. The design problem was decomposed, formalized, focused and used to generate multiple solutions. Systematic implementation and evaluation allowed for the optimization of measurement accuracy, first for individuals and then for a population. The optimized solution was tested independently to check the suitability of the methods, and quality of the solution. The process resulted in a reduction of measurement noise and an optimal fit, in terms of the positioning of measurement devices. This guaranteed the accuracy of the solution and provides a general methodology for similar design problems. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.
Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P
2017-01-01
The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem for assigning a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, which uses the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is successfully used to solve the multiobjective scheduling problem. MODBCO integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.
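As a small illustration of the simplex component only (the paper couples a modified Nelder-Mead method with the bee-colony framework and discrete roster moves), the snippet below runs the standard Nelder-Mead method on a toy continuous penalty function standing in for soft-constraint violations; the penalty function is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def soft_constraint_penalty(x):
    """Toy stand-in for a roster's soft-constraint violation score."""
    return float(np.sum((x - 0.5) ** 2) + 0.1 * np.abs(x).sum())

result = minimize(soft_constraint_penalty, x0=np.zeros(5), method="Nelder-Mead")
print(result.x, result.fun)
```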
Lunar Habitat Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against hazards such as radiation exposure and meteoroid impacts. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.
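The "survival of the fittest" loop described above can be illustrated with a minimal real-coded genetic algorithm (tournament selection, uniform crossover, Gaussian mutation). The fitness function, bounds, and operator settings are placeholders, not those of NASA's habitat wall design tool.

```python
import numpy as np

def simple_ga(fitness, n_genes, pop_size=40, generations=100, p_mut=0.1, seed=0):
    """Minimal real-coded GA; `fitness` maps a gene vector in [0, 1]^n to a
    score to maximize.  Illustrative sketch of the GA loop only."""
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, n_genes))

    def tournament(fit):
        i, j = rng.integers(pop_size, size=2)
        return pop[i] if fit[i] > fit[j] else pop[j]

    for _ in range(generations):
        fit = np.array([fitness(g) for g in pop])
        children = []
        for _ in range(pop_size):
            a, b = tournament(fit), tournament(fit)
            mask = rng.random(n_genes) < 0.5               # uniform crossover
            child = np.where(mask, a, b)
            child += (rng.random(n_genes) < p_mut) * 0.1 * rng.standard_normal(n_genes)
            children.append(np.clip(child, 0.0, 1.0))
        pop = np.array(children)
    fit = np.array([fitness(g) for g in pop])
    return pop[np.argmax(fit)]
```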
Kyriacou, Andreas; Li Kam Wa, Matthew E; Pabari, Punam A; Unsworth, Beth; Baruah, Resham; Willson, Keith; Peters, Nicholas S; Kanagaratnam, Prapa; Hughes, Alun D; Mayet, Jamil; Whinnett, Zachary I; Francis, Darrel P
2013-08-10
In atrial fibrillation (AF), VV optimization of biventricular pacemakers can be examined in isolation. We used this approach to evaluate internal validity of three VV optimization methods by three criteria. Twenty patients (16 men, age 75 ± 7) in AF were optimized, at two paced heart rates, by LVOT VTI (flow), non-invasive arterial pressure, and ECG (minimizing QRS duration). Each optimization method was evaluated for: singularity (unique peak of function), reproducibility of optimum, and biological plausibility of the distribution of optima. The reproducibility (standard deviation of the difference, SDD) of the optimal VV delay was 10 ms for pressure, versus 8 ms (p=ns) for QRS and 34 ms (p<0.01) for flow. Singularity of optimum was 85% for pressure, 63% for ECG and 45% for flow (Chi(2)=10.9, p<0.005). The distribution of pressure optima was biologically plausible, with 80% LV pre-excited (p=0.007). The distributions of ECG (55% LV pre-excitation) and flow (45% LV pre-excitation) optima were no different to random (p=ns). The pressure-derived optimal VV delay is unaffected by the paced rate: SDD between slow and fast heart rate is 9 ms, no different from the reproducibility SDD at both heart rates. Using non-invasive arterial pressure, VV delay optimization by parabolic fitting is achievable with good precision, satisfying all 3 criteria of internal validity. VV optimum is unaffected by heart rate. Neither QRS minimization nor LVOT VTI satisfy all validity criteria, and therefore seem weaker candidate modalities for VV optimization. AF, unlinking interventricular from atrioventricular delay, uniquely exposes resynchronization concepts to experimental scrutiny. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Sandeu, Maurice Marcel; Moussiliou, Azizath; Moiroux, Nicolas; Padonou, Gilles G.; Massougbodji, Achille; Corbel, Vincent; Tuikue Ndam, Nicaise
2012-01-01
Background An accurate method for detecting malaria parasites in the mosquito vector remains an essential component of vector control. The enzyme-linked immunosorbent assay specific for the circumsporozoite protein (ELISA-CSP) is the gold standard method for the detection of malaria parasites in the vector, even though it presents some limitations. Here, we optimized multiplex real-time PCR assays to accurately detect minor populations in mixed infections with multiple Plasmodium species in the African malaria vectors Anopheles gambiae and Anopheles funestus. Methods Complementary TaqMan-based real-time PCR assays that detect Plasmodium species using specific primers and probes were first evaluated on artificial mixtures of different targets inserted in plasmid constructs. The assays were further validated in comparison with the ELISA-CSP on 200 field-caught Anopheles gambiae and Anopheles funestus mosquitoes collected in two localities in southern Benin. Results The validation of the duplex real-time PCR assays on the plasmid mixtures demonstrated robust specificity and sensitivity for detecting distinct targets. Using a panel of mosquito specimens, the real-time PCR showed a relatively high sensitivity (88.6%) and specificity (98%), compared to ELISA-CSP as the referent standard. The agreement between both methods was "excellent" (κ = 0.8, P<0.05). The relative quantification of Plasmodium DNA between the two Anopheles species analyzed showed no significant difference (P = 0.2). All infected mosquito samples contained Plasmodium falciparum DNA, and mixed infections with P. malariae and/or P. ovale were observed in 18.6% and 13.6% of An. gambiae and An. funestus, respectively. Plasmodium vivax was found in none of the mosquito samples analyzed. Conclusion This study presents an optimized method for detecting the four Plasmodium species in the African malaria vectors. The study highlights substantial discordance with the traditional ELISA-CSP, pointing out the utility of employing an accurate molecular diagnostic tool for detecting malaria parasites in field mosquito populations. PMID:23285168
Optimization of multi-stage dynamic treatment regimes utilizing accumulated data.
Huang, Xuelin; Choi, Sangbum; Wang, Lu; Thall, Peter F
2015-11-20
In medical therapies involving multiple stages, a physician's choice of a subject's treatment at each stage depends on the subject's history of previous treatments and outcomes. The sequence of decisions is known as a dynamic treatment regime or treatment policy. We consider dynamic treatment regimes in settings where each subject's final outcome can be defined as the sum of longitudinally observed values, each corresponding to a stage of the regime. Q-learning, which is a backward induction method, is used to first optimize the last stage treatment then sequentially optimize each previous stage treatment until the first stage treatment is optimized. During this process, model-based expectations of outcomes of late stages are used in the optimization of earlier stages. When the outcome models are misspecified, bias can accumulate from stage to stage and become severe, especially when the number of treatment stages is large. We demonstrate that a modification of standard Q-learning can help reduce the accumulated bias. We provide a computational algorithm, estimators, and closed-form variance formulas. Simulation studies show that the modified Q-learning method has a higher probability of identifying the optimal treatment regime even in settings with misspecified models for outcomes. It is applied to identify optimal treatment regimes in a study for advanced prostate cancer and to estimate and compare the final mean rewards of all the possible discrete two-stage treatment sequences. Copyright © 2015 John Wiley & Sons, Ltd.
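The backward-induction idea can be made concrete with a toy two-stage example. The sketch below fits ordinary least squares Q-functions for binary treatments and a single covariate per stage, forming the stage-1 pseudo-outcome from the best stage-2 action; it illustrates standard Q-learning, not the authors' bias-reducing modification.

```python
import numpy as np

def two_stage_q_learning(X1, A1, X2, A2, Y):
    """Toy Q-learning for a two-stage regime with binary treatments A1, A2,
    scalar covariates X1, X2 and final outcome Y (all 1-D numpy arrays)."""
    def ols(Z, y):
        return np.linalg.lstsq(Z, y, rcond=None)[0]

    ones = np.ones(len(Y))
    # Stage 2: model the outcome given stage-2 history and treatment.
    beta2 = ols(np.column_stack([ones, X2, A2, X2 * A2]), Y)
    q2 = lambda a: np.column_stack([ones, X2, a, X2 * a]) @ beta2
    pseudo = np.maximum(q2(np.zeros(len(Y))), q2(ones))   # value of best stage-2 action
    # Stage 1: model the pseudo-outcome given stage-1 history and treatment.
    beta1 = ols(np.column_stack([ones, X1, A1, X1 * A1]), pseudo)
    return beta1, beta2
```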
NASA Astrophysics Data System (ADS)
Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui
2018-02-01
An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, the pulse-to-pulse alignment is analyzed for large-delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in the absolute distance measurement of 20 μm with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have the potential for practical distance measurement.
Durana, Nieves; García, José Antonio; Gómez, María Carmen; Alonso, Lucio
2018-01-01
Thermal desorption (TD) coupled with gas chromatography/mass spectrometry (TD-GC/MS) is a simple alternative that overcomes the main drawbacks of the solvent extraction-based method: long extraction times, high sample manipulation, and large amounts of solvent waste. This work describes the optimization of TD-GC/MS for the measurement of airborne polycyclic aromatic hydrocarbons (PAHs) in the particulate phase. The performance of the method was tested with Standard Reference Material (SRM) 1649b urban dust and compared with the conventional method (Soxhlet extraction-GC/MS), showing better recovery (mean of 97%), precision (mean of 12%), and accuracy (±25%) for the determination of 14 EPA PAHs. Furthermore, 15 other nonpriority PAHs were identified and quantified using their relative response factors (RRFs). Finally, the proposed method was successfully applied for the quantification of PAHs in real 8-h samples (PM10), demonstrating its capability for the determination of these compounds in short-term monitoring. PMID:29854561
Zeng, Xiaozheng Jenny; Li, Jian; McGough, Robert J
2010-01-01
A waveform-diversity-based approach for 3-D tumor heating is compared to spot scanning for hyperthermia applications. The waveform diversity method determines the excitation signals applied to the phased array elements and produces a beam pattern that closely matches the desired power distribution. The optimization algorithm solves the covariance matrix of the excitation signals through semidefinite programming subject to a series of quadratic cost functions and constraints on the control points. A numerical example simulates a 1444-element spherical-section phased array that delivers heat to a 3-cm-diameter spherical tumor located 12 cm from the array aperture, and the results show that waveform diversity combined with mode scanning increases the heated volume within the tumor while simultaneously decreasing normal tissue heating. Whereas standard single focus and multiple focus methods are often associated with unwanted intervening tissue heating, the waveform diversity method combined with mode scanning shifts energy away from intervening tissues where hotspots otherwise accumulate to improve temperature localization in deep-seated tumors.
A geometrically based method for automated radiosurgery planning.
Wagner, T H; Yi, T; Meeks, S L; Bova, F J; Brechner, B L; Chen, Y; Buatti, J M; Friedman, W A; Foote, K D; Bouchet, L G
2000-12-01
A geometrically based method of multiple isocenter linear accelerator radiosurgery treatment planning optimization was developed, based on a target's solid shape. Our method uses an edge detection process to determine the optimal sphere packing arrangement with which to cover the planning target. The sphere packing arrangement is converted into a radiosurgery treatment plan by substituting the isocenter locations and collimator sizes for the spheres. This method is demonstrated on a set of 5 irregularly shaped phantom targets, as well as a set of 10 clinical example cases ranging from simple to very complex in planning difficulty. Using a prototype implementation of the method and standard dosimetric radiosurgery treatment planning tools, feasible treatment plans were developed for each target. The treatment plans generated for the phantom targets showed excellent dose conformity and acceptable dose homogeneity within the target volume. The algorithm was able to generate a radiosurgery plan conforming to the Radiation Therapy Oncology Group (RTOG) guidelines on radiosurgery for every clinical and phantom target examined. This automated planning method can serve as a valuable tool to assist treatment planners in rapidly and consistently designing conformal multiple isocenter radiosurgery treatment plans.
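A rough geometric sketch of the sphere-packing idea is given below: repeatedly place the largest available collimator sphere at the deepest uncovered interior point of a binary target mask until a coverage goal is met. This is only an illustration of the geometry; the authors' algorithm uses an edge-detection process and clinical dosimetry tools.

```python
import numpy as np
from scipy import ndimage

def greedy_sphere_pack(mask, radii_mm, voxel_mm=1.0, coverage=0.95):
    """Greedy packing of spheres (candidate collimator radii in mm) into a
    boolean target mask.  Returns (center_index, radius) pairs.  Assumes the
    mask is non-empty; coverage is the fraction of target voxels to cover."""
    covered = np.zeros_like(mask, dtype=bool)
    grid = np.indices(mask.shape).astype(float) * voxel_mm
    spheres = []
    while covered[mask].mean() < coverage:
        # Deepest point of the still-uncovered target region.
        depth = ndimage.distance_transform_edt(mask & ~covered) * voxel_mm
        center = np.unravel_index(np.argmax(depth), mask.shape)
        fitting = [r for r in radii_mm if r <= depth[center]]
        r = max(fitting) if fitting else min(radii_mm)
        d2 = sum((grid[k] - center[k] * voxel_mm) ** 2 for k in range(mask.ndim))
        covered |= d2 <= r ** 2
        spheres.append((center, r))
    return spheres
```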
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
Second-order variational equations for N-body simulations
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2016-07-01
First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
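To see why exact variational derivatives are attractive, the toy snippet below estimates the derivative of a final orbital position with respect to an initial condition by finite-differencing a simple leapfrog two-body integration; this is the noisy, step-size-sensitive baseline that first- and second-order variational equations (as implemented in REBOUND) replace with exact derivatives. All numbers are illustrative.

```python
import numpy as np

def kepler_leapfrog(x, v, dt, steps, GM=1.0):
    """Kick-drift-kick leapfrog for a test particle around a unit central mass."""
    for _ in range(steps):
        v = v - 0.5 * dt * GM * x / np.linalg.norm(x) ** 3
        x = x + dt * v
        v = v - 0.5 * dt * GM * x / np.linalg.norm(x) ** 3
    return x, v

h = 1e-6  # finite-difference step in the initial x-coordinate
x_hi, _ = kepler_leapfrog(np.array([1.0 + h, 0.0]), np.array([0.0, 1.0]), 1e-3, 5000)
x_lo, _ = kepler_leapfrog(np.array([1.0 - h, 0.0]), np.array([0.0, 1.0]), 1e-3, 5000)
print("d(final position)/d(initial x) ~", (x_hi - x_lo) / (2.0 * h))
```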
Method of Reproduction of the Luminous Flux of the LED Light Sources by a Spherical Photometer
NASA Astrophysics Data System (ADS)
Huriev, M.; Neyezhmakov, P.
2018-02-01
With the transition to energy-efficient, temporally stable light-emitting diode (LED) lighting, the problem arises of ensuring the traceability of measurements of the characteristics of light sources. The problem stems from existing luminous-flux measurement standards being based on spherical photometers optimized for reference incandescent lamps, whose relative spectral characteristic differs from the spectrum of LEDs. We propose a method for the reproduction of the luminous flux that solves this problem.
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.
Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T
2010-09-01
To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
NASA Astrophysics Data System (ADS)
Besemer, Abigail E.
Targeted radionuclide therapy is emerging as an attractive treatment option for a broad spectrum of tumor types because it has the potential to simultaneously eradicate both the primary tumor site and metastatic disease throughout the body. Patient-specific absorbed dose calculations for radionuclide therapies are important for reducing the risk of normal tissue complications and optimizing tumor response. However, the only FDA-approved software for internal dosimetry calculates doses based on the MIRD methodology, which estimates mean organ doses using activity-to-dose scaling factors tabulated from standard phantom geometries. Despite the improved dosimetric accuracy afforded by direct Monte Carlo dosimetry methods, these methods are not widely used in routine clinical practice because of the complexity of implementation, lack of relevant standard protocols, and longer dose calculation times. The main goal of this work was to develop a Monte Carlo internal dosimetry platform in order to (1) calculate patient-specific voxelized dose distributions in a clinically feasible time frame, (2) examine and quantify the dosimetric impact of various parameters and methodologies used in 3D internal dosimetry methods, and (3) develop a multi-criteria treatment planning optimization framework for multi-radiopharmaceutical combination therapies. This platform utilizes serial PET/CT or SPECT/CT images to calculate voxelized 3D internal dose distributions with the Monte Carlo code Geant4. Dosimetry can be computed for any diagnostic or therapeutic radiopharmaceutical and for both pre-clinical and clinical applications. In this work, the platform's dosimetry calculations were successfully validated against previously published reference dose values calculated in standard phantoms for a variety of radionuclides, over a wide range of photon and electron energies, and for many different organs and tumor sizes. Retrospective dosimetry was also calculated for various pre-clinical and clinical cases, and large dosimetric differences were found between conventional organ-level methods and the patient-specific voxelized methods described in this work. The dosimetric impact of various steps in the 3D voxelized dosimetry process was evaluated, including quantitative imaging acquisition, image coregistration, voxel resampling, ROI contouring, CT-based material segmentation, and pharmacokinetic fitting. Finally, a multi-objective treatment planning optimization framework was developed for multi-radiopharmaceutical combination therapies.
Smartphone assessment of knee flexion compared to radiographic standards.
Dietz, Matthew J; Sprando, Daniel; Hanselman, Andrew E; Regier, Michael D; Frye, Benjamin M
2017-03-01
Measuring knee range of motion (ROM) is an important assessment for the outcomes of total knee arthroplasty. Recent technological advances have led to the development and use of accelerometer-based smartphone applications to measure knee ROM. The purpose of this study was to develop, standardize, and validate methods of utilizing smartphone accelerometer technology compared to radiographic standards, visual estimation, and goniometric evaluation. Participants used visual estimation, a long-arm goniometer, and a smartphone accelerometer to determine range of motion of a cadaveric lower extremity; these results were compared to radiographs taken at the same angles. The optimal smartphone position was determined to be on top of the leg at the distal femur and proximal tibia location. Between methods, it was found that the smartphone and goniometer were comparably reliable in measuring knee flexion (ICC=0.94; 95% CI: 0.91-0.96). Visual estimation was found to be the least reliable method of measurement. The results suggested that the smartphone accelerometer was non-inferior when compared to the other measurement techniques, demonstrated similar deviations from radiographic standards, and did not appear to be influenced by the person performing the measurements or the girth of the extremity. Copyright © 2016 Elsevier B.V. All rights reserved.
Zubair, Abdulrazaq; Pappoe, Michael; James, Lesley A; Hawboldt, Kelly
2015-12-18
This paper presents an important new approach to improving the timeliness of Total Petroleum Hydrocarbon (TPH) analysis in soil by Gas Chromatography - Flame Ionization Detection (GC-FID) using the CCME Canada-Wide Standard reference method. The Canada-Wide Standard (CWS) method is used for the analysis of petroleum hydrocarbon compounds across Canada. However, inter-laboratory application of this method for the analysis of TPH in soil has often shown considerable variability in the results. This could be due, in part, to different gas chromatography (GC) conditions, other steps involved in the method, and the soil properties. In addition, there are differences in the interpretation of the GC results, which impacts the determination of the effectiveness of remediation at hydrocarbon-contaminated sites. In this work, a multivariate experimental design approach was used to develop and validate the analytical method for faster quantitative analysis of TPH in (contaminated) soil. A fractional factorial design (fFD) was used to screen six factors to identify those with the greatest impact on the analysis. These factors included: injection volume (μL), injection temperature (°C), oven program (°C/min), detector temperature (°C), carrier gas flow rate (mL/min) and solvent ratio (v/v hexane/dichloromethane). The most important factors (carrier gas flow rate and oven program) were then optimized using a central composite response surface design. Robustness testing and validation of the model compare favourably with the experimental results, with a percentage difference of 2.78% for the analysis time. This research successfully reduced the method's standard analytical time from 20 to 8 min with all the carbon fractions eluting. The method was successfully applied for fast TPH analysis of Bunker C oil-contaminated soil. A reduced analytical time would offer many benefits, including improved laboratory reporting times and overall improved clean-up efficiency. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
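The screening design is easy to reproduce in code. The sketch below builds a half-fraction 2^(6-1) two-level design for six coded factors, aliasing the sixth factor with the five-factor interaction; the factor names and the choice of generator are illustrative, not necessarily the authors' exact design.

```python
import itertools
import numpy as np

# Coded (-1/+1) half-fraction design: 32 runs for 6 factors, generator F = ABCDE.
base_factors = ["inj_volume", "inj_temp", "oven_program", "det_temp", "flow_rate"]
base = np.array(list(itertools.product([-1, 1], repeat=len(base_factors))))
design = np.column_stack([base, base.prod(axis=1)])   # 6th column: solvent_ratio = A*B*C*D*E
print(design.shape)   # (32, 6) runs x factors
```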
Jamema, Swamidas V; Kirisits, Christian; Mahantshetty, Umesh; Trnkova, Petra; Deshpande, Deepak D; Shrivastava, Shyam K; Pötter, Richard
2010-12-01
Comparison of inverse planning with the standard clinical plan and with the manually optimized plan based on dose-volume parameters and loading patterns. Twenty-eight patients who underwent MRI based HDR brachytherapy for cervix cancer were selected for this study. Three plans were calculated for each patient: (1) standard loading, (2) manually optimized, and (3) inverse optimized. Dosimetric outcomes from these plans were compared based on dose-volume parameters. The ratio of Total Reference Air Kerma of ovoid to tandem (TRAK(O/T)) was used to compare the loading patterns. The volume of HR CTV ranged from 9-68 cc with a mean of 41(±16.2) cc. Differences in mean V100 between the standard, manually optimized and inverse plans were not significant (p=0.35, 0.38, 0.4). Dose to bladder (7.8±1.6 Gy) and sigmoid (5.6±1.4 Gy) was high for standard plans; manual optimization reduced the dose to bladder (7.1±1.7 Gy, p=0.006) and sigmoid (4.5±1.0 Gy, p=0.005) without compromising the HR CTV coverage. The inverse plan resulted in a significant reduction in bladder dose (6.5±1.4 Gy, p=0.002). TRAK was found to be 0.49(±0.02), 0.44(±0.04) and 0.40(±0.04) cGy m(-2) for the standard loading, manually optimized and inverse plans, respectively. It was observed that TRAK(O/T) was 0.82(±0.05), 1.7(±1.04) and 1.41(±0.93) for the standard loading, manually optimized and inverse plans, respectively, while this ratio was 1 for the traditional loading pattern. Inverse planning offers good sparing of critical structures without compromising the target coverage. The average loading pattern of the whole patient cohort deviates from the standard Fletcher loading pattern. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Artifacts in Digital Coincidence Timing
Moses, W. W.; Peng, Q.
2014-01-01
Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time to digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e., the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the “optimal” method. The purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization. PMID:25321885
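A toy numpy sketch, not the note's specific nonlinear-transfer-function analysis, of how a digitization step that maps a range of arrival times onto a single code can make the apparent coincidence spread narrower than the true jitter when the event phase is not dithered across codes; all timing numbers are invented.

```python
# Toy illustration of a digitization artifact that artificially narrows the
# apparent coincidence timing distribution (simplified stand-in, not the note's
# exact nonlinear transfer function).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_sigma = 50.0    # ps, true spread of the detector time difference
bin_width = 400.0    # ps, width of the region mapped to a single output code

dt_true = rng.normal(0.0, true_sigma, n)     # true time differences
t1 = rng.normal(0.0, 20.0, n)                # arrival phase, not dithered over the code width
t2 = t1 - dt_true

quantize = lambda t: np.round(t / bin_width) * bin_width   # crude transfer function
dt_meas = quantize(t1) - quantize(t2)

print(f"true  sigma: {dt_true.std():6.1f} ps")
print(f"meas. sigma: {dt_meas.std():6.1f} ps   <- artificially narrow")
```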
Gómez, Fátima Somovilla; Lorza, Rubén Lostado; Bobadilla, Marina Corral; García, Rubén Escribano
2017-09-21
The kinematic behavior of models that are based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial model and the simulation model based on FEM can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and the MRS methods with desirability functions can be used to obtain the material parameters that are most appropriate for use in defining the behavior of Finite Element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows: First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffnesses and nine bulges of the healthy IVD models that were created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three different adjustment criteria. The latter, in turn, were based on the combination of stiffnesses and bulges that were obtained from the standard test FE simulations. The first adjustment criterion considered stiffnesses and bulges to be equally important in the adjustment of FE model parameters. The second adjustment criterion considered stiffnesses as most important, whereas the third considered the bulges to be most important. The proposed adjustment methods were applied to a medium-sized human IVD that corresponded to the L3-L4 lumbar level with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. Agreement between the kinematic behavior that was obtained with the optimized parameters and that obtained from the literature demonstrated that the proposed method is a powerful tool with which to adjust healthy IVD FE models when there are many parameters, stiffnesses, and bulges to which the models must adjust.
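A minimal sketch of a desirability-based adjustment criterion in the spirit of the approach described above: each response (a stiffness or bulge compared to its experimental target) is mapped to [0, 1] and the scores are combined by a weighted geometric mean. The response values, targets, tolerances, and weights are illustrative assumptions, not the paper's data.

```python
# Sketch of a desirability-function adjustment criterion (assumed example values).
import numpy as np

def d_target(y, target, tol):
    """Desirability = 1 at the target, falling linearly to 0 at +/- tol."""
    return float(np.clip(1.0 - abs(y - target) / tol, 0.0, 1.0))

def overall_desirability(responses, targets, tols, weights):
    d = np.array([d_target(y, t, s) for y, t, s in zip(responses, targets, tols)])
    w = np.asarray(weights, dtype=float)
    if np.any(d == 0.0):
        return 0.0                      # any unacceptable response rejects the design
    return float(np.exp(np.sum(w * np.log(d)) / np.sum(w)))

# Example: two stiffnesses and one bulge from an FE run versus experimental targets.
fe_run  = [310.0, 95.0, 1.8]            # simulated responses (assumed)
targets = [300.0, 100.0, 2.0]           # experimental references (assumed)
tols    = [60.0, 20.0, 0.5]             # acceptable deviation per response (assumed)
weights = [1.0, 1.0, 2.0]               # e.g. a criterion that weights bulges higher

print(f"overall desirability: {overall_desirability(fe_run, targets, tols, weights):.3f}")
```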
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO was developed by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents the accommodation of the cat to the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution would be the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performances of the CSO based designed FIR filters have proven to be superior to those obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that the CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performance of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
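A hedged sketch of the kind of fitness function a swarm optimizer such as CSO would minimize for a low-pass FIR design: the deviation of a candidate filter's magnitude response from an ideal brick-wall characteristic. The tap count, band edges, and frequency grid are assumptions for illustration.

```python
# Sketch of an FIR low-pass design fitness function for a swarm optimizer
# (band edges, grid size, and tap count are assumed, not taken from the paper).
import numpy as np

N_TAPS = 21
W_GRID = np.linspace(0.0, np.pi, 512)                   # frequency grid
PASS_EDGE, STOP_EDGE = 0.40 * np.pi, 0.50 * np.pi
IDEAL = np.where(W_GRID <= PASS_EDGE, 1.0, 0.0)
CARE = (W_GRID <= PASS_EDGE) | (W_GRID >= STOP_EDGE)    # ignore the transition band

def magnitude_response(h, w=W_GRID):
    n = np.arange(len(h))
    return np.abs(np.exp(-1j * np.outer(w, n)) @ h)

def fitness(h):
    """Maximum deviation from the ideal response over the pass and stop bands."""
    err = np.abs(magnitude_response(h) - IDEAL)
    return float(err[CARE].max())

if __name__ == "__main__":
    # A windowed-sinc filter as a sanity check of the fitness landscape.
    n = np.arange(N_TAPS) - (N_TAPS - 1) / 2
    h0 = 0.45 * np.sinc(0.45 * n) * np.hamming(N_TAPS)
    print(f"fitness of windowed-sinc start point: {fitness(h0):.3f}")
```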
NASA Astrophysics Data System (ADS)
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-02-01
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
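A short sketch of the transformation search described above: scan the Box-Cox parameter λ and keep the value that maximizes the Shapiro-Wilk P-value of the transformed values. The SUV sample below is a synthetic, right-skewed stand-in, not the study data.

```python
# Sketch: pick the Box-Cox lambda that maximizes the Shapiro-Wilk P-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
suv_max = rng.lognormal(mean=1.2, sigma=0.6, size=57)   # synthetic skewed SUVmax sample

def boxcox(x, lam):
    return np.log(x) if np.isclose(lam, 0.0) else (x**lam - 1.0) / lam

lambdas = np.linspace(-2.0, 2.0, 401)
pvals = [stats.shapiro(boxcox(suv_max, lam)).pvalue for lam in lambdas]
best = lambdas[int(np.argmax(pvals))]

print(f"optimal lambda  : {best:+.2f}")
print(f"Shapiro-Wilk P  : {max(pvals):.3f}")
print(f"log-transform P : {stats.shapiro(np.log(suv_max)).pvalue:.3f}")
```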
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2017-04-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in the present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over the traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at sub-optimal results. This work validates the performance of the PSO variants against the traditional solver GAMS for single as well as multi-area economic dispatch on three test cases of a large 140-unit standard test system having complex constraints.
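A hedged sketch of a basic PSO solving a small single-area economic dispatch with quadratic fuel costs, generator limits, and a power-balance constraint handled by a penalty term. Cost coefficients, limits, and demand are invented for illustration; they are not the 140-unit test data, and no tie-line or ramp-rate constraints are modeled.

```python
# Sketch: particle swarm optimization for a 3-unit economic dispatch (assumed data).
import numpy as np

rng = np.random.default_rng(1)
a = np.array([0.010, 0.012, 0.008])      # $/MW^2
b = np.array([12.0, 10.5, 11.0])         # $/MW
c = np.array([150.0, 120.0, 100.0])      # $
p_min = np.array([50.0, 40.0, 60.0])
p_max = np.array([300.0, 250.0, 400.0])
demand = 600.0

def cost(p):
    fuel = np.sum(a * p**2 + b * p + c, axis=-1)
    penalty = 1e4 * (np.sum(p, axis=-1) - demand) ** 2   # power-balance penalty
    return fuel + penalty

n_particles, n_iter = 40, 300
x = rng.uniform(p_min, p_max, size=(n_particles, 3))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), cost(x)
gbest = pbest[np.argmin(pbest_f)].copy()

for it in range(n_iter):
    w = 0.9 - 0.5 * it / n_iter                           # linearly decreasing inertia
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
    x = np.clip(x + v, p_min, p_max)
    f = cost(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(f"dispatch [MW]: {np.round(gbest, 1)}, total {gbest.sum():.1f} MW")
print(f"cost: {cost(gbest):.1f} $")
```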
Optimal bipedal interactions with dynamic terrain: synthesis and analysis via nonlinear programming
NASA Astrophysics Data System (ADS)
Hubicki, Christian; Goldman, Daniel; Ames, Aaron
In terrestrial locomotion, gait dynamics and motor control behaviors are tuned to interact efficiently and stably with the dynamics of the terrain (i.e. terradynamics). This controlled interaction must be particularly thoughtful in bipeds, as their reduced contact points render them highly susceptible to falls. While bipedalism under rigid terrain assumptions is well-studied, insights for two-legged locomotion on soft terrain, such as sand and dirt, are comparatively sparse. We seek an understanding of how biological bipeds stably and economically negotiate granular media, with an eye toward imbuing those abilities in bipedal robots. We present a trajectory optimization method for controlled systems subject to granular intrusion. By formulating a large-scale nonlinear program (NLP) with reduced-order resistive force theory (RFT) models and jamming cone dynamics, the optimized motions are informed and shaped by the dynamics of the terrain. Using a variant of direct collocation methods, we can express all optimization objectives and constraints in closed-form, resulting in rapid solving by standard NLP solvers, such as IPOPT. We employ this tool to analyze emergent features of bipedal locomotion in granular media, with an eye toward robotic implementation.
Noise in Charge Amplifiers—A gm/ID Approach
NASA Astrophysics Data System (ADS)
Alvarez, Enrique; Avila, Diego; Campillo, Hernan; Dragone, Angelo; Abusleme, Angel
2012-10-01
Charge amplifiers represent the standard solution to amplify signals from capacitive detectors in high energy physics experiments. In a typical front-end, the noise due to the charge amplifier, and particularly from its input transistor, limits the achievable resolution. The classic approach to attenuate noise effects in MOSFET charge amplifiers is to use the maximum power available, to use a minimum-length input device, and to establish the input transistor width in order to achieve the optimal capacitive matching at the input node. These conclusions, reached by analysis based on simple noise models, lead to sub-optimal results. In this work, a new approach to noise analysis for charge amplifiers based on an extension of the gm/ID methodology is presented. This method combines circuit equations and results from SPICE simulations, both valid for all operation regions and including all noise sources. The method, which allows one to find the optimal operation point of the charge amplifier input device for maximum resolution, shows that the minimum device length is not necessarily optimal, that flicker noise is responsible for the non-monotonic noise versus current function, and provides deeper insight into the noise-limiting mechanisms from an alternative and more design-oriented point of view.
Anam, Khairul; Al-Jumaily, Adel
2014-01-01
Using only a small number of surface electromyography (EMG) channels on a transradial amputee in a myoelectric controller is a major challenge. This paper proposes a pattern recognition system using an extreme learning machine (ELM) optimized by particle swarm optimization (PSO). PSO is mutated by a wavelet function to avoid being trapped in local minima. The proposed system is used to classify eleven imagined finger motions on five amputees by using only two EMG channels. The optimal performance of wavelet-PSO was compared to a grid-search method and standard PSO. The experimental results show that the proposed system is the most accurate of the tested classifiers. It could classify 11 finger motions with an average accuracy of about 94% across five amputees.
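A minimal sketch of an extreme learning machine classifier: a random hidden layer followed by output weights solved in closed form with a least-squares fit. In the paper the ELM is optimized by a wavelet-mutated PSO; here the hidden-layer parameters are simply drawn at random, and the feature data are synthetic stand-ins for EMG features.

```python
# Minimal extreme learning machine (ELM) sketch with a least-squares readout.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        classes = np.unique(y)
        T = (y[:, None] == classes[None, :]).astype(float)   # one-hot targets
        self.classes_ = classes
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random hidden layer
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)     # closed-form output weights
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]

if __name__ == "__main__":
    # Synthetic stand-in for EMG features: 8 features, 3 "finger motions".
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 8)) + np.repeat(np.arange(3), 100)[:, None]
    y = np.repeat(np.arange(3), 100)
    clf = ELM(n_hidden=60).fit(X, y)
    print(f"training accuracy: {(clf.predict(X) == y).mean():.2%}")
```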
Synthetic optimization of air turbine for dental handpieces.
Shi, Z Y; Dong, T
2014-01-01
A synthetic optimization of the Pelton air turbine in dental handpieces, simultaneously considering power output, compressed-air consumption and rotation speed, is implemented by employing a standard design procedure and variable limits from practical dentistry. The Pareto optimal solution sets acquired using the Normalized Normal Constraint method are mainly composed of two piecewise continuous parts. On the Pareto frontier, the supply air stagnation pressure stalls at the lower boundary of the design space, the rotation speed is a constant value within the range recommended in the literature, and the blade tip clearance is insensitive to the objectives, while the nozzle radius increases with power output and compressed-air mass flow rate; the remaining geometric dimensions show the opposite trend within their respective "pieces" compared to the nozzle radius.
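A short sketch of extracting the non-dominated (Pareto-optimal) designs from a set of sampled candidates, the kind of front the Normalized Normal Constraint method characterizes. The objectives follow the paper's spirit (maximize power, minimize air consumption), but the sample values are invented.

```python
# Sketch: filtering a sampled design set down to its Pareto-optimal subset.
import numpy as np

def pareto_front(objectives):
    """objectives: (n, m) array, all objectives to be minimized. Returns a boolean mask."""
    obj = np.asarray(objectives, dtype=float)
    keep = np.ones(obj.shape[0], dtype=bool)
    for i in range(obj.shape[0]):
        if not keep[i]:
            continue
        dominates_i = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        if np.any(dominates_i):
            keep[i] = False
    return keep

rng = np.random.default_rng(0)
power = rng.uniform(5.0, 25.0, 200)                  # W, to be maximized (invented)
air = 0.8 + 0.05 * power + rng.normal(0, 0.3, 200)   # NL/s, to be minimized (invented)
objs = np.column_stack([-power, air])                # negate power -> minimization form

mask = pareto_front(objs)
print(f"{mask.sum()} Pareto-optimal designs out of {len(mask)}")
```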
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L₁ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L₁ estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and number of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
Local alignment of two-base encoded DNA sequence
Homer, Nils; Merriman, Barry; Nelson, Stanley F
2009-01-01
Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
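A compact sketch of the standard nucleotide-space local alignment recurrence that the paper extends: Smith-Waterman with a linear gap penalty (the paper's method additionally decodes two-base encoded reads and supports an affine gap penalty). Scores and sequences below are illustrative.

```python
# Standard Smith-Waterman local alignment (linear gap penalty, illustrative scores).
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]          # DP matrix, clamped at zero
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    return best, best_pos

if __name__ == "__main__":
    score, end = smith_waterman("ACACACTA", "AGCACACA")
    print(f"best local score {score}, ending at {end}")
```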
Sun, Ting; Sun, Hefeng; Zhao, Feng
2017-09-01
In this work, reduced graphene oxide coated with ZnO nanocomposites was used as an efficient sorbent for dispersive solid-phase extraction and successfully applied for the extraction of organochlorine pesticides from apple juice followed by gas chromatography with mass spectrometry. Several experimental parameters affecting the extraction efficiencies, including the amount of adsorbent, extraction time, and the pH of the sample solution, as well as the type and volume of eluent solvent, were investigated and optimized. Under the optimal experimental conditions, good linearity existed in the range of 1.0-200.0 ng/mL for all the analytes with the correlation coefficients (R²) ranging from 0.9964 to 0.9994. The limits of detection of the method for the compounds were 0.011-0.053 ng/mL. Good reproducibility was obtained, with relative standard deviations below 8.7% for both intraday and interday precision. The recoveries of the method were in the range of 78.1-105.8% with relative standard deviations of 3.3-6.9%. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DNA melting analysis: application of the "open tube" format for detection of mutant KRAS.
Botezatu, Irina V; Kondratova, Valentina N; Shelepov, Valery P; Lichtenstein, Anatoly V
2011-12-15
High-resolution melting (HRM) analysis is a very effective method for genotyping and mutation scanning that is usually performed just after PCR amplification (the "closed tube" format). Though simple and convenient, the closed tube format makes the HRM dependent on the PCR mix, not generally optimal for DNA melting analysis. Here, the "open tube" format, namely the post-PCR optimization procedure (amplicon shortening and solution chemistry modification), is proposed. As a result, mutation scanning of short amplicons becomes feasible on a standard real-time PCR instrument (not primarily designed for HRM) using SYBR Green I. This approach has allowed us to considerably enhance the sensitivity of detecting mutant KRAS using both low- and high-resolution systems (the Bio-Rad iQ5-SYBR Green I and Bio-Rad CFX96-EvaGreen, respectively). The open tube format, though more laborious than the closed tube one, can be used in situations when maximal sensitivity of the method is needed. It also permits standardization of DNA melting experiments and the introduction of instruments of a "lower level" into the range of those suitable for mutation scanning. Copyright © 2011 Elsevier Inc. All rights reserved.
Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John
Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum. The TPR₂₀,₁₀ was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation. The audit outcome is then determined using a scientifically based two tiered action level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside of three standard deviations the outcome is Fail (Out of Tolerance). Results: To date, the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ). Pass (Optimal Level) was reduced to ≤2.6% (2σ), and Fail (Out of Tolerance) was reduced to >3.9% (3σ) for the new OSLD audit. Previously with the TLD audit the Pass (Optimal Level) and Fail (Out of Tolerance) were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the uncertainty-budget-derived audit tolerances. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit 94% of the audits have resulted in Pass (Optimal Level) and 6% of the audits have resulted in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit, or an on-site ion chamber measurement.
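A small sketch of the two-tiered outcome logic described above, using the published OSLD tolerances (Pass Optimal ≤2.6%, Pass Action ≤3.9%, Fail beyond that); the example deviations are invented.

```python
# Sketch of the two-tiered audit outcome classification (thresholds from the abstract).
def audit_outcome(deviation_percent, optimal=2.6, action=3.9):
    d = abs(deviation_percent)
    if d <= optimal:
        return "Pass (Optimal Level)"
    if d <= action:
        return "Pass (Action Level)"
    return "Fail (Out of Tolerance)"

for dev in (0.4, -3.1, 5.2):     # invented example deviations in percent
    print(f"{dev:+.1f} % -> {audit_outcome(dev)}")
```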
Tsang, Chehong; Shehata, Medhat H.; Lotfy, Abdurrahmaan
2016-01-01
The lack of a standard test method for evaluating the resistance of pervious concrete to cycles of freezing and thawing in the presence of deicing salts is the motive behind this study. Different sample size and geometry, cycle duration, and level of submersion in brine solutions were investigated to achieve an optimized test method. The optimized test method was able to produce different levels of damage when different types of deicing salts were used. The optimized duration of one cycle was found to be 24 h with twelve hours of freezing at −18 °C and twelve hours of thawing at +21 °C, with the bottom 10 mm of the sample submerged in the brine solution. Cylinder samples with a diameter of 100 mm and height of 150 mm were used and found to produce similar results to 150 mm-cubes. Based on the obtained results a mass loss of 3%–5% is proposed as a failure criterion of cylindrical samples. For the materials and within the cycles of freezing/thawing investigated here, the deicers that caused the most damage were NaCl, CaCl2 and urea, followed by MgCl2, potassium acetate, sodium acetate and calcium-magnesium acetate. More testing is needed to validate the effects of different deicers under long term exposures and different temperature ranges. PMID:28773998
State-selective optimization of local excited electronic states in extended systems
NASA Astrophysics Data System (ADS)
Kovyrshin, Arseny; Neugebauer, Johannes
2010-11-01
Standard implementations of time-dependent density-functional theory (TDDFT) for the calculation of excitation energies give access to a number of the lowest-lying electronic excitations of a molecule under study. For extended systems, this can become cumbersome if a particular excited state is sought-after because many electronic transitions may be present. This often means that even for systems of moderate size, a multitude of excited states needs to be calculated to cover a certain energy range. Here, we present an algorithm for the selective determination of predefined excited electronic states in an extended system. A guess transition density in terms of orbital transitions has to be provided for the excitation that shall be optimized. The approach employs root-homing techniques together with iterative subspace diagonalization methods to optimize the electronic transition. We illustrate the advantages of this method for solvated molecules, core-excitations of metal complexes, and adsorbates at cluster surfaces. In particular, we study the local π →π∗ excitation of a pyridine molecule adsorbed at a silver cluster. It is shown that the method works very efficiently even for high-lying excited states. We demonstrate that the assumption of a single, well-defined local excitation is, in general, not justified for extended systems, which can lead to root-switching during optimization. In those cases, the method can give important information about the spectral distribution of the orbital transition employed as a guess.
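A toy numpy sketch of the root-homing idea described above: among the eigenvectors of a (model) response matrix, follow the root with the largest overlap with a guess transition vector instead of simply taking the lowest roots. The matrix and guess index are random stand-ins, not a TDDFT calculation.

```python
# Toy sketch of root homing: select the eigenvector with maximum overlap with a guess.
import numpy as np

rng = np.random.default_rng(3)
dim = 200
A = rng.normal(size=(dim, dim))
A = (A + A.T) / 2 + np.diag(np.linspace(1.0, 10.0, dim))   # symmetric "response" matrix

guess = np.zeros(dim)
guess[137] = 1.0                  # guess transition: a single dominant orbital pair

evals, evecs = np.linalg.eigh(A)
overlaps = np.abs(evecs.T @ guess)
target = int(np.argmax(overlaps))

print(f"selected root #{target} of {dim}, excitation 'energy' {evals[target]:.3f}")
print(f"overlap with guess: {overlaps[target]:.2f}")
```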
Soares, Aline Rodrigues; Nascentes, Clésia Cristina
2013-02-15
A simple method was developed for determining the total lead content in lipstick samples by graphite furnace atomic absorption spectrometry (GFAAS) after treatment with tetramethylammonium hydroxide (TMAH). Multivariate optimization was used to establish the optimal sample preparation conditions. The graphite furnace heating program was optimized through pyrolysis and atomization curves. An aliquot containing approximately 50 mg of the sample was mixed with TMAH and heated in a water bath at 60°C for 60 min. Using Nb as the permanent modifier and Pd as the chemical modifier, the optimal temperatures were 900°C and 1800°C for pyrolysis and atomization, respectively. Under optimum conditions, the working range was from 1.73 to 50.0 μg L⁻¹, with detection and quantification limits of 0.20 and 0.34 μg g⁻¹, respectively. The precision was evaluated under conditions of repeatability and intermediate precision and showed standard deviations of 2.37%-4.61% and 4.93%-9.75%, respectively. Recovery ranged from 96.2% to 109%, and no significant differences were found between the results obtained using the proposed method and the microwave decomposition method for real samples. Lead was detected in 21 tested lipstick samples; the lead content in these samples ranged from 0.27 to 4.54 μg g⁻¹. Copyright © 2012 Elsevier B.V. All rights reserved.
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
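A short sketch of the traditional mono-exponential back-extrapolation that the proposed method improves upon: fit log(concentration) against time over an early sampling window, extrapolate to t = 0 to obtain C0, and divide the injected dose by C0. The dose and kinetic values are synthetic, and this is the classical procedure rather than the authors' physiologically based model.

```python
# Sketch of the traditional mono-exponential back-extrapolation (synthetic data).
import numpy as np

dose_mg = 25.0
t = np.arange(1.0, 6.0, 0.5)                       # minutes post-injection
true_c0, k = 8.0, 0.25                             # mg/L and 1/min (synthetic)
conc = true_c0 * np.exp(-k * t) * np.exp(np.random.default_rng(0).normal(0, 0.02, t.size))

slope, intercept = np.polyfit(t, np.log(conc), 1)  # log-linear fit
c0_extrapolated = np.exp(intercept)                # back-extrapolated concentration at t=0
plasma_volume_L = dose_mg / c0_extrapolated

print(f"fitted decay constant: {-slope:.3f} 1/min")
print(f"back-extrapolated C0 : {c0_extrapolated:.2f} mg/L")
print(f"estimated plasma volume: {plasma_volume_L:.2f} L")
```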
Pushpalatha, Hulikal Basavarajaiah; Pramod, Kumar; Sundaram, Ramachandran; Shyam, Ramakrishnan
2014-10-01
Irradiation and use of preservatives are routine procedures to control bio-burden in solid herbal dosage forms. Although the use of steam or pasteurization has been reported in the literature, few studies address its application to reducing the bio-burden of herbal drug formulations. Hence, we undertook a series of studies to explore the suitability of pasteurization as a method to reduce bio-burden during formulation and development of herbal dosage forms, which will pave the way for preparing preservative-free formulations. Optimized Ashoka (Saraca indica) tablets were formulated and developed. The optimized formula was then subjected to pasteurization during formulation, with an aim to keep the microbial count well within the limits of pharmacopoeial standards. Then, three variants of the optimized Ashoka formulation - with preservative, without preservative, and without preservative but subjected to pasteurization - were compared by routine in-process parameters and stability studies. The results obtained indicate that Ashoka tablets manufactured by inclusion of the pasteurization technique not only showed the bio-burden to be within the limits of pharmacopoeial standards, but also complied with other parameters, such as stability and quality. The outcome of this pilot study shows that pasteurization can be employed as a distinctive method for reducing bio-burden during the formulation and development of herbal dosage forms, such as tablets.
NASA Technical Reports Server (NTRS)
Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony
1996-01-01
This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
Implementation of Advanced Inventory Management Functionality in Automated Dispensing Cabinets
Webb, Aaron; Lund, Jim
2015-01-01
Background: Automated dispensing cabinets (ADCs) are an integral component of distribution models in pharmacy departments across the country. There are significant challenges to optimizing ADC inventory management while minimizing use of labor and capital resources. The role of enhanced inventory control functionality is not fully defined. Objective: The aim of this project is to improve ADC inventory management by leveraging dynamic inventory standards and a low inventory alert platform. Methods: Two interventional groups and 1 historical control were included in the study. Each intervention group consisted of 6 ADCs that tested enhanced inventory management functionality. Interventions included dynamic inventory standards and a low inventory alert messaging system. Following separate implementation of each platform, dynamic inventory and low inventory alert systems were applied concurrently to all 12 ADCs. Outcome measures included number and duration of daily stockouts, ADC inventory turns, and number of phone calls related to stockouts received by pharmacy staff. Results: Low inventory alerts reduced both the number and duration of stockouts. Dynamic inventory standards reduced the number of daily stockouts without changing the inventory turns and duration of stockouts. No change was observed in number of calls related to stockouts made to pharmacy staff. Conclusions: Low inventory alerts and dynamic inventory standards are feasible mechanisms to help optimize ADC inventory management while minimizing labor and capital resources. PMID:26448672
Determination of Ochratoxin A in Rye and Rye-Based Products by Fluorescence Polarization Immunoassay
Lippolis, Vincenzo; Porricelli, Anna C. R.; Cortese, Marina; Zanardi, Sandro; Pascale, Michelangelo
2017-01-01
A rapid fluorescence polarization immunoassay (FPIA) was optimized and validated for the determination of ochratoxin A (OTA) in rye and rye crispbread. Samples were extracted with a mixture of acetonitrile/water (60:40, v/v) and purified by SPE-aminopropyl column clean-up before performing the FPIA. Overall mean recoveries were 86 and 95% for spiked rye and rye crispbread, respectively, with relative standard deviations lower than 6%. The limit of detection (LOD) of the optimized FPIA was 0.6 μg/kg for both rye and rye crispbread. Good correlations (r > 0.977) were observed between OTA contents in contaminated samples obtained by FPIA and high-performance liquid chromatography (HPLC) with immunoaffinity cleanup used as the reference method. Furthermore, single laboratory validation and small-scale collaborative trials were carried out for the determination of OTA in rye according to Regulation 519/2014/EU laying down procedures for the validation of screening methods. The precision profile of the method, cut-off level and rate of false suspect results confirm the satisfactory analytical performance of the assay as a screening method. These findings show that the optimized FPIA is suitable for high-throughput screening, and permits reliable quantitative determination of OTA in rye and rye crispbread at levels that fall below the EU regulatory limits. PMID:28954398
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
Yeh, Chia-Hsien; Zhao, Zi-Qi; Shen, Pi-Lan; Lin, Yu-Cheng
2014-01-01
This study presents an optical inspection system for detecting a commercial point-of-care testing product and a new detection model covering from qualitative to quantitative analysis. Human chorionic gonadotropin (hCG) strips (cut-off value of the hCG commercial product is 25 mIU/mL) were the detection target in our study. We used a complementary metal-oxide semiconductor (CMOS) sensor to detect the colors of the test line and control line in the specific strips and to reduce the observation errors by the naked eye. To achieve better linearity between the grayscale and the concentration, and to decrease the standard deviation (increase the signal to noise ratio, S/N), the Taguchi method was used to find the optimal parameters for the optical inspection system. The pregnancy test used the principles of the lateral flow immunoassay, and the colors of the test and control line were caused by the gold nanoparticles. Because of the sandwich immunoassay model, the color of the gold nanoparticles in the test line was darkened by increasing the hCG concentration. As the results reveal, the S/N increased from 43.48 dB to 53.38 dB, and the hCG concentration detection increased from 6.25 to 50 mIU/mL with a standard deviation of less than 10%. With the optimal parameters to decrease the detection limit and to increase the linearity determined by the Taguchi method, the optical inspection system can be applied to various commercial rapid tests for the detection of ketamine, troponin I, and fatty acid binding protein (FABP). PMID:25256108
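A brief sketch of one common Taguchi signal-to-noise formulation (larger-the-better) used to rank parameter settings from replicate readings; the grayscale contrast values below are invented and the specific S/N variant used in the study is not stated here.

```python
# Sketch: larger-the-better Taguchi S/N ratio over replicate readings (invented data).
import numpy as np

def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

trials = {
    "setting A": [118, 121, 119],     # replicate grayscale contrast readings (assumed)
    "setting B": [131, 136, 129],
    "setting C": [125, 111, 140],
}
for name, reps in trials.items():
    print(f"{name}: S/N = {sn_larger_is_better(reps):.2f} dB")
```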
Application of Sequential Quadratic Programming to Minimize Smart Active Flap Rotor Hub Loads
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Leyland, Jane
2014-01-01
In an analytical study, SMART active flap rotor hub loads have been minimized using nonlinear programming constrained optimization methodology. The recently developed NLPQLP system (Schittkowski, 2010) that employs Sequential Quadratic Programming (SQP) as its core algorithm was embedded into a driver code (NLP10x10) specifically designed to minimize active flap rotor hub loads (Leyland, 2014). Three types of practical constraints on the flap deflections have been considered. To validate the current application, two other optimization methods have been used: i) the standard, linear unconstrained method, and ii) the nonlinear Generalized Reduced Gradient (GRG) method with constraints. The new software code NLP10x10 has been systematically checked out. It has been verified that NLP10x10 is functioning as desired. The following are briefly covered in this paper: relevant optimization theory; implementation of the capability of minimizing a metric of all, or a subset, of the hub loads as well as the capability of using all, or a subset, of the flap harmonics; and finally, solutions for the SMART rotor. The eventual goal is to implement NLP10x10 in a real-time wind tunnel environment.
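A hedged sketch of an SQP-style constrained minimization using SciPy's SLSQP solver: a quadratic stand-in for a hub-load metric, minimized subject to per-harmonic bounds and an inequality constraint on the flap-deflection amplitudes. All matrices, limits, and constraint forms are invented for illustration; this is not the NLPQLP/NLP10x10 implementation.

```python
# Sketch: SQP-type constrained minimization of an assumed quadratic load metric.
import numpy as np
from scipy.optimize import minimize

H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])          # positive-definite "load" Hessian (assumed)
g = np.array([-8.0, -3.0, -3.0])

def hub_load_metric(x):
    return 0.5 * x @ H @ x + g @ x

constraints = [
    # total flap authority limited: sum of squared amplitudes <= 4 (assumed form)
    {"type": "ineq", "fun": lambda x: 4.0 - np.sum(x**2)},
]
bounds = [(-3.0, 3.0)] * 3               # per-harmonic deflection limits (assumed)

res = minimize(hub_load_metric, x0=np.zeros(3), method="SLSQP",
               bounds=bounds, constraints=constraints)
print(f"optimal deflections: {np.round(res.x, 3)}, metric: {res.fun:.3f}")
```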
Zheng, Qianwang; Mikš-Krajnik, Marta; Yang, Yishan; Xu, Wang; Yuk, Hyun-Gyun
2014-09-01
Conventional culture detection methods are time consuming and labor-intensive. For this reason, an alternative rapid method combining real-time PCR and immunomagnetic separation (IMS) was investigated in this study to detect both healthy and heat-injured Salmonella Typhimurium on raw duck wings. Firstly, the IMS method was optimized by determining the capture efficiency of Dynabeads® on Salmonella cells on raw duck wings with different bead incubation (10, 30 and 60 min) and magnetic separation (3, 10 and 30 min) times. Secondly, three Taqman primer sets, Sal, invA and ttr, were evaluated to optimize the real-time PCR protocol by comparing five parameters: inclusivity, exclusivity, PCR efficiency, detection probability and limit of detection (LOD). Thirdly, the optimized real-time PCR, in combination with IMS (PCR-IMS) assay, was compared with a standard ISO and a real-time PCR (PCR) method by analyzing artificially inoculated raw duck wings with healthy and heat-injured Salmonella cells at 10¹ and 10⁰ CFU/25 g. Finally, the optimized PCR-IMS assay was validated for Salmonella detection in naturally contaminated raw duck wing samples. Under optimal IMS conditions (30 min bead incubation and 3 min magnetic separation times), approximately 85 and 64% of S. Typhimurium cells were captured by Dynabeads® from pure culture and inoculated raw duck wings, respectively. Although Sal and ttr primers exhibited 100% inclusivity and exclusivity for 16 Salmonella spp. and 36 non-Salmonella strains, the Sal primer showed lower LOD (10³ CFU/ml) and higher PCR efficiency (94.1%) than the invA and ttr primers. Moreover, for Sal and invA primers, 100% detection probability on raw duck wings suspension was observed at 10³ and 10⁴ CFU/ml with and without IMS, respectively. Thus, the Sal primer was chosen for further experiments. The optimized PCR-IMS method was significantly (P=0.0011) better at detecting healthy Salmonella cells after 7-h enrichment than the traditional PCR method. However, there was no significant difference between the two methods with a longer enrichment time (14 h). The diagnostic accuracy of PCR-IMS was shown to be 98.3% through the validation study. These results indicate that the optimized PCR-IMS method in this study could provide a sensitive, specific and rapid detection method for Salmonella on raw duck wings, enabling 10-h detection. However, a longer enrichment time could be needed for resuscitation and reliable detection of heat-injured cells. Copyright © 2014 Elsevier B.V. All rights reserved.
Serum Dried Samples to Detect Dengue Antibodies: A Field Study
Maldonado-Rodríguez, Angelica; Rojas-Montes, Othon; Chavez-Negrete, Adolfo; Rojas-Uribe, Magdalena; Posadas-Mondragon, Araceli; Aguilar-Faisal, Leopoldo; Xoconostle-Cazares, Beatriz
2017-01-01
Background Dried blood and serum samples are useful resources for detecting antiviral antibodies. The conditions for elution of the sample need to be optimized for each disease. Dengue is a widespread disease in Mexico which requires continuous surveillance. In this study, we standardized and validated a protocol for the specific detection of dengue antibodies from dried serum spots (DSSs). Methods Paired serum and DSS samples from 66 suspected cases of dengue were collected in a clinic in Veracruz, Mexico. Samples were sent to our laboratory, where the conditions for optimal elution of DSSs were established. The presence of anti-dengue antibodies was determined in the paired samples. Results DSS elution conditions were standardized as follows: 1 h at 4°C in 200 µl of DNase-, RNase-, and protease-free PBS (1x). The optimal volume of DSS eluate to be used in the IgG assay was 40 µl. Sensitivity of 94%, specificity of 93.3%, and kappa concordance of 0.87 were obtained when comparing the antidengue reactivity between DSSs and serum samples. Conclusion DSS samples are useful for detecting anti-dengue IgG antibodies in the field. PMID:28630868
Acoustic-noise-optimized diffusion-weighted imaging.
Ott, Martin; Blaimer, Martin; Grodzki, David M; Breuer, Felix A; Roesch, Julie; Dörfler, Arnd; Heismann, Björn; Jakob, Peter M
2015-12-01
This work was aimed at reducing acoustic noise in diffusion-weighted MR imaging (DWI) that might reach acoustic noise levels of over 100 dB(A) in clinical practice. A diffusion-weighted readout-segmented echo-planar imaging (EPI) sequence was optimized for acoustic noise by utilizing small readout segment widths to obtain low gradient slew rates and amplitudes instead of faster k-space coverage. In addition, all other gradients were optimized for low slew rates. Volunteer and patient imaging experiments were conducted to demonstrate the feasibility of the method. Acoustic noise measurements were performed and analyzed for four different DWI measurement protocols at 1.5T and 3T. An acoustic noise reduction of up to 20 dB(A) was achieved, which corresponds to a fourfold reduction in acoustic perception. The image quality was preserved at the level of a standard single-shot (ss)-EPI sequence, with a 27-54% increase in scan time. The diffusion-weighted imaging technique proposed in this study allowed a substantial reduction in the level of acoustic noise compared to standard single-shot diffusion-weighted EPI. This is expected to afford considerably more patient comfort, but a larger study would be necessary to fully characterize the subjective changes in patient experience.
Comparing implementations of penalized weighted least-squares sinogram restoration
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-01-01
Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306
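A hedged sketch of the iterative strategy favored above: solve the penalized weighted least-squares normal equations (AᵀWA + βR)x = AᵀWy with a conjugate-gradient solver acting on sparse operators. The 1-D blur model, weights, and roughness penalty are simple stand-ins, not the authors' sinogram model or preconditioner.

```python
# Sketch: conjugate-gradient solution of a penalized weighted least-squares problem.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 512
rng = np.random.default_rng(0)

# Simple 1-D "degradation" A (local blur), noise weights W, roughness penalty R.
A = sp.diags([0.25, 0.5, 0.25], [-1, 0, 1], shape=(n, n), format="csr")
w = 1.0 / (1.0 + rng.random(n))                  # per-sample inverse-variance weights
W = sp.diags(w)
D = sp.diags([1.0, -1.0], [0, 1], shape=(n - 1, n), format="csr")
R = (D.T @ D).tocsr()
beta = 0.1

x_true = np.sin(np.linspace(0, 6 * np.pi, n))
y = A @ x_true + rng.normal(0, 0.05, n) / np.sqrt(w)

# Normal-equation operator (A^T W A + beta * R) applied matrix-free.
lhs = LinearOperator((n, n), matvec=lambda v: A.T @ (W @ (A @ v)) + beta * (R @ v))
rhs = A.T @ (W @ y)
x_hat, info = cg(lhs, rhs, maxiter=500)

print(f"CG converged: {info == 0}, RMS error: {np.sqrt(np.mean((x_hat - x_true)**2)):.4f}")
```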
Design of underwater robot lines based on a hybrid automatic optimization strategy
NASA Astrophysics Data System (ADS)
Lyu, Wenjing; Luo, Weilin
2014-09-01
In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG 6.0, GAMBIT 2.4.6 and FLUENT 12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the strategy proposed, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, the minimal resistance is taken as the optimization goal; the wet surface area as the constraint condition; the length of the fore-body, maximum body radius and after-body's minimum radius as the design variables. For the CFD calculation, the RANS equations and the standard turbulence model are used for the flow simulation. By analysis of the simulation results, it is concluded that the platform is efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of the search for solutions.
NASA Astrophysics Data System (ADS)
Chintalapudi, V. S.; Sirigiri, Sivanagaraju
2017-04-01
In power system restructuring, pricing the electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the formulation. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering active and reactive power costs of generators. The formulated cost function is optimized by satisfying equality, inequality and practical constraints using the proposed uniform distributed two-stage particle swarm optimization. The proposed algorithm, a combination of uniform distribution of control variables (to start the iterative process with a good initial value) and a two-stage initialization process (to obtain the best final value in fewer iterations), enhances the convergence characteristics. Results obtained for the considered standard test functions and electrical systems indicate the effectiveness of the proposed algorithm, which obtains more efficient solutions than existing methods. Hence, the proposed method is promising and can be easily applied to optimize power system objectives.
Xing, Haifeng; Hou, Bo; Lin, Zhihui; Guo, Meifeng
2017-10-13
MEMS (Micro Electro Mechanical System) gyroscopes have been widely applied in various fields, but MEMS gyroscope random drift has nonlinear and non-stationary characteristics. Modeling and compensating this random drift has attracted much attention because it can improve the precision of inertial devices. This paper proposes using wavelet filtering to reduce noise in the original MEMS gyroscope data, reconstructing the random drift data with PSR (phase space reconstruction), and establishing a model for the reconstructed data with LSSVM (least squares support vector machine), whose parameters were optimized using CPSO (chaotic particle swarm optimization). Comparing the modeling of MEMS gyroscope random drift by BP-ANN (back propagation artificial neural network) and by the proposed method, the results showed that the latter had better prediction accuracy. After compensation of three groups of MEMS gyroscope random drift data, the standard deviation of the experimental data dropped from 0.00354°/s, 0.00412°/s, and 0.00328°/s to 0.00065°/s, 0.00072°/s and 0.00061°/s, respectively, which demonstrated that the proposed method can reduce the influence of MEMS gyroscope random drift and verified its effectiveness for modeling MEMS gyroscope random drift.
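A minimal sketch of the phase-space reconstruction (delay embedding) step: the scalar drift series is turned into vectors [x(t-(m-1)τ), ..., x(t-τ), x(t)] that can be fed to a regressor such as an LSSVM. The embedding dimension, delay, and synthetic drift series are assumptions for illustration.

```python
# Sketch: delay embedding (phase space reconstruction) of a scalar drift series.
import numpy as np

def delay_embed(x, m=5, tau=3):
    """Return an (N, m) matrix of delay vectors and the aligned one-step-ahead targets."""
    x = np.asarray(x, dtype=float)
    span = (m - 1) * tau
    rows = len(x) - span - 1
    X = np.column_stack([x[i : i + rows] for i in range(0, span + 1, tau)])
    y = x[span + 1 : span + 1 + rows]            # next sample to be predicted
    return X, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    drift = np.cumsum(rng.normal(0, 1e-3, 2000))   # synthetic random-drift-like series
    X, y = delay_embed(drift, m=5, tau=3)
    print(f"feature matrix {X.shape}, targets {y.shape}")
```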
Sreenivasa, Manish; Millard, Matthew; Felis, Martin; Mombaur, Katja; Wolf, Sebastian I.
2017-01-01
Predicting the movements, ground reaction forces and neuromuscular activity during gait can be a valuable asset to the clinical rehabilitation community, both to understand pathology and to plan effective intervention. In this work we use an optimal control method to generate predictive simulations of pathological gait in the sagittal plane. We construct a patient-specific model corresponding to a 7-year-old child with gait abnormalities and identify the optimal spring characteristics of an ankle-foot orthosis that minimize muscle effort. Our simulations include the computation of foot-ground reaction forces, as well as the neuromuscular dynamics, using computationally efficient muscle torque generators and excitation-activation equations. The optimal control problem (OCP) is solved with a direct multiple shooting method. The solution of this problem yields physically consistent synthetic neural excitation commands, muscle activations and whole-body motion. Our simulations produced changes to the gait characteristics similar to those recorded on the patient. The orthosis-equipped model was able to walk faster with more extended knees. Notably, our approach can be easily tuned to simulate weakened muscles, produces physiologically realistic ground reaction forces and smooth muscle activations and torques, and can be implemented on a standard workstation to produce results within a few hours. These results are an important contribution toward bridging the gap between research methods in computational neuromechanics and day-to-day clinical rehabilitation. PMID:28450833
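A toy sketch of direct multiple shooting, the transcription named in the abstract, applied to a double integrator rather than the patient-specific gait model: the decision vector holds the intermediate node states and piecewise-constant controls, and continuity "defects" between shooting intervals are imposed as equality constraints. Problem data, horizon and solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

N, T = 5, 1.0                       # shooting intervals, horizon
dt = T / N

def integrate(x0, u):               # propagate [pos, vel] over one interval
    sol = solve_ivp(lambda t, x: [x[1], u], (0.0, dt), x0, rtol=1e-8)
    return sol.y[:, -1]

def unpack(z):                      # z = [x_1..x_{N-1} node states, u_0..u_{N-1}]
    xs = z[:2 * (N - 1)].reshape(N - 1, 2)
    us = z[2 * (N - 1):]
    return xs, us

def objective(z):                   # control effort
    _, us = unpack(z)
    return dt * np.sum(us ** 2)

def defects(z):                     # continuity between intervals + terminal state
    xs, us = unpack(z)
    nodes = [np.array([0.0, 0.0])] + list(xs)
    d = []
    for k in range(N):
        x_end = integrate(nodes[k], us[k])
        x_next = xs[k] if k < N - 1 else np.array([1.0, 0.0])
        d.append(x_end - x_next)
    return np.concatenate(d)

z0 = np.zeros(2 * (N - 1) + N)
res = minimize(objective, z0, constraints={"type": "eq", "fun": defects}, method="SLSQP")
print("optimal piecewise-constant controls:", unpack(res.x)[1])
```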
Siddiqui, Mohammad Jamshed Ahmad; Ismail, Zhari; Saidan, Noor Hafizoh
2011-01-01
Background: Vinca rosea (Apocynaceae) is one of the most important and high-value medicinal plants, known for its anticancer alkaloids. It is the source of isolated secondary metabolites used in chemotherapy to treat diverse cancers. Several high performance liquid chromatography (HPLC) methods have been developed to quantify the active alkaloids in the plant; the method presented here is intended to serve the purpose of quantifying V. rosea plant extracts in their totality. Objective: To develop and validate a reverse phase (RP)-HPLC method for the simultaneous determination of secondary metabolites, namely alkaloids, from V. rosea plant extracts. Materials and Methods: The quantitative determination was conducted by RP-HPLC equipped with an ultraviolet detector. Optimal separation was achieved by isocratic elution with a mobile phase consisting of methanol:acetonitrile:ammonium acetate buffer (25 mM) with 0.1% triethylamine (15:45:40 v/v) on a column (Zorbax Eclipse plus C18, 250 mm × 4.6 mm; 5 μm). The standard markers (vindoline, vincristine, catharanthine, and vinblastine) were identified by retention time, co-injected with reference standards, and quantified by the external standard method at 297 nm. Results: The precision of the method was confirmed by the relative standard deviation (R.S.D.), which was lower than 2.68%. The recoveries were in the range of 98.09%-108%. The limit of detection (LOD) for each marker alkaloid was lower than 0.20 μg. Extracts from different parts of V. rosea show different marker concentrations: flower samples were high in vinblastine content, the methanol extract from the leaves contained all four alkaloids in good yield, and no significant amounts of the markers were found in the water extracts. Conclusion: The HPLC method established is appropriate for the standardization and quality assurance of V. rosea plant extracts. PMID:21716929
SRS modeling in high power CW fiber lasers for component optimization
NASA Astrophysics Data System (ADS)
Brochu, G.; Villeneuve, A.; Faucher, M.; Morin, M.; Trépanier, F.; Dionne, R.
2017-02-01
A CW kilowatt fiber laser numerical model has been developed that takes into account intracavity stimulated Raman scattering (SRS). It uses the split-step Fourier method, applied iteratively over several cavity round trips. The gain distribution is re-evaluated after each iteration with a standard CW model using an effective FBG reflectivity that quantifies the non-linear spectral leakage. This model explains why spectrally narrow output couplers produce more SRS than wider FBGs, as recently reported by other authors, and constitutes a powerful tool for designing optimized and innovative fiber components that push back the onset of SRS for a given fiber core diameter.
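A generic single split-step Fourier propagation step, of the kind such a cavity model iterates over round trips; it shows only a scalar dispersion/Kerr step, not the authors' Raman-coupled, gain-updated cavity model, and the beta2/gamma values and pulse parameters are illustrative assumptions.

```python
import numpy as np

def split_step(A, dz, dt, beta2=20e-27, gamma=1e-3):
    """One symmetric split-step: half dispersion, full nonlinearity, half dispersion."""
    n = A.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)                       # angular frequency grid
    half_linear = np.exp(0.5j * beta2 * w ** 2 * dz)
    A = np.fft.ifft(half_linear * np.fft.fft(A))                  # half linear step
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)              # nonlinear phase step
    A = np.fft.ifft(half_linear * np.fft.fft(A))                  # second half linear step
    return A

t = np.arange(-512, 512) * 1e-12                                  # 1 ps time grid
A = np.exp(-0.5 * (t / 10e-12) ** 2).astype(complex)              # toy field envelope
A = split_step(A, dz=1.0, dt=1e-12)
```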
Optimal Mortgage Refinancing: A Closed Form Solution.
Agarwal, Sumit; Driscoll, John C; Laibson, David I
2013-06-01
We derive the first closed-form optimal refinancing rule: refinance when the current mortgage interest rate falls below the original rate by at least [Formula: see text]. In this formula W(·) is the Lambert W-function, [Formula: see text], ρ is the real discount rate, λ is the expected real rate of exogenous mortgage repayment, σ is the standard deviation of the mortgage rate, κ/M is the ratio of the tax-adjusted refinancing cost to the remaining mortgage value, and τ is the marginal tax rate. This expression is derived by solving a tractable class of refinancing problems. Our quantitative results closely match those reported by researchers using numerical methods.
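A hedged sketch of evaluating a Lambert-W threshold of this type with scipy. The parameterization below (ψ, φ) is my reconstruction of the published closed form and should be checked against the paper before use; the numerical inputs are purely illustrative.

```python
# Assumed form (to be verified against the paper):
#   threshold = (phi + W(-exp(-phi))) / psi,
#   psi = sqrt(2*(rho + lam)) / sigma,  phi = 1 + psi*(rho + lam)*(kappa/M)/(1 - tau)
import numpy as np
from scipy.special import lambertw

def refinance_threshold(rho, lam, sigma, kappa_over_M, tau):
    psi = np.sqrt(2.0 * (rho + lam)) / sigma
    phi = 1.0 + psi * (rho + lam) * kappa_over_M / (1.0 - tau)
    return (phi + lambertw(-np.exp(-phi)).real) / psi

# illustrative inputs: rho=5%, lambda=10%, sigma=1.09%, kappa/M=1%, tau=28%
print(refinance_threshold(0.05, 0.10, 0.0109, 0.01, 0.28))
```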
Separation Potential for Multicomponent Mixtures: State-of-the Art of the Problem
NASA Astrophysics Data System (ADS)
Sulaberidze, G. A.; Borisevich, V. D.; Smirnov, A. Yu.
2017-03-01
Various approaches used in introducing a separation potential (value function) for multicomponent mixtures have been analyzed. It has been shown that all known potentials do not satisfy the Dirac-Peierls axioms for a binary mixture of uranium isotopes, which makes their practical application difficult. This is mainly due to the impossibility of constructing a "standard" cascade, whose role in the case of separation of binary mixtures is played by the ideal cascade. As a result, the only universal search method for optimal parameters of the separation cascade is their numerical optimization by the criterion of the minimum number of separation elements in it.
Phakthong, Wilaiwan; Liawruangrath, Boonsom; Liawruangrath, Saisunee
2014-12-01
A reversed flow injection (rFI) system was designed and constructed for gallic acid determination. Gallic acid was determined based on the formation of a chromogen between gallic acid and rhodanine, resulting in a colored product with a λmax at 520 nm. The optimum conditions for determining gallic acid were investigated. Optimization of the experimental conditions was first carried out by the so-called univariate method. The conditions obtained were 0.6% (w/v) rhodanine, 70% (v/v) ethanol, 0.9 mol L(-1) NaOH, a 2.0 mL min(-1) flow rate, a 75 μL injection loop and a 600 cm mixing tubing length, respectively. Comparative optimization of the experimental conditions was also carried out by a multivariate (simplex) optimization method. The conditions obtained were 1.2% (w/v) rhodanine, 70% (v/v) ethanol, 1.2 mol L(-1) NaOH, a 2.5 mL min(-1) flow rate, a 75 μL injection loop and a 600 cm mixing tubing length, respectively. The optimum conditions obtained by the former optimization method were mostly similar to those obtained by the latter. The relationship between peak height and gallic acid concentration was linear over the range of 0.1-35.0 mg L(-1), with a detection limit of 0.081 mg L(-1). The relative standard deviations were in the range 0.46-1.96% for 1, 10 and 30 mg L(-1) of gallic acid (n=11). The method has the advantages of simplicity, extremely high selectivity and high precision. The proposed method was successfully applied to the determination of gallic acid in longan samples collected in northern Thailand, without interference from other common phenolic compounds that might be present in the samples. Copyright © 2014 Elsevier B.V. All rights reserved.
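A small sketch of a simplex (Nelder-Mead) search over two of the rFI conditions, the multivariate strategy mentioned in the abstract. peak_height() is a purely hypothetical smooth stand-in for the experimental response; in practice each evaluation would be a measured flow-injection peak.

```python
import numpy as np
from scipy.optimize import minimize

def peak_height(x):
    rhodanine, naoh = x                      # % (w/v), mol/L
    # hypothetical response surface with a maximum near (1.2, 1.2)
    return np.exp(-((rhodanine - 1.2) ** 2 + (naoh - 1.2) ** 2))

res = minimize(lambda x: -peak_height(x), x0=[0.6, 0.9], method="Nelder-Mead")
print("simplex optimum (rhodanine %, NaOH mol/L):", res.x)
```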
Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike
2017-01-01
During recent decades, many niching methods have been proposed and empirically verified on available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, the normalized Mahalanobis distance, and Ursem's hill-valley function, to develop a new tool for multimodal optimization that makes none of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance from the centers of fitter subpopulations and from previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resulting method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes that a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of the robust mean peak ratio. Based on the numerical results using the available and the newly introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered simultaneously.
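A minimal sketch of the taboo-point test: an offspring is rejected if its Mahalanobis distance (normalized by dimension) to the center of a fitter subpopulation or a stored basin, measured under that point's covariance, falls below a repelling radius. Names and the radius value are illustrative, not the RS-CMSA implementation.

```python
import numpy as np

def normalized_mahalanobis(x, center, cov):
    d = x - center
    dist2 = d @ np.linalg.solve(cov, d)
    return np.sqrt(dist2 / len(x))            # normalize by dimension

def is_taboo(x, taboo_points, radius=1.0):
    # taboo_points: list of (center, covariance) pairs marking fitter subpopulations/basins
    return any(normalized_mahalanobis(x, c, cov) < radius for c, cov in taboo_points)

taboo = [(np.zeros(2), np.eye(2))]
print(is_taboo(np.array([0.3, 0.2]), taboo))   # True: offspring too close, would be rejected
```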
A space radiation transport method development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2004-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministic method optimized design. Published by Elsevier Ltd on behalf of COSPAR.
Assessing and minimizing contamination in time of flight based validation data
NASA Astrophysics Data System (ADS)
Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald
2017-10-01
Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
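A sketch of the contamination estimate behind such interval selection: given assumed travel-time densities for gammas and neutrons, the fraction of events inside a candidate neutron-labeled window that are actually gammas is the contamination rate, which can then be traded off against the window's size. The distributional shapes, parameters and gamma fraction are illustrative stand-ins, not fitted Cf-252 values.

```python
import numpy as np
from scipy import stats

gamma_tof = stats.norm(loc=3.0, scale=0.5)       # ns, illustrative
neutron_tof = stats.norm(loc=30.0, scale=8.0)    # ns, illustrative

def neutron_window_contamination(lo, hi, p_gamma=0.9):
    # probability mass of each particle type falling inside the window
    n_mass = neutron_tof.cdf(hi) - neutron_tof.cdf(lo)
    g_mass = gamma_tof.cdf(hi) - gamma_tof.cdf(lo)
    # fraction of events inside the window that are actually gammas
    return (p_gamma * g_mass) / (p_gamma * g_mass + (1 - p_gamma) * n_mass)

print(neutron_window_contamination(15.0, 60.0))
```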
NASA Technical Reports Server (NTRS)
Hill, Charles S.; Oliveras, Ovidio M.
2011-01-01
Evolution of the 3D strain field during ASTM-D-7078 v-notch rail shear tests on 8-ply quasi-isotropic carbon fiber/epoxy laminates was determined by optical photogrammetry using an ARAMIS system. Specimens having non-optimal geometry and minor discrepancies in dimensional tolerances were shown to display non-symmetry and/or stress concentration in the vicinity of the notch relative to a specimen meeting the requirements of the standard, but resulting shear strength and modulus values remained within acceptable bounds of standard deviation. Based on these results, and reported difficulty machining specimens to the required tolerances using available methods, it is suggested that a parametric study combining analytical methods and experiment may provide rationale to increase the tolerances on some specimen dimensions, reducing machining costs, increasing the proportion of acceptable results, and enabling a wider adoption of the test method.
Heinänen, M; Barbas, C
2001-03-01
A method is described for the separation of ambroxol, trans-4-(2-amino-3,5-dibromobenzylamino)cyclohexanol hydrochloride, and benzoic acid by HPLC with UV detection at 247 nm in a syrup pharmaceutical presentation. Optimal conditions were: column Symmetry Shield RP C8, 5 μm, 250 x 4.6 mm, and methanol/(H(3)PO(4) 8.5 mM/triethylamine, pH=2.8) 40:60 v/v. Validation was performed using standards and the pharmaceutical preparation containing the compounds described above. Results from both standards and samples show suitable validation parameters. The pharmaceutical grade substances were subjected to factors that could influence their chemical stability, and the resulting reaction mixtures were analysed to evaluate the capability of the method to separate degradation products. Degradation products did not interfere with the determination of the substances tested by the assay.
Shama, S A
2002-11-07
Simple and rapid spectrophotometric methods have been developed for the microdetermination of phenylephrine HCl (I) and orphenadrine citrate (II). The proposed methods are based on the formation of ion-pair complexes between the examined drugs and alizarine (Aliz), alizarine red S (ARS), alizarine yellow G (AYG) or quinalizarine (Qaliz), which can be measured at the optimum lambda(max). The optimization of the reaction conditions is investigated. Beer's law is obeyed in the concentration range 2-36 microgram ml(-1), whereas the optimum concentration range adopted from Ringbom plots was 3.5-33 microgram ml(-1). The molar absorptivity, Sandell sensitivity, and detection limit are also calculated. The correlation coefficient was ≥0.9988 (n=6), with a relative standard deviation of ≤1.7% for six determinations of 20 microgram ml(-1). The proposed methods are successfully applied to the determination of drugs I and II in their dosage forms using the standard addition technique.
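A short sketch of the standard addition calculation named at the end of the abstract: absorbance is regressed on the amount of added standard, and the unknown concentration in the dosage form is read off as the magnitude of the x-intercept. The data values are illustrative, not the reported measurements.

```python
import numpy as np

added = np.array([0.0, 5.0, 10.0, 15.0, 20.0])      # microgram/mL of added standard
absorb = np.array([0.21, 0.32, 0.43, 0.55, 0.66])   # measured absorbance (illustrative)

slope, intercept = np.polyfit(added, absorb, 1)
c_unknown = intercept / slope                        # |x-intercept| = sample concentration
print(f"estimated concentration: {c_unknown:.2f} microgram/mL")
```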
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papp, D; Unkelbach, J
2014-06-01
Purpose: Non-uniform fractionation, i.e. delivering distinct dose distributions in two subsequent fractions, can potentially improve outcomes by increasing biological dose to the target without increasing dose to healthy tissues. This is possible if both fractions deliver a similar dose to normal tissues (exploiting the fractionation effect) but high single-fraction doses to subvolumes of the target (hypofractionation). Optimization of such treatment plans can be formulated using biologically equivalent dose (BED), but leads to intractable nonconvex optimization problems. We introduce a novel optimization approach to address this challenge. Methods: We first optimize a reference IMPT plan using standard techniques that delivers a homogeneous target dose in both fractions. The method then divides the pencil beams into two sets, which are assigned to either fraction one or fraction two. The total intensity of each pencil beam, and therefore the physical dose, remains unchanged compared to the reference plan. The objectives are to maximize the mean BED in the target and to minimize the mean BED in normal tissues, which is a quadratic function of the pencil beam weights. The optimal reassignment of pencil beams to one of the two fractions is formulated as a binary quadratic optimization problem. A near-optimal solution to this problem can be obtained by convex relaxation and randomized rounding. Results: The method is demonstrated for a large arteriovenous malformation (AVM) case treated in two fractions. The algorithm yields a treatment plan which delivers a high dose to parts of the AVM in one of the fractions, but similar doses in both fractions to the normal brain tissue adjacent to the AVM. Using the approach, the mean BED in the target was increased by approximately 10% compared to what would have been possible with a uniform reference plan for the same normal tissue mean BED.
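A generic sketch of the relax-and-round step: a relaxed (fractional) beam-to-fraction assignment is rounded by independently assigning each pencil beam to fraction one with its relaxed probability, keeping the best of several draws under a user-supplied objective. bed_objective() and the probabilities are hypothetical placeholders; the convex relaxation itself is not reproduced here.

```python
import numpy as np

def randomized_rounding(p_relaxed, bed_objective, n_draws=100, seed=0):
    rng = np.random.default_rng(seed)
    best_x, best_val = None, -np.inf
    for _ in range(n_draws):
        x = (rng.random(p_relaxed.size) < p_relaxed).astype(int)   # 1 = fraction one
        val = bed_objective(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

p = np.array([0.8, 0.2, 0.6, 0.4])                                 # relaxed assignment
print(randomized_rounding(p, bed_objective=lambda x: x.sum(), n_draws=10))
```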
Qi, Feifei; Jian, Ningge; Qian, Liangliang; Cao, Weixin; Xu, Qian; Li, Jian
2017-09-01
A simple and efficient three-step sample preparation method was developed and optimized for the simultaneous analysis of illegal anionic and cationic dyes (acid orange 7, metanil yellow, auramine-O, and chrysoidine) in food samples. A novel solid-phase extraction (SPE) procedure based on a nanofiber mat (NFsM) was proposed after solvent extraction and freeze-salting out purification. The preferred SPE sorbent was selected from five functionalized NFsMs by orthogonal experimental design, and the optimization of SPE parameters was achieved through response surface methodology (RSM) based on the Box-Behnken design (BBD). Under the optimal conditions, the target analytes could be completely adsorbed by polypyrrole-functionalized polyacrylonitrile NFsM (PPy/PAN NFsM), and the eluent was directly analyzed by high-performance liquid chromatography-diode array detection (HPLC-DAD). The limits of detection (LODs) were between 0.002 and 0.01 mg kg(-1), and satisfactory linearity with correlation coefficients (R > 0.99) for each dye in all samples was achieved. Compared with the Chinese standard method and the published methods, the proposed method was simplified greatly with a much lower requirement of sorbent (5.0 mg) and organic solvent (2.8 mL) and a higher sample preparation speed (10 min/sample), while higher recovery (83.6-116.5%) and precision (RSDs < 7.1%) were obtained. With this developed method, we have successfully detected illegal ionic dyes in three common representative foods: yellow croaker, soybean products, and chili seasonings. Graphical abstract Schematic representation of the process of the three-step sample preparation.
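A sketch of the response-surface step in isolation: illustrative recoveries from a two-factor subset of coded design runs are fitted to a full quadratic model by least squares, the kind of model an RSM/Box-Behnken optimization interrogates for its optimum. The design points and responses are made-up stand-ins, not the published data.

```python
import numpy as np

# coded factor levels (-1, 0, +1) for two SPE parameters and illustrative recoveries (%)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]])
y = np.array([82, 90, 88, 95, 85, 93, 86, 92, 97], dtype=float)

# design matrix for y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("quadratic response-surface coefficients:", coef)
```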
Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R.; Afshar, Baharak; Underwood, Anthony; Harrison, Timothy G.
2016-01-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current “gold standard” typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard “typing panel,” previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. PMID:27280420
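A small sketch of the index of discrimination quoted for each typing scheme, computed as Simpson's index of diversity over the type counts: D = 1 - Σ n_j(n_j - 1) / (N(N - 1)). The labels in the example are illustrative, not the ESGLI panel.

```python
from collections import Counter

def discrimination_index(type_labels):
    counts = Counter(type_labels).values()
    n = len(type_labels)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

print(discrimination_index(["ST1", "ST1", "ST47", "ST62", "ST62", "ST23"]))
```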
Dynamic optimization case studies in DYNOPT tool
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
Dynamic programming is typically applied to optimization problems. As analytical solutions are generally very difficult to obtain, dedicated software tools are widely used. These software packages are often third-party products bound to standard simulation software tools on the market. As typical examples of such tools, TOMLAB and DYNOPT can be effectively applied to the solution of dynamic programming problems. DYNOPT is presented in this paper due to its licensing policy (a free product under the GPL) and its simplicity of use. DYNOPT is a set of MATLAB functions for determining an optimal control trajectory, given a description of the process and the cost to be minimized, subject to equality and inequality constraints, using the method of orthogonal collocation on finite elements. The actual optimal control problem is solved by complete parameterization of both the control and the state profile vectors. It is assumed that the optimized dynamic model may be described by a set of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs). This collection of functions extends the capability of the MATLAB Optimization Toolbox. The paper introduces the use of DYNOPT for dynamic optimization problems by means of case studies on selected laboratory physical educational models.
Conventional and molecular diagnostic strategies for prosthetic joint infections.
Esteban, Jaime; Sorlí, Luisa; Alentorn-Geli, Eduard; Puig, Lluís; Horcajada, Juan P
2014-01-01
An accurate diagnosis of prosthetic joint infection (PJI) is the mainstay for an optimized clinical management. This review analyzes different diagnostic strategies of PJI, with special emphasis on molecular diagnostic tools and their current and future applications. Until now, the culture of periprosthetic tissues has been considered the gold standard for the diagnosis of PJI. However, sonication of the implant increases the sensitivity of those cultures and is being increasingly adopted by many centers. Molecular diagnostic methods compared with intraoperative tissue culture, especially if combined with sonication, have a higher sensitivity, a faster turnaround time and are not influenced by previous antimicrobial therapy. However, they still lack a system for detection of antimicrobial susceptibility, which is crucial for an optimized and less toxic therapy of PJI. More studies are needed to assess the clinical value of these methods and their cost-effectiveness.
Accurate EPR radiosensitivity calibration using small sample masses
NASA Astrophysics Data System (ADS)
Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.
2000-03-01
We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.
A new MUSIC electromagnetic imaging method with enhanced resolution for small inclusions
NASA Astrophysics Data System (ADS)
Zhong, Yu; Chen, Xudong
2008-11-01
This paper investigates the influence of the test dipole on the resolution of the multiple signal classification (MUSIC) imaging method applied to the electromagnetic inverse scattering problem of determining the locations of a collection of small objects embedded in a known background medium. Based on an analysis of the induced electric dipoles in the eigenstates, an algorithm is proposed to determine the test dipole that generates a pseudo-spectrum with enhanced resolution. The amplitudes in the three directions of the optimal test dipole are not necessarily in phase, i.e., the optimal test dipole may not correspond to a physical direction in real three-dimensional space. In addition, the proposed test-dipole-searching algorithm is able to deal with some special scenarios, arising from the shapes and materials of the objects, to which the standard MUSIC does not apply.
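A generic sketch of the MUSIC pseudo-spectrum the abstract refers to: the multistatic response matrix is decomposed, a test (Green's-function) vector is projected onto the noise subspace, and peaks of the reciprocal projection indicate scatterer locations. The random matrix and vector here are stand-ins for a scattering simulation, and the test-dipole optimization itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in multistatic response matrix (16 antennas); in practice this comes from measured data
K = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
U, s, _ = np.linalg.svd(K)
n_scatterers = 3
U_noise = U[:, n_scatterers:]                      # noise subspace

def pseudo_spectrum(g):
    g = g / np.linalg.norm(g)
    return 1.0 / np.linalg.norm(U_noise.conj().T @ g) ** 2

g_test = rng.standard_normal(16) + 1j * rng.standard_normal(16)
print(pseudo_spectrum(g_test))
```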
Intracycle angular velocity control of cross-flow turbines
NASA Astrophysics Data System (ADS)
Strom, Benjamin; Brunton, Steven L.; Polagye, Brian
2017-08-01
Cross-flow turbines, also known as vertical-axis turbines, are attractive for power generation from wind and water currents. Some cross-flow turbine designs optimize unsteady fluid forces and maximize power output by controlling blade kinematics within one rotation. One established method is to dynamically pitch the blades. Here we introduce a mechanically simpler alternative: optimize the turbine rotation rate as a function of angular blade position. We demonstrate experimentally that this approach results in a 59% increase in power output over standard control methods. Analysis of fluid forcing and blade kinematics suggest that power increase is achieved through modification of the local flow conditions and alignment of fluid force and rotation rate extrema. The result is a low-speed, structurally robust turbine that achieves high efficiency and could enable a new generation of environmentally benign turbines for renewable power generation.
Speeding up local correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kats, Daniel
2014-12-28
We present two techniques that can substantially speed up the local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.
Chen, Wen; Zhu, Ming-Dong; Yan, Xiao-Lan; Lin, Li-Jun; Zhang, Jian-Feng; Li, Li; Wen, Li-Yong
2011-06-01
To understand and evaluate the quality of feces examination for schistosomiasis in province-level laboratories of Zhejiang Province, stool samples were examined single-blind by the stool hatching method and the sediment detection method. In the three quality control assessments in 2006, 2008 and 2009, most laboratories finished the examinations on time, and the concordance rates of the detections were 88.9%, 100% and 93.9%, respectively. The province-level laboratories for schistosomiasis feces examination in Zhejiang Province are becoming standardized, and the techniques of schistosomiasis feces examination are gradually being optimized.
[Evaluation of inflammatory cells (tumor infiltrating lymphocytes - TIL) in malignant melanoma].
Dundr, Pavel; Němejcová, Kristýna; Bártů, Michaela; Tichá, Ivana; Jakša, Radek
2018-01-01
The evaluation of inflammatory infiltrate (tumor infiltrating lymphocytes - TIL) should be a standard part of biopsy examination for malignant melanoma. Currently, the most commonly used assessment method according to Clark is not optimal and there have been attempts to find an alternative system. Here we present an overview of possible approaches involving five different evaluation methods based on hematoxylin-eosin staining, including the recent suggestion of unified TIL evaluation method for all solid tumors. The issue of methodology, prognostic and predictive significance of TIL determination as well as the importance of immunohistochemical subtyping of inflammatory infiltrate is discussed.
Chen, Yonggang; Meng, Junhua; Zou, Jili; An, Jing
2015-06-01
Hordenine is an active compound found in several foods, herbs and beer. In this work, a novel sorbent was fabricated for the selective solid-phase extraction (SPE) of hordenine from biological samples. The organic polymer sorbent was synthesized in one step in the plastic barrel of a syringe from a pre-polymerization solution consisting of methacrylic acid (MAA), 4-vinylphenylboronic acid (VB) and ethylene glycol dimethacrylate (EGDMA). The preparation conditions were optimized to generate a poly(MAA-VB-EGDMA) monolith with good permeability. The monolith exhibited good enrichment efficiency towards hordenine. Using tyramine as the internal standard, a poly(MAA-VB-EGDMA)-based SPE-HPLC method was established for the analysis of hordenine. Conditions for SPE, including the volume of eluting solvent, the pH of the sample solution, the sampling rate and the sample volume, were optimized. The proposed SPE-HPLC method showed good linearity (R(2) = 0.9992) within 10-2000 ng/mL and a detection limit of 3 ng/mL, which is significantly more sensitive than reported methods. The method was also applied to plasma and urine samples; a good capability for removing matrices was observed, while hordenine at low content was well extracted and enriched. The recoveries were from 90.6 to 94.7% and from 89.3 to 91.5% for the spiked plasma and urine samples, respectively, with relative standard deviations <4.7%. Copyright © 2014 John Wiley & Sons, Ltd.
SKYNET: an efficient and robust neural network training tool for machine learning in astronomy
NASA Astrophysics Data System (ADS)
Graff, Philip; Feroz, Farhan; Hobson, Michael P.; Lasenby, Anthony
2014-06-01
We present the first public release of our generic neural network training algorithm, called SKYNET. This efficient and robust machine learning tool is able to train large and deep feed-forward neural networks, including autoencoders, for use in a wide range of supervised and unsupervised learning applications, such as regression, classification, density estimation, clustering and dimensionality reduction. SKYNET uses a `pre-training' method to obtain a set of network parameters that has empirically been shown to be close to a good solution, followed by further optimization using a regularized variant of Newton's method, where the level of regularization is determined and adjusted automatically; the latter uses second-order derivative information to improve convergence, but without the need to evaluate or store the full Hessian matrix, by using a fast approximate method to calculate Hessian-vector products. This combination of methods allows for the training of complicated networks that are difficult to optimize using standard backpropagation techniques. SKYNET employs convergence criteria that naturally prevent overfitting, and also includes a fast algorithm for estimating the accuracy of network outputs. The utility and flexibility of SKYNET are demonstrated by application to a number of toy problems, and to astronomical problems focusing on the recovery of structure from blurred and noisy images, the identification of gamma-ray bursters, and the compression and denoising of galaxy images. The SKYNET software, which is implemented in standard ANSI C and fully parallelized using MPI, is available at http://www.mrao.cam.ac.uk/software/skynet/.
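A small sketch of the Hessian-free ingredient mentioned in the abstract: a Hessian-vector product obtained from finite differences of gradients, which a conjugate-gradient Newton step can use without forming or storing the Hessian. The quadratic-plus-quartic objective is a stand-in for a network loss, not SKYNET's regularized training objective.

```python
import numpy as np

def grad(w):
    # gradient of the toy objective sum(w**2 + w**4)
    return 2.0 * w + 4.0 * w ** 3

def hessian_vector_product(w, v, eps=1e-6):
    # central finite difference of gradients approximates H(w) @ v
    return (grad(w + eps * v) - grad(w - eps * v)) / (2.0 * eps)

w = np.array([1.0, -2.0, 0.5])
v = np.array([0.0, 1.0, 0.0])
print(hessian_vector_product(w, v))   # ~ [0, 2 + 12*w[1]**2, 0] for this diagonal Hessian
```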
Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.
Chung, SungWon; Lu, Ying; Henry, Roland G
2006-11-01
Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
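A minimal sketch of the model-based (residual) resampling idea behind the residual bootstrap, shown on a generic ordinary least-squares fit rather than the diffusion tensor signal equation: residuals from the fit are resampled and added back to the fitted values, the model is refit, and the spread of the refitted parameters estimates their standard errors. Data and model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.5 * rng.standard_normal(50)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat

boot_betas = []
for _ in range(1000):
    y_star = X @ beta_hat + rng.choice(residuals, size=len(y), replace=True)
    b, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    boot_betas.append(b)

print("residual-bootstrap standard errors:", np.std(boot_betas, axis=0))
```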
Taher, Mohammad Ali; Pourmohammad, Fatemeh; Fazelirad, Hamid
2015-12-01
In the present work, an electrothermal atomic absorption spectrometric method has been developed for the determination of ultra-trace amounts of rhodium after adsorption of its 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol/tetraphenylborate ion associated complex at the surface of alumina. Several factors affecting the extraction efficiency such as the pH, type of eluent, sample and eluent flow rates, sorption capacity of alumina and sample volume were investigated and optimized. The relative standard deviation for eight measurements of 0.1 ng/mL of rhodium was ±6.3%. In this method, the detection limit was 0.003 ng/mL in the original solution. The sorption capacity of alumina and the linear range for Rh(III) were evaluated as 0.8 mg/g and 0.015-0.45 ng/mL in the original solution, respectively. The proposed method was successfully applied for the extraction and determination of rhodium content in some food and standard samples with high recovery values. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
2012-01-01
Background Multiplex cytometric bead assays (CBA) have a number of advantages over ELISA for antibody testing, but little information is available on the standardization and validation of antibody CBAs to multiple Plasmodium falciparum antigens. The present study set out to determine optimal parameters for multiplex testing of antibodies to P. falciparum antigens, and to compare results of multiplex CBA to ELISA. Methods Antibodies to ten recombinant P. falciparum antigens were measured by CBA and ELISA in samples from 30 individuals from a malaria endemic area of Kenya and compared to known positive and negative control plasma samples. Optimal antigen amounts, monoplex vs multiplex testing, plasma dilution, optimal buffer, and the number of beads required were assessed for CBA testing, and results from CBA and ELISA testing were compared. Results Optimal amounts for CBA antibody testing differed according to antigen. Results for monoplex CBA testing correlated strongly with multiplex testing for all antigens (r = 0.88-0.99, P values from <0.0001 to 0.004), and antibodies to variants of the same antigen were accurately distinguished within a multiplex reaction. Plasma dilutions of 1:100 or 1:200 were optimal for all antigens for CBA testing. Plasma diluted in a buffer containing 0.05% sodium azide, 0.5% polyvinylalcohol, and 0.8% polyvinylpyrrolidone had the lowest background activity. CBA median fluorescence intensity (MFI) values with 1,000 antigen-conjugated beads/well did not differ significantly from MFI with 5,000 beads/well. CBA and ELISA results correlated well for all antigens except apical membrane antigen-1 (AMA-1). CBA testing produced a greater range of values in samples from malaria endemic areas and less background reactivity for blank samples than ELISA. Conclusion With optimization, CBA may be the preferred method of testing for antibodies to P. falciparum antigens, as CBA can test for antibodies to multiple recombinant antigens from a single plasma sample and produces a greater range of values in positive samples and lower background readings for blank samples than ELISA. PMID:23259607