Sample records for optimal order error

  1. Angular rate optimal design for the rotary strapdown inertial navigation system.

    PubMed

    Yu, Fei; Sun, Qian

    2014-04-22

    Due to its characteristics of high precision over a long duration, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key technologies, the rotating angular rate seriously influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS.

  2. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes the error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  3. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.

  4. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain the optimal error estimates in L∞(H1) and L2(H1) norms, and suboptimal error estimates in L∞(L2) norm, with linear elements, and optimal error estimates in L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.

  5. Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System

    PubMed Central

    Yu, Fei; Sun, Qian

    2014-01-01

    Due to its characteristics of high precision over a long duration, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key technologies, the rotating angular rate seriously influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115

  6. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  7. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian D.

    2009-01-01

    This paper establishes a relationship between the polishing process parameters and the generation of mid spatial-frequency error. Considerations for the polishing lap design and for optimization of the process parameters (speeds, stroke, etc.) in order to keep the residual mid spatial-frequency error to a minimum are also presented.

  8. Targeted ENO schemes with tailored resolution property for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-11-01

    In this paper, we extend the range of targeted ENO (TENO) schemes (Fu et al. (2016) [18]) by proposing an eighth-order TENO8 scheme. A general formulation to construct the high-order undivided difference τK within the weighting strategy is proposed. With the underlying scale-separation strategy, sixth-order accuracy for τK in the smooth solution regions is designed for good performance and robustness. Furthermore, a unified framework to optimize independently the dispersion and dissipation properties of high-order finite-difference schemes is proposed. The new framework enables tailoring of dispersion and dissipation as a function of wavenumber. The optimal linear scheme has minimum dispersion error and a dissipation error that satisfies a dispersion-dissipation relation. Employing the optimal linear scheme, a sixth-order TENO8-opt scheme is constructed. A set of benchmark cases involving strong discontinuities and broadband fluctuations is computed to demonstrate the high-resolution properties of the new schemes.

  9. Image Halftoning Using Optimized Dot Diffusion

    DTIC Science & Technology

    1998-01-01

    ppvnath@sys.caltech.edu ABSTRACT The dot diffusion method for digital halftoning has the advantage of parallelism unlike the error diffusion ...digital halftoning : ordered dither [1], error diffusion [2], neural-net based methods [8], and more recently direct binary search (DBS) [7]. Ordered...from periodic patterns. On the other hand error diffused halftones do not suffer from periodicity and offer blue noise characteristic [3] which is

  10. Optimal preview control for a linear continuous-time stochastic control system in finite-time horizon

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi

    2017-01-01

    This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system in finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, in order to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is constructed. This transforms the tracking problem of optimal preview control of the linear stochastic control system into the optimal output tracking problem of the augmented error system. With the method of dynamic programming from the theory of stochastic control, the optimal controller of the augmented error system with previewable signals, which is equal to the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.

  11. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. 2-Channel QMF is also designed with particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in frequency domain as sum of L2 norm of error in passband, stopband and transition band at quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives best result in terms of peak reconstruction error and transition band error while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as a best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
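    A sketch of how the frequency-domain objective described above could be evaluated for a candidate linear-phase prototype. The band edges, grid density, and 16-tap length are assumptions; the hybrid gradient and nature-inspired optimizers of the record would minimize this scalar rather than use the windowed-sinc guess shown here.

    ```python
    import numpy as np

    def qmf_objective(h_half, wp=0.4 * np.pi, ws=0.6 * np.pi, ngrid=512):
        """Scalar design error for a linear-phase (even-symmetric) lowpass prototype:
        L2 passband error + L2 stopband error + error at the quadrature frequency
        pi/2, where |H0|^2 should equal 1/2."""
        h = np.concatenate([h_half, h_half[::-1]])          # even-length symmetric FIR
        w = np.linspace(0.0, np.pi, ngrid)
        H = np.abs(np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h)
        pb, sb = w <= wp, w >= ws
        phi_p = np.trapz((H[pb] - 1.0) ** 2, w[pb])          # passband error
        phi_s = np.trapz(H[sb] ** 2, w[sb])                  # stopband error
        Hq = np.abs(np.exp(-1j * (np.pi / 2) * np.arange(len(h))) @ h)
        phi_t = (Hq ** 2 - 0.5) ** 2                         # quadrature (transition) error
        return phi_p + phi_s + phi_t

    # Illustrative candidate: half of a 16-tap windowed-sinc lowpass
    n = np.arange(8)
    cand = np.sinc((n - 7.5) / 2.0) * np.hamming(16)[:8] / 2.0
    print("objective value:", qmf_objective(cand))
    ```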

  12. Image Halftoning and Inverse Halftoning for Optimized Dot Diffusion

    DTIC Science & Technology

    1998-01-01

    systems.caltech.edu, ppvnath@sys.caltech.edu ABSTRACT The dot diffusion method for digital halftoning has the advantage of parallelism unlike the error ... halftoning : ordered dither [3], error diffusion [4], neural-net based methods [2], and more recently direct binary search (DBS) [10]. Ordered dithering is a...patterns. On the other hand error diffused halftones do not suffer from periodicity and offer blue noise characteristic [11] which is found to be

  13. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    PubMed

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
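    TAPIR is Look-Locker based; a generic Look-Locker sketch (not the TAPIR-specific parameter recipes of the record) fits the three-parameter recovery S(t) = A - B*exp(-t/T1*) and applies the standard correction T1 = T1*(B/A - 1). All numerical values below are assumed.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def look_locker(t, A, B, T1_star):
        # Apparent recovery measured during repeated small-angle readouts after inversion
        return A - B * np.exp(-t / T1_star)

    # Synthetic data with assumed values: T1* = 600 ms, B/A = 2.5, so true T1 = 900 ms
    t = np.linspace(20.0, 3000.0, 40)                  # sampling times [ms]
    rng = np.random.default_rng(0)
    signal = look_locker(t, 1.0, 2.5, 600.0) + rng.normal(0.0, 0.01, t.size)

    (A, B, T1_star), _ = curve_fit(look_locker, t, signal, p0=(1.0, 2.0, 500.0))
    T1 = T1_star * (B / A - 1.0)                       # standard Look-Locker correction
    print(f"apparent T1* = {T1_star:.0f} ms, corrected T1 = {T1:.0f} ms")
    ```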

  14. Spin Contamination Error in Optimized Geometry of Singlet Carbene (1A1) by Broken-Symmetry Method

    NASA Astrophysics Data System (ADS)

    Kitagawa, Yasutaka; Saito, Toru; Nakanishi, Yasuyuki; Kataoka, Yusuke; Matsui, Toru; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi

    2009-10-01

    Spin contamination errors of a broken-symmetry (BS) method in optimized structural parameters of the singlet methylene (1A1) molecule are quantitatively estimated for the Hartree-Fock (HF) method, post-HF methods (CID, CCD, MP2, MP3, MP4(SDQ)), and a hybrid DFT (B3LYP) method. For this purpose, the optimized geometry by the BS method is compared with that of an approximate spin projection (AP) method. The difference between the BS and the AP methods is about 10-20° in the HCH angle. In order to examine the basis set dependency of the spin contamination error, calculated results by STO-3G, 6-31G*, and 6-311++G** are compared. The error depends on the basis sets, but the tendencies of each method are classified into two types. Calculated energy splitting values between the triplet and the singlet states (ST gap) indicate that the contamination of the stable triplet state makes the BS singlet solution stable and the ST gap becomes small. The order of magnitude of the spin contamination error in the ST gap is estimated to be 10^-1 eV.

  15. Optimized tomography of continuous variable systems using excitation counting

    NASA Astrophysics Data System (ADS)

    Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang

    2016-11-01

    We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.

  16. Computer programming: quality and safety for neonatal parenteral nutrition orders.

    PubMed

    Huston, Robert K; Markell, Andrea M; McCulley, Elizabeth A; Marcus, Matthew J; Cohen, Howard S

    2013-08-01

    Computerized software programs reduce errors and increase consistency when ordering parenteral nutrition (PN). The purpose of this study was to evaluate the effectiveness of our computerized neonatal PN calculator ordering program in reducing errors and optimizing nutrient intake. This was a retrospective study of infants requiring PN during the first 2-3 weeks of life. Caloric, protein, calcium, and phosphorus intakes; days above and below amino acid (AA) goals; and PN ordering errors were recorded. Infants were divided into 3 groups by birth weight for analysis: ≤1000 g, 1001-1500 g, and >1500 g. Intakes and outcomes of infants before (2007) vs after (2009) implementation of the calculator for each group were compared. There were no differences in caloric, protein, or phosphorus intakes in 2007 vs 2009 in any group. Mean protein intakes were 97%-99% of goal for ≤1000-g and 1001- to 1500-g infants in 2009 vs 87% of goal for each group in 2007. In 2007, 7.6 per 100 orders were above and 11.5 per 100 were below recommended AA intakes. Calcium intakes were higher in 2009 vs 2007 in ≤1000-g (46.6 ± 6.1 vs 39.5 ± 8.0 mg/kg/d, P < .001) and >1500-g infants (50.6 ± 7.4 vs 39.9 ± 8.3 mg/kg/d, P < .001). Ordering errors were reduced from 4.6 per 100 in 2007 to 0.1 per 100 in 2009. Our study reaffirms that computerized ordering systems can increase the quality and safety of neonatal PN orders. Calcium and AA intakes were optimized and ordering errors were minimized using the computer-based ordering program.

  17. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  18. Computer program to minimize prediction error in models from experiments with 16 hypercube points and 0 to 6 center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1982-01-01

    A previous report described a backward deletion procedure of model selection that was optimized for minimum prediction error and which used a multiparameter combination of the F-distribution and an order statistics distribution of Cochran's. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.

  19. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  20. Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions

    DOE PAGES

    Carlsson, Boris; Forssen, Christian; Fahlin Strömberg, D.; ...

    2016-02-24

    Chiral effective field theory (χEFT) provides a systematic approach to describe low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are in general small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
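    A generic sketch of the statistical error propagation compared in the record: first-order (Jacobian) propagation of an assumed LEC covariance versus Monte Carlo sampling, for a toy observable standing in for the paper's nuclear observables. All values are illustrative assumptions.

    ```python
    import numpy as np

    # Hypothetical: 3 low-energy constants with an assumed covariance from a fit
    lec_opt = np.array([1.2, -0.5, 0.8])
    cov = np.array([[1e-4, 2e-5, 0.0],
                    [2e-5, 4e-4, 1e-5],
                    [0.0,  1e-5, 9e-5]])

    def observable(c):
        # Toy nonlinear observable standing in for e.g. a binding energy
        return c[0] ** 2 + 0.3 * c[0] * c[1] + np.sin(c[2])

    # Linear (first-order) propagation: var(O) ~= J Sigma J^T, with J from central differences
    eps = 1e-6
    J = np.array([(observable(lec_opt + eps * np.eye(3)[i]) -
                   observable(lec_opt - eps * np.eye(3)[i])) / (2 * eps) for i in range(3)])
    sigma_lin = np.sqrt(J @ cov @ J)

    # Monte Carlo propagation: sample LECs from the same covariance
    rng = np.random.default_rng(0)
    samples = rng.multivariate_normal(lec_opt, cov, size=20000)
    sigma_mc = np.std([observable(c) for c in samples])

    print(f"linear propagation: {sigma_lin:.4e},  Monte Carlo: {sigma_mc:.4e}")
    ```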

  1. Optimal least-squares finite element method for elliptic problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1991-01-01

    An optimal least squares finite element method is proposed for two dimensional and three dimensional elliptic problems and its advantages are discussed over the mixed Galerkin method and the usual least squares finite element method. In the usual least squares finite element method, the second order equation (-∇·(∇u) + u = f) is recast as a first order system (-∇·p + u = f, ∇u - p = 0). The error analysis and numerical experiment show that, in this usual least squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to get an optimal least squares method, the irrotationality condition ∇×p = 0 should be included in the first order system.

  2. Design of minimum multiplier fractional order differentiator based on lattice wave digital filter.

    PubMed

    Barsainya, Richa; Rawat, Tarun Kumar; Kumar, Manjeet

    2017-01-01

    In this paper, a novel design of fractional order differentiator (FOD) based on lattice wave digital filter (LWDF) is proposed which requires minimum number of multiplier for its structural realization. Firstly, the FOD design problem is formulated as an optimization problem using the transfer function of lattice wave digital filter. Then, three optimization algorithms, namely, genetic algorithm (GA), particle swarm optimization (PSO) and cuckoo search algorithm (CSA) are applied to determine the optimal LWDF coefficients. The realization of FOD using LWD structure increases the design accuracy, as only N number of coefficients are to be optimized for Nth order FOD. Finally, two design examples of 3rd and 5th order lattice wave digital fractional order differentiator (LWDFOD) are demonstrated to justify the design accuracy. The performance analysis of the proposed design is carried out based on magnitude response, absolute magnitude error (dB), root mean square (RMS) magnitude error, arithmetic complexity, convergence profile and computation time. Simulation results are attained to show the comparison of the proposed LWDFOD with the published works and it is observed that an improvement of 29% is obtained in the proposed design. The proposed LWDFOD approximates the ideal FOD and surpasses the existing ones reasonably well in mid and high frequency range, thereby making the proposed LWDFOD a promising technique for the design of digital FODs. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. On higher order discrete phase-locked loops.

    NASA Technical Reports Server (NTRS)

    Gill, G. S.; Gupta, S. C.

    1972-01-01

    An exact mathematical model is developed for a discrete loop of a general order particularly suitable for digital computation. The deterministic response of the loop to the phase step and the frequency step is investigated. The design of the digital filter for the second-order loop is considered. Use is made of the incremental phase plane to study the phase error behavior of the loop. The model of the noisy loop is derived and the optimization of the loop filter for minimum mean-square error is considered.

  4. Quick-low-density parity check and dynamic threshold voltage optimization in 1X nm triple-level cell NAND flash memory with comprehensive analysis of endurance, retention-time, and temperature variation

    NASA Astrophysics Data System (ADS)

    Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken

    2016-08-01

    NAND flash memory’s reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO) has two parts, adaptive V Ref shift (AVS) and V TH space control (VSC). AVS reduces read error and latency by adaptively optimizing the reference voltage (V Ref) based on temperature, W/E cycles and retention-time. AVS stores the optimal V Ref’s in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V TH states. DVO reduces BER by 80%.

  5. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.

  6. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.

  7. Design of an optimal preview controller for linear discrete-time descriptor systems with state delay

    NASA Astrophysics Data System (ADS)

    Cao, Mengjuan; Liao, Fucheng

    2015-04-01

    In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.

  8. Estimating linear-nonlinear models using Rényi divergences

    PubMed Central

    Kouh, Minjoon; Sharpee, Tatyana O.

    2009-01-01

    This paper compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data. PMID:19568981

  9. Estimating linear-nonlinear models using Renyi divergences.

    PubMed

    Kouh, Minjoon; Sharpee, Tatyana O

    2009-01-01

    This article compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramer-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data.

  10. Numerical Analysis of an H1-Galerkin Mixed Finite Element Method for Time Fractional Telegraph Equation

    PubMed Central

    Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

    2014-01-01

    We discuss and analyze an H1-Galerkin mixed finite element (H1-GMFE) method to look for the numerical solution of time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H1-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using the finite difference methods and approximate the spatial direction by applying the H1-GMFE method. Based on the discussion on the theoretical error analysis in L2-norm for the scalar unknown and its gradient in one dimensional case, we obtain the optimal order of convergence in space-time direction. Further, we also derive the optimal error results for the scalar unknown in H1-norm. Moreover, we derive and analyze the stability of H1-GMFE scheme and give the results of a priori error estimates in two- or three-dimensional cases. In order to verify our theoretical analysis, we give some results of numerical calculation by using the Matlab procedure. PMID:25184148

  11. Automation of POST Cases via External Optimizer and "Artificial p2" Calculation

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Mathew R.; Michelson, Diane K.

    2017-01-01

    During conceptual design, speed and accuracy are often at odds. Specifically in the realm of launch vehicles, optimizing the ascent trajectory requires a larger pool of analytical power and expertise. Experienced analysts working on familiar vehicles can produce optimal trajectories in a short time frame; however, whenever either "experienced" or "familiar" is not applicable, the optimization process can become quite lengthy. In order to construct a vehicle-agnostic method, an established global optimization algorithm is needed. In this work the authors develop an "artificial" error term to map arbitrary control vectors to non-zero error by which a global method can operate. Two global methods are compared alongside Design of Experiments and random sampling and are shown to produce comparable results to analysis done by a human expert.
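    A toy sketch of the "artificial" error idea under stated assumptions: a two-phase point-mass ascent whose terminal-constraint violations are collapsed into one scalar that a global optimizer (here SciPy's differential evolution, not POST) can rank for any control vector. The dynamics, targets, and bounds are all assumed for illustration.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    G, THRUST_ACC, DT, PHASE_T = 9.81, 20.0, 0.5, 60.0      # assumed toy constants

    def simulate(pitch_angles):
        """Toy 1-D ascent: vertical acceleration = a*sin(pitch) - g, one constant
        pitch angle per phase, integrated with a simple Euler scheme."""
        h, v = 0.0, 0.0
        for theta in pitch_angles:
            for _ in range(int(PHASE_T / DT)):
                v += (THRUST_ACC * np.sin(theta) - G) * DT
                h += v * DT
        return h, v

    def artificial_error(pitch_angles):
        """Scalar error: squared violations of the terminal targets (altitude 25 km,
        zero vertical speed), so every control vector maps to a finite, rankable value."""
        h, v = simulate(pitch_angles)
        return ((h - 25000.0) / 1000.0) ** 2 + (v / 10.0) ** 2

    res = differential_evolution(artificial_error, bounds=[(0.1, np.pi / 2)] * 2, seed=1)
    print("pitch angles [rad]:", np.round(res.x, 3), " residual error:", res.fun)
    ```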

  12. Design and scheduling for periodic concurrent error detection and recovery in processor arrays

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent

    1992-01-01

    Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.

  13. Design of an rf quadrupole for Landau damping

    NASA Astrophysics Data System (ADS)

    Papke, K.; Grudiev, A.

    2017-08-01

    The recently proposed superconducting quadrupole resonator for Landau damping in accelerators is subjected to a detailed design study. The optimization process of two different cavity types is presented following the requirements of the High Luminosity Large Hadron Collider (HL-LHC) with the main focus on quadrupolar strength, surface peak fields, and impedance. The lower order and higher order mode (LOM and HOM) spectrum of the optimized cavities is investigated and different approaches for their damping are proposed. On the basis of an example the first two higher order multipole errors are calculated. Likewise on this example the required rf power and optimal external quality factor for the input coupler is derived.

  14. Simultaneous correction of large low-order and high-order aberrations with a new deformable mirror technology

    NASA Astrophysics Data System (ADS)

    Rooms, F.; Camet, S.; Curis, J. F.

    2010-02-01

    A new technology of deformable mirror will be presented. Based on magnetic actuators, these deformable mirrors feature record strokes (more than +/- 45μm of astigmatism and focus correction) with an optimized temporal behavior. Furthermore, the development has been made in order to have a large density of actuators within a small clear aperture (typically 52 actuators within a diameter of 9.0mm). We will present the key benefits of this technology for vision science: simultaneous correction of low and high order aberrations, AO-SLO imaging without artifacts due to membrane vibration, optimized control, etc. Using recent papers published by Doble, Thibos and Miller, we show the performance that can be achieved by various configurations using a statistical approach. The typical distribution of wavefront aberrations (both the low order aberrations (LOA) and high order aberrations (HOA)) has been computed and the correction applied by the mirror. We compare two configurations of deformable mirrors (52 and 97 actuators) and highlight the influence of the number of actuators on the fitting error, the photon noise error and the effective bandwidth of correction.

  15. A Control Model: Interpretation of Fitts' Law

    NASA Technical Reports Server (NTRS)

    Connelly, E. M.

    1984-01-01

    The analytical results for several models are given: a first order model where it is assumed that the hand velocity can be directly controlled, and a second order model where it is assumed that the hand acceleration can be directly controlled. Two different types of control laws are investigated. One is a linear function of the hand error and error rate; the other is the time-optimal control law. Results show that the first and second order models with the linear control law produce a movement time (MT) function with the exact form of Fitts' law. The control-law interpretation implies that the effect of target width on MT must be a result of the vertical motion which elevates the hand from the starting point and drops it on the target at the target edge. The time-optimal control law did not produce a movement-time formula similar to Fitts' law.
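    A small worked version of the first-order case: with hand velocity proportional to the remaining error, the time to come within half a target width W of a target at distance D is logarithmic in 2D/W, which is the Fitts' law form. The gain and the distance/width values are assumptions.

    ```python
    import numpy as np

    def movement_time(D, W, k=5.0):
        """First-order model: hand velocity proportional to the remaining error,
        so x(t) = D*(1 - exp(-k t)). The hand is 'on target' when |x - D| <= W/2,
        giving MT = (1/k)*ln(2D/W), i.e. the Fitts' law form MT = a + b*log2(2D/W)."""
        return np.log(2.0 * D / W) / k

    for D in (50.0, 100.0, 200.0):            # target distances (assumed units)
        for W in (5.0, 10.0):                 # target widths
            ID = np.log2(2.0 * D / W)         # Fitts' index of difficulty [bits]
            print(f"D={D:6.1f}  W={W:5.1f}  ID={ID:4.2f}  MT={movement_time(D, W):.3f}")
    ```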

  16. A staggered-grid finite-difference scheme optimized in the time–space domain for modeling scalar-wave propagation in geophysical problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov

    For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil to achieve the similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces 50 percent of the computational cost to achieve the similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
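    A simplified sketch of the spatial half of the idea: choose symmetric second-derivative stencil coefficients that minimize the wavenumber-domain error over a target band, subject to consistency constraints, and compare with the standard 4th-order stencil. The band limit, stencil half-width, and SLSQP optimizer are assumptions; the record's scheme additionally controls temporal dispersion and uses a different stencil.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    M, H = 3, 1.0                               # stencil half-width and grid spacing (assumed)
    kh = np.linspace(0.01, 2.0, 200)            # target band of normalized wavenumbers

    def symbol(c, kh):
        """Fourier symbol of a symmetric second-derivative stencil (c[0] is the center
        weight): D(k) = (c0 + 2*sum_m c_m cos(m k h)) / h^2, which should approximate -k^2."""
        m = np.arange(1, M + 1)
        return (c[0] + 2.0 * (np.cos(np.outer(kh, m)) @ c[1:])) / H ** 2

    def objective(c):
        # Least-squares dispersion error over the target wavenumber band
        return np.sum((symbol(c, kh) + (kh / H) ** 2) ** 2)

    cons = ({'type': 'eq', 'fun': lambda c: c[0] + 2.0 * np.sum(c[1:])},                     # D(0) = 0
            {'type': 'eq', 'fun': lambda c: np.sum(c[1:] * np.arange(1, M + 1) ** 2) - 1.0})  # 2nd-order accuracy

    opt = minimize(objective, np.array([-2.0, 1.0, 0.0, 0.0]),   # start from the 3-point stencil
                   constraints=cons, method='SLSQP')

    std4 = np.array([-5.0 / 2.0, 4.0 / 3.0, -1.0 / 12.0, 0.0])   # standard 4th-order coefficients
    for name, c in (("standard 4th-order", std4), ("band-optimized", opt.x)):
        rel = np.max(np.abs(symbol(c, kh) + (kh / H) ** 2) / (kh / H) ** 2)
        print(f"{name:>20}: max relative dispersion error on band = {rel:.3e}")
    ```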

  17. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance of the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3 point charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
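    A sketch of the underlying idea (not the paper's closed-form PPCA): numerically place two point charges, with the total charge fixed to the source monopole, so as to minimize the RMS potential error on a mid-field sphere. The source charges, sphere radius, and optimizer are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    src_pos = rng.uniform(-0.5, 0.5, size=(5, 3))     # assumed source charges, extent ~1
    src_q = rng.uniform(-1.0, 1.0, size=5)

    # Evaluation points on a "mid-field" sphere of radius 3 (an assumption)
    phi, costh = rng.uniform(0, 2 * np.pi, 400), rng.uniform(-1, 1, 400)
    sinth = np.sqrt(1.0 - costh ** 2)
    pts = 3.0 * np.stack([sinth * np.cos(phi), sinth * np.sin(phi), costh], axis=1)

    def potential(pos, q, where):
        r = np.linalg.norm(where[:, None, :] - pos[None, :, :], axis=2)
        return (q / r).sum(axis=1)

    target = potential(src_pos, src_q, pts)
    Qtot = src_q.sum()

    def rms_error(x):
        # x = [x1, y1, z1, x2, y2, z2, q1]; q2 is fixed so the total charge (monopole) matches
        p, q = x[:6].reshape(2, 3), np.array([x[6], Qtot - x[6]])
        return np.sqrt(np.mean((potential(p, q, pts) - target) ** 2))

    res = minimize(rms_error, np.zeros(7), method='Nelder-Mead',
                   options={'maxiter': 20000, 'fatol': 1e-12, 'xatol': 1e-9})
    one_charge = np.sqrt(np.mean((potential(np.zeros((1, 3)), np.array([Qtot]), pts) - target) ** 2))
    print(f"RMS mid-field error: 1 charge {one_charge:.3e}  ->  2 optimized charges {res.fun:.3e}")
    ```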

  18. Algorithms and analyses for stochastic optimization for turbofan noise reduction using parallel reduced-order modeling

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Gunzburger, Max

    2017-06-01

    Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.

  19. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    PubMed

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness, maximum sensitivity based, constraint has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.

  20. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and the optimal interpolation (OI) filter are examined for effectiveness in performance as gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
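    A scalar analogue of the filtering discussed in the record: a Kalman filter assimilating noisy observations of a slowly varying truth, compared with using the raw observations alone. The dynamics and noise variances are assumptions, far simpler than the shallow-water setting above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, a = 200, 0.98                  # assumed linear dynamics x_{k+1} = a x_k + w
    q, r = 0.05, 0.5                        # model-error and observation-error variances

    # Truth and observations
    x_true = np.zeros(n_steps)
    for k in range(1, n_steps):
        x_true[k] = a * x_true[k - 1] + rng.normal(0.0, np.sqrt(q))
    y = x_true + rng.normal(0.0, np.sqrt(r), n_steps)

    # Kalman filter: forecast with the model, then analyze with the optimal gain
    x_est, P = 0.0, 1.0
    errs_kf, errs_obs = [], []
    for k in range(n_steps):
        x_est, P = a * x_est, a * a * P + q      # forecast step
        K = P / (P + r)                          # optimal gain
        x_est = x_est + K * (y[k] - x_est)       # analysis step
        P = (1.0 - K) * P
        errs_kf.append((x_est - x_true[k]) ** 2)
        errs_obs.append((y[k] - x_true[k]) ** 2)  # using raw observations directly

    print("RMS error, raw observations :", np.sqrt(np.mean(errs_obs)))
    print("RMS error, Kalman analysis  :", np.sqrt(np.mean(errs_kf)))
    ```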

  1. Optimization of processing parameters of UAV integral structural components based on yield response

    NASA Astrophysics Data System (ADS)

    Chen, Yunsheng

    2018-05-01

    In order to improve the overall strength of an unmanned aerial vehicle (UAV), it is necessary to optimize the processing parameters of UAV structural components, which are affected by initial residual stress during machining. Because machining errors occur easily, an optimization model for the machining parameters of UAV integral structural components based on yield response is proposed. The finite element method is used to simulate the machining parameters of UAV integral structural components. A prediction model of workpiece surface machining error is established, and the influence of the tool path on the residual stress of the UAV integral structure is studied according to the stress state of the UAV integral component. The yield response of the time-varying stiffness is analyzed, along with the stress evolution mechanism of the UAV integral structure. The simulation results show that this method optimizes the machining parameters of UAV integral structural components and improves the precision of UAV milling. The machining error is reduced, and deformation prediction and error compensation of UAV integral structural parts are realized, thus improving the machining quality.

  2. Optimization of the bank's operating portfolio

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.; Medvedev, M. A.

    2016-06-01

    The theory of efficient portfolios developed by Markowitz is used to optimize the structure of the types of financial operations of a bank (bank portfolio) in order to increase the profit and reduce the risk. The focus of this paper is to check the stability of the model to errors in the original data.
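    A sketch of the Markowitz step described: choose weights over the bank's operation types that minimize portfolio variance for a required expected return. The return vector, covariance matrix, target return, and long-only bounds are assumptions standing in for the bank's data.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Assumed data for four types of bank operations: expected returns and covariance
    mu = np.array([0.04, 0.07, 0.11, 0.15])
    cov = np.array([[0.0004, 0.0001, 0.0000, 0.0001],
                    [0.0001, 0.0025, 0.0005, 0.0008],
                    [0.0000, 0.0005, 0.0090, 0.0030],
                    [0.0001, 0.0008, 0.0030, 0.0250]])
    target_return = 0.08

    def variance(w):
        return w @ cov @ w

    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},            # fully invested
            {'type': 'eq', 'fun': lambda w: w @ mu - target_return})     # required return
    res = minimize(variance, np.full(4, 0.25), bounds=[(0.0, 1.0)] * 4,
                   constraints=cons, method='SLSQP')

    w = res.x
    print("weights:", np.round(w, 3))
    print(f"expected return: {w @ mu:.4f}, risk (std): {np.sqrt(variance(w)):.4f}")
    ```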

  3. Second-order shaped pulses for solid-state quantum computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Pinaki

    2008-01-01

    We present the construction and detailed analysis of highly optimized self-refocusing pulse shapes for several rotation angles. We characterize the constructed pulses by the coefficients appearing in the Magnus expansion up to second order. This allows a semianalytical analysis of the performance of the constructed shapes in sequences and composite pulses by computing the corresponding leading-order error operators. Higher orders can be analyzed with the numerical technique suggested by us previously. We illustrate the technique by analyzing several composite pulses designed to protect against pulse amplitude errors, and on decoupling sequences for potentially long chains of qubits with on-site and nearest-neighbor couplings.

  4. Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with ε-error bound.

    PubMed

    Wang, Fei-Yue; Jin, Ning; Liu, Derong; Wei, Qinglai

    2011-01-01

    In this paper, we study the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-error bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method.

  5. Optimal algorithm to improve the calculation accuracy of energy deposition for betavoltaic MEMS batteries design

    NASA Astrophysics Data System (ADS)

    Li, Sui-xian; Chen, Haiyang; Sun, Min; Cheng, Zaijun

    2009-11-01

    Aimed at improving the calculation accuracy when computing the energy deposition of electrons traveling in solids, a method we call the optimal subdivision number searching algorithm is proposed. When treating the energy deposition of electrons traveling in solids, large calculation errors are found; we recognize that they result from the dividing and summing used to evaluate the integral. Based on the results of former research, we propose a further subdividing and summing method. For β particles with energies across the entire spectrum span, the energy data are set to integer multiples of keV and the subdivision number is varied from 1 to 30; the collections of energy deposition calculation errors are then obtained. Searching for the minimum error in the collections, we can obtain the corresponding energy and subdivision number pairs, as well as the optimal subdivision number. The method is carried out for four kinds of solid materials, Al, Si, Ni and Au, to calculate energy deposition. The results show that the calculation error is reduced by one order of magnitude with the improved algorithm.
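    A toy analogue of the optimal-subdivision-number search: evaluate a composite quadrature for subdivision numbers 1-30 against an exact reference and keep the count with the smallest error. The integrand is an illustrative stand-in for the energy-deposition integral; with a smooth analytic integrand the minimum sits at the largest count, whereas the tabulated data of the record produce an interior optimum.

    ```python
    import numpy as np

    def composite_midpoint(f, a, b, n):
        """Integrate f on [a, b] by dividing it into n subintervals (midpoint rule)."""
        edges = np.linspace(a, b, n + 1)
        mids = 0.5 * (edges[:-1] + edges[1:])
        return np.sum(f(mids) * (b - a) / n)

    def best_subdivision(f, a, b, exact, n_max=30):
        errors = [abs(composite_midpoint(f, a, b, n) - exact) for n in range(1, n_max + 1)]
        n_opt = int(np.argmin(errors)) + 1
        return n_opt, errors[n_opt - 1]

    # Illustrative integrand standing in for an energy-deposition profile on [0, 1]
    for E in (10.0, 50.0, 200.0):                      # "energies" in keV (assumed)
        f = lambda x: x * np.exp(-x / E)
        exact = E ** 2 - (E + E ** 2) * np.exp(-1.0 / E)    # analytic value of the integral
        n_opt, err = best_subdivision(f, 0.0, 1.0, exact)
        print(f"E = {E:6.1f}: optimal subdivision number = {n_opt:2d}, error = {err:.2e}")
    ```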

  6. Error field optimization in DIII-D using extremum seeking control

    NASA Astrophysics Data System (ADS)

    Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; Humphreys, D. A.; Eidietis, N.; Hanson, J. M.; Paz-Soldan, C.; Strait, E. J.; Walker, M. L.

    2016-07-01

    DIII-D experiments have demonstrated a new real-time approach to tokamak error field control based on maximizing the toroidal angular momentum. This approach uses extremum seeking control theory to optimize the error field in real time without inducing instabilities. Slowly-rotating n  =  1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real-time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of the convergence, and identify future algorithm upgrades that may allow more rapid convergence that projects to convergence times in ITER on the order of tens of seconds.
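    A minimal extremum-seeking loop of the kind described: dither the control parameter with a slow sinusoid, demodulate the measured response over each dither period to estimate the local gradient, and step up that gradient toward the maximum. The static plant (rotation as a quadratic in the coil current) and all gains are assumptions.

    ```python
    import numpy as np

    def plant(u):
        """Assumed static map: toroidal rotation peaks at an (unknown) optimum u* = 2.3."""
        return 50.0 - 4.0 * (u - 2.3) ** 2

    # Extremum-seeking parameters (all assumed): dither amplitude/frequency, update gain
    a, omega, gain, dt = 0.1, 2.0, 0.05, 0.001
    n_per = int(2.0 * np.pi / omega / dt)              # samples per dither period

    u_hat = 0.0
    for _ in range(200):                                # one gradient estimate per period
        t = np.arange(n_per) * dt
        y = plant(u_hat + a * np.sin(omega * t))        # measured response to dithered input
        # Demodulate over a full period: the DC part of y averages out,
        # leaving (2/a) * <y sin(wt)> ~= dJ/du evaluated at u_hat
        grad_est = 2.0 / a * np.mean(y * np.sin(omega * t))
        u_hat += gain * grad_est                        # climb the estimated gradient

    print(f"estimated optimal control parameter: {u_hat:.3f} (true optimum 2.3)")
    ```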

  7. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    Inertial navigation systems have been the core component of both military and civil navigation systems. Before the INS is put into application, it is supposed to be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of the mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes theories of error excitation and separation in detail and presents a new systematic calibration method for the ring laser gyroscope inertial navigation system. Error models and equations of the calibrated Inertial Measurement Unit are given. Then proper rotation arrangement orders are described in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments have been set up to compare the systematic errors calculated from the filtering calibration result with those obtained from the discrete calibration result. The largest position error and velocity error of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of calibration of the mechanically dithered ring laser gyroscope inertial navigation system.

  8. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE PAGES

    Huang, Hongying; Chen, Zheng; Li, Jin; ...

    2016-08-23

    In this study, we investigate the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. An a priori error estimate in the energy norm is established for all four methods. An optimal error estimate in the L2 norm is obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically, we obtain optimal (k+1)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well, and highly oscillatory wave solutions of the Helmholtz equation are well resolved.

  9. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hongying; Chen, Zheng; Li, Jin

    In this study, we investigate the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. An a priori error estimate in the energy norm is established for all four methods. An optimal error estimate in the L2 norm is obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically, we obtain optimal (k+1)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well, and highly oscillatory wave solutions of the Helmholtz equation are well resolved.

  10. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
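
    The core idea, keeping a new snapshot only when the current basis fails to represent it to within a tolerance, can be sketched compactly. The toy version below rebuilds the basis with a batch SVD instead of the single-pass incremental algorithm used in the paper, and the pulse data are purely illustrative:

```python
import numpy as np

def adaptive_snapshots(states, tol):
    """Keep a snapshot only when the current POD basis reconstructs it poorly,
    mimicking an error-controlled selection (toy version with a batch SVD)."""
    kept = [states[0]]
    for x in states[1:]:
        basis, _, _ = np.linalg.svd(np.column_stack(kept), full_matrices=False)
        residual = x - basis @ (basis.T @ x)          # projection error onto current basis
        if np.linalg.norm(residual) > tol * np.linalg.norm(x):
            kept.append(x)                            # snapshot adds new information
    return np.column_stack(kept)

# Example: snapshots of a decaying travelling pulse on a 1D grid.
xgrid = np.linspace(0, 1, 200)
times = np.linspace(0, 1, 100)
states = [np.exp(-200 * (xgrid - 0.2 - 0.5 * t) ** 2) for t in times]
snapshot_matrix = adaptive_snapshots(states, tol=1e-2)
print(snapshot_matrix.shape)   # far fewer columns than the 100 candidate snapshots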

  11. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    NASA Astrophysics Data System (ADS)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  12. Quaternion error-based optimal control applied to pinpoint landing

    NASA Astrophysics Data System (ADS)

    Ghiglino, Pablo

    Accurate control for pinpoint planetary landing - i.e., achieving landing errors on the order of 100 m for unmanned missions - is a complex problem that has been tackled in different ways in the available literature. Among other challenges, this kind of control is also affected by the well-known trade-off in UAV control that, for complex underlying models, the control is sub-optimal, while optimal control is applied to simplified models. The goal of this research has been the development of new control algorithms able to tackle these challenges, and the results are two novel optimal control algorithms, namely OQTAL and HEX2OQTAL. These controllers share three key properties that are thoroughly proven and shown in this thesis: stability, accuracy and adaptability. Stability is rigorously demonstrated for both controllers. Accuracy is shown by comparing these novel controllers with other industry-standard algorithms in several different scenarios: there is a gain in accuracy of at least 15% for each controller, and in many cases much more than that. A new tuning algorithm based on swarm heuristics optimisation was also developed as part of this research in order to tune, in an online manner, the standard Proportional-Integral-Derivative (PID) controllers used for benchmarking. Finally, the adaptability of these controllers can be seen as a combination of four elements: mathematical model extensibility, cost matrix tuning, reduced computation time required and, finally, no prior knowledge of the navigation or guidance strategies needed. Further simulations on real planetary landing trajectories have shown that these controllers are capable of achieving landing errors on the order of the pinpoint landing requirements, making them not only very precise UAV controllers, but also potential candidates for pinpoint landing unmanned missions.

  13. Improved Quality in Aerospace Testing Through the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.

  14. Data entry errors and design for model-based tight glycemic control in critical care.

    PubMed

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Tight glycemic control (TGC) has shown benefits but has been difficult to achieve consistently. Model-based methods and computerized protocols offer the opportunity to improve TGC quality but require human data entry, particularly of blood glucose (BG) values, which can be significantly prone to error. This study presents the design and optimization of data entry methods to minimize error for a computerized and model-based TGC method prior to pilot clinical trials. To minimize data entry error, two tests were carried out to optimize a method with errors less than the 5%-plus reported in other studies. Four initial methods were tested on 40 subjects in random order, and the best two were tested more rigorously on 34 subjects. The tests measured entry speed and accuracy. Errors were reported as corrected and uncorrected errors, with the sum comprising a total error rate. The first set of tests used randomly selected values, while the second set used the same values for all subjects to allow comparisons across users and direct assessment of the magnitude of errors. These research tests were approved by the University of Canterbury Ethics Committee. The final data entry method tested reduced errors to less than 1-2%, a 60-80% reduction from reported values. The magnitude of the errors that did occur was clinically significant, typically off by 10.0 mmol/liter or an order of magnitude, but only for extreme values of BG < 2.0 mmol/liter or BG > 15.0-20.0 mmol/liter, both of which could be easily corrected with automated checking of extreme values for safety. The data entry method selected significantly reduced data entry errors in the limited design tests presented, and is in use in a clinical pilot TGC study. The overall approach and testing methods are easily performed and generalizable to other applications and protocols. © 2012 Diabetes Technology Society.

  15. Fringe-period selection for a multifrequency fringe-projection phase unwrapping method

    NASA Astrophysics Data System (ADS)

    Zhang, Chunwei; Zhao, Hong; Jiang, Kejian

    2016-08-01

    The multi-frequency fringe-projection phase unwrapping method (MFPPUM) is a typical phase unwrapping algorithm for fringe projection profilometry. It has the advantage of being capable of correctly accomplishing phase unwrapping even in the presence of surface discontinuities. If the fringe frequency ratio of the MFPPUM is too large, fringe order error (FOE) may be triggered. FOE results in phase unwrapping error. It is preferable for the phase unwrapping to remain correct while the fewest sets of lower-frequency fringe patterns are used. To achieve this goal, in this paper a parameter called fringe order inaccuracy (FOI) is defined, the dominant factors that may induce FOE are theoretically analyzed, a method to optimally select the fringe periods for the MFPPUM is proposed with the aid of FOI, and experiments are conducted to investigate the impact of the dominant factors on phase unwrapping and to demonstrate the validity of the proposed method. Some novel phenomena are revealed by these experiments. The proposed method helps to optimally select the fringe periods and detect phase unwrapping error for the MFPPUM.
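
    The fringe order error mechanism is easiest to see in the two-frequency case: the fringe order is obtained by rounding a scaled low-frequency phase, so any noise amplified by the frequency ratio can flip the rounding. A minimal sketch, with purely illustrative noise levels and ratio (not the paper's experimental values):

```python
import numpy as np

def unwrap_with_lower_frequency(phi_high, phi_low, freq_ratio):
    """Two-frequency temporal phase unwrapping: the low-frequency phase resolves the
    2*pi ambiguity of the high-frequency wrapped phase. freq_ratio = f_high / f_low."""
    fringe_order = np.round((freq_ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * fringe_order, fringe_order

# Toy example: a true high-frequency phase ramp, measured modulo 2*pi with noise.
rng = np.random.default_rng(0)
true_phase = np.linspace(0, 40 * np.pi, 500)
ratio = 8.0
phi_low = true_phase / ratio + rng.normal(0, 0.02, 500)            # low-frequency phase
phi_high = np.mod(true_phase, 2 * np.pi) + rng.normal(0, 0.02, 500)
unwrapped, k = unwrap_with_lower_frequency(phi_high, phi_low, ratio)
# A large ratio amplifies the low-frequency noise by 'ratio' inside the rounding, which is
# exactly the fringe-order-error mechanism the period-selection method guards against.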

  16. A time-space domain stereo finite difference method for 3D scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie

    2016-11-01

    The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients are related with the Courant numbers, leading to significantly extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information of the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) method, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible to the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. The efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).

  17. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.

  18. Optimal digital dynamical decoupling for general decoherence via Walsh modulation

    NASA Astrophysics Data System (ADS)

    Qi, Haoyu; Dowling, Jonathan P.; Viola, Lorenza

    2017-11-01

    We provide a general framework for constructing digital dynamical decoupling sequences based on Walsh modulation—applicable to arbitrary qubit decoherence scenarios. By establishing equivalence between decoupling design based on Walsh functions and on concatenated projections, we identify a family of optimal Walsh sequences, which can be exponentially more efficient, in terms of the required total pulse number, for fixed cancellation order, than known digital sequences based on concatenated design. Optimal sequences for a given cancellation order are highly non-unique—their performance depending sensitively on the control path. We provide an analytic upper bound to the achievable decoupling error and show how sequences within the optimal Walsh family can substantially outperform concatenated decoupling in principle, while respecting realistic timing constraints.

  19. Maximizing the probability of satisfying the clinical goals in radiation therapy treatment planning under setup uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders

    2015-07-15

    Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.

  20. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    NASA Astrophysics Data System (ADS)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear if amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes in the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy correspondingly, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
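
    For reference, classical (active) Richardson extrapolation combines two runs of a base integrator of order p at step sizes h and h/2 to cancel the leading error term. A self-contained sketch on a scalar test equation (the order-p formula is standard; the integrator and step sizes are illustrative, and the sketch says nothing about the dissipation and dispersion properties studied in the paper):

```python
import numpy as np

def richardson(step_fn, y0, t_end, dt, p):
    """Combine a base integrator of order p run with steps dt and dt/2 to obtain
    an order p+1 result: R = (2**p * y_half - y_full) / (2**p - 1)."""
    def integrate(h):
        y, t = y0, 0.0
        while t < t_end - 1e-12:
            y = step_fn(y, t, h)
            t += h
        return y
    y_full, y_half = integrate(dt), integrate(dt / 2)
    return (2 ** p * y_half - y_full) / (2 ** p - 1)

# Toy check on y' = -y (exact solution exp(-1) at t = 1) with explicit Euler (p = 1).
euler = lambda y, t, h: y + h * (-y)
approx = richardson(euler, 1.0, 1.0, 0.01, p=1)
print(abs(approx - np.exp(-1)))   # error is roughly second order in dt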

  1. Advanced Interactive Display Formats for Terminal Area Traffic Control

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Shaviv, G. E.

    1999-01-01

    This research project deals with an on-line dynamic method for automated viewing parameter management in perspective displays. Perspective images are optimized such that a human observer will perceive relevant spatial geometrical features with minimal errors. In order to compute the errors with which observers reconstruct spatial features from perspective images, a visual spatial-perception model was formulated. The model was employed as the basis of an optimization scheme aimed at seeking the optimal projection parameter setting. These ideas are implemented in the context of an air traffic control (ATC) application. A concept, referred to as an active display system, was developed. This system uses heuristic rules to identify relevant geometrical features of the three-dimensional air traffic situation. Agile, on-line optimization was achieved by a specially developed and custom-tailored genetic algorithm (GA), which was designed to deal with the multi-modal characteristics of the objective function and to exploit its time-evolving nature.

  2. Optimization of a sensor cluster for determination of trajectories and velocities of supersonic objects

    NASA Astrophysics Data System (ADS)

    Cannella, Marco; Sciuto, Salvatore Andrea

    2001-04-01

    An evaluation of errors for a method for determining the trajectories and velocities of supersonic objects is conducted. An analytical study is developed of a cluster composed of three pressure transducers, generally used as an apparatus for the kinematic determination of the parameters of supersonic objects. Furthermore, a detailed investigation into the accuracy of this cluster in determining the slope of an incoming shock wave is carried out in order to optimize the device. In particular, a specific non-dimensional parameter is proposed in order to evaluate accuracies for various values of the parameters, and reference graphs are provided in order to properly design the sensor cluster. Finally, on the basis of the error analysis conducted, a discussion of the best estimate of the relative sensor spacing as a function of the temporal resolution of the measuring system is presented.

  3. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks

    PubMed Central

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-01-01

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitation of sensor nodes. Network coding can increase the network throughput of WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Using social network theory, the informative relay nodes are selected and marked with high trust values. The two methods of L1 optimization and exploiting the social characteristic coordinate with each other, and can correct propagated errors whose fraction is even exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
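
    The L1 part of the scheme rests on the standard observation that a sparse error vector can be recovered by L1 (basis pursuit) minimization subject to linear constraints. The sketch below is a generic basis-pursuit recovery via linear programming; it does not include the secret channel, trap matrix, or trust mechanism described above, and the matrix sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Basis-pursuit style recovery: find e minimizing ||e||_1 subject to A @ e = y,
    by splitting e = u - v with u, v >= 0 and solving a linear program."""
    m, n = A.shape
    c = np.ones(2 * n)                        # minimize sum(u) + sum(v) = ||e||_1
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy demo: recover a sparse error vector from compressed linear observations.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))                # plays the role of the coding/parity matrix
e_true = np.zeros(100)
e_true[rng.choice(100, size=5, replace=False)] = rng.normal(size=5)
e_hat = l1_recover(A, A @ e_true)
print(np.max(np.abs(e_hat - e_true)))         # near zero when the error is sparse enough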

  4. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    PubMed

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitation of sensor nodes. Network coding can increase the network throughput of WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Using social network theory, the informative relay nodes are selected and marked with high trust values. The two methods of L1 optimization and exploiting the social characteristic coordinate with each other, and can correct propagated errors whose fraction is even exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.

  5. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  6. The theory of variational hybrid quantum-classical algorithms

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán

    2016-02-01

    Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
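
    The hybrid loop itself, prepare a parameterized state, estimate the energy by Hamiltonian averaging, and let a classical optimizer update the parameters, can be illustrated on a classically simulated single qubit. Everything below (Hamiltonian, ansatz, shot-noise model) is a toy stand-in, not the unitary coupled cluster construction discussed in the paper:

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * Z + 0.3 * X                        # toy Hamiltonian; ground energy = -sqrt(0.34)

def ansatz_state(theta):
    """One-parameter ansatz: Ry(theta) applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta, shots=2000, rng=np.random.default_rng(2)):
    """Hamiltonian averaging with sampling noise, mimicking finite measurement shots."""
    psi = ansatz_state(theta[0])
    exact = np.real(psi.conj() @ H @ psi)
    return exact + rng.normal(0, 1 / np.sqrt(shots))

# Derivative-free classical outer loop drives the noisy "quantum" energy estimates downhill.
result = minimize(energy, x0=[0.1], method="COBYLA")
print(result.x, result.fun, -np.sqrt(0.34))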

  7. Simultaneous optimization method for absorption spectroscopy postprocessing.

    PubMed

    Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T

    2015-05-10

    A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.
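
    The contrast with step-wise post-processing is that all spectral parameters are estimated in a single least-squares problem. A minimal sketch on a hypothetical two-line absorbance model (line positions, widths, and the temperature scaling are invented for illustration, not an H2O line list):

```python
import numpy as np
from scipy.optimize import least_squares

def spectrum(nu, temperature, column, baseline):
    """Toy two-line absorbance model whose line-strength ratio depends on temperature."""
    s1, s2 = 1.0, np.exp(-300.0 / temperature)           # hypothetical temperature scaling
    line = lambda nu0, s: column * s * np.exp(-((nu - nu0) / 0.05) ** 2)
    return baseline + line(1.0, s1) + line(1.3, s2)

nu = np.linspace(0.8, 1.5, 400)
rng = np.random.default_rng(3)
measured = spectrum(nu, 900.0, 0.02, 0.01) + rng.normal(0, 5e-4, nu.size)

# Simultaneous optimization: temperature, column density and baseline are fitted in one
# least-squares problem instead of being extracted in separate post-processing steps.
residual = lambda p: spectrum(nu, *p) - measured
fit = least_squares(residual, x0=[600.0, 0.01, 0.0])
print(fit.x)   # close to (900, 0.02, 0.01)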

  8. Performance comparison of optimal fractional order hybrid fuzzy PID controllers for handling oscillatory fractional order processes with dead time.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu

    2013-07-01

    Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    Aiming at the problems of low machining accuracy and uncontrollable thermal errors of NC machine tools, the measurement, modeling and compensation of spindle thermal errors of a two-turntable five-axis machine tool are investigated. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced for selecting the temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used for the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, demonstrating that the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.

  10. Errors in preparation and administration of parenteral drugs in neonatology: evaluation and corrective actions.

    PubMed

    Hasni, Nesrine; Ben Hamida, Emira; Ben Jeddou, Khouloud; Ben Hamida, Sarra; Ayadi, Imene; Ouahchi, Zeineb; Marrakchi, Zahra

    2016-12-01

    Medication-related iatrogenic risk remains largely unevaluated in neonatology. Objective: To assess the errors that occur during the preparation and administration of injectable medicines in a neonatal unit, in order to implement corrective actions that reduce the occurrence of these errors. A prospective, observational study was performed in a neonatal unit over a period of one month. The practices of preparing and administering injectable medications were recorded using a standardized data collection form. These practices were compared with the summaries of product characteristics (RCP) and the bibliography. One hundred preparations of 13 different drugs were observed, and 85 errors were detected during the preparation and administration steps. These errors comprised preparation errors in 59% of cases, such as changes to the dilution protocol (32%) and use of the wrong solvent (11%), and administration errors in 41% of cases, such as errors in the timing of administration (18%) or omission of administration (9%). This study showed a high rate of errors during the preparation and administration of injectable drugs. In order to optimize the care of newborns and reduce the risk of medication errors, corrective actions were implemented through the establishment of a quality assurance system, consisting of the development of preparation procedures for injectable drugs, the introduction of a labeling system and staff training.

  11. Neighboring extremal optimal control design including model mismatch errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, T.J.; Hull, D.G.

    1994-11-01

    The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.

  12. Study of the fractional order proportional integral controller for the permanent magnet synchronous motor based on the differential evolution algorithm.

    PubMed

    Zheng, Weijia; Pi, Youguo

    2016-07-01

    A tuning method of the fractional order proportional integral speed controller for a permanent magnet synchronous motor is proposed in this paper. Taking the combination of the integral of time and absolute error and the phase margin as the optimization index, the robustness specification as the constraint condition, the differential evolution algorithm is applied to search the optimal controller parameters. The dynamic response performance and robustness of the obtained optimal controller are verified by motor speed-tracking experiments on the motor speed control platform. Experimental results show that the proposed tuning method can enable the obtained control system to achieve both the optimal dynamic response performance and the robustness to gain variations. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Optimal Mixtures of Test Types in Paired-Associate Learning (Sensory Information Processing). Final Report.

    ERIC Educational Resources Information Center

    Wolford, George

    Seven experiments were run to determine the precise nature of some of the variables which affect the processing of short-term visual information. In particular, retinal location, report order, processing order, lateral masking, and redundancy were studied along with the nature of the confusion errors which are made in the full report procedure.…

  14. Increasing the statistical significance of entanglement detection in experiments.

    PubMed

    Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei

    2010-05-28

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequality for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.
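
    A small worked example makes the point concrete: what matters is the violation measured in units of its statistical uncertainty, so an inequality with a smaller expected violation but a smaller variance can be the better test. The numbers below are invented purely for illustration:

```python
import numpy as np

def significance(violation, per_count_variance, counts):
    """Violation expressed in standard deviations, assuming the statistical error
    of the witness value scales as sqrt(variance / counts)."""
    return violation / np.sqrt(per_count_variance / counts)

N = 10000
# Setting A: larger violation, but noisier estimator -> 0.40 / 0.02 = 20 sigma.
print(significance(0.40, per_count_variance=4.0, counts=N))
# Setting B: smaller violation, quieter estimator -> 0.25 / 0.01 = 25 sigma.
print(significance(0.25, per_count_variance=1.0, counts=N))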

  15. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of H.264/AVC codec and employs Reed-Solomon codes to protect effectively the streams. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for transmission of H.264/AVC streams.

  16. Economic optimization of operations for hybrid energy systems under variable markets

    DOE PAGES

    Chen, Jen; Garcia, Humberto E.

    2016-05-21

    We propose hybrid energy systems (HES) as an important element for enabling increasing penetration of clean energy. This paper investigates the operational flexibility of HES and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results for two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. The economic advantages of such an operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.

  17. Economic optimization of operations for hybrid energy systems under variable markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jen; Garcia, Humberto E.

    We propose hybrid energy systems (HES) as an important element for enabling increasing penetration of clean energy. This paper investigates the operational flexibility of HES and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results for two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. The economic advantages of such an operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.

  18. Optimal Runge-Kutta Schemes for High-order Spatial and Temporal Discretizations

    DTIC Science & Technology

    2015-06-01

    using larger time steps versus lower-order time integration with smaller time steps. In the present work, an attempt is made to generalize these... generality and because of interest in multi-speed and high Reynolds number, wall-bounded flow regimes, a dual-time framework is adopted in the present work... errors of general combinations of high-order spatial and temporal discretizations. Different Runge-Kutta time integrators are applied to central

  19. A Non-Local, Energy-Optimized Kernel: Recovering Second-Order Exchange and Beyond in Extended Systems

    NASA Astrophysics Data System (ADS)

    Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn

    The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential. Current affiliation: King's College London.

  20. Quantum Error Correction for Minor Embedded Quantum Annealing

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Paz Silva, Gerardo; Mishra, Anurag; Albash, Tameem; Lidar, Daniel

    2015-03-01

    While quantum annealing can take advantage of the intrinsic robustness of adiabatic dynamics, some form of quantum error correction (QEC) is necessary in order to preserve its advantages over classical computation. Moreover, realistic quantum annealers are subject to a restricted connectivity between qubits. Minor embedding techniques use several physical qubits to represent a single logical qubit with a larger set of interactions, but necessarily introduce new types of errors (whenever the physical qubits corresponding to the same logical qubit disagree). We present a QEC scheme where a minor embedding is used to generate an 8 × 8 × 2 cubic connectivity out of the native one and perform experiments on a D-Wave quantum annealer. Using a combination of optimized encoding and decoding techniques, our scheme enables the D-Wave device to solve minor embedded hard instances at least as well as it would on a native implementation. Our work is a proof-of-concept that minor embedding can be advantageously implemented in order to increase both the robustness and the connectivity of a programmable quantum annealer. Applied in conjunction with decoding techniques, this paves the way toward scalable quantum annealing with applications to hard optimization problems.

  1. Patterning control strategies for minimum edge placement error in logic devices

    NASA Astrophysics Data System (ADS)

    Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim

    2017-03-01

    In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.
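
    As a rough illustration of how such a budget combines, independent contributors are often added in quadrature; the numbers and the specific terms below are hypothetical and are not the IMEC 7-nm budget from the paper:

```python
import numpy as np

# Simplified edge-placement-error budget (all values hypothetical, in nm, 3-sigma):
# overlay between the SAQP grid and the EUV cut, half of the cut CD uniformity
# (each edge moves by half a CD change), and local placement of the grid lines.
overlay_arf_to_euv = 2.5
cut_cdu_half = 1.5 / 2
local_line_placement = 1.0

epe = np.sqrt(overlay_arf_to_euv**2 + cut_cdu_half**2 + local_line_placement**2)
print(round(epe, 2))   # about 2.8 nm in this illustrative budget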

  2. Artificial neural networks as alternative tool for minimizing error predictions in manufacturing ultradeformable nanoliposome formulations.

    PubMed

    León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa

    2018-01-01

    This work was aimed at determining the feasibility of artificial neural networks (ANN), implemented with backpropagation algorithms at default settings, for generating better predictive models than multiple linear regression (MLR) analysis. The study was carried out on timolol-loaded liposomes. The causal factors were used as training data and fed into the ANN. The number of training cycles was identified in order to optimize the performance of the ANN. The optimization was performed by minimizing the error between the predicted and real response values in the training step. The results showed that training was stopped at 10 000 training cycles with 80% of the pattern values, because at this point the ANN generalizes better. The minimum validation error was achieved with 12 hidden neurons in a single layer. MLR has good prediction ability, with errors between predicted and real values lower than 1% for some of the parameters evaluated; thus, the performance of the ANN was compared to that of the MLR using a factorial design. Optimal formulations were identified by minimizing the distance between measured and theoretical parameters, by estimating the prediction errors. The results indicate that the ANN shows much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of combining ANN with design of experiments, compared to conventional MLR modeling techniques.
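
    The comparison can be reproduced in spirit with off-the-shelf tools: fit a linear model and a small multilayer perceptron on the same causal factors and compare held-out errors. The snippet below uses scikit-learn on synthetic data (the 12 hidden neurons, 10 000 iteration cap and 80/20 split echo the abstract, but the data and library choice are illustrative, not the authors' setup):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Toy formulation dataset: causal factors -> response with a mild non-linearity.
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(200, 3))
y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) - 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.05, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(12,), max_iter=10000, random_state=0).fit(X_tr, y_tr)

print("MLR R^2:", mlr.score(X_te, y_te))
print("ANN R^2:", ann.score(X_te, y_te))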

  3. First-Order System Least-Squares for the Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Bochev, P.; Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    This paper develops a least-squares approach to the solution of the incompressible Navier-Stokes equations in primitive variables. As with our earlier work on Stokes equations, we recast the Navier-Stokes equations as a first-order system by introducing a velocity flux variable and associated curl and trace equations. We show that the resulting system is well-posed, and that an associated least-squares principle yields optimal discretization error estimates in the H(sup 1) norm in each variable (including the velocity flux) and optimal multigrid convergence estimates for the resulting algebraic system.

  4. Discrimination of binary coherent states using a homodyne detector and a photon number resolving detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittmann, Christoffer; Sych, Denis; Leuchs, Gerd

    2010-06-15

    We investigate quantum measurement strategies capable of discriminating two coherent states probabilistically with significantly smaller error probabilities than can be obtained using nonprobabilistic state discrimination. We apply a postselection strategy to the measurement data of a homodyne detector as well as a photon number resolving detector in order to lower the error probability. We compare the two different receivers with an optimal intermediate measurement scheme where the error rate is minimized for a fixed rate of inconclusive results. The photon number resolving (PNR) receiver is experimentally demonstrated and compared to an experimental realization of a homodyne receiver with postselection. In the comparison, it becomes clear that the performance of the PNR receiver surpasses the performance of the homodyne receiver, which we prove to be optimal within any Gaussian operations and conditional dynamics.

  5. Optimal threshold of error decision related to non-uniform phase distribution QAM signals generated from MZM based on OCS

    NASA Astrophysics Data System (ADS)

    Han, Xifeng; Zhou, Wen

    2018-03-01

    Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary on the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use an optimal error-decision threshold for this non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16 Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter wave signal at 36 GHz is improved from 1 × 10^-3 to 1 × 10^-4 at -4.6 dBm input power into the photodiode.
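
    The underlying idea, not assuming the midpoint between symbol clouds is the best decision boundary when the received distribution is distorted, can be sketched with a simple empirical threshold sweep. The distributions and noise levels below are invented for illustration and are unrelated to the 16 Gbaud experiment:

```python
import numpy as np

rng = np.random.default_rng(5)
bits = rng.integers(0, 2, 100000)
# Asymmetric noise models a distorted constellation after square-law detection:
# the '1' cloud is wider than the '0' cloud, so the optimal threshold is off-center.
samples = np.where(bits == 0,
                   rng.normal(-1.0, 0.3, bits.size),
                   rng.normal(1.0, 0.6, bits.size))

thresholds = np.linspace(-1.0, 1.0, 201)
ber = [np.mean((samples > th).astype(int) != bits) for th in thresholds]
best = thresholds[int(np.argmin(ber))]
print(best, min(ber))   # the optimum sits below 0 because the '1' cloud is wider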

  6. Design and simulation of a 800 Mbit/s data link for magnetic resonance imaging wearables.

    PubMed

    Vogt, Christian; Buthe, Lars; Petti, Luisa; Cantarella, Giuseppe; Munzenrieder, Niko; Daus, Alwin; Troster, Gerhard

    2015-08-01

    This paper presents the optimization of electronic circuitry for operation in the harsh electromagnetic (EM) environment during a magnetic resonance imaging (MRI) scan. As a demonstrator, a device small enough to be worn during the scan is optimized. Based on finite element method (FEM) simulations, the induced current densities due to magnetic field changes of 200 T/s were reduced from 1 × 10^10 A/m^2 by one order of magnitude, predicting error-free operation of the 1.8 V logic employed. The simulations were validated using a bit error rate test, which showed no bit errors during an MRI scan sequence. Therefore, neither the logic nor the utilized 800 Mbit/s low voltage differential swing (LVDS) data link of the optimized wearable device was significantly influenced by the EM interference. Next, the influence of ferro-magnetic components on the static magnetic field, and consequently on the image quality, was simulated, showing MRI image loss within approximately a 2 cm radius around a commercial integrated circuit of 1 × 1 cm^2. This was subsequently validated by a conventional MRI scan.

  7. A continuous optimization approach for inferring parameters in mathematical models of regulatory networks.

    PubMed

    Deng, Zhimin; Tian, Tianhai

    2014-07-29

    The advances of systems biology have raised a large number of sophisticated mathematical models for describing the dynamic property of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in mathematical models may be larger than the number of observation data. The imbalance between the number of experimental data and number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of system dynamics as well as the first and second order derivatives of continuous functions. The expanded dataset is the basis to infer unknown model parameters using various continuous optimization criteria, including the error of simulation only, error of both simulation and the first derivative, or error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results of the ERK kinase activation module show that the continuous absolute-error criteria using both function and high order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria. We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
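
    The continuous criteria amount to gradient matching on an interpolated trajectory: spline the sparse measurements, then require the spline's derivatives to be consistent with the model's right-hand side. A minimal sketch using only the first-derivative criterion, with a toy logistic model standing in for a regulatory network (the paper also combines simulation error and second derivatives, which this sketch omits):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import least_squares

def logistic_rhs(y, r, K):
    """Toy one-species model standing in for a regulatory-network ODE."""
    return r * y * (1 - y / K)

# Sparse measurements (noise-free here) generated from r = 0.8, K = 10.
t_obs = np.linspace(0, 10, 6)
y_obs = 10 / (1 + 9 * np.exp(-0.8 * t_obs))

# Spline interpolation expands the dataset to a dense grid and provides derivatives.
spline = CubicSpline(t_obs, y_obs)
t_dense = np.linspace(0, 10, 101)

def residual(params):
    r, K = params
    # First-derivative criterion: spline slope should match the model right-hand side.
    return spline(t_dense, 1) - logistic_rhs(spline(t_dense), r, K)

fit = least_squares(residual, x0=[0.3, 5.0])
print(fit.x)   # close to (0.8, 10)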

  8. Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao

    2015-07-01

    Parallel robots are widely used in the academic and industrial fields. In spite of the numerous achievements in the design and dimensional synthesis of low-mobility parallel robots, few research efforts are directed towards asymmetric 3-DOF parallel robots whose end-effector can realize 2 translational and 1 rotational (2T1R) motion. In order to develop a manipulator with the capability of full circle rotation to enlarge the workspace, a new 2T1R parallel mechanism is proposed. The modeling approach and kinematic analysis of this proposed mechanism are investigated. Using the method of vector analysis, the inverse kinematic equations are established. This is followed by a rigorous proof that this mechanism attains an annular workspace through its full circle rotation and two-dimensional translations. Taking the first-order perturbation of the kinematic equations, the error Jacobian matrix, which represents the mapping between the error sources of the geometric parameters and the end-effector position errors, is derived. With consideration of the constraint conditions on pressure angles and the feasible workspace, the dimensional synthesis is conducted with the goal of minimizing a global comprehensive performance index. The dimensional parameters that give the mechanism optimal error mapping and kinematic performance are obtained through the optimization algorithm. All these research achievements lay the foundation for building prototypes of this kind of parallel robot.

  9. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  10. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
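
    A toy numerical illustration of the optimizer convergence error discussed above: the looser the stopping criterion, the farther the returned solution is from the true optimum. This is a generic gradient-descent example, not the clinical planning system studied in the paper.

```python
# Minimal sketch of the convergence error caused by a non-zero stopping criterion.
import numpy as np

def gradient_descent(grad, x0, step, tol, max_iter=200000):
    """Plain gradient descent that stops once the gradient norm falls below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # non-zero stopping criterion
            break
        x -= step * g
    return x

A = np.diag([1.0, 10.0, 100.0])              # ill-conditioned quadratic objective 0.5*x'Ax - b'x
b = np.array([1.0, 2.0, 3.0])
grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)               # exact optimum

for tol in (1e-1, 1e-3, 1e-6):
    x_hat = gradient_descent(grad, np.zeros(3), step=1e-2, tol=tol)
    print(f"tol={tol:.0e}  optimizer convergence error={np.linalg.norm(x_hat - x_star):.2e}")
```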

  11. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).
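
    The sketch below is a simplified, real-valued analogue of the collaborative scheme above: two LMS filters of different model orders are mixed through an adaptive convex parameter, and tracking that parameter indicates which sub-filter fits the process. The quaternion-valued QLMS/WL-QLMS algorithms and the fractional tap-length adaptation of the paper are not reproduced.

```python
# Convex combination of two real-valued LMS filters of different orders (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
N, mu_w, mu_a = 5000, 0.01, 0.5
x = rng.standard_normal(N)
h_true = np.array([0.8, -0.4, 0.2])                      # unknown system (3 taps)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)

w_short, w_long = np.zeros(2), np.zeros(4)               # competing model orders (too short / long enough)
a = 0.0                                                  # mixing parameter lambda = sigmoid(a)
for n in range(4, N):
    u2, u4 = x[n::-1][:2], x[n::-1][:4]                  # most recent inputs, newest first
    y2, y4 = w_short @ u2, w_long @ u4
    lam = 1.0 / (1.0 + np.exp(-a))
    e2, e4 = d[n] - y2, d[n] - y4
    e = d[n] - (lam * y2 + (1.0 - lam) * y4)
    w_short += mu_w * e2 * u2                            # independent LMS updates of each sub-filter
    w_long += mu_w * e4 * u4
    a += mu_a * e * (y2 - y4) * lam * (1.0 - lam)        # stochastic-gradient update of the mixture

print("final mixing parameter lambda:", 1.0 / (1.0 + np.exp(-a)))   # near 0: the longer filter wins
```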

  12. Mammalian cell culture monitoring using in situ spectroscopy: Is your method really optimised?

    PubMed

    André, Silvère; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Duponchel, Ludovic

    2017-03-01

    In recent years, as a result of the process analytical technology initiative of the US Food and Drug Administration, many different works have been carried out on direct and in situ monitoring of critical parameters for mammalian cell cultures by Raman spectroscopy and multivariate regression techniques. However, despite interesting results, it cannot be said that the proposed monitoring strategies are really optimized in ways that reduce the errors of the regression models and thus the confidence limits of the predictions. Hence, the aim of this article is to optimize some critical steps of spectroscopic acquisition and data treatment in order to reach a higher level of accuracy and robustness of bioprocess monitoring. We first propose an original strategy to assess the most suitable Raman acquisition time for the processes involved. In the second part, we demonstrate the importance of inter-batch variability on the accuracy of the predictive models, with a particular focus on the adjustment of the optical probes. Finally, we propose a methodology for optimizing the selection of spectral variables in order to decrease the prediction errors of the multivariate regressions. © 2017 American Institute of Chemical Engineers. Biotechnol. Prog., 33:308-316, 2017.

  13. Integrating aerodynamics and structures in the minimum weight design of a supersonic transport wing

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Wrenn, Gregory A.; Dovi, Augustine R.; Coen, Peter G.; Hall, Laura E.

    1992-01-01

    An approach is presented for determining the minimum weight design of aircraft wing models which takes into consideration aerodynamics-structure coupling when calculating both the zeroth-order information needed for analysis and the first-order information needed for optimization. When performing sensitivity analysis, coupling is accounted for by using a generalized sensitivity formulation. The results presented show that the aeroelastic effects are calculated properly and noticeably reduce constraint approximation errors. However, for the particular example selected, the errors introduced by ignoring aeroelastic effects are not sufficient to significantly affect the convergence of the optimization process. Trade studies are reported that consider different structural materials, internal spar layouts, and panel buckling lengths. For the formulation, model and materials used in this study, an advanced aluminum material produced the lightest design while satisfying the problem constraints. Also, shorter panel buckling lengths resulted in lower weights by permitting smaller panel thicknesses and, generally, by unloading the wing skins and loading the spar caps. Finally, straight spars required slightly lower wing weights than angled spars.

  14. Numerical Solution of Optimal Control Problem under SPDE Constraints

    DTIC Science & Technology

    2011-10-14

    Faure and Sobol sequences are used to evaluate high dimensional integrals, and the errors in the numerical results for over 30 dimensions become quite... sequence; right: 1000 points of dimension 26 and 27 projection for optimal Kronecker sequence. ...benchmark Faure and Sobol methods. 2.2 High order... J. Goodman and J. O'Rourke, Handbook of Discrete and Computational Geometry, CRC Press, Inc., (2004). [5] S. Joe and F. Kuo, Constructing Sobol...

  15. Local alignment of two-base encoded DNA sequence

    PubMed Central

    Homer, Nils; Merriman, Barry; Nelson, Stanley F

    2009-01-01

    Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
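
    For background, the sketch below implements the standard Smith-Waterman local-alignment recursion with a linear gap penalty; the simultaneous decoding of two-base (color-space) reads and the affine gap penalty described above are not included.

```python
# Standard Smith-Waterman local alignment (score only), shown as background for the
# extended, decoding-aware recursion described in the record above.
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between strings a and b (linear gap penalty)."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,                        # a local alignment may restart anywhere
                          H[i - 1, j - 1] + s,      # match / mismatch
                          H[i - 1, j] + gap,        # gap in b
                          H[i, j - 1] + gap)        # gap in a
    return H.max()

print(smith_waterman("ACACACTA", "AGCACACA"))       # best local alignment score
```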

  16. Carbon dioxide emission prediction using support vector machine

    NASA Astrophysics Data System (ADS)

    Saleh, Chairul; Rachman Dzakiyullah, Nur; Bayu Nugroho, Jonathan

    2016-02-01

    In this paper, an SVM model is proposed to predict carbon dioxide (CO2) emissions. Energy consumption, in the form of electrical energy and burning coal, is the input variable that directly affects the increase of CO2 emissions and was used to build the model. Our objective is to monitor CO2 emissions based on the electrical energy and burning coal used in the production process. The data on electrical energy and burning coal were obtained from the alcohol industry in order to train and test the models; they were divided by a cross-validation technique into 90% training data and 10% testing data. The optimal parameters of the SVM model were found by a trial-and-error approach, adjusting the C and epsilon parameters in the experiments. The results show that the SVM model has optimal parameters of C = 0.1 and epsilon = 0. The error of the model, measured by the root mean square error (RMSE), is 0.004; a smaller error represents a more accurate prediction. In practice, this paper contributes to executive managers making effective decisions for business operations by monitoring CO2 emission expenditure.
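
    The sketch below mirrors the modelling step described above with an epsilon-SVR mapping the two energy inputs to CO2 emission and evaluating the RMSE on a 90/10 split; the data are synthetic placeholders rather than the industrial measurements used in the paper.

```python
# Hedged sketch of an SVM (epsilon-SVR) regression for CO2 emission prediction.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))          # [electrical energy, coal burned], scaled
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.02 * rng.standard_normal(200)   # synthetic CO2 proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
model = SVR(kernel="rbf", C=0.1, epsilon=0.0).fit(X_tr, y_tr)   # C and epsilon as reported in the abstract
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE: {rmse:.4f}")
```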

  17. Optimal design of FIR triplet halfband filter bank and application in image coding.

    PubMed

    Kha, H H; Tuan, H D; Nguyen, T Q

    2011-02-01

    This correspondence proposes an efficient semidefinite programming (SDP) method for the design of a class of linear phase finite impulse response triplet halfband filter banks whose filters have optimal frequency selectivity for a prescribed regularity order. The design problem is formulated as the minimization of the least square error subject to peak error constraints and regularity constraints. By using the linear matrix inequality characterization of the trigonometric semi-infinite constraints, it can then be exactly cast as a SDP problem with a small number of variables and, hence, can be solved efficiently. Several design examples of the triplet halfband filter bank are provided for illustration and comparison with previous works. Finally, the image coding performance of the filter bank is presented.
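
    A generic convex-programming sketch of the design idea above (least-squares error minimization subject to peak error constraints) is shown below for a single linear-phase lowpass prototype using cvxpy; it is not the triplet-halfband SDP formulation of the correspondence, and the band edges and tolerances are arbitrary.

```python
# Constrained least-squares design of a linear-phase FIR amplitude response (illustrative).
import numpy as np
import cvxpy as cp

M = 15                                        # half-order; symmetric filter length 2*M + 1
w = np.linspace(0.0, np.pi, 512)
wp, ws = 0.4 * np.pi, 0.6 * np.pi             # passband / stopband edges
C = np.cos(np.outer(w, np.arange(M + 1)))     # amplitude response basis: A(w) = C @ c
D = (w <= wp).astype(float)                   # desired amplitude: 1 in passband, 0 in stopband
band = (w <= wp) | (w >= ws)                  # exclude the transition band
Cb, Db = C[band], D[band]

c = cp.Variable(M + 1)
resid = Cb @ c - Db
problem = cp.Problem(cp.Minimize(cp.sum_squares(resid)),   # least-squares error
                     [cp.abs(resid) <= 0.05])              # peak error constraint
problem.solve()
print("peak error on the constrained bands:", np.abs(Cb @ c.value - Db).max())
```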

  18. Ring rolling process simulation for geometry optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on genetic algorithms has been applied. In the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
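
    A rough sketch of the surrogate-based loop described above: a second-order response surface is fitted to a small set of sampled responses and then searched with an evolutionary optimizer. Here `fem_response` is a stand-in for the DEFORM simulations, and the variable bounds are arbitrary.

```python
# Response-surface (second-order polynomial) surrogate plus evolutionary search (illustrative).
import numpy as np
from scipy.optimize import differential_evolution

def fem_response(x):
    """Placeholder for the FEM-predicted dimensional error (%) at feed rate x[0], roll speed x[1]."""
    return (x[0] - 1.3) ** 2 + 0.5 * (x[1] - 2.0) ** 2 + 0.1

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

rng = np.random.default_rng(0)
X_doe = rng.uniform([0.5, 1.0], [2.0, 3.0], size=(15, 2))       # design points in the explored space
y_doe = np.array([fem_response(x) for x in X_doe])
beta, *_ = np.linalg.lstsq(quad_features(X_doe), y_doe, rcond=None)

surrogate = lambda x: float((quad_features(np.atleast_2d(x)) @ beta)[0])   # response-surface prediction
result = differential_evolution(surrogate, bounds=[(0.5, 2.0), (1.0, 3.0)], seed=0)
print("optimal mandrel feed / roll speed:", result.x, " predicted error:", result.fun)
```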

  19. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    PubMed Central

    Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou

    2013-01-01

    Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes as quickly as possible. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation. PMID:24013491
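
    As a simplified view of the fusion step mentioned above, the sketch below combines a node's distance estimate with its neighbors' estimates by inverse-variance weighting, which is the maximum likelihood fusion under independent Gaussian errors; the correlation-aware networked correction of the paper is not reproduced, and the numbers are placeholders.

```python
# Inverse-variance (maximum likelihood) fusion of a node's reading with its neighbors'.
import numpy as np

readings = np.array([2.43, 2.51, 2.38, 2.47])    # distance estimates (m) from a node and its neighbors
variances = np.array([0.02, 0.05, 0.04, 0.03])   # per-sensor error variances from calibration

weights = (1.0 / variances) / np.sum(1.0 / variances)
fused = np.sum(weights * readings)               # ML estimate under independent Gaussian noise
fused_var = 1.0 / np.sum(1.0 / variances)        # variance of the fused estimate
print(f"fused distance: {fused:.3f} m  (variance {fused_var:.4f})")
```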

  20. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  1. Efficient error correction for next-generation sequencing of viral amplicons

    PubMed Central

    2012-01-01

    Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430

  2. Efficient error correction for next-generation sequencing of viral amplicons.

    PubMed

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
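
    A toy illustration of the k-mer idea behind KEC is sketched below: k-mers rarer than an empirical threshold are flagged as likely sequencing errors. The value of k, the threshold and the reads are placeholders; the published algorithm additionally locates and corrects the erroneous positions and accounts for homopolymer-dependent error rates.

```python
# Minimal k-mer counting sketch: rare k-mers are treated as candidate sequencing errors.
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACCTAC", "ACGTACGTAC"]   # third read carries one error
k, threshold = 4, 2
counts = kmer_counts(reads, k)
suspect = {kmer for kmer, c in counts.items() if c < threshold}    # k-mers below the empirical threshold
print("suspect k-mers:", sorted(suspect))
```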

  3. Local projection stabilization for linearized Brinkman-Forchheimer-Darcy equation

    NASA Astrophysics Data System (ADS)

    Skrzypacz, Piotr

    2017-09-01

    The Local Projection Stabilization (LPS) is presented for the linearized Brinkman-Forchheimer-Darcy equation with high Reynolds numbers. The considered equation can be used to model porous medium flows in chemical reactors of packed bed type. A detailed finite element analysis is presented for the case of nonconstant porosity. The enriched variant of LPS is based on equal-order interpolation for the velocity and pressure. The optimal error bounds for the velocity and pressure are verified numerically.

  4. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
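
    The sketch below illustrates the prediction-correction idea on a scalar toy problem f(x, t) = 0.5 (x - sin t)^2: a correction-only gradient tracker is compared with a tracker that first extrapolates the gradient's drift in time (in the spirit of the approximate schemes mentioned above) and then corrects. The step sizes and the problem are illustrative only.

```python
# Prediction-correction vs. correction-only tracking of a time-varying optimum (toy example).
import numpy as np

h = 0.1                                          # sampling interval
T = np.arange(0.0, 10.0, h)
grad = lambda x, t: x - np.sin(t)                # gradient of f(x, t) = 0.5*(x - sin(t))**2
gamma = 0.5                                      # deliberately inexact correction step

x_c, x_pc = 0.0, 0.0
err_c, err_pc = [], []
for k in range(len(T) - 1):
    t0, t1 = T[k], T[k + 1]
    # correction-only tracker: one gradient step when the new objective is sampled
    x_c -= gamma * grad(x_c, t1)
    # prediction-correction tracker: extrapolate using the observed drift of the gradient
    # (finite difference in time, Hessian = 1 here), then take the same correction step
    x_pc -= grad(x_pc, t1) - grad(x_pc, t0)
    x_pc -= gamma * grad(x_pc, t1)
    err_c.append(abs(x_c - np.sin(t1)))
    err_pc.append(abs(x_pc - np.sin(t1)))

print("mean tracking error, correction only      :", np.mean(err_c))
print("mean tracking error, prediction-correction:", np.mean(err_pc))
```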

  5. Multigrid solvers and multigrid preconditioners for the solution of variational data assimilation problems

    NASA Astrophysics Data System (ADS)

    Debreu, Laurent; Neveu, Emilie; Simon, Ehouarn; Le Dimet, Francois Xavier; Vidard, Arthur

    2014-05-01

    In order to lower the computational cost of the variational data assimilation process, we investigate the use of multigrid methods to solve the associated optimal control system. On a linear advection equation, we study the impact of the regularization term on the optimal control and the impact of discretization errors on the efficiency of the coarse grid correction step. We show that even if the optimal control problem leads to the solution of an elliptic system, numerical errors introduced by the discretization can alter the success of the multigrid methods. The view of the multigrid iteration as a preconditioner for a Krylov optimization method leads to a more robust algorithm. A scale dependent weighting of the multigrid preconditioner and the usual background error covariance matrix based preconditioner is proposed and brings significant improvements. [1] Laurent Debreu, Emilie Neveu, Ehouarn Simon, François-Xavier Le Dimet and Arthur Vidard, 2014: Multigrid solvers and multigrid preconditioners for the solution of variational data assimilation problems, submitted to QJRMS, http://hal.inria.fr/hal-00874643 [2] Emilie Neveu, Laurent Debreu and François-Xavier Le Dimet, 2011: Multigrid methods and data assimilation - Convergence study and first experiments on non-linear equations, ARIMA, 14, 63-80, http://intranet.inria.fr/international/arima/014/014005.html

  6. Case study on impact performance optimization of hydraulic breakers.

    PubMed

    Noh, Dae-Kyung; Kang, Young-Ky; Cho, Jae-Sang; Jang, Joo-Sup

    2016-01-01

    In order to expand the range of activities of an excavator, attachments, such as hydraulic breakers have been developed to be applied to buckets. However, it is very difficult to predict the dynamic behavior of hydraulic impact devices such as breakers because of high non-linearity. Thus, the purpose of this study is to optimize the impact performance of hydraulic breakers. The ultimate goal of the optimization is to increase the impact energy and impact frequency and to reduce the pressure pulsation of the supply and return lines. The optimization results indicated that the four parameters used to optimize the impact performance of the breaker showed considerable improvement over the results reported in the literature. A test was also conducted and the results were compared with those obtained through optimization in order to verify the optimization results. The comparison showed an average relative error of 8.24 %, which seems to be in good agreement. The results of this study can be used to optimize the impact performance of hydraulic impact devices such as breakers, thus facilitating its application to excavators and increasing the range of activities of an excavator.

  7. Ordering policy for stock-dependent demand rate under progressive payment scheme: a comment

    NASA Astrophysics Data System (ADS)

    Glock, Christoph H.; Ries, Jörg M.; Schwindl, Kurt

    2015-04-01

    In a recent paper, Soni and Shah developed a model for finding the optimal ordering policy for a retailer facing stock-dependent demand and a supplier offering a progressive payment scheme. In this comment, we correct several errors in the formulation of the models of Soni and Shah and modify some assumptions to increase the model's applicability. Numerical examples illustrate the benefits of our modifications.

  8. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open loop predictor, but does the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open loop speech analysis model. Here we demonstrate that minimizing the error of the closed loop speech reconstruction, instead of the simpler open loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
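
    For background on the open-loop analysis referred to above, the sketch below computes classical linear-prediction coefficients for one frame from the autocorrelation (Toeplitz) normal equations; the joint optimization of quantizer levels and residual coding from the paper is not shown, and the sinusoidal frame is a stand-in for a speech segment.

```python
# Open-loop linear predictive coefficients from the autocorrelation normal equations.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(frame, order):
    """Prediction coefficients a such that x[n] is approximated by sum_k a[k] * x[n-1-k]."""
    x = np.asarray(frame, dtype=float)
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])   # autocorrelation lags
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])      # Toeplitz normal equations

rng = np.random.default_rng(0)
n = np.arange(400)
frame = np.sin(0.3 * n) + 0.05 * rng.standard_normal(n.size)           # stand-in for a speech frame
a = lpc(frame, order=8)
pred = np.convolve(frame, np.concatenate(([0.0], a)))[:frame.size]     # open-loop one-step prediction
ratio = np.sum((frame - pred)[8:] ** 2) / np.sum(frame[8:] ** 2)
print("open-loop residual energy ratio:", ratio)
```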

  9. Pharmacokinetic design optimization in children and estimation of maturation parameters: example of cytochrome P450 3A4.

    PubMed

    Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel

    2011-02-01

    The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.

  10. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  11. NARMAX model identification of a palm oil biodiesel engine using multi-objective optimization differential evolution

    NASA Astrophysics Data System (ADS)

    Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin

    2017-09-01

    This paper presents the black-box modelling of a palm oil biodiesel engine (POB) using the multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the algorithm for optimization: minimizing the number of terms in the model structure and minimizing the mean square error between the actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied in order to validate the candidate models obtained from the MOODE algorithm and lead to the selection of an optimal model.

  12. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm is convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  13. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
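
    As a small worked example of how such a scale factor is obtained, the least-squares minimizer of sum_i (lambda * omega_calc,i - omega_ref,i)^2 has the closed form used below; the frequencies are placeholder numbers, not values from the databases of the paper.

```python
# Least-squares vibrational-frequency scale factor and its effect on the RMSE (illustrative data).
import numpy as np

omega_calc = np.array([1650.0, 3100.0, 1200.0, 750.0])   # harmonic frequencies from a model chemistry (cm^-1)
omega_ref = np.array([1595.0, 2990.0, 1170.0, 735.0])    # reference (e.g., experimental) frequencies (cm^-1)

lam = np.sum(omega_calc * omega_ref) / np.sum(omega_calc ** 2)   # optimal scale factor (closed form)
rmse_raw = np.sqrt(np.mean((omega_calc - omega_ref) ** 2))
rmse_scaled = np.sqrt(np.mean((lam * omega_calc - omega_ref) ** 2))
print(f"scale factor {lam:.4f}: RMSE {rmse_raw:.1f} -> {rmse_scaled:.1f} cm^-1")
```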

  14. Full-Duplex Bidirectional Secure Communications Under Perfect and Distributionally Ambiguous Eavesdropper's CSI

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Zhang, Ying; Lin, Jingran; Wu, Sissi Xiaoxiao

    2017-09-01

    Consider a full-duplex (FD) bidirectional secure communication system, where two communication nodes, named Alice and Bob, simultaneously transmit and receive confidential information from each other, and an eavesdropper, named Eve, overhears the transmissions. Our goal is to maximize the sum secrecy rate (SSR) of the bidirectional transmissions by optimizing the transmit covariance matrices at Alice and Bob. To tackle this SSR maximization (SSRM) problem, we develop an alternating difference-of-concave (ADC) programming approach to alternately optimize the transmit covariance matrices at Alice and Bob. We show that the ADC iteration has a semi-closed-form beamforming solution, and is guaranteed to converge to a stationary solution of the SSRM problem. Besides the SSRM design, this paper also deals with a robust SSRM transmit design under a moment-based random channel state information (CSI) model, where only some roughly estimated first and second-order statistics of Eve's CSI are available, but the exact distribution or other high-order statistics is not known. This moment-based error model is new and different from the widely used bounded-sphere error model and the Gaussian random error model. Under the considered CSI error model, the robust SSRM is formulated as an outage probability-constrained SSRM problem. By leveraging the Lagrangian duality theory and DC programming, a tractable safe solution to the robust SSRM problem is derived. The effectiveness and the robustness of the proposed designs are demonstrated through simulations.

  15. Efficient Variational Quantum Simulator Incorporating Active Error Minimization

    NASA Astrophysics Data System (ADS)

    Li, Ying; Benjamin, Simon C.

    2017-04-01

    One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
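
    A toy numerical sketch of the boost-and-extrapolate idea is shown below: an observable is evaluated at artificially amplified error rates and a polynomial fit is extrapolated back to the zero-error limit. The exponential decay model is a stand-in for a real noisy processor, not part of the proposal above.

```python
# Zero-error extrapolation of an expectation value measured at boosted noise levels (toy model).
import numpy as np

def noisy_expectation(scale, exact=0.80, decay=0.15):
    """Expectation value degraded by noise whose strength is boosted by 'scale'."""
    return exact * np.exp(-decay * scale)

scales = np.array([1.0, 1.5, 2.0, 3.0])          # noise boosting factors (1.0 = native error rate)
values = noisy_expectation(scales)

coeffs = np.polyfit(scales, values, deg=2)       # Richardson-style polynomial extrapolation
estimate = np.polyval(coeffs, 0.0)               # value extrapolated to the zero-error limit
print(f"raw (scale 1): {values[0]:.4f}   extrapolated: {estimate:.4f}   exact: 0.8000")
```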

  16. Error field optimization in DIII-D using extremum seeking control

    DOE PAGES

    Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; ...

    2016-06-03

    A closed-loop error field control algorithm is implemented in the Plasma Control System of the DIII-D tokamak and used to identify optimal control currents during a single plasma discharge. The algorithm, based on established extremum seeking control theory, exploits the link in tokamaks between maximizing the toroidal angular momentum and minimizing deleterious non-axisymmetric magnetic fields. Slowly-rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real-time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of the convergence, and identify future algorithm upgrades that may allow more rapid convergence that projects to convergence times in ITER on the order of tens of seconds.
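
    A minimal extremum-seeking sketch in the spirit of the algorithm above is given below: a sinusoidal dither is added to a control current, one period of measurements is correlated with the dither to estimate the local gradient, and the estimate is used to climb toward maximum rotation. The quadratic `rotation` map and all constants are illustrative stand-ins for the plasma response, not DIII-D data.

```python
# Dither-and-correlate extremum seeking on a toy rotation-vs-current map.
import numpy as np

def rotation(current):
    """Toy plasma rotation vs. correction-coil current; maximum at current = 2.0 (illustrative)."""
    return 50.0 - 4.0 * (current - 2.0) ** 2

a, gain, samples = 0.1, 0.1, 64                  # dither amplitude, update gain, samples per dither period
phase = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
u_hat = 0.0                                      # initial coil-current estimate
for cycle in range(200):
    dither = a * np.sin(phase)
    y = rotation(u_hat + dither)                            # one dither period of rotation measurements
    grad_est = 2.0 * np.mean(y * np.sin(phase)) / a         # correlate with the dither -> local gradient
    u_hat += gain * grad_est                                # climb toward maximum rotation

print("converged coil current:", round(u_hat, 3))           # expected near 2.0
```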

  17. Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    2001-01-01

    Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta ARK2 methods from third- to fifth-order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta method (ERK). Coupling error terms are of equal order to those of the elemental methods. Derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, z^[I] → ∞, and retain high stability efficiency in the absence of stiffness, z^[I] → 0. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading order ARK2 error terms and Butcher coefficient magnitudes as well as maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID-controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.

  18. The importance of matched poloidal spectra to error field correction in DIII-D

    DOE PAGES

    Paz-Soldan, Carlos; Lanctot, Matthew J.; Logan, Nikolas C.; ...

    2014-07-09

    Optimal error field correction (EFC) is thought to be achieved when coupling to the least-stable "dominant" mode of the plasma is nulled at each toroidal mode number (n). The limit of this picture is tested in the DIII-D tokamak by applying superpositions of in- and ex-vessel coil set n = 1 fields calculated to be fully orthogonal to the n = 1 dominant mode. In co-rotating H-mode and low-density Ohmic scenarios the plasma is found to be respectively 7x and 20x less sensitive to the orthogonal field as compared to the in-vessel coil set field. For the scenarios investigated, any geometry of EFC coil can thus recover a strong majority of the detrimental effect introduced by the n = 1 error field. Furthermore, despite low sensitivity to the orthogonal field, its optimization in H-mode is shown to be consistent with minimizing the neoclassical toroidal viscosity torque and not the higher-order n = 1 mode coupling.

  19. Radiometric calibration of hyper-spectral imaging spectrometer based on optimizing multi-spectral band selection

    NASA Astrophysics Data System (ADS)

    Sun, Li-wei; Ye, Xin; Fang, Wei; He, Zhen-lei; Yi, Xiao-long; Wang, Yu-peng

    2017-11-01

    A hyper-spectral imaging spectrometer has high spatial and spectral resolution. Its radiometric calibration requires knowledge of the sources used at high spectral resolution. In order to satisfy this source requirement, an on-orbit radiometric calibration method is designed in this paper. The calibration chain is based on the spectral inversion accuracy of the calibration light source. A genetic algorithm program is used to optimize the channel design of the transfer radiometer while taking into account the degradation of the halogen lamp, thus realizing high-accuracy inversion of the spectral curve over the whole working time. The experimental results show that the average root mean squared error is 0.396%, the maximum root mean squared error is 0.448%, and the relative errors at all wavelengths are within 1% in the spectral range from 500 nm to 900 nm during 100 h of operating time. The design lays a foundation for the high-accuracy calibration of the imaging spectrometer.

  20. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.

  1. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by the Shifted Hammersley Method (SHM) and calculated by the finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated by changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using a multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated on a classic three-phase power TDO problem.

  2. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  3. An optimized BP neural network based on genetic algorithm for static decoupling of a six-axis force/torque sensor

    NASA Astrophysics Data System (ADS)

    Fu, Liyue; Song, Aiguo

    2018-02-01

    In order to improve the measurement precision of a six-axis force/torque sensor for robots, a BP decoupling algorithm optimized by a genetic algorithm (the GA-BP algorithm) is proposed in this paper. The weights and thresholds of a BP neural network with a 6-10-6 topology are optimized by the GA to decouple the six-axis force/torque sensor. By comparison with traditional decoupling algorithms, namely computing the pseudo-inverse of the calibration matrix and the classical BP algorithm, the results validate the good decoupling performance of the GA-BP algorithm, and the coupling errors are reduced.
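
    For contrast with the GA-BP network, the sketch below shows the traditional linear baseline mentioned above: a 6x6 decoupling matrix estimated as the least-squares (pseudo-inverse) map from raw bridge outputs to applied loads, using synthetic calibration data; it is not the neural-network method of the paper.

```python
# Pseudo-inverse calibration (linear static decoupling) of a six-axis force/torque sensor.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200
F_applied = rng.uniform(-50.0, 50.0, size=(6, n_samples))       # known applied loads (Fx..Mz)
C_true = np.eye(6) + 0.05 * rng.standard_normal((6, 6))         # unknown coupling of the sensor
V = np.linalg.inv(C_true) @ F_applied + 0.01 * rng.standard_normal((6, n_samples))  # raw outputs

C_hat = F_applied @ np.linalg.pinv(V)                           # decoupling matrix via pseudo-inverse
F_est = C_hat @ V
coupling_error = np.max(np.abs(F_est - F_applied)) / 50.0 * 100.0
print(f"max decoupling error: {coupling_error:.2f} % of full scale")
```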

  4. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.

  5. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for statics loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC-NASTRAN, Version 67.5.

  6. Corner smoothing of 2D milling toolpath using b-spline curve by optimizing the contour error and the feedrate

    NASA Astrophysics Data System (ADS)

    Özcan, Abdullah; Rivière-Lorphèvre, Edouard; Ducobu, François

    2018-05-01

    In part manufacturing, an efficient process should minimize the cycle time needed to reach the prescribed quality on the part. In order to optimize it, the machining time needs to be as low as possible and the quality needs to meet the requirements. For a 2D milling toolpath defined by sharp corners, the programmed feedrate differs from the reachable feedrate due to the kinematic limits of the motor drives. This phenomenon leads to a loss of productivity. Smoothing the toolpath allows the machining time to be reduced significantly, but the dimensional accuracy should not be neglected. Therefore, a way to address the problem of optimizing a toolpath in part manufacturing is to take into account both the manufacturing time and the part quality. On the one hand, maximizing the feedrate will minimize the manufacturing time; on the other hand, the maximum contour error needs to be kept under a threshold to meet the quality requirements. This paper presents a method to optimize sharp corner smoothing using b-spline curves by adjusting the control points defining the curve. The objective function used in the optimization process is based on the contour error and the difference between the programmed feedrate and an estimation of the reachable feedrate. The estimation of the reachable feedrate is based on geometrical information. Some simulation results are presented in the paper and the machining times are compared in each case.
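
    The sketch below illustrates the corner-smoothing geometry on a single 90-degree corner using a cubic Bezier segment (a special case of a b-spline); the deviation of the curve from the corner point serves as a simple contour-error measure, and the control-point offsets would be the quantities adjusted by the optimization described above. The corner coordinates and offsets are arbitrary.

```python
# Cubic Bezier corner smoothing and a simple contour-error measure (illustrative geometry).
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    t = t[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

corner = np.array([10.0, 0.0])                      # sharp corner of the programmed toolpath
d = 2.0                                             # distance at which smoothing starts/ends (design variable)
p0 = corner + np.array([-d, 0.0])                   # on the incoming segment
p3 = corner + np.array([0.0, d])                    # on the outgoing segment
p1 = corner + np.array([-d / 3.0, 0.0])             # inner control points pulled toward the corner
p2 = corner + np.array([0.0, d / 3.0])

t = np.linspace(0.0, 1.0, 200)
curve = cubic_bezier(p0, p1, p2, p3, t)
contour_error = np.min(np.linalg.norm(curve - corner, axis=1))   # closest approach to the corner
print(f"contour error at the corner: {contour_error:.3f} mm for d = {d} mm")
```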

  7. Artificial Intelligence Based Optimization for the Se(IV) Removal from Aqueous Solution by Reduced Graphene Oxide-Supported Nanoscale Zero-Valent Iron Composites

    PubMed Central

    Cao, Rensheng; Ruan, Wenqian; Wu, Xianliang; Wei, Xionghui

    2018-01-01

    Highly promising artificial intelligence tools, including artificial neural network (ANN), genetic algorithm (GA) and particle swarm optimization (PSO), were applied in the present study to develop an approach for the evaluation of Se(IV) removal from aqueous solutions by reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites. Both GA and PSO were used to optimize the parameters of the ANN. The effect of operational parameters (i.e., initial pH, temperature, contact time and initial Se(IV) concentration) on the removal efficiency was examined using response surface methodology (RSM), which was also utilized to obtain a dataset for the ANN training. The ANN-GA model results (with a prediction error of 2.88%) showed a better agreement with the experimental data than the ANN-PSO model results (with a prediction error of 4.63%) and the RSM model results (with a prediction error of 5.56%); thus, the ANN-GA model was an ideal choice for modeling and optimizing the Se(IV) removal by the nZVI/rGO composites due to its low prediction error. The analysis of the experimental data illustrates that the removal process of Se(IV) obeyed the Langmuir isotherm and the pseudo-second-order kinetic model. Furthermore, the Se 3d and 3p peaks found in the XPS spectra of the nZVI/rGO composites after the removal treatment indicate that the removal of Se(IV) occurred mainly through adsorption and reduction mechanisms. PMID:29543753
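
    For readers who want a feel for the ANN-GA idea, here is a minimal Python sketch (scikit-learn for the ANN, a bare-bones genetic loop for the search). The synthetic removal-efficiency data, the two tuned hyperparameters, and the fitness definition are assumptions made for illustration; the paper's actual models and training data differ.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Hypothetical RSM-style dataset: (pH, temperature, contact time, initial conc.) -> removal %.
        X = rng.uniform([3, 20, 5, 10], [9, 50, 120, 100], size=(120, 4))
        y = 95 - 2 * (X[:, 0] - 6) ** 2 + 0.1 * X[:, 2] - 0.05 * X[:, 3] + rng.normal(0, 1, 120)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        def prediction_error(genes):
            """Mean absolute percentage prediction error of an ANN built from one chromosome."""
            hidden, lr = int(genes[0]), float(genes[1])
            ann = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                               max_iter=2000, random_state=0).fit(X_tr, y_tr)
            return 100 * np.mean(np.abs((ann.predict(X_te) - y_te) / y_te))

        # Bare-bones real-coded GA over (hidden units, learning rate).
        pop = np.column_stack([rng.integers(2, 30, 12), rng.uniform(1e-4, 1e-1, 12)])
        for _ in range(8):
            errors = np.array([prediction_error(g) for g in pop])
            parents = pop[np.argsort(errors)[:6]]                         # truncation selection
            children = parents[rng.integers(0, 6, 6)].copy()              # clone, then mutate
            children[:, 0] = np.clip(children[:, 0] + rng.integers(-3, 4, 6), 2, 30)
            children[:, 1] = np.clip(children[:, 1] * rng.uniform(0.5, 2.0, 6), 1e-4, 1e-1)
            pop = np.vstack([parents, children])

        best = min(pop, key=prediction_error)
        print("best hidden units:", int(best[0]), "| best learning rate:", round(float(best[1]), 4))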

  8. Artificial Intelligence Based Optimization for the Se(IV) Removal from Aqueous Solution by Reduced Graphene Oxide-Supported Nanoscale Zero-Valent Iron Composites.

    PubMed

    Cao, Rensheng; Fan, Mingyi; Hu, Jiwei; Ruan, Wenqian; Wu, Xianliang; Wei, Xionghui

    2018-03-15

    Highly promising artificial intelligence tools, including artificial neural network (ANN), genetic algorithm (GA) and particle swarm optimization (PSO), were applied in the present study to develop an approach for the evaluation of Se(IV) removal from aqueous solutions by reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites. Both GA and PSO were used to optimize the parameters of the ANN. The effect of operational parameters (i.e., initial pH, temperature, contact time and initial Se(IV) concentration) on the removal efficiency was examined using response surface methodology (RSM), which was also utilized to obtain a dataset for the ANN training. The ANN-GA model results (with a prediction error of 2.88%) showed a better agreement with the experimental data than the ANN-PSO model results (with a prediction error of 4.63%) and the RSM model results (with a prediction error of 5.56%); thus, the ANN-GA model was an ideal choice for modeling and optimizing the Se(IV) removal by the nZVI/rGO composites due to its low prediction error. The analysis of the experimental data illustrates that the removal process of Se(IV) obeyed the Langmuir isotherm and the pseudo-second-order kinetic model. Furthermore, the Se 3d and 3p peaks found in the XPS spectra of the nZVI/rGO composites after the removal treatment indicate that the removal of Se(IV) occurred mainly through adsorption and reduction mechanisms.

  9. Approximate solution of space and time fractional higher order phase field equation

    NASA Astrophysics Data System (ADS)

    Shamseldeen, S.

    2018-03-01

    This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.

  10. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formula. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  11. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations: 1. Forward model, error analysis, and information content

    NASA Astrophysics Data System (ADS)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-05-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
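
    For context, optimal estimation retrievals of this kind typically minimize a cost function balancing the fit to the measurements against the a priori state. In standard notation (a textbook formulation, not quoted from this abstract):

        J(\mathbf{x}) = \left[\mathbf{y}-\mathbf{F}(\mathbf{x})\right]^{\mathsf T}\mathbf{S}_{\varepsilon}^{-1}\left[\mathbf{y}-\mathbf{F}(\mathbf{x})\right] + \left(\mathbf{x}-\mathbf{x}_{a}\right)^{\mathsf T}\mathbf{S}_{a}^{-1}\left(\mathbf{x}-\mathbf{x}_{a}\right)

    where x = (τ, reff, h) is the state vector, y the MODIS IR observations, F the fast RT forward model, S_ε the combined measurement and forward-model error covariance (into which the four error sources enter), and x_a, S_a the a priori state and its covariance.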

  12. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations. Part I: Forward model, error analysis, and information content.

    PubMed

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-05-27

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness ( τ ), effective radius ( r eff ), and cloud-top height ( h ). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.

  13. Path planning and parameter optimization of uniform removal in active feed polishing

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Wang, Shaozhi; Zhang, Chunlei; Zhang, Linghua; Chen, Huanan

    2015-06-01

    A high-quality ultrasmooth surface is demanded in short-wave optical systems. However, the existing polishing methods have difficulties meeting the requirement on spherical or aspheric surfaces. As a new kind of small tool polishing method, active feed polishing (AFP) could attain a surface roughness of less than 0.3 nm (RMS) on spherical elements, although AFP may magnify the residual figure error or mid-frequency error. The purpose of this work is to propose an effective algorithm to realize uniform removal of the surface in the processing. First, the principle of AFP and the mechanism of the polishing machine are introduced. In order to maintain the processed figure error, a variable-pitch spiral path planning algorithm and the dwell time-solving model are proposed. For suppressing the possible mid-frequency error, the uniformity of the synthesis tool path, which is generated by an arbitrary point at the polishing tool bottom, is analyzed and evaluated, and the angular velocity ratio of the tool spinning motion to the revolution motion is optimized. Finally, an experiment is conducted on a convex spherical surface, and an ultrasmooth surface is acquired. In conclusion, a high-quality ultrasmooth surface can be successfully obtained by the algorithm with little degradation of the figure and mid-frequency errors.

  14. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  15. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize its initial weights. For comparison purposes, the autoregressive moving average model, the random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.

  16. Regret and the rationality of choices.

    PubMed

    Bourgeois-Gironde, Sacha

    2010-01-27

    Regret helps to optimize decision behaviour. It can be defined as a rational emotion. Several recent neurobiological studies have confirmed the interface between emotion and cognition at which regret is located and documented its role in decision behaviour. These data give credibility to the incorporation of regret in decision theory that had been proposed by economists in the 1980s. However, finer distinctions are required in order to get a better grasp of how regret and behaviour influence each other. Regret can be defined as a predictive error signal but this signal does not necessarily transpose into a decision-weight influencing behaviour. Clinical studies on several types of patients show that the processing of an error signal and its influence on subsequent behaviour can be dissociated. We propose a general understanding of how regret and decision-making are connected in terms of regret being modulated by rational antecedents of choice. Regret and the modification of behaviour on its basis will depend on the criteria of rationality involved in decision-making. We indicate current and prospective lines of research in order to refine our views on how regret contributes to optimal decision-making.

  17. A hybrid artificial neural network and particle swarm optimization for prediction of removal of hazardous dye brilliant green from aqueous solution using zinc sulfide nanoparticle loaded on activated carbon.

    PubMed

    Ghaedi, M; Ansari, A; Bahari, F; Ghaedi, A M; Vafaei, A

    2015-02-25

    In the present study, zinc sulfide nanoparticles loaded on activated carbon (ZnS-NP-AC) were synthesized by a simple procedure in the presence of ultrasound and characterized using different techniques such as SEM and BET analysis. This material was then used for brilliant green (BG) removal. The dependency of the BG removal percentage on various parameters, including pH, adsorbent dosage, initial dye concentration and contact time, was examined and optimized. The mechanism and rate of adsorption were ascertained by fitting experimental data at various times to conventional kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intra-particle diffusion models. Comparison according to general criteria such as the relative error in adsorption capacity and the correlation coefficient confirms the usability of the pseudo-second-order kinetic model for explanation of the data. The Langmuir model efficiently explains the behavior of the adsorption system and gives full information about the interaction of BG with ZnS-NP-AC. A multiple linear regression (MLR) model and a hybrid artificial neural network and particle swarm optimization (ANN-PSO) model were used for prediction of brilliant green adsorption onto ZnS-NP-AC. Comparison of the results obtained using the offered models confirms the higher ability of the ANN model compared with the MLR model for prediction of BG adsorption onto ZnS-NP-AC. Using the optimal ANN-PSO model, the coefficients of determination (R(2)) were 0.9610 and 0.9506, and the mean squared error (MSE) values were 0.0020 and 0.0022 for the training and testing data sets, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
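
    The kinetic and isotherm models named above have standard linearized forms (textbook expressions, given here for reference rather than taken from the paper):

        \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
        \qquad\text{(pseudo-second-order kinetics)}

        \frac{C_e}{q_e} = \frac{1}{K_L q_m} + \frac{C_e}{q_m}
        \qquad\text{(Langmuir isotherm)}

    where q_t and q_e are the adsorption capacities at time t and at equilibrium, k_2 the pseudo-second-order rate constant, C_e the equilibrium dye concentration, q_m the monolayer capacity, and K_L the Langmuir constant.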

  18. A hybrid artificial neural network and particle swarm optimization for prediction of removal of hazardous dye brilliant green from aqueous solution using zinc sulfide nanoparticle loaded on activated carbon

    NASA Astrophysics Data System (ADS)

    Ghaedi, M.; Ansari, A.; Bahari, F.; Ghaedi, A. M.; Vafaei, A.

    2015-02-01

    In the present study, zinc sulfide nanoparticles loaded on activated carbon (ZnS-NP-AC) were synthesized by a simple procedure in the presence of ultrasound and characterized using different techniques such as SEM and BET analysis. This material was then used for brilliant green (BG) removal. The dependency of the BG removal percentage on various parameters, including pH, adsorbent dosage, initial dye concentration and contact time, was examined and optimized. The mechanism and rate of adsorption were ascertained by fitting experimental data at various times to conventional kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intra-particle diffusion models. Comparison according to general criteria such as the relative error in adsorption capacity and the correlation coefficient confirms the usability of the pseudo-second-order kinetic model for explanation of the data. The Langmuir model efficiently explains the behavior of the adsorption system and gives full information about the interaction of BG with ZnS-NP-AC. A multiple linear regression (MLR) model and a hybrid artificial neural network and particle swarm optimization (ANN-PSO) model were used for prediction of brilliant green adsorption onto ZnS-NP-AC. Comparison of the results obtained using the offered models confirms the higher ability of the ANN model compared with the MLR model for prediction of BG adsorption onto ZnS-NP-AC. Using the optimal ANN-PSO model, the coefficients of determination (R2) were 0.9610 and 0.9506, and the mean squared error (MSE) values were 0.0020 and 0.0022 for the training and testing data sets, respectively.

  19. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.

  20. Identifying presence of correlated errors in GRACE monthly harmonic coefficients using machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sra, Gurveer; Karantaidis, George; Sideris, Michael G.

    2017-04-01

    A new method for identifying correlated errors in Gravity Recovery and Climate Experiment (GRACE) monthly harmonic coefficients has been developed and tested. Correlated errors are present in the differences between monthly GRACE solutions, and can be suppressed using a de-correlation filter. In principle, the de-correlation filter should be implemented only on coefficient series with correlated errors to avoid losing useful geophysical information. In previous studies, two main methods of implementing the de-correlation filter have been utilized. In the first one, the de-correlation filter is implemented starting from a specific minimum order until the maximum order of the monthly solution examined. In the second one, the de-correlation filter is implemented only on specific coefficient series, the selection of which is based on statistical testing. The method proposed in the present study exploits the capabilities of supervised machine learning algorithms such as neural networks and support vector machines (SVMs). The pattern of correlated errors can be described by several numerical and geometric features of the harmonic coefficient series. The features of extreme cases of both correlated and uncorrelated coefficients are extracted and used for the training of the machine learning algorithms. The trained machine learning algorithms are later used to identify correlated errors and provide the probability that a coefficient series is correlated. Regarding SVM algorithms, an extensive study is performed with various kernel functions in order to find the optimal training model for prediction. The selection of the optimal training model is based on the classification accuracy of the trained SVM algorithm on the same samples used for training. Results show excellent performance of all algorithms, with a classification accuracy of 97%-100% on a pre-selected set of training samples, both in the validation stage of the training procedure and in the subsequent use of the trained algorithms to classify independent coefficients. This accuracy is also confirmed by the external validation of the trained algorithms using the hydrology model GLDAS NOAH. The proposed method meets the requirement of identifying and de-correlating only coefficients with correlated errors. Also, there is no need to apply statistical testing or other techniques that require prior de-correlation of the harmonic coefficients.

  1. Chemometric optimization of the robustness of the near infrared spectroscopic method in wheat quality control.

    PubMed

    Pojić, Milica; Rakić, Dušan; Lazić, Zivorad

    2015-01-01

    A chemometric approach was applied for the optimization of the robustness of the NIRS method for wheat quality control. Due to the high number of experimental variables (n=6) and response variables (n=7) to be studied, the optimization experiment was divided into two stages: a screening stage to evaluate which of the considered variables were significant, and an optimization stage to optimize the identified factors in the previously selected experimental domain. The significant variables were identified by using a fractional factorial experimental design, whilst a Box-Wilson rotatable central composite design (CCRD) was run to obtain the optimal values for the significant variables. The measured responses included: moisture, protein and wet gluten content, Zeleny sedimentation value and deformation energy. In order to achieve the minimal variation in responses, the optimal factor settings were found by minimizing the propagation of error (POE). The simultaneous optimization of factors was conducted by a desirability function. The highest desirability of 87.63% was accomplished by setting up the experimental conditions as follows: 19.9°C for sample temperature, 19.3°C for ambient temperature and 240V for instrument voltage. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Global optimization method based on ray tracing to achieve optimum figure error compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin

    2017-02-01

    Figure error degrades the performance of an optical system. When predicting the performance and performing system assembly, compensation by clocking of optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation errors, and building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. This method is based on precise ray tracing, not approximate calculation, to compute the wavefront error for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of each optical element. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
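
    A minimal sketch of the idea, assuming a toy three-element system whose figure errors are each represented by a single astigmatism-like term; the composite-RMS cost and the use of SciPy's dual annealing are illustrative stand-ins, not the authors' implementation, which traces rays through the real system.

        import numpy as np
        from scipy.optimize import dual_annealing

        # Toy figure errors: each element contributes an astigmatism-like wavefront term
        # a*cos(2*(theta - phi)) over a common pupil azimuth grid; phi is the clocking angle.
        theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        amplitudes = np.array([0.8, 0.7, 0.5])          # waves, one per element (illustrative)

        def composite_rms(phis):
            """RMS of the summed wavefront error for a given set of clocking angles."""
            w = sum(a * np.cos(2 * (theta - p)) for a, p in zip(amplitudes, phis))
            return np.sqrt(np.mean(w ** 2))

        bounds = [(0.0, 2 * np.pi)] * len(amplitudes)
        result = dual_annealing(composite_rms, bounds, seed=0)
        print("optimal clocking angles (deg):", np.degrees(result.x).round(1))
        print("residual composite RMS (waves):", round(result.fun, 3))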

  3. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to improve the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
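
    The sketch below shows the basic PSO-over-SVM-hyperparameters pattern in Python (scikit-learn SVR plus a bare-bones swarm update). The synthetic error series, the lag features, the search ranges, and the swarm constants are assumptions for illustration; NAPSO adds natural selection and simulated annealing on top of this plain update rule.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)

        # Hypothetical dynamic-error series: predict e(t) from the previous three samples.
        t = np.arange(300)
        err = np.sin(0.1 * t) + 0.1 * rng.normal(size=300)
        X = np.column_stack([err[0:-3], err[1:-2], err[2:-1]])
        y = err[3:]

        def neg_cv_rmse(params):
            """Negative cross-validated RMSE of an SVR with (log C, log gamma) = params."""
            C, gamma = np.exp(params)
            scores = cross_val_score(SVR(C=C, gamma=gamma), X, y,
                                     scoring="neg_root_mean_squared_error", cv=3)
            return scores.mean()

        # Bare-bones particle swarm over (log C, log gamma), maximizing neg_cv_rmse.
        n, iters = 15, 25
        pos = rng.uniform([-2, -6], [6, 1], size=(n, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([neg_cv_rmse(p) for p in pos])
        gbest = pbest[pbest_val.argmax()]
        for _ in range(iters):
            r1, r2 = rng.random((n, 2)), rng.random((n, 2))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, [-2, -6], [6, 1])
            vals = np.array([neg_cv_rmse(p) for p in pos])
            improved = vals > pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmax()]

        print("best C, gamma:", np.exp(gbest).round(4), "| CV RMSE:", round(-pbest_val.max(), 4))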

  4. Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2012-01-01

    This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.

  5. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  6. Optimization of DSC MRI Echo Times for CBV Measurements Using Error Analysis in a Pilot Study of High-Grade Gliomas.

    PubMed

    Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C

    2017-09-01

    The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation of error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus traverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation (repeated-measures ANOVA, P < .001). Greater heterogeneity was observed in the optimal TE values for high-grade gliomas, and differences in the mean values of all 3 ROIs were statistically significant. The optimal TE for the arterial input function estimation is much shorter; this finding implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload). © 2017 by American Journal of Neuroradiology.

  7. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.

  8. Research on accuracy analysis of laser transmission system based on Zemax and Matlab

    NASA Astrophysics Data System (ADS)

    Chen, Haiping; Liu, Changchun; Ye, Haixian; Xiong, Zhao; Cao, Tingfen

    2017-05-01

    The laser transmission system is important in high-power solid-state laser facilities; its function is to transfer and focus the light beam in accordance with the physical function of the facility. This system is mainly composed of transmission mirror modules and a wedge lens module. In order to realize precision alignment of the system, the overall alignment precision requirement must be decomposed into allowable ranges of calibration error for each module. The traditional method is to analyze the error factors of the modules separately and then carry out a linear synthesis to obtain the combined influence of the multiple modules and factors. In order to analyze the effect of the alignment error of each module on the beam center and focus more accurately, this paper combines Monte Carlo random trials with ray tracing to analyze the influence of multiple modules and factors on the beam center, and to evaluate and optimize the results of the accuracy decomposition.
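
    A stripped-down illustration of the Monte Carlo approach to error budgeting described above, in Python rather than Zemax/Matlab; the module sensitivities, tolerances, and the linear beam-shift model are invented for the example and would normally come from ray tracing.

        import numpy as np

        rng = np.random.default_rng(3)
        n_trials = 100_000

        # Hypothetical sensitivities: beam-center shift (mm) per unit tilt error (mrad)
        # for each transmission-mirror module and the wedge-lens module.
        sensitivity = np.array([0.8, 0.8, 0.8, 1.2])
        # Candidate per-module alignment tolerances (mrad, 1-sigma), to be evaluated.
        tol = np.array([0.05, 0.05, 0.05, 0.03])

        tilt_errors = rng.normal(0.0, tol, size=(n_trials, len(tol)))
        beam_shift = tilt_errors @ sensitivity            # linear combination of module errors

        print("95th-percentile beam-center shift: %.3f mm" % np.percentile(np.abs(beam_shift), 95))
        print("linear worst case (sum of 3-sigma terms): %.3f mm" % (3 * tol @ sensitivity))

    Comparing the Monte Carlo percentile with the linear worst-case sum shows why the traditional linear synthesis can be overly conservative when decomposing the accuracy budget.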

  9. Extending high-order flux operators on spherical icosahedral grids and their application in a Shallow Water Model for transporting the Potential Vorticity

    NASA Astrophysics Data System (ADS)

    Zhang, Y.

    2017-12-01

    The unstructured formulation of the third/fourth-order flux operators used by the Advanced Research WRF is extended twofold on spherical icosahedral grids. First, the fifth- and sixth-order flux operators of WRF are further extended, and the nominally second- to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. The fifth-order scheme also exhibits smaller sensitivity to the damping coefficient than the third-order scheme. Overall, the even-order schemes have higher limiter sensitivity than the odd-order schemes. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second- and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, flux operators with higher damping coefficients, which essentially behaves like the Anticipated Potential Vorticity Method, present optimal results.

  10. Testing of Lagrange multiplier damped least-squares control algorithm for woofer-tweeter adaptive optics

    PubMed Central

    Zou, Weiyao; Burns, Stephen A.

    2012-01-01

    A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
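
    In one standard damped least-squares form (generic notation; the abstract does not spell the expression out), the command vector c sent to a DM with influence matrix A for a measured wavefront w is

        \mathbf{c} = \left(\mathbf{A}^{\mathsf T}\mathbf{A} + \lambda\,\mathbf{I}\right)^{-1}\mathbf{A}^{\mathsf T}\mathbf{w}

    where the damping factor λ suppresses poorly sensed modes; per the result above, the authors set it for each mirror to the median of the eigenvalue spectrum of that DM's influence matrix.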

  11. Testing of Lagrange multiplier damped least-squares control algorithm for woofer-tweeter adaptive optics.

    PubMed

    Zou, Weiyao; Burns, Stephen A

    2012-03-20

    A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. © 2012 Optical Society of America

  12. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

    PubMed Central

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei

    2018-01-01

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model’s performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to improve the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM’s parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models’ performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942

  13. Co-optimization of lithographic and patterning processes for improved EPE performance

    NASA Astrophysics Data System (ADS)

    Maslow, Mark J.; Timoshkov, Vadim; Kiers, Ton; Jee, Tae Kwon; de Loijer, Peter; Morikita, Shinya; Demand, Marc; Metz, Andrew W.; Okada, Soichiro; Kumar, Kaushik A.; Biesemans, Serge; Yaegashi, Hidetami; Di Lorenzo, Paolo; Bekaert, Joost P.; Mao, Ming; Beral, Christophe; Larivière, Stephane

    2017-03-01

    Complementary lithography is already being used for advanced logic patterns. The tight pitches for 1D Metal layers are expected to be created using spacer-based multiple patterning ArF-i exposures, and the more complex cut/block patterns are made using EUV exposures. At the same time, control requirements of CDU, pattern shift and pitch-walk are approaching sub-nanometer levels to meet edge placement error (EPE) requirements. Local variability, such as Line Edge Roughness (LER), Local CDU, and Local Placement Error (LPE), are dominant factors in the total Edge Placement error budget. In the lithography process, improving the imaging contrast when printing the core pattern has been shown to improve the local variability. In the etch process, it has been shown that the fusion of atomic level etching and deposition can also improve these local variations. Co-optimization of lithography and etch processing is expected to further improve the performance over individual optimizations alone. To meet the scaling requirements and keep process complexity to a minimum, EUV is increasingly seen as the platform for delivering the exposures for both the grating and the cut/block patterns beyond N7. In this work, we evaluated the overlay and pattern fidelity of an EUV block printed in a negative tone resist on an ArF-i SAQP grating. High-order overlay modeling and corrections during the exposure can reduce overlay error after development, a significant component of the total EPE. During etch, additional degrees of freedom are available to improve the pattern placement error in single layer processes. Process control of advanced-pitch nanoscale multi-patterning techniques as described above is exceedingly complicated in a high volume manufacturing environment. Incorporating potential patterning optimizations into both design and HVM controls for the lithography process is expected to bring a combined benefit over individual optimizations. In this work we will show the EPE performance improvement for a 32nm pitch SAQP + block patterned Metal 2 layer by co-optimizing the lithography and etch processes. Recommendations for further improvements and alternative processes will be given.

  14. Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.

    PubMed

    Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue

    2015-08-20

    Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.

  15. Technical Note: Millimeter precision in ultrasound based patient positioning: Experimental quantification of inherent technical limitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun

    2014-08-15

    Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
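
    For reference, a widely used margin recipe (van Herk's formula, assumed here rather than stated in the abstract) combines the systematic error Σ and the random error σ as

        M \approx 2.5\,\Sigma + 0.7\,\sigma

    so per-step calibration errors of a few tenths of a millimetre, added in quadrature, are consistent with the roughly 3 mm CTV-to-PTV margin quoted above.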

  16. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations. Part I: Forward model, error analysis, and information content

    PubMed Central

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2018-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available. PMID:29707470

  17. Retrieval of Ice Cloud Properties Using an Optimal Estimation Algorithm and MODIS Infrared Observations. Part I: Forward Model, Error Analysis, and Information Content

    NASA Technical Reports Server (NTRS)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (tau), effective radius (r(sub eff)), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.

  18. Retrieval of Ice Cloud Properties Using an Optimal Estimation Algorithm and MODIS Infrared Observations. Part I: Forward Model, Error Analysis, and Information Content

    NASA Technical Reports Server (NTRS)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (tau), effective radius (r(sub eff)), and cloud top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.

  19. Implementation of an integrated computerized prescriber order-entry system for chemotherapy in a multisite safety-net health system.

    PubMed

    Chung, Clement; Patel, Shital; Lee, Rosetta; Fu, Lily; Reilly, Sean; Ho, Tuyet; Lionetti, Jason; George, Michael D; Taylor, Pam

    2018-03-15

    The development of a computerized prescriber order-entry (CPOE) system for chemotherapy in a multisite safety-net health system and the challenges to its successful implementation are described. Before CPOE for chemotherapy was first implemented and embedded in the electronic medical record system of Harris Health System (HHS), pharmacy personnel relied on regimen-specific preprinted order sets. However, due to differences in practice styles and workflow logistics, the paper orders across the 3 facilities were mostly site specific, with varying clinical content. Many of these order sets had not been approved by the oncology subcommittee. In addition, disparities in clinical knowledge and lack of communication contributed to inconsistencies in order set development. Led by medical directors from medical oncology departments at the 3 facilities, pharmacy administrators, and information technology representatives, HHS committed resources to supporting the adoption and use of a CPOE system for chemotherapy. Five practical lessons of broad applicability have been learned: engagement of interprofessional stakeholders, optimization of workflow before CPOE implementation, requirement of verification tool for CPOE, consolidation of protocols, and commitment to ongoing training and support. Evaluation of the CPOE system demonstrated a systemwide reduction in medication errors by 75% ( p < 0.05). Satisfaction with the CPOE system varied among sites and was unchanged institutionwide 6 months after the CPOE implementation. The development and implementation of CPOE for chemotherapy at a multisite safety-net health system created opportunities to optimize patient care and reduce variations through interprofessional collaborations. Initial evaluation suggested that CPOE reduced the medication-order error rate and improved user satisfaction in 1 of 3 facilities. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  20. Microgrid optimal scheduling considering impact of high penetration wind generation

    NASA Astrophysics Data System (ADS)

    Alanazi, Abdulaziz

    The objective of this thesis is to study the impact of high-penetration wind energy on the economic and reliable operation of microgrids. Wind power is variable, i.e., constantly changing, and nondispatchable, i.e., it cannot be controlled by the microgrid controller. Thus, accurate forecasting of wind power is an essential task in order to study its impacts on microgrid operation. Two commonly used forecasting methods, the Autoregressive Integrated Moving Average (ARIMA) and the Artificial Neural Network (ANN), have been used in this thesis to improve the wind power forecasting. The forecasting error is calculated using the mean absolute percentage error (MAPE) and is reduced using the ANN. The wind forecast is further used in the microgrid optimal scheduling problem. The microgrid optimal scheduling is performed by developing a viable model for security-constrained unit commitment (SCUC) based on a mixed-integer linear programming (MILP) method. The proposed SCUC is solved for various wind penetration levels, and the relationship between the total cost and the wind power penetration is found. In order to reduce microgrid power transfer fluctuations, an additional constraint is proposed and added to the SCUC formulation. The new constraint controls the time-based fluctuations. The impact of the constraint on microgrid SCUC results is tested and validated with numerical analysis. Finally, the applicability of the proposed models is demonstrated through numerical simulations.
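
    The forecast-accuracy metric used above has the standard definition

        \mathrm{MAPE} = \frac{100\%}{N}\sum_{t=1}^{N}\left|\frac{A_t - F_t}{A_t}\right|

    where A_t and F_t are the actual and forecast wind power at interval t and N is the number of intervals in the evaluation horizon.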

  1. Damage identification in beams using speckle shearography and an optimal spatial sampling

    NASA Astrophysics Data System (ADS)

    Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.

    2016-10-01

    Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
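
    To make the noise-amplification point concrete: with measured modal rotations θ_i sampled at spacing Δx, the modal curvature is typically approximated by a central difference (a standard formula, not specific to this paper),

        \kappa(x_i) \approx \frac{\theta_{i+1} - \theta_{i-1}}{2\,\Delta x}

    so uncorrelated measurement noise of standard deviation σ_θ propagates to the curvature with standard deviation σ_θ/(√2 Δx); an overly fine sampling interval therefore amplifies the measurement errors, which is why an optimal spatial sampling is sought.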

  2. Parametric optimization in virtual prototyping environment of the control device for a robotic system used in thin layers deposition

    NASA Astrophysics Data System (ADS)

    Enescu (Balaş), M. L.; Alexandru, C.

    2016-08-01

    The paper deals with the optimal design of the control system for a 6-DOF robot used in thin layers deposition. The optimization is based on a parametric technique: the design objective is modelled as a numerical function, and the optimal values of the design variables are then established so as to minimize the objective function. The robotic system is a mechatronic product, which integrates the mechanical device and the controlled operating device. The mechanical device of the robot was designed in the CAD (Computer Aided Design) software CATIA, the 3D model being then transferred to the MBS (Multi-Body Systems) environment ADAMS/View. The control system was developed in the concurrent engineering concept, through integration with the MBS mechanical model, by using the DFC (Design for Control) software solution EASY5. The necessary angular motions in the six joints of the robot, in order to obtain the imposed trajectory of the end-effector, have been established by performing an inverse kinematic analysis. The positioning error in each joint of the robot is used as the design objective, the optimization goal being to minimize its root mean square during simulation, which is a measure of the magnitude of the varying positioning error.

  3. Multichannel-Sensing Scheduling and Transmission-Energy Optimizing in Cognitive Radio Networks with Energy Harvesting.

    PubMed

    Hoan, Tran-Nhut-Khai; Hiep, Vu-Van; Koo, In-Soo

    2016-03-31

    This paper considers cognitive radio networks (CRNs) utilizing multiple time-slotted primary channels in which cognitive users (CUs) are powered by energy harvesters. The CUs are under the consideration that hardware constraints on radio devices only allow them to sense and transmit on one channel at a time. For a scenario where the arrival of harvested energy packets and the battery capacity are finite, we propose a scheme to optimize (i) the channel-sensing schedule (consisting of finding the optimal action (silent or active) and sensing order of channels) and (ii) the optimal transmission energy set corresponding to the channels in the sensing order for the operation of the CU in order to maximize the expected throughput of the CRN over multiple time slots. Frequency-switching delay, energy-switching cost, correlation in spectrum occupancy across time and frequency and errors in spectrum sensing are also considered in this work. The performance of the proposed scheme is evaluated via simulation. The simulation results show that the throughput of the proposed scheme is greatly improved, in comparison to related schemes in the literature. The collision ratio on the primary channels is also investigated.

  4. Optimal secondary source position in exterior spherical acoustical holophony

    NASA Astrophysics Data System (ADS)

    Pasqual, A. M.; Martin, V.

    2012-02-01

    Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by using a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. Besides, the inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, such an unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above-mentioned questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought, which leads to the lowest reproduction error in the least-squares sense without overloading the transducers. In order to address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position improves significantly the holophonic reconstruction and maximizes the regularization quality factor (minimizes the amount of regularization), which is the main general contribution of this paper. Therefore, this factor can also be used as a cost function to obtain the optimal secondary source position.
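    The regularization step described above (discarding strongly decaying spherical waves before inverting for the driving signals) can be sketched as a truncated least-squares solve; the transfer matrix, the target-field coefficients and the efficiency threshold below are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    # Sketch of the regularization step: the least-squares driving signals are
    # computed only in the subspace of components the secondary source radiates
    # efficiently; strongly decaying (inefficient) components are discarded
    # before inversion so the transducers are not overloaded.

    rng = np.random.default_rng(0)
    n_harmonics, n_drivers = 16, 12
    T = rng.standard_normal((n_harmonics, n_drivers))   # secondary-source transfer matrix (illustrative)
    T[8:] *= 1e-3                                        # high-order harmonics radiate poorly
    p = rng.standard_normal(n_harmonics)                 # target (primary) field coefficients (illustrative)

    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    efficient = s > 1e-2 * s.max()                       # keep only efficiently radiated modes

    # Truncated pseudo-inverse: discard weak singular values instead of inverting them.
    q = Vt[efficient].T @ ((U[:, efficient].T @ p) / s[efficient])

    print("retained modes:", efficient.sum(), "of", s.size)
    print("driver signal norm :", np.linalg.norm(q))      # stays bounded thanks to truncation
    print("reproduction error :", np.linalg.norm(T @ q - p))
    ```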

  5. Expeditious reconciliation for practical quantum key distribution

    NASA Astrophysics Data System (ADS)

    Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.

    2004-08-01

    The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.

  6. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy in the stage of design and manufacturing, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy in the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. The sensitivity probabilistic model is established and the global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy in the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy in the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the optimization objective. By utilizing the genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  7. Optimization of sustained release aceclofenac microspheres using response surface methodology.

    PubMed

    Deshmukh, Rameshwar K; Naik, Jitendra B

    2015-03-01

    Polymeric microspheres containing aceclofenac were prepared by a single emulsion (oil-in-water) solvent evaporation method using response surface methodology (RSM). Microspheres were prepared by changing formulation variables such as the amount of Eudragit® RS100 and the amount of polyvinyl alcohol (PVA) by statistical experimental design in order to enhance the encapsulation efficiency (E.E.) of the microspheres. The resultant microspheres were evaluated for their size, morphology, E.E., and in vitro drug release. The amount of Eudragit® RS100 and the amount of PVA were both found to be significant factors in determining the E.E. of the microspheres. A linear mathematical model equation fitted to the data was used to predict the E.E. in the optimal region. An optimized formulation of microspheres was prepared using the optimal process variable settings in order to evaluate the optimization capability of the models generated according to the IV-optimal design. The microspheres showed high E.E. (74.14±0.015% to 85.34±0.011%) and suitably sustained drug release (from a minimum of 40% to a maximum of 60%) over a period of 12 h. The optimized microspheres formulation showed an E.E. of 84.87±0.005 with a small error value (1.39). The low magnitude of the error and the significant value of R(2) in the present investigation prove the high prognostic ability of the design. The absence of interactions between drug and polymers was confirmed by Fourier transform infrared (FTIR) spectroscopy. Differential scanning calorimetry (DSC) and X-ray powder diffractometry (XRPD) revealed the dispersion of the drug within the microspheres formulation. The microspheres were found to be discrete and spherical with a smooth surface. The results demonstrate that these microspheres could be a promising delivery system to sustain the drug release and improve the E.E., thus prolonging drug action and achieving the highest healing effect with minimal gastrointestinal side effects. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Forecasting the Amount of Waste-Sewage Water Discharged into the Yangtze River Basin Based on the Optimal Fractional Order Grey Model

    PubMed Central

    Li, Shuliang; Meng, Wei; Xie, Yufeng

    2017-01-01

    With the rapid development of the Yangtze River economic belt, the amount of waste-sewage water discharged into the Yangtze River basin increases sharply year by year, which has impeded the sustainable development of the Yangtze River basin. Water security along the Yangtze River basin is very important for China; it concerns the water security of roughly one-third of China’s population and the sustainable development of the 19 provinces, municipalities and autonomous regions along the Yangtze River basin. Therefore, a scientific prediction of the amount of waste-sewage water discharged into the Yangtze River basin has positive significance for the sustainable development of the industrial belt along the Yangtze River basin. This paper builds the fractional DWSGM (1,1) model (DWSGM (1,1) is short for Discharge amount of Waste Sewage Grey Model for one order equation and one variable) based on the fractional accumulating generation operator and fractional reducing operator, and calculates the optimal order of “r” by using the particle swarm optimization (PSO) algorithm to minimize the average relative simulation error. Meanwhile, the simulation performance of the DWSGM (1,1) model with the optimal fractional order is tested by comparing the simulation results of grey prediction models with different orders. Finally, the optimal fractional order DWSGM (1,1) grey model is applied to predict the amount of waste-sewage water discharged into the Yangtze River basin, and corresponding countermeasures and suggestions are put forward through analyzing and comparing the prediction results. This paper has positive significance for enriching the fractional order modeling method of the grey system. PMID:29295517
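    The order-selection step can be illustrated with a basic particle swarm search over the scalar order r; in the sketch below the grey-model fitting is replaced by a stand-in error function with a known minimum, so only the PSO mechanics are shown, not the DWSGM (1,1) construction itself.

    ```python
    import numpy as np

    # Illustrative sketch of the order-selection step: a basic particle swarm
    # optimization (PSO) searches for the fractional order r that minimizes a
    # mean-relative-error objective. The grey-model fitting is replaced by a
    # stand-in objective; the paper's DWSGM(1,1) model is not reproduced here.

    def mean_relative_error(r):
        # Stand-in for "fit the fractional-order grey model with order r and
        # return the average relative simulation error"; the toy objective is
        # assumed to have its best order near r = 0.4.
        return 0.05 + (r - 0.4) ** 2

    rng = np.random.default_rng(1)
    n_particles, n_iter = 20, 50
    pos = rng.uniform(0.0, 1.0, n_particles)          # candidate orders r in (0, 1)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([mean_relative_error(r) for r in pos])
    gbest = pbest[pbest_val.argmin()]

    w, c1, c2 = 0.7, 1.5, 1.5                          # standard PSO coefficients
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        val = np.array([mean_relative_error(r) for r in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()]

    print(f"optimal order r ≈ {gbest:.3f}, error ≈ {mean_relative_error(gbest):.4f}")
    ```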

  9. Forecasting the Amount of Waste-Sewage Water Discharged into the Yangtze River Basin Based on the Optimal Fractional Order Grey Model.

    PubMed

    Li, Shuliang; Meng, Wei; Xie, Yufeng

    2017-12-23

    With the rapid development of the Yangtze River economic belt, the amount of waste-sewage water discharged into the Yangtze River basin increases sharply year by year, which has impeded the sustainable development of the Yangtze River basin. Water security along the Yangtze River basin is very important for China; it concerns the water security of roughly one-third of China's population and the sustainable development of the 19 provinces, municipalities and autonomous regions along the Yangtze River basin. Therefore, a scientific prediction of the amount of waste-sewage water discharged into the Yangtze River basin has positive significance for the sustainable development of the industrial belt along the Yangtze River basin. This paper builds the fractional DWSGM(1,1) model (DWSGM(1,1) is short for Discharge amount of Waste Sewage Grey Model for one order equation and one variable) based on the fractional accumulating generation operator and fractional reducing operator, and calculates the optimal order of "r" by using the particle swarm optimization (PSO) algorithm to minimize the average relative simulation error. Meanwhile, the simulation performance of the DWSGM(1,1) model with the optimal fractional order is tested by comparing the simulation results of grey prediction models with different orders. Finally, the optimal fractional order DWSGM(1,1) grey model is applied to predict the amount of waste-sewage water discharged into the Yangtze River basin, and corresponding countermeasures and suggestions are put forward through analyzing and comparing the prediction results. This paper has positive significance for enriching the fractional order modeling method of the grey system.

  10. Optimization of fixture layouts of glass laser optics using multiple kernel regression.

    PubMed

    Su, Jianhua; Cao, Enhua; Qiao, Hong

    2014-05-10

    We aim to build an integrated fixturing model to describe the structural properties and thermal properties of the support frame of glass laser optics. Therefore, (a) a near global optimal set of clamps can be computed to minimize the surface shape error of the glass laser optic based on the proposed model, and (b) a desired surface shape error can be obtained by adjusting the clamping forces under various environmental temperatures based on the model. To construct the model, we develop a new multiple kernel learning method and call it multiple kernel support vector functional regression. The proposed method uses two layer regressions to group and order the data sources by the weights of the kernels and the factors of the layers. Because of that, the influences of the clamps and the temperature can be evaluated by grouping them into different layers.

  11. Density-based penalty parameter optimization on C-SVM.

    PubMed

    Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An

    2014-01-01

    The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. In order to avoid a position change of the SVM interface caused by erroneous system outliers, C-SVM was implemented to decrease the influence of the system's outliers. Traditional C-SVM holds a uniform parameter C for both positive and negative instances; however, according to the different number proportions and the data distribution, positive and negative instances should be set with different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization of C-SVM. The experimental results indicated that our proposed algorithm has outstanding performance with respect to both precision and recall.
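    The mechanism of assigning different penalty weights to the two classes can be sketched with scikit-learn's class weighting; the density-based rule from the paper for choosing the weights is not reproduced, and the dataset and weight values below are illustrative.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Minimal sketch of a soft-margin SVM in which the penalty parameter C is
    # weighted differently for positive and negative instances (here via
    # scikit-learn's class_weight). The weights below are illustrative, not the
    # density-based values proposed in the paper.

    rng = np.random.default_rng(0)
    X_pos = rng.normal(loc=+1.0, scale=1.0, size=(200, 2))   # majority class
    X_neg = rng.normal(loc=-1.0, scale=1.0, size=(40, 2))    # minority class
    X = np.vstack([X_pos, X_neg])
    y = np.hstack([np.ones(200, dtype=int), -np.ones(40, dtype=int)])

    # Effective penalty is C * class_weight[label]; the minority class gets a
    # larger weight so its errors are penalized more heavily.
    clf = SVC(kernel="rbf", C=1.0, class_weight={1: 1.0, -1: 5.0})
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```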

  12. Development of adaptive observation strategy using retrospective optimal interpolation

    NASA Astrophysics Data System (ADS)

    Noh, N.; Kim, S.; Song, H.; Lim, G.

    2011-12-01

    Retrospective optimal interpolation (ROI) is a method that is used to minimize cost functions with multiple minima without using adjoint models. Song and Lim (2011) performed experiments to reduce the computational costs of implementing ROI by transforming the control variables into eigenvectors of the background error covariance. We adapt the ROI algorithm to compute sensitivity estimates of severe weather events over the Korean peninsula. The eigenvectors of the ROI algorithm are modified every time observations are assimilated. This implies that the modified eigenvectors show the error distribution of the control variables updated by assimilating observations. Thus, we can estimate the effects of specific observations. In order to verify the adaptive observation strategy, high-impact weather over the Korean peninsula is simulated and interpreted using the WRF modeling system, and sensitive regions for each high-impact weather event are calculated. The effects of assimilation for each observation type are discussed.

  13. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.

  14. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
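    For a single input parameter, the optimal-estimator decomposition described above can be sketched with the histogram (binning) technique: the optimal estimator is the conditional mean of the target given the input, and the irreducible error is the remaining variance about it. The synthetic data below are illustrative; the paper's point is that this simple binning approach degrades once several input parameters are used.

    ```python
    import numpy as np

    # Sketch of an optimal-estimator analysis with one input parameter phi:
    # approximate E[q | phi] by binning, then compute the irreducible error as
    # the mean squared deviation of q about that conditional mean.

    rng = np.random.default_rng(0)
    n = 100_000
    phi = rng.uniform(0.0, 1.0, n)                                 # model input parameter
    q = np.sin(2 * np.pi * phi) + 0.3 * rng.standard_normal(n)     # quantity to model (illustrative)

    n_bins = 64
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(phi, edges) - 1, 0, n_bins - 1)

    # Conditional mean per bin (the discrete optimal estimator).
    sums = np.bincount(idx, weights=q, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    cond_mean = sums / np.maximum(counts, 1)

    irreducible = np.mean((q - cond_mean[idx]) ** 2)   # error no model of phi alone can beat
    total_var = np.var(q)
    print(f"irreducible error: {irreducible:.4f}  (true noise variance {0.3 ** 2:.4f})")
    print(f"fraction of variance explainable by phi: {1 - irreducible / total_var:.3f}")
    ```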

  15. Performance and structure of single-mode bosonic codes

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang

    2018-03-01

    The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.

  16. Regret and the rationality of choices

    PubMed Central

    Bourgeois-Gironde, Sacha

    2010-01-01

    Regret helps to optimize decision behaviour. It can be defined as a rational emotion. Several recent neurobiological studies have confirmed the interface between emotion and cognition at which regret is located and documented its role in decision behaviour. These data give credibility to the incorporation of regret in decision theory that had been proposed by economists in the 1980s. However, finer distinctions are required in order to get a better grasp of how regret and behaviour influence each other. Regret can be defined as a predictive error signal but this signal does not necessarily transpose into a decision-weight influencing behaviour. Clinical studies on several types of patients show that the processing of an error signal and its influence on subsequent behaviour can be dissociated. We propose a general understanding of how regret and decision-making are connected in terms of regret being modulated by rational antecedents of choice. Regret and the modification of behaviour on its basis will depend on the criteria of rationality involved in decision-making. We indicate current and prospective lines of research in order to refine our views on how regret contributes to optimal decision-making. PMID:20026463

  17. Research and application of a novel hybrid decomposition-ensemble learning paradigm with error correction for daily PM10 forecasting

    NASA Astrophysics Data System (ADS)

    Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang

    2018-03-01

    In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
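    The forecast-plus-error-correction structure can be illustrated with a much simpler stand-in: a first model forecasts the series, a second model is fitted to the residuals of the first, and the final forecast is their sum. Plain least-squares autoregressive fits replace the FEEMD/VMD decompositions and the CS-optimized ELM used in the paper, and the synthetic series is illustrative.

    ```python
    import numpy as np

    # Structural sketch of the error-correction idea: model 1 forecasts the
    # series, model 2 forecasts the residuals of model 1, and the corrected
    # forecast is the sum of both predictions.

    def fit_ar(series, order):
        """Least-squares AR(order) fit; returns coefficients (constant last)."""
        X = np.column_stack([series[order - k - 1: len(series) - k - 1] for k in range(order)])
        X = np.column_stack([X, np.ones(len(X))])
        coef, *_ = np.linalg.lstsq(X, series[order:], rcond=None)
        return coef

    def ar_predict(series, coef):
        order = len(coef) - 1
        lags = series[-order:][::-1]          # most recent value first, matching fit_ar
        return float(lags @ coef[:-1] + coef[-1])

    rng = np.random.default_rng(0)
    t = np.arange(400)
    pm10 = 60 + 20 * np.sin(2 * np.pi * t / 30) + 5 * rng.standard_normal(t.size)  # synthetic series

    order = 5
    coef1 = fit_ar(pm10, order)
    # One-step-ahead forecasts of the first model over the training span.
    base = np.array([ar_predict(pm10[:k], coef1) for k in range(order, len(pm10))])
    resid = pm10[order:] - base

    coef2 = fit_ar(resid, order)              # second model fitted to the errors
    final = ar_predict(pm10, coef1) + ar_predict(resid, coef2)
    print(f"next-step forecast with error correction: {final:.2f}")
    ```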

  18. Serial robot for the trajectory optimization and error compensation of TMT mask exchange system

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Zhang, Feifan; Zhou, Zengxiang; Zhai, Chao

    2015-10-01

    The mask exchange system is the main part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). Following the conception of the TMT mask exchange system, the preliminary design, based on the IRB 140 robot, is introduced in this paper. The stiffness model of the IRB 140 in SolidWorks was analyzed under different gravity vectors for further error compensation. In order to find the right location and plan the path, the robot and mask cassette models were imported into the MOBIE model to simulate different schemes, from which the initial installation position and routing were obtained. Based on these initial parameters, the IRB 140 robot was operated to simulate the path and estimate the mask exchange time. Meanwhile, MATLAB and ADAMS software were used to perform simulation analysis and optimize the route, to acquire the kinematics parameters, and to compare with the experimental results. The simulation and experimental research described in the paper provide a theoretical reference that can efficiently guide the improvement of the mask exchange system structure, the optimization of the path, and the precision of the robot position.

  19. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism for a specific task; it becomes complex when more than five precision points of the coupler are prescribed, and it is then prone to large structural error. The methodology for arriving at a more precise solution is to use optimization. The work presented herein considers optimization of the structural error in a single-degree-of-freedom closed kinematic chain for generating functions such as log(x), e^x, tan(x) and sin(x) with five precision points. The Freudenstein-Chebyshev equation is used to develop the five-precision-point synthesis of the mechanism. An extended formulation is proposed and the results are verified against existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative structural error analysis is presented for the error optimized by the least-squares method and by the extended Freudenstein-Chebyshev method.
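    A least-squares function-generation step of this kind can be sketched with Freudenstein's equation in its common form K1·cos(θ4) − K2·cos(θ2) + K3 = cos(θ2 − θ4): five precision points give an overdetermined linear system for the three link-ratio unknowns, and the residual indicates the structural error. The target function, angle ranges and Chebyshev spacing below are illustrative choices, not the paper's data.

    ```python
    import numpy as np

    # Sketch of five-precision-point function generation with Freudenstein's
    # equation, solved for K1, K2, K3 by linear least squares; the leftover
    # residual at the precision points reflects the structural error.

    def chebyshev_points(lo, hi, n):
        """Chebyshev spacing of n precision points on [lo, hi]."""
        k = np.arange(1, n + 1)
        return 0.5 * (lo + hi) - 0.5 * (hi - lo) * np.cos((2 * k - 1) * np.pi / (2 * n))

    # Function to be generated: y = log(x) on x in [1, 2] (illustrative).
    x = chebyshev_points(1.0, 2.0, 5)
    y = np.log(x)

    # Linear mapping of x to input angle t2 and y to output angle t4 (assumed ranges).
    t2 = np.deg2rad(30.0) + (x - 1.0) / (2.0 - 1.0) * np.deg2rad(90.0)
    t4 = np.deg2rad(60.0) + (y - y.min()) / (y.max() - y.min()) * np.deg2rad(70.0)

    # Least-squares solution of the Freudenstein equations at the precision points.
    A = np.column_stack([np.cos(t4), -np.cos(t2), np.ones_like(t2)])
    b = np.cos(t2 - t4)
    (K1, K2, K3), *_ = np.linalg.lstsq(A, b, rcond=None)

    structural_residual = A @ np.array([K1, K2, K3]) - b
    print("K1, K2, K3 =", np.round([K1, K2, K3], 4))
    print("residual at precision points:", np.round(structural_residual, 6))
    ```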

  20. A trust-region algorithm for the optimization of PSA processes using reduced-order modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, A.; Biegler, L.; Zitney, S.

    2009-01-01

    The last few decades have seen a considerable increase in the applications of adsorptive gas separation technologies, such as pressure swing adsorption (PSA); the applications range from bulk separations to trace contaminant removal. PSA processes are based on solid-gas equilibrium and operate under periodic transient conditions [1]. Bed models for these processes are therefore defined by coupled nonlinear partial differential and algebraic equations (PDAEs) distributed in space and time with periodic boundary conditions that connect the processing steps together and high nonlinearities arising from non-isothermal effects and nonlinear adsorption isotherms. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current nonlinear programming algorithms. Model reduction is a powerful methodology that permits systematic generation of cost-efficient low-order representations of large-scale systems that result from discretization of such PDAEs. In particular, low-dimensional approximations can be obtained from reduced order modeling (ROM) techniques based on proper orthogonal decomposition (POD) and can be used as surrogate models in the optimization problems. In this approach, a representative ensemble of solutions of the dynamic PDAE system is constructed by solving a higher-order discretization of the model using the method of lines, followed by the application of the Karhunen-Loeve expansion to derive a small set of empirical eigenfunctions (POD modes). These modes are used as basis functions within a Galerkin projection framework to derive a low-order DAE system that accurately describes the dominant dynamics of the PDAE system. This approach leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient [2]. The ROM methodology has been successfully applied to a 2-bed 4-step PSA process used for separating a hydrogen-methane mixture in [3]. The reduced order model developed was successfully used to optimize this process to maximize hydrogen recovery within a trust-region. We extend this approach in this work to develop a rigorous trust-region algorithm for ROM-based optimization of PSA processes. The trust-region update rules and the sufficient decrease condition for the objective are used to determine the size of the trust-region. Based on the decrease in the objective function and the error in the ROM, a ROM update strategy is designed [4, 5]. The inequalities and bounds are handled in the algorithm using an exact penalty formulation, and the non-smooth trust-region algorithm by Conn et al. [6] is used to handle non-differentiability. To ensure that the first-order consistency condition is met and the optimum obtained from ROM-based optimization corresponds to the optimum of the original problem, a scaling function, such as the one proposed by Alexandrov et al. [7], is incorporated in the objective function. Such an error control mechanism is also capable of handling numerical inconsistencies such as unphysical oscillations in the state variable profiles. The proposed methodology is applied to optimize a PSA process to concentrate CO2 from a nitrogen-carbon dioxide mixture. As in [3], separate ROMs are developed for each operating step with different POD modes for each state variable. Numerical results will be presented for optimization case studies which involve maximizing CO2 recovery or feed throughput, or minimizing overall power consumption.

  1. Optimization of removal function in computer controlled optical surfacing

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Guo, Peiji; Ren, Jianfeng

    2010-10-01

    The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimizing method, time-sharing synthesis of the removal function, is proposed to address the problems, encountered in the planet-motion or translation-rotation modes, of the removal function deviating far from the Gaussian type and of the slow convergence of the removal function error. A detailed time-sharing synthesis using six removal functions is discussed. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each centre to complete a cycle in proper order. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in the six cycles to the time duration of the six cycles, which depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under two different modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and the distribution of the time-sharing synthesis removal functions are obtained. The evaluation function used in the optimization is an approaching factor, defined as the ratio of the material removal within the area of half of the polishing tool coverage from the polishing center to the total material removal within the full polishing tool coverage area. After optimization, it is found that the optimized removal function obtained by time-sharing synthesis is closer to the ideal Gaussian-type removal function than those obtained by the traditional methods. The time-sharing synthesis method of the removal function provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce the intermediate- and high-frequency error.

  2. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
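    The central effect described above, that an asymmetric error cost shifts the cost-minimizing estimate away from the maximum-likelihood value, can be shown numerically; the Gaussian uncertainty and the cost slopes in the sketch below are illustrative assumptions, not the paper's model.

    ```python
    import numpy as np

    # Numerical illustration: when under-estimation errors are costlier than
    # over-estimation errors, the estimate minimizing expected cost deviates
    # systematically from the maximum-likelihood (here: mean) value.

    rng = np.random.default_rng(0)
    samples = rng.normal(loc=0.0, scale=1.0, size=50_000)   # uncertainty about the parameter
    # ML estimate for this symmetric Gaussian belief is its mean, i.e. 0.

    def expected_cost(estimate, c_over=1.0, c_under=4.0):
        """Asymmetric piecewise-linear cost: under-estimation is 4x more costly."""
        err = estimate - samples
        return np.mean(np.where(err > 0, c_over * err, -c_under * err))

    candidates = np.linspace(-2.0, 2.0, 401)
    costs = np.array([expected_cost(e) for e in candidates])
    best = candidates[costs.argmin()]

    print("maximum-likelihood estimate   : 0.000")
    print(f"minimum-expected-cost estimate: {best:.3f}")   # shifted toward the costly side
    ```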

  3. Identification of dynamic systems, theory and formulation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1985-01-01

    The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.

  4. Adaptive infinite impulse response system identification using modified-interior search algorithm with Lévy flight.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva

    2017-03-01

    In this paper, a new meta-heuristic optimization technique, called the interior search algorithm (ISA) with Lévy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied for addressing nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method involves faster convergence and single parameter tuning, and does not require derivative information because it uses a stochastic random search based on the concepts of Lévy flight. A proper tuning of the control parameter has been performed in order to achieve a balance between intensification and diversification phases. In order to evaluate the performance of the proposed method, mean square error (MSE), computation time and percentage improvement are considered as the performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmark IIR systems using same-order and reduced-order models. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples and the simulation results are compared. The obtained results confirm the efficiency of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
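    The objective evaluated inside any of the compared meta-heuristics is the mean square error between the unknown system's output and a candidate IIR model's output for the same excitation; the sketch below shows that objective only, with illustrative coefficients rather than one of the paper's benchmark systems, and the search loop (ISA, PSO, GA, ...) would repeatedly call mse() with new candidate coefficient vectors.

    ```python
    import numpy as np
    from scipy.signal import lfilter

    # Sketch of the MSE objective for adaptive IIR system identification:
    # drive the "unknown" plant and a candidate IIR model with the same input
    # and compare their outputs.

    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)                       # white-noise excitation

    # "Unknown" plant: H(z) = (b0 + b1 z^-1) / (1 + a1 z^-1 + a2 z^-2), illustrative values.
    b_true, a_true = [0.3, -0.4], [1.0, -0.9, 0.2]
    d = lfilter(b_true, a_true, x)                      # desired (measured) output

    def mse(params):
        """params = [b0, b1, a1, a2] of the candidate IIR model."""
        b = params[:2]
        a = np.concatenate(([1.0], params[2:]))
        y = lfilter(b, a, x)
        return np.mean((d - y) ** 2)

    print("MSE at the true coefficients :", mse(np.array([0.3, -0.4, -0.9, 0.2])))
    print("MSE at a perturbed candidate :", mse(np.array([0.25, -0.35, -0.8, 0.15])))
    ```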

  5. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and the kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.

  6. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis, easily applicable to high order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and in the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.

  7. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation as the two second-order equations, then deal with a second-order equation employing finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution for semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in L² and H¹-norm for both the scalar unknown u and the diffusion term w = −Δu and a priori error estimates in (L²)²-norm for its gradient χ = ∇u for both semi-discrete and fully discrete schemes. PMID:23864831

  8. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation as the two second-order equations, then deal with a second-order equation employing finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution for semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in L² and H¹-norm for both the scalar unknown u and the diffusion term w = -Δu and a priori error estimates in (L²)²-norm for its gradient χ = ∇u for both semi-discrete and fully discrete schemes.

  9. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest for specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized in order to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference heart rate value was measured using a classic measurement system through direct contact.
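    A bare-bones MUSIC pseudospectrum for heart-rate estimation on a short record is sketched below; the synthetic "cardiac" signal, sampling rate, embedding dimension and signal-subspace order (2, i.e. one real sinusoid) are illustrative assumptions, and the subspace dimensioning is precisely what the paper optimizes to shorten the observation time.

    ```python
    import numpy as np

    # MUSIC sketch: covariance of overlapping snapshots, noise subspace from
    # the smallest eigenvalues, and a pseudospectrum scanned over plausible
    # heart-rate frequencies.

    rng = np.random.default_rng(0)
    fs, T = 50.0, 10.0                       # 50 Hz sampling, 10 s observation (assumed)
    t = np.arange(0.0, T, 1.0 / fs)
    f_heart = 1.25                           # 75 beats per minute (illustrative)
    x = np.sin(2 * np.pi * f_heart * t) + 0.8 * rng.standard_normal(t.size)

    m, p = 40, 2                             # embedding dimension, signal-subspace size
    X = np.array([x[i:i + m] for i in range(len(x) - m)])   # overlapping snapshots
    R = X.T @ X / X.shape[0]                 # sample covariance matrix

    eigval, eigvec = np.linalg.eigh(R)       # ascending eigenvalues
    En = eigvec[:, :m - p]                   # noise subspace (smallest eigenvalues)

    freqs = np.linspace(0.5, 3.0, 1000)      # plausible heart-rate band (30-180 bpm)
    k = np.arange(m)
    pseudo = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * k / fs)  # steering vector at frequency f
        pseudo.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)

    f_est = freqs[int(np.argmax(pseudo))]
    print(f"estimated heart rate: {60 * f_est:.1f} bpm")
    ```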

  10. A Simplified GCS-DCSK Modulation and Its Performance Optimization

    NASA Astrophysics Data System (ADS)

    Xu, Weikai; Wang, Lin; Chi, Chong-Yung

    2016-12-01

    In this paper, a simplified Generalized Code-Shifted Differential Chaos Shift Keying (GCS-DCSK) whose transmitter never needs any delay circuits, is proposed. However, its performance is deteriorated because the orthogonality between substreams cannot be guaranteed. In order to optimize its performance, the system model of the proposed GCS-DCSK with power allocations on substreams is presented. An approximate bit error rate (BER) expression of the proposed model, which is a function of substreams’ power, is derived using Gaussian Approximation. Based on the BER expression, an optimal power allocation strategy between information substreams and reference substream is obtained. Simulation results show that the BER performance of the proposed GCS-DCSK with the optimal power allocation can be significantly improved when the number of substreams M is large.

  11. Fourth-order convergence of a compact scheme for the one-dimensional biharmonic equation

    NASA Astrophysics Data System (ADS)

    Fishelov, D.; Ben-Artzi, M.; Croisille, J.-P.

    2012-09-01

    The convergence of a fourth-order compact scheme for the one-dimensional biharmonic problem is established in the case of general Dirichlet boundary conditions. The compact scheme invokes values of the unknown function as well as Padé approximations of its first-order derivative. Using the Padé approximation allows us to approximate the first-order derivative with fourth-order accuracy. However, although the truncation error of the discrete biharmonic scheme is of fourth order at interior points, the truncation error drops to first order at near-boundary points. Nonetheless, we prove that the scheme retains its fourth-order (optimal) accuracy. This is done by a careful inspection of the matrix elements of the discrete biharmonic operator. A number of numerical examples corroborate this effect. We also present a study of the eigenvalue problem u_xxxx = νu. We compute and display the eigenvalues and the eigenfunctions related to the continuous and the discrete problems. By the positivity of the eigenvalues, one can deduce the stability of the related time-dependent problem u_t = -u_xxxx. In addition, we study the eigenvalue problem u_xxxx = νu_xx. This is related to the stability of the linear time-dependent equation u_xxt = νu_xxxx. Its continuous and discrete eigenvalues and eigenfunctions (or eigenvectors) are computed and displayed graphically.
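    The Padé (compact) first-derivative approximation used by such schemes, u'_{i-1} + 4u'_i + u'_{i+1} = 3(u_{i+1} - u_{i-1})/h, can be sketched as a tridiagonal solve; in the sketch below the boundary derivative values are assumed known exactly, which is an illustrative simplification, and the observed error decays at fourth order under grid refinement.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    # Fourth-order Pade (compact) approximation of the first derivative on a
    # uniform grid, with the derivative at the two boundary nodes supplied.

    def pade_first_derivative(u, h, du_left, du_right):
        n = len(u)
        rhs = 3.0 * (u[2:] - u[:-2]) / h             # right-hand side at interior nodes
        rhs[0] -= du_left                            # move known boundary derivatives
        rhs[-1] -= du_right
        m = n - 2
        ab = np.zeros((3, m))
        ab[0, 1:] = 1.0                              # super-diagonal
        ab[1, :] = 4.0                               # diagonal
        ab[2, :-1] = 1.0                             # sub-diagonal
        du = np.empty(n)
        du[0], du[-1] = du_left, du_right
        du[1:-1] = solve_banded((1, 1), ab, rhs)
        return du

    for n in (20, 40, 80):
        x = np.linspace(0.0, 1.0, n + 1)
        h = x[1] - x[0]
        u = np.sin(2 * np.pi * x)
        exact = 2 * np.pi * np.cos(2 * np.pi * x)
        du = pade_first_derivative(u, h, exact[0], exact[-1])
        print(f"n={n:3d}  max error = {np.max(np.abs(du - exact)):.3e}")
    ```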

  12. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    PubMed Central

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  13. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    PubMed

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  14. Optimal information transfer in enzymatic networks: A field theoretic formulation

    NASA Astrophysics Data System (ADS)

    Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.

    2017-07-01

    Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.

  15. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches have stemmed from applying the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).

  16. Hybrid architecture for encoded measurement-based quantum computation

    PubMed Central

    Zwerger, M.; Briegel, H. J.; Dür, W.

    2014-01-01

    We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states, where, within the considered error model, we find a threshold of the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906

  17. Strategy optimization for mask rule check in wafer fab

    NASA Astrophysics Data System (ADS)

    Yang, Chuen Huei; Lin, Shaina; Lin, Roger; Wang, Alice; Lee, Rachel; Deng, Erwin

    2015-07-01

    The photolithography process is getting more and more sophisticated for wafer production following Moore's law. Therefore, for a wafer fab, consolidated and close cooperation with the mask house is key to achieving silicon wafer success. However, generally speaking, it is not easy to preserve such a partnership because many engineering efforts and frequent communication are indispensable. This loose connection is obvious in mask rule check (MRC). Mask houses will do their own MRC at the job deck stage, but that checking only covers identification of mask process limitations including writing, etching, inspection, metrology, etc. No further checking of wafer-process-related mask data errors is performed after the data files of the whole mask are composed in the mask house. There are still many potential data errors even after post-OPC verification has been done for the main circuits. The errors in question are those which only occur when the main circuits are combined with frame and dummy patterns to form the whole reticle. Therefore, strategy optimization is on-going in UMC to evaluate MRC, especially for wafer-fab-related errors. The prerequisite is that there be no impact on the mask delivery cycle time even when adding this extra checking. A full-mask check based on the job deck in gds or oasis format is necessary in order to secure acceptable run time. The form of the summarized error report generated by this check is also crucial because a user-friendly interface will shorten engineers' judgment time to release the mask for writing. This paper will survey the key factors of MRC in a wafer fab.

  18. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.

    PubMed

    Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei

    2015-06-25

    Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on inertial reference frame are discussed in this paper. Both of them are based on gravity vector integration, therefore, the performance of these algorithms is determined by integration time. In previous works, integration time is selected by experience. In order to give a criterion for the selection process, and make the selection of the integration time more accurate, optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on the analysis of the error characteristics of these two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that in different operational conditions, the coarse alignment algorithms adopted for FOG INS are different in order to achieve better performance. Lastly, the experiment results validate the effectiveness of the proposed algorithm.

  19. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.

  20. A third-order computational method for numerical fluxes to guarantee nonnegative difference coefficients for advection-diffusion equations in a semi-conservative form

    NASA Astrophysics Data System (ADS)

    Sakai, K.; Watabe, D.; Minamidani, T.; Zhang, G. S.

    2012-10-01

    According to the Godunov theorem for numerical calculations of advection equations, there exist no higher-order schemes with constant positive difference coefficients in the family of polynomial schemes with accuracy exceeding first order. We propose a third-order computational scheme for numerical fluxes that guarantees non-negative difference coefficients of the resulting finite difference equations for advection-diffusion equations in a semi-conservative form, in which there exist two kinds of numerical fluxes at a cell surface and these two fluxes are not always coincident in non-uniform velocity fields. The present scheme is optimized so as to minimize the truncation errors of the numerical fluxes while fulfilling the positivity condition on the difference coefficients, which vary with the local Courant number and diffusion number. A key feature of the present optimized scheme is that it retains third-order accuracy everywhere without any numerical flux limiter. We extend the present method to multi-dimensional equations. Numerical experiments for advection-diffusion equations showed nonoscillatory solutions.

  1. Bias in error estimation when using cross-validation for model selection.

    PubMed

    Varma, Sudhir; Simon, Richard

    2006-02-23

    Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
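
    The nested cross-validation procedure described above is easy to misapply, so a minimal sketch is given below. It uses a hypothetical synthetic dataset, an RBF-kernel SVM, and an arbitrary parameter grid purely for illustration; it is not the Shrunken Centroid/SVM setup or the data of the study, but it contrasts the biased estimate (the same CV score that drove the tuning) with the nearly unbiased nested estimate (an outer CV loop wrapped around the entire tuning procedure).

        # Minimal nested-CV sketch (hypothetical data, classifier, and grid).
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import GridSearchCV, cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=200, n_features=50, random_state=0)
        param_grid = {"C": [0.1, 1, 10], "gamma": [1e-3, 1e-2, 1e-1]}

        # Inner loop: tunes the classifier parameters by minimizing CV error.
        inner = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)

        # Biased estimate: the same CV error that was minimized during tuning.
        inner.fit(X, y)
        print("inner-CV error of tuned model:", 1.0 - inner.best_score_)

        # Nested estimate: an outer CV loop repeats the entire tuning per fold.
        outer_scores = cross_val_score(inner, X, y, cv=10)
        print("nested-CV error estimate:", 1.0 - outer_scores.mean())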

  2. Statistical model for speckle pattern optimization.

    PubMed

    Su, Yong; Zhang, Qingchuan; Gao, Zeren

    2017-11-27

    Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas of the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
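
    As a rough illustration of the "filtered Poisson process" view of speckle pattern generation, the sketch below draws a Poisson-distributed number of speckles, places them at uniformly random positions, and sums a Gaussian-shaped spot for each. The image size, speckle radius, and density are arbitrary assumptions for demonstration, not the optimal values derived in the article.

        import numpy as np

        def random_speckle_pattern(size=256, density=0.02, radius=3.0, seed=0):
            """Filtered-Poisson speckle pattern: Poisson count of speckles,
            uniform random centers, Gaussian spot as the 'filter' kernel."""
            rng = np.random.default_rng(seed)
            n_speckles = rng.poisson(density * size * size)  # Poisson number of events
            cx = rng.uniform(0, size, n_speckles)
            cy = rng.uniform(0, size, n_speckles)
            yy, xx = np.mgrid[0:size, 0:size]
            img = np.zeros((size, size))
            for x0, y0 in zip(cx, cy):
                img += np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * radius ** 2))
            return np.clip(img, 0.0, 1.0)

        pattern = random_speckle_pattern()
        print(pattern.shape, round(float(pattern.mean()), 3))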

  3. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution to the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source bandwidth constraints. This report describes the results of investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while retaining error within given error bounds.
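
    For context, the standard Taylor-series weights mentioned above follow from solving a small linear system over the stencil offsets; the optimized operators studied in the report would instead relax this exact-order matching to reduce error within given bounds or bandwidth constraints. The sketch below shows only the standard Taylor-series construction (with unit grid spacing) and is not the report's optimization procedure.

        import math
        import numpy as np

        def fd_coefficients(offsets, deriv_order):
            """Taylor-series finite-difference weights for the m-th derivative on the
            given stencil offsets (grid spacing h = 1; divide by h**m otherwise).
            Enforces sum_j w_j * s_j**k = m! * delta(k, m) for k = 0..n-1."""
            s = np.asarray(offsets, dtype=float)
            n = len(s)
            A = np.vander(s, n, increasing=True).T   # row k holds s_j**k
            b = np.zeros(n)
            b[deriv_order] = math.factorial(deriv_order)
            return np.linalg.solve(A, b)

        # Classic 4th-order central first derivative: [1/12, -2/3, 0, 2/3, -1/12]
        print(fd_coefficients([-2, -1, 0, 1, 2], 1))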

  5. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  6. Optimal Control Modification for Robust Adaptation of Singularly Perturbed Systems with Slow Actuators

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham; Stepanyan, Vahram; Boskovic, Jovan

    2009-01-01

    Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. The model matching conditions in the transformed time coordinate result in an increase in the feedback gain and a modification of the adaptive law.

  7. Optimal Control Modification Adaptive Law for Time-Scale Separated Systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. The model matching conditions in the transformed time coordinate result in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.

  8. Optimal Control Modification for Time-Scale Separated Systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2012-01-01

    Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. The model matching conditions in the transformed time coordinate result in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.

  9. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov-metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
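
    To make the role of the Fisher Information Matrix concrete, the sketch below computes the FIM for the closed-form Verhulst-Pearl logistic model from finite-difference sensitivities and compares the D-optimality criterion det(FIM) for two candidate sampling-time designs. The parameter values, noise level, and designs are illustrative assumptions, not the SE-optimal machinery of the paper.

        import numpy as np

        def logistic(t, r, K, N0):
            """Closed-form Verhulst-Pearl logistic solution."""
            return K / (1.0 + (K / N0 - 1.0) * np.exp(-r * t))

        def fisher_information(times, theta, sigma=1.0, eps=1e-6):
            """FIM for i.i.d. Gaussian errors: F = S^T S / sigma^2, where S is the
            sensitivity matrix d(model)/d(theta) at the sampling times."""
            times = np.asarray(times, dtype=float)
            S = np.empty((len(times), len(theta)))
            for j in range(len(theta)):
                tp, tm = list(theta), list(theta)
                tp[j] += eps
                tm[j] -= eps
                S[:, j] = (logistic(times, *tp) - logistic(times, *tm)) / (2 * eps)
            return S.T @ S / sigma**2

        theta = (0.7, 17.5, 0.1)              # (r, K, N0), illustrative values only
        uniform = np.linspace(0.5, 25, 15)    # evenly spaced sampling times
        early = np.linspace(0.5, 10, 15)      # all samples before the plateau

        for name, design in [("uniform", uniform), ("early", early)]:
            F = fisher_information(design, theta)
            print(name, "det(FIM) =", np.linalg.det(F))   # D-optimality criterion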

  10. Optimized pulses for the control of uncertain qubits

    DOE PAGES

    Grace, Matthew D.; Dominy, Jason M.; Witzel, Wayne M.; ...

    2012-05-18

    The construction of high-fidelity control fields that are robust to control, system, and/or surrounding environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2 and π pulses and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses has calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order from π/2 and π pulses. We systematically integrate these criteria with optimal control theory, incorporating an estimate of the uncertain parameter to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double quantum dot qubits. For the qubit model used in this work, post facto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations.

  11. Numerical Optimization Strategy for Determining 3D Flow Fields in Microfluidics

    NASA Astrophysics Data System (ADS)

    Eden, Alex; Sigurdson, Marin; Mezic, Igor; Meinhart, Carl

    2015-11-01

    We present a hybrid experimental-numerical method for generating 3D flow fields from 2D PIV experimental data. An optimization algorithm is applied to a theory-based simulation of an alternating current electrothermal (ACET) micromixer in conjunction with 2D PIV data to generate an improved representation of 3D steady state flow conditions. These results can be used to investigate mixing phenomena. Experimental conditions were simulated using COMSOL Multiphysics to solve the temperature and velocity fields, as well as the quasi-static electric fields. The governing equations were based on a theoretical model for ac electrothermal flows. A Nelder-Mead optimization algorithm was used to achieve a better fit by minimizing the error between 2D PIV experimental velocity data and numerical simulation results at the measurement plane. By applying this hybrid method, the normalized RMS velocity error between the simulation and experimental results was reduced by more than an order of magnitude. The optimization algorithm altered 3D fluid circulation patterns considerably, providing a more accurate representation of the 3D experimental flow field. This method can be generalized to a wide variety of flow problems. This research was supported by the Institute for Collaborative Biotechnologies through grant W911NF-09-0001 from the U.S. Army Research Office.
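
    The optimization step can be sketched in a stripped-down form that has nothing to do with the actual COMSOL electrothermal simulation: a few scalar model parameters are adjusted with Nelder-Mead so that a parametric velocity field best matches "measured" data at a plane. All functions and values below are placeholders for illustration.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical stand-in for the physics simulation: a parametric 2D velocity
        # field evaluated at the PIV measurement plane (the study itself ran COMSOL).
        x = np.linspace(0, 1, 25)
        y = np.linspace(0, 1, 25)
        X, Y = np.meshgrid(x, y)

        def simulated_velocity(params):
            a, b = params
            return a * np.sin(np.pi * X) * np.cos(np.pi * Y) + b * X * Y

        # Synthetic "PIV measurement": the field for some true parameters plus noise.
        truth = simulated_velocity([1.3, 0.4])
        measured = truth + 0.02 * np.random.default_rng(0).normal(size=truth.shape)

        def rms_error(params):
            # Objective: RMS mismatch between simulation and data at the plane.
            return np.sqrt(np.mean((simulated_velocity(params) - measured) ** 2))

        result = minimize(rms_error, x0=[1.0, 0.0], method="Nelder-Mead")
        print(result.x, result.fun)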

  12. Hedging Your Bets by Learning Reward Correlations in the Human Brain

    PubMed Central

    Wunderlich, Klaus; Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J.

    2011-01-01

    Human subjects are proficient at tracking the mean and variance of rewards and updating these via prediction errors. Here, we addressed whether humans can also learn about higher-order relationships between distinct environmental outcomes, a defining ecological feature of contexts where multiple sources of rewards are available. By manipulating the degree to which distinct outcomes are correlated, we show that subjects implemented an explicit model-based strategy to learn the associated outcome correlations and were adept in using that information to dynamically adjust their choices in a task that required a minimization of outcome variance. Importantly, the experimentally generated outcome correlations were explicitly represented neuronally in right midinsula with a learning prediction error signal expressed in rostral anterior cingulate cortex. Thus, our data show that the human brain represents higher-order correlation structures between rewards, a core adaptive ability whose immediate benefit is optimized sampling. PMID:21943609

  13. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data

    NASA Astrophysics Data System (ADS)

    Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.

    2017-10-01

    We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at a few λ/D separations (~1-5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.

  14. Search, Memory, and Choice Error: An Experiment

    PubMed Central

    Sanjurjo, Adam

    2015-01-01

    Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, search orders with lower predicted memory load are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (and thus choice error) a limited-memory searcher ought to deviate from the search path of an unlimited-memory searcher in predictable ways, a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in people's behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order in which information is observed is an essential determinant of choice failure. PMID:26121356

  15. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Only four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390

  16. Robust design of microchannel cooler

    NASA Astrophysics Data System (ADS)

    He, Ye; Yang, Tao; Hu, Li; Li, Leimin

    2005-12-01

    Microchannel coolers offer a new approach to cooling high-power diode lasers, with the advantages of small volume, highly efficient thermal dissipation, and low cost when mass-produced. In order to reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design technique, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal mathematical model of the varying-section microchannel was calculated using the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The optimal design presented compromises between cooling performance and robustness. This design method proves to be effective.

  17. Design of static synchronous series compensator based damping controller employing invasive weed optimization algorithm.

    PubMed

    Ahmed, Ashik; Al-Amin, Rasheduzzaman; Amin, Ruhul

    2014-01-01

    This paper proposes designing of Static Synchronous Series Compensator (SSSC) based damping controller to enhance the stability of a Single Machine Infinite Bus (SMIB) system by means of Invasive Weed Optimization (IWO) technique. Conventional PI controller is used as the SSSC damping controller which takes rotor speed deviation as the input. The damping controller parameters are tuned based on time integral of absolute error based cost function using IWO. Performance of IWO based controller is compared to that of Particle Swarm Optimization (PSO) based controller. Time domain based simulation results are presented and performance of the controllers under different loading conditions and fault scenarios is studied in order to illustrate the effectiveness of the IWO based design approach.
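
    The core of invasive weed optimization is compact enough to sketch: each candidate solution produces a number of seeds proportional to its relative fitness, the seeds are dispersed with a standard deviation that shrinks over the iterations, and the colony is truncated back to a maximum size (competitive exclusion). The sphere objective and all tuning constants below are placeholders, not the SSSC damping-controller tuning problem of the paper.

        import numpy as np

        def iwo(objective, dim, bounds, iters=100, init_pop=5, max_pop=20,
                smin=0, smax=5, sigma_init=1.0, sigma_final=0.01, seed=0):
            """Minimal invasive weed optimization (minimization)."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            pop = rng.uniform(lo, hi, size=(init_pop, dim))
            for it in range(iters):
                fit = np.array([objective(p) for p in pop])
                # Nonlinearly decreasing dispersal standard deviation.
                sigma = sigma_final + (sigma_init - sigma_final) * ((iters - it) / iters) ** 2
                fmin, fmax = fit.min(), fit.max()
                seeds = []
                for p, f in zip(pop, fit):
                    ratio = (fmax - f) / (fmax - fmin + 1e-12)   # better fitness -> more seeds
                    n_seeds = int(round(smin + ratio * (smax - smin)))
                    for _ in range(n_seeds):
                        seeds.append(np.clip(p + rng.normal(0.0, sigma, dim), lo, hi))
                if seeds:
                    pop = np.vstack([pop, np.array(seeds)])
                    fit = np.array([objective(p) for p in pop])
                # Competitive exclusion: keep only the best max_pop weeds.
                pop = pop[np.argsort(fit)[:max_pop]]
            best = min(pop, key=objective)
            return best, objective(best)

        best, val = iwo(lambda v: float(np.sum(v ** 2)), dim=3, bounds=(-5.0, 5.0))
        print(best, val)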

  18. Application of Modified Particle Swarm Optimization Method for Parameter Extraction of 2-D TEC Mapping

    NASA Astrophysics Data System (ADS)

    Toker, C.; Gokdag, Y. E.; Arikan, F.; Arikan, O.

    2012-04-01

    The ionosphere is a very important part of Space Weather. Modeling and monitoring of ionospheric variability is a major part of satellite communication, navigation and positioning systems. Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path, is one of the parameters to investigate the ionospheric variability. Dual-frequency GPS receivers, with their worldwide availability and efficiency in TEC estimation, have become a major source of global and regional TEC modeling. When Global Ionospheric Maps (GIM) of International GPS Service (IGS) centers (http://iono.jpl.nasa.gov/gim.html) are investigated, it can be observed that the regional ionosphere at midlatitudes can be modeled as a constant, linear or quadratic surface. Globally, especially around the magnetic equator, the TEC surfaces resemble twisted and dispersed single-centered or double-centered Gaussian functions. Particle Swarm Optimization (PSO) has proved itself a fast-converging and effective optimization tool in diverse fields. Yet, in order to apply this optimization technique to TEC modeling, the method has to be modified for higher efficiency and accuracy in the extraction of geophysical parameters such as model parameters of TEC surfaces. In this study, a modified PSO (mPSO) method is applied to regional and global synthetic TEC surfaces. The synthetic surfaces that represent the trend and small-scale variability of various ionospheric states are necessary to compare the performance of mPSO over the number of iterations, accuracy in parameter estimation and overall surface reconstruction. The Cramer-Rao bounds for each surface type and model are also investigated and the performance of mPSO is tested with respect to these bounds. For global models, the sample points that are used in optimization are obtained using the IGS receiver network. For regional TEC models, regional networks such as Turkish National Permanent GPS Network (TNPGN-Active) receiver sites are used. The regional TEC models are grouped into constant (one parameter), linear (two parameters), and quadratic (six parameters) surfaces which are functions of latitude and longitude. Global models require seven parameters for a single-centered Gaussian and 13 parameters for a double-centered Gaussian function. The error criterion is the normalized percentage error for both the surface and the parameters. It is observed that mPSO is very successful in parameter extraction of various regional and global models. The normalized reconstruction error varies from 10^-4 for constant surfaces to 10^-3 for quadratic surfaces in regional models, sampled with regional networks. Even for the cases of a severe geomagnetic storm that affects measurements globally, with the IGS network, the reconstruction error is on the order of 10^-1 even though individual parameters have higher normalized errors. The modified PSO technique proved itself to be a useful tool for parameter extraction of more complicated TEC models. This study is supported by TUBITAK EEEAG under Grant No: 109E055.

  19. Research into Kinect/Inertial Measurement Units Based on Indoor Robots.

    PubMed

    Li, Huixia; Wen, Xi; Guo, Hang; Yu, Min

    2018-03-12

    As indoor mobile navigation suffers from low positioning accuracy and accumulation error, we carried out research into an integrated location system for a robot based on Kinect and an Inertial Measurement Unit (IMU). In this paper, the close-range stereo images are used to calculate the attitude information and the translation amount of the adjacent positions of the robot by means of the absolute orientation algorithm, for improving the calculation accuracy of the robot's movement. Relying on the Kinect visual measurement and the strap-down IMU devices, we also use Kalman filtering to obtain the errors of the position and attitude outputs, in order to seek the optimal estimation and correct the errors. Experimental results show that the proposed method is able to improve the positioning accuracy and stability of the indoor mobile robot.

  20. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of the tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for this method is established theoretically. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under given polishing conditions. A comparative analysis shows that the optimized method is simpler in form and achieves the same accuracy with less computation time than the previous method.

  1. On the Error State Selection for Stationary SINS Alignment and Calibration Kalman Filters—Part II: Observability/Estimability Analysis

    PubMed Central

    Silva, Felipe O.; Hemerly, Elder M.; Leite Filho, Waldemar C.

    2017-01-01

    This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions. PMID:28241494

  2. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boehnke, E McKenzie; DeMarco, J; Steers, J

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.

  3. Compensated gain control circuit for buck regulator command charge circuit

    DOEpatents

    Barrett, David M.

    1996-01-01

    A buck regulator command charge circuit includes a compensated-gain control signal for compensating for changes in the component values in order to achieve optimal voltage regulation. The compensated-gain control circuit includes an automatic-gain control circuit for generating a variable-gain control signal. The automatic-gain control circuit is formed of a precision rectifier circuit, a filter network, an error amplifier, and an integrator circuit.

  4. Compensated gain control circuit for buck regulator command charge circuit

    DOEpatents

    Barrett, D.M.

    1996-11-05

    A buck regulator command charge circuit includes a compensated-gain control signal for compensating for changes in the component values in order to achieve optimal voltage regulation. The compensated-gain control circuit includes an automatic-gain control circuit for generating a variable-gain control signal. The automatic-gain control circuit is formed of a precision rectifier circuit, a filter network, an error amplifier, and an integrator circuit. 5 figs.

  5. Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions

    NASA Astrophysics Data System (ADS)

    Güler, Marifi

    2017-10-01

    Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of ln N/N for ensemble size N. The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.
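
    A toy version of the setting can be written down for a two-state chain: simulate N independent chains directly, and integrate the corresponding diffusion (Langevin) equation for the state density with an Euler-Maruyama step. The rates, ensemble size, and step size below are arbitrary; the sketch illustrates only the density-plus-noise structure, not the paper's strong or reduced formulations themselves.

        import numpy as np

        # Two-state chain: rate a for 1 -> 2, rate b for 2 -> 1. The occupancy p of
        # state 1 approximately follows
        #   dp = (b*(1 - p) - a*p) dt + sqrt((a*p + b*(1 - p)) / N) dW.
        a, b, N, dt, T = 1.0, 0.5, 1000, 0.01, 10.0
        steps = int(T / dt)
        rng = np.random.default_rng(0)

        # Direct simulation of N independent chains (state 1 encoded as True).
        state = np.ones(N, dtype=bool)
        p_mc = np.empty(steps)
        for k in range(steps):
            u = rng.random(N)
            flip_out = state & (u < a * dt)       # 1 -> 2 transitions
            flip_in = ~state & (u < b * dt)       # 2 -> 1 transitions
            state = (state & ~flip_out) | flip_in
            p_mc[k] = state.mean()

        # Euler-Maruyama integration of the diffusion approximation of the density.
        p_sde = np.empty(steps)
        p = 1.0
        for k in range(steps):
            drift = b * (1 - p) - a * p
            noise = np.sqrt(max(a * p + b * (1 - p), 0.0) / N)
            p = float(np.clip(p + drift * dt + noise * np.sqrt(dt) * rng.normal(), 0.0, 1.0))
            p_sde[k] = p

        print("final occupancy: ensemble %.3f, diffusion %.3f, ODE limit %.3f"
              % (p_mc[-1], p_sde[-1], b / (a + b)))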

  6. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, Francis J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on the 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed in applying this technique to gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized. The method employs subset solutions of the data associated with the complete solution to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.

  7. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control capability. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the location map bit length, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map bit length, and enhances capacity control.
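
    A toy illustration of the prediction-error expansion step alone (without the differential-evolution-optimized predictor, the histogram shifting, or the location-map handling described above) is given below; the previous-sample predictor and the sample values are assumptions for demonstration only.

        def pee_embed(samples, bits):
            """Embed bits into prediction errors: e' = 2*e + b (expansion).
            Predictor: the previous original sample; overflow handling is omitted."""
            samples = [int(s) for s in samples]
            stego = list(samples)
            for i, b in enumerate(bits, start=1):
                e = samples[i] - samples[i - 1]
                stego[i] = samples[i - 1] + 2 * e + b
            return stego

        def pee_extract(stego, n_bits):
            """Recover the bits and restore the original samples losslessly."""
            restored = [int(s) for s in stego]
            bits = []
            for i in range(1, n_bits + 1):
                e_marked = stego[i] - restored[i - 1]   # restored[i-1] is already original
                bits.append(e_marked & 1)               # LSB carries the payload bit
                restored[i] = restored[i - 1] + (e_marked >> 1)   # undo the expansion
            return bits, restored

        audio = [100, 102, 101, 105, 104, 103]
        marked = pee_embed(audio, [1, 0, 1, 1, 0])
        bits, recovered = pee_extract(marked, 5)
        print(bits, recovered == audio)   # [1, 0, 1, 1, 0] True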

  8. Efficient calculation of higher-order optical waveguide dispersion.

    PubMed

    Mores, J A; Malheiros-Silveira, G N; Fragnito, H L; Hernández-Figueroa, H E

    2010-09-13

    An efficient numerical strategy to compute the higher-order dispersion parameters of optical waveguides is presented. For the first time to our knowledge, a systematic study of the errors involved in the higher-order dispersions' numerical calculation process is made, showing that the present strategy can accurately model those parameters. Such strategy combines a full-vectorial finite element modal solver and a proper finite difference differentiation algorithm. Its performance has been carefully assessed through the analysis of several key geometries. In addition, the optimization of those higher-order dispersion parameters can also be carried out by coupling to the present scheme a genetic algorithm, as shown here through the design of a photonic crystal fiber suitable for parametric amplification applications.

  9. Code Optimization Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MAGEE,GLEN I.

    Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
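
    One classic Reed-Solomon encoder optimization, which may or may not be among the specific techniques applied in the AURA work, is to replace repeated Galois-field multiplications with log/antilog table lookups. A sketch for GF(2^8), assuming the common primitive polynomial 0x11d, is shown below.

        # Table-driven GF(2^8) multiplication; primitive polynomial 0x11d assumed.
        EXP = [0] * 512   # doubled length so exp lookups never need an explicit modulo
        LOG = [0] * 256

        x = 1
        for i in range(255):
            EXP[i] = x
            LOG[x] = i
            x <<= 1
            if x & 0x100:          # reduce modulo the primitive polynomial
                x ^= 0x11d
        for i in range(255, 512):
            EXP[i] = EXP[i - 255]

        def gf_mul_slow(a, b):
            """Bitwise shift-and-add multiplication in GF(2^8)."""
            result = 0
            while b:
                if b & 1:
                    result ^= a
                a <<= 1
                if a & 0x100:
                    a ^= 0x11d
                b >>= 1
            return result

        def gf_mul_fast(a, b):
            """Same product via one addition of logarithms and one table lookup."""
            if a == 0 or b == 0:
                return 0
            return EXP[LOG[a] + LOG[b]]

        assert all(gf_mul_slow(a, b) == gf_mul_fast(a, b)
                   for a in range(256) for b in range(256))
        print("log/antilog multiplication verified")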

  10. Continuous piecewise-linear, reduced-order electrochemical model for lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid

    2017-02-01

    Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
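
    A minimal analogue of the knot-placement idea, using a synthetic open-circuit-potential curve rather than the paper's parameterized cell data, is sketched below: the interior knot locations of a continuous piecewise-linear approximation are optimized to reduce the fitting error relative to uniformly spaced knots.

        import numpy as np
        from scipy.optimize import minimize

        # Synthetic stand-in for an OCV(SOC) curve (illustrative shape only).
        soc = np.linspace(0.0, 1.0, 500)
        ocv = 3.0 + 0.9 * soc + 0.15 * np.tanh(12 * (soc - 0.15)) - 0.1 * np.exp(-30 * soc)

        def pwl_error(interior_knots):
            # Continuous piecewise-linear interpolant through the knot points.
            knots = np.unique(np.concatenate(([0.0], np.clip(interior_knots, 0.01, 0.99), [1.0])))
            approx = np.interp(soc, knots, np.interp(knots, soc, ocv))
            return np.mean((approx - ocv) ** 2)

        uniform = np.linspace(0.1, 0.9, 6)                   # 6 interior knots
        result = minimize(pwl_error, uniform, method="Nelder-Mead")

        print("uniform-knot MSE  :", pwl_error(uniform))
        print("optimized-knot MSE:", pwl_error(result.x))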

  11. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity, defined as the number of Newton-like iterations performed over the course of the simulation, by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This enables ROMs to be rigorously incorporated in uncertainty-quantification settings, as the error model can be treated as a source of epistemic uncertainty. This work was completed as part of a Truman Fellowship appointment. We note that much additional work was performed as part of the Fellowship. One salient project is the development of the Trilinos-based model-reduction software module Razor, which is currently bundled with the Albany PDE code and currently allows nonlinear reduced-order models to be constructed for any application supported in Albany. Other important projects include the following: 1. ROMES-equipped ROMs for Bayesian inference: K. Carlberg, M. Drohmann, F. Lu (Lawrence Berkeley National Laboratory), M. Morzfeld (Lawrence Berkeley National Laboratory). 2. ROM-enabled Krylov-subspace recycling: K. Carlberg, V. Forstall (University of Maryland), P. Tsuji, R. Tuminaro. 3. A pseudo balanced POD method using only dual snapshots: K. Carlberg, M. Sarovar. 4. An analysis of discrete v. continuous optimality in nonlinear model reduction: K. Carlberg, M. Barone, H. Antil (George Mason University). Journal articles for these projects are in progress at the time of this writing.

  12. Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1994-01-01

    Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
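
    For a scalar toy problem the forward/backward idea can be sketched directly. The random-walk model and noise levels below are arbitrary, and the backward pass is a Rauch-Tung-Striebel fixed-interval smoother rather than the specific forward-backward filter averaging evaluated for EUVE; it nevertheless shows the roughly halved error relative to the forward filter alone.

        import numpy as np

        rng = np.random.default_rng(1)
        n, q, r = 200, 1e-4, 1e-2           # steps, process noise, measurement noise
        truth = np.cumsum(rng.normal(0, np.sqrt(q), n))
        z = truth + rng.normal(0, np.sqrt(r), n)

        # Forward Kalman filter for a scalar random-walk state.
        xf, Pf = np.zeros(n), np.zeros(n)
        x, P = 0.0, 1.0
        for k in range(n):
            P += q                          # predict
            K = P / (P + r)                 # gain
            x += K * (z[k] - x)             # update
            P *= (1 - K)
            xf[k], Pf[k] = x, P

        # Backward Rauch-Tung-Striebel smoothing pass.
        xs, Ps = xf.copy(), Pf.copy()
        for k in range(n - 2, -1, -1):
            Pp = Pf[k] + q                  # predicted covariance at k+1
            C = Pf[k] / Pp                  # smoother gain
            xs[k] = xf[k] + C * (xs[k + 1] - xf[k])
            Ps[k] = Pf[k] + C**2 * (Ps[k + 1] - Pp)

        print("filter   RMS error:", np.sqrt(np.mean((xf - truth) ** 2)))
        print("smoother RMS error:", np.sqrt(np.mean((xs - truth) ** 2)))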

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Arno, Michele; ICFO-Institut de Ciencies Fotoniques, E-08860 Castelldefels; Quit Group, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia

    We address the problem of quantum reading of optical memories, namely the retrieving of classical information stored in the optical properties of a medium with minimum energy. We present optimal strategies for ambiguous and unambiguous quantum reading of unitary optical memories, namely when one's task is to minimize the probability of errors in the retrieved information and when perfect retrieving of information is achieved probabilistically, respectively. A comparison of the optimal strategy with coherent probes and homodyne detection shows that the former saves orders of magnitude of energy when achieving the same performance. Experimental proposals for quantum reading which are feasible with present quantum optical technology are reported.

  14. Constructing analytic solutions on the Tricomi equation

    NASA Astrophysics Data System (ADS)

    Ghiasi, Emran Khoshrouye; Saleh, Reza

    2018-04-01

    In this paper, the homotopy analysis method (HAM) and the variational iteration method (VIM) are utilized to derive approximate solutions of the Tricomi equation. Afterwards, the HAM is optimized to accelerate the convergence of the series solution by minimizing its square residual error at any order of the approximation. It is found that the effect of the optimal value of the auxiliary parameter on the convergence of the series solution is not negligible. Furthermore, the present results are found to agree well with those obtained through a closed-form equation available in the literature. To conclude, both methods are effective for obtaining solutions of such partial differential equations.

  15. Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results

    NASA Astrophysics Data System (ADS)

    Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef

    2017-04-01

    The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and in particular GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions is also investigated, as leakage is not only a problem for the gravity field model itself, but is also mirrored in a derived MDT solution. Related to (ii) the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results, with a focus on the geodetic part.

  16. Sensitivity analysis and optimization method for the fabrication of one-dimensional beam-splitting phase gratings

    PubMed Central

    Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang

    2015-01-01

    A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviations of the nine outgoing beam energies in the two optimized gratings were 2.3 and 3.4 times lower than that of the optimal-efficiency grating. PMID:25969268
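
    The underlying efficiency calculation can be illustrated numerically: for a one-dimensional phase grating, the far-field order energies are the squared Fourier coefficients of exp(i*phi(x)) over one period, and sensitivity to a fabrication error can be probed by scaling the etch depth. The binary phase profile below is a placeholder, not one of the paper's optimized 1x9 designs.

        import numpy as np

        def order_energies(phase, n_orders=4):
            """Energies of diffraction orders -n..n for one grating period,
            taken as squared Fourier coefficients of the complex transmission."""
            field = np.exp(1j * phase)
            c = np.fft.fft(field) / len(field)
            return np.array([np.abs(c[m]) ** 2 for m in range(-n_orders, n_orders + 1)])

        x = np.linspace(0, 1, 1024, endpoint=False)
        phase = np.pi * (x < 0.5)                    # toy binary phase profile

        # Probe robustness: scale the phase depth over a +/-5% fabrication-error range.
        for scale in (0.95, 1.00, 1.05):
            e = order_energies(scale * phase)
            print("depth scale %.2f: order-energy std = %.4f" % (scale, e.std()))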

  17. Novel parametric reduced order model for aeroengine blade dynamics

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Allegri, Giuliano; Scarpa, Fabrizio; Rajasekaran, Ramesh; Patsias, Sophoclis

    2015-10-01

    The work introduces a novel reduced-order model (ROM) technique to describe the dynamic behavior of turbofan aeroengine blades. We introduce an equivalent 3D frame model to describe the coupled flexural/torsional mode shapes, with their relevant natural frequencies and associated modal masses. The frame configurations are identified through a structural identification approach based on a simulated annealing algorithm with stochastic tunneling. The cost functions are linear combinations of the relative errors associated with the resonance frequencies, the individual modal assurance criteria (MAC), and either the overall static or the modal masses. When static masses are considered, the optimized 3D frame can represent the blade dynamic behavior with an 8% error on the MAC, a 1% error on the associated modal frequencies and a 1% error on the overall static mass. When modal masses are used in the cost function, the performance of the ROM is similar, but the overall error increases to 7%. The approach proposed in this paper is considerably more accurate than state-of-the-art blade ROMs based on traditional Timoshenko beams, and provides excellent accuracy at reduced computational time when compared against high-fidelity FE models. A sensitivity analysis shows that the proposed model can adequately predict the global trends of the variations of the natural frequencies when lumped masses are used for mistuning analysis. The proposed ROM also follows extremely closely the sensitivity of the high-fidelity finite element models when the material parameters are used in the sensitivity analysis.

  18. Human-robot cooperative movement training: learning a novel sensory motor transformation during walking with robotic assistance-as-needed.

    PubMed

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-03-28

    A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury.
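
    A schematic version of an error-based assistance law with a forgetting factor is sketched below. The gains, the disturbance magnitude, and the simple human-adaptation model are assumptions for illustration, not the controller or learning model identified in the study; the point is only that assistance fades while error stays small and returns when error grows.

        # Illustrative "assist-as-needed" update:  u[i+1] = f * u[i] + g * e[i]
        # The forgetting factor f < 1 decays assistance when errors are small,
        # while the error term restores it when errors grow.
        f, g = 0.90, 0.5          # forgetting factor and error gain (assumed)
        disturbance = 10.0        # "virtual impairment" force (assumed)
        u = 0.0                   # robot assistance
        h = 0.0                   # human compensation, adapting toward the disturbance

        for i in range(60):
            e = disturbance - h - u            # residual error this stride
            h += 0.15 * e                      # human learns a share of the remaining error
            u = f * u + g * e                  # robot assists as needed
            if i % 10 == 0:
                print("stride %2d: error %6.2f, assistance %6.2f, human %6.2f" % (i, e, u, h))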

  19. Error analysis and system optimization of non-null aspheric testing system

    NASA Astrophysics Data System (ADS)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL) and a reverse iterative optimization reconstruction (ROR) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remaining error from the non-null interferometer, which is handled by an error-storage subtraction approach. Experimental results show that, after the systematic error is removed from the test result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the system.

  20. Optimization of multimagnetometer systems on a spacecraft

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.

    1975-01-01

    The problem of optimizing the positions of magnetometers along a boom of given length to yield a minimized total error is investigated. The discussion is limited to at most four magnetometers, which seems to be a practical limit due to weight, power, and financial considerations. The outlined error analysis is applied to some illustrative cases. The optimal magnetometer locations, for which the total error is minimum, are computed for given boom length, instrument errors, and very conservative magnetic field models characteristic of spacecraft with only a restricted or ineffective magnetic cleanliness program. It is shown that the error contribution from magnetometer inaccuracy increases as the number of magnetometers is increased, whereas the spacecraft field uncertainty is reduced by an appreciably larger amount.

  1. Beauty from the beast: Avoiding errors in responding to client questions.

    PubMed

    Waehler, Charles A; Grandy, Natalie M

    2016-09-01

    Those rare moments when clients ask direct questions of their therapists likely represent a point when they are particularly open to new considerations, thereby representing an opportunity for substantial therapeutic gains. However, clinical errors abound in this area because clients' questions often engender apprehension in therapists, causing therapists to respond with too little or too much information or to shut down the discussion prematurely. These response types can damage the therapeutic relationship, the psychotherapy process, or both. We explore the nature of these clinical errors in response to client questions by providing examples from our own clinical work, suggesting potential reasons why clinicians may not make optimal use of client questions, and discussing how the mixed psychological literature further complicates the issue. We also present four guidelines designed to help therapists, trainers, and supervisors respond to client questions in ways that create constructive interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Density-matrix simulation of small surface codes under current and projected experimental noise

    NASA Astrophysics Data System (ADS)

    O'Brien, T. E.; Tarasinski, B.; DiCarlo, L.

    2017-09-01

    We present a density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. However, Surface-49 is required to surpass the break-even point of computation at state-of-the-art qubit relaxation times and readout speeds.

  3. Estimation of pulse rate from ambulatory PPG using ensemble empirical mode decomposition and adaptive thresholding.

    PubMed

    Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina

    2017-07-01

    A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
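    As a rough illustration of the rule-based stage only, the sketch below estimates pulse rate from an already-denoised pulsatile component using an adaptive amplitude threshold and a physiological refractory period; the EEMD decomposition itself is not reproduced, and the threshold choice and rate limits are assumptions rather than the paper's exact rules.

    import numpy as np
    from scipy.signal import find_peaks

    def pulse_rate_bpm(pulsatile, fs, min_bpm=40, max_bpm=180):
        """Estimate pulse rate from a denoised pulsatile component.
        A simple adaptive amplitude threshold and a refractory period
        stand in for the paper's physiologically-relevant rule set."""
        height = 0.5 * np.std(pulsatile)             # adaptive threshold (assumption)
        distance = int(fs * 60.0 / max_bpm)          # minimum spacing between beats
        peaks, _ = find_peaks(pulsatile, height=height, distance=distance)
        if len(peaks) < 2:
            return None
        ibi = np.diff(peaks) / fs                    # inter-beat intervals (s)
        ibi = ibi[(60.0 / ibi >= min_bpm) & (60.0 / ibi <= max_bpm)]
        return 60.0 / np.median(ibi) if len(ibi) else None

    # Toy usage with a synthetic 1.2 Hz (72 bpm) pulse wave.
    fs = 100.0
    t = np.arange(0, 30, 1 / fs)
    print(pulse_rate_bpm(np.sin(2 * np.pi * 1.2 * t), fs))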

  4. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or in a higher-order space), for which many fast Poisson solvers can be applied. The nonlinear iteration is applied only to a much smaller problem, and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063

  5. Lithographic performance comparison with various RET for 45-nm node with hyper NA

    NASA Astrophysics Data System (ADS)

    Adachi, Takashi; Inazuki, Yuichi; Sutou, Takanori; Kitahata, Yasuhisa; Morikawa, Yasutaka; Toyama, Nobuhito; Mohri, Hiroshi; Hayashi, Naoya

    2006-05-01

    In order to realize 45 nm node lithography, strong resolution enhancement technology (RET) and water immersion will be needed. In this research, we compared the performance of various RET options for the 45 nm node using rigorous 3D simulation. As candidates, we chose a binary mask (BIN), several kinds of attenuated phase-shifting mask (att-PSM), and a chrome-less phase-shifting lithography mask (CPL). The printing performance was evaluated and compared for each RET option after optimizing the illumination conditions, mask structure, and optical proximity correction (OPC). The evaluated printing-performance metrics were CD-DOF, contrast-DOF, conventional ED-window, MEEF, etc. The effect of mask 3D topography is expected to become important at the 45 nm node, so we considered not only ideal structures but also the effects of mask topography errors. Several kinds of mask topography error were evaluated, and we confirmed how these errors affect printing performance.

  6. Optimal design of a microgripper-type actuator based on AlN/Si heterogeneous bimorph

    NASA Astrophysics Data System (ADS)

    Ruiz, D.; Díaz-Molina, A.; Sigmund, O.; Donoso, A.; Bellido, J. C.; Sánchez-Rojas, J. L.

    2017-05-01

    This work presents a systematic procedure to design piezoelectrically actuated microgrippers. Topology optimization combined with optimal design of electrodes is used to maximize the displacement at the output port of the gripper. Fabrication at the microscale poses an important issue: the difficulty of placing a piezoelectric film on both the top and the bottom of the host layer. Due to the resulting non-symmetric lamination of the structure, out-of-plane bending spoils the behaviour of the gripper. Suppression of this out-of-plane deformation is the main novelty introduced. In addition, a robust formulation is used in order to control the length scale in the whole domain and to reduce the sensitivity of the designs to small manufacturing errors.

  7. Bidirectional optimization of the melting spinning process.

    PubMed

    Liang, Xiao; Ding, Yongsheng; Wang, Zidong; Hao, Kuangrong; Hone, Kate; Wang, Huaping

    2014-02-01

    A bidirectional optimizing approach for the melting spinning process based on an immune-enhanced neural network is proposed. The proposed bidirectional model can not only reveal the internal nonlinear relationship between the process configuration and the quality indices of the fibers as the final product, but also provide a tool for engineers to develop new fiber products with expected quality specifications. A neural network is taken as the basis for the bidirectional model, and an immune component is introduced to enlarge the search scope of the solution field so that the neural network has a greater chance of finding an appropriate and reasonable solution, and the prediction error can therefore be eliminated. The proposed intelligent model can also help determine what kind of process configuration should be made in order to produce satisfactory fiber products. To make the proposed model practical for manufacturing, a software platform is developed. Simulation results show that the proposed model can eliminate the approximation error introduced by the neural network-based optimizing model, owing to the extension of the search scope by the artificial immune mechanism. Meanwhile, the proposed model with the corresponding software can conduct optimization in two directions, namely process optimization and category development, and the corresponding results outperform those of an ordinary neural network-based intelligent model. It is also shown that the proposed model has the potential to act as a valuable tool from which the engineers and decision makers of the spinning process could benefit.

  8. Modelling space-based integral-field spectrographs and their application to Type Ia supernova cosmology

    NASA Astrophysics Data System (ADS)

    Shukla, Hemant; Bonissent, Alain

    2017-04-01

    We present the parameterized simulation of an integral-field unit (IFU) slicer spectrograph and its applications in spectroscopic studies, namely, probing dark energy with Type Ia supernovae. The simulation suite is called the fast-slicer IFU simulator (FISim). The data flow of FISim realistically models the optics of the IFU along with the propagation effects, including cosmological, zodiacal, instrumentation, and detector effects. FISim simulates the spectrum extraction by computing the error matrix on the extracted spectrum. The applications for Type Ia supernova spectroscopy are used to establish the efficacy of the simulator in exploring the wider parametric space, in order to optimize the science and mission requirements. The input spectral models use observables such as the optical depth and velocity of the Si II absorption feature in the supernova spectrum as the measured parameters for various studies. Using FISim, we introduce a mechanism for preserving the complete state of a system, called the ∂p/∂f matrix, which allows for compression, reconstruction, and spectrum extraction; we introduce a novel and efficient method for spectrum extraction, called super-optimal spectrum extraction; and we conduct various studies such as the optimal point spread function, optimal resolution, and parameter estimation. We demonstrate that for space-based telescopes the optimal resolution lies in the region near R ≈ 117 for read noise of 1 e- and 7 e-, using a 400 km s^-1 error threshold on the Si II velocity.

  9. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for intersubject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N(C) from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N(C) curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates the registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N(C) sulci that jointly minimize the estimated error variance of the unconstrained curves conditioned on the N(C) constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
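    A minimal sketch of the underlying idea follows: under a joint Gaussian model of sulcal errors with covariance Sigma, the residual uncertainty of the unconstrained curves given a constraint subset is the trace of the conditional covariance, and the subset is chosen to minimize it. A greedy search is used here for brevity instead of the paper's evaluation over all combinations, and the covariance in the toy usage is synthetic.

    import numpy as np

    def conditional_error(sigma, constrained):
        """Total residual variance of the unconstrained curves given the
        constrained subset, under a joint Gaussian model of sulcal errors."""
        u = [i for i in range(sigma.shape[0]) if i not in constrained]
        c = list(constrained)
        if not c:
            return np.trace(sigma[np.ix_(u, u)])
        s_uu = sigma[np.ix_(u, u)]
        s_uc = sigma[np.ix_(u, c)]
        s_cc = sigma[np.ix_(c, c)]
        cond = s_uu - s_uc @ np.linalg.solve(s_cc, s_uc.T)
        return np.trace(cond)

    def greedy_subset(sigma, n_c):
        """Greedy stand-in for the paper's search over all N-choose-N_C subsets."""
        chosen = []
        for _ in range(n_c):
            best = min((i for i in range(sigma.shape[0]) if i not in chosen),
                       key=lambda i: conditional_error(sigma, chosen + [i]))
            chosen.append(best)
        return chosen

    # Toy usage on a random positive-definite "sulcal error" covariance.
    rng = np.random.default_rng(0)
    a = rng.normal(size=(8, 8))
    print(greedy_subset(a @ a.T + 8 * np.eye(8), n_c=3))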

  10. Toward an optimal online checkpoint solution under a two-level HPC checkpoint model

    DOE PAGES

    Di, Sheng; Robert, Yves; Vivien, Frederic; ...

    2016-03-29

    The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution without requiring knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed based on a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve the performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
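    The paper's two-level online pattern is not reproduced here; as context, the sketch below shows the classical single-level first-order (Young/Daly-type) checkpoint period and the corresponding waste estimate, which is the kind of baseline the multilevel solution improves upon. The checkpoint cost and failure-rate numbers are illustrative.

    import math

    def young_daly_interval(checkpoint_cost, mtbf):
        """First-order optimal checkpoint period for a single level
        (classic Young/Daly approximation), shown only as a baseline;
        the paper derives an optimal two-level periodic pattern instead."""
        return math.sqrt(2.0 * checkpoint_cost * mtbf)

    def expected_waste(period, checkpoint_cost, mtbf):
        """Approximate fraction of time lost to checkpoints and re-execution."""
        return checkpoint_cost / period + period / (2.0 * mtbf)

    # Toy usage: 60 s checkpoints, one failure every 24 h on average.
    t_opt = young_daly_interval(60.0, 24 * 3600)
    print(round(t_opt), round(expected_waste(t_opt, 60.0, 24 * 3600), 4))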

  11. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    NASA Technical Reports Server (NTRS)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H(exp 1) product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.

  12. The role of photographic parameters in laser speckle or particle image displacement velocimetry

    NASA Technical Reports Server (NTRS)

    Lourenco, L.; Krothapalli, A.

    1987-01-01

    The parameters involved in obtaining the multiple exposure photographs in the laser speckle velocimetry method (to record the light scattering by the seeding particles) were optimized. The effects of the type, concentration, and dimensions of the tracer, the exposure conditions (time between exposures, exposure time, and number of exposures), and the sensitivity and resolution of the film on the quality of the final results were investigated, photographing an experimental flow behind an impulsively started circular cylinder. The velocity data were acquired by digital processing of Young's fringes, produced by point-by-point scanning of a photographic negative. Using the optimal photographing conditions, the errors involved in the estimation of the fringe angle and spacing were of the order of 1 percent for the spacing and +/- 1 deg for the fringe orientation. The resulting accuracy in the velocity was of the order of 2-3 percent of the maximum velocity in the field.

  13. H2-norm for mesh optimization with application to electro-thermal modeling of an electric wire in automotive context

    NASA Astrophysics Data System (ADS)

    Chevrié, Mathieu; Farges, Christophe; Sabatier, Jocelyn; Guillemard, Franck; Pradere, Laetitia

    2017-04-01

    In the automotive field, reducing electric conductor dimensions is important for decreasing the embedded mass and the manufacturing costs. It is thus essential to develop tools to optimize the wire diameter according to thermal constraints, and protection algorithms to maintain a high level of safety. In order to develop such tools and algorithms, accurate electro-thermal models of electric wires are required. However, the solutions of the thermal equation lead to implicit fractional transfer functions involving an exponential, which cannot be embedded in an automotive on-board computer. This paper thus proposes an integer-order transfer function approximation methodology based on a spatial discretization for this class of fractional transfer functions. Moreover, the H2-norm is used to minimize the approximation error. The accuracy of the proposed approach is confirmed with measured data on a 1.5 mm2 wire implemented in a dedicated test bench.

  14. Development of a software for quantitative evaluation radiotherapy target and organ-at-risk segmentation comparison.

    PubMed

    Kalpathy-Cramer, Jayashree; Awan, Musaddiq; Bedrick, Steven; Rasch, Coen R N; Rosenthal, David I; Fuller, Clifton D

    2014-02-01

    Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and potentially serves as a major source of treatment delivery error. To optimize this critical, yet observer-driven, process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source of pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications.

  15. Blind third-order dispersion estimation based on fractional Fourier transformation for coherent optical communication

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Guo, Peng; Yang, Aiying; Qiao, Yaojun

    2018-02-01

    In this paper, we propose a blind third-order dispersion estimation method based on the fractional Fourier transformation (FrFT) for optical fiber communication systems. By measuring the chromatic dispersion (CD) at different wavelengths, the method can estimate the dispersion slope and further calculate the third-order dispersion. Simulation results demonstrate that the estimation error is less than 2% in 28 GBaud dual-polarization quadrature phase-shift keying (DP-QPSK) and 28 GBaud dual-polarization 16 quadrature amplitude modulation (DP-16QAM) systems. Through simulations, the proposed third-order dispersion estimation method is shown to be robust against nonlinearity and amplified spontaneous emission (ASE) noise. In addition, to reduce the computational complexity, a search with coarse and fine granularity is used to find the optimal FrFT order. The third-order dispersion estimation method based on FrFT can be used to monitor the third-order dispersion in optical fiber systems.

  16. In-Flight Pitot-Static Calibration

    NASA Technical Reports Server (NTRS)

    Foster, John V. (Inventor); Cunningham, Kevin (Inventor)

    2016-01-01

    A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.

  17. Robust optimization of a tandem grating solar thermal absorber

    NASA Astrophysics Data System (ADS)

    Choi, Jongin; Kim, Mingeon; Kang, Kyeonghwan; Lee, Ikjin; Lee, Bong Jae

    2018-04-01

    Ideal solar thermal absorbers need a high spectral absorptance over the broad solar spectrum to utilize solar radiation effectively. The majority of recent studies on solar thermal absorbers focus on achieving nearly perfect absorption using nanostructures whose characteristic dimension is smaller than the wavelength of sunlight. However, precise fabrication of such nanostructures is not easy in practice; unavoidable errors always occur to some extent in the dimensions of fabricated nanostructures, causing an undesirable deviation in absorption performance between the designed structure and the actually fabricated one. In order to minimize the variation in solar absorptance due to fabrication error, robust optimization can be performed during the design process. However, optimizing the solar thermal absorber over all design variables often requires a tremendous computational cost to find an optimum combination of design variables with robustness as well as high performance. To achieve this goal, we apply robust optimization using the Kriging method and a genetic algorithm to design a tandem grating solar absorber. By constructing a surrogate model through the Kriging method, the computational cost can be substantially reduced because exact calculation of the performance for every combination of variables is not necessary. Using the surrogate model and the genetic algorithm, we successfully design an effective solar thermal absorber exhibiting a low level of performance degradation due to fabrication uncertainty in the design variables.
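    The sketch below illustrates the overall workflow under stated assumptions: a Kriging (Gaussian-process) surrogate is fit to a cheap placeholder for the electromagnetic absorptance simulation, a robustness measure averages the surrogate prediction over assumed fabrication perturbations, and a simple evolutionary loop stands in for the genetic algorithm. None of the functions, tolerances, or dimensions correspond to the paper's actual absorber model.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)

    def absorber_performance(x):
        """Placeholder for the expensive EM simulation of solar absorptance
        as a function of two normalized grating dimensions (illustrative only)."""
        return np.exp(-np.sum((x - 0.6) ** 2, axis=-1) / 0.05)

    # 1) Fit a Kriging (Gaussian-process) surrogate from a small design of experiments.
    x_train = rng.uniform(0, 1, size=(40, 2))
    y_train = absorber_performance(x_train)
    surrogate = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(x_train, y_train)

    def robust_objective(x, tol=0.05, n_mc=64):
        """Mean surrogate performance under assumed fabrication errors of +/- tol,
        so designs sitting on sharp performance peaks are penalized."""
        perturbed = x + rng.uniform(-tol, tol, size=(n_mc, x.size))
        return surrogate.predict(np.clip(perturbed, 0, 1)).mean()

    # 2) Simple evolutionary search on the cheap surrogate (stand-in for the paper's GA).
    pop = rng.uniform(0, 1, size=(30, 2))
    for _ in range(40):
        scores = np.array([robust_objective(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-10:]]                      # keep the best designs
        children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.05, (20, 2))
        pop = np.vstack([parents, np.clip(children, 0, 1)])
    print(pop[np.argmax([robust_objective(ind) for ind in pop])])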

  18. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.

  19. NEW DEVELOPMENTS ON INVERSE POLYGON MAPPING TO CALCULATE GRAVITATIONAL LENSING MAGNIFICATION MAPS: OPTIMIZED COMPUTATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mediavilla, E.; Lopez, P.; Mediavilla, T.

    2011-11-01

    We derive an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps. It is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells (inverse polygon mapping, IPM), not including critical points (except perhaps at the cell boundaries). The zeroth-order term of the series expansion leads to the method described by Mediavilla et al. The first-order term is used to study the error induced by the truncation of the series at zeroth order, explaining the high accuracy of the IPM even at this low order of approximation. Interpreting the Inverse Ray Shooting (IRS) method in terms of IPM, we explain the previously reported N^{-3/4} dependence of the IRS error with the number of collected rays per pixel. Cells intersected by critical curves (critical cells) transform to non-simply connected regions with topological pathologies like auto-overlapping or non-preservation of the boundary under the transformation. To define a non-critical partition, we use a linear approximation of the critical curve to divide each critical cell into two non-critical subcells. The optimal choice of the cell size depends basically on the curvature of the critical curves. For typical applications in which the pixel of the magnification map is a small fraction of the Einstein radius, a one-to-one relationship between the cell and pixel sizes in the absence of lensing guarantees both the consistence of the method and a very high accuracy. This prescription is simple but very conservative. We show that substantially larger cells can be used to obtain magnification maps with huge savings in computation time.

  20. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214

  1. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimation error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, a low-uncertainty estimate of these parameters was sought for the entire aquifer, with higher certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.

  2. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
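    As a small illustration of the MEE criterion itself (not the paper's FPGA pipeline), the sketch below fits a linear decoder by maximizing a Parzen-window estimate of the information potential of the errors, using a general-purpose optimizer in place of the iterative adaptive filter; the kernel width, toy data, and decoder size are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def information_potential(errors, sigma=1.0):
        """Parzen (Gaussian-kernel) estimate of the quadratic information potential;
        maximizing it minimizes Renyi's quadratic entropy of the errors (the MEE criterion)."""
        d = errors[:, None] - errors[None, :]
        return np.exp(-d ** 2 / (2 * sigma ** 2)).mean()

    def mee_decoder(x, d, sigma=1.0):
        """Fit a linear decoder under the MEE criterion. A general-purpose optimizer
        stands in for the paper's iterative adaptive filter and its hardware mapping."""
        cost = lambda w: -information_potential(d - x @ w, sigma)
        return minimize(cost, x0=np.zeros(x.shape[1]), method="Nelder-Mead").x

    # Toy usage: decode a 1-D "hand velocity" from three simulated spike-count channels.
    rng = np.random.default_rng(2)
    spikes = rng.poisson(3.0, size=(200, 3)).astype(float)
    velocity = spikes @ np.array([0.5, -0.2, 0.8]) + rng.normal(0, 0.1, 200)
    print(np.round(mee_decoder(spikes, velocity), 2))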

  3. Efficiently characterizing the total error in quantum circuits

    NASA Astrophysics Data System (ADS)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    A promising technological advancement meant to enlarge our computational means is the quantum computer. Such a device would harvest the quantum complexity of the physical world in order to unfold concrete mathematical problems more efficiently. However, the errors emerging from the implementation of quantum operations are likewise quantum, and hence share a similar level of intricacy. Fortunately, randomized benchmarking protocols provide an efficient way to characterize the operational noise within quantum devices. The resulting figures of merit, like the fidelity and the unitarity, are typically attached to a set of circuit components. While important, this doesn't fulfill the main goal: determining if the error rate of the total circuit is small enough in order to trust its outcome. In this work, we fill the gap by providing an optimal bound on the total fidelity of a circuit in terms of component-wise figures of merit. Our bound smoothly interpolates between the classical regime, in which the error rate grows linearly in the circuit's length, and the quantum regime, which can naturally allow quadratic growth. Conversely, our analysis substantially improves the bounds on single circuit element fidelities obtained through techniques such as interleaved randomized benchmarking. This research was supported by the U.S. Army Research Office through Grant W911NF- 14-1-0103, CIFAR, the Government of Ontario, and the Government of Canada through NSERC and Industry Canada.

  4. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, showing that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  5. Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Hug, Gabriela; Li, Xin

    Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumption that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is determined simultaneously with the optimal generation outputs and the size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows the stochastic optimization problem to be solved directly, without using sampling-based approaches, and the storage to be sized to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.

  6. Non-linear dynamic compensation system

    NASA Technical Reports Server (NTRS)

    Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)

    1992-01-01

    A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth optimized for speed of control system response to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits and smoothly varied therebetween as the error signal approaches the preselected limits.
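    A minimal discrete-time sketch of the limiter/compensator/adder arrangement described above follows; the first-order low-pass compensator, the limit value, and the gains are illustrative assumptions, not the patented design.

    import numpy as np

    def limiter(e, bound):
        """Clamp the error signal to the preselected limits."""
        return np.clip(e, -bound, bound)

    class LowBandwidthCompensator:
        """First-order low-pass stand-in for the narrow-bandwidth compensator
        that attenuates inductosyn quantization noise (illustrative dynamics)."""
        def __init__(self, alpha=0.05):
            self.alpha, self.state = alpha, 0.0
        def step(self, u):
            self.state += self.alpha * (u - self.state)
            return self.state

    def compensated_error(e, comp, bound=0.01):
        """Limiter + compensator + adder: small errors see the slow,
        noise-rejecting path; large errors pass through almost unchanged
        so the response stays fast."""
        e_lim = limiter(e, bound)
        return comp.step(e_lim) + (e - e_lim)

    # Toy usage: a large step error passes through, a tiny noisy error is smoothed.
    comp = LowBandwidthCompensator()
    print(round(compensated_error(0.5, comp), 3))    # far outside the limits -> nearly unchanged
    print(round(compensated_error(0.002, comp), 5))  # within the limits -> filtered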

  7. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other phenomena in the atmosphere. Extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather, in particular a GIS-based mapping process that provides information about the current weather status at given coordinates in each region, with the capability to forecast seven days ahead. The data used in this research are retrieved in real time from the OpenWeatherMap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is computed as the mean square error (MSE). The MSE is 0.28 for minimum temperature and 0.15 for maximum temperature, 0.38 for minimum humidity and 0.04 for maximum humidity, and 0.076 for wind speed. The lower the forecasting error, the higher the accuracy.
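    The sketch below illustrates the general idea of combining ensemble members and scoring the result with the MSE; the weight estimate (inverse historical MSE) is a crude stand-in for the full BMA likelihood fit used in the paper, and the member data are synthetic.

    import numpy as np

    def bma_weights(member_forecasts, observations):
        """Crude stand-in for BMA weight training: weight each ensemble member
        by the inverse of its historical mean squared error (the paper fits the
        weights with a full BMA likelihood, which is not reproduced here)."""
        mse = np.mean((member_forecasts - observations) ** 2, axis=1)
        w = 1.0 / mse
        return w / w.sum()

    def bma_forecast(weights, new_member_forecasts):
        """Weighted average of the individual member forecasts."""
        return weights @ new_member_forecasts

    # Toy usage: three members forecasting daily minimum temperature.
    rng = np.random.default_rng(3)
    truth = 24 + rng.normal(0, 1, 30)
    members = truth + rng.normal(0, [[0.3], [0.8], [1.5]], (3, 30))   # varied skill
    w = bma_weights(members[:, :20], truth[:20])                      # train on first 20 days
    pred = bma_forecast(w, members[:, 20:])
    print(np.round(w, 2), round(np.mean((pred - truth[20:]) ** 2), 3))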

  8. Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0

    NASA Astrophysics Data System (ADS)

    Millán-Otoya, Jorge E.; Boettcher, Stefan

    2014-03-01

    Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin glass with constrained bimodal three-spin couplings. This spin glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible [1]. We approach the study of the error threshold problem by exploring ground states of this spin glass with the Extremal Optimization (EO) algorithm [2]. EO has proven to be an effective heuristic for exploring ground-state configurations of glassy spin systems [3].

  9. Reduction in chemotherapy order errors with computerized physician order entry.

    PubMed

    Meisenberg, Barry R; Wright, Robert R; Brady-Copertino, Catherine J

    2014-01-01

    To measure the number and type of errors associated with chemotherapy order composition under three sequential methods of ordering: handwritten orders, preprinted orders, and computerized physician order entry (CPOE) embedded in the electronic health record. From 2008 to 2012, a sample of completed chemotherapy orders was reviewed by a pharmacist for the number and type of errors as part of routine performance improvement monitoring. Error frequencies for each of the three distinct methods of composing chemotherapy orders were compared using statistical methods. The rate of problematic order sets (those requiring significant rework for clarification) was reduced from 30.6% with handwritten orders to 12.6% with preprinted orders (preprinted v handwritten, P < .001) and to 2.2% with CPOE (preprinted v CPOE, P < .001). The incidence of errors capable of causing harm was reduced from 4.2% with handwritten orders to 1.5% with preprinted orders (preprinted v handwritten, P < .001) and to 0.1% with CPOE (CPOE v preprinted, P < .001). The number of problem- and error-containing chemotherapy orders was reduced sequentially by preprinted order sets and then by CPOE. CPOE is associated with low error rates, but it did not eliminate all errors, and the technology can introduce novel types of errors not seen with traditional handwritten or preprinted orders. Vigilance even with CPOE is still required to avoid patient harm.

  10. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express the quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
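    The sketch below mimics the bandwidth-selection loop under simplifying assumptions: the variance term is taken from the spread of the log eigenspectra across tapers, and the squared-bias term from the difference to a lightly smoothed reference rather than the paper's spline-based curvature estimate; the test signal and candidate bandwidths are illustrative.

    import numpy as np
    from scipy.signal.windows import dpss

    def multitaper_logspec(x, nw):
        """Plain average of the eigenspectra for K = 2*NW - 1 Slepian tapers
        (adaptive weighting is omitted for brevity)."""
        k = max(int(2 * nw - 1), 1)
        tapers = dpss(len(x), nw, k)
        eig = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
        return np.log(eig.mean(axis=0) + 1e-300), np.log(eig + 1e-300)

    def estimated_mse(x, nw):
        """Simplified stand-in for the paper's criterion: variance from the spread
        of the log eigenspectra across tapers, squared bias from the difference to
        a lightly smoothed (small-NW) reference."""
        log_s, log_eig = multitaper_logspec(x, nw)
        ref, _ = multitaper_logspec(x, 2.0)
        var = log_eig.var(axis=0) / log_eig.shape[0]
        bias2 = (log_s - ref) ** 2
        return np.mean(var + bias2)

    # Toy usage: pick the bandwidth for a noisy damped oscillation.
    rng = np.random.default_rng(4)
    t = np.arange(2048)
    x = np.exp(-t / 800) * np.cos(2 * np.pi * 0.1 * t) + 0.5 * rng.normal(size=t.size)
    candidates = [2.5, 4.0, 6.0, 8.0, 12.0]
    print(min(candidates, key=lambda nw: estimated_mse(x, nw)))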

  11. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.

    PubMed

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M

    2018-04-12

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.

  12. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes

    PubMed Central

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M.

    2018-01-01

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods. PMID:29649114

  13. Performance optimization of dense-array concentrator photovoltaic system considering effects of circumsolar radiation and slope error.

    PubMed

    Wong, Chee-Woon; Chong, Kok-Keong; Tan, Ming-Hui

    2015-07-27

    This paper presents an approach to optimize the electrical performance of a dense-array concentrator photovoltaic system comprising a non-imaging dish concentrator, considering circumsolar radiation and slope error effects. Based on the simulated flux distribution, a systematic methodology to optimize the layout configuration of the solar cell interconnection circuit in the dense-array concentrator photovoltaic module is proposed, which minimizes the current mismatch caused by the non-uniformity of the concentrated sunlight. An optimized layout of the interconnected solar cell circuit with a minimum electrical power loss of 6.5% can be achieved by minimizing the effects of both circumsolar radiation and slope error.

  14. Robust 2-Qubit Gates in a Linear Ion Crystal Using a Frequency-Modulated Driving Force

    NASA Astrophysics Data System (ADS)

    Leung, Pak Hong; Landsman, Kevin A.; Figgatt, Caroline; Linke, Norbert M.; Monroe, Christopher; Brown, Kenneth R.

    2018-01-01

    In an ion trap quantum computer, collective motional modes are used to entangle two or more qubits in order to execute multiqubit logical gates. Any residual entanglement between the internal and motional states of the ions results in loss of fidelity, especially when there are many spectator ions in the crystal. We propose using a frequency-modulated driving force to minimize such errors. In simulation, we obtained an optimized frequency-modulated 2-qubit gate that can suppress errors to less than 0.01% and is robust against frequency drifts over ±1 kHz . Experimentally, we have obtained a 2-qubit gate fidelity of 98.3(4)%, a state-of-the-art result for 2-qubit gates with five ions.

  15. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    PubMed Central

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361

  16. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-12-19

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms.

  17. Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.

    PubMed

    Costa, Marcelo Azevedo; Braga, Antonio Padua; de Menezes, Benjamin Rodrigues

    2012-09-01

    The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in supervised learning of neural networks: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of the convergence conditions are therefore presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since the final state can be reached through different trajectories, and states along the trajectory can be assessed individually against an additional objective function. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Clutch pressure estimation for a power-split hybrid transmission using nonlinear robust observer

    NASA Astrophysics Data System (ADS)

    Zhou, Bin; Zhang, Jianwu; Gao, Ji; Yu, Haisheng; Liu, Dong

    2018-06-01

    For a power-split hybrid transmission, using the brake clutch to realize the transition from electric drive mode to hybrid drive mode is an available strategy. Since pressure information for the brake clutch is essential for mode transition control, this research designs a nonlinear robust reduced-order observer to estimate the brake clutch pressure. Model uncertainties or disturbances are considered as additional inputs, and the observer is designed so that the error dynamics are input-to-state stable. The nonlinear characteristics of the system are expressed as lookup tables in the observer. Moreover, the gain matrix of the observer is solved by two optimization procedures under linear matrix inequality constraints. The proposed observer is validated by offline simulation and online testing; the results show that the observer performs well during the mode transition, as the estimation error remains within a reasonable range and, more importantly, is asymptotically stable.

  19. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and the optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
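    As a small illustration of the central computation, the sketch below evaluates a continuous-time point-process log-likelihood with the integral of the conditional intensity computed by Gauss-Legendre quadrature; the intensity model, the spike data, and the fixed quadrature order q are assumptions (the paper adapts q using an error check).

    import numpy as np

    def point_process_loglik(params, spike_times, t_max, rate_fn, q=60):
        """Continuous-time point-process log-likelihood
            sum_i log(lambda(t_i)) - integral_0^T lambda(t) dt,
        with the integral evaluated by q-point Gauss-Legendre quadrature."""
        nodes, weights = np.polynomial.legendre.leggauss(q)
        t_nodes = 0.5 * t_max * (nodes + 1.0)          # map [-1, 1] -> [0, T]
        integral = 0.5 * t_max * np.sum(weights * rate_fn(t_nodes, params))
        return np.sum(np.log(rate_fn(spike_times, params))) - integral

    # Toy usage: an inhomogeneous Poisson rate lambda(t) = exp(b0 + b1 * t).
    def rate(t, p):
        return np.exp(p[0] + p[1] * t)

    rng = np.random.default_rng(5)
    spikes = np.sort(rng.uniform(0, 10, 80))           # stand-in spike train
    print(round(point_process_loglik([1.5, 0.05], spikes, 10.0, rate, q=60), 2))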

  20. Model-based optimization of near-field binary-pixelated beam shapers

    DOE PAGES

    Dorrer, C.; Hassett, J.

    2017-01-23

    The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2, without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beam-shaping performance. Furthermore, this is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.

  1. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

    Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce the discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity to correct for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for being implemented as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.

  2. A Dynamic Attitude Measurement System Based on LINS

    PubMed Central

    Li, Hanzhou; Pan, Quan; Wang, Xiaoxu; Zhang, Juanni; Li, Jiang; Jiang, Xiangjun

    2014-01-01

    A dynamic attitude measurement system (DAMS) is developed based on a laser inertial navigation system (LINS). Three factors of the dynamic attitude measurement error using LINS are analyzed: dynamic error, time synchronization and phase lag. An optimal coning-error compensation algorithm is used to reduce coning errors, and two-axis wobbling verification experiments are presented in the paper. The tests indicate that the attitude accuracy is improved 2-fold by the algorithm. In order to decrease coning errors further, the attitude updating frequency is increased from 200 Hz to 2000 Hz. At the same time, a novel finite impulse response (FIR) filter with three notches is designed to filter the dither frequency of the ring laser gyro (RLG). The comparison tests suggest that the new filter is five times more effective than the old one. The paper indicates that the phase-frequency characteristics of the FIR filter and the first-order holder of the navigation computer constitute the main sources of phase lag in LINS. A formula to calculate the LINS attitude phase lag is introduced in the paper. The expressions of the dynamic attitude errors induced by phase lag are derived. The paper proposes a novel synchronization mechanism that is able to simultaneously solve the problems of dynamic test synchronization and phase compensation. A single-axis turntable and a laser interferometer are applied to verify the synchronization mechanism. The experimental results show that the theoretically calculated values of phase lag and of the attitude error induced by phase lag both match the test data well. The block diagram of the DAMS and physical photos are presented in the paper. The final experiments demonstrate that the real-time attitude measurement accuracy of the DAMS can reach 20″ (1σ) and the synchronization error is less than 0.2 ms under three-axis wobbling for 10 min. PMID:25177802

  3. Electrophysiological Correlates of Error Monitoring and Feedback Processing in Second Language Learning.

    PubMed

    Bultena, Sybrine; Danielmeier, Claudia; Bekkering, Harold; Lemhöfer, Kristin

    2017-01-01

    Humans monitor their behavior to optimize performance, which presumably relies on stable representations of correct responses. During second language (L2) learning, however, stable representations have yet to be formed while knowledge of the first language (L1) can interfere with learning, which in some cases results in persistent errors. In order to examine how correct L2 representations are stabilized, this study examined performance monitoring in the learning process of second language learners for a feature that conflicts with their first language. Using EEG, we investigated if L2 learners in a feedback-guided word gender assignment task showed signs of error detection in the form of an error-related negativity (ERN) before and after receiving feedback, and how feedback is processed. The results indicated that initially, response-locked negativities for correct (CRN) and incorrect (ERN) responses were of similar size, showing a lack of internal error detection when L2 representations are unstable. As behavioral performance improved following feedback, the ERN became larger than the CRN, pointing to the first signs of successful error detection. Additionally, we observed a second negativity following the ERN/CRN components, the amplitude of which followed a similar pattern as the previous negativities. Feedback-locked data indicated robust FRN and P300 effects in response to negative feedback across different rounds, demonstrating that feedback remained important in order to update memory representations during learning. We thus show that initially, L2 representations may often not be stable enough to warrant successful error monitoring, but can be stabilized through repeated feedback, which means that the brain is able to overcome L1 interference, and can learn to detect errors internally after a short training session. The results contribute a different perspective to the discussion on changes in ERN and FRN components in relation to learning, by extending the investigation of these effects to the language learning domain. Furthermore, these findings provide a further characterization of the online learning process of L2 learners.

  4. Phase noise optimization in temporal phase-shifting digital holography with partial coherence light sources and its application in quantitative cell imaging.

    PubMed

    Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert

    2009-03-10

    In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, a theoretical analysis of statistical errors, digitization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out for the example of a variable three-step algorithm. In a second step, the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
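
    For reference, a minimal sketch of a standard fixed-shift three-step phase retrieval with shifts of 0, π/2 and π; the variable three-step algorithm analyzed in the paper generalizes this, and the interferograms below are synthetic with assumed bias, modulation and noise levels.

      import numpy as np

      rng = np.random.default_rng(0)
      shape = (64, 64)
      phase_true = rng.uniform(-np.pi, np.pi, shape)      # synthetic object phase
      bias, mod = 1.0, 0.6                                 # background and modulation (assumed)

      def hologram(delta, noise=0.01):
          # intensity of one phase-shifted interferogram, with additive noise
          return bias + mod * np.cos(phase_true + delta) + noise * rng.normal(size=shape)

      I1, I2, I3 = hologram(0.0), hologram(np.pi / 2), hologram(np.pi)

      # Three-step reconstruction for shifts (0, pi/2, pi):
      #   I1 - I3 = 2*mod*cos(phase),  I1 + I3 - 2*I2 = 2*mod*sin(phase)
      phase_est = np.arctan2(I1 + I3 - 2.0 * I2, I1 - I3)

      rms_err = np.sqrt(np.mean(np.angle(np.exp(1j * (phase_est - phase_true))) ** 2))
      print(f"RMS phase error: {rms_err:.4f} rad")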

  5. Error propagation of partial least squares for parameters optimization in NIR modeling.

    PubMed

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented by both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a powerful process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
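
    A small illustration, on synthetic spectra rather than the corn or Gardenia data, of how modeling parameters such as spectral pretreatment and the number of latent variables change the prediction error of a PLS model; the error-weight analysis of the paper is not reproduced, and all data sizes and pretreatment choices below are assumptions.

      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_samples, n_wavelengths = 120, 200
      X = np.cumsum(rng.normal(size=(n_samples, n_wavelengths)), axis=1)   # smooth "spectra"
      y = X[:, 50] - 0.5 * X[:, 120] + 0.1 * rng.normal(size=n_samples)    # synthetic analyte

      pretreatments = {
          "none": lambda A: A,
          "SNV": lambda A: (A - A.mean(axis=1, keepdims=True)) / A.std(axis=1, keepdims=True),
          "2nd derivative": lambda A: savgol_filter(A, window_length=11, polyorder=2,
                                                    deriv=2, axis=1),
      }

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      for name, f in pretreatments.items():
          for n_lv in (2, 5, 10):
              pls = PLSRegression(n_components=n_lv).fit(f(X_tr), y_tr)
              rmse = np.sqrt(np.mean((pls.predict(f(X_te)).ravel() - y_te) ** 2))
              print(f"{name:>15s}, {n_lv:2d} latent variables: RMSEP = {rmse:.3f}")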

  6. Error propagation of partial least squares for parameters optimization in NIR modeling

    NASA Astrophysics Data System (ADS)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented by both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a powerful process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models.

  7. An ArcGIS decision support tool for artificial reefs site selection (ArcGIS ARSS)

    NASA Astrophysics Data System (ADS)

    Stylianou, Stavros; Zodiatis, George

    2017-04-01

    Although the use and benefits of artificial reefs, both socio-economic and environmental, have been recognized by research and national development programmes worldwide, their development is rarely subjected to a rigorous site selection process, and the majority of projects use a traditional (non-GIS) approach based on trial and error. Recent studies have shown that the use of Geographic Information Systems for identifying suitable areas for artificial reef siting, unlike traditional methods, offers a number of distinct advantages, minimizing possible errors, time and cost. A decision support tool (DSS) has been developed based on existing knowledge, multi-criteria decision analysis techniques and the GIS approach used in previous studies, in order to help stakeholders identify the optimal locations for artificial reef deployment on the basis of the physical, biological, oceanographic and socio-economic features of the sites. The tool provides users with the ability to produce a final report with the results and suitability maps. The ArcGIS ARSS tool runs within the existing ArcMap 10.2.x environment and was developed using the VB .NET high-level programming language along with ArcObjects 10.2.x. Two local-scale case studies were conducted in order to test the application of the tool, focusing on artificial reef siting. The results obtained from the case studies show that the tool can be successfully integrated within the site selection process in order to select objectively the optimal site for artificial reef deployment.

  8. Optimizing Automatic Deployment Using Non-functional Requirement Annotations

    NASA Astrophysics Data System (ADS)

    Kugele, Stefan; Haberl, Wolfgang; Tautschnig, Michael; Wechs, Martin

    Model-driven development has become common practice in design of safety-critical real-time systems. High-level modeling constructs help to reduce the overall system complexity apparent to developers. This abstraction caters for fewer implementation errors in the resulting systems. In order to retain correctness of the model down to the software executed on a concrete platform, human faults during implementation must be avoided. This calls for an automatic, unattended deployment process including allocation, scheduling, and platform configuration.

  9. SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.

    PubMed

    Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S

    2012-06-01

    With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming if we want to compute the DDC with small statistical uncertainty. This work studies the effects of the statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to the ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map, and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used a stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10⁸ to 10⁶, the dose-volume histograms are still very similar to the error-free DVHs while the error in the DDC is about 3.8%. The results illustrate that the statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates that we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
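
    A toy numerical sketch of the effect studied here: a fluence map is optimized by projected gradient descent on a quadratic penalty using a noisy dose-deposition-coefficient (DDC) matrix, and the resulting plan is then re-evaluated with the noise-free matrix; the matrix sizes, noise level and prescription are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n_voxels, n_beamlets = 500, 80
      D_true = rng.random((n_voxels, n_beamlets)) / n_beamlets        # noise-free DDC matrix
      D_noisy = D_true * (1.0 + 0.04 * rng.normal(size=D_true.shape)) # MC-like statistical noise
      d_presc = np.ones(n_voxels)                                     # prescribed dose (simplified)

      def optimize_fluence(D, n_iter=2000):
          # projected gradient descent on the quadratic penalty ||D f - d_presc||^2, f >= 0
          step = 0.9 / (np.linalg.norm(D, ord=2) ** 2)   # safe step size for this objective
          f = np.zeros(D.shape[1])
          for _ in range(n_iter):
              grad = 2.0 * D.T @ (D @ f - d_presc)
              f = np.maximum(f - step * grad, 0.0)
          return f

      f_noisy = optimize_fluence(D_noisy)
      f_clean = optimize_fluence(D_true)

      # evaluate both plans with the noise-free DDC matrix
      err_noisy = np.linalg.norm(D_true @ f_noisy - d_presc) / np.linalg.norm(d_presc)
      err_clean = np.linalg.norm(D_true @ f_clean - d_presc) / np.linalg.norm(d_presc)
      print(f"relative dose error, noisy DDC: {err_noisy:.4f}, noise-free DDC: {err_clean:.4f}")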

  10. Kinematic geometry of osteotomies.

    PubMed

    Smith, Erin J; Bryant, J Tim; Ellis, Randy E

    2005-01-01

    This paper presents a novel method for defining an osteotomy that can be used to represent all types of osteotomy procedures. In essence, we model an osteotomy as a lower-pair mechanical joint to derive the kinematic geometry of the osteotomy. This method was implemented using a commercially available animation software suite in order to simulate a variety of osteotomy procedures. Two osteotomy procedures are presented for a femoral malunion in order to demonstrate the advantages of our kinematic model in developing optimal osteotomy plans. The benefits of this kinematic model include the ability to evaluate the effects of various kinds of osteotomy and the elimination of potentially error-prone radiographic assessment of deformities.

  11. On the performance evaluation of LQAM-MPPM techniques over exponentiated Weibull fading free-space optical channels

    NASA Astrophysics Data System (ADS)

    Khallaf, Haitham S.; Elfiqi, Abdulaziz E.; Shalaby, Hossam M. H.; Sampei, Seiichi; Obayya, Salah S. A.

    2018-06-01

    We investigate the performance of hybrid L-ary quadrature-amplitude modulation-multi-pulse pulse-position modulation (LQAM-MPPM) techniques over an exponentiated Weibull (EW) fading free-space optical (FSO) channel, considering both weather and pointing-error effects. Upper-bound and approximate tight upper-bound expressions for the bit-error rate (BER) of LQAM-MPPM techniques over EW FSO channels are obtained, taking into account the effects of fog, beam divergence, and pointing error. Setup block diagrams for both the transmitter and receiver of the LQAM-MPPM/FSO system are introduced and illustrated. The BER expressions are evaluated numerically, and the results reveal that the LQAM-MPPM technique outperforms ordinary LQAM and MPPM schemes under different fading levels and weather conditions. Furthermore, the effect of the modulation index is investigated, and it turns out that a modulation index greater than 0.4 is required in order to optimize the system performance. Finally, pointing error introduces a large power penalty on the LQAM-MPPM system performance. Specifically, at a BER of 10⁻⁹, pointing error introduces power penalties of about 45 and 28 dB for receiver aperture sizes of DR = 50 and 200 mm, respectively.

  12. The incidence and severity of errors in pharmacist-written discharge medication orders.

    PubMed

    Onatade, Raliat; Sawieres, Sara; Veck, Alexandra; Smith, Lindsay; Gore, Shivani; Al-Azeib, Sumiah

    2017-08-01

    Background Errors in discharge prescriptions are problematic. When hospital pharmacists write discharge prescriptions, improvements are seen in the quality and efficiency of discharge. There is limited information on the incidence of errors in pharmacists' medication orders. Objective To investigate the extent and clinical significance of errors in pharmacist-written discharge medication orders. Setting 1000-bed teaching hospital in London, UK. Method Pharmacists in this London hospital routinely write discharge medication orders as part of the clinical pharmacy service. Convenient days, based on researcher availability, between October 2013 and January 2014 were selected. Pre-registration pharmacists reviewed all discharge medication orders written by pharmacists on these days and identified discrepancies between the medication history, inpatient chart, patient records and discharge summary. A senior clinical pharmacist confirmed the presence of an error. Each error was assigned a potential clinical significance rating (based on the NCCMERP scale) by a physician and an independent senior clinical pharmacist, working separately. Main outcome measure Incidence of errors in pharmacist-written discharge medication orders. Results 509 prescriptions, written by 51 pharmacists, containing 4258 discharge medication orders were assessed (8.4 orders per prescription). Ten prescriptions (2%) contained a total of ten erroneous orders (order error rate 0.2%). The pharmacist considered that one error had the potential to cause temporary harm (0.02% of all orders). The physician did not rate any of the errors as having the potential to cause harm. Conclusion The incidence of errors in pharmacists' discharge medication orders was low. The quality, safety and policy implications of pharmacists routinely writing discharge medication orders should be further explored.

  13. Optimization of the Hartmann-Shack microlens array

    NASA Astrophysics Data System (ADS)

    de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William

    2011-04-01

    In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular microlens arrays containing a larger number of microlenses with arrays of fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or of the number of microlenses in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics are known.
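
    A compact sketch of the selection idea (not the authors' implementation): a small genetic algorithm chooses which candidate microlens positions to keep so that the least-squares wavefront reconstruction error, averaged over random test wavefronts, is minimized. A simple polynomial basis stands in for Zernike modes, direct wavefront sampling replaces the slope measurements of a real Hartmann-Shack sensor, and all sizes and GA settings are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n_cand, n_modes, n_keep = 36, 6, 10          # candidate positions, basis size, lenses kept

      xy = rng.uniform(-1.0, 1.0, (n_cand, 2))     # candidate microlens positions on the pupil
      x, y = xy[:, 0], xy[:, 1]
      basis = np.column_stack([np.ones_like(x), x, y, x * y, x**2 - y**2, x**2 + y**2])
      test_coeffs = rng.normal(size=(40, n_modes)) # stand-in for known aberration statistics
      noise = 0.02

      def fitness(idx):
          """Mean residual wavefront error when sampling only at the selected positions."""
          B = basis[idx]
          errs = []
          for c in test_coeffs:
              meas = B @ c + noise * rng.normal(size=len(idx))
              c_hat, *_ = np.linalg.lstsq(B, meas, rcond=None)
              errs.append(np.linalg.norm(basis @ (c_hat - c)))
          return float(np.mean(errs))

      def crossover(a, b):
          return rng.choice(np.union1d(a, b), n_keep, replace=False)

      def mutate(idx):
          out = idx.copy()
          out[rng.integers(n_keep)] = rng.choice(np.setdiff1d(np.arange(n_cand), out))
          return out

      pop = [rng.choice(n_cand, n_keep, replace=False) for _ in range(30)]
      for _ in range(40):
          pop.sort(key=fitness)                    # elitist: best individuals first
          parents = pop[:10]
          children = [mutate(crossover(parents[rng.integers(10)], parents[rng.integers(10)]))
                      for _ in range(20)]
          pop = parents + children
      pop.sort(key=fitness)
      print("selected positions:", np.sort(pop[0]), " fitness:", round(fitness(pop[0]), 4))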

  14. The Sizing and Optimization Language, (SOL): Computer language for design problems

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  15. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and radiation-free operation. Estimation of the conductivity field leads to low-resolution images compared with other technologies, and to a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than those of the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius. These results demonstrate that the proposed approaches can estimate an object's position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations, it is possible to use them in real-time applications without requiring high-performance computers.

  16. Diffraction grating strain gauge method: error analysis and its application for the residual stress measurement in thermal barrier coatings

    NASA Astrophysics Data System (ADS)

    Yin, Yuanjie; Fan, Bozhao; He, Wei; Dai, Xianglu; Guo, Baoqiao; Xie, Huimin

    2018-03-01

    Diffraction grating strain gauge (DGSG) is an optical strain measurement method. Based on this method, a six-spot diffraction grating strain gauge (S-DGSG) system has been developed with the advantages of high and adjustable sensitivity, compact structure, and non-contact measurement. In this study, this system is applied to the residual stress measurement in thermal barrier coatings (TBCs) in combination with the hole-drilling method. During the experiment, the specimen's location is supposed to be reset accurately before and after the hole-drilling; however, it is found that the rigid body displacements from the resetting process can seriously influence the measurement accuracy. In order to understand and eliminate the effects of the rigid body displacements, such as the three-dimensional (3D) rotations and the out-of-plane displacement of the grating, the measurement error of this system is systematically analyzed, and an optimized method is proposed. Moreover, a numerical experiment and a verification tensile test are conducted, and the results successfully verify the applicability of this optimized method. Finally, using this optimized method, a residual stress measurement experiment is conducted, and the results show that the method can be applied to measure the residual stress in TBCs.

  17. Rational decision-making in inhibitory control.

    PubMed

    Shenoy, Pradeep; Yu, Angela J

    2011-01-01

    An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various rewards and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability.
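
    The classical independent race model mentioned above can be simulated in a few lines; a minimal sketch with assumed Gaussian finishing-time distributions (the Bayes-optimal framework of the paper is not reproduced here).

      import numpy as np

      rng = np.random.default_rng(0)
      n_trials = 20000
      go_mu, go_sd = 450.0, 80.0      # go-process finishing time, ms (assumed values)
      ssrt_mu, ssrt_sd = 220.0, 30.0  # stop-signal reaction time, ms (assumed values)

      def p_respond(ssd):
          """Probability of failing to stop at a given stop-signal delay (ms)."""
          go_finish = rng.normal(go_mu, go_sd, n_trials)
          stop_finish = ssd + rng.normal(ssrt_mu, ssrt_sd, n_trials)
          return np.mean(go_finish < stop_finish)    # go wins the race -> stop error

      for ssd in (50, 150, 250, 350):
          print(f"SSD = {ssd:3d} ms -> P(stop error) = {p_respond(ssd):.2f}")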

  18. Rational Decision-Making in Inhibitory Control

    PubMed Central

    Shenoy, Pradeep; Yu, Angela J.

    2011-01-01

    An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various rewards and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability. PMID:21647306

  19. An Optimized Trajectory Planning for Welding Robot

    NASA Astrophysics Data System (ADS)

    Chen, Zhilong; Wang, Jun; Li, Shuting; Ren, Jun; Wang, Quan; Cheng, Qunchao; Li, Wentao

    2018-03-01

    In order to improve welding efficiency and quality, this paper studies the combined planning of welding parameters and the spatial trajectory for a welding robot, and proposes a trajectory planning method with high real-time performance, strong controllability and small welding error. By adding a virtual joint at the end-effector, an appropriate virtual joint model is established and the welding process parameters are represented by the virtual joint variables. The trajectory planning is carried out in the robot joint space, which makes the control of the welding process parameters more intuitive and convenient. By using the virtual joint model combined with the affine invariance of B-spline curves, the welding process parameters are indirectly controlled by controlling the motion curves of the real joints. With minimum time as the objective, the welding process parameters and the joint-space trajectory are jointly optimized.

  20. Defining near misses: towards a sharpened definition based on empirical data about error handling processes.

    PubMed

    Kessels-Habraken, Marieke; Van der Schaaf, Tjerk; De Jonge, Jan; Rutte, Christel

    2010-05-01

    Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and correction. Reporting and analysis of so-called near misses - usually defined as incidents without adverse consequences for patients - are necessary to gather information about successful error recovery mechanisms. This study establishes the need for a clearer and more consistent definition of near misses to enable large-scale reporting and analysis in order to obtain such information. Qualitative incident reports and interviews were collected on four units of two Dutch general hospitals. Analysis of the 143 accompanying error handling processes demonstrated that different incident types each provide unique information about error handling. Specifically, error handling processes underlying incidents that did not reach the patient differed significantly from those of incidents that reached the patient, irrespective of harm, because of successful countermeasures that had been taken after error detection. We put forward two possible definitions of near misses and argue that, from a practical point of view, the optimal definition may be contingent on organisational context. Both proposed definitions could yield large-scale reporting of near misses. Subsequent analysis could enable health care organisations to improve the safety and quality of care proactively by (1) eliminating failure factors before real accidents occur, (2) enhancing their ability to intercept errors in time, and (3) improving their safety culture. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
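
    For context, a minimal sketch of Floyd-Steinberg error diffusion against a fixed palette, the halftoning step for which the universal palette is designed; the naive uniform palette and the plain RGB distance below are placeholders for the SSQ-designed palette and the opponent-color error metric used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.random((64, 64, 3))                       # placeholder RGB image in [0, 1]

      # naive fixed palette: 6x6x6 uniform levels (216 colors), not an optimized one
      levels = np.linspace(0.0, 1.0, 6)
      palette = np.array(np.meshgrid(levels, levels, levels)).T.reshape(-1, 3)

      def nearest(color):
          return palette[np.argmin(np.sum((palette - color) ** 2, axis=1))]

      out = img.astype(float).copy()
      h, w, _ = out.shape
      for y in range(h):
          for x in range(w):
              old = out[y, x].copy()
              new = nearest(old)
              out[y, x] = new
              err = old - new
              # Floyd-Steinberg weights: 7/16 right, 3/16 down-left, 5/16 down, 1/16 down-right
              if x + 1 < w: out[y, x + 1] += err * 7 / 16
              if y + 1 < h and x > 0: out[y + 1, x - 1] += err * 3 / 16
              if y + 1 < h: out[y + 1, x] += err * 5 / 16
              if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16

      print("unique palette colors used:", len(np.unique(out.reshape(-1, 3), axis=0)))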

  2. Effect of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths.

    PubMed

    Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S

    2009-11-01

    Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. Analyzed data mined from 44 pathology residents using SlideTutor, a medical intelligent tutoring system in dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study, representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. The frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.

  3. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE PAGES

    Brito, Thiago V.; Morley, Steven K.

    2017-10-25

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function, τ, that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
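
    The exact functional form of τ is not given in this record, so the sketch below shows one plausible cost that combines a relative magnitude term and an angular term between modeled and observed field vectors; the weights and the placeholder data are assumptions.

      import numpy as np

      def tau(B_model, B_obs, w_mag=1.0, w_ang=1.0):
          """Combined magnitude/angle misfit between modeled and observed B vectors.

          Both arrays have shape (n_samples, 3). This particular form of the cost is an
          assumption, not the published definition of tau.
          """
          mag_model = np.linalg.norm(B_model, axis=1)
          mag_obs = np.linalg.norm(B_obs, axis=1)
          mag_term = np.abs(mag_model - mag_obs) / mag_obs               # relative magnitude error
          cosang = np.sum(B_model * B_obs, axis=1) / (mag_model * mag_obs)
          ang_term = np.arccos(np.clip(cosang, -1.0, 1.0))               # angular error in radians
          return np.mean(w_mag * mag_term + w_ang * ang_term)

      # placeholder data: observed field and a slightly perturbed "model" field (nT)
      rng = np.random.default_rng(0)
      B_obs = rng.normal(size=(1000, 3)) * 50.0
      B_model = B_obs * 1.05 + rng.normal(size=B_obs.shape) * 2.0
      print(f"tau = {tau(B_model, B_obs):.4f}")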

  4. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brito, Thiago V.; Morley, Steven K.

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function, τ, that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.

  5. Optimized method for manufacturing large aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui

    2007-12-01

    Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there are more pressing requirements for large-aperture, high-precision aspheric surfaces. The original computer-controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has received considerable attention from researchers. To address the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. A smaller SSD depth can be obtained by using a low-hardness tool and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended model of the material removal function. For control of the full band of frequency errors, low-frequency errors can be corrected with the optimized material removal function, while medium-high-frequency errors are addressed using a uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach 0.055 waves rms (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.

  6. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    Future X-ray observatory missions, such as the International X-ray Observatory, require grazing-incidence replicated optics of extremely large collecting area (3 m²) in combination with an angular resolution of less than 5 arcsec half-power diameter. The resolution of a mirror shell depends ultimately on the quality of the cylindrical mandrels from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor to the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation studies have been performed to optimize the operational parameters as well as the polishing lap configuration. Furthermore, depending upon the surface error profile, a model for localized polishing based on a dwell-time approach is developed. Using the inputs from the mathematical model, a mandrel having a conically approximated Wolter-I geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. We report our first experimental results and discuss plans for further improvements in the polishing process.
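
    A one-dimensional sketch of the dwell-time idea: removal is modeled as the convolution of a tool removal function with the dwell-time map, and the dwell times that best flatten a measured axial figure-error profile are found by non-negative least squares; the Gaussian removal function, the error profile and all sizes are assumptions, not the authors' model.

      import numpy as np
      from scipy.optimize import nnls

      n = 200                                           # axial sample points along the mandrel
      x = np.linspace(0.0, 1.0, n)
      error_profile = 0.2 * np.sin(6 * np.pi * x) + 0.1 * np.sin(20 * np.pi * x) + 0.3
      error_profile -= error_profile.min()              # material to remove must be non-negative

      # tool removal function (removal per unit dwell time), assumed Gaussian footprint
      sigma = 0.02
      def removal_kernel(xi, xj):
          return np.exp(-0.5 * ((xi - xj) / sigma) ** 2)

      # removal = A @ dwell : column j is the footprint of dwelling at position x[j]
      A = removal_kernel(x[:, None], x[None, :])

      dwell, residual = nnls(A, error_profile)
      predicted_removal = A @ dwell
      rms_residual = np.sqrt(np.mean((predicted_removal - error_profile) ** 2))
      print(f"RMS residual figure error after correction: {rms_residual:.4f}")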

  7. Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.

    PubMed

    Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M

    2018-01-01

    Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate (p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.

  8. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates, for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.
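
    A small numerical illustration of the comparison described above: the steady-state error covariance of the optimal (Kalman) filter follows from the discrete Riccati equation, while that of a fixed-gain filter designed from an erroneous model follows from a discrete Lyapunov equation; the system matrices and the size of the modeling error are arbitrary assumptions.

      import numpy as np
      from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

      A = np.array([[1.0, 0.1], [0.0, 0.95]])     # true system matrix (assumed)
      C = np.array([[1.0, 0.0]])                  # measurement matrix
      Q = 0.01 * np.eye(2)                        # process noise covariance
      R = np.array([[0.1]])                       # measurement noise covariance

      def steady_state_prediction_cov(A, C, Q, R):
          # Kalman predictor: P = A P A' - A P C'(C P C' + R)^-1 C P A' + Q (via DARE duality)
          return solve_discrete_are(A.T, C.T, Q, R)

      def predictor_gain(P, A, C, R):
          return A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

      # optimal filter designed on the true model
      P_opt = steady_state_prediction_cov(A, C, Q, R)
      L_opt = predictor_gain(P_opt, A, C, R)

      # suboptimal filter: gain designed on a model with an erroneous A
      A_wrong = A + np.array([[0.0, 0.0], [0.0, 0.04]])
      L_sub = predictor_gain(steady_state_prediction_cov(A_wrong, C, Q, R), A_wrong, C, R)

      def actual_error_cov(L):
          # error dynamics e+ = (A - L C) e + w - L v  ->  discrete Lyapunov equation
          return solve_discrete_lyapunov(A - L @ C, Q + L @ R @ L.T)

      print("trace of steady-state error covariance, optimal gain   :",
            np.trace(actual_error_cov(L_opt)))
      print("trace of steady-state error covariance, suboptimal gain:",
            np.trace(actual_error_cov(L_sub)))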

  9. Pareto Design of State Feedback Tracking Control of a Biped Robot via Multiobjective PSO in Comparison with Sigma Method and Genetic Algorithms: Modified NSGAII and MATLAB's Toolbox

    PubMed Central

    Mahmoodabadi, M. J.; Taherkhorsandi, M.; Bagheri, A.

    2014-01-01

    An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of such a controller are usually determined by a tedious trial-and-error process. To eliminate this process and design the parameters of the proposed controller, multiobjective evolutionary algorithms, that is, the proposed method, modified NSGAII, the Sigma method, and MATLAB's Toolbox MOGA, are employed in this study. Among the evolutionary optimization algorithms used to design the controller for biped robots, the proposed method operates better in the aspect of designing the controller, since it provides ample opportunities for designers to choose the most appropriate point based upon the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions, that is, the normalized summation of angle errors and the normalized summation of control effort. The obtained results demonstrate the effectiveness of the proposed controller in controlling a biped robot. PMID:24616619

  10. Convergence Analysis of Triangular MAC Schemes for Two Dimensional Stokes Equations

    PubMed Central

    Wang, Ming; Zhong, Lin

    2015-01-01

    In this paper, we consider the use of H(div) elements in the velocity–pressure formulation to discretize Stokes equations in two dimensions. We address the error estimate of the element pair RT0–P0, which is known to be suboptimal, and render the error estimate optimal by the symmetry of the grids and by the superconvergence result of the Lagrange interpolant. By enlarging RT0 such that it becomes a modified BDM-type element, we develop a new discretization BDM1b–P0. We, therefore, generalize the classical MAC scheme on rectangular grids to triangular grids and retain all the desirable properties of the MAC scheme: exact divergence-freeness, solver-friendliness, and local conservation of physical quantities. Further, we prove that the proposed discretization BDM1b–P0 achieves the optimal convergence rate for both velocity and pressure on general quasi-uniform grids, and a one-and-a-half-order convergence rate for the vorticity and a recovered pressure. We demonstrate the validity of the theories developed here by numerical experiments. PMID:26041948

  11. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) abandon the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. Colored noise injection, used to design pseudovalidation sets, proves to be a powerful tool for facilitating finite-sample problems in moderate-dimensional PR tasks.

  12. Multifractal diffusion entropy analysis: Optimal bin width of probability histograms

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Korbel, Jan

    2014-11-01

    In the framework of Multifractal Diffusion Entropy Analysis, we propose a method for choosing an optimal bin width in histograms generated from underlying probability distributions of interest. The method presented uses techniques of Rényi's entropy and mean squared error analysis to discuss the conditions under which the error in the multifractal spectrum estimation is minimal. We illustrate the utility of our approach by focusing on the scaling behavior of financial time series. In particular, we analyze the S&P500 stock index as sampled at a daily rate in the time period 1950-2013. In order to demonstrate a strength of the proposed method, we compare the multifractal δ-spectrum for various bin widths and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. The connection between the δ-spectrum and Rényi's q parameter is also discussed and elucidated with a simple example of a multiscale time series.

  13. Recurrent Neural Network Applications for Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize to irregular time series. In this talk, I will describe two recurrent neural network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition to this, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to set hyperparameters correctly for a stable and performant solution. Hand-tuning these hyperparameters is difficult, and we circumvent this obstacle by optimizing ESN hyperparameters using Bayesian optimization with Gaussian process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.

  14. Optimizing Blasting’s Air Overpressure Prediction Model using Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Air overpressure (AOp) resulting from blasting can cause damage and nuisance to nearby civilians. Thus, it is important to be able to predict AOp accurately. In this study, 8 different artificial neural network (ANN) models were developed for the prediction of AOp. The ANN models were trained using different variants of the particle swarm optimization (PSO) algorithm. AOp predictions were also made using an empirical equation, as suggested by the United States Bureau of Mines (USBM), to serve as a benchmark. In order to develop the models, 76 blasting operations in Hulu Langat were investigated. All the ANN models were found to outperform the USBM equation in three performance metrics: root mean square error (RMSE), mean absolute percentage error (MAPE) and coefficient of determination (R²). Using a performance ranking method, MSO-Rand-Mut was determined to be the best prediction model for AOp, with RMSE = 2.18, MAPE = 1.73% and R² = 0.97. The results show that ANN models trained using PSO are capable of predicting AOp with great accuracy.

  15. Resource allocation for error resilient video coding over AWGN using optimization approach.

    PubMed

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error-resilient video coding is jointly optimized with 802.11a-like media access control and physical layers, with automatic repeat request and rate-compatible punctured convolutional coding over an additive white Gaussian noise channel, as well as channel time allocation for time division multiple access. For error-resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.

  16. How unrealistic optimism is maintained in the face of reality.

    PubMed

    Sharot, Tali; Korn, Christoph W; Dolan, Raymond J

    2011-10-09

    Unrealistic optimism is a pervasive human trait that influences domains ranging from personal relationships to politics and finance. How people maintain unrealistic optimism, despite frequently encountering information that challenges those biased beliefs, is unknown. We examined this question and found a marked asymmetry in belief updating. Participants updated their beliefs more in response to information that was better than expected than to information that was worse. This selectivity was mediated by a relative failure to code for errors that should reduce optimism. Distinct regions of the prefrontal cortex tracked estimation errors when those called for positive update, both in individuals who scored high and low on trait optimism. However, highly optimistic individuals exhibited reduced tracking of estimation errors that called for negative update in right inferior prefrontal gyrus. These findings indicate that optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.

  17. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    NASA Astrophysics Data System (ADS)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    Radio Frequency Identification (RFID) systems have multiple benefits which can improve the operational efficiency of an organization. The advantages include the ability to record data systematically and quickly, reduced human and system errors, and automatic, efficient database updates. Often, more than one reader is needed when installing an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works properly. The planning process is treated as an optimization of reader placement and power adjustment, because the coordinates of each RFID reader must be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has few parameters, fast simulation times, and is easy to use and very practical. However, PSO parameters must be adjusted correctly for robust and efficient use of PSO; failure to do so may degrade performance and the quality of the optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the iteration number. The study also recommends the most suitable settings for both parameters, namely 200 iterations and a swarm size of 800. These results will enable PSO to operate more efficiently when optimizing RFID network planning.
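
    A plain global-best PSO exposing the two parameters examined above (swarm size and iteration number) might look like the sketch below; the sphere objective is only a stand-in for the RFID tag-coverage cost, which the abstract does not specify.

    import numpy as np

    def pso(objective, dim, n_swarm=800, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-5, 5, (n_swarm, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.apply_along_axis(objective, 1, pos)
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            val = np.apply_along_axis(objective, 1, pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()        # global best so far
        return gbest, pbest_val.min()

    best, cost = pso(lambda x: np.sum(x ** 2), dim=4)   # recommended settings: 800 particles, 200 iterations
    print(best, cost)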

  18. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
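
    To make the design criteria concrete, the sketch below brute-forces D- and E-optimal choices of four sampling times for a toy exponential-decay model y = p1*exp(-p2*t), using the Fisher information matrix built from the model sensitivities; the SE-optimal criterion introduced in the paper is not reproduced here, and the model and numbers are illustrative only.

    import numpy as np
    from itertools import combinations

    p1, p2 = 2.0, 0.5                                  # nominal parameter values
    candidates = np.linspace(0.2, 10.0, 20)            # candidate sampling times

    def fim(times):
        # sensitivity (Jacobian) rows [dy/dp1, dy/dp2] at each sampling time
        j = np.array([[np.exp(-p2 * t), -p1 * t * np.exp(-p2 * t)] for t in times])
        return j.T @ j

    # D-optimal: maximize det(FIM); E-optimal: maximize the smallest eigenvalue
    best_d = max(combinations(candidates, 4), key=lambda ts: np.linalg.det(fim(ts)))
    best_e = max(combinations(candidates, 4), key=lambda ts: np.linalg.eigvalsh(fim(ts))[0])
    print("D-optimal times:", np.round(best_d, 2))
    print("E-optimal times:", np.round(best_e, 2))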

  19. A Spectral Finite Element Approach to Modeling Soft Solids Excited with High-Frequency Harmonic Loads

    PubMed Central

    Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.

    2010-01-01

    An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
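
    The non-uniform Gauss-Lobatto-Legendre (GLL) nodal points mentioned above can be generated with a few lines of NumPy, as in this sketch (the element order and the printout are illustrative):

    import numpy as np
    from numpy.polynomial import legendre

    def gll_nodes(order):
        # interior GLL nodes are the roots of P_N'(x); the endpoints are -1 and +1
        coeffs = np.zeros(order + 1)
        coeffs[-1] = 1.0                                   # Legendre series for P_N
        interior = legendre.legroots(legendre.legder(coeffs))
        return np.concatenate(([-1.0], np.sort(interior), [1.0]))

    print(gll_nodes(6))   # 7 nodes, clustered toward the element edges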

  20. Potential benefit of electronic pharmacy claims data to prevent medication history errors and resultant inpatient order errors

    PubMed Central

    Palmer, Katherine A; Shane, Rita; Wu, Cindy N; Bell, Douglas S; Diaz, Frank; Cook-Wiens, Galen; Jackevicius, Cynthia A

    2016-01-01

    Objective We sought to assess the potential of a widely available source of electronic medication data to prevent medication history errors and resultant inpatient order errors. Methods We used admission medication history (AMH) data from a recent clinical trial that identified 1017 AMH errors and 419 resultant inpatient order errors among 194 hospital admissions of predominantly older adult patients on complex medication regimens. Among the subset of patients for whom we could access current Surescripts electronic pharmacy claims data (SEPCD), two pharmacists independently assessed error severity and our main outcome, which was whether SEPCD (1) was unrelated to the medication error; (2) probably would not have prevented the error; (3) might have prevented the error; or (4) probably would have prevented the error. Results Seventy patients had both AMH errors and current, accessible SEPCD. SEPCD probably would have prevented 110 (35%) of 315 AMH errors and 46 (31%) of 147 resultant inpatient order errors. When we excluded the least severe medication errors, SEPCD probably would have prevented 99 (47%) of 209 AMH errors and 37 (61%) of 61 resultant inpatient order errors. SEPCD probably would have prevented at least one AMH error in 42 (60%) of 70 patients. Conclusion When current SEPCD was available for older adult patients on complex medication regimens, it had substantial potential to prevent AMH errors and resultant inpatient order errors, with greater potential to prevent more severe errors. Further study is needed to measure the benefit of SEPCD in actual use at hospital admission. PMID:26911817

  1. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    NASA Astrophysics Data System (ADS)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from Magnetic Resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [−0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
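
    The SUR quantification and the linear standardization step mentioned above amount to something like the sketch below, where the degraded reconstruction and the small set of values are purely illustrative; the actual regression line in the paper is fitted over the entire simulated patient dataset.

    import numpy as np

    def sur(roi_counts, background_counts):
        # specific uptake ratio: (ROI - background) / background
        return (roi_counts - background_counts) / background_counts

    roi_true, bkg = np.array([3.0, 4.4, 6.0, 8.2]), 2.0
    true_sur = sur(roi_true, bkg)
    recon_sur = sur(0.7 * roi_true + 0.6, 0.7 * bkg + 0.6)   # toy reconstruction degradations
    slope, intercept = np.polyfit(recon_sur, true_sur, 1)    # standardization regression line
    print(np.round(slope * recon_sur + intercept - true_sur, 3))   # residual error after standardization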

  2. Recursive least-squares learning algorithms for neural networks

    NASA Astrophysics Data System (ADS)

    Lewis, Paul S.; Hwang, Jenq N.

    1990-11-01

    This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters; this is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two dimensional Gaussian bump. In this example RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…). 1. BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed step steepest descent algorithm using derivatives computed by error backpropagation. The GDR algorithm is sometimes referred to as the backpropagation algorithm; however, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first derivative, or gradient, information about the training error to be minimized. To speed up the training process, second order algorithms may be employed that take advantage of second derivative, or Hessian matrix, information. Second order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
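
    A single parameter update of the kind this record describes, keeping an N x N inverse-Hessian estimate P, can be sketched as follows; in the MLP case the vector g would be the backpropagated gradient of the network output for the current training pattern, and the forgetting factor and initial values here are illustrative.

    import numpy as np

    def rls_update(theta, P, g, error, lam=0.99):
        # theta: parameters, P: N x N inverse-Hessian estimate, g: output gradient,
        # error: target minus network output, lam: forgetting factor
        Pg = P @ g
        k = Pg / (lam + g @ Pg)              # gain vector
        theta = theta + k * error            # parameter correction
        P = (P - np.outer(k, Pg)) / lam      # O(N^2) update of the inverse Hessian
        return theta, P

    theta, P = np.zeros(3), np.eye(3) * 100.0
    theta, P = rls_update(theta, P, g=np.array([1.0, 0.5, -0.2]), error=0.8)
    print(theta)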

  3. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  4. A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong

    2001-01-01

    This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
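
    One common way the ensemble weights can be tied to the individual mean square errors is inverse-MSE weighting, sketched below under an assumption of independent forecast errors; the paper's spectral/EOF machinery is not reproduced here and the numbers are illustrative.

    import numpy as np

    def inverse_mse_weights(mse_per_member):
        inv = 1.0 / np.asarray(mse_per_member, float)
        return inv / inv.sum()                     # weights sum to one

    members = np.array([[1.2, 0.8, 1.0],           # each row: one member's forecast
                        [1.0, 0.9, 1.1],
                        [1.5, 0.7, 0.9]])
    w = inverse_mse_weights([0.4, 0.2, 0.6])       # estimated MSE of each member
    print("weights:", w, "ensemble forecast:", w @ members)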

  5. Human-robot cooperative movement training: Learning a novel sensory motor transformation during walking with robotic assistance-as-needed

    PubMed Central

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-01-01

    Background A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Methods Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. Results We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. Conclusion The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury. PMID:17391527
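
    The flavour of an error-based controller with a forgetting factor and an error-weighting deadband, as described above, can be sketched in a few lines; the gains, deadband width and error sequence are invented for illustration and are not the values identified in the paper.

    import numpy as np

    def assist_update(force, error, forget=0.9, gain=50.0, deadband=0.01):
        # ignore error inside the natural-variability deadband (error weighting),
        # let assistance decay through the forgetting factor otherwise
        weighted = np.sign(error) * max(abs(error) - deadband, 0.0)
        return forget * force + gain * weighted

    force, errors = 0.0, [0.04, 0.03, 0.015, 0.008, 0.005]
    for e in errors:
        force = assist_update(force, e)
        print(round(force, 3))   # assistance fades as errors fall inside the deadband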

  6. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
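
    The XOR-leading-zero idea referred to above exploits the fact that two nearby floating-point values share their leading bits, so XOR-ing their bit patterns produces a long run of leading zeros that is cheap to encode; a minimal illustration (not the paper's algorithm) is:

    import struct

    def xor_leading_zeros(a, b):
        bits_a = struct.unpack("<Q", struct.pack("<d", a))[0]
        bits_b = struct.unpack("<Q", struct.pack("<d", b))[0]
        x = bits_a ^ bits_b
        return 64 if x == 0 else 64 - x.bit_length()   # leading zeros of the 64-bit XOR

    print(xor_leading_zeros(1.000123, 1.000125))   # many shared leading bits
    print(xor_leading_zeros(1.0, -250.7))          # few shared leading bits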

  7. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodation errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386

  8. Retinal image quality during accommodation.

    PubMed

    López-Gil, Norberto; Martin, Jesson; Liu, Tao; Bradley, Arthur; Díaz-Muñoz, David; Thibos, Larry N

    2013-07-01

    We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodation errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.

  9. Influence of Forecast Accuracy of Photovoltaic Power Output on Facility Planning and Operation of Microgrid under 30 min Power Balancing Control

    NASA Astrophysics Data System (ADS)

    Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo

    A microgrid (MG) is one of the measures for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). To construct an MG economically, optimizing the capacity of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from demonstrative studies of an MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as the RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required capacity of the NaS battery must be increased by 10-40% relative to the ideal situation without PVS output forecast error. The influence of forecast error on the received grid electricity is not very significant on an annual basis, because positive and negative forecast errors vary from day to day. The annual total cost of facility and operation increases by 2-7% due to the forecast error applied in this study. The impacts of forecast error on facility optimization and on operation optimization are almost the same, a few percent each, implying that forecast accuracy should be improved both in the number of times with large forecast error and in the average error.

  10. Aeolus End-To-End Simulator and Wind Retrieval Algorithms up to Level 1B

    NASA Astrophysics Data System (ADS)

    Reitebuch, Oliver; Marksteiner, Uwe; Rompel, Marc; Meringer, Markus; Schmidt, Karsten; Huber, Dorit; Nikolaus, Ines; Dabas, Alain; Marshall, Jonathan; de Bruin, Frank; Kanitz, Thomas; Straume, Anne-Grete

    2018-04-01

    The first wind lidar in space, ALADIN, will be deployed on ESA's Aeolus mission. In order to assess the performance of ALADIN and to optimize the wind retrieval and calibration algorithms, an end-to-end simulator was developed. This allows realistic simulations of the data downlinked by Aeolus. Together with the operational processors, this setup is used to assess random and systematic error sources and to perform sensitivity studies on the influence of atmospheric and instrument parameters.

  11. Double absorbing boundaries for finite-difference time-domain electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaGrone, John, E-mail: jlagrone@smu.edu; Hagstrom, Thomas, E-mail: thagstrom@smu.edu

    We describe the implementation of optimal local radiation boundary condition sequences for second order finite difference approximations to Maxwell's equations and the scalar wave equation using the double absorbing boundary formulation. Numerical experiments are presented which demonstrate that the design accuracy of the boundary conditions is achieved and, for comparable effort, exceeds that of a convolution perfectly matched layer with reasonably chosen parameters. An advantage of the proposed approach is that parameters can be chosen using an accurate a priori error bound.

  12. Statistical Mechanics and Dynamics of the Outer Solar System.I. The Jupiter/Saturn Zone

    NASA Technical Reports Server (NTRS)

    Grazier, K. R.; Newman, W. I.; Kaula, W. M.; Hyman, J. M.

    1996-01-01

    We report on numerical simulations designed to understand how the solar system evolved through a winnowing of planetesimals accreted from the early solar nebula. This sorting process is driven by the energy and angular momentum and continues to the present day. We reconsider the existence and importance of stable niches in the Jupiter/Saturn Zone using greatly improved numerical techniques based on high-order optimized multi-step integration schemes coupled to roundoff-error-minimizing methods.

  13. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
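
    The two commonly applied rescaling techniques contrasted above can be sketched on synthetic data as follows; the soil-moisture-like numbers are invented, and the optimal triple-collocation-based rescaling is not shown.

    import numpy as np

    rng = np.random.default_rng(3)
    model = rng.normal(0.25, 0.05, 500)                       # model soil moisture
    obs = 0.6 * model + 0.05 + rng.normal(0, 0.02, 500)       # biased, noisy retrieval

    # "variance matching": force the observations to share the model mean and std
    var_matched = (obs - obs.mean()) * model.std() / obs.std() + model.mean()
    # "least-squares regression": regress the model on the observations
    slope, intercept = np.polyfit(obs, model, 1)
    regressed = slope * obs + intercept
    print(var_matched.std(), regressed.std(), model.std())    # regression shrinks the variance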

  14. Development of a Nonlinear Soft-Sensor Using a GMDH Network for a Refinery Crude Distillation Tower

    NASA Astrophysics Data System (ADS)

    Fujii, Kenzo; Yamamoto, Toru

    In atmospheric distillation processes, stabilization of the process is required in order to optimize the crude-oil composition according to product market conditions. However, the process control systems sometimes fall into unstable states when unexpected disturbances are introduced, and these unusual phenomena have had an undesirable effect on certain products. Furthermore, a useful chemical engineering model has not yet been established for these phenomena, which remains a serious problem in the atmospheric distillation process. This paper describes a new modeling scheme to predict unusual phenomena in the atmospheric distillation process using a GMDH (Group Method of Data Handling) network, a type of network model. With the GMDH network, the model structure can be determined systematically. However, the least squares method has been commonly utilized to determine the weight coefficients (model parameters), and the expected estimation accuracy is not always obtained because only the sum of squared errors between the measured values and estimates is evaluated. Therefore, instead of evaluating the sum of squared errors, the sum of absolute values of the errors is introduced, and the Levenberg-Marquardt method is employed to determine the model parameters. The effectiveness of the proposed method is evaluated by foaming prediction in the crude oil switching operation in the atmospheric distillation process.

  15. Streamlining the medication process improves safety in the intensive care unit.

    PubMed

    Benoit, E; Eckert, P; Theytaz, C; Joris-Frasseren, M; Faouzi, M; Beney, J

    2012-09-01

    Multiple interventions were made to optimize the medication process in our intensive care unit (ICU): (1) transcriptions from the medical order form to the administration plan were eliminated by merging both into a single document; (2) the new form was built in a logical sequence and was highly structured to promote completeness and standardization of information; (3) frequently used drug names, approved units, and fixed routes were pre-printed; (4) physicians and nurses were trained with regard to the correct use of the new form. This study was aimed at evaluating the impact of these interventions on clinically significant types of medication errors. Eight types of medication errors were measured by a prospective chart review before and after the interventions in the ICU of a public tertiary care hospital. We used an interrupted time-series design to control for secular trends. Over 85 days, 9298 lines of drug prescription and/or administration to 294 patients, corresponding to 754 patient-days, were collected and analysed for the three series before and the three series following the intervention. The global error rate decreased from 4.95 to 2.14% (-56.8%, P < 0.001). The safety of the medication process in our ICU was improved by simple and inexpensive interventions. In addition to the optimization of the prescription writing process, the documentation of intravenous preparation, and the scheduling of administration, the elimination of transcription in combination with the training of users contributed to reducing errors and carried an interesting potential to increase safety. © 2012 The Authors. Acta Anaesthesiologica Scandinavica © 2012 The Acta Anaesthesiologica Scandinavica Foundation.

  16. Impact of pharmacist interventions in older patients: a prospective study in a tertiary hospital in Germany

    PubMed Central

    Cortejoso, L; Dietz, RA; Hofmann, G; Gosch, M; Sattler, A

    2016-01-01

    Background Inappropriate pharmacotherapy among older adults remains a critical issue in our health care systems. Besides polypharmacy and multiple comorbidities, the age-related pharmacokinetic and pharmacodynamic changes may increase the risk of adverse drug reactions and medication errors. Objective The main target of this study was to describe the characteristics of pharmaceutical interventions in two geriatric wards (orthogeriatric ward and geriatric day unit) of a general teaching hospital and to evaluate the clinical significance of the detected medication errors. Materials and methods The study was conducted between August 2014 and October 2015 and was based on a triple approach that included validation of medical orders, medication reconciliation at patients' admission, and a predischarge planning appointment with the patient. The validation of medical orders was based on analyzing the suitability of the drugs prescribed, the drug dose depending on the patient's characteristics, the presence of contraindications and interactions between drugs, and the proposal of alternative drugs included in the hospital formulary. Results A total of 2,307 interventions associated with a medication error in 15,282 medical orders for 1,859 older patients were recorded. The greater part of the interventions carried out on the orthogeriatric ward at admission and at discharge were due to omission of a drug in the medical order (20.0%) and clinically significant interactions requiring monitoring (30.4%), respectively. The main factor triggering pharmacist's recommendations on the geriatric day unit was clinically significant interactions (21.1%). With regard to the clinical severity of the detected errors, 68.1% were considered significant, 24.8% were of minor significance, and 7.2% were clinically serious. Conclusion Our findings show the importance of clinical pharmacist involvement in the optimization of pharmacotherapy in older adults, ensuring that they receive effective, safe, and efficient drug therapy. PMID:27713625

  17. Adaptation to sensory-motor reflex perturbations is blind to the source of errors.

    PubMed

    Hudson, Todd E; Landy, Michael S

    2012-01-06

    In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.

  18. Description and Sensitivity Analysis of the SOLSE/LORE-2 and SAGE III Limb Scattering Ozone Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.

    2002-01-01

    The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.

  19. Joint optimization of a partially coherent Gaussian beam for free-space optical communication over turbulent channels with pointing errors.

    PubMed

    Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali

    2013-02-01

    Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.

  20. An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks

    NASA Astrophysics Data System (ADS)

    El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros

    2007-12-01

    The integration of Global Positioning System (GPS) with Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system. Consequently, a nonlinear stochastic model is not optimal in derivative-based filters due to the expected linearization error. With a derivativeless-based filter such as the unscented Kalman filter or the divided difference filter, the filtering process of a complicated highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model, which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivativeless-based filtering method using GM, AR, and the proposed WN-based processes are used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30- and 60-second GPS outages are introduced.

  1. Feed-forward alignment correction for advanced overlay process control using a standalone alignment station "Litho Booster"

    NASA Astrophysics Data System (ADS)

    Yahiro, Takehisa; Sawamura, Junpei; Dosho, Tomonori; Shiba, Yuji; Ando, Satoshi; Ishikawa, Jun; Morita, Masahiro; Shibazaki, Yuichi

    2018-03-01

    One of the main components of an On-Product Overlay (OPO) error budget is the process-induced wafer error. This necessitates wafer-to-wafer correction in order to optimize overlay accuracy. This paper introduces the Litho Booster (LB), a standalone alignment station, as a solution for improving OPO. LB can execute high-speed alignment measurements without throughput (THP) loss. LB can be installed in any lithography process control loop as a metrology tool and is then able to provide feed-forward (FF) corrections to the scanners. In this paper, the detailed LB design is described and basic LB performance and OPO improvement are demonstrated. Litho Booster's extendibility and applicability as a solution for next-generation manufacturing accuracy and productivity challenges are also outlined.

  2. Cross-coupled control for all-terrain rovers.

    PubMed

    Reina, Giulio

    2013-01-08

    Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors.

  3. FEC decoder design optimization for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Roy, Ashim; Lewi, Leng

    1990-01-01

    A new telecommunications service for location determination via satellite is being proposed for the continental USA and Europe, which provides users with the capability to find the location of, and communicate from, a moving vehicle to a central hub and vice versa. This communications system is expected to operate in an extremely noisy channel in the presence of fading. In order to achieve high levels of data integrity, it is essential to employ forward error correcting (FEC) encoding and decoding techniques in such mobile satellite systems. A constraint length k = 7 FEC decoder has been implemented in a single chip for such systems. The single chip implementation of the maximum likelihood decoder helps to minimize the cost, size, and power consumption, and improves the bit error rate (BER) performance of the mobile earth terminal (MET).

  4. Active Optics: stress polishing of toric mirrors for the VLT SPHERE adaptive optics system.

    PubMed

    Hugot, Emmanuel; Ferrari, Marc; El Hadi, Kacem; Vola, Pascal; Gimenez, Jean Luc; Lemaitre, Gérard R; Rabou, Patrick; Dohlen, Kjetil; Puget, Pascal; Beuzit, Jean Luc; Hubin, Norbert

    2009-05-20

    The manufacturing of toric mirrors for the Very Large Telescope-Spectro-Polarimetric High-Contrast Exoplanet Research instrument (SPHERE) is based on Active Optics and stress polishing. This figuring technique allows minimizing mid and high spatial frequency errors on an aspherical surface by using spherical polishing with full size tools. In order to reach the tight precision required, the manufacturing error budget is described to optimize each parameter. Analytical calculations based on elasticity theory and finite element analysis lead to the mechanical design of the Zerodur blank to be warped during the stress polishing phase. Results on the larger (366 mm diameter) toric mirror are evaluated by interferometry. We obtain, as expected, a toric surface within specification at low, middle, and high spatial frequencies ranges.

  5. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  6. Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim

    2016-09-01

    We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism, which substantially negates the errors due to patient variability in biomechanics. A modified Goldmann or correcting applanation tonometry surface (CATS) prism is presented which was optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP referenced cadaver eyes were used to optimize and validate the design. Mathematical modeling of the optimized CATS prism indicates an approximate 50% reduction in each of the corneal biomechanical and tear film errors. Manometric IOP referenced pressure in cadaveric eyes demonstrates substantial equivalence to GAT in nominal eyes with the CATS prism as predicted by modeling theory. A CATS modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing Goldmann measurement technique or interpretation. Clinical validation is needed but the analysis indicates a reduction in CCT error alone to less than ±2 mm Hg using the CATS prism in 100% of a standard population compared to only 54% less than ±2 mm Hg error with the present Goldmann prism. This article presents an easily adopted novel approach and critical design parameters to improve the accuracy of a Goldmann applanating tonometer.

  7. Study on the calibration and optimization of double theodolites baseline

    NASA Astrophysics Data System (ADS)

    Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao

    2018-01-01

    Because the baseline of a double-theodolite measurement system serves as the scale reference of the system and affects its accuracy, this paper puts forward a method for calibrating and optimizing the double-theodolite baseline. The double theodolites measure a reference ruler of known length, and the baseline is then obtained by inverting the baseline formula. Based on the law of error propagation, the analysis shows that the baseline error function is an important index of the accuracy of the system, and that the position, posture and other properties of the reference ruler have an impact on the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error, that the effect of the posture is not uniform, and that the baseline error is smallest when the reference ruler is placed at x=500mm and y=1000mm in the measurement space. The experimental results are consistent with the theoretical analyses over the measurement space. The study of reference-ruler placement presented in this paper provides a useful reference for improving the accuracy of double-theodolite measurement systems.

  8. The Whole Warps the Sum of Its Parts.

    PubMed

    Corbett, Jennifer E

    2017-01-01

    The efficiency of averaging properties of sets without encoding redundant details is analogous to gestalt proposals that perception is parsimoniously organized as a function of recurrent order in the world. This similarity suggests that grouping and averaging are part of a broader set of strategies allowing the visual system to circumvent capacity limitations. To examine how gestalt grouping affects the manner in which information is averaged and remembered, I compared the error in observers' adjustments of remembered sizes of individual circles in two different mean-size sets defined by similarity, proximity, connectedness, or a common region. Overall, errors were more similar within the same gestalt-defined groups than between different gestalt-defined groups, such that the remembered sizes of individual circles were biased toward the mean size of their respective gestalt-defined groups. These results imply that gestalt grouping facilitates perceptual averaging to minimize the error with which individual items are encoded, thereby optimizing the efficiency of visual short-term memory.

  9. Landmark-based elastic registration using approximating thin-plate splines.

    PubMed

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on minimizing a functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
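
    SciPy's radial basis interpolator provides a convenient stand-in for approximating (rather than interpolating) thin-plate splines: its smoothing term plays a role similar to an isotropic, identical localization error at every landmark (the anisotropic case treated in the paper is beyond this API). A minimal 2-D warp sketch with invented landmark pairs:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    dst = src + np.array([[0.02, -0.01], [0.00, 0.03], [-0.02, 0.01],
                          [0.01, 0.02], [0.15, 0.00]])        # noisy corresponding landmarks

    warp = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=1e-2)
    print(warp(np.array([[0.5, 0.5], [0.25, 0.75]])))         # mapped coordinates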

  10. Sculling Compensation Algorithm for SINS Based on Two-Time Scale Perturbation Model of Inertial Measurements

    PubMed Central

    Wang, Lingling; Fu, Li

    2018-01-01

    In order to decrease the velocity sculling error under vibration environments, a new sculling error compensation algorithm for strapdown inertial navigation system (SINS) using angular rate and specific force measurements as inputs is proposed in this paper. First, the sculling error formula in incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived and a gravitational search optimization method is used to determine the parameters in the two-time scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, which is different from the conventional algorithms simulated in a pure sculling circumstance. A series of test results demonstrate that the new sculling compensation algorithm can achieve balanced real/pseudo sculling correction performance during velocity update with the advantage of less computation load compared with conventional algorithms. PMID:29346323

  11. Optimal estimation of suspended-sediment concentrations in streams

    USGS Publications Warehouse

    Holtschlag, D.J.

    2001-01-01

    Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
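
    The distinction between the on-line estimator (Kalman filter) and the off-line estimator (smoother) can be illustrated with a scalar random-walk state, as in the sketch below; the real estimator also carries streamflow and seasonal regressors and generalized-least-squares parameters that are omitted here.

    import numpy as np

    def filter_and_smooth(obs, q=0.05, r=0.5):
        n = len(obs)
        xf, pf = np.zeros(n), np.zeros(n)           # filtered mean and variance
        x, p = obs[0], 1.0
        for k, ymeas in enumerate(obs):
            p = p + q                               # predict (random-walk state)
            gain = p / (p + r)                      # measurement update
            x = x + gain * (ymeas - x)
            p = (1.0 - gain) * p
            xf[k], pf[k] = x, p
        xs = xf.copy()                              # RTS smoother: backward pass
        for k in range(n - 2, -1, -1):
            c = pf[k] / (pf[k] + q)
            xs[k] = xf[k] + c * (xs[k + 1] - xf[k])
        return xf, xs

    rng = np.random.default_rng(1)
    truth = np.cumsum(rng.normal(0, 0.2, 60)) + 3.0            # synthetic log-concentration
    online, offline = filter_and_smooth(truth + rng.normal(0, 0.7, 60))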

  12. Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation

    NASA Astrophysics Data System (ADS)

    Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.

    2017-06-01

    Existing approaches to resource allocation for today's stochastic networks struggle to meet fast-convergence and tolerable-delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
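
    A hedged sketch of the multiplier-learning idea only: in each time slot a primal allocation is computed for the current Lagrange multiplier, and the multiplier is then updated by a stochastic (sub)gradient step on the long-term constraint, acting like a virtual queue. The toy utility, budget, cap and step size are illustrative assumptions, not the learn-and-adapt procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, budget, x_max = 0.01, 1.0, 5.0   # step size, average budget, allocation cap (all assumed)
lam = 0.0                            # Lagrange multiplier, plays the role of a virtual queue

for t in range(5000):
    price = rng.uniform(0.5, 1.5)                      # random per-slot network state
    # per-slot primal step: argmax over [0, x_max] of log(1+x) - lam*price*x
    x = np.clip(1.0 / (lam * price + 1e-9) - 1.0, 0.0, x_max)
    # dual (sub)gradient "learn/adapt" step on the average-budget constraint
    lam = max(0.0, lam + mu * (price * x - budget))

print("learned multiplier:", lam)
```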

  13. An hp symplectic pseudospectral method for nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method exhibits, on the one hand, exponential convergence rates when the number of collocation points is increased with a fixed number of sub-intervals and, on the other hand, linear convergence rates when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve a given precision within a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  14. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a procedure faster than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional [inline equation not available in the abstract] which is linked to the second-order derivative of the pdf. As we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterization of the neutrality of Tunisian Berber populations.
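
    The sketch below is not the authors' analytical plug-in, but it shows where the second-derivative functional enters: under a Gaussian-reference assumption for that functional, the MISE-optimal bandwidth reduces to a closed form (Silverman's rule), which is then used in a plain Gaussian kernel density estimate.

```python
import numpy as np

def amise_bandwidth(x):
    """Gaussian-kernel AMISE-optimal bandwidth with a Gaussian-reference
    estimate of R(f'') = integral f''(t)^2 dt (the plug-in functional's role)."""
    n = len(x)
    sigma = np.std(x, ddof=1)
    r_k = 1.0 / (2.0 * np.sqrt(np.pi))                  # roughness of the Gaussian kernel
    r_f2 = 3.0 / (8.0 * np.sqrt(np.pi) * sigma ** 5)    # Gaussian-reference R(f'')
    return (r_k / (r_f2 * n)) ** 0.2                    # kernel variance mu2 = 1

def kde(x_eval, data, h):
    """Gaussian kernel density estimate evaluated at the points x_eval."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
sample = rng.normal(size=400)
h = amise_bandwidth(sample)          # equals Silverman's rule under these assumptions
grid = np.linspace(-4, 4, 200)
density = kde(grid, sample, h)
```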

  15. Storage capacity: how big should it be

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malina, M.A.

    1980-01-28

    A mathematical model was developed for determining the economically optimal storage capacity of a given material or product at a manufacturing plant. The optimum was defined as a trade-off between the inventory-holding costs and the cost of customer-service failures caused by insufficient stocks for a peak-demand period. The order-arrival, production, storage, and shipment process was simulated by Monte Carlo techniques to calculate the probability of order delays for various lengths of time as a function of storage capacity. Example calculations for the storage of a bulk liquid chemical in tanks showed that the conclusions arrived at via this model are comparatively insensitive to errors made in estimating the capital cost of storage or the risk of losing an order because of a late delivery.
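
    A minimal Monte Carlo sketch of the capacity trade-off described above; the demand distribution, production rate, and holding/service-failure costs are illustrative assumptions, not the values of the cited study.

```python
import numpy as np

def simulate_delay_prob(capacity, days=365, n_runs=500, seed=3):
    """Estimate the fraction of demand delayed because stock runs out.
    Demand, production rate, and starting inventory are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    delayed, total = 0, 0
    for _ in range(n_runs):
        stock = capacity / 2.0
        for _ in range(days):
            stock = min(capacity, stock + 10.0)   # daily production, tank-limited
            demand = rng.poisson(9.5)             # daily orders (tons)
            shipped = min(stock, demand)
            delayed += demand - shipped           # unmet demand is delayed
            total += demand
            stock -= shipped
    return delayed / total

# trade inventory-holding cost against the expected cost of late orders
for cap in (50, 100, 200, 400):
    p_delay = simulate_delay_prob(cap)
    cost = 0.8 * cap + 5000.0 * p_delay           # assumed holding vs. service-failure costs
    print(f"capacity={cap:4d}  P(delay)={p_delay:.3f}  annualized cost={cost:,.0f}")
```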

  16. Design and study of water supply system for supercritical unit boiler in thermal power station

    NASA Astrophysics Data System (ADS)

    Du, Zenghui

    2018-04-01

    In order to design and optimize the boiler feedwater system of a supercritical unit, establishing a highly accurate model of the controlled object and its dynamic characteristics is a prerequisite for developing a sound thermal control system. Mechanism-based modeling, however, often leads to large systematic errors. Drawing on the information contained in the historical operation data of a typical boiler thermal system, a modern intelligent identification method is therefore used to establish a high-precision quantitative model. This method avoids the difficulties caused by disturbance-experiment modeling on the actual system in the field, and provides a strong reference for the design and optimization of the thermal automation control system in the thermal power plant.

  17. Optimization of control gain by operator adjustment

    NASA Technical Reports Server (NTRS)

    Kruse, W.; Rothbauer, G.

    1973-01-01

    An optimal gain was established by measuring errors at five discrete control gain settings in an experimental set-up consisting of a two-dimensional, first-order pursuit tracking task performed by subjects (S's). No significant effect of experience on the optimum gain setting was found in the first experiment. During the second experiment, in which the control gain was continuously adjustable, highly experienced S's tended to reach the previously determined optimum gain quite accurately and quickly. Less experienced S's tended to select a marginally optimal gain either below or above the experimentally determined optimum, depending on the initial control gain setting, although the mean settings of both groups were equal. This quick and simple method is recommended for selecting control gains for different control systems and forcing functions.

  18. Optimizing Placement of Weather Stations: Exploring Objective Functions of Meaningful Combinations of Multiple Weather Variables

    NASA Astrophysics Data System (ADS)

    Snyder, A.; Dietterich, T.; Selker, J. S.

    2017-12-01

    Many regions of the world lack ground-based weather data due to inadequate or unreliable weather station networks. For example, most countries in Sub-Saharan Africa have unreliable, sparse networks of weather stations. The absence of these data can have consequences on weather forecasting, prediction of severe weather events, agricultural planning, and climate change monitoring. The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to place weather stations within each country. We should consider how we can create accurate spatio-temporal maps of weather data and how to balance the desired accuracy of each weather variable of interest (precipitation, temperature, relative humidity, etc.). We can express this problem as a joint optimization of multiple weather variables, given a fixed number of weather stations. We use reanalysis data as the best representation of the "true" weather patterns that occur in the region of interest. For each possible combination of sites, we interpolate the reanalysis data between selected locations and calculate the mean average error between the reanalysis ("true") data and the interpolated data. In order to formulate our multi-variate optimization problem, we explore different methods of weighting each weather variable in our objective function. These methods include systematic variation of weights to determine which weather variables have the strongest influence on the network design, as well as combinations targeted for specific purposes. For example, we can use computed evapotranspiration as a metric that combines many weather variables in a way that is meaningful for agricultural and hydrological applications. We compare the errors of the weather station networks produced by each optimization problem formulation. We also compare these errors to those of manually designed weather station networks in West Africa, planned by the respective host-country's meteorological agency.
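
    A minimal sketch of the site-selection loop under simplifying assumptions: synthetic stand-ins for the reanalysis fields, nearest-neighbour interpolation from the selected stations, a weighted sum of per-variable mean absolute errors as the objective, and greedy (rather than exhaustive) placement.

```python
import numpy as np

def weighted_mae(selected, fields, coords, weights):
    """Interpolate each variable from the selected sites (nearest neighbour) and
    return the weighted mean absolute error against the 'true' reanalysis fields."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, selected, :], axis=2)
    nearest = np.array(selected)[np.argmin(d, axis=1)]
    return sum(w * np.mean(np.abs(f - f[nearest])) for w, f in zip(weights, fields))

rng = np.random.default_rng(4)
n_sites = 300
coords = rng.uniform(0, 10, size=(n_sites, 2))                  # candidate locations
rain = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=n_sites)    # stand-ins for reanalysis
temp = 0.5 * coords[:, 1] + 0.1 * rng.normal(size=n_sites)
fields, weights = [rain, temp], [1.0, 0.5]                      # assumed variable weights

selected = []
for _ in range(10):                                             # place 10 stations greedily
    best = min((s for s in range(n_sites) if s not in selected),
               key=lambda s: weighted_mae(selected + [s], fields, coords, weights))
    selected.append(best)
print("chosen station indices:", selected)
```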

  19. Computerized Orders with Standardized Concentrations Decrease Dispensing Errors of Continuous Infusion Medications for Pediatrics

    PubMed Central

    Sowan, Azizeh K.; Vaidya, Vinay U.; Soeken, Karen L.; Hilmas, Elora

    2010-01-01

    OBJECTIVES The use of continuous infusion medications with individualized concentrations may increase the risk for errors in pediatric patients. The objective of this study was to evaluate the effect of computerized prescriber order entry (CPOE) for continuous infusions with standardized concentrations on the frequency of pharmacy processing errors. In addition, time to process handwritten versus computerized infusion orders was evaluated and user satisfaction with CPOE as compared to handwritten orders was measured. METHODS Using a crossover design, 10 pharmacists in the pediatric satellite within a university teaching hospital were given test scenarios of handwritten and CPOE order sheets and asked to process infusion orders using the pharmacy system in order to generate infusion labels. Participants were given three groups of orders: five correct handwritten orders, four handwritten orders written with deliberate errors, and five correct CPOE orders. Label errors were analyzed and time to complete the task was recorded. RESULTS Using CPOE orders, participants required less processing time per infusion order (2 min, 5 sec ± 58 sec) compared with time per infusion order in the first handwritten order sheet group (3 min, 7 sec ± 1 min, 20 sec) and the second handwritten order sheet group (3 min, 26 sec ± 1 min, 8 sec), (p<0.01). CPOE eliminated all error types except wrong concentration. With CPOE, 4% of infusions processed contained errors, compared with 26% of the first group of handwritten orders and 45% of the second group of handwritten orders (p<0.03). Pharmacists were more satisfied with CPOE orders when compared with the handwritten method (p=0.0001). CONCLUSIONS CPOE orders saved pharmacists' time and greatly improved the safety of processing continuous infusions, although not all errors were eliminated. Pharmacists were overwhelmingly satisfied with the CPOE orders. PMID:22477811

  20. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    NASA Astrophysics Data System (ADS)

    Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles

    2008-12-01

    We propose a globally convergent baud-spaced blind equalization method in this paper. The method is based on the application of both generalized pattern search optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severely frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The performance of the proposed algorithm is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. For non-constant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.

  1. Updated Magmatic Flux Rate Estimates for the Hawaii Plume

    NASA Astrophysics Data System (ADS)

    Wessel, P.

    2013-12-01

    Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.

  2. Optimally combined regional geoid models for the realization of height systems in developing countries - ORG4heights

    NASA Astrophysics Data System (ADS)

    Lieb, Verena; Schmidt, Michael; Willberg, Martin; Pail, Roland

    2017-04-01

    Precise height systems require high-resolution, high-quality gravity data. However, such data sets are sparse, especially in developing or newly industrializing countries. Thus, we initiated the DFG project "ORG4heights" to formulate a general scientific concept of how to (1) optimally combine all available data sets and (2) estimate realistic errors. The resulting regional gravity field models then deliver the fundamental basis for (3) establishing physical national height systems. The innovative key aspect of the project is the development of a method which links (low- to mid-resolution) gravity satellite mission data and (high- to low-quality) terrestrial data. An optimal combination of the data that exploits their full information content, including uncertainty quantification and analysis of systematic omission errors, is pursued. Regional gravity field modeling via Multi-Resolution Representation (MRR) and Least Squares Collocation (LSC) is studied in detail, and the two are compared on the basis of their theoretical fundamentals. From the findings, MRR shall be further developed towards implementing a pyramid algorithm. Within the project, we investigate comprehensive case studies in Saudi Arabia and South America, i.e., regions with varying topography, by means of simulated data with heterogeneous distribution, resolution, quality and altitude. GPS and tide gauge records serve as complementary input or validation data. The resulting products include error propagation as well as internal and external validation. A generalized concept is then derived in order to establish physical height systems in developing countries. The recommendations may serve as guidelines for science and administration. We present the ideas and strategies of the project, which combines methodological development and practical applications with high socio-economic impact.

  3. Constellations of Next Generation Gravity Missions: Simulations regarding optimal orbits and mitigation of aliasing errors

    NASA Astrophysics Data System (ADS)

    Hauk, M.; Pail, R.; Gruber, T.; Purkhauser, A.

    2017-12-01

    The CHAMP and GRACE missions have demonstrated the tremendous potential for observing mass changes in the Earth system from space. In order to fulfil future user needs a monitoring of mass distribution and mass transport with higher spatial and temporal resolution is required. This can be achieved by a Bender-type Next Generation Gravity Mission (NGGM) consisting of a constellation of satellite pairs flying in (near-)polar and inclined orbits, respectively. For these satellite pairs the observation concept of the GRACE Follow-on mission with a laser-based low-low satellite-to-satellite tracking (ll-SST) system and more precise accelerometers and state-of-the-art star trackers is adopted. By choosing optimal orbit constellations for these satellite pairs high frequency mass variations will be observable and temporal aliasing errors from under-sampling will not be the limiting factor anymore. As part of the European Space Agency (ESA) study "ADDCON" (ADDitional CONstellation and Scientific Analysis Studies of the Next Generation Gravity Mission) a variety of mission design parameters for such constellations are investigated by full numerical simulations. These simulations aim at investigating the impact of several orbit design choices and at the mitigation of aliasing errors in the gravity field retrieval by co-parametrization for various constellations of Bender-type NGGMs. Choices for orbit design parameters such as altitude profiles during mission lifetime, length of retrieval period, value of sub-cycles and choice of prograde versus retrograde orbits are investigated as well. Results of these simulations are presented and optimal constellations for NGGM's are identified. Finally, a short outlook towards new geophysical applications like a near real time service for hydrology is given.

  4. A Modeling Framework for Optimal Computational Resource Allocation Estimation: Considering the Trade-offs between Physical Resolutions, Uncertainty and Computational Costs

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.; Rajagopal, R.

    2014-12-01

    Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g., grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally intensive examples. This framework helps hydrogeologists achieve the optimum physical and statistical resolutions to minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
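
    A hedged sketch of the allocation idea under an assumed overall error model err(h, N) = a·h^p + b/√N with per-realization cost proportional to h^-d: for a fixed budget, each grid spacing h fixes the affordable number of Monte Carlo realizations N, and the total error is minimized by a simple search. All constants are illustrative, not the calibrated error model of the study.

```python
import numpy as np

# assumed error-model constants: discretization error a*h**p, statistical error b/sqrt(N)
a, p, b, d = 2.0, 2.0, 5.0, 3          # d = spatial dimension (cost of one run ~ h**-d)
budget = 1e9                           # total CPU "work units" available (assumed)

best = None
for h in np.logspace(-3, -0.5, 60):            # candidate grid spacings
    cost_per_run = h ** (-d)
    n_runs = int(budget / cost_per_run)        # realizations affordable at this resolution
    if n_runs < 2:
        continue
    total_err = a * h ** p + b / np.sqrt(n_runs)
    if best is None or total_err < best[0]:
        best = (total_err, h, n_runs)

err, h_opt, n_opt = best
print(f"optimal spacing h={h_opt:.4f}, realizations N={n_opt}, predicted error={err:.3e}")
```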

  5. Continuous-time adaptive critics.

    PubMed

    Hanselmann, Thomas; Noakes, Lyle; Zaknich, Anthony

    2007-05-01

    A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step-size does an adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply necessary adjustments of critic parameters induced by actor updates to keep the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until some substantial error build up in the Bellman optimality or temporal difference equation, when a traditional critic training needs to be performed and then another interval of concurrent actor-critic training may resume.

  6. Excitation energies from particle-particle random phase approximation with accurate optimized effective potentials

    NASA Astrophysics Data System (ADS)

    Jin, Ye; Yang, Yang; Zhang, Du; Peng, Degao; Yang, Weitao

    2017-10-01

    The optimized effective potential (OEP) that gives accurate Kohn-Sham (KS) orbitals and orbital energies can be obtained from a given reference electron density. These OEP-KS orbitals and orbital energies are used here for calculating electronic excited states with the particle-particle random phase approximation (pp-RPA). Our calculations allow the examination of pp-RPA excitation energies with the exact KS density functional theory (DFT). Various input densities are investigated. Specifically, the excitation energies using the OEP with the electron densities from the coupled-cluster singles and doubles method display the lowest mean absolute error from the reference data for the low-lying excited states. This study probes into the theoretical limit of the pp-RPA excitation energies with the exact KS-DFT orbitals and orbital energies. We believe that higher-order correlation contributions beyond the pp-RPA bare Coulomb kernel are needed in order to achieve even higher accuracy in excitation energy calculations.

  7. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  8. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    NASA Astrophysics Data System (ADS)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.

  9. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
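
    A short sketch of two-level (nested) external cross-validation with a label-permutation check, using scikit-learn on synthetic data rather than the authors' R/PAMR code; the classifier, tuning grid, and dataset are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# toy "many genes, few samples" dataset
X, y = make_classification(n_samples=80, n_features=500, n_informative=10, random_state=0)

# inner loop tunes the regularization strength; outer loop gives an unbiased error estimate
inner = GridSearchCV(LogisticRegression(penalty="l2", max_iter=2000),
                     {"C": [0.01, 0.1, 1.0, 10.0]},
                     cv=StratifiedKFold(5))
outer_scores = cross_val_score(inner, X, y, cv=StratifiedKFold(5))
print("two-level CV error: %.3f" % (1.0 - outer_scores.mean()))

# permutation check: with shuffled labels the error should sit near the 0.5 baseline,
# exposing any optimistic bias in the estimation procedure
rng = np.random.default_rng(0)
perm_scores = cross_val_score(inner, X, rng.permutation(y), cv=StratifiedKFold(5))
print("permuted-label error: %.3f" % (1.0 - perm_scores.mean()))
```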

  10. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface; the partial derivatives of the objective function therefore have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are derived for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that, using the analytical derivatives, the iterations converge in all mixed cells with an efficiency improvement of 3 to 4 times.

  11. Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca

    NASA Astrophysics Data System (ADS)

    Matteo, N. A.; Morton, Y. T.

    2010-12-01

    The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.

  12. Infrared Retrievals of Ice Cloud Properties and Uncertainties with an Optimal Estimation Retrieval Method

    NASA Astrophysics Data System (ADS)

    Wang, C.; Platnick, S. E.; Meyer, K.; Zhang, Z.

    2014-12-01

    We developed an optimal estimation (OE)-based method using infrared (IR) observations to retrieve ice cloud optical thickness (COT), cloud effective radius (CER), and cloud top height (CTH) simultaneously. The OE-based retrieval is coupled with a fast IR radiative transfer model (RTM) that simulates observations of different sensors, and corresponding Jacobians in cloudy atmospheres. Ice cloud optical properties are calculated using the MODIS Collection 6 (C6) ice crystal habit (severely roughened hexagonal column aggregates). The OE-based method can be applied to various IR space-borne and airborne sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the enhanced MODIS Airborne Simulator (eMAS), by optimally selecting IR bands with high information content. Four major error sources (i.e., the measurement error, fast RTM error, model input error, and pre-assumed ice crystal habit error) are taken into account in our OE retrieval method. We show that measurement error and fast RTM error have little impact on cloud retrievals, whereas errors from the model input and pre-assumed ice crystal habit significantly increase retrieval uncertainties when the cloud is optically thin. Comparisons between the OE-retrieved ice cloud properties and other operational cloud products (e.g., the MODIS C6 and CALIOP cloud products) are shown.

  13. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    PubMed

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan

    2017-07-01

    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Some Insights of Spectral Optimization in Ocean Color Inversion

    NASA Technical Reports Server (NTRS)

    Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert

    2011-01-01

    In the past decades various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of the approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulation and field measurements, we show the shape of the error surface in order to assess the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, the impacts of such differences on the error surface as well as on the retrievals are also presented.
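
    A toy sketch of the spectral-optimization idea: a two-parameter forward reflectance model, a least-squares error target between observed and modeled remote sensing reflectance, and a bounded numerical minimization. The band set, spectral shapes, and coefficients are illustrative assumptions, not an operational semi-analytical model.

```python
import numpy as np
from scipy.optimize import minimize

wav = np.array([412., 443., 490., 510., 555.])        # band centres (nm)
bbw = 0.003 * (wav / 443.0) ** -4.3                    # rough water backscattering (assumed)
aw = np.array([0.005, 0.007, 0.015, 0.032, 0.06])      # rough water absorption (assumed)

def model_rrs(params):
    """Toy forward model: Rrs ~ g * bb / (a + bb) with two free variables."""
    aph440, bbp440 = params
    a = aw + aph440 * np.exp(-0.014 * (wav - 440.0))   # phytoplankton absorption shape (assumed)
    bb = bbw + bbp440 * (443.0 / wav)                  # particle backscattering shape (assumed)
    return 0.089 * bb / (a + bb)

def error(params, rrs_obs):
    """The 'error target' between input and modeled reflectance spectra."""
    return np.sum((model_rrs(params) - rrs_obs) ** 2)

rrs_obs = model_rrs([0.05, 0.004]) * (1 + 0.01 * np.random.default_rng(5).normal(size=5))
fit = minimize(error, x0=[0.02, 0.002], args=(rrs_obs,),
               bounds=[(1e-4, 1.0), (1e-5, 0.1)], method="L-BFGS-B")
print("retrieved aph(440), bbp(440):", fit.x)
```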

  15. Clinical Outcomes of an Optimized Prolate Ablation Procedure for Correcting Residual Refractive Errors Following Laser Surgery.

    PubMed

    Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im

    2017-02-01

    The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.

  16. High-order shock-fitted detonation propagation in high explosives

    NASA Astrophysics Data System (ADS)

    Romick, Christopher M.; Aslam, Tariq D.

    2017-03-01

    A highly accurate numerical shock and material interface fitting scheme composed of fifth-order spatial and third- or fifth-order temporal discretizations is applied to the two-dimensional reactive Euler equations in both slab and axisymmetric geometries. High rates of convergence are not typically possible with shock-capturing methods as the Taylor series analysis breaks down in the vicinity of discontinuities. Furthermore, for typical high explosive (HE) simulations, the effects of material interfaces at the charge boundary can also cause significant computational errors. Fitting a computational boundary to both the shock front and material interface (i.e. streamline) alleviates the computational errors associated with captured shocks and thus opens up the possibility of high rates of convergence for multi-dimensional shock and detonation flows. Several verification tests, including a Sedov blast wave, a Zel'dovich-von Neumann-Döring (ZND) detonation wave, and Taylor-Maccoll supersonic flow over a cone, are utilized to demonstrate high rates of convergence to nontrivial shock and reaction flows. Comparisons to previously published shock-capturing multi-dimensional detonations in a polytropic fluid with a constant adiabatic exponent (PF-CAE) are made, demonstrating significantly lower computational error for the present shock and material interface fitting method. For an error on the order of 10 m /s, which is similar to that observed in experiments, shock-fitting offers a computational savings on the order of 1000. In addition, the behavior of the detonation phase speed is examined for several slab widths to evaluate the detonation performance of PBX 9501 while utilizing the Wescott-Stewart-Davis (WSD) model, which is commonly used in HE modeling. It is found that the thickness effect curve resulting from this equation of state and reaction model using published values is dramatically more steep than observed in recent experiments. Utilizing the present fitting strategy, in conjunction with a nonlinear optimizer, a new set of reaction rate parameters improves the correlation of the model to experimental results. Finally, this new model is tested against two dimensional slabs as a validation test.

  17. Using traveling salesman problem algorithms for evolutionary tree construction.

    PubMed

    Korostensky, C; Gonnet, G H

    2000-07-01

    The construction of evolutionary trees is one of the major problems in computational biology, mainly due to its complexity. We present a new tree construction method that constructs a tree with minimum score for a given set of sequences, where the score is the amount of evolution measured in PAM distances. To do this, the problem of tree construction is reduced to the Traveling Salesman Problem (TSP). The inputs for the TSP algorithm are the pairwise distances of the sequences, and the output is a circular tour through the optimal, unknown tree plus the minimum score of the tree. The circular order and the score can be used to construct the topology of the optimal tree. Our method can be used for any scoring function that correlates with the amount of change along the branches of an evolutionary tree, for instance parsimony scores, but it cannot be used for a least squares fit of distances. A TSP solution reduces the space of all possible trees to 2n. Using this order, we can guarantee that we reconstruct a correct evolutionary tree if the absolute value of the error of each distance measurement is smaller than a threshold [inline equation not available] expressed in terms of the length of the shortest edge in the tree. For data sets with large errors, a dynamic programming approach is used to reconstruct the tree. Finally, simulations and experiments with real data are shown.

  18. Propagation of resist heating mask error to wafer level

    NASA Astrophysics Data System (ADS)

    Babin, S. V.; Karklin, Linard

    2006-10-01

    As technology is approaching 45 nm and below the IC industry is experiencing a severe product yield hit due to rapidly shrinking process windows and unavoidable manufacturing process variations. Current EDA tools are unable by their nature to deliver optimized and process-centered designs that call for 'post design' localized layout optimization DFM tools. To evaluate the impact of different manufacturing process variations on final product it is important to trace and evaluate all errors through design to manufacturing flow. Photo mask is one of the critical parts of this flow, and special attention should be paid to photo mask manufacturing process and especially to mask tight CD control. Electron beam lithography (EBL) is a major technique which is used for fabrication of high-end photo masks. During the writing process, resist heating is one of the sources for mask CD variations. Electron energy is released in the mask body mainly as heat, leading to significant temperature fluctuations in local areas. The temperature fluctuations cause changes in resist sensitivity, which in turn leads to CD variations. These CD variations depend on mask writing speed, order of exposure, pattern density and its distribution. Recent measurements revealed up to 45 nm CD variation on the mask when using ZEP resist. The resist heating problem with CAR resists is significantly smaller compared to other types of resists. This is partially due to higher resist sensitivity and the lower exposure dose required. However, there is no data yet showing CD errors on the wafer induced by CAR resist heating on the mask. This effect can be amplified by high MEEF values and should be carefully evaluated at 45nm and below technology nodes where tight CD control is required. In this paper, we simulated CD variation on the mask due to resist heating; then a mask pattern with the heating error was transferred onto the wafer. So, a CD error on the wafer was evaluated subject to only one term of the mask error budget - the resist heating CD error. In simulation of exposure using a stepper, variable MEEF was considered.

  19. Measurement configuration optimization for dynamic metrology using Stokes polarimetry

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Zhang, Chuanwei; Zhong, Zhicheng; Gu, Honggang; Chen, Xiuguo; Jiang, Hao; Liu, Shiyuan

    2018-05-01

    Dynamic loading experiments such as shock compression tests are usually characterized by short duration, unrepeatability and high cost, so measurements with high temporal resolution and high accuracy are required. Owing to its temporal resolution down to the ten-nanosecond scale, a Stokes polarimeter with six parallel channels has been developed in this paper to capture such instantaneous changes in optical properties. Since the measurement accuracy depends heavily on the configuration of the probing-beam incident angle and the polarizer azimuth angle, it is important to select an optimal combination from the numerous options. In this paper, a systematic error-propagation-based measurement configuration optimization method for the Stokes polarimeter is proposed. The maximal Frobenius norm of the combinatorial matrix of the configuration error propagating matrix and the intrinsic error propagating matrix is introduced to assess the measurement accuracy. The optimal configuration for thickness measurement of a SiO2 thin film deposited on a Si substrate is obtained by minimizing this merit function. Simulation and experimental results show good agreement between the optimal measurement configuration obtained experimentally using the polarimeter and the theoretical prediction. In particular, the experimental results show that the relative error in the thickness measurement can be reduced from 6% to 1% by using the optimal polarizer azimuth angle when the incident angle is 45°. Furthermore, the optimal configuration for the dynamic metrology of a nickel foil under quasi-dynamic loading is investigated using the proposed optimization method.

  20. The arbitrary order mixed mimetic finite difference method for the diffusion equation

    DOE PAGES

    Gyrya, Vitaliy; Lipnikov, Konstantin; Manzini, Gianmarco

    2016-05-01

    Here, we propose an arbitrary-order accurate mimetic finite difference (MFD) method for the approximation of diffusion problems in mixed form on unstructured polygonal and polyhedral meshes. As usual in the mimetic numerical technology, the method satisfies local consistency and stability conditions, which determine the accuracy and the well-posedness of the resulting approximation. The method also requires the definition of a high-order discrete divergence operator that is the discrete analog of the divergence operator and acts on the degrees of freedom. The new family of mimetic methods is proved theoretically to be convergent, and optimal error estimates for the flux and scalar variable are derived from the convergence analysis. A numerical experiment confirms the high-order accuracy of the method in solving diffusion problems with a variable diffusion tensor. It is worth mentioning that the approximation of the scalar variable presents a superconvergence effect.

  1. Production inventory policies for defective items with inspection errors, sales return, imperfect rework process and backorders

    NASA Astrophysics Data System (ADS)

    Jaggi, Chandra K.; Khanna, Aditi; Kishore, Aakanksha

    2016-03-01

    In order to meet the challenge of maintaining good quality when the screening process is imperfect, rework becomes a means of compensating for the imperfections present in the production system. The proposed model attempts to capture this real-life situation with a more practical approach by incorporating the concept of imperfect rework, since irreparable defects may remain even in reworked items, an unavoidable problem for the firm. Hence, a production inventory model is formulated here to study the combined effect of imperfect quality items, a faulty inspection process and an imperfect rework process on the optimal production quantity and optimal backorder level. An analytical method is employed to maximize the expected total profit per unit time. Moreover, the results of several previous research articles, namely Chiu et al (2006), Chiu et al (2005), Salameh and Hayek (2001), and the classical EPQ model with shortages, are deduced as special cases. To demonstrate the applicability of the model, and to observe the effects of key parameters on the optimal replenishment policy, a numerical example along with a comprehensive sensitivity analysis is presented. The model is pertinent to most manufacturing industries, such as textiles, electronics, furniture, footwear and plastics. In summary, a production lot size model has been explored for defective items with inspection errors and an imperfect rework process.

  2. Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2011-01-01

    Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.

  3. Scheduling policies of intelligent sensors and sensor/actuators in flexible structures

    NASA Astrophysics Data System (ADS)

    Demetriou, Michael A.; Potami, Raffaele

    2006-03-01

    In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy, and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped on all sides. First, we consider the case of sensor placement, and the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open loop system and finds the spatial distribution of disturbances that produces the maximal effects on the entire open loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider collocated actuator/sensor pairs, and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensor/actuators in sleep mode.

  4. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer, two-stage architecture, are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid body behavior, which is demonstrated after a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively. PMID:26761014

  5. Quantum-state comparison and discrimination

    NASA Astrophysics Data System (ADS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2018-05-01

    We investigate the performance of discrimination strategy in the comparison task of known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is that one should infer the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to happen. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates the minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except for the minimum-error case.

  6. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should possibly take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. However, trying to summarize, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
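
    A minimal sketch of the kind of comparison performed in the study, using a steady solid-body-rotation wind field instead of ECMWF analyses and classical RK4 in place of the recommended third-order Runge-Kutta scheme: trajectories integrated with the midpoint scheme (dt = 100 s) and RK4 (dt = 170 s) are compared against a fine-step reference to yield a transport deviation.

```python
import numpy as np

def wind(x, t):
    """Steady solid-body rotation as a stand-in wind field (illustrative only)."""
    return np.array([-x[1], x[0]]) * 1e-4          # angular rate in s^-1

def midpoint_step(x, t, dt):
    k1 = wind(x, t)
    return x + dt * wind(x + 0.5 * dt * k1, t + 0.5 * dt)

def rk4_step(x, t, dt):
    k1 = wind(x, t)
    k2 = wind(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = wind(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = wind(x + dt * k3, t + dt)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, dt, hours=240):
    """Advect a single parcel for the given number of hours."""
    x, t = np.array([1000e3, 0.0]), 0.0
    while t < hours * 3600:
        x, t = step(x, t, dt), t + dt
    return x

ref = integrate(rk4_step, dt=10.0)                          # fine-step reference trajectory
for name, step, dt in [("midpoint", midpoint_step, 100.0),
                       ("RK4", rk4_step, 170.0)]:
    dev = np.linalg.norm(integrate(step, dt) - ref)
    print(f"{name:8s} dt={dt:5.0f} s  transport deviation = {dev / 1000:.3f} km")
```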

  7. Wavefront reconstruction algorithm based on Legendre polynomials for radial shearing interferometry over a square area and error analysis.

    PubMed

    Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai

    2015-08-10

    Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstruct the distorted wavefront of a laser beam under test over a square area from the phase difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. A formula for the error propagation coefficients is deduced for the case in which the phase difference data of the overlapping area contain random noise. A matrix T is proposed to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing, and the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, sampling points, number of polynomial terms, and noise propagation coefficients, and between shear ratio, sampling points, and the norm of the T matrix, are analyzed. These results provide a theoretical reference and guidance for optimizing the design of radial shearing interferometry systems.

  8. Simulating correction of adjustable optics for an x-ray telescope

    NASA Astrophysics Data System (ADS)

    Aldcroft, Thomas L.; Schwartz, Daniel A.; Reid, Paul B.; Cotroneo, Vincenzo; Davis, William N.

    2012-10-01

    The next generation of large X-ray telescopes with sub-arcsecond resolution will require very thin, highly nested grazing incidence optics. To correct the low order figure errors resulting from initial manufacture, the mounting process, and the effects of going from 1 g during ground alignment to zero g on-orbit, we plan to adjust the shapes via piezoelectric "cells" deposited on the backs of the reflecting surfaces. This presentation investigates how well the corrections might be made. We take a benchmark conical glass element, 410×205 mm, with a 20×20 array of piezoelectric cells 19×9 mm in size. We use finite element analysis to calculate the influence function of each cell. We then simulate the correction via pseudo matrix inversion to calculate the stress to be applied by each cell, considering distortion due to gravity as calculated by finite element analysis, and by putative low order manufacturing distortions described by Legendre polynomials. We describe our algorithm and its performance, and the implications for the sensitivity of the resulting slope errors to the optimization strategy.
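
    A minimal sketch of the kind of least-squares correction described above, assuming a precomputed influence matrix A (figure change per unit cell stress) and a measured figure-error vector e; the dimensions and random values below are placeholders, not the finite-element results from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_points, n_cells = 500, 400               # figure samples, 20 x 20 piezo cells
        A = rng.normal(size=(n_points, n_cells))   # influence functions (placeholder values)
        e = rng.normal(size=n_points)               # measured low-order figure error

        # Least-squares correction: solve A @ s ~= -e for the cell stresses s via the
        # Moore-Penrose pseudoinverse (equivalently np.linalg.lstsq).
        s = -np.linalg.pinv(A) @ e
        residual = e + A @ s
        print("rms before:", e.std(), "rms after:", residual.std())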

  9. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.

  10. Application of Reduced Order Transonic Aerodynamic Influence Coefficient Matrix for Design Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley W.

    2009-01-01

    Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Today's aircraft designs at transonic speeds are a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error. The time required for unsteady CFD computations in the time domain considerably slows down the whole design process, and these analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced-order modeling method and unsteady aerodynamic approximation. The method requires that the unsteady transonic aerodynamics be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed only for the important columns of an AIC matrix, which correspond to the primary flutter modes. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem; transonic flutter can then be found by classic methods such as rational function approximation, p-k, p, root locus, etc. Such a methodology could be incorporated into the MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2 actually designed, built, and tested at NASA Dryden Flight Research Center. The results from the full order model and the approximate reduced order model are analyzed and compared.
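
    The abstract mentions Guyan reduction as one of the order-reduction techniques; the sketch below shows generic static (Guyan) condensation of stiffness and mass matrices onto a set of retained master degrees of freedom. The toy matrices and function name are illustrative assumptions, not part of the MDAO tool.

        import numpy as np

        def guyan_reduce(K, M, master):
            # Static (Guyan) condensation onto the retained "master" DOFs:
            # K_red = T.T @ K @ T with T built from -Kss^{-1} Ksm for the slave DOFs.
            n = K.shape[0]
            slave = [i for i in range(n) if i not in master]
            cols = np.arange(len(master))
            Kss = K[np.ix_(slave, slave)]
            Ksm = K[np.ix_(slave, master)]
            T = np.zeros((n, len(master)))
            T[master, cols] = 1.0
            T[np.ix_(slave, cols)] = -np.linalg.solve(Kss, Ksm)
            return T.T @ K @ T, T.T @ M @ T

        # Toy 3-DOF chain: retain the two end DOFs, condense out the middle one.
        K = np.diag([4.0, 4.0, 4.0]) - np.eye(3, k=1) - np.eye(3, k=-1)
        M = np.eye(3)
        K_red, M_red = guyan_reduce(K, M, master=[0, 2])
        print(K_red, M_red, sep="\n")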

  11. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed to apply this technique to gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
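
    A much-simplified caricature of the subset-solution weighting idea, under the assumption (which holds for nested least-squares estimates) that the covariance of the parameter difference between a subset solution and the complete solution is the difference of their covariances; the linear model, data, and iteration details below are hypothetical, not the GEM-T1/T2 procedure.

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0, 1, 60)
        A = np.vstack([np.ones_like(x), x]).T                       # design matrix, 2 parameters
        y = A @ np.array([1.0, 2.0]) + rng.normal(0, 0.3, x.size)   # noisy "tracking" data
        idx = x < 0.5                                                # one data subset ("system")
        w = np.ones(x.size)                                          # starting data weights

        def wls(A, y, w):
            # Weighted least squares: parameters and their formal covariance.
            N = A.T @ (w[:, None] * A)
            return np.linalg.solve(N, A.T @ (w * y)), np.linalg.inv(N)

        for _ in range(20):
            p_all, C_all = wls(A, y, w)
            p_sub, C_sub = wls(A[idx], y[idx], w[idx])
            d = p_sub - p_all
            # Ratio of observed parameter differences to their predicted sigmas;
            # downweight (or upweight) the subset until the two agree.
            k = np.sqrt(np.mean(d**2 / np.diag(C_sub - C_all).clip(min=1e-12)))
            w[idx] /= k**2
        print("calibrated subset weight:", w[idx][0])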

  12. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
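
    The bias-variance bookkeeping described above can be illustrated generically; the sketch below uses a Tikhonov-regularized linear inverse problem as a stand-in for the NIR diffusion reconstruction (the forward model, noise level, and regularization values are assumptions) and estimates image bias, variance, and MSE from 100 repeated noisy reconstructions.

        import numpy as np

        rng = np.random.default_rng(2)
        n, m = 40, 30
        A = rng.normal(size=(n, m))          # placeholder forward model
        x_true = rng.normal(size=m)          # "test image"
        y_clean = A @ x_true

        def reconstruct(y, lam):
            # Tikhonov/ridge-regularized inverse as a stand-in for the NIR solver.
            return np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)

        for lam in (1e-3, 1e-1, 1e1):
            xs = np.array([reconstruct(y_clean + rng.normal(0, 0.05, n), lam)
                           for _ in range(100)])        # 100 repeated noisy reconstructions
            bias2 = np.mean((xs.mean(axis=0) - x_true) ** 2)
            var = np.mean(xs.var(axis=0))
            print(f"lambda={lam:g}  bias^2={bias2:.4f}  var={var:.4f}  MSE={bias2 + var:.4f}")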

  13. Adaptive and predictive control of a simulated robot arm.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Garrido, Jesús A; Luque, Niceto R; Ros, Eduardo

    2013-06-01

    In this work, a basic cerebellar neural layer and a machine learning engine are embedded in a recurrent loop which avoids dealing with the motor error or distal error problem. The presented approach learns motor control based on available sensor error estimates (position, velocity, and acceleration) without explicitly knowing the motor errors. The paper focuses on how to decompose the input into different components in order to facilitate the learning process using an automatic incremental learning model (locally weighted projection regression (LWPR) algorithm). LWPR incrementally learns the forward model of the robot arm and provides the cerebellar module with optimal pre-processed signals. We present a recurrent adaptive control architecture in which an adaptive feedback (AF) controller guarantees a precise, compliant, and stable control during the manipulation of objects. Therefore, this approach efficiently integrates a bio-inspired module (cerebellar circuitry) with a machine learning component (LWPR). The cerebellar-LWPR synergy makes the robot adaptable to changing conditions. We evaluate how this scheme scales to robot arms with a high number of degrees of freedom (DOFs) using a simulated model of a robot arm from the new generation of lightweight robots (LWRs).

  14. Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting

    PubMed Central

    2017-01-01

    Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been detected that the positioning error might be very large in a few cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and Inertial Sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities. PMID:29186921

  15. Integrating models that depend on variable data

    NASA Astrophysics Data System (ADS)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
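
    A hedged sketch of the two weighting choices on synthetic data: a log-space ordinary least-squares fit (constant-variance lognormal assumption) versus a linear-space fit weighted by a constant coefficient of variation (sigma_i proportional to y_i). The power-law model, cv value, and noise level are illustrative assumptions, not the nitrogen-transport example.

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.uniform(0.1, 10.0, 200)
        y_true = 0.5 * x ** 2.0                            # spans roughly 4 orders of magnitude
        y = y_true * np.exp(rng.normal(0, 0.2, x.size))    # multiplicative (constant-CV) noise

        # Log transformation: the power law is linear in log space, fit by plain OLS.
        A_log = np.vstack([np.ones_like(x), np.log(x)]).T
        b_log = np.linalg.lstsq(A_log, np.log(y), rcond=None)[0]

        # Error-based weighting: assume sigma_i = cv * y_i, giving weights
        # w_i = 1 / (cv * y_i)**2 in a linear-in-parameters fit of y = a * x**2 + c.
        cv = 0.2
        w = 1.0 / (cv * y) ** 2
        A_lin = np.vstack([x ** 2, np.ones_like(x)]).T
        sw = np.sqrt(w)
        b_wls = np.linalg.lstsq(A_lin * sw[:, None], y * sw, rcond=None)[0]

        print("log-space fit (ln a, p):", b_log)
        print("CV-weighted linear fit (a, c):", b_wls)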

  16. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent system and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve the optimal model performance. The idea in the reduced-order method is from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to display the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. Besides, the reduced-order models are also used to capture crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities like the tracer spectrum and fat-tails in the tracer probability density functions in the most important large scales can be captured efficiently with accuracy using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.

  17. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using a data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF eventually. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution with a tall, narrow shape.
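
    As a rough, one-dimensional caricature of the 2D PDF-shaping criterion described above (not the authors' WNN code), the snippet below estimates an error PDF by kernel density estimation and scores its squared deviation from a narrow zero-mean Gaussian target; the grid, target width, and sample errors are assumptions.

        import numpy as np
        from scipy.stats import gaussian_kde, norm

        def pdf_shaping_index(errors, target_sigma=0.1, grid=np.linspace(-1, 1, 201)):
            # Estimate the modeling-error PDF by KDE and penalize its squared
            # deviation from a narrow zero-mean Gaussian target PDF.
            kde = gaussian_kde(errors)
            return np.trapz((kde(grid) - norm.pdf(grid, 0.0, target_sigma)) ** 2, grid)

        rng = np.random.default_rng(4)
        broad = rng.normal(0.0, 0.4, 500)      # errors from a poorly tuned model
        narrow = rng.normal(0.0, 0.1, 500)     # errors after PDF-shaping optimization
        print(pdf_shaping_index(broad), pdf_shaping_index(narrow))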

  18. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using a data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF eventually. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution with a tall, narrow shape.

  19. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE PAGES

    Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...

    2018-01-31

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using a data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF eventually. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution with a tall, narrow shape.

  20. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition

    PubMed Central

    Sánchez, Daniela; Melin, Patricia

    2017-01-01

    A grey wolf optimizer for a modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding the optimal parameters of its architecture; these parameters are the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area that have emerged to help find optimal solutions to problems or models; bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to determine which of these techniques provides better results when applied to human recognition. PMID:28894461
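
    For orientation, a generic grey wolf optimizer on a toy continuous objective is sketched below, using the standard alpha/beta/delta position update with the control parameter decreasing from 2 to 0; the MGNN-specific encoding (subgranules, hidden layers, etc.) and fitness function are not reproduced, and all names here are illustrative.

        import numpy as np

        def gwo(objective, dim, bounds, n_wolves=20, n_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            X = rng.uniform(lo, hi, (n_wolves, dim))            # wolf positions
            for t in range(n_iter):
                fit = np.apply_along_axis(objective, 1, X)
                alpha, beta, delta = X[np.argsort(fit)[:3]]     # three best wolves
                a = 2.0 - 2.0 * t / n_iter                      # decreases linearly 2 -> 0
                X_new = np.zeros_like(X)
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random((2, n_wolves, dim))
                    A, C = 2 * a * r1 - a, 2 * r2
                    # Candidate positions guided by this leader.
                    X_new = X_new + (leader - A * np.abs(C * leader - X))
                X = np.clip(X_new / 3.0, lo, hi)                # average of the three guides
            return X[np.argmin(np.apply_along_axis(objective, 1, X))]

        best = gwo(lambda v: np.sum(v**2), dim=5, bounds=(-5.0, 5.0))
        print("best solution:", best)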

  1. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition.

    PubMed

    Sánchez, Daniela; Melin, Patricia; Castillo, Oscar

    2017-01-01

    A grey wolf optimizer for a modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding the optimal parameters of its architecture; these parameters are the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area that have emerged to help find optimal solutions to problems or models; bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to determine which of these techniques provides better results when applied to human recognition.

  2. A comparison of 12 algorithms for matching on the propensity score.

    PubMed

    Austin, Peter C

    2014-03-15

    Propensity-score matching is increasingly being used to reduce the confounding that can occur in observational studies examining the effects of treatments or interventions on outcomes. We used Monte Carlo simulations to examine the following algorithms for forming matched pairs of treated and untreated subjects: optimal matching, greedy nearest neighbor matching without replacement, and greedy nearest neighbor matching without replacement within specified caliper widths. For each of the latter two algorithms, we examined four different sub-algorithms defined by the order in which treated subjects were selected for matching to an untreated subject: lowest to highest propensity score, highest to lowest propensity score, best match first, and random order. We also examined matching with replacement. We found that (i) nearest neighbor matching induced the same balance in baseline covariates as did optimal matching; (ii) when at least some of the covariates were continuous, caliper matching tended to induce balance on baseline covariates that was at least as good as the other algorithms; (iii) caliper matching tended to result in estimates of treatment effect with less bias compared with optimal and nearest neighbor matching; (iv) optimal and nearest neighbor matching resulted in estimates of treatment effect with negligibly less variability than did caliper matching; (v) caliper matching had amongst the best performance when assessed using mean squared error; (vi) the order in which treated subjects were selected for matching had at most a modest effect on estimation; and (vii) matching with replacement did not have superior performance compared with caliper matching without replacement. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  3. A comparison of 12 algorithms for matching on the propensity score

    PubMed Central

    Austin, Peter C

    2014-01-01

    Propensity-score matching is increasingly being used to reduce the confounding that can occur in observational studies examining the effects of treatments or interventions on outcomes. We used Monte Carlo simulations to examine the following algorithms for forming matched pairs of treated and untreated subjects: optimal matching, greedy nearest neighbor matching without replacement, and greedy nearest neighbor matching without replacement within specified caliper widths. For each of the latter two algorithms, we examined four different sub-algorithms defined by the order in which treated subjects were selected for matching to an untreated subject: lowest to highest propensity score, highest to lowest propensity score, best match first, and random order. We also examined matching with replacement. We found that (i) nearest neighbor matching induced the same balance in baseline covariates as did optimal matching; (ii) when at least some of the covariates were continuous, caliper matching tended to induce balance on baseline covariates that was at least as good as the other algorithms; (iii) caliper matching tended to result in estimates of treatment effect with less bias compared with optimal and nearest neighbor matching; (iv) optimal and nearest neighbor matching resulted in estimates of treatment effect with negligibly less variability than did caliper matching; (v) caliper matching had amongst the best performance when assessed using mean squared error; (vi) the order in which treated subjects were selected for matching had at most a modest effect on estimation; and (vii) matching with replacement did not have superior performance compared with caliper matching without replacement. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24123228

  4. Outpatient CPOE orders discontinued due to 'erroneous entry': prospective survey of prescribers' explanations for errors.

    PubMed

    Hickman, Thu-Trang T; Quist, Arbor Jessica Lauren; Salazar, Alejandra; Amato, Mary G; Wright, Adam; Volk, Lynn A; Bates, David W; Schiff, Gordon

    2018-04-01

    Users of computerised prescriber order entry (CPOE) systems often discontinue medications because the initial order was erroneous. Our aim was to elucidate error types by querying prescribers about their reasons for discontinuing outpatient medication orders that they had self-identified as erroneous. During a nearly 3 year retrospective data collection period, we identified 57 972 drugs discontinued with the reason 'Error (erroneous entry)'. Because chart reviews revealed limited information about these errors, we prospectively studied consecutive, discontinued erroneous orders by querying prescribers in near-real-time to learn more about the erroneous orders. From January 2014 to April 2014, we prospectively emailed prescribers about outpatient drug orders that they had discontinued due to erroneous initial order entry. Of 250 806 medication orders in these 4 months, 1133 (0.45%) were discontinued due to error. From these 1133, we emailed 542 unique prescribers to ask about their reason(s) for discontinuing these medication orders in error. We received 312 responses (58% response rate). We categorised these responses using a previously published taxonomy. The top reasons for these discontinued erroneous orders included: medication ordered for wrong patient (27.8%, n=60); wrong drug ordered (18.5%, n=40); and duplicate order placed (14.4%, n=31). Other common discontinued erroneous orders related to drug dosage and formulation (eg, extended release versus not). Oxycodone (3%) was the drug most frequently discontinued in error. Drugs are not infrequently discontinued 'in error.' Wrong patient and wrong drug errors constitute the leading types of erroneous prescriptions recognised and discontinued by prescribers. Data regarding erroneous medication entries represent an important source of intelligence about how CPOE systems are functioning and malfunctioning, providing important insights regarding areas for designing CPOE more safely in the future. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Cross-Coupled Control for All-Terrain Rovers

    PubMed Central

    Reina, Giulio

    2013-01-01

    Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors. PMID:23299625

  6. Precise X-ray and video overlay for augmented reality fluoroscopy.

    PubMed

    Chen, Xin; Wang, Lejing; Fallavollita, Pascal; Navab, Nassir

    2013-01-01

    The camera-augmented mobile C-arm (CamC) augments any mobile C-arm by a video camera and mirror construction and provides a co-registration of X-ray with video images. The accurate overlay between these images is crucial to high-quality surgical outcomes. In this work, we propose a practical solution that improves the overlay accuracy for any C-arm orientation by: (i) improving the existing CamC calibration, (ii) removing distortion effects, and (iii) accounting for the mechanical sagging of the C-arm gantry due to gravity. A planar phantom is constructed and placed at different distances to the image intensifier in order to obtain the optimal homography that co-registers X-ray and video with a minimum error. To alleviate distortion, both X-ray calibration based on equidistant grid model and Zhang's camera calibration method are implemented for distortion correction. Lastly, the virtual detector plane (VDP) method is adapted and integrated to reduce errors due to the mechanical sagging of the C-arm gantry. The overlay errors are 0.38±0.06 mm when not correcting for distortion, 0.27±0.06 mm when applying Zhang's camera calibration, and 0.27±0.05 mm when applying X-ray calibration. Lastly, when taking into account all angular and orbital rotations of the C-arm, as well as correcting for distortion, the overlay errors are 0.53±0.24 mm using VDP and 1.67±1.25 mm excluding VDP. The augmented reality fluoroscope achieves an accurate video and X-ray overlay when applying the optimal homography calculated from distortion correction using X-ray calibration together with the VDP.

  7. Measurement uncertainty associated with chromatic confocal profilometry for 3D surface texture characterization of natural human enamel.

    PubMed

    Mullan, F; Bartlett, D; Austin, R S

    2017-06-01

    The aim was to investigate the measurement performance of a chromatic confocal profilometer for quantification of the surface texture of natural human enamel in vitro. Contributions to the measurement uncertainty from all potential sources of measurement error using a chromatic confocal profilometer and surface metrology software were quantified using a series of surface metrology calibration artifacts and pre-worn enamel samples. The 3D surface texture analysis protocol was optimized across 0.04 mm² of natural, unpolished enamel undergoing dietary acid erosion (pH 3.2, titratable acidity 41.3 mmol OH/L). Flatness deviations due to the x, y stage mechanical movement were the major contribution to the measurement uncertainty, with maximum Sz flatness errors of 0.49 μm, whereas measurement noise, non-linearities in x, y, z, and enamel sample dimensional instability contributed minimal errors. The measurement errors were propagated into an uncertainty budget following a Type B uncertainty evaluation in order to calculate the combined standard uncertainty (u_c), which was ±0.28 μm. Statistically significant increases in the median (IQR) roughness (Sa) of the polished samples occurred after 15 (+0.17 (0.13) μm), 30 (+0.12 (0.09) μm) and 45 (+0.18 (0.15) μm) min of erosion (P<0.001 vs. baseline). In contrast, natural unpolished enamel samples revealed a statistically significant decrease in Sa roughness of -0.14 (0.34) μm only after 45 min of erosion (P<0.05 vs. baseline). The main contribution to measurement uncertainty using chromatic confocal profilometry was from flatness deviations; however, by optimizing measurement protocols the profilometer successfully characterized surface texture changes in enamel from erosive wear in vitro. Copyright © 2017 The Academy of Dental Materials. All rights reserved.
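
    A sketch of a Type B root-sum-square combination consistent with the figures quoted above, under the assumption that the 0.49 μm flatness bound is treated as the half-width of a rectangular distribution and that the remaining contributions are small; the component values other than the flatness bound are placeholders.

        import numpy as np

        # Type B evaluation: each error source is assigned a bound and an assumed
        # distribution, converted to a standard uncertainty, then root-sum-squared.
        sources_um = {
            "x, y stage flatness (rectangular, half-width 0.49)": 0.49 / np.sqrt(3),
            "measurement noise": 0.01,
            "x, y, z non-linearity": 0.01,
            "sample dimensional instability": 0.01,
        }
        u_c = np.sqrt(sum(u ** 2 for u in sources_um.values()))
        print(f"combined standard uncertainty u_c = {u_c:.2f} um")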

  8. Optimized 3D stitching algorithm for whole body SPECT based on transition error minimization (TEM)

    NASA Astrophysics Data System (ADS)

    Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan

    2017-02-01

    Standard Single Photon Emission Computed Tomography (SPECT) has a limited field of view (FOV) and cannot provide a 3D image of the entire body in a single acquisition. To produce a 3D whole body SPECT image, two to five overlapped SPECT FOVs from head to foot are acquired and assembled using image stitching. Most commercial software from medical imaging manufacturers applies a direct mid-slice stitching method to avoid blurring or ghosting from 3D image blending. Due to intensity changes across the middle slice of overlapped images, direct mid-slice stitching often produces visible seams in the coronal and sagittal views and maximal intensity projection (MIP). In this study, we proposed an optimized algorithm to reduce the visibility of stitching edges. The new algorithm computed, based on transition error minimization (TEM), a 3D stitching interface between two overlapped 3D SPECT images. To test the suggested algorithm, four studies of 2-FOV whole body SPECT were used and included two different reconstruction methods (filtered back projection (FBP) and ordered subset expectation maximization (OSEM)) as well as two different radiopharmaceuticals (Tc-99m MDP for bone metastases and I-131 MIBG for neuroblastoma tumors). Relative transition errors of stitched whole body SPECT using mid-slice stitching and the TEM-based algorithm were measured for objective evaluation. Preliminary experiments showed that the new algorithm reduced the visibility of the stitching interface in the coronal, sagittal, and MIP views. Average relative transition errors were reduced from 56.7% with mid-slice stitching to 11.7% with TEM-based stitching. The proposed algorithm also avoids blurring artifacts by preserving the noise properties of the original SPECT images.
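
    A simplified 2D analogue of a transition-error-minimizing cut (not the authors' 3D implementation): for each column of the overlap region, pick the transition row where the two images differ least, then compose the stitched result from the rows above and below that cut. The array shapes and intensity offset are assumptions.

        import numpy as np

        def tem_stitch(top, bottom):
            # top and bottom are overlapping 2D slabs of identical shape (rows x cols).
            # For every column, choose the transition row with the smallest absolute
            # intensity difference, then take rows above it from `top` and the rest
            # from `bottom` -- a 2D caricature of a transition-error-minimizing cut.
            cut = np.abs(top - bottom).argmin(axis=0)
            rows = np.arange(top.shape[0])[:, None]
            return np.where(rows <= cut[None, :], top, bottom)

        rng = np.random.default_rng(5)
        a = rng.normal(100, 5, (32, 64))
        b = a + 8.0 + rng.normal(0, 5, a.shape)     # overlap with an intensity offset
        stitched = tem_stitch(a, b)
        print(stitched.shape)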

  9. Particle drag history in a subcritical post-shock flow - data analysis method and uncertainty

    NASA Astrophysics Data System (ADS)

    Ding, Liuyang; Bordoloi, Ankur; Adrian, Ronald; Prestridge, Kathy; Arizona State University Team; Los Alamos National Laboratory Team

    2017-11-01

    A novel data analysis method for measuring particle drag in an 8-pulse particle tracking velocimetry-accelerometry (PTVA) experiment is described. We represented the particle drag history, CD(t) , using polynomials up to the third order. An analytical model for continuous particle position history was derived by integrating an equation relating CD(t) with particle velocity and acceleration. The coefficients of CD(t) were then calculated by fitting the position history model to eight measured particle locations in the sense of least squares. A preliminary test with experimental data showed that the new method yielded physically more reasonable particle velocity and acceleration history compared to conventionally adopted polynomial fitting. To fully assess and optimize the performance of the new method, we performed a PTVA simulation by assuming a ground truth of particle motion based on an ensemble of experimental data. The results indicated a significant reduction in the RMS error of CD. We also found that for particle locating noise between 0.1 and 3 pixels, a range encountered in our experiment, the lowest RMS error was achieved by using the quadratic CD(t) model. Furthermore, we will also discuss the optimization of the pulse timing configuration.

  10. Color correction pipeline optimization for digital cameras

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.

  11. Optimize Short Term load Forcasting Anomalous Based Feed Forward Backpropagation

    NASA Astrophysics Data System (ADS)

    Mulyadi, Y.; Abdullah, A. G.; Rohmah, K. A.

    2017-03-01

    This paper presents Short-Term Load Forecasting (STLF) using an artificial neural network, specifically a feed-forward backpropagation algorithm that is optimized in order to obtain a reduced error value. The electrical load forecasting target is a holiday, whose load pattern is not identical to the weekday pattern; in other words, the holiday load pattern is anomalous. Under these conditions, the forecasting accuracy decreases, so a method capable of reducing the error in anomalous load forecasting is needed. The learning process of the algorithm is supervised, so several parameters are set before the computation is performed. The momentum constant is set to 0.8, which serves as a reference because it shows the greatest tendency to converge. The learning rate is selected with up to 2 decimal digits. In addition, the numbers of hidden layers and input components are tested in several variations. The test results lead to the conclusion that the number of hidden layers affects the forecasting accuracy, while the test duration is determined by the number of iterations over the input data until the maximum parameter value is reached.

  12. A study on the applications of AI in finishing of additive manufacturing parts

    NASA Astrophysics Data System (ADS)

    Fathima Patham, K.

    2017-06-01

    Artificial intelligence and computer simulation are powerful technological tools for solving complex problems in the manufacturing industries. Additive Manufacturing is one of the powerful manufacturing techniques that provide design flexibility to products. Products with complex shapes are manufactured directly, without the need for machining or tooling, using Additive Manufacturing. However, the main drawback of the components produced using Additive Manufacturing processes is the quality of their surfaces. This study aims to minimize the defects caused during Additive Manufacturing with the aid of Artificial Intelligence. The developed AI system has three layers, each of which attempts to eliminate or minimize production errors. The first layer of the AI system optimizes the digitization of the 3D CAD model of the product and hence reduces staircase errors. The second layer of the AI system optimizes the 3D printing machine parameters in order to eliminate the warping effect. The third layer of the AI system helps choose a surface finishing technique suitable for the printed component based on the degree of complexity of the product and the material. The efficiency of the developed AI system was examined on functional parts such as gears.

  13. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.

    PubMed

    Cheng, Jing; Xia, Linyuan

    2016-08-31

    Localization is an essential requirement given the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build a mutation probability that avoids local convergence. Further, the approach restricts the population to a certain range so as to prevent the energy consumption caused by insignificant search. Extensive experiments were conducted to study the effects of parameters such as anchor density, node density, and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm.

  14. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network

    PubMed Central

    Cheng, Jing; Xia, Linyuan

    2016-01-01

    Localization is an essential requirement given the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build a mutation probability that avoids local convergence. Further, the approach restricts the population to a certain range so as to prevent the energy consumption caused by insignificant search. Extensive experiments were conducted to study the effects of parameters such as anchor density, node density, and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm. PMID:27589756

  15. Optimal heavy tail estimation - Part 1: Order selection

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel A.

    2017-12-01

    The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
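
    The paper's simulation-based order selector is not reproduced here, but the order-selection problem itself can be illustrated with the standard Hill estimator, whose estimate of the tail exponent alpha depends on the chosen order k (the number of largest values used); the sample below is a synthetic Pareto draw with a known alpha of 1.5.

        import numpy as np

        def hill_alpha(x, k):
            # Hill estimator of the tail exponent alpha using the k largest values.
            xs = np.sort(x)[::-1]
            return 1.0 / np.mean(np.log(xs[:k] / xs[k]))

        rng = np.random.default_rng(6)
        x = rng.pareto(1.5, 10_000) + 1.0            # heavy-tailed sample, true alpha = 1.5
        for k in (50, 200, 1000, 5000):
            print(f"order k={k:5d}  alpha_hat={hill_alpha(x, k):.2f}")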

  16. [The research of near-infrared blood glucose measurement using particle swarm optimization and artificial neural network].

    PubMed

    Dai, Juan; Ji, Zhong; Du, Yubao

    2017-08-01

    Existing near-infrared non-invasive blood glucose detection models mostly rely on multi-spectral signals at different wavelengths, which is not conducive to the popularization of non-invasive glucose meters for home use and does not account for the physiological glucose dynamics of individuals. In order to solve these problems, this study presents a non-invasive blood glucose detection model combining particle swarm optimization (PSO) and an artificial neural network (ANN), using the 1 550 nm near-infrared absorbance as the independent variable and the blood glucose concentration as the dependent variable; the model is named PSO-2ANN. The PSO-2ANN model is based on two neural network sub-modules with fixed structures and parameters, and is built by optimizing the weighting coefficients of the two networks with particle swarm optimization. Blood glucose for 10 volunteers was predicted with PSO-2ANN. The relative error was less than 20% for 9 of the volunteers, and 98.28% of the PSO-2ANN blood glucose predictions fell in regions A and B of the Clarke error grid, confirming that PSO-2ANN offers higher prediction accuracy and better robustness than ANN alone. Additionally, even though the physiological glucose dynamics of individuals may differ due to environment, temperament, mental state, and so on, PSO-2ANN can correct for this difference by adjusting a single parameter. The PSO-2ANN model thus provides a new prospect for overcoming individual differences in blood glucose prediction.

  17. Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.; Gal-Yam, Avishay

    2016-10-01

    Transient detection and flux measurement via image subtraction stand at the base of time domain astronomy. Due to the varying seeing conditions, the image subtraction process is non-trivial, and existing solutions suffer from a variety of problems. Starting from basic statistical principles, we develop the optimal statistic for transient detection, flux measurement, and any image-difference hypothesis testing. We derive a closed-form statistic that: (1) is mathematically proven to be the optimal transient detection statistic in the limit of background-dominated noise, (2) is numerically stable, (3) for accurately registered, adequately sampled images, does not leave subtraction or deconvolution artifacts, (4) allows automatic transient detection to the theoretical sensitivity limit by providing credible detection significance, (5) has uncorrelated white noise, (6) is a sufficient statistic for any further statistical test on the difference image, and, in particular, allows us to distinguish particle hits and other image artifacts from real transients, (7) is symmetric to the exchange of the new and reference images, (8) is at least an order of magnitude faster to compute than some popular methods, and (9) is straightforward to implement. Furthermore, we present extensions of this method that make it resilient to registration errors, color-refraction errors, and any noise source that can be modeled. In addition, we show that the optimal way to prepare a reference image is the proper image coaddition presented in Zackay & Ofek. We demonstrate this method on simulated data and real observations from the PTF data release 2. We provide an implementation of this algorithm in MATLAB and Python.
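
    A compact sketch of the background-dominated proper-subtraction statistic as we read it from this work: the difference image is formed in the Fourier domain from the reference R and new image N, their PSFs Pr and Pn, flux zero points Fr and Fn, and background noise levels sigma_r and sigma_n. This is an illustrative re-implementation under those assumptions, not the authors' released code, and the toy images below are pure noise.

        import numpy as np

        def proper_difference(R, N, Pr, Pn, sigma_r, sigma_n, Fr=1.0, Fn=1.0):
            # Background-dominated proper image subtraction, evaluated in Fourier space.
            R_f, N_f = np.fft.fft2(R), np.fft.fft2(N)
            Pr_f = np.fft.fft2(np.fft.ifftshift(Pr))
            Pn_f = np.fft.fft2(np.fft.ifftshift(Pn))
            denom = np.sqrt(sigma_n**2 * Fr**2 * np.abs(Pr_f)**2 +
                            sigma_r**2 * Fn**2 * np.abs(Pn_f)**2)
            D_f = (Fr * Pr_f * N_f - Fn * Pn_f * R_f) / denom
            return np.real(np.fft.ifft2(D_f))

        # Toy usage with Gaussian PSFs on a 64 x 64 grid of pure background noise.
        y, x = np.mgrid[-32:32, -32:32]
        gauss = lambda s: np.exp(-(x**2 + y**2) / (2 * s**2)) / (2 * np.pi * s**2)
        rng = np.random.default_rng(7)
        R = rng.normal(0, 1.0, (64, 64))
        N = R + rng.normal(0, 1.0, (64, 64))
        D = proper_difference(R, N, gauss(1.5), gauss(2.0), sigma_r=1.0, sigma_n=1.0)
        print(D.std())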

  18. DNA assembly with error correction on a droplet digital microfluidics platform.

    PubMed

    Khilko, Yuliya; Weyman, Philip D; Glass, John I; Adams, Mark D; McNeil, Melanie A; Griffin, Peter B

    2018-06-01

    Custom synthesized DNA is in high demand for synthetic biology applications. However, current technologies to produce these sequences using assembly from DNA oligonucleotides are costly and labor-intensive. The automation and reduced sample volumes afforded by microfluidic technologies could significantly decrease materials and labor costs associated with DNA synthesis. The purpose of this study was to develop a gene assembly protocol utilizing a digital microfluidic device. Toward this goal, we adapted bench-scale oligonucleotide assembly methods followed by enzymatic error correction to the Mondrian™ digital microfluidic platform. We optimized Gibson assembly, polymerase chain reaction (PCR), and enzymatic error correction reactions in a single protocol to assemble 12 oligonucleotides into a 339-bp double-stranded DNA sequence encoding part of the human influenza virus hemagglutinin (HA) gene. The reactions were scaled down to 0.6-1.2 μL. Initial microfluidic assembly methods were successful and had an error frequency of approximately 4 errors/kb with errors originating from the original oligonucleotide synthesis. Relative to conventional benchtop procedures, PCR optimization required additional amounts of MgCl2, Phusion polymerase, and PEG 8000 to achieve amplification of the assembly and error correction products. After one round of error correction, error frequency was reduced to an average of 1.8 errors/kb. We demonstrated that DNA assembly from oligonucleotides and error correction could be completely automated on a digital microfluidic (DMF) platform. The results demonstrate that enzymatic reactions in droplets show a strong dependence on surface interactions, and successful on-chip implementation required supplementation with surfactants, molecular crowding agents, and an excess of enzyme. Enzymatic error correction of assembled fragments improved sequence fidelity by 2-fold, which was a significant improvement but somewhat lower than expected compared to bench-top assays, suggesting an additional capacity for optimization.

  19. The spectral basis of optimal error field correction on DIII-D

    DOE PAGES

    Paz-Soldan, Carlos A.; Buttery, Richard J.; Garofalo, Andrea M.; ...

    2014-04-28

    Here, experimental optimum error field correction (EFC) currents found in a wide breadth of dedicated experiments on DIII-D are shown to be consistent with the currents required to null the poloidal harmonics of the vacuum field which drive the kink mode near the plasma edge. This allows the identification of empirical metrics which predict optimal EFC currents with accuracy comparable to that of first-principles modeling which includes the ideal plasma response. While further metric refinements are desirable, this work suggests optimal EFC currents can be effectively fed-forward based purely on knowledge of the vacuum error field and basic equilibrium properties which are routinely calculated in real-time.

  20. Orbital-optimized MP2.5 and its analytic gradients: Approaching CCSD(T) quality for noncovalent interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozkaya, Uğur, E-mail: ugur.bozkaya@atauni.edu.tr; Center for Computational Molecular Science and Technology, School of Chemistry and Biochemistry, and School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332; Sherrill, C. David

    2014-11-28

    Orbital-optimized MP2.5 [or simply “optimized MP2.5,” OMP2.5, for short] and its analytic energy gradients are presented. The cost of the presented method is as much as that of coupled-cluster singles and doubles (CCSD) [O(N^6) scaling] for energy computations. However, for analytic gradient computations the OMP2.5 method is only half as expensive as CCSD because there is no need to solve λ2-amplitude equations for OMP2.5. The performance of the OMP2.5 method is compared with that of the standard second-order Møller–Plesset perturbation theory (MP2), MP2.5, CCSD, and coupled-cluster singles and doubles with perturbative triples (CCSD(T)) methods for equilibrium geometries, hydrogen transfer reactions between radicals, and noncovalent interactions. For bond lengths of both closed and open-shell molecules, the OMP2.5 method improves upon MP2.5 and CCSD by 38%–43% and 31%–28%, respectively, with Dunning's cc-pCVQZ basis set. For complete basis set (CBS) predictions of hydrogen transfer reaction energies, the OMP2.5 method exhibits a substantially better performance than MP2.5, providing a mean absolute error of 1.1 kcal mol^-1, which is more than 10 times lower than that of MP2.5 (11.8 kcal mol^-1), and comparing to MP2 (14.6 kcal mol^-1) there is a more than 12-fold reduction in errors. For noncovalent interaction energies (at CBS limits), the OMP2.5 method maintains the very good performance of MP2.5 for closed-shell systems, and for open-shell systems it significantly outperforms MP2.5 and CCSD, and approaches CCSD(T) quality. The MP2.5 errors decrease by a factor of 5 when the optimized orbitals are used for open-shell noncovalent interactions, and comparing to CCSD there is a more than 3-fold reduction in errors. Overall, the present application results indicate that the OMP2.5 method is very promising for open-shell noncovalent interactions and other chemical systems with difficult electronic structures.

  1. Strapdown system performance optimization test evaluations (SPOT), volume 1

    NASA Technical Reports Server (NTRS)

    Blaha, R. J.; Gilmore, J. P.

    1973-01-01

    A three axis inertial system was packaged in an Apollo gimbal fixture for fine grain evaluation of strapdown system performance in dynamic environments. These evaluations have provided information to assess the effectiveness of real-time compensation techniques and to study system performance tradeoffs to factors such as quantization and iteration rate. The strapdown performance and tradeoff studies conducted include: (1) Compensation models and techniques for the inertial instrument first-order error terms were developed and compensation effectivity was demonstrated in four basic environments; single and multi-axis slew, and single and multi-axis oscillatory. (2) The theoretical coning bandwidth for the first-order quaternion algorithm expansion was verified. (3) Gyro loop quantization was identified to affect proportionally the system attitude uncertainty. (4) Land navigation evaluations identified the requirement for accurate initialization alignment in order to pursue fine grain navigation evaluations.

  2. A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.

    PubMed

    Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong

    2015-05-05

    The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, which is widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially with the environmental disturbances and the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of the FOG-based strapdown inertial navigation system (SINS) degrades. This research focuses on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoided the demerits (e.g., lack of using artificial fishes' previous experiences, lack of existing balance between exploration and exploitation, and high computational cost) of the standard AFSA during the optimization process. To solve these weak points, functional behaviors and the overall procedures of AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficients recalibration algorithm has been proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficients recalibration. After that, the NAFSA is verified with simulation and experiments and its performance is compared with that of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA on FOG error coefficients recalibration.

  3. Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Chacón, Luis; Pernice, Michael

    2008-10-01

    An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.

  4. The use of kernel density estimators in breakthrough curve reconstruction and advantages in risk analysis

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.

    2014-12-01

    Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given relatively few particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest error curve is obtained through the ANA method, especially for smaller EDs. Percent error of peak of averaged-BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10^5, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration is reliant on an accurate representation of the BTC, especially when data is scarce.
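
    A minimal sketch of the KDE idea described above: particle arrival times are turned into a smooth breakthrough curve with scipy.stats.gaussian_kde and compared against a raw histogram. The arrival-time distribution and all numbers are synthetic placeholders, and the default (Scott) bandwidth is used rather than the optimal h (ANA) of the study.

```python
# Sketch: reconstructing a breakthrough curve (BTC) from particle arrival times
# with a kernel density estimator, versus a raw histogram. Synthetic data only;
# the lognormal arrival times below are not from the cited study.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n_particles = 100                       # deliberately small, as in the low-np cases
arrivals = rng.lognormal(mean=2.0, sigma=0.5, size=n_particles)   # arrival times

t = np.linspace(0.0, 40.0, 400)

# Histogram-based BTC: counts per time bin, normalized to a density
hist, edges = np.histogram(arrivals, bins=40, range=(0.0, 40.0), density=True)
btc_hist = np.repeat(hist, 10)          # step function on the fine time grid

# KDE-based BTC: smooth density estimate from the same particles
kde = gaussian_kde(arrivals)            # default (Scott) bandwidth, not the optimal h
btc_kde = kde(t)

print("peak concentration (histogram):", btc_hist.max())
print("peak concentration (KDE):      ", btc_kde.max())
```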

  5. Conforming and nonconforming virtual element methods for elliptic problems

    DOE PAGES

    Cangiani, Andrea; Manzini, Gianmarco; Sutton, Oliver J.

    2016-08-03

    Here we present, in a unified framework, new conforming and nonconforming virtual element methods for general second-order elliptic problems in two and three dimensions. The differential operator is split into its symmetric and nonsymmetric parts and conditions for stability and accuracy on their discrete counterparts are established. These conditions are shown to lead to optimal H^1- and L^2-error estimates, confirmed by numerical experiments on a set of polygonal meshes. The accuracy of the numerical approximation provided by the two methods is shown to be comparable.

  6. A finite-difference method for the variable coefficient Poisson equation on hierarchical Cartesian meshes

    NASA Astrophysics Data System (ADS)

    Raeli, Alice; Bergmann, Michel; Iollo, Angelo

    2018-02-01

    We consider problems governed by a linear elliptic equation with varying coefficients across internal interfaces. The solution and its normal derivative can undergo significant variations through these internal boundaries. We present a compact finite-difference scheme on a tree-based adaptive grid that can be efficiently solved using a natively parallel data structure. The main idea is to optimize the truncation error of the discretization scheme as a function of the local grid configuration to achieve second-order accuracy. Numerical illustrations are presented in two and three-dimensional configurations.

  7. Application of Frame Theory in Intelligent Packet-Based Communication Networks

    NASA Astrophysics Data System (ADS)

    Escobar-Moreira, León A.

    2007-09-01

    Frames are a stable and redundant representation of signals in a Hilbert space that have been used in signal processing because of their resilience to additive noise, quantization error, and their robustness to losses in packet-based networks [1,2]. Depending on the number of erasures (losses), there are some considerations to be taken into account in order to optimize the frame design. The discussion further explains how the innate characteristics of frames can be exploited to add intelligence to packet-based communication devices (routers) and increase their performance under different channel behaviors.
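
    A small numerical illustration of the erasure robustness mentioned above, assuming a generic random frame rather than any particular frame design from the article: a signal is expanded into redundant coefficients (one per packet), two packets are erased, and the signal is recovered by least squares from the survivors.

```python
# Sketch: redundant frame expansion and least-squares reconstruction after
# packet erasures. The random frame is a generic example, not a design from
# the cited work.
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 8                        # signal dimension, number of frame coefficients (packets)
F = rng.standard_normal((m, n))    # analysis operator: m frame vectors in R^n
x = rng.standard_normal(n)         # signal to transmit

coeffs = F @ x                     # redundant coefficients, one per packet

# Simulate erasures: packets 2 and 5 are lost in the network
received = np.delete(np.arange(m), [2, 5])
F_r, c_r = F[received], coeffs[received]

# Least-squares (pseudo-inverse) reconstruction from the surviving packets
x_hat = np.linalg.pinv(F_r) @ c_r
print("reconstruction error:", np.linalg.norm(x - x_hat))
```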

  8. Conforming and nonconforming virtual element methods for elliptic problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cangiani, Andrea; Manzini, Gianmarco; Sutton, Oliver J.

    Here we present, in a unified framework, new conforming and nonconforming virtual element methods for general second-order elliptic problems in two and three dimensions. The differential operator is split into its symmetric and nonsymmetric parts and conditions for stability and accuracy on their discrete counterparts are established. These conditions are shown to lead to optimal H^1- and L^2-error estimates, confirmed by numerical experiments on a set of polygonal meshes. The accuracy of the numerical approximation provided by the two methods is shown to be comparable.

  9. Semantic Network Adaptation Based on QoS Pattern Recognition for Multimedia Streams

    NASA Astrophysics Data System (ADS)

    Exposito, Ernesto; Gineste, Mathieu; Lamolle, Myriam; Gomez, Jorge

    This article proposes an ontology based pattern recognition methodology to compute and represent common QoS properties of the Application Data Units (ADU) of multimedia streams. The use of this ontology by mechanisms located at different layers of the communication architecture will allow implementing fine per-packet self-optimization of communication services regarding the actual application requirements. A case study showing how this methodology is used by error control mechanisms in the context of wireless networks is presented in order to demonstrate the feasibility and advantages of this approach.

  10. Prevention of prescription errors by computerized, on-line, individual patient related surveillance of drug order entry.

    PubMed

    Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M

    2002-01-01

    Computerized prescription of drugs is expected to reduce the number of preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases into the system, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes performed in the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes for remaining errors.

  11. Analyses of global sea surface temperature 1856-1991

    NASA Astrophysics Data System (ADS)

    Kaplan, Alexey; Cane, Mark A.; Kushnir, Yochanan; Clement, Amy C.; Blumenthal, M. Benno; Rajagopalan, Balaji

    1998-08-01

    Global analyses of monthly sea surface temperature (SST) anomalies from 1856 to 1991 are produced using three statistically based methods: optimal smoothing (OS), the Kalman filter (KF) and optimal interpolation (OI). Each of these is accompanied by estimates of the error covariance of the analyzed fields. The spatial covariance function these methods require is estimated from the available data; the time-marching model is a first-order autoregressive model again estimated from data. The data input for the analyses are monthly anomalies from the United Kingdom Meteorological Office historical sea surface temperature data set (MOHSST5) [Parker et al., 1994] of the Global Ocean Surface Temperature Atlas (GOSTA) [Bottomley et al., 1990]. These analyses are compared with each other, with GOSTA, and with an analysis generated by projection (P) onto a set of empirical orthogonal functions (as in Smith et al. [1996]). In theory, the quality of the analyses should rank in the order OS, KF, OI, P, and GOSTA. It is found that the first four give comparable results in the data-rich periods (1951-1991), but at times when data is sparse the first three differ significantly from P and GOSTA. At these times the latter two often have extreme and fluctuating values, prima facie evidence of error. The statistical schemes are also verified against data not used in any of the analyses (proxy records derived from corals and air temperature records from coastal and island stations). We also present evidence that the analysis error estimates are indeed indicative of the quality of the products. At most times the OS and KF products are close to the OI product, but at times of especially poor coverage their use of information from other times is advantageous. The methods appear to reconstruct the major features of the global SST field from very sparse data. Comparison with other indications of the El Niño-Southern Oscillation cycle shows that the analyses provide usable information on interannual variability as far back as the 1860s.
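
    A toy, scalar version of the Kalman filter component described above, assuming a first-order autoregressive time-marching model and synthetic observations with gaps; all parameter values are invented stand-ins, not quantities estimated from MOHSST5.

```python
# Sketch: a scalar Kalman filter for a monthly SST anomaly following a first-order
# autoregressive (AR(1)) model, observed with noise and occasional gaps.
# Parameters and data are synthetic; in the real analysis they are estimated from data.
import numpy as np

rng = np.random.default_rng(2)
phi, q, r = 0.8, 0.05, 0.2            # AR(1) coefficient, process and observation variances
T = 240

# Simulate the "true" anomaly and sparse observations (about half of the months missing)
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + rng.normal(scale=np.sqrt(q))
obs = x_true + rng.normal(scale=np.sqrt(r), size=T)
have_obs = rng.random(T) < 0.5

x_est, p_est = 0.0, 1.0               # state estimate and its error variance
analysis, analysis_var = [], []
for t in range(T):
    # Forecast step with the AR(1) time-marching model
    x_est, p_est = phi * x_est, phi**2 * p_est + q
    # Update step only when an observation is available
    if have_obs[t]:
        gain = p_est / (p_est + r)
        x_est += gain * (obs[t] - x_est)
        p_est *= (1.0 - gain)
    analysis.append(x_est)
    analysis_var.append(p_est)        # error estimate accompanying the analysis

print("mean absolute analysis error:", np.mean(np.abs(np.array(analysis) - x_true)))
```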

  12. An adaptive reentry guidance method considering the influence of blackout zone

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Yao, Jianyao; Qu, Xiangju

    2018-01-01

    Reentry guidance has become a popular research topic because it is critical for a successful flight. Because existing guidance methods do not take into account the accumulated navigation error of the Inertial Navigation System (INS) in the blackout zone, in this paper an adaptive reentry guidance method is proposed to obtain the optimal reentry trajectory quickly with the target of minimum aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory in the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, i.e., the pigeon-inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method, the proposed method can reduce the terminal error by about 30% in both position and attitude; in particular, the terminal height error is almost eliminated. Besides, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and trajectory planning phases.

  13. An estimator-predictor approach to PLL loop filter design

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Hurd, W. J.

    1986-01-01

    An approach to the design of digital phase locked loops (DPLLs), using estimation theory concepts in the selection of a loop filter, is presented. The key concept is that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher order derivatives, while the predictor compensates for the transport lag inherent in the loop. This decomposition results in a straightforward loop filter design procedure, enabling use of techniques from optimal and sub-optimal estimation theory. A design example for a particular choice of estimator is presented, followed by analysis of the associated bandwidth, gain margin, and steady state errors caused by unmodeled dynamics. This approach is under consideration for the design of the Deep Space Network (DSN) Advanced Receiver Carrier DPLL.
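
    A rough numerical sketch of the estimator-predictor decomposition described above, assuming an alpha-beta (phase/frequency) estimator and a two-sample transport lag; the gains, lag, and noise level are illustrative choices, not the values of the cited DSN design.

```python
# Sketch: estimator-predictor structure for a digital PLL. An alpha-beta
# (phase/frequency) estimator tracks the incoming phase from phase-detector
# samples, and a predictor extrapolates over the known transport lag before
# the command reaches the NCO. All numbers are illustrative assumptions.
from collections import deque
import numpy as np

dt = 1e-3                  # update interval [s]
lag_steps = 2              # transport lag of the loop, in update intervals
alpha, beta = 0.2, 0.02    # estimator gains (illustrative)

rng = np.random.default_rng(3)
true_freq = 5.0            # incoming phase rate [rad/s], unknown to the loop
true_phase = 0.0

phase_est, freq_est = 0.0, 0.0
pending = deque([0.0] * lag_steps)     # NCO commands still in the pipeline
errors = []

for k in range(5000):
    true_phase += true_freq * dt
    nco_phase = pending.popleft()                       # command issued lag_steps ago takes effect
    detector = true_phase - nco_phase + rng.normal(scale=0.01)   # phase-detector output
    phase_meas = nco_phase + detector                   # implied incoming phase

    # Estimator: propagate the phase/frequency estimates, then correct with the innovation
    phase_pred = phase_est + freq_est * dt
    innov = phase_meas - phase_pred
    phase_est = phase_pred + alpha * innov
    freq_est += (beta / dt) * innov

    # Predictor: extrapolate over the transport lag before commanding the NCO
    pending.append(phase_est + freq_est * lag_steps * dt)
    errors.append(detector)

print("mean steady-state phase error [rad]:", np.mean(errors[-500:]))
```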

  14. Air-gun signature modelling considering the influence of mechanical structure factors

    NASA Astrophysics Data System (ADS)

    Li, Guofa; Liu, Zhao; Wang, Jianhua; Cao, Mingqiang

    2014-04-01

    In marine seismic prospecting, as the air-gun array is usually composed of different types of air-guns, the signature modelling of different air-guns is particularly important to the array design. Different types of air-guns have different mechanical structures, which directly or indirectly affect the signatures. In order to simulate the influence of the mechanical structure, five parameters—the throttling constant, throttling power law exponent, mass release efficiency, fluid viscosity and heat transfer coefficient—are used in signature modelling. Through minimizing the energy relative error between the simulated and the measured signatures by the simulated annealing method, the five optimal parameters can be estimated. The method is tested in a field experiment, and the consistency between the simulated and the measured signatures is improved with the optimal parameters.

  15. Impact of random pointing and tracking errors on the design of coherent and incoherent optical intersatellite communication links

    NASA Technical Reports Server (NTRS)

    Chen, Chien-Chung; Gardner, Chester S.

    1989-01-01

    Given the rms transmitter pointing error and the desired probability of bit error (PBE), it can be shown that an optimal transmitter antenna gain exists which minimizes the required transmitter power. Given the rms local oscillator tracking error, an optimum receiver antenna gain can be found which optimizes the receiver performance. The impact of pointing and tracking errors on the design of direct-detection pulse-position modulation (PPM) and heterodyne noncoherent frequency-shift keying (NCFSK) systems is then analyzed in terms of constraints on the antenna size and the power penalty incurred. It is shown that in the limit of large spatial tracking errors, the advantage in receiver sensitivity for the heterodyne system is quickly offset by the smaller antenna gain and the higher power penalty due to tracking errors. In contrast, for systems with small spatial tracking errors, the heterodyne system is superior because of the higher receiver sensitivity.

  16. Prediction of Soil pH Hyperspectral Spectrum in Guanzhong Area of Shaanxi Province Based on PLS

    NASA Astrophysics Data System (ADS)

    Liu, Jinbao; Zhang, Yang; Wang, Huanyuan; Cheng, Jie; Tong, Wei; Wei, Jing

    2017-12-01

    The soil pH of Fufeng County, Yangling County and Wugong County in Shaanxi Province was studied. The spectral reflectance was measured with an ASD FieldSpec HR portable field spectrometer, and its spectral characteristics were analyzed. The first derivative of the original spectral reflectance of the soil, the second derivative, the reciprocal logarithm, and the first- and second-order differentials of the reciprocal logarithm were used to establish soil pH spectral prediction models. The results showed that the correlation between the reflectance spectra after SNV pre-treatment and the soil pH was significantly improved. The optimal prediction model of soil pH established by the partial least squares method was based on the first-order differential of the reciprocal logarithm of spectral reflectance. The number of principal component factors was 10, with a calibration coefficient of determination Rc^2 = 0.9959, calibration root mean square error RMSEC = 0.0076, and calibration deviation SEC = 0.0077; the validation coefficient of determination was Rv^2 = 0.9893, the prediction root mean square error RMSEP = 0.0157, and the prediction deviation SEP = 0.0160. The model was stable, with high fitting and prediction ability, allowing soil pH to be estimated rapidly.
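
    A minimal sketch of the modeling pipeline described above, assuming invented reflectance spectra and pH values rather than the Guanzhong survey data: the first derivative of log(1/R) is regressed on pH with a 10-factor partial least squares model using scikit-learn.

```python
# Sketch: PLS regression of soil pH on the first derivative of log(1/R).
# The reflectance matrix R and the pH values are synthetic placeholders; the
# relationship built into them is arbitrary and only serves to exercise the pipeline.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
n_samples, n_bands = 120, 200
R = np.clip(rng.uniform(0.1, 0.6, (n_samples, n_bands))
            + rng.normal(scale=0.01, size=(n_samples, n_bands)), 0.05, 0.95)
pH = 7.5 + 0.8 * R[:, 50] - 0.5 * R[:, 150] + rng.normal(scale=0.05, size=n_samples)

# Spectral transform: first derivative of log(1/R) along the wavelength axis
X = np.diff(np.log(1.0 / R), axis=1)

train, test = slice(0, 90), slice(90, None)
pls = PLSRegression(n_components=10)       # 10 principal-component factors, as reported
pls.fit(X[train], pH[train])

pred = pls.predict(X[test]).ravel()
rmsep = np.sqrt(mean_squared_error(pH[test], pred))
print("RMSEP on held-out samples:", round(rmsep, 4))
```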

  17. Auxiliary basis sets for density-fitting second-order Møller-Plesset perturbation theory: weighted core-valence correlation consistent basis sets for the 4d elements Y-Pd.

    PubMed

    Hill, J Grant

    2013-09-30

    Auxiliary basis sets (ABS) specifically matched to the cc-pwCVnZ-PP and aug-cc-pwCVnZ-PP orbital basis sets (OBS) have been developed and optimized for the 4d elements Y-Pd at the second-order Møller-Plesset perturbation theory level. Calculation of the core-valence electron correlation energies for small to medium sized transition metal complexes demonstrates that the error due to the use of these new sets in density fitting is three to four orders of magnitude smaller than that due to the OBS incompleteness, and hence is considered negligible. Utilizing the ABSs in the resolution-of-the-identity component of explicitly correlated calculations is also investigated, where it is shown that i-type functions are important to produce well-controlled errors in both integrals and correlation energy. Benchmarking at the explicitly correlated coupled cluster with single, double, and perturbative triple excitations level indicates impressive convergence with respect to basis set size for the spectroscopic constants of 4d monofluorides; explicitly correlated double-ζ calculations produce results close to conventional quadruple-ζ, and triple-ζ is within chemical accuracy of the complete basis set limit. Copyright © 2013 Wiley Periodicals, Inc.

  18. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

    Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two methods in a single optimization by the use of a constrained nonlinear optimization model, which regards both the two-norm of the surface residual error and the dwell-time gradient as an objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the requirement of velocity and the limitations of accelerations or decelerations. Indeed, the model and algorithm can also apply to other computer-controlled subaperture methods.
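
    A one-dimensional sketch of the combined objective described above, assuming an invented Gaussian influence function and target error profile: the two-norm of the surface residual plus a penalty on the dwell-time gradient is minimized subject to nonnegative dwell times.

```python
# Sketch of a combined dwell-time objective: minimize the residual form error
# together with a dwell-time smoothness (gradient) penalty, with d >= 0.
# The 1-D influence function, error profile and weight lam are illustrative
# stand-ins for the paper's 2-D maps and constraints.
import numpy as np
from scipy.optimize import minimize

n = 80
x = np.linspace(-1.0, 1.0, n)
target_error = np.exp(-((x - 0.2) / 0.3) ** 2)             # surface error to remove [a.u.]

# Influence function: removal at point i per unit dwell time at point j
A = np.exp(-((x[:, None] - x[None, :]) / 0.08) ** 2)

D = np.eye(n, k=1) - np.eye(n)                              # forward-difference operator
lam = 0.05                                                  # weight on dwell-time smoothness

def objective(d):
    residual = A @ d - target_error
    return residual @ residual + lam * (D @ d) @ (D @ d)

res = minimize(objective, x0=np.full(n, 0.1),
               bounds=[(0.0, None)] * n, method="L-BFGS-B")
dwell = res.x
print("residual form error (2-norm):", np.linalg.norm(A @ dwell - target_error))
```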

  19. Optimization of the moving-bed biofilm sequencing batch reactor (MBSBR) to control aeration time by kinetic computational modeling: Simulated sugar-industry wastewater treatment.

    PubMed

    Faridnasr, Maryam; Ghanbari, Bastam; Sassani, Ardavan

    2016-05-01

    A novel approach was applied for optimization of a moving-bed biofilm sequencing batch reactor (MBSBR) to treat sugar-industry wastewater (BOD5=500-2500 and COD=750-3750 mg/L) at 2-4 h of cycle time (CT). Although the experimental data showed that the MBSBR reached high BOD5 and COD removal performances, it failed to achieve the standard limits at the mentioned CTs. Thus, optimization of the reactor was carried out by kinetic computational modeling, using the normalized root mean square error (NRMSE) as a statistical error indicator. The results of NRMSE revealed that the Stover-Kincannon (error=6.40%) and Grau (error=6.15%) models provide better fits to the experimental data and may be used for CT optimization in the reactor. The models predicted required CTs of 4.5, 6.5, 7 and 7.5 h for effluent standardization of 500, 1000, 1500 and 2500 mg/L influent BOD5 concentrations, respectively. A similar pattern in the experimental data also confirmed these findings. Copyright © 2016 Elsevier Ltd. All rights reserved.
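
    A small illustration of model ranking by NRMSE as used above. The observations and model predictions are placeholder numbers, and the normalization (by the observation range) is an assumption, since the abstract does not state which convention was used.

```python
# Sketch: ranking candidate kinetic models by normalized root mean square error
# (NRMSE) against measured effluent concentrations. Placeholder data only.
import numpy as np

observed = np.array([480.0, 910.0, 1350.0, 2300.0])        # effluent COD, mg/L (synthetic)
predictions = {
    "Stover-Kincannon": np.array([455.0, 935.0, 1390.0, 2240.0]),
    "Grau":             np.array([470.0, 925.0, 1310.0, 2350.0]),
    "first-order":      np.array([380.0, 1020.0, 1500.0, 2100.0]),
}

def nrmse(obs, pred):
    """RMSE normalized by the range of the observations (one common convention)."""
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return 100.0 * rmse / (obs.max() - obs.min())

for name, pred in sorted(predictions.items(), key=lambda kv: nrmse(observed, kv[1])):
    print(f"{name:>16s}: NRMSE = {nrmse(observed, pred):.2f}%")
```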

  20. Minimizing finite-volume discretization errors on polyhedral meshes

    NASA Astrophysics Data System (ADS)

    Mouly, Quentin; Evrard, Fabien; van Wachem, Berend; Denner, Fabian

    2017-11-01

    Tetrahedral meshes are widely used in CFD to simulate flows in and around complex geometries, as automatic generation tools now allow tetrahedral meshes to represent arbitrary domains in a relatively accessible manner. Polyhedral meshes, however, are an increasingly popular alternative. While tetrahedra have at most four neighbours, the higher number of neighbours per polyhedral cell leads to a more accurate evaluation of gradients, essential for the numerical resolution of PDEs. The use of polyhedral meshes, nonetheless, introduces discretization errors for finite-volume methods: skewness and non-orthogonality, which occur with all sorts of unstructured meshes, as well as errors due to non-planar faces, specific to polygonal faces with more than three vertices. Indeed, polyhedral mesh generation algorithms cannot, in general, guarantee to produce meshes free of non-planar faces. The presented work focuses on the quantification and optimization of discretization errors on polyhedral meshes in the context of finite-volume methods. A quasi-Newton method is employed to optimize the relevant mesh quality measures. Various meshes are optimized and CFD results of cases with known solutions are presented to assess the improvements the optimization approach can provide.
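
    A toy instance of the quasi-Newton optimization of a mesh quality measure mentioned above, reduced to a single polygonal face: BFGS adjusts the out-of-plane offsets of a 5-vertex face to minimize a non-planarity measure. The real method optimizes several measures (skewness, non-orthogonality, non-planarity) over whole meshes; this fragment only shows the pattern.

```python
# Sketch: quasi-Newton (BFGS) minimization of a face non-planarity measure,
# defined as the sum of squared distances of the face vertices to their
# best-fit plane. One face only; geometry is invented for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
base = np.array([[0.0, 0.0], [1.0, 0.0], [1.2, 1.0], [0.5, 1.4], [-0.2, 0.9]])

def face_vertices(z):
    """Assemble 3-D vertices from fixed (x, y) positions and free z-offsets."""
    return np.column_stack([base, z])

def non_planarity(z):
    """Sum of squared distances of the face vertices to their best-fit plane."""
    v = face_vertices(z)
    centered = v - v.mean(axis=0)
    # The smallest singular value of the centered vertex matrix measures
    # the out-of-plane spread of the face.
    return np.linalg.svd(centered, compute_uv=False)[-1] ** 2

z0 = rng.normal(scale=0.2, size=5)        # initial, visibly non-planar face
res = minimize(non_planarity, z0, method="BFGS")
print("non-planarity before:", non_planarity(z0))
print("non-planarity after: ", res.fun)
```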

  1. SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montero, A Barragan; Sterpin, E; Lee, J

    Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present as in dose painting. The aim of this study is to assess the robustness against setup and range errors for highly heterogeneous dose prescriptions (i.e., dose painting by numbers), delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET), and calculates the dose prescription as a linear function of the FDG-uptake value on each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with RayStation treatment planning system and for the support provided.

  2. Optical ground station optimization for future optical geostationary satellite feeder uplinks

    NASA Astrophysics Data System (ADS)

    Camboulives, A.-R.; Velluet, M.-T.; Poulenard, S.; Saint-Antonin, L.; Michau, V.

    2017-02-01

    An optical link based on a multiplex of wavelengths at 1.55 μm is foreseen to be a valuable alternative to the conventional radio-frequencies for the feeder link of the next-generation of high throughput geostationary satellite. Considering the limited power of lasers envisioned for feeder links, the beam divergence has to be dramatically reduced. Consequently, the beam pointing becomes a key issue. During its propagation between the ground station and a geostationary satellite, the optical beam is deflected (beam wandering), and possibly distorted (beam spreading), by atmospheric turbulence. It induces strong fluctuations of the detected telecom signal, thus increasing the bit error rate (BER). A steering mirror using a measurement from a beam coming from the satellite is used to pre-compensate the deflection. Because of the point-ahead angle between the downlink and the uplink, the turbulence effects experienced by both beams are slightly different, inducing an error in the correction. This error is characterized as a function of the turbulence characteristics as well as of the terminal characteristics, such as the servo-loop bandwidth or the beam diameter, and is included in the link budget. From this result, it is possible to predict intensity fluctuations detected by the satellite statistically (mean intensity, scintillation index, probability of fade, etc.). The final objective is to optimize the different parameters of an optical ground station capable of mitigating the impact of atmospheric turbulence on the uplink in order to be compliant with the targeted capacity (1 Terabit/s by 2025).

  3. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.

  4. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    NASA Astrophysics Data System (ADS)

    Bezan, Scott; Shirani, Shahram

    2006-12-01

    To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.

  5. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.

  6. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the range of 1.985–2.156 mm and 1.966–2.234 mm, for NLP and ILD affected regions, respectively, excluding schemes with statistically significant lower performance (Wilcoxon signed-ranks test, p < 0.05), resulting in 13 finally selected registration schemes. Conclusions: Selected registration schemes in case of ILD CT follow-up analysis indicate the significance of adaptive stochastic gradient descent optimizer, as well as the importance of combined rigid and nonrigid schemes providing high accuracy and time efficiency. The selected optimal deformable registration schemes are equivalent in terms of their accuracy and thus compatible in terms of their clinical outcome.

  7. A High Order Discontinuous Galerkin Method for 2D Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Liu, Jia-Guo; Shu, Chi-Wang

    1999-01-01

    In this paper we introduce a high order discontinuous Galerkin method for two dimensional incompressible flow in vorticity streamfunction formulation. The momentum equation is treated explicitly, utilizing the efficiency of the discontinuous Galerkin method. The streamfunction is obtained by a standard Poisson solver using continuous finite elements. There is a natural matching between these two finite element spaces, since the normal component of the velocity field is continuous across element boundaries. This allows for a correct upwinding gluing in the discontinuous Galerkin framework, while still maintaining total energy conservation with no numerical dissipation and total enstrophy stability. The method is suitable for inviscid or high Reynolds number flows. Optimal error estimates are proven and verified by numerical experiments.

  8. The nonconforming virtual element method for eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardini, Francesca; Manzini, Gianmarco; Vacca, Giuseppe

    We analyse the nonconforming Virtual Element Method (VEM) for the approximation of elliptic eigenvalue problems. The nonconforming VEM allows the two- and three-dimensional cases to be treated in the same formulation. We present two possible formulations of the discrete problem, derived respectively by the nonstabilized and stabilized approximation of the L^2-inner product, and we study the convergence properties of the corresponding discrete eigenvalue problems. The proposed schemes provide a correct approximation of the spectrum and we prove optimal-order error estimates for the eigenfunctions and the usual double order of convergence of the eigenvalues. Finally we show a large set of numerical tests supporting the theoretical results, including a comparison with the conforming Virtual Element choice.

  9. A second-order 3D electromagnetics algorithm for curved interfaces between anisotropic dielectrics on a Yee mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Carl A., E-mail: bauerca@colorado.ed; Werner, Gregory R.; Cary, John R.

    A new frequency-domain electromagnetics algorithm is developed for simulating curved interfaces between anisotropic dielectrics embedded in a Yee mesh with second-order error in resonant frequencies. The algorithm is systematically derived using the finite integration formulation of Maxwell's equations on the Yee mesh. Second-order convergence of the error in resonant frequencies is achieved by guaranteeing first-order error on dielectric boundaries and second-order error in bulk (possibly anisotropic) regions. Convergence studies, conducted for an analytically solvable problem and for a photonic crystal of ellipsoids with anisotropic dielectric constant, both show second-order convergence of frequency error; the convergence is sufficiently smooth that Richardson extrapolation yields roughly third-order convergence. The convergence of electric fields near the dielectric interface for the analytic problem is also presented.

  10. First-order approximation error analysis of Risley-prism-based beam directing system.

    PubMed

    Zhao, Yanyan; Yuan, Yan

    2014-12-01

    To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Besides, pointing error analysis of the Risley-prism system has provided results for the case when the component errors, prism orientation errors, and assembly errors are certain. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration.

  11. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

    Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% of non-simulation scheduler to less than 1% realized by our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.

  12. A Case for Soft Error Detection and Correction in Computational Chemistry.

    PubMed

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
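
    A generic example of the kind of error detection and correction mechanism discussed above, applied to a matrix data structure: row and column checksums locate and repair a single silently corrupted element. The scheme shown is a textbook checksum construction, not necessarily the mechanism used in the paper's Hartree-Fock implementation.

```python
# Sketch: checksum-based detection and correction of a single corrupted matrix
# element, of the kind caused by a soft error. Generic illustration only.
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 6))
A = 0.5 * (A + A.T)                       # e.g. a Fock-like symmetric matrix

row_sums, col_sums = A.sum(axis=1), A.sum(axis=0)   # checksums stored alongside the data

# A soft error silently perturbs one element
corrupted = A.copy()
corrupted[2, 4] += 3.7

# Detection: compare stored checksums with freshly computed ones
row_err = corrupted.sum(axis=1) - row_sums
col_err = corrupted.sum(axis=0) - col_sums
i, j = int(np.argmax(np.abs(row_err))), int(np.argmax(np.abs(col_err)))

# Correction: subtract the detected discrepancy from the flagged element
corrupted[i, j] -= row_err[i]
print("corrupted element located at:", (i, j))
print("max residual after correction:", np.abs(corrupted - A).max())
```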

  13. Practical synchronization on complex dynamical networks via optimal pinning control

    NASA Astrophysics Data System (ADS)

    Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu

    2015-07-01

    We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize global synchronization error and control strength over a given finite time interval, and synchronization error at terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system to achieve the control goal. The result is verified by performing some numerical simulations on star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy which depends on the network's topological structure. The obtained results show that optimal pinning control is very effective for synchronization control in real applications.

  14. Integrated modeling environment for systems-level performance analysis of the Next-Generation Space Telescope

    NASA Astrophysics Data System (ADS)

    Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry

    1998-08-01

    All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment which allows sub-system performance specifications to be analyzed parametrically, and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly-effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly links to science requirements.

  15. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and image processing technique are described for measuring the motion of a lathe tool from two-dimensional sequential images captured using a charge coupled device camera with a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to the machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely, artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
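
    A minimal PSO loop of the kind compared above, applied to a stand-in error function (a quadratic bowl in two correction coefficients); the objective and all PSO coefficients are illustrative assumptions, not the machine-vision error model of the study.

```python
# Sketch: particle swarm optimization of two correction parameters that minimize
# a placeholder measurement-error function. Illustrative stand-in objective only.
import numpy as np

rng = np.random.default_rng(7)

def measurement_error(params):
    """Placeholder objective: residual error after applying correction params."""
    a, b = params
    return (a - 1.2) ** 2 + (b + 0.4) ** 2 + 0.05

n_particles, n_iter = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients

pos = rng.uniform(-5.0, 5.0, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([measurement_error(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([measurement_error(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best correction parameters:", gbest, " residual error:", pbest_val.min())
```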

  16. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    NASA Astrophysics Data System (ADS)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming the fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, cancels completely the dependency among bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).

  17. Accelerating Convergence in Molecular Dynamics Simulations of Solutes in Lipid Membranes by Conducting a Random Walk along the Bilayer Normal.

    PubMed

    Neale, Chris; Madill, Chris; Rauscher, Sarah; Pomès, Régis

    2013-08-13

    All molecular dynamics simulations are susceptible to sampling errors, which degrade the accuracy and precision of observed values. The statistical convergence of simulations containing atomistic lipid bilayers is limited by the slow relaxation of the lipid phase, which can exceed hundreds of nanoseconds. These long conformational autocorrelation times are exacerbated in the presence of charged solutes, which can induce significant distortions of the bilayer structure. Such long relaxation times represent hidden barriers that induce systematic sampling errors in simulations of solute insertion. To identify optimal methods for enhancing sampling efficiency, we quantitatively evaluate convergence rates using generalized ensemble sampling algorithms in calculations of the potential of mean force for the insertion of the ionic side chain analog of arginine in a lipid bilayer. Umbrella sampling (US) is used to restrain solute insertion depth along the bilayer normal, the order parameter commonly used in simulations of molecular solutes in lipid bilayers. When US simulations are modified to conduct random walks along the bilayer normal using a Hamiltonian exchange algorithm, systematic sampling errors are eliminated more rapidly and the rate of statistical convergence of the standard free energy of binding of the solute to the lipid bilayer is increased 3-fold. We compute the ratio of the replica flux transmitted across a defined region of the order parameter to the replica flux that entered that region in Hamiltonian exchange simulations. We show that this quantity, the transmission factor, identifies sampling barriers in degrees of freedom orthogonal to the order parameter. The transmission factor is used to estimate the depth-dependent conformational autocorrelation times of the simulation system, some of which exceed the simulation time, and thereby identify solute insertion depths that are prone to systematic sampling errors and estimate the lower bound of the amount of sampling that is required to resolve these sampling errors. Finally, we extend our simulations and verify that the conformational autocorrelation times estimated by the transmission factor accurately predict correlation times that exceed the simulation time scale-something that, to our knowledge, has never before been achieved.

  18. Divergent estimation error in portfolio optimization and in linear regression

    NASA Astrophysics Data System (ADS)

    Kondor, I.; Varga-Haszonits, I.

    2008-08-01

    The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges for a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition, it is accompanied by a number of critical phenomena, and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance, and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
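
    A minimal numerical illustration of this divergence (my own sketch, not the authors' derivation): for the global minimum-variance portfolio with an identity true covariance, the out-of-sample variance of the sample-estimated weights grows roughly like 1/(1 - N/T) and blows up as N/T approaches 1. The portfolio size and sample lengths below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 50  # portfolio size (hypothetical)

        def relative_estimation_error(N, T, trials=200):
            """Out-of-sample variance of the sample min-variance portfolio,
            relative to the true optimum (= 1/N for identity covariance)."""
            ratios = []
            for _ in range(trials):
                R = rng.standard_normal((T, N))          # i.i.d. returns, true covariance = I
                S = np.cov(R, rowvar=False)              # sample covariance
                w = np.linalg.solve(S, np.ones(N))
                w /= w.sum()                             # estimated min-variance weights
                ratios.append((w @ w) / (1.0 / N))       # true variance of w is w' I w = w.w
            return float(np.mean(ratios))

        for T in (500, 200, 100, 75, 60):
            print(f"N/T = {N/T:4.2f}  error ratio ~ {relative_estimation_error(N, T):5.2f}")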

  19. Reducing Uncertainty in the American Community Survey through Data-Driven Regionalization

    PubMed Central

    Spielman, Seth E.; Folch, David C.

    2015-01-01

    The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here rather than focusing on the technical aspects of regionalization we demonstrate how to use a purpose built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold. PMID:25723176
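
    The error-reduction mechanism behind regionalization can be illustrated with the standard Census Bureau approximation for the margin of error (MOE) of a sum of independent estimates: the aggregate MOE grows only as the root-sum-of-squares while the estimate grows linearly, so the relative uncertainty of a merged region shrinks. The snippet below is a generic illustration with made-up tract values, not the authors' regionalization algorithm.

        import math

        # hypothetical tract-level estimates and 90% margins of error: (estimate, MOE)
        tracts = [(120, 150), (80, 130), (200, 180), (60, 110)]

        agg_estimate = sum(e for e, _ in tracts)
        agg_moe = math.sqrt(sum(m**2 for _, m in tracts))  # root-sum-of-squares approximation

        for e, m in tracts:
            print(f"tract: estimate={e:4d}  MOE={m:4d}  relative MOE={m/e:5.2f}")
        print(f"region: estimate={agg_estimate}  MOE={agg_moe:.0f}  "
              f"relative MOE={agg_moe/agg_estimate:.2f}")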

  20. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).

  1. Improved Spatial Registration and Target Tracking Method for Sensors on Multiple Missiles.

    PubMed

    Lu, Xiaodong; Xie, Yuting; Zhou, Jun

    2018-05-27

    Motivated by the fact that current spatial registration methods are unsuitable for three-dimensional (3-D) sensors on high-dynamic platforms, this paper focuses on estimating the registration errors of cooperative missiles and the motion states of a maneuvering target. Two types of errors are discussed: sensor measurement biases and attitude biases. Firstly, an improved Kalman Filter in Earth-Centered Earth-Fixed coordinates (ECEF-KF) is proposed to estimate the deviations mentioned above, and the resulting estimates are then used to compensate the error terms. Secondly, the Pseudo-Linear Kalman Filter (PLKF) and a nonlinear scheme, the Unscented Kalman Filter (UKF), with modified inputs are employed for target tracking. The convergence of the filtering results is monitored by a position-judgement logic, and a low-pass first-order filter is selectively introduced before compensation to suppress jitter in the estimates. In the simulation, the ECEF-KF enhancement is shown to improve the accuracy and robustness of the spatial alignment, while the conditional-compensation-based PLKF method is demonstrated to give the best target-tracking performance.

  2. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High-throughput, low-latency, and reliable communication has always been a central requirement for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effects of multiple-access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple-access interference. The proposed iterative receiving method consists of three stages. Firstly, a differential characteristics function is derived from the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is used to find potentially wrongly received bits; finally, an error-bit corrector is employed to correct the wrong bits. To further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection, and error-bit correction described above are executed iteratively. Simulation results show that after only a few iterations the proposed multiuser detection method achieves satisfactory BER performance. In addition, its BER and near-far resistance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also provides a large system capacity. PMID:28212328

  3. Reducing uncertainty in the american community survey through data-driven regionalization.

    PubMed

    Spielman, Seth E; Folch, David C

    2015-01-01

    The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here rather than focusing on the technical aspects of regionalization we demonstrate how to use a purpose built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold.

  4. ECHO: A reference-free short-read error correction algorithm

    PubMed Central

    Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.

    2011-01-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short-reads, without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters of which optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by several folds to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625

  5. Quantum Optimization of Fully Connected Spin Glasses

    NASA Astrophysics Data System (ADS)

    Venturelli, Davide; Mandrà, Salvatore; Knysh, Sergey; O'Gorman, Bryan; Biswas, Rupak; Smelyanskiy, Vadim

    2015-07-01

    Many NP-hard problems can be seen as the task of finding a ground state of a disordered highly connected Ising spin glass. If solutions are sought by means of quantum annealing, it is often necessary to represent those graphs in the annealer's hardware by means of the graph-minor embedding technique, generating a final Hamiltonian consisting of coupled chains of ferromagnetically bound spins, whose binding energy is a free parameter. In order to investigate the effect of embedding on problems of interest, the fully connected Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two™ annealer using up to 270 qubits interacting on a Chimera-type graph. We present the best embedding prescriptions for encoding the Sherrington-Kirkpatrick problem in the Chimera graph. The results indicate that the optimal choice of embedding parameters could be associated with the emergence of the spin-glass phase of the embedded problem, whose presence was previously uncertain. This optimal parameter setting allows the performance of the quantum annealer to compete with (and potentially outperform, in the absence of analog control errors) optimized simulated annealing algorithms.

  6. One-year eye-to-eye comparison of wavefront-guided versus wavefront-optimized laser in situ keratomileusis in hyperopes

    PubMed Central

    Sáles, Christopher S; Manche, Edward E

    2014-01-01

    Background To compare wavefront (WF)-guided and WF-optimized laser in situ keratomileusis (LASIK) in hyperopes with respect to the parameters of safety, efficacy, predictability, refractive error, uncorrected distance visual acuity, corrected distance visual acuity, contrast sensitivity, and higher order aberrations. Methods Twenty-two eyes of eleven participants with hyperopia with or without astigmatism were prospectively randomized to receive WF-guided LASIK with the VISX CustomVue S4 IR or WF-optimized LASIK with the WaveLight Allegretto Eye-Q 400 Hz. LASIK flaps were created using the 150-kHz IntraLase iFS. Evaluations included measurement of uncorrected distance visual acuity, corrected distance visual acuity, <5% and <25% contrast sensitivity, and WF aberrometry. Patients also completed a questionnaire detailing symptoms on a quantitative grading scale. Results There were no statistically significant differences between the groups for any of the variables studied after 12 months of follow-up (all P>0.05). Conclusion This comparative case series of 11 subjects with hyperopia showed that WF-guided and WF-optimized LASIK had similar clinical outcomes at 12 months. PMID:25419115

  7. Contact-force distribution optimization and control for quadruped robots using both gradient and adaptive neural networks.

    PubMed

    Li, Zhijun; Ge, Shuzhi Sam; Liu, Sibang

    2014-08-01

    This paper investigates the optimal distribution and control of foot forces for quadruped robots under external disturbance forces. First, we formulate the constrained dynamics of quadruped robots and derive a reduced-order dynamical model of motion/force. Considering an external wrench acting on the quadruped robot, the distribution of required forces and moments on the supporting legs is handled as a tip-point force distribution and used to equilibrate the external wrench. Then, a gradient neural network is adopted to solve the resulting optimization problem, formulated as minimizing a quadratic objective function subject to linear equality and inequality constraints. Given the optimized tip-point forces and the leg motions, we propose a hybrid motion/force control based on an adaptive neural network that compensates for perturbations in the environment and approximates the feedforward force and impedance of the leg joints. The proposed control can handle uncertainties including approximation error and external perturbation. The proposed control is verified in simulation.
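
    The quadratic program described above can be sketched for a simplified planar case: minimize the sum of squared vertical foot forces subject to force and moment balance with non-negative (unilateral) contacts. This is a generic stand-in using SciPy's SLSQP solver rather than the paper's gradient neural network; the foot geometry and load values are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        x_feet = np.array([-0.3, -0.1, 0.1, 0.3])   # foot x-positions [m] (hypothetical)
        W, M = 200.0, 10.0                           # required total vertical force [N] and moment [N*m]

        objective = lambda f: float(f @ f)           # minimize squared force magnitudes
        constraints = [
            {"type": "eq", "fun": lambda f: f.sum() - W},      # force balance
            {"type": "eq", "fun": lambda f: x_feet @ f - M},   # moment balance
        ]
        bounds = [(0.0, None)] * 4                   # unilateral contacts: push only

        res = minimize(objective, x0=np.full(4, W / 4), bounds=bounds,
                       constraints=constraints, method="SLSQP")
        print(res.x, res.x.sum(), x_feet @ res.x)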

  8. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by a standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using a real terrain data of Mars, with four million discrete states at each time step.

  9. When is an error not a prediction error? An electrophysiological investigation.

    PubMed

    Holroyd, Clay B; Krigolson, Olave E; Baker, Robert; Lee, Seung; Gibson, Jessica

    2009-03-01

    A recent theory holds that the anterior cingulate cortex (ACC) uses reinforcement learning signals conveyed by the midbrain dopamine system to facilitate flexible action selection. According to this position, the impact of reward prediction error signals on ACC modulates the amplitude of a component of the event-related brain potential called the error-related negativity (ERN). The theory predicts that ERN amplitude is monotonically related to the expectedness of the event: It is larger for unexpected outcomes than for expected outcomes. However, a recent failure to confirm this prediction has called the theory into question. In the present article, we investigated this discrepancy in three trial-and-error learning experiments. All three experiments provided support for the theory, but the effect sizes were largest when an optimal response strategy could actually be learned. This observation suggests that ACC utilizes dopamine reward prediction error signals for adaptive decision making when the optimal behavior is, in fact, learnable.

  10. The 4th order GISS model of the global atmosphere

    NASA Technical Reports Server (NTRS)

    Kalnay-Rivas, E.; Bayliss, A.; Storch, J.

    1977-01-01

    The new GISS 4th order model of the global atmosphere is described. It is based on 4th order quadratically conservative differences with the periodic application of a 16th order filter on the sea level pressure and potential temperature equations, a combination which is approximately enstrophy conserving. Several short range forecasts indicate a significant improvement over 2nd order forecasts with the same resolution (approximately 400 km). However the 4th order forecasts are somewhat inferior to 2nd order forecasts with double resolution. This is probably due to the presence of short waves in the range between 1000 km and 2000 km, which are computed more accurately by the 2nd order high resolution model. An operation count of the schemes indicates that with similar code optimization, the 4th order model will require approximately the same amount of computer time as the 2nd order model with the same resolution. It is estimated that the 4th order model with a grid size of 200 km provides enough accuracy to make horizontal truncation errors negligible over a period of a week for all synoptic scales (waves longer than 1000 km).

  11. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, thus making it more reliable and avoiding the procedure of search in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
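
    For readers unfamiliar with how the fringe order enters, the sketch below shows the standard two-frequency temporal phase unwrapping step that the correction targets: the low-frequency phase (assumed already unwrapped) predicts the high-frequency fringe order, which is then rounded to an integer. The frequencies and phase maps are synthetic; this is not the authors' correction algorithm.

        import numpy as np

        f_low, f_high = 1.0, 16.0                 # fringe spatial frequencies (hypothetical)
        x = np.linspace(0.0, 1.0, 1000)
        phi_true_high = 2 * np.pi * f_high * x    # true (unwrapped) high-frequency phase
        phi_low = 2 * np.pi * f_low * x           # low frequency: at most one fringe, already unwrapped
        phi_high_wrapped = np.angle(np.exp(1j * phi_true_high))  # wrapped to (-pi, pi]

        # fringe order predicted from the low-frequency phase, then rounded to an integer
        k = np.round((f_high / f_low * phi_low - phi_high_wrapped) / (2 * np.pi))
        phi_high_unwrapped = phi_high_wrapped + 2 * np.pi * k

        print(np.max(np.abs(phi_high_unwrapped - phi_true_high)))  # ~0 when noise is low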

  12. CCD image sensor induced error in PIV applications

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (˜0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a modeling for the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, that can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  13. Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.

    2000-01-01

    Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.

  14. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

    Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
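
    As a reminder of the transformation at the heart of the Pareto optimal schemes above, here is a minimal sketch of a Box-Cox residual error model with a fixed power parameter lambda: the transform is z = (y^lambda - 1)/lambda (and z = log y for lambda = 0), and residuals are computed in the transformed space. The streamflow numbers are synthetic and this is not the study's calibration code.

        import numpy as np

        def box_cox(y, lam):
            """Box-Cox transform; lam = 0 reduces to the log transform."""
            y = np.asarray(y, dtype=float)
            return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

        # synthetic observed and simulated daily streamflow (arbitrary units)
        q_obs = np.array([0.5, 1.2, 3.0, 12.0, 40.0, 7.5])
        q_sim = np.array([0.6, 1.0, 3.5, 10.0, 45.0, 6.0])

        for lam in (0.0, 0.2, 0.5):
            resid = box_cox(q_obs, lam) - box_cox(q_sim, lam)
            print(f"lambda={lam:3.1f}  transformed residual std = {resid.std():.3f}")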

  15. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous research in this field has been impeded by the difficulty of identifying an appropriate single segmentation fusion criterion providing the best possible, i.e., the most informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
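
    The decision-making step named above (TOPSIS) can be sketched as follows: normalize the criteria matrix, locate the ideal and anti-ideal points, and rank alternatives by relative closeness to the ideal. The criteria values below are made up and treated as benefit criteria (a cost criterion such as the consistency error would be inverted first); this is only a generic illustration, not the paper's fusion code.

        import numpy as np

        # rows = candidate segmentations, columns = criteria (here both "larger is better")
        scores = np.array([[0.82, 0.70],
                           [0.78, 0.75],
                           [0.90, 0.60]])

        norm = scores / np.linalg.norm(scores, axis=0)      # vector-normalize each criterion
        ideal, anti = norm.max(axis=0), norm.min(axis=0)    # ideal and anti-ideal points
        d_plus = np.linalg.norm(norm - ideal, axis=1)
        d_minus = np.linalg.norm(norm - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)            # TOPSIS relative closeness

        print("ranking (best first):", np.argsort(-closeness))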

  16. Extracting motor synergies from random movements for low-dimensional task-space control of musculoskeletal robots.

    PubMed

    Fu, Kin Chung Denny; Dalla Libera, Fabio; Ishiguro, Hiroshi

    2015-10-08

    In the field of human motor control, the motor synergy hypothesis explains how humans simplify body control dimensionality by coordinating groups of muscles, called motor synergies, instead of controlling muscles independently. In most applications of motor synergies to low-dimensional control in robotics, motor synergies are extracted from given optimal control signals. In this paper, we address the problems of how to extract motor synergies without optimal data given, and how to apply motor synergies to achieve low-dimensional task-space tracking control of a human-like robotic arm actuated by redundant muscles, without prior knowledge of the robot. We propose to extract motor synergies from a subset of randomly generated reaching-like movement data. The essence is to first approximate the corresponding optimal control signals, using estimations of the robot's forward dynamics, and to extract the motor synergies subsequently. In order to avoid modeling difficulties, a learning-based control approach is adopted such that control is accomplished via estimations of the robot's inverse dynamics. We present a kernel-based regression formulation to estimate the forward and the inverse dynamics, and a sliding controller in order to cope with estimation error. Numerical evaluations show that the proposed method enables extraction of motor synergies for low-dimensional task-space control.

  17. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
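
    A minimal example of the SSE-minimization workflow described above, using a gradient-based least-squares optimizer on a toy exponential-decay model (the model, data, and parameter names are hypothetical):

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0.0, 10.0, 50)
        true_params = (2.0, 0.4)                                  # amplitude, rate
        rng = np.random.default_rng(0)
        y_obs = true_params[0] * np.exp(-true_params[1] * t) + rng.normal(0, 0.05, t.size)

        def residuals(p, t, y):
            amplitude, rate = p
            return amplitude * np.exp(-rate * t) - y              # SSE = sum(residuals**2)

        fit = least_squares(residuals, x0=[1.0, 1.0], args=(t, y_obs))
        print("estimated parameters:", fit.x)                     # close to (2.0, 0.4)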

  18. Simplification of the Kalman filter for meteorological data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1991-01-01

    The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.
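
    For context, the sketch below shows the standard (linear) Kalman filter forecast/analysis cycle that the simplification starts from; the simplified scheme replaces the full covariance propagation (the P = A P A^T + Q step) with advection of the mass-error covariance. The matrices here are small and hypothetical.

        import numpy as np

        def kalman_step(x, P, A, Q, H, R, y):
            """One forecast/analysis cycle of the standard linear Kalman filter."""
            # forecast
            x_f = A @ x
            P_f = A @ P @ A.T + Q          # full covariance propagation (the costly step)
            # analysis (update with observation y)
            S = H @ P_f @ H.T + R
            K = P_f @ H.T @ np.linalg.inv(S)
            x_a = x_f + K @ (y - H @ x_f)
            P_a = (np.eye(len(x)) - K @ H) @ P_f
            return x_a, P_a

        # toy 2-state, 1-observation example
        A = np.array([[1.0, 1.0], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q, R = 0.01 * np.eye(2), np.array([[0.25]])
        x, P = np.zeros(2), np.eye(2)
        x, P = kalman_step(x, P, A, Q, H, R, y=np.array([1.1]))
        print(x, np.diag(P))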

  19. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

    Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the physics being simulated in the model. In order to simulate real-world processes effectively, the model's output must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, defined as the error between the simulated outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. Performing these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum. Otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a `black box' scientific model more efficiently than using Dakota alone.

  20. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  1. Performance of Optimization Heuristics for the Operational Planning of Multi-energy Storage Systems

    NASA Astrophysics Data System (ADS)

    Haas, J.; Schradi, J.; Nowak, W.

    2016-12-01

    In the transition to low-carbon energy sources, energy storage systems (ESS) will play an increasingly important role. Particularly in the context of solar power challenges (variability, uncertainty), ESS can provide valuable services: energy shifting, ramping, robustness against forecast errors, frequency support, etc. However, these qualities are rarely modelled in the operational planning of power systems because of the involved computational burden, especially when multiple ESS technologies are involved. This work assesses two optimization heuristics for speeding up the optimal operation problem. It compares their accuracy (in terms of costs) and speed against a reference solution. The first heuristic (H1) is based on a merit order. Here, the ESS are sorted from lower to higher operational costs (including cycling costs). For each time step, the cheapest available ESS is used first, followed by the second one and so on, until matching the net load (demand minus available renewable generation). The second heuristic (H2) uses the Fourier transform to detect the main frequencies that compose the net load. A specific ESS is assigned to each frequency range, aiming to smoothen the net load. Finally, the reference solution is obtained with a mixed integer linear program (MILP). H1, H2 and MILP are subject to technical constraints (energy/power balance, ramping rates, on/off states...). Costs due to operation, replacement (cycling) and unserved energy are considered. Four typical days of a system with a high share of solar energy were used in several test cases, varying the resolution from one second to fifteen minutes. H1 and H2 achieve accuracies of about 90% and 95% in average, and speed-up times of two to three and one to two orders of magnitude, respectively. The use of the heuristics looks promising in the context of planning the expansion of power systems, especially when their loss of accuracy is outweighed by solar or wind forecast errors.
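
    The merit-order heuristic H1 can be sketched in a few lines: sort the storage units by operational cost and, for each time step, dispatch the cheapest available unit first until the net load is covered or no capacity remains. Energy bookkeeping, efficiencies, and ramping limits are omitted, and the units and costs are hypothetical, so this illustrates only the ordering logic, not the paper's model.

        # each storage unit: (name, operational cost [$/kWh], available power [kW])
        units = [("battery", 0.08, 30.0), ("pumped_hydro", 0.05, 50.0), ("flywheel", 0.12, 10.0)]
        units.sort(key=lambda u: u[1])                      # merit order: cheapest first

        net_load = [20.0, 70.0, 95.0]                       # demand minus renewables per step [kW]
        for t, load in enumerate(net_load):
            remaining, dispatch = load, []
            for name, cost, power in units:
                used = min(power, max(remaining, 0.0))
                if used > 0:
                    dispatch.append((name, used))
                    remaining -= used
            print(f"t={t}: dispatch={dispatch}, unserved={max(remaining, 0.0):.1f} kW")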

  2. Optimized auxiliary basis sets for density fitted post-Hartree-Fock calculations of lanthanide containing molecules

    NASA Astrophysics Data System (ADS)

    Chmela, Jiří; Harding, Michael E.

    2018-06-01

    Optimised auxiliary basis sets for lanthanide atoms (Ce to Lu) are reported for four basis sets of the Karlsruhe error-balanced segmented contracted def2 series (SVP, TZVP, TZVPP and QZVPP). These auxiliary basis sets enable the use of the resolution-of-the-identity (RI) approximation in post-Hartree-Fock methods such as second-order perturbation theory (MP2) and coupled cluster (CC) theory. The auxiliary basis sets are tested on an enlarged set of about a hundred molecules, where the test criterion is the size of the RI error in MP2 calculations. Our tests also show that the same auxiliary basis sets can be used together with different effective core potentials. With these auxiliary basis sets, calculations of MP2 and CC quality can now be performed efficiently on medium-sized molecules containing lanthanides.

  3. Blind system identification of two-thermocouple sensor based on cross-relation method.

    PubMed

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of the presence of the thermal inertia, there is a dynamic error in the dynamic temperature measurement. In order to eliminate the dynamic error, two-thermocouple sensor was used to measure dynamic gas temperature in constant velocity flow environments in this paper. Blind system identification of two-thermocouple sensor based on a cross-relation method was carried out. Particle swarm optimization algorithm was used to estimate time constants of two thermocouples and compared with the grid based search method. The method was validated on the experimental equipment built by using high temperature furnace, and the input dynamic temperature was reconstructed by using the output data of the thermocouple with small time constant.
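
    The cross-relation idea used above can be sketched as follows: if both thermocouples see the same gas temperature and each behaves as a first-order system with time constant tau_i, then y1 convolved with h2 should equal y2 convolved with h1, so the pair (tau1, tau2) can be found by minimizing the mismatch. Below is a generic grid-search sketch (the paper uses particle swarm optimization); the sampling interval, signals, and time constants are synthetic.

        import numpy as np

        dt, n = 0.001, 2000
        t = np.arange(n) * dt

        def first_order_impulse(tau, length=400):
            h = (dt / tau) * np.exp(-np.arange(length) * dt / tau)
            return h / h.sum()                      # unit DC gain

        # synthetic gas temperature and the two sensor outputs (true taus: 0.020 s, 0.060 s)
        rng = np.random.default_rng(0)
        T_gas = 300 + 50 * np.sin(2 * np.pi * 2 * t) + 5 * rng.standard_normal(n)
        y1 = np.convolve(T_gas, first_order_impulse(0.020))[:n]
        y2 = np.convolve(T_gas, first_order_impulse(0.060))[:n]

        def cross_relation_cost(tau1, tau2):
            e = np.convolve(y1, first_order_impulse(tau2))[:n] - \
                np.convolve(y2, first_order_impulse(tau1))[:n]
            return float(np.mean(e**2))

        taus = np.linspace(0.005, 0.1, 40)
        best = min(((cross_relation_cost(a, b), a, b) for a in taus for b in taus if a < b))
        print("estimated time constants:", best[1], best[2])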

  4. Optimize of shrink process with X-Y CD bias on hole pattern

    NASA Astrophysics Data System (ADS)

    Koike, Kyohei; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Oyama, Kenichi; Yaegashi, Hidetami

    2017-03-01

    Gridded design rules [1] are a major approach to configuring logic circuits with 193-nm immersion lithography. In scaled grid patterning, line-and-space patterns on the order of 10 nm can be made using multiple patterning techniques such as self-aligned multiple patterning (SAMP) and litho-etch-litho-etch (LELE) [2][3][4]. On the other hand, as the scale decreases, the line cut process introduces error sources such as pattern defects, placement error, roughness, and X-Y CD bias. We attempted to cure hole pattern roughness by using an additional process such as line smoothing [5]. Each smoothing process showed a different effect; as a result, without one additional process the CDx shrink amount is smaller than the CDy shrink amount. In this paper, we report a comparison of the pattern controllability of EUV and 193-nm immersion lithography, and we discuss the optimum method for CD bias control on hole patterns.

  5. Blind system identification of two-thermocouple sensor based on cross-relation method

    NASA Astrophysics Data System (ADS)

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of the presence of the thermal inertia, there is a dynamic error in the dynamic temperature measurement. In order to eliminate the dynamic error, two-thermocouple sensor was used to measure dynamic gas temperature in constant velocity flow environments in this paper. Blind system identification of two-thermocouple sensor based on a cross-relation method was carried out. Particle swarm optimization algorithm was used to estimate time constants of two thermocouples and compared with the grid based search method. The method was validated on the experimental equipment built by using high temperature furnace, and the input dynamic temperature was reconstructed by using the output data of the thermocouple with small time constant.

  6. Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Boucher, Matthew J.

    2017-01-01

    Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.

  7. Optimal decoding in fading channels - A combined envelope, multiple differential and coherent detection approach

    NASA Astrophysics Data System (ADS)

    Makrakis, Dimitrios; Mathiopoulos, P. Takis

    A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.

  8. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    PubMed

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues that are suspicious of cancer to obtain specimen for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance using the two different modes of clinical implementations by way of user initiated and continuous motion compensation methods on a tissue mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms respectively. The user initiated mode performed registrations with in-plane, out-of-plane, and roll motions computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding registration errors of 0.4 ± 0.3 mm, 0.2 ± 0.4 mm, and 0.8 ± 0.5°. The continuous method performed registration significantly faster (P < 0.05) than the user initiated method, with observed computation times of 35 ± 8 ms, 43 ± 16 ms, and 27 ± 5 ms for in-plane, out-of-plane, and roll motions, respectively, and corresponding registration errors of 0.2 ± 0.3 mm, 0.7 ± 0.4 mm, and 0.8 ± 1.0°. The presented method encourages real-time implementation of motion compensation algorithms in prostate biopsy with clinically acceptable registration errors. Continuous motion compensation demonstrated registration accuracy with submillimeter and subdegree error, while performing < 50 ms computation times. Image registration technique approaching the frame rate of an ultrasound system offers a key advantage to be smoothly integrated to the clinical workflow. In addition, this technique could be used further for a variety of image-guided interventional procedures to treat and diagnose patients by improving targeting accuracy. © 2017 American Association of Physicists in Medicine.
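
    A bare-bones version of the intensity-based registration step described above (translation only, synthetic images): normalized cross-correlation serves as the similarity metric and Powell's method searches for the offset. The actual system registers 2D frames to a 3D TRUS volume with additional degrees of freedom; this sketch only illustrates the metric/optimizer pairing, and the image and shift values are made up.

        import numpy as np
        from scipy.ndimage import shift as nd_shift
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        yy, xx = np.mgrid[0:64, 0:64]
        fixed = np.exp(-((yy - 30) ** 2 + (xx - 34) ** 2) / 200.0)   # smooth synthetic "anatomy"
        fixed += 0.05 * rng.normal(size=fixed.shape)                  # mild noise
        moving = nd_shift(fixed, (2.5, -1.5), order=1, mode="nearest")  # known misalignment

        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float(np.mean(a * b))

        def cost(params):
            dy, dx = params
            return -ncc(fixed, nd_shift(moving, (dy, dx), order=1, mode="nearest"))

        res = minimize(cost, x0=[0.0, 0.0], method="Powell")
        print("recovered shift:", res.x)                    # approximately (-2.5, +1.5)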

  9. Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation

    NASA Technical Reports Server (NTRS)

    Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.

    2013-01-01

    The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6 yr record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20° S-20° N.

  10. X-ray optics simulation and beamline design for the APS upgrade

    NASA Astrophysics Data System (ADS)

    Shi, Xianbo; Reininger, Ruben; Harder, Ross; Haeffner, Dean

    2017-08-01

    The upgrade of the Advanced Photon Source (APS) to a Multi-Bend Achromat (MBA) will increase the brightness of the APS by between two and three orders of magnitude. The APS upgrade (APS-U) project includes a list of feature beamlines that will take full advantage of the new machine. Many of the existing beamlines will be also upgraded to profit from this significant machine enhancement. Optics simulations are essential in the design and optimization of these new and existing beamlines. In this contribution, the simulation tools used and developed at APS, ranging from analytical to numerical methods, are summarized. Three general optical layouts are compared in terms of their coherence control and focusing capabilities. The concept of zoom optics, where two sets of focusing elements (e.g., CRLs and KB mirrors) are used to provide variable beam sizes at a fixed focal plane, is optimized analytically. The effects of figure errors on the vertical spot size and on the local coherence along the vertical direction of the optimized design are investigated.

  11. Optimal economic order quantity for buyer-distributor-vendor supply chain with backlogging derived without derivatives

    NASA Astrophysics Data System (ADS)

    Teng, Jinn-Tsair; Cárdenas-Barrón, Leopoldo Eduardo; Lou, Kuo-Ren; Wee, Hui Ming

    2013-05-01

    In this article, we first correct a mathematical error in the total cost of the previously published paper by Chung and Wee [2007, 'Optimal the Economic Lot Size of a Three-stage Supply Chain With Backlogging Derived Without Derivatives', European Journal of Operational Research, 183, 933-943] on the buyer-distributor-vendor three-stage supply chain with backlogging derived without derivatives. Then, an arithmetic-geometric inequality method is proposed not only to simplify the algebraic method of completing perfect squares, but also to remedy its shortcomings. In addition, we provide a closed-form solution for the integral number of deliveries for the distributor and the vendor without using complex derivatives. Furthermore, our method can solve many cases that theirs cannot, because they did not consider that the square root of a negative number does not exist. Finally, we use some numerical examples to show that our proposed optimal solution is cheaper to operate than theirs.
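
    The derivative-free argument can be illustrated on the classical single-stage lot-size cost (a textbook simplification offered here for orientation, not the three-stage model treated in the paper). With demand rate D, setup cost K per order, and holding cost h per unit per unit time, the arithmetic-geometric mean inequality gives, in LaTeX notation,

        TC(Q) \;=\; \frac{DK}{Q} + \frac{hQ}{2} \;\ge\; 2\sqrt{\frac{DK}{Q}\cdot\frac{hQ}{2}} \;=\; \sqrt{2DKh},

    with equality exactly when DK/Q = hQ/2, that is, at Q^{*} = \sqrt{2DK/h}; the minimum cost \sqrt{2DKh} is therefore obtained by completing the inequality rather than by differentiation.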

  12. Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

    This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which accounts for two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market using stochastic optimization. Historical data of day-ahead hourly electric power consumption are used to provide forecasts with a forecasting error, which is represented by a chance constraint and formulated into a deterministic form using a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the method.
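
    To make the chance-constraint reformulation concrete, here is a minimal sketch for a single Gaussian forecast-error component: requiring the purchased power x to cover the uncertain load with probability 1 - epsilon reduces to a deterministic quantile constraint; with a GMM the same idea is applied through the mixture's quantile. The load and error numbers are hypothetical, and this is not the paper's scheduling model.

        from scipy.stats import norm

        mu, sigma = 850.0, 60.0          # forecast load [kW] and forecast-error std (hypothetical)
        epsilon = 0.05                   # allowed violation probability

        # Pr(load <= x) >= 1 - epsilon  <=>  x >= mu + sigma * z_{1-epsilon}
        x_min = mu + sigma * norm.ppf(1.0 - epsilon)
        print(f"deterministic equivalent: purchase at least {x_min:.1f} kW")

        # for a Gaussian mixture, the same constraint uses the mixture quantile, e.g. by bisection
        weights, mus, sigmas = [0.7, 0.3], [830.0, 900.0], [40.0, 80.0]
        def mixture_cdf(x):
            return sum(w * norm.cdf(x, m, s) for w, m, s in zip(weights, mus, sigmas))
        lo, hi = 500.0, 1500.0
        for _ in range(60):              # bisection for the (1 - epsilon) quantile
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if mixture_cdf(mid) < 1.0 - epsilon else (lo, mid)
        print(f"GMM deterministic equivalent: {hi:.1f} kW")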

  13. Modeling and Predicting the Electrical Conductivity of Composite Cathode for Solid Oxide Fuel Cell by Using Support Vector Regression

    NASA Astrophysics Data System (ADS)

    Tang, J. L.; Cai, C. Z.; Xiao, T. T.; Huang, S. J.

    2012-07-01

    The electrical conductivity of the solid oxide fuel cell (SOFC) cathode is one of the most important indices affecting the efficiency of an SOFC. In order to improve the performance of a fuel cell system, it is advantageous to have an accurate model with which one can predict the electrical conductivity. In this paper, a model utilizing the support vector regression (SVR) approach, combined with the particle swarm optimization (PSO) algorithm for its parameter optimization, was established to model and predict the electrical conductivity of Ba0.5Sr0.5Co0.8Fe0.2O3-δ-xSm0.5Sr0.5CoO3-δ (BSCF-xSSC) composite cathodes under two influence factors: the operating temperature (T) and the SSC content (x) in the BSCF-xSSC composite cathode. The leave-one-out cross-validation (LOOCV) test result strongly supports that the generalization ability of the SVR model is sufficiently high. The absolute percentage error (APE) of 27 samples does not exceed 0.05%. The mean absolute percentage error (MAPE) of all 30 samples is only 0.09%, and the correlation coefficient (R2) is as high as 0.999. This investigation suggests that the hybrid PSO-SVR approach may be not only a promising and practical methodology for simulating the properties of a fuel cell system, but also a powerful tool for optimally designing or controlling the operating process of an SOFC system.
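
    A generic sketch of the SVR-with-LOOCV workflow described above, using synthetic data; a simple grid search stands in for the PSO hyperparameter optimization used in the paper, and all numbers are illustrative.

        import numpy as np
        from itertools import product
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.uniform([500, 0.0], [900, 0.5], size=(30, 2))        # (temperature, SSC content), synthetic
        y = 0.01 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 0.2, 30)   # synthetic conductivity

        best = None
        for C, gamma in product([1, 10, 100], [0.01, 0.1, 1.0]):     # stand-in for the PSO search
            model = make_pipeline(StandardScaler(),
                                  SVR(kernel="rbf", C=C, gamma=gamma, epsilon=0.01))
            pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
            mape = float(np.mean(np.abs((pred - y) / y))) * 100
            if best is None or mape < best[0]:
                best = (mape, C, gamma)
        print(f"best LOOCV MAPE = {best[0]:.2f}%  at C={best[1]}, gamma={best[2]}")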

  14. Acoustic centering of sources measured by surrounding spherical microphone arrays.

    PubMed

    Hagai, Ilan Ben; Pollow, Martin; Vorländer, Michael; Rafaely, Boaz

    2011-10-01

    The radiation patterns of acoustic sources have great significance in a wide range of applications, such as measuring the directivity of loudspeakers and investigating the radiation of musical instruments for auralization. Recently, surrounding spherical microphone arrays have been studied for sound field analysis, facilitating measurement of the pressure around a sphere and the computation of the spherical harmonics spectrum of the sound source. However, the sound radiation pattern may be affected by the location of the source inside the microphone array, which is an undesirable property when aiming to characterize source radiation in a unique manner. This paper presents a theoretical analysis of the spherical harmonics spectrum of spatially translated sources and defines four measures for the misalignment of the acoustic center of a radiating source. Optimization is used to promote optimal alignment based on the proposed measures and the errors caused by numerical and array-order limitations are investigated. This methodology is examined using both simulated and experimental data in order to investigate the performance and limitations of the different alignment methods. © 2011 Acoustical Society of America

  15. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    PubMed

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of cost functions of individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy, forward-in-time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time for the learning algorithm. The development is based on the observation that, in the event-triggered feedback, the sampling instants are dynamic and results in variable interevent time. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  16. Optimization of Aimpoints for Coordinate Seeking Weapons

    DTIC Science & Technology

    2015-09-01

    Weapon characteristics such as the radius of the circle containing the weapon aimpoint, impact angle, and dependent (aiming) and independent (ballistic) errors are taken into account before utilizing each of the three damage functions representing the weapon.
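
    A minimal Monte Carlo sketch of the error model named in this record, assuming Gaussian dependent (aiming) and independent (ballistic) errors and a simple circular cookie-cutter damage function; the damage model, error magnitudes and geometry are illustrative assumptions, not the report's actual damage functions.

        # Monte Carlo estimate of expected damage for a salvo aimed at one aimpoint.
        # The dependent (aiming) error is shared by the whole salvo; the independent
        # (ballistic) error is drawn per weapon. Cookie-cutter damage: kill if the
        # impact lies within the lethal radius of the target.
        import numpy as np

        rng = np.random.default_rng(1)

        aimpoint = np.array([0.0, 0.0])   # chosen aimpoint (the quantity being optimized)
        target = np.array([5.0, 0.0])     # target location, meters
        sigma_aim = 10.0                  # 1-sigma dependent (aiming) error, meters
        sigma_bal = 5.0                   # 1-sigma independent (ballistic) error, meters
        r_lethal = 15.0                   # cookie-cutter lethal radius, meters
        n_weapons, n_trials = 4, 20_000

        kills = 0
        for _ in range(n_trials):
            aim_err = rng.normal(0.0, sigma_aim, 2)                # shared by the salvo
            bal_err = rng.normal(0.0, sigma_bal, (n_weapons, 2))   # per-weapon
            impacts = aimpoint + aim_err + bal_err
            # Target is killed if any weapon lands within the lethal radius.
            if np.any(np.linalg.norm(impacts - target, axis=1) <= r_lethal):
                kills += 1

        print("estimated probability of kill:", kills / n_trials)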

  17. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors, such as the second- and third-order ionospheric terms in the refractive index formula, and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas are proposed to correct the errors due to the excess path length in addition to the free-space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected to millimeter-level accuracy using the proposed correction formulas.
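
    For reference, the first-order ionospheric group delay and the dual-frequency ionosphere-free combination discussed above can be written down directly; the sketch below uses assumed TEC and range values and deliberately omits the higher-order (1/f^3, 1/f^4) and bending terms that the paper estimates.

        # First-order ionospheric group delay and the ionosphere-free combination it
        # motivates. Higher-order terms scale as 1/f^3 and 1/f^4 and are what remains
        # after this combination is formed.
        F_L1 = 1575.42e6  # Hz
        F_L2 = 1227.60e6  # Hz

        def first_order_delay(tec, f):
            """First-order ionospheric group delay in meters; TEC in electrons/m^2."""
            return 40.3 * tec / f**2

        def iono_free(p1, p2, f1=F_L1, f2=F_L2):
            """Ionosphere-free pseudorange combination (removes the 1/f^2 term only)."""
            return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

        tec = 50e16           # 50 TECU, an assumed ionospheric content
        rho = 22_000_000.0    # assumed geometric range, meters
        p1 = rho + first_order_delay(tec, F_L1)
        p2 = rho + first_order_delay(tec, F_L2)

        print("L1 first-order delay [m]:", first_order_delay(tec, F_L1))
        print("ionosphere-free range [m]:", iono_free(p1, p2))  # recovers rho exactly here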

  18. Generating moment matching scenarios using optimization techniques

    DOE PAGES

    Mehrotra, Sanjay; Papp, Dávid

    2013-05-16

    An optimization-based method is proposed to generate moment-matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound due to Tchakaloff. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
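
    A minimal sketch of the moment-matching idea, assuming SciPy's linprog: nonnegative scenario weights are chosen on a randomly sampled candidate set so that a prescribed set of moments is reproduced exactly. The candidate sampling, the moment set and the generic objective are illustrative assumptions and stand in for the paper's column-generation algorithm rather than reproduce it.

        # Pick nonnegative weights on random candidate points so that prescribed moments
        # of a target distribution (here: standard normal) are matched exactly.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(2)

        candidates = rng.normal(size=200)  # randomly sampled candidate scenarios
        # Prescribed moments to match: E[1], E[x], E[x^2], E[x^3], E[x^4] of N(0, 1).
        target_moments = np.array([1.0, 0.0, 1.0, 0.0, 3.0])
        powers = np.arange(len(target_moments))

        # Equality constraints: sum_i w_i * x_i^k = m_k for each prescribed moment k.
        A_eq = np.vstack([candidates**k for k in powers])
        b_eq = target_moments

        # A generic random objective makes the optimum a vertex of the feasible set,
        # so at most as many scenarios as moment constraints carry nonzero weight.
        res = linprog(c=rng.random(candidates.size), A_eq=A_eq, b_eq=b_eq,
                      bounds=(0, None), method="highs")

        if res.success:
            support = res.x > 1e-9
            print("scenarios used:", int(support.sum()))
            print("matched moments:", A_eq[:, support] @ res.x[support])
        else:
            print("no feasible weighting on this candidate sample:", res.message)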

  19. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    SOL is a computer language geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e., numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description are facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of the more than 100 ADS optimization choices, such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function, and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by an LALR(1) grammar, and the SOL compiler's parser was generated automatically from that grammar with a parser-generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis ensures that the compiler recognizes all legal SOL programs, can recover from and correct for many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.

  20. Modeling and Development of INS-Aided PLLs in a GNSS/INS Deeply-Coupled Hardware Prototype for Dynamic Applications

    PubMed Central

    Zhang, Tisheng; Niu, Xiaoji; Ban, Yalong; Zhang, Hongping; Shi, Chuang; Liu, Jingnan

    2015-01-01

    A GNSS/INS deeply-coupled system can improve satellite signal tracking performance under dynamics by having the INS aid the tracking loops. However, no literature was available on the complete modeling of the INS branch in the INS-aided tracking loop, which has left a gap in the theoretical tools needed to guide the selection of inertial sensors, parameter optimization, and quantitative analysis of INS-aided PLLs. This paper focuses on the INS branch in the modeling and parameter optimization of phase-locked loops (PLLs) for a scalar-based GNSS/INS deeply-coupled system. It establishes the transfer function between all known error sources and the PLL tracking error, which can be used to quantitatively evaluate how a candidate inertial measurement unit (IMU) affects the carrier phase tracking error. Based on that, a steady-state error model is proposed for designing INS-aided PLLs and analyzing their tracking performance. Building on the modeling and error analysis, an integrated deeply-coupled hardware prototype is developed with optimized aiding information. Finally, the performance of the INS-aided PLLs designed with the proposed steady-state error model is evaluated through simulation and road tests of the hardware prototype. PMID:25569751
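
    As background for the steady-state error budget discussed above, the sketch below evaluates the standard textbook PLL rules of thumb (thermal-noise jitter plus third-order-loop dynamic stress error); it is not the paper's INS-aided error model, and the C/N0, bandwidth, integration time and jerk values are assumptions.

        # Rule-of-thumb PLL tracking-error budget (textbook model): thermal-noise
        # jitter plus dynamic stress error, checked against the usual 45-degree
        # (3-sigma) tracking-threshold rule of thumb for a Costas loop.
        import math

        def thermal_jitter_deg(cn0_dbhz, bn_hz, t_coh_s):
            """1-sigma PLL thermal-noise jitter in degrees."""
            cn0 = 10 ** (cn0_dbhz / 10.0)
            var = (bn_hz / cn0) * (1.0 + 1.0 / (2.0 * t_coh_s * cn0))
            return (360.0 / (2.0 * math.pi)) * math.sqrt(var)

        def dynamic_stress_deg(jerk_mps3, bn_hz, wavelength_m=0.1903):
            """Steady-state dynamic stress error of a third-order PLL, in degrees.
            Uses the common relation omega0 = Bn / 0.7845 for a third-order loop."""
            omega0 = bn_hz / 0.7845
            err_m = jerk_mps3 / omega0**3        # steady-state error in meters
            return 360.0 * err_m / wavelength_m  # convert to carrier-phase degrees

        cn0, bn, t_coh = 38.0, 15.0, 0.02  # assumed C/N0 [dB-Hz], bandwidth [Hz], 20 ms
        jerk = 10.0                        # assumed line-of-sight jerk [m/s^3]

        sigma = thermal_jitter_deg(cn0, bn, t_coh)
        theta_e = dynamic_stress_deg(jerk, bn)
        print(f"thermal jitter: {sigma:.2f} deg, dynamic stress: {theta_e:.2f} deg")
        print("3-sigma + stress <= 45 deg?", 3 * sigma + theta_e <= 45.0)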

Top