Sample records for optimal sampling schemes

  1. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme makes it possible to reduce the number of sampling points without decreasing, and sometimes even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or combinations of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which can define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The
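
    A minimal sketch of the MMSD-driven spatial simulated annealing idea is given below (Python, numpy only). It perturbs one sample location at a time and accepts moves with the Metropolis rule under a geometric cooling law; the field is an idealized unit square, and the sampling constraints, ECa weighting (MWMSD), and kriging-variance (MAOKV) criteria of the MSANOS software are not reproduced. All names and parameters are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mmsd(samples, eval_grid):
        """Mean of the shortest distances from each evaluation point to a sample."""
        d = np.linalg.norm(eval_grid[:, None, :] - samples[None, :, :], axis=2)
        return d.min(axis=1).mean()

    def optimize_scheme(n_samples=20, n_iter=5000, t0=1.0, cooling=0.999):
        # unit-square "field" discretized into candidate evaluation points
        g = np.linspace(0, 1, 40)
        eval_grid = np.array(np.meshgrid(g, g)).reshape(2, -1).T
        samples = rng.random((n_samples, 2))          # initial random scheme
        cost, temp = mmsd(samples, eval_grid), t0
        for _ in range(n_iter):
            cand = samples.copy()
            i = rng.integers(n_samples)
            cand[i] = np.clip(cand[i] + rng.normal(0, 0.05, 2), 0, 1)  # jitter one point
            c = mmsd(cand, eval_grid)
            if c < cost or rng.random() < np.exp((cost - c) / temp):   # Metropolis acceptance
                samples, cost = cand, c
            temp *= cooling                                            # geometric cooling law
        return samples, cost

    if __name__ == "__main__":
        pts, c = optimize_scheme()
        print(f"optimized MMSD = {c:.4f}")
    ```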

  2. Optimal updating magnitude in adaptive flat-distribution sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Drake, Justin A.; Ma, Jianpeng; Pettitt, B. Montgomery

    2017-11-01

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
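
    As an illustration of the single-bin updating and inverse-time schedule discussed above, the following sketch runs a Wang-Landau-style flat-distribution sampler on a toy one-dimensional potential of mean force (Python, numpy only). The potential, bin count, and the point at which the schedule switches to inverse-time decay are assumptions for illustration, not the systems or exact schedules studied in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    nbins = 50
    x = np.linspace(-2, 2, nbins)
    pmf = 4 * (x**2 - 1) ** 2                 # toy double-well potential of mean force
    bias = np.zeros(nbins)                    # adaptive bias potential
    hist = np.zeros(nbins)

    i = 0                                     # current bin
    delta = 1.0                               # initial updating magnitude
    for t in range(1, 200001):
        j = (i + rng.choice((-1, 1))) % nbins            # propose a neighboring bin
        dU = (pmf[j] + bias[j]) - (pmf[i] + bias[i])
        if dU <= 0 or rng.random() < np.exp(-dU):        # Metropolis step on the biased potential
            i = j
        bias[i] += delta                                 # single-bin histogram-based update
        hist[i] += 1
        # switch from a constant updating magnitude to an inverse-time schedule ~ nbins/t
        delta = min(delta, nbins / t)

    flatness = hist.min() / hist.mean()
    print(f"histogram flatness (min/mean) = {flatness:.2f}")
    ```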

  3. Optimal updating magnitude in adaptive flat-distribution sampling.

    PubMed

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.

  4. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index, and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between these two models was then carried out. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimized configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the spatial distribution of soil attributes by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter with low cost and high efficiency.

  5. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    PubMed

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.

  6. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    NASA Technical Reports Server (NTRS)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite-difference scheme modified by the authors: 4th-order Runge-Kutta time stepping combined with a 4th-order pentadiagonal compact spatial discretization having maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.

  7. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study wavefield characteristics in tilted transversely isotropic (TTI) media. However, it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes, based on the sampling approximation (SA) method and the least-squares (LS) method respectively, to overcome this problem. We first briefly introduce the RSFD theory, based on which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes and indicates that they can effectively widen the accurate wavenumber range compared with the TE-based RSFD scheme. Further comparisons between the two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators and thus save computing time.

  8. Optimal sampling with prior information of the image geometry in microfluidic MRI.

    PubMed

    Han, S H; Cho, H; Paulsen, J L

    2015-03-01

    Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to designing partial weighted random sampling schemes is to bias sampling toward the high-signal-energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, and more generally yields inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is determined not by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
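
    The baseline strategy the abstract refers to, biasing k-space samples toward high-signal-energy locations computed from the binarized geometry, can be sketched as below (Python, numpy only). The channel geometry, the uniform probability floor, and the undersampling rate are illustrative assumptions; the paper's marginal-change design criterion is not implemented here.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # binarized 64x64 "channel" geometry: a horizontal strip of fluid
    geom = np.zeros((64, 64))
    geom[28:36, :] = 1.0

    kspace = np.fft.fftshift(np.fft.fft2(geom))
    energy = np.abs(kspace) ** 2

    # weight sampling density by signal energy, with a small uniform floor so
    # every k-space location keeps a nonzero selection probability
    prob = energy.ravel() + 1e-3 * energy.max()
    prob /= prob.sum()

    n_samples = int(0.25 * prob.size)                 # 4x undersampling
    picked = rng.choice(prob.size, size=n_samples, replace=False, p=prob)
    mask = np.zeros(prob.size, bool)
    mask[picked] = True
    mask = mask.reshape(geom.shape)

    # naive zero-filled reconstruction from the sampled k-space
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
    print(f"sampled fraction {mask.mean():.2f}, "
          f"mean reconstruction error {np.abs(recon - geom).mean():.3f}")
    ```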

  9. [Study on optimal model of hypothetical work injury insurance scheme].

    PubMed

    Ye, Chi-yu; Dong, Heng-jin; Wu, Yuan; Duan, Sheng-nan; Liu, Xiao-fang; You, Hua; Hu, Hui-mei; Wang, Lin-hao; Zhang, Xing; Wang, Jing

    2013-12-01

    To explore an optimal model of a hypothetical work injury insurance scheme that is in line with the wishes of workers, based on the problems in the implementation of work injury insurance in China, and to provide useful information for relevant policy makers. Multistage cluster sampling was used to select subjects: first, 9 small, medium, and large enterprises were selected from three cities (counties) in Zhejiang Province, China, according to economic development, transportation, and cooperation; then, 31 workshops were randomly selected from the 9 enterprises. Face-to-face interviews were conducted by trained interviewers using a pre-designed questionnaire among all workers in the 31 workshops. After optimization of the hypothetical work injury insurance scheme, the willingness to participate in the scheme increased from 73.87% to 80.96%; the average willingness to pay for the scheme increased from 2.21% of monthly wage (51.77 yuan) to 2.38% (54.93 yuan); the median willingness to pay increased from 1% to 1.2% of monthly wage, but decreased from 35 yuan to 30 yuan. The optimal model of the hypothetical work injury insurance scheme covers all national and provincial statutory occupational diseases and work accidents, as well as consultations about occupational diseases. The scheme is supposed to be implemented nationwide by the National Social Security Department, without regional differences. The premium is borne by the state, enterprises, and individuals, and an independent insurance fund is kept in a lifetime personal account for each insured individual. The premium is not refunded in any event. Compensation for occupational diseases or work accidents is unrelated to the enterprise of the insured worker but related to the length of insurance. The insurance becomes effective one year after enrollment, while compensation is paid immediately after an occupational disease or accident occurs. The optimal model of hypothetical work injury insurance

  10. Why sampling scheme matters: the effect of sampling scheme on landscape genetic results

    Treesearch

    Michael K. Schwartz; Kevin S. McKelvey

    2008-01-01

    There has been a recent trend in genetic studies of wild populations where researchers have changed their sampling schemes from sampling pre-defined populations to sampling individuals uniformly across landscapes. This reflects the fact that many species under study are continuously distributed rather than clumped into obvious "populations". Once individual...

  11. Multiobjective hyper heuristic scheme for system design and optimization

    NASA Astrophysics Data System (ADS)

    Rafique, Amer Farhan

    2012-11-01

    As system design becomes more and more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less and less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through the multiobjective scheme. Another objective of the intended approach is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the possibility of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics in order to increase the certainty of reaching the global optimum. Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as low-level meta-heuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds diversity to the population, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.

  12. An Improved Transformation and Optimized Sampling Scheme for the Numerical Evaluation of Singular and Near-Singular Potentials

    NASA Technical Reports Server (NTRS)

    Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.

    2007-01-01

    Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.

  13. Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.

    PubMed

    Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian

    2014-01-01

    In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
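
    The greedy incremental idea mentioned in the abstract can be illustrated roughly as follows (Python, numpy only): directions are picked one at a time from a dense candidate pool so as to maximize the minimal, antipodally symmetric angular separation to the directions already chosen. This is only a farthest-point illustration under assumed pool and shell sizes, not the MILP or gradient-descent solvers of the paper, and it covers a single shell only.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def random_unit_vectors(n):
        v = rng.normal(size=(n, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    def angular_sep(u, v):
        # antipodally symmetric angle between gradient directions
        return np.arccos(np.clip(np.abs(u @ v.T), 0, 1))

    def greedy_spherical_code(n_dirs=30, pool_size=5000):
        pool = random_unit_vectors(pool_size)
        chosen = [pool[0]]
        for _ in range(n_dirs - 1):
            d = angular_sep(pool, np.array(chosen)).min(axis=1)  # distance to nearest chosen direction
            chosen.append(pool[np.argmax(d)])                    # farthest-point choice
        return np.array(chosen)

    dirs = greedy_spherical_code()
    off_diag = ~np.eye(len(dirs), dtype=bool)
    min_angle = np.degrees(angular_sep(dirs, dirs)[off_diag].min())
    print(f"minimal angular separation: {min_angle:.1f} deg")
    ```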

  14. Optimization of a middle atmosphere diagnostic scheme

    NASA Astrophysics Data System (ADS)

    Akmaev, Rashid A.

    1997-06-01

    A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above mentioned rms temperature deviation provides an objective measure of the `distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A proper optimal adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.

  15. Rapid Parameterization Schemes for Aircraft Shape Optimization

    NASA Technical Reports Server (NTRS)

    Li, Wu

    2012-01-01

    A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.

  16. Receiver-Coupling Schemes Based On Optimal-Estimation Theory

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1992-01-01

    Two schemes for reception of weak radio signals conveying digital data via phase modulation provide for mutual coupling of multiple receivers and coherent combination of the receiver outputs. In both schemes, optimal mutual-coupling weights are computed according to Kalman-filter theory; the schemes differ in the manner of transmission and combination of the receiver outputs.

  17. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
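
    A minimal simulation sketch of the two-phase idea, over-sampling discordant results for gold-standard verification and estimating the difference in sensitivities with inverse-probability weights, is given below (Python, numpy only). The prevalence, rule accuracies, and verification fractions are invented for illustration and do not come from the paper; the paper's analytic variance formulas and optimal allocation are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    N = 20000
    truth = rng.random(N) < 0.10                       # true positive status
    rule1 = np.where(truth, rng.random(N) < 0.90, rng.random(N) < 0.05)  # sens 0.90, spec 0.95
    rule2 = np.where(truth, rng.random(N) < 0.85, rng.random(N) < 0.03)  # sens 0.85, spec 0.97

    discordant = rule1 != rule2
    pi = np.where(discordant, 0.8, 0.1)                # phase-two verification probabilities
    verified = rng.random(N) < pi

    w = verified / pi                                  # inverse-probability weights
    tp_weight = (w * truth).sum()                      # weighted count of true positives
    sens1 = (w * truth * rule1).sum() / tp_weight
    sens2 = (w * truth * rule2).sum() / tp_weight
    print(f"estimated difference in sensitivities: {sens1 - sens2:+.3f} (simulated truth: +0.050)")
    ```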

  18. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer to wafer and within wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field by field extrapolated modeling algorithm helps to maximize model stability and minimize on product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, that is unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to

  19. Laplace-Fourier-domain dispersion analysis of an average derivative optimal scheme for scalar-wave equation

    NASA Astrophysics Data System (ADS)

    Chen, Jing-Bo

    2014-06-01

    By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.

  20. Optimized spectroscopic scheme for enhanced precision CO measurements with applications to urban source attribution

    NASA Astrophysics Data System (ADS)

    Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.

    2014-12-01

    Carbon monoxide (CO) is an urban pollutant generated by internal combustion engines which contributes to the formation of ground level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40% from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high frequency temporal sampling and increases the cost and complexity of the measurement system.

  1. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    NASA Astrophysics Data System (ADS)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors requiring larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand of well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically in order to minimize the impact of transient conditions on WHPA delineation. For optimizing pumping schemes, we consider three objectives: (1) to minimize the risk of pumping water from outside a given WHPA, (2) to maximize the groundwater supply, and (3) to minimize the involved operating costs. We solve transient groundwater flow with an available transient groundwater model coupled to Lagrangian particle tracking. The optimization problem is formulated as a dynamic programming problem. Two different optimization approaches are explored: the first aims for single-objective optimization under objective (1) only; the second performs multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  2. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    DTIC Science & Technology

    2016-01-01

    Moulder, William F.; Krieger, James D.; Maurais-Galejs, Denise T.; Huy... described and validated experimentally with the formation of high quality microwave images. It is further shown that the scheme is more than two orders of... scheme (wherein transmitters and receivers are co-located) which require NTNR transmit-receive elements to achieve the same sampling. The second

  3. An Optimized Handover Scheme with Movement Trend Awareness for Body Sensor Networks

    PubMed Central

    Sun, Wen; Zhang, Zhiqiang; Ji, Lianying; Wong, Wai-Choong

    2013-01-01

    When a body sensor network (BSN) that is linked to the backbone via a wireless network interface moves from one coverage zone to another, a handover is required to maintain network connectivity. This paper presents an optimized handover scheme with movement trend awareness for BSNs. The proposed scheme predicts the future position of a BSN user using the movement trend extracted from historical positions, and adjusts the handover decision accordingly. The handover initiation time is optimized when the estimated unnecessary-handover rate meets the requirement and the outage probability is minimized. The proposed handover scheme is simulated in a BSN deployment area in a hospital environment in the UK. Simulation results show that the proposed scheme reduces the outage probability by 22% as compared with the existing hysteresis-based handover scheme under the constraint of an acceptable handover rate. PMID:23736852

  4. An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1995-01-01

    This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.

  5. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-08-01

    ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.

  6. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning.

    PubMed

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-07-20

    ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.

  7. Optimization research of railway passenger transfer scheme based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Ni, Xiang

    2018-05-01

    The optimization of railway passenger transfer schemes can provide strong support for the railway passenger transport system; its essence is path search. This paper calculates passenger transfer schemes for high-speed railway given the departure and arrival times and stations. The specific method generates a passenger transfer service network for the high-speed railway, establishes an optimization model, and searches it with an Ant Colony Algorithm. Finally, the scheme from LanZhouxi to BeiJingXi is analyzed based on the Chinese high-speed railway network of 2017. The results show that the transfer network and model have relatively high practical value and operational efficiency.

  8. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks

    PubMed Central

    Robinson, Y. Harold; Rajaram, M.

    2015-01-01

    Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds optimal loop-free paths to solve the link-disjoint path problem in a MANET. The CTRNN is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
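
    For reference, a generic particle swarm optimization loop of the kind used above to train the path-selection model can be sketched as follows (Python, numpy only). The two-dimensional toy cost stands in for a route-quality measure combining metrics such as energy and transmission cost; it is not the EMPSO/CTRNN formulation itself, and all coefficients are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def route_cost(p):                       # placeholder cost combining two "metrics"
        return (p[..., 0] - 0.3) ** 2 + 2 * (p[..., 1] - 0.7) ** 2

    n, dim, iters = 30, 2, 200
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, and social weights
    x = rng.random((n, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), route_cost(x)
    gbest = pbest[np.argmin(pbest_cost)]

    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        cost = route_cost(x)
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = x[better], cost[better]
        gbest = pbest[np.argmin(pbest_cost)]

    print("best parameters:", gbest.round(3), "cost:", route_cost(gbest).round(5))
    ```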

  9. A Noise-Filtered Under-Sampling Scheme for Imbalanced Classification.

    PubMed

    Kang, Qi; Chen, XiaoShuang; Li, SiSi; Zhou, MengChu

    2017-12-01

    Under-sampling is a popular data preprocessing method for dealing with class imbalance problems, with the purposes of balancing datasets to achieve a high classification rate and avoiding bias toward majority class examples. It always uses the full minority data in a training dataset. However, some noisy minority examples may reduce the performance of classifiers. In this paper, a new under-sampling scheme is proposed by incorporating a noise filter before executing resampling. In order to verify its efficiency, this scheme is implemented on top of four popular under-sampling methods, i.e., Undersampling + Adaboost, RUSBoost, UnderBagging, and EasyEnsemble, through benchmarks and significance analysis. Furthermore, this paper also summarizes the relationship between algorithm performance and imbalance ratio. Experimental results indicate that the proposed scheme significantly improves the original under-sampling-based methods in terms of three popular metrics for imbalanced classification, i.e., the area under the curve (AUC), F-measure, and G-mean.
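
    A rough sketch of the "filter noise, then under-sample" idea is shown below (Python, numpy only): minority examples whose nearest neighbors are all majority-class are dropped as likely noise, and the majority class is then randomly under-sampled to match. The filter rule, neighborhood size, and toy data are assumptions for illustration, not the paper's exact procedure or the four ensemble methods it builds on.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def noise_filtered_undersample(X, y, k=5):
        minority, majority = (0, 1) if (y == 0).sum() < (y == 1).sum() else (1, 0)
        keep_min = []
        for i in np.where(y == minority)[0]:
            d = np.linalg.norm(X - X[i], axis=1)
            nn = np.argsort(d)[1:k + 1]               # k nearest neighbors (excluding self)
            if (y[nn] == minority).sum() >= 1:        # drop only if all neighbors are majority
                keep_min.append(i)
        maj_idx = np.where(y == majority)[0]
        keep_maj = rng.choice(maj_idx, size=len(keep_min), replace=False)  # balance classes
        idx = np.concatenate([keep_min, keep_maj])
        return X[idx], y[idx]

    # toy imbalanced data: 500 majority vs 50 minority points
    X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(1.5, 1, (50, 2))])
    y = np.array([1] * 500 + [0] * 50)
    Xb, yb = noise_filtered_undersample(X, y)
    print(np.bincount(yb))          # balanced class counts after filtering and under-sampling
    ```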

  10. Quantum Optimal Multiple Assignment Scheme for Realizing General Access Structure of Secret Sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

    The multiple assignment scheme assigns one or more shares to a single participant so that any kind of access structure can be realized by classical secret sharing schemes. We propose its quantum version, including ramp secret sharing schemes. We then propose an integer optimization approach to minimize the average share size.

  11. The optimization on flow scheme of helium liquefier with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, H. R.; Xiong, L. Y.; Peng, N.; Liu, L. Q.

    2017-01-01

    There are several ways to organize the flow scheme of a helium liquefier, such as arranging the expanders in parallel (reverse Brayton stage) or in series (modified Brayton stages). In this paper, the inlet mass flows and temperatures of the expanders in the Collins cycle are optimized using a genetic algorithm (GA). Results show that the maximum liquefaction rate can be obtained when the system operates at the optimal parameters. However, the reliability of the system is poor because of the high wheel speed of the first turbine. The study shows that the scheme in which expanders are arranged in series with heat exchangers between them has higher operational reliability but lower plant efficiency under the same conditions. Considering both liquefaction rate and system stability, another flow scheme is put forward in the hope of resolving this dilemma. The three configurations are compared from different aspects: economic cost, heat exchanger size, system reliability, and exergy efficiency. In addition, the effect of the heat capacity ratio on heat transfer efficiency is discussed. Guidance on choosing the liquefier configuration is given at the end, which is meaningful for the optimal design of helium liquefiers.
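
    The genetic-algorithm part of such a study can be sketched generically as below (Python, numpy only). The two decision variables stand in for an expander inlet mass-flow fraction and inlet temperature, and the smooth placeholder objective is not a thermodynamic model of the Collins cycle; only the GA structure (bounds, selection, crossover, mutation) is illustrated, with all values assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    bounds = np.array([[0.1, 0.9],      # expander mass-flow fraction (illustrative)
                       [20.0, 60.0]])   # expander inlet temperature in K (illustrative)

    def liquefaction_rate(p):
        f, T = p                        # placeholder smooth objective with one optimum
        return -((f - 0.55) ** 2 + ((T - 38.0) / 40.0) ** 2)

    def ga(pop_size=40, gens=100, mut=0.1):
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 2))
        for _ in range(gens):
            fit = np.array([liquefaction_rate(p) for p in pop])
            parents = pop[np.argsort(fit)[-pop_size // 2:]]          # truncation selection
            kids = []
            while len(kids) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                w = rng.random()
                child = w * a + (1 - w) * b                          # arithmetic crossover
                child += rng.normal(0, mut, 2) * (bounds[:, 1] - bounds[:, 0])  # mutation
                kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
            pop = np.vstack([parents, kids])
        return pop[np.argmax([liquefaction_rate(p) for p in pop])]

    print("best (flow fraction, temperature):", ga().round(3))
    ```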

  12. Planning Framework for Mesolevel Optimization of Urban Runoff Control Schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Qianqian; Blohm, Andrew; Liu, Bo

    A planning framework is developed to optimize runoff control schemes at scales relevant for regional planning at an early stage. The framework employs less sophisticated modeling approaches to allow practical application in developing regions with limited data sources and computing capability. The methodology contains three interrelated modules: (1) the geographic information system (GIS)-based hydrological module, which assesses local hydrological constraints and potential for runoff control according to regional land-use descriptions; (2) the grading module, which is built upon the method of fuzzy comprehensive evaluation and is used to establish a priority ranking system to assist the allocation of runoff control targets at the subdivision level; and (3) the genetic algorithm-based optimization module, which derives Pareto-based optimal solutions for mesolevel allocation with multiple competing objectives. The optimization approach describes the trade-off between different allocation plans and simultaneously ensures that all allocation schemes satisfy the minimum requirement on runoff control. Our results highlight the importance of considering the mesolevel allocation strategy in addition to measures at macrolevels and microlevels in urban runoff management. (C) 2016 American Society of Civil Engineers.

  13. Optimization of the two-sample rank Neyman-Pearson detector

    NASA Astrophysics Data System (ADS)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal algorithms based on rank statistics for finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is computed from the relations between x and the variable y representing the interference. Attention is given to conditions in the absence of a signal, the probability of detecting an arriving signal, details regarding the use of the Neyman-Pearson criterion, the scheme of an optimal rank multichannel incoherent detector, and an analysis of the detector.
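
    A toy two-sample rank detector under a Neyman-Pearson constraint can be sketched as follows (Python, numpy only): each observation is ranked against reference noise samples, the rank sum is thresholded, and the threshold is set by Monte Carlo under the noise-only hypothesis for a chosen false-alarm probability. The distributions, sample sizes, and signal level are illustrative assumptions, not the configuration analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n, m, pfa = 16, 64, 1e-2

    def rank_statistic(x, noise_ref):
        # rank of each observation = number of reference noise samples it exceeds
        return (x[:, None] > noise_ref[None, :]).sum(axis=1).sum()

    # set the threshold from noise-only Monte Carlo trials (Neyman-Pearson constraint)
    null_stats = np.array([rank_statistic(rng.normal(size=n), rng.normal(size=m))
                           for _ in range(20000)])
    threshold = np.quantile(null_stats, 1 - pfa)

    # detection probability for a weak signal added to the observations
    trials = 5000
    hits = sum(rank_statistic(rng.normal(0.5, 1.0, n), rng.normal(size=m)) > threshold
               for _ in range(trials))
    print(f"threshold = {threshold:.0f}, detection probability at signal level 0.5: {hits / trials:.2f}")
    ```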

  14. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    PubMed Central

    An, Yongkai; Lu, Wenxi; Cheng, Weiguo

    2015-01-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme, with western Jilin Province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County so as to supply water to Daan County. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region of the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours while the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. It thus provides an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
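
    The Latin Hypercube Sampling step used to collect surrogate training data can be sketched as below (Python, numpy only); the variable bounds, interpreted here as hypothetical pumping rates for the four wells, are illustrative assumptions, and building the regression-kriging surrogate itself is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def latin_hypercube(n_samples, bounds):
        d = len(bounds)
        # one stratified draw per stratum, then permute strata independently per variable
        u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
        for j in range(d):
            u[:, j] = rng.permutation(u[:, j])
        lo, hi = np.array(bounds).T
        return lo + u * (hi - lo)

    # four wells, pumping-rate bounds in m^3/day (illustrative)
    bounds = [(500, 3000)] * 4
    design = latin_hypercube(20, bounds)
    print(design[:5].round(1))
    ```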

  15. A staggered-grid finite-difference scheme optimized in the time–space domain for modeling scalar-wave propagation in geophysical problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov

    For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil to achieve similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent to achieve similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
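
    The general idea of optimizing finite-difference coefficients over a wavenumber band, rather than matching Taylor orders only near zero wavenumber, can be illustrated with the least-squares fit below (Python, numpy only). The stencil, band limit, and fitting target are assumptions for a standard symmetric second-derivative operator; this is not the paper's time-space-domain stencil or objective function.

    ```python
    import numpy as np

    h, half_width = 1.0, 4                     # grid spacing and stencil half-width
    kmax = 0.7 * np.pi / h                     # fit up to 70% of the Nyquist wavenumber
    k = np.linspace(0.1, kmax, 400)

    # spectral response of a symmetric stencil: (a0 + 2*sum_m a_m cos(m k h)) / h^2
    A = np.hstack([np.ones((k.size, 1)),
                   2 * np.cos(np.outer(k, np.arange(1, half_width + 1)) * h)]) / h**2
    target = -k**2                             # exact second-derivative response
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)

    rel_err = np.abs((A @ coeffs - target) / target).max()
    print("optimized coefficients:", np.round(coeffs, 5))
    print(f"max relative dispersion error up to 0.7*Nyquist: {rel_err:.2e}")
    ```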

  16. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging

    PubMed Central

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-01-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, covering the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further distinguish two cases according to whether the timing sequence missing between the probe injection and the sampling starting time is taken into account. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point and has a strong correlation with the sampling sparsity. The growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operations. PMID:29675325

  17. Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield

    Treesearch

    Robert B. Thomas

    1986-01-01

    SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes for estimating variance. Sampling probabilities are based on a sediment rating function, which promotes greater sampling intensity during periods of high...
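
    The flavor of variable-probability sampling with an unbiased estimator can be sketched as follows (Python, numpy only): selection probabilities follow a sediment rating function of discharge, and a Hansen-Hurwitz style estimator recovers the total yield. The flow record, rating function, and sample size are invented for illustration; this is not the SALT algorithm itself, which selects samples in real time from a pre-built list.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    discharge = rng.lognormal(mean=2.0, sigma=0.6, size=1000)   # hypothetical flow record
    true_conc = 0.5 * discharge**1.3                            # "true" concentration
    yield_i = true_conc * discharge                             # per-interval sediment yield

    rating = 0.4 * discharge**1.2                  # rating function used only for weighting
    p = rating / rating.sum()                      # selection probabilities (sum to 1)

    n = 50
    idx = rng.choice(discharge.size, size=n, replace=True, p=p)
    estimate = (yield_i[idx] / p[idx]).mean()      # Hansen-Hurwitz estimator of the total
    print(f"true total yield  : {yield_i.sum():.1f}")
    print(f"estimated total   : {estimate:.1f}")
    ```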

  18. Salmonella enteritidis surveillance by egg immunology: impact of the sampling scheme on the release of contaminated table eggs.

    PubMed

    Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie

    2011-08-01

    Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates for the transmission rate in flocks, and the characteristics of an egg immunological test, we have simulated outbreaks with various sampling schemes, and with the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected, and by the number of contaminated table eggs brought on the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs each 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.

  19. Analyzing Hydraulic Conductivity Sampling Schemes in an Idealized Meandering Stream Model

    NASA Astrophysics Data System (ADS)

    Stonedahl, S. H.; Stonedahl, F.

    2017-12-01

    Hydraulic conductivity (K) is an important parameter affecting the flow of water through sediments under streams, and it can vary by orders of magnitude within a stream reach. Measuring heterogeneous K distributions in the field is limited by time and resources. This study investigates hypothetical sampling practices within a modeling framework on a highly idealized meandering stream. We generated three sets of 100 hydraulic conductivity grids containing two sands with connectivity values of 0.02, 0.08, and 0.32. We investigated systems with twice as much fast (K=0.1 cm/s) sand as slow sand (K=0.01 cm/s), and the reverse ratio on the same grids. The K values did not vary with depth. For these 600 cases, we calculated the homogeneous K value, Keq, that would yield the same flux into the sediments as the corresponding heterogeneous grid. We then investigated sampling schemes with six weighted probability distributions derived from the homogeneous case: uniform, flow-paths, velocity, in-stream, flux-in, and flux-out. For each grid, we selected locations from these distributions and compared the arithmetic, geometric, and harmonic means of these lists to the corresponding Keq using the root-mean-square deviation. We found that arithmetic averaging of samples outperformed geometric or harmonic means for all sampling schemes. Of the sampling schemes, flux-in (sampling inside the stream in an inward-flux-weighted manner) yielded the least error and flux-out yielded the most error. All three sampling schemes outside of the stream yielded very similar results. Grids with lower connectivity values (fewer and larger clusters) showed the most sensitivity to the choice of sampling scheme, and thus improved the most with flux-in sampling. We also explored the relationship between the number of samples taken and the resulting error. Increasing the number of sampling points reduced error for the arithmetic mean with diminishing returns, but did not substantially reduce error
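
    The averaging comparison described above can be reproduced in miniature as below (Python, numpy only): point samples of hydraulic conductivity from a two-sand field are combined with arithmetic, geometric, and harmonic means and compared against a reference equivalent conductivity. The field, sampling locations, and the reference Keq value are illustrative placeholders, not results from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    k_fast, k_slow = 0.1, 0.01                        # cm/s, the two sands from the abstract
    field = rng.choice([k_fast, k_slow], p=[2/3, 1/3], size=(100, 100))
    k_eq = 0.06                                       # placeholder "true" equivalent K

    def means(samples):
        a = samples.mean()
        g = np.exp(np.log(samples).mean())
        h = 1.0 / (1.0 / samples).mean()
        return a, g, h

    samples = field[rng.integers(0, 100, 25), rng.integers(0, 100, 25)]  # 25 point samples
    for name, m in zip(("arithmetic", "geometric", "harmonic"), means(samples)):
        print(f"{name:10s} mean = {m:.4f}  (deviation from K_eq: {m - k_eq:+.4f})")
    ```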

  20. Revisiting Intel Xeon Phi optimization of Thompson cloud microphysics scheme in Weather Research and Forecasting (WRF) model

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2015-10-01

    The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.

  1. Optimizing Scheme for Remote Preparation of Four-particle Cluster-like Entangled States

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Ye, Liu

    2011-09-01

    Recently, Ma et al. (Opt. Commun. 283:2640, 2010) have proposed a novel scheme for preparing a class of cluster-like entangled states based on a four-particle projective measurement. In this paper, we put forward a new and optimal scheme to realize remote preparation of this class of cluster-like states with the aid of two bipartite partially entangled channels. Different from the previous scheme, we employ a two-particle projective measurement instead of the four-particle projective measurement during the preparation. Besides, the resource consumptions of our scheme are computed, including the classical communication cost and the quantum resource consumption. Moreover, we discuss the features of our scheme and compare the resource consumption and operation complexity of the previous scheme and ours. The results show that our scheme is more economical and feasible compared with the previous one.

  2. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    NASA Astrophysics Data System (ADS)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, featuring low cost, high timeliness, and moderate/low spatial resolution, were first used in the North China Plain (NCP) study region for mixed-pixel spectral decomposition to extract a useful regionalized indicator parameter (RIP) from the initially selected indicators, namely the fraction (percentage) of winter wheat planting area in each pixel, used as a regionalized indicator variable (RIV) for spatial sampling. Then, the RIV values were spatially analyzed to characterize the spatial structure (i.e., spatial correlation and variation) of the NCP, which was further processed to obtain scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed, providing a scientific basis for improving and optimizing the existing spatial sampling schemes of large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded system of extrapolation results were able to implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, in order to satisfy the actual needs of sampling surveys.

  3. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    PubMed

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component in ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for successful construction of urban rivers. Therefore, relevant studies on the optimization of schemes for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for natural ecology planning of urban rivers can be made by the ANP method. This method could be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for optimization of schemes for urban green space planning and design.

  4. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve the resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.

  5. The Impact of Sampling Schemes on the Site Frequency Spectrum in Nonequilibrium Subdivided Populations

    PubMed Central

    Städler, Thomas; Haubold, Bernhard; Merino, Carlos; Stephan, Wolfgang; Pfaffelhuber, Peter

    2009-01-01

    Using coalescent simulations, we study the impact of three different sampling schemes on patterns of neutral diversity in structured populations. Specifically, we are interested in two summary statistics based on the site frequency spectrum as a function of migration rate, demographic history of the entire substructured population (including timing and magnitude of specieswide expansions), and the sampling scheme. Using simulations implementing both finite-island and two-dimensional stepping-stone spatial structure, we demonstrate strong effects of the sampling scheme on Tajima's D (DT) and Fu and Li's D (DFL) statistics, particularly under specieswide (range) expansions. Pooled samples yield average DT and DFL values that are generally intermediate between those of local and scattered samples. Local samples (and to a lesser extent, pooled samples) are influenced by local, rapid coalescence events in the underlying coalescent process. These processes result in lower proportions of external branch lengths and hence lower proportions of singletons, explaining our finding that the sampling scheme affects DFL more than it does DT. Under specieswide expansion scenarios, these effects of spatial sampling may persist up to very high levels of gene flow (Nm > 25), implying that local samples cannot be regarded as being drawn from a panmictic population. Importantly, many data sets on humans, Drosophila, and plants contain signatures of specieswide expansions and effects of sampling scheme that are predicted by our simulation results. This suggests that validating the assumption of panmixia is crucial if robust demographic inferences are to be made from local or pooled samples. However, future studies should consider adopting a framework that explicitly accounts for the genealogical effects of population subdivision and empirical sampling schemes. PMID:19237689

  6. Finite volume treatment of dispersion-relation-preserving and optimized prefactored compact schemes for wave propagation

    NASA Astrophysics Data System (ADS)

    Popescu, Mihaela; Shyy, Wei; Garbey, Marc

    2005-12-01

    In developing suitable numerical techniques for computational aero-acoustics, the dispersion-relation-preserving (DRP) scheme by Tam and co-workers and the optimized prefactored compact (OPC) scheme by Ashcroft and Zhang have shown desirable properties of reducing both dissipative and dispersive errors. These schemes, originally based on the finite difference, attempt to optimize the coefficients for better resolution of short waves with respect to the computational grid while maintaining pre-determined formal orders of accuracy. In the present study, finite volume formulations of both schemes are presented to better handle the nonlinearity and complex geometry encountered in many engineering applications. Linear and nonlinear wave equations, with and without viscous dissipation, have been adopted as the test problems. Highlighting the principal characteristics of the schemes and utilizing linear and nonlinear wave equations with different wavelengths as the test cases, the performance of these approaches is documented. For the linear wave equation, there is no major difference between the DRP and OPC schemes. For the nonlinear wave equations, the finite volume version of both DRP and OPC schemes offers substantially better solutions in regions of high gradient or discontinuity.

  7. Optimal Quantization Scheme for Data-Efficient Target Tracking via UWSNs Using Quantized Measurements.

    PubMed

    Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei

    2017-11-07

    Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also degrades tracking performance owing to the information loss it introduces. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme, and that increasing the data length affects our scheme only slightly. Its tracking performance improves by only 4.4% from 2- to 3-bit quantization, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, and it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data-efficiency, which fits the requirements of low-bandwidth UWSNs.

  8. 40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    40 CFR 761.316 (Protection of Environment): Interpreting PCB concentration measurements resulting from this sampling scheme. (a) For an individual sample taken from an approximately 1 meter square portion of the entire...

  9. Optimal Resource Allocation for NOMA-TDMA Scheme with α-Fairness in Industrial Internet of Things.

    PubMed

    Sun, Yanjing; Guo, Yiyu; Li, Song; Wu, Dapeng; Wang, Bin

    2018-05-15

    In this paper, a joint non-orthogonal multiple access and time division multiple access (NOMA-TDMA) scheme is proposed for the Industrial Internet of Things (IIoT), which allows multiple sensors to transmit in the same time-frequency resource block using NOMA. The user scheduling, time slot allocation, and power control are jointly optimized in order to maximize the system α-fair utility under a transmit power constraint and a minimum rate constraint. The optimization problem is nonconvex because of the fractional objective function and the nonconvex constraints. To deal with the original problem, we first convert the objective function into a difference of two convex functions (D.C.) form, and then propose a NOMA-TDMA-DC algorithm to find the global optimum. Numerical results show that the NOMA-TDMA scheme significantly outperforms the traditional orthogonal multiple access scheme in terms of both spectral efficiency and user fairness.

  10. Design of Multishell Sampling Schemes with Uniform Coverage in Diffusion MRI

    PubMed Central

    Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid

    2017-01-01

    Purpose In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. A careful design is central to the success of multishell acquisition and reconstruction techniques. Methods The design of multishell acquisitions, however, is still an open and active field of research. In this work, we provide a general method to design multishell acquisitions with uniform angular coverage. This method is based on a generalization of electrostatic repulsion to multishell. Results We evaluate the impact of our method using simulations, on the angular resolution in one- and two-bundle fiber configurations. Compared to more commonly used radial sampling, we show that our method improves the angular resolution, as well as fiber crossing discrimination. Discussion We propose a novel method to design sampling schemes with optimal angular coverage and show the positive impact on angular resolution in diffusion MRI. PMID:23625329
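
    As background only, a single-shell simplification of the electrostatic-repulsion idea can be sketched as below; the step size, iteration count, and the omission of both the multishell generalization and antipodal symmetry are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        def repulsion_directions(n, iters=4000, step=1e-3, seed=0):
            """Spread n unit vectors over the sphere by following Coulomb-like repulsion."""
            rng = np.random.default_rng(seed)
            p = rng.normal(size=(n, 3))
            p /= np.linalg.norm(p, axis=1, keepdims=True)
            for _ in range(iters):
                diff = p[:, None, :] - p[None, :, :]                 # pairwise differences
                dist = np.linalg.norm(diff, axis=-1)
                np.fill_diagonal(dist, np.inf)                       # ignore self-interaction
                force = (diff / dist[..., None] ** 3).sum(axis=1)    # repulsive force on each point
                p += step * force
                p /= np.linalg.norm(p, axis=1, keepdims=True)        # project back onto the sphere
            return p

        print(repulsion_directions(30)[:3])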

  11. Probing the microscopic environment of 23Na ions in brain tissue by MRI: On the accuracy of different sampling schemes for the determination of rapid, biexponential T2* decay at low signal-to-noise ratio.

    PubMed

    Lommen, Jonathan M; Flassbeck, Sebastian; Behl, Nicolas G R; Niesporek, Sebastian; Bachert, Peter; Ladd, Mark E; Nagel, Armin M

    2018-08-01

    To investigate and reduce influences on the determination of the short and long apparent transverse relaxation times (T2,s*, T2,l*) of 23Na in vivo with respect to signal sampling. The accuracy of T2* determination was analyzed in simulations for five different sampling schemes. The influence of noise in the parameter fit was investigated for three different models. A dedicated sampling scheme was developed for brain parenchyma by numerically optimizing the parameter estimation. This scheme was compared in vivo to linear sampling at 7T. For the considered sampling schemes, T2,s* / T2,l* exhibit an average bias of 3% / 4% with a variation of 25% / 15%, based on simulations with previously published T2* values. The accuracy could be improved with the optimized sampling scheme by strongly averaging the earliest sample. A fitting model with a constant noise floor can increase accuracy, while additionally fitting a noise term is only beneficial in case of sampling until late echo times > 80 ms. T2* values in white matter were determined to be T2,s* = 5.1 ± 0.8 / 4.2 ± 0.4 ms and T2,l* = 35.7 ± 2.4 / 34.4 ± 1.5 ms using linear/optimized sampling. Voxel-wise T2* determination of 23Na is feasible in vivo. However, sampling and fitting methods have to be chosen carefully to retrieve accurate results. Magn Reson Med 80:571-584, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
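
    For orientation, a minimal biexponential fit of the kind evaluated above could look like the sketch below; the echo times, amplitudes, and noise level are synthetic assumptions, and the constant-noise-floor term discussed in the abstract is omitted.

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(te, a_s, t2s, a_l, t2l):
            """Biexponential T2* signal model with short and long components."""
            return a_s * np.exp(-te / t2s) + a_l * np.exp(-te / t2l)

        # Synthetic echo times (ms) and noisy data; not the study's measurements.
        te = np.linspace(0.3, 80.0, 32)
        rng = np.random.default_rng(1)
        signal = biexp(te, 0.6, 5.0, 0.4, 35.0) + rng.normal(0.0, 0.01, te.size)

        popt, _ = curve_fit(biexp, te, signal, p0=(0.5, 4.0, 0.5, 30.0),
                            bounds=(0, [1.5, 20.0, 1.5, 100.0]))
        print("fitted T2*_s = %.1f ms, T2*_l = %.1f ms" % (popt[1], popt[3]))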

  12. Effect of different sampling schemes on the spatial placement of conservation reserves in Utah, USA

    USGS Publications Warehouse

    Bassett, S.D.; Edwards, T.C.

    2003-01-01

    We evaluated the effect that three different sampling schemes used to organize spatially explicit biological information had on the spatial placement of conservation reserves in Utah, USA. The three sampling schemes consisted of a hexagon representation developed by the EPA/EMAP program (statistical basis), watershed boundaries (ecological), and the current county boundaries of Utah (socio-political). Four decision criteria were used to estimate effects: amount of area, length of edge, lowest number of contiguous reserves, and greatest number of terrestrial vertebrate species covered. A fifth evaluation criterion was the effect each sampling scheme had on the ability of the modeled conservation reserves to cover the six major ecoregions found in Utah. Of the three sampling schemes, county boundaries covered the greatest number of species, but also created the longest length of edge and greatest number of reserves. Watersheds maximized species coverage using the least amount of area. Hexagons and watersheds provided the least amount of edge and fewest number of reserves. Although there were differences in area, edge and number of reserves among the sampling schemes, all three schemes covered all the major ecoregions in Utah and their inclusive biodiversity. © 2003 Elsevier Science Ltd. All rights reserved.

  13. A new sampling scheme for tropical forest monitoring using satellite imagery

    Treesearch

    Frederic Achard; Tim Richards; Javier Gallego

    2000-01-01

    At the global level, a sampling scheme for tropical forest change assessment, using high-resolution satellite images, has been defined using sampling units independent of any particular satellite sensor. For this purpose, a hexagonal tessellation of 3,600 km² has been chosen as the sampling frame.

  14. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    PubMed

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper show significantly shortened acquisition times and improved quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  15. Optimized diffusion gradient orientation schemes for corrupted clinical DTI data sets.

    PubMed

    Dubois, J; Poupon, C; Lethimonnier, F; Le Bihan, D

    2006-08-01

    A method is proposed for generating schemes of diffusion gradient orientations which allow the diffusion tensor to be reconstructed from partial data sets in clinical DT-MRI, should the acquisition be corrupted or terminated before completion because of patient motion. A general energy-minimization electrostatic model was developed in which the interactions between orientations are weighted according to their temporal order during acquisition. In this report, two corruption scenarios were specifically considered for generating relatively uniform schemes of 18 and 60 orientations, with useful subsets of 6 and 15 orientations. The sets and subsets were compared to conventional sets through their energy, condition number and rotational invariance. Schemes of 18 orientations were tested on a volunteer. The optimized sets were similar to uniform sets in terms of energy, condition number and rotational invariance, whether the complete set or only a subset was considered. Diffusion maps obtained in vivo were close to those for uniform sets whatever the acquisition time was. This was not the case with conventional schemes, whose subset uniformity was insufficient. With the proposed approach, sets of orientations responding to several corruption scenarios can be generated, which is potentially useful for imaging uncooperative patients or infants.

  16. Optimal Budget Allocation for Sample Average Approximation

    DTIC Science & Technology

    2011-06-01

    an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample

  17. Intel Many Integrated Core (MIC) architecture optimization strategies for a memory-bound Weather Research and Forecasting (WRF) Goddard microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The WRF is a widely used weather prediction system, and its development is carried out collaboratively around the globe. The Goddard microphysics scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the code of this important part of WRF. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs rather than just kernels as GPUs do. The MIC coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. Those optimization techniques are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved the performance on a dual-socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.

  18. Parametric optimization of optical signal detectors employing the direct photodetection scheme

    NASA Astrophysics Data System (ADS)

    Kirakosiants, V. E.; Loginov, V. A.

    1984-08-01

    The problem of optimizing the parameters of an optical signal detection scheme is addressed using the concept of a receiver with direct photodetection. An expression is derived which accurately approximates the field of view (FOV) values obtained by a direct computer minimization of the probability of missing a signal; optimum values of the receiver FOV were found for different atmospheric conditions characterized by the number of coherence spots and the intensity fluctuations of a plane wave. It is further pointed out that the criterion presented can possibly be used for parametric optimization of detectors operating in accordance with the Neyman-Pearson criterion.

  19. Optimization of Compton-suppression and summing schemes for the TIGRESS HPGe detector array

    NASA Astrophysics Data System (ADS)

    Schumaker, M. A.; Svensson, C. E.; Andreoiu, C.; Andreyev, A.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Boston, A. J.; Chakrawarthy, R. S.; Churchman, R.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Jones, B.; Maharaj, R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Scraggs, H. C.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.

    2007-04-01

    Methods of optimizing the performance of an array of Compton-suppressed, segmented HPGe clover detectors have been developed which rely on the physical position sensitivity of both the HPGe crystals and the Compton-suppression shields. These relatively simple analysis procedures promise to improve the precision of experiments with the TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS). Suppression schemes will improve the efficiency and peak-to-total ratio of TIGRESS for high γ-ray multiplicity events by taking advantage of the 20-fold segmentation of the Compton-suppression shields, while the use of different summing schemes will improve results for a wide range of experimental conditions. The benefits of these methods are compared for many γ-ray energies and multiplicities using a GEANT4 simulation, and the optimal physical configuration of the TIGRESS array under each set of conditions is determined.

  20. A two-objective optimization scheme for high-OSNR and low-power-consuming all-optical networks

    NASA Astrophysics Data System (ADS)

    Abedifar, Vahid; Mirjalili, Seyed Mohammad; Eshghi, Mohammad

    2015-01-01

    In all-optical networks, the ASE noise of the utilized optical power amplifiers is a major impairment, making the OSNR the dominant parameter in QoS. In this paper, a two-objective optimization scheme using Multi-Objective Particle Swarm Optimization (MOPSO) is proposed to reach the maximum OSNR for all channels while the optical power consumed by EDFAs and lasers is minimized. Two scenarios are investigated: Scenario 1 and Scenario 2. The former scenario optimizes the gain values of a predefined number of EDFAs in the physical links; these gain values may differ from each other. The latter scenario optimizes the gain value of the EDFAs (assumed identical within each physical link) in addition to the number of EDFAs for each physical link. In both scenarios, the launch powers of the lasers are also taken into account during the optimization process. Two novel encoding methods are proposed to uniquely represent the problem solutions. Two virtual demand sets are considered for evaluating the performance of the proposed optimization scheme. The simulation results are described for both scenarios and both virtual demands.

  1. New optimization scheme to obtain interaction potentials for oxide glasses

    NASA Astrophysics Data System (ADS)

    Sundararaman, Siddharth; Huang, Liping; Ispas, Simona; Kob, Walter

    2018-05-01

    We propose a new scheme to parameterize effective potentials that can be used to simulate atomic systems such as oxide glasses. As input data for the optimization, we use the radial distribution functions of the liquid and the vibrational density of states of the glass, both obtained from ab initio simulations, as well as experimental data on the pressure dependence of the density of the glass. For the case of silica, we find that this new scheme facilitates finding pair potentials that are significantly more accurate than the previous ones even if the functional form is the same, thus demonstrating that even simple two-body potentials can be superior to more complex three-body potentials. We have tested the new potential by calculating the pressure dependence of the elastic moduli and found good agreement with the corresponding experimental data.

  2. A two‐point scheme for optimal breast IMRT treatment planning

    PubMed Central

    2013-01-01

    We propose an approach to determining optimal beam weights in breast/chest wall IMRT treatment plans. The goal is to decrease breathing effect and to maximize skin dose if the skin is included in the target or, otherwise, to minimize the skin dose. Two points in the target are utilized to calculate the optimal weights. The optimal plan (i.e., the plan with optimal beam weights) consists of high energy unblocked beams, low energy unblocked beams, and IMRT beams. Six breast and five chest wall cases were retrospectively planned with this scheme in Eclipse, including one breast case where CTV was contoured by the physician. Compared with 3D CRT plans composed of unblocked and field‐in‐field beams, the optimal plans demonstrated comparable or better dose uniformity, homogeneity, and conformity to the target, especially at beam junction when supraclavicular nodes are involved. Compared with nonoptimal plans (i.e., plans with nonoptimized weights), the optimal plans had better dose distributions at shallow depths close to the skin, especially in cases where breathing effect was taken into account. This was verified with experiments using a MapCHECK device attached to a motion simulation table (to mimic motion caused by breathing). PACS number: 87.55 de PMID:24257291

  3. Optimizing congestion and emissions via tradable credit charge and reward scheme without initial credit allocations

    NASA Astrophysics Data System (ADS)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang

    2017-01-01

    This paper investigates a revenue-neutral tradable credit charge and reward scheme, without initial credit allocations, that can reassign network traffic flow patterns to optimize congestion and emissions. First, we prove the existence of the proposed schemes and further decentralize the minimum-emission flow pattern to user equilibrium; moreover, we design the solution method of the proposed credit scheme for the minimum-emission problem. Second, we investigate the revenue-neutral tradable credit charge and reward scheme without initial credit allocations for the bi-objective case to obtain the Pareto system-optimum flow patterns of congestion and emissions, and show that the corresponding solutions are located in the polyhedron constituted by a system of inequalities and equalities. Last, a numerical example based on a simple traffic network is used to obtain the proposed credit schemes and verify that they are revenue-neutral.

  4. An Efficient Radial Basis Function Mesh Deformation Scheme within an Adjoint-Based Aerodynamic Optimization Framework

    NASA Astrophysics Data System (ADS)

    Poirier, Vincent

    Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method; however, this comes at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining the lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
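
    For illustration, a bare-bones global RBF interpolant of surface displacements (cubic kernel chosen arbitrarily; the low-order polynomial term often added in practice and the paper's secondary correction step are omitted) might look like:

        import numpy as np

        def rbf_deform(surface_pts, surface_disp, volume_pts, reg=1e-10):
            """Propagate surface-point displacements to volume nodes via RBF interpolation."""
            phi = lambda r: r ** 3                                   # example kernel choice
            d = np.linalg.norm(surface_pts[:, None, :] - surface_pts[None, :, :], axis=-1)
            weights = np.linalg.solve(phi(d) + reg * np.eye(len(surface_pts)), surface_disp)
            d_vol = np.linalg.norm(volume_pts[:, None, :] - surface_pts[None, :, :], axis=-1)
            return phi(d_vol) @ weights

        # Hypothetical toy case: four 2D surface control points, two volume nodes.
        sp = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        sd = np.array([[0.1, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.1]])
        vp = np.array([[0.5, 0.5], [0.25, 0.75]])
        print(rbf_deform(sp, sd, vp))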

  5. Implementation of a cryo-electron tomography tilt-scheme optimized for high resolution subtomogram averaging.

    PubMed

    Hagen, Wim J H; Wan, William; Briggs, John A G

    2017-02-01

    Cryo-electron tomography (cryoET) allows 3D structural information to be obtained from cells and other biological samples in their close-to-native state. In combination with subtomogram averaging, detailed structures of repeating features can be resolved. CryoET data is collected as a series of images of the sample from different tilt angles; this is performed by physically rotating the sample in the microscope between each image. The angles at which the images are collected, and the order in which they are collected, together are called the tilt-scheme. Here we describe a "dose-symmetric tilt-scheme" that begins at low tilt and then alternates between increasingly positive and negative tilts. This tilt-scheme maximizes the amount of high-resolution information maintained in the tomogram for subsequent subtomogram averaging, and may also be advantageous for other applications. We describe implementation of the tilt-scheme in combination with further data-collection refinements including setting thresholds on acceptable drift and improving focus accuracy. Requirements for microscope set-up are introduced, and a macro is provided which automates the application of the tilt-scheme within SerialEM. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
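
    As a rough sketch of the ordering described above (the 3° increment and ±60° range are common choices assumed here, not values prescribed by the abstract), a dose-symmetric tilt series could be generated as:

        def dose_symmetric_tilts(max_tilt=60, increment=3):
            """Tilt angles starting at 0 and alternating between increasingly
            positive and negative tilts."""
            angles = [0]
            tilt = increment
            while tilt <= max_tilt:
                angles.extend([tilt, -tilt])   # visit both sides at each new magnitude
                tilt += increment
            return angles

        print(dose_symmetric_tilts())  # [0, 3, -3, 6, -6, ..., 60, -60]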

  6. Optimal Interpolation scheme to generate reference crop evapotranspiration

    NASA Astrophysics Data System (ADS)

    Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco

    2018-05-01

    We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, the forcing meteorological variables, and their respective error variances for the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. To compute ETo we used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between the quantity and quality of observations.
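
    A toy single-variable optimal interpolation update, with a made-up background field, observation operator, and error covariances rather than the AEMET/climate-model setup used in the study, might read:

        import numpy as np

        def oi_analysis(xb, y, H, B, R):
            """Optimal interpolation: analysis = background + gain * innovation."""
            K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
            xa = xb + K @ (y - H @ xb)                      # analysis field
            Pa = (np.eye(len(xb)) - K @ H) @ B              # analysis error covariance
            return xa, Pa

        # Hypothetical 3-cell background with one observation at the middle cell.
        xb = np.array([2.0, 2.5, 3.0])                      # background ETo-like values
        H = np.array([[0.0, 1.0, 0.0]])                     # observation operator
        y = np.array([2.9])                                 # observed value
        B = 0.4 * np.exp(-np.abs(np.subtract.outer(np.arange(3), np.arange(3))))  # background covariance
        R = np.array([[0.1]])                               # observation error variance
        print(oi_analysis(xb, y, H, B, R)[0])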

  7. Optimization study on multiple train formation scheme of urban rail transit

    NASA Astrophysics Data System (ADS)

    Xia, Xiaomei; Ding, Yong; Wen, Xin

    2018-05-01

    The new organization method, represented by the mixed operation of multi-marshalling trains, can adapt to the characteristics of unevenly distributed passenger flow, but research on this aspect is still not sufficiently developed. This paper introduces the passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, an optimization model is established with the minimization of passenger cost and operation cost as the objective, and operation frequency and passenger demand as constraints. The ideal point method is used to solve this model. Compared with the fixed-marshalling operation model, the proposed scheme reduces the overall costs by 9.24% and 4.43%, respectively. This result not only validates the validity of the model, but also illustrates the advantages of the multiple train formation scheme.

  8. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    PubMed

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  9. Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Yang, Lie

    2018-05-01

    To achieve accurate and completely autonomous navigation for spacecraft, inertial/celestial integrated navigation gets increasing attention. In this study, a missile-borne inertial/stellar refraction integrated navigation scheme is proposed. Position Dilution of Precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition when PDOP reaches the minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulation is conducted. The results of the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) are compared and impact factors of navigation accuracy are studied in the simulation. The simulation results indicated that the proposed observation scheme has an accurate positioning performance, and the results of EKF and UKF are similar.

  10. Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation.

    PubMed

    Lu, Jiazhen; Yang, Lie

    2018-05-01

    To achieve accurate and completely autonomous navigation for spacecraft, inertial/celestial integrated navigation gets increasing attention. In this study, a missile-borne inertial/stellar refraction integrated navigation scheme is proposed. Position Dilution of Precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition when PDOP reaches the minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulation is conducted. The results of the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) are compared and impact factors of navigation accuracy are studied in the simulation. The simulation results indicated that the proposed observation scheme has an accurate positioning performance, and the results of EKF and UKF are similar.

  11. Optimal feedback scheme and universal time scaling for Hamiltonian parameter estimation.

    PubMed

    Yuan, Haidong; Fung, Chi-Hang Fred

    2015-09-11

    Time is a valuable resource and it is expected that a longer time period should lead to better precision in Hamiltonian parameter estimation. However, recent studies in quantum metrology have shown that in certain cases more time may even lead to worse estimations, which puts this intuition into question. In this Letter we show that by including feedback controls this intuition can be restored. By deriving asymptotically optimal feedback controls we quantify the maximal improvement feedback controls can provide in Hamiltonian parameter estimation and show a universal time scaling for the precision limit under the optimal feedback scheme. Our study reveals an intriguing connection between noncommutativity in the dynamics and the gain of feedback controls in Hamiltonian parameter estimation.

  12. Experimental research of UWB over fiber system employing 128-QAM and ISFA-optimized scheme

    NASA Astrophysics Data System (ADS)

    He, Jing; Xiang, Changqing; Long, Fengting; Chen, Zuo

    2018-05-01

    In this paper, an optimized intra-symbol frequency-domain averaging (ISFA) scheme is proposed and experimentally demonstrated in an intensity-modulation and direct-detection (IMDD) multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system. According to the channel responses of the three MB-OFDM UWB sub-bands, the optimal ISFA window size for each sub-band is investigated. After 60-km standard single mode fiber (SSMF) transmission, the experimental results show that, at the bit error rate (BER) of 3.8 × 10⁻³, the receiver sensitivity of 128-quadrature amplitude modulation (QAM) can be improved by 1.9 dB using the proposed enhanced ISFA combined with training sequence (TS)-based channel estimation scheme, compared with the conventional TS-based channel estimation. Moreover, the spectral efficiency (SE) is up to 5.39 bit/s/Hz.

  13. An Optimally Stable and Accurate Second-Order SSP Runge-Kutta IMEX Scheme for Atmospheric Applications

    NASA Astrophysics Data System (ADS)

    Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin

    2018-01-01

    The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications, focusing on stability and accuracy. Following the common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of diagonally implicit two-stage and explicit three-stage parts. This scheme enjoys the Strong Stability Preserving (SSP) property for both parts. The new scheme is applied to the nonhydrostatic compressible Boussinesq equations in two different arrangements: (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI) forms. The new scheme preserves the SSP property for larger regions of absolute monotonicity compared to the well-studied scheme in the same class. In addition, numerical tests confirm that IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the level of accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property, together with the "second-stage order" and stiffly accurate conditions, leads the proposed scheme to better performance than existing schemes for the applications examined herein.

  14. Optimization of illumination schemes in a head-mounted display integrated with eye tracking capabilities

    NASA Astrophysics Data System (ADS)

    Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.

    2005-08-01

    Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.

  15. First-pass myocardial perfusion MRI with reduced subendocardial dark-rim artifact using optimized Cartesian sampling.

    PubMed

    Zhou, Zhengwei; Bi, Xiaoming; Wei, Janet; Yang, Hsin-Jung; Dharmakumar, Rohan; Arsanjani, Reza; Bairey Merz, C Noel; Li, Debiao; Sharif, Behzad

    2017-02-01

    The presence of subendocardial dark-rim artifact (DRA) remains an ongoing challenge in first-pass perfusion (FPP) cardiac magnetic resonance imaging (MRI). We propose a free-breathing FPP imaging scheme with Cartesian sampling that is optimized to minimize the DRA and readily enables near-instantaneous image reconstruction. The proposed FPP method suppresses Gibbs ringing effects, a major underlying factor for the DRA, by "shaping" the underlying point spread function through a two-step process: 1) an undersampled Cartesian sampling scheme that widens the k-space coverage compared to the conventional scheme; and 2) a modified parallel-imaging scheme that incorporates optimized apodization (k-space data filtering) to suppress Gibbs ringing effects. Healthy volunteer studies (n = 10) were performed to compare the proposed method against the conventional Cartesian technique, both using a saturation-recovery gradient-echo sequence at 3T. Furthermore, FPP imaging studies using the proposed method were performed in infarcted canines (n = 3), and in two symptomatic patients with suspected coronary microvascular dysfunction, for assessment of myocardial hypoperfusion. The width of the DRA and the number of DRA-affected myocardial segments were significantly reduced with the proposed method compared to the conventional approach (width: 1.3 vs. 2.9 mm, P < 0.001; number of segments: 2.6 vs. 8.7; P < 0.0001). The number of slices with severe DRA was markedly lower for the proposed method (by 10-fold). The reader-assigned image quality scores were similar (P = 0.2), although the quantified myocardial signal-to-noise ratio was lower for the proposed method (P < 0.05). Animal studies showed that the proposed method can detect subendocardial perfusion defects, and the patient results were consistent with the gold-standard invasive test. The proposed free-breathing Cartesian FPP imaging method significantly reduces the prevalence of severe DRAs compared to the conventional approach

  16. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  17. Linear triangular optimization technique and pricing scheme in residential energy management systems

    NASA Astrophysics Data System (ADS)

    Anees, Amir; Hussain, Iqtadar; AlKhaldi, Ali Hussain; Aslam, Muhammad

    2018-06-01

    This paper presents a new linear optimization algorithm for power scheduling of electric appliances. The proposed system is applied in a smart home community, in which the community controller acts as a virtual distribution company for the end consumers. We also present a pricing scheme between the community controller and its residential users based on real-time pricing and likely block rates. The results of the proposed optimization algorithm demonstrate that, by applying the anticipated technique, not only can end users minimise their consumption cost, but the power peak-to-average ratio can also be reduced, which is beneficial for the utilities as well.

  18. Numerical Solution of Dyson Brownian Motion and a Sampling Scheme for Invariant Matrix Ensembles

    NASA Astrophysics Data System (ADS)

    Li, Xingjie Helen; Menon, Govind

    2013-12-01

    The Dyson Brownian Motion (DBM) describes the stochastic evolution of N points on the line driven by an applied potential, a Coulombic repulsion and identical, independent Brownian forcing at each point. We use an explicit tamed Euler scheme to numerically solve the Dyson Brownian motion and sample the equilibrium measure for non-quadratic potentials. The Coulomb repulsion is too singular for the SDE to satisfy the hypotheses of rigorous convergence proofs for tamed Euler schemes (Hutzenthaler et al. in Ann. Appl. Probab. 22(4):1611-1641, 2012). Nevertheless, in practice the scheme is observed to be stable for time steps of O(1/N²) and to relax exponentially fast to the equilibrium measure with a rate constant of O(1) independent of N. Further, this convergence rate appears to improve with N in accordance with O(1/N) relaxation of local statistics of the Dyson Brownian motion. This allows us to use the Dyson Brownian motion to sample N×N Hermitian matrices from the invariant ensembles. The computational cost of generating M independent samples is O(MN⁴) with a naive scheme, and O(MN³ log N) when a fast multipole method is used to evaluate the Coulomb interaction.
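
    A schematic tamed-Euler step for a Dyson-type system is sketched below; the quadratic confining potential, β = 2 noise scaling, and step counts are illustrative assumptions and do not reproduce the paper's non-quadratic potentials or exact normalization.

        import numpy as np

        def tamed_euler_dbm(n=50, steps=5000, beta=2.0, seed=0):
            """Evolve n particles under a quadratic potential plus Coulomb repulsion
            with a tamed Euler step that controls the singular drift."""
            rng = np.random.default_rng(seed)
            dt = 1.0 / n**2                                  # O(1/N^2) time step, as in the abstract
            lam = np.sort(rng.normal(size=n))
            for _ in range(steps):
                diff = lam[:, None] - lam[None, :]
                np.fill_diagonal(diff, np.inf)               # exclude self-interaction
                drift = -lam + (1.0 / n) * np.sum(1.0 / diff, axis=1)
                tamed = drift * dt / (1.0 + dt * np.abs(drift))
                lam = lam + tamed + np.sqrt(2.0 * dt / (beta * n)) * rng.normal(size=n)
            return np.sort(lam)

        print(tamed_euler_dbm(n=20, steps=2000)[:5])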

  19. Re-evaluation of an Optimized Second Order Backward Difference (BDF2OPT) Scheme for Unsteady Flow Applications

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.

    2009-01-01

    Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference, temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme is assessed by comparing the computational results with other numerical schemes and experimental data.

  20. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Optimal variable flip angle schemes for dynamic acquisition of exchanging hyperpolarized substrates

    NASA Astrophysics Data System (ADS)

    Xing, Yan; Reed, Galen D.; Pauly, John M.; Kerr, Adam B.; Larson, Peder E. Z.

    2013-09-01

    In metabolic MRI with hyperpolarized contrast agents, the signal levels vary over time due to T1 decay, T2 decay following RF excitations, and metabolic conversion. Efficient usage of the nonrenewable hyperpolarized magnetization requires specialized RF pulse schemes. In this work, we introduce two novel variable flip angle schemes for dynamic hyperpolarized MRI in which the flip angle is varied between excitations and between metabolites. These were optimized to distribute the magnetization relatively evenly throughout the acquisition by accounting for T1 decay, prior RF excitations, and metabolic conversion. Simulation results are presented to confirm the flip angle designs and evaluate the variability of signal dynamics across typical ranges of T1 and metabolic conversion. They were implemented using multiband spectral-spatial RF pulses to independently modulate the flip angle at various chemical shift frequencies. With these schemes we observed increased SNR of [1-13C]lactate generated from [1-13C]pyruvate, particularly at later time points. This will allow for improved characterization of tissue perfusion and metabolic profiles in dynamic hyperpolarized MRI.
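
    A minimal sketch of the classical constant-signal flip-angle progression that such designs generalize: it accounts only for RF depletion of the nonrenewable magnetization, ignoring the T1 decay and metabolic conversion handled by the schemes above, so the specific angles are illustrative rather than those of the paper.

        import numpy as np

        def constant_signal_flip_angles(n_excitations):
            """Flip-angle schedule giving equal transverse signal for every excitation when
            only RF depletion of the hyperpolarized magnetization is considered
            (no T1 decay, no metabolic conversion)."""
            n = np.arange(1, n_excitations + 1)
            # theta_n = arctan(1 / sqrt(N - n)); arctan2 handles the final 90-degree pulse
            return np.degrees(np.arctan2(1.0, np.sqrt(n_excitations - n)))

        angles = constant_signal_flip_angles(16)
        mz, signals = 1.0, []
        for theta in np.radians(angles):
            signals.append(mz * np.sin(theta))       # transverse signal of this excitation
            mz *= np.cos(theta)                      # longitudinal magnetization left for later shots
        print(np.round(angles, 1))                   # schedule increases toward 90 degrees
        print(np.allclose(signals, 1.0 / np.sqrt(16)))   # True: equal signal per excitation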

  2. Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bajaj, Ruchika; Bedi, Punam; Pal, S. K.

    Steganography is an art of hiding information in such a way that prevents the detection of hidden messages. Besides security of data, the quantity of data that can be hidden in a single cover medium, is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length based on Particle Swarm Optimization. This technique gives the best pixel positions in the cover image, which can be used to hide the secret data. In the proposed scheme, k bits of the secret message are substituted into k least significant bits of the image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and results compared with simple LSB substitution, uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity and maintains imperceptibility and minimizes the distortion between the cover image and the obtained stego image.
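
    A minimal sketch of the underlying k-bit LSB substitution and extraction steps, written without the PSO search for the best pixel positions (pixels are simply used in raster order here); the helper names and the toy message are illustrative.

        import numpy as np

        def embed_k_lsb(pixels, message_bits, k):
            """Hide a bit string in the k least significant bits of successive 8-bit pixels."""
            stego = pixels.copy()
            chunks = [message_bits[i:i + k] for i in range(0, len(message_bits), k)]
            assert len(chunks) <= stego.size, "cover image too small for this message"
            for idx, chunk in enumerate(chunks):
                value = int(chunk.ljust(k, "0"), 2)                    # pad the last chunk if needed
                stego.flat[idx] = (int(stego.flat[idx]) & ~((1 << k) - 1)) | value
            return stego

        def extract_k_lsb(stego, n_bits, k):
            """Read n_bits back from the k LSBs of successive pixels."""
            n_pixels = -(-n_bits // k)                                 # ceiling division
            bits = "".join(format(int(p) & ((1 << k) - 1), f"0{k}b") for p in stego.flat[:n_pixels])
            return bits[:n_bits]

        cover = np.random.default_rng(1).integers(0, 256, size=(8, 8), dtype=np.uint8)
        message = "0110100001101001"                                   # "hi" as ASCII bits
        stego = embed_k_lsb(cover, message, k=2)
        assert extract_k_lsb(stego, len(message), k=2) == message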

  3. A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.

    PubMed

    Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani

    2012-01-01

    Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to reinforce the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.

  4. The effect of sampling scheme in the survey of atmospheric deposition of heavy metals in Albania by using moss biomonitoring.

    PubMed

    Qarri, Flora; Lazo, Pranvera; Bekteshi, Lirim; Stafilov, Trajce; Frontasyeva, Marina; Harmens, Harry

    2015-02-01

    The atmospheric deposition of heavy metals in Albania was investigated by using a carpet-forming moss species (Hypnum cupressiforme) as bioindicator. Sampling was done in the dry seasons of autumn 2010 and summer 2011. Two different sampling schemes are discussed in this paper: a random sampling scheme with 62 sampling sites distributed over the whole territory of Albania and a systematic sampling scheme with 44 sampling sites distributed over the same territory. Unwashed, dried samples were totally digested by using microwave digestion, and the concentrations of metal elements were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and AAS (Cd and As). Twelve elements, comprising conservative and trace elements (Al, Fe, As, Cd, Cr, Cu, Ni, Mn, Pb, V, Zn, and Li), were measured in the moss samples; Li, a typical lithogenic element, was included as a reference. The results reflect local emission points. The median concentrations and statistical parameters of the elements were discussed by comparing the two sampling schemes, and the results of both schemes were compared with those of other European countries. Different contamination levels, evaluated by the contamination factor (CF) of each element, were obtained for the two sampling schemes, while the local emitters identified (iron-chromium metallurgy, cement industry, oil refinery, mining industry, and transport) were the same for both. In addition, natural sources, i.e., the accumulation of these metals in mosses from metal-enriched soil carried by wind-blown dust, were identified as another possible local emission factor.

  5. Accelerating IMRT optimization by voxel sampling

    NASA Astrophysics Data System (ADS)

    Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.

    2007-12-01

    This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
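
    A minimal sketch of the core idea: randomly subsampling voxels gives unbiased estimates of a quadratic dose objective and its gradient for a projected steepest-descent update. The dose-influence matrix, weights, prescriptions, fixed step size and uniform sampling rate are placeholder assumptions, not the per-structure rates or delta-bar-delta variant studied in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_voxels, n_beamlets = 5000, 200
        A = rng.random((n_voxels, n_beamlets)) * 0.01     # placeholder dose-influence matrix
        p = rng.random(n_voxels)                          # placeholder prescribed voxel doses
        w = np.ones(n_voxels)                             # placeholder voxel importance weights

        def sampled_objective_gradient(x, sample_frac=0.1):
            """Unbiased estimates of a quadratic dose objective and its gradient from a voxel subsample."""
            idx = rng.choice(n_voxels, size=int(sample_frac * n_voxels), replace=False)
            scale = n_voxels / idx.size                   # reweight so the estimates stay unbiased
            r = A[idx] @ x - p[idx]
            objective = scale * np.sum(w[idx] * r**2)
            gradient = scale * 2.0 * A[idx].T @ (w[idx] * r)
            return objective, gradient

        x = np.zeros(n_beamlets)
        for _ in range(200):                              # randomized steepest descent
            obj, grad = sampled_objective_gradient(x)
            x = np.maximum(x - 0.01 * grad, 0.0)          # step and project onto nonnegative fluences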

  6. Effects of optimized root water uptake parameterization schemes on water and heat flux simulation in a maize agroecosystem

    NASA Astrophysics Data System (ADS)

    Cai, Fu; Ming, Huiqing; Mi, Na; Xie, Yanbing; Zhang, Yushu; Li, Rongping

    2017-04-01

    As root water uptake (RWU) is an important link in the water and heat exchange between plants and ambient air, improving its parameterization is key to enhancing the performance of land surface model simulations. Although different types of RWU functions have been adopted in land surface models, there is no evidence as to which scheme is most applicable to maize farmland ecosystems. Based on the 2007-09 data collected at the farmland ecosystem field station in Jinzhou, the RWU function in the Common Land Model (CoLM) was optimized with scheme options in light of factors determining whether roots absorb water from a certain soil layer (W_x) and whether the baseline cumulative root efficiency required for maximum plant transpiration (W_c) is reached. The sensitivity of the parameters of the optimization scheme was investigated, and then the effects of the optimized RWU function on water and heat flux simulation were evaluated. The results indicate that the model simulation was not sensitive to W_x but was significantly impacted by W_c. With the original model, soil humidity was somewhat underestimated for precipitation-free days; soil temperature was simulated with obvious interannual and seasonal differences and remarkable underestimations for the maize late-growth stage; and sensible and latent heat fluxes were overestimated and underestimated, respectively, for years with relatively less precipitation, and both were simulated with high accuracy for years with relatively more precipitation. The optimized RWU process resulted in a significant improvement of CoLM's performance in simulating soil humidity, temperature, sensible heat, and latent heat for dry years. In conclusion, the optimized RWU scheme available for the CoLM model is applicable to the simulation of water and heat flux for maize farmland ecosystems in arid areas.

  7. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in

  8. Design and experimental realization of an optimal scheme for teleportation of an n-qubit quantum state

    NASA Astrophysics Data System (ADS)

    Sisodia, Mitali; Shukla, Abhishek; Thapliyal, Kishore; Pathak, Anirban

    2017-12-01

    An explicit scheme (quantum circuit) is designed for the teleportation of an n-qubit quantum state. It is established that the proposed scheme requires an optimal amount of quantum resources, whereas larger amounts of quantum resources have been used in a large number of recently reported teleportation schemes for the quantum states which can be viewed as special cases of the general n-qubit state considered here. A trade-off between our knowledge about the quantum state to be teleported and the amount of quantum resources required for the same is observed. A proof-of-principle experimental realization of the proposed scheme (for a 2-qubit state) is also performed using a 5-qubit superconductivity-based IBM quantum computer. The experimental results show that the state has been teleported with high fidelity. Relevance of the proposed teleportation scheme has also been discussed in the context of controlled, bidirectional, and bidirectional controlled state teleportation.
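
    A minimal statevector sketch of the single-qubit special case (n = 1) of teleportation, using plain NumPy rather than the IBM hardware workflow reported above; the qubit ordering and random seed are arbitrary choices.

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        X = np.array([[0, 1], [1, 0]], dtype=float)
        Z = np.array([[1, 0], [0, -1]], dtype=float)
        I = np.eye(2)
        P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

        def kron3(a, b, c):
            return np.kron(a, np.kron(b, c))

        psi = np.array([0.6, 0.8j])                    # arbitrary normalized state to teleport
        ket0 = np.array([1.0, 0.0])
        state = kron3(psi, ket0, ket0)                 # qubit 0 holds psi, qubits 1-2 start in |0>

        # Entangle qubits 1 and 2 into a Bell pair shared by Alice (1) and Bob (2)
        state = kron3(I, H, I) @ state
        state = (kron3(I, P0, I) + kron3(I, P1, X)) @ state    # CNOT with control 1, target 2

        # Alice's Bell-basis measurement preparation on qubits 0 and 1
        state = (kron3(P0, I, I) + kron3(P1, X, I)) @ state    # CNOT with control 0, target 1
        state = kron3(H, I, I) @ state

        # Measure qubits 0 and 1 and collapse the state accordingly
        rng = np.random.default_rng(7)
        outcomes = []
        for proj0, proj1 in [(kron3(P0, I, I), kron3(P1, I, I)),
                             (kron3(I, P0, I), kron3(I, P1, I))]:
            p0 = np.linalg.norm(proj0 @ state) ** 2
            m = int(rng.random() >= p0)
            state = (proj0 if m == 0 else proj1) @ state
            state /= np.linalg.norm(state)
            outcomes.append(m)

        # Bob's classically conditioned corrections on qubit 2
        if outcomes[1]:
            state = kron3(I, I, X) @ state
        if outcomes[0]:
            state = kron3(I, I, Z) @ state

        bob = state.reshape(2, 2, 2)[outcomes[0], outcomes[1], :]
        print(np.allclose(bob, psi))                   # True: Bob's qubit now carries psi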

  9. Application of a derivative-free global optimization algorithm to the derivation of a new time integration scheme for the simulation of incompressible turbulence

    NASA Astrophysics Data System (ADS)

    Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.

    2016-11-01

    This work applies a recently developed Derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for Computational Fluid Dynamics (CFD) simulations. This algorithm allows imposing a specified order of accuracy for the time integration and other important stability properties in the form of nonlinear constraints within the optimization problem. In this procedure, the coefficients of the IMEX scheme should satisfy a set of constraints simultaneously. Therefore, the optimization process, at each iteration, estimates the location of the optimal coefficients using a set of global surrogates, for both the objective and constraint functions, as well as a model of the uncertainty function of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests are then performed leveraging the turbulent channel flow simulations to validate the theoretical order of accuracy and stability properties of the new scheme.

  10. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
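
    A minimal sketch of how kriging-based design indices can be computed for a candidate sampling pattern: ordinary-kriging standard errors over a grid, summarized by their average and maximum. The exponential covariance model, its parameters, and the random candidate design are placeholders, and the drift terms of universal kriging used in the paper are omitted for brevity.

        import numpy as np

        def ok_variance(samples, targets, sill=1.0, corr_len=30.0):
            """Ordinary-kriging estimation variance at target points for a candidate design,
            assuming an exponential covariance C(h) = sill * exp(-h / corr_len)."""
            def cov(a, b):
                h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
                return sill * np.exp(-h / corr_len)

            n = len(samples)
            K = np.ones((n + 1, n + 1))
            K[:n, :n] = cov(samples, samples)
            K[n, n] = 0.0                                 # row/column for the Lagrange multiplier
            k = np.ones((n + 1, len(targets)))
            k[:n, :] = cov(samples, targets)
            w = np.linalg.solve(K, k)                     # kriging weights and multiplier per target
            var = sill - np.sum(w * k, axis=0)            # sigma_OK^2 = C(0) - lambda^T c0 - mu
            return np.maximum(var, 0.0)

        rng = np.random.default_rng(3)
        grid = np.array([(x, y) for x in np.linspace(0, 100, 21) for y in np.linspace(0, 100, 21)])
        design = rng.uniform(0, 100, size=(15, 2))        # one candidate pattern of 15 wells
        se = np.sqrt(ok_variance(design, grid))
        print(se.mean(), se.max())                        # average and maximum standard error indices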

  11. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs

    PubMed Central

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Cai, Zhiping; Wang, Tian

    2018-01-01

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, the loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs to improve reliability for error-prone and unreliable wireless communication links where the transmission power is assumed to be identical in the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve the communication link reliability and reduce delay for loss-and-delay sensitive WSNs. The main contribution of the COOR scheme is making full use of the remaining energy in networks to increase the transmission power of most nodes, which will provide a higher communication reliability or further transmission distance. Two optimization strategies referred to as COOR(R) and COOR(P) of the COOR scheme are proposed to improve network performance. In the case of increasing the transmission power, the COOR(R) strategy chooses a node that has a higher communication reliability at the same distance in comparison to the traditional opportunistic routing when selecting the next hop candidate node. Since the reliability of data transmission is improved, the delay of the data reaching the sink is reduced by shortening the time of communication between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced as fewer hops are needed while the packet reaches the sink in longer transmission distance circumstances; (b) the reliability can be improved since it is the product of the reliability of every hop of the routing path

  12. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs.

    PubMed

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Liu, Anfeng; Xiong, Neal N; Cai, Zhiping; Wang, Tian

    2018-05-03

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, the loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs to improve reliability for error-prone and unreliable wireless communication links where the transmission power is assumed to be identical in the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve the communication link reliability and reduce delay for loss-and-delay sensitive WSNs. The main contribution of the COOR scheme is making full use of the remaining energy in networks to increase the transmission power of most nodes, which will provide a higher communication reliability or further transmission distance. Two optimization strategies referred to as COOR(R) and COOR(P) of the COOR scheme are proposed to improve network performance. In the case of increasing the transmission power, the COOR(R) strategy chooses a node that has a higher communication reliability at the same distance in comparison to the traditional opportunistic routing when selecting the next hop candidate node. Since the reliability of data transmission is improved, the delay of the data reaching the sink is reduced by shortening the time of communication between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced as fewer hops are needed while the packet reaches the sink in longer transmission distance circumstances; (b) the reliability can be improved since it is the product of the reliability of every hop of the routing path

  13. SU-F-T-497: Spatiotemporally Optimal, Personalized Prescription Scheme for Glioblastoma Patients Using the Proliferation and Invasion Glioma Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, M; Rockhill, J; Phillips, M

    Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: Standard prescription for GBM was assumed to deliver 46Gy in 23 fractions to GTV1+2cm margin and additional 14Gy in 7 fractions to GTV2+2cm margin. We simulated the tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for GTV2 with a radius of 1 and 2cm. For each tumor, the margin around GTV1 and GTV2 was varied to 0–6 cm and 1–3 cm, respectively. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: Optimal prescription for the slow-move tumors was to use 3.0 (small)-3.5 (large) cm margins to GTV1, and 1.5cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0cm margin for GTV1, suggesting that whole brain therapy is optimal, and then 1.5cm (average-move) and 1.5–3.0cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001–0.465% of that obtained with the standard prescription, and increased tumor EUD by 25.3–49.3% and the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on the individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without
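
    A minimal 1-D finite-difference sketch of the proliferation and invasion (PI) model underlying the simulations above, dc/dt = D ∂²c/∂x² + ρ c (1 - c); the diffusion and proliferation values are placeholders chosen only to keep the explicit scheme stable, and radiotherapy cell kill is not modeled.

        import numpy as np

        def simulate_pi_model(days=180.0, D=0.1, rho=0.025, L=200.0, nx=401):
            """Explicit finite differences for dc/dt = D*c_xx + rho*c*(1 - c) in 1-D,
            with no-flux boundaries; D in mm^2/day, rho in 1/day, L in mm."""
            dx = L / (nx - 1)
            dt = 0.25 * dx**2 / D                          # keep the explicit scheme stable
            c = np.zeros(nx)
            c[nx // 2] = 0.8                               # seed tumor at the domain center
            for _ in range(int(days / dt)):
                lap = np.empty_like(c)
                lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
                lap[0] = 2.0 * (c[1] - c[0]) / dx**2       # ghost-point no-flux boundary
                lap[-1] = 2.0 * (c[-2] - c[-1]) / dx**2
                c = c + dt * (D * lap + rho * c * (1.0 - c))
            return c

        profile = simulate_pi_model()
        print(profile.max(), (profile > 0.1).sum())        # peak density and grid points above threshold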

  14. ECCM Scheme against Interrupted Sampling Repeater Jammer Based on Parameter-Adjusted Waveform Design

    PubMed Central

    Wei, Zhenhua; Peng, Bo; Shen, Rui

    2018-01-01

    Interrupted sampling repeater jamming (ISRJ) is an effective way of deceiving coherent radar sensors, especially for linear frequency modulated (LFM) radar. In this paper, for a simplified scenario with a single jammer, we propose a dynamic electronic counter-counter measure (ECCM) scheme based on jammer parameter estimation and transmitted signal design. Firstly, the LFM waveform is transmitted to estimate the main jamming parameters by investigating the discontinuousness of the ISRJ’s time-frequency (TF) characteristics. Then, a parameter-adjusted intra-pulse frequency coded signal, whose ISRJ signal after matched filtering only forms a single false target, is designed adaptively according to the estimated parameters, i.e., sampling interval, sampling duration and repeater times. Ultimately, for typical jamming scenes with different jamming signal ratio (JSR) and duty cycle, we propose two particular ISRJ suppression approaches. Simulation results validate the effective performance of the proposed scheme for countering the ISRJ, and the trade-off relationship between the two approaches is demonstrated. PMID:29642508

  15. Adaptive control schemes for improving dynamic performance of efficiency-optimized induction motor drives.

    PubMed

    Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P

    2015-07-01

    Model Based Control (MBC) is one of the energy optimal controllers used in vector-controlled Induction Motor (IM) for controlling the excitation of motor in accordance with torque and speed. MBC offers energy conservation especially at part-load operation, but it creates ripples in torque and speed during load transition, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low pass filter (ii) torque producing current (iqs) injection in the output of speed controller (iii) Variable Structure Speed Controller (VSSC). The pre and post operation of MBC during load transition is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and MBC (adjustable excitation) with proposed schemes. The effectiveness of proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confer that the motor dynamics improves significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC link power in addition to energy saving during load transitions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  16. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    Implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes the advantage of the TE method that guarantees great accuracy at small wavenumbers, and keeps the property of the MA method that keeps the numerical errors within a limited bound at the same time. Thus, it leads to great accuracy for numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.

  17. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
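
    A minimal sketch of the underlying design criterion: choosing sampling times that maximize the determinant of the Fisher information, and hence shrink the variance of the parameter estimates, for a toy two-parameter decay model. The model, noise level and exhaustive search over small designs are illustrative stand-ins for the maximum likelihood formulation and the quantum-inspired evolutionary algorithm used in the paper.

        import numpy as np
        from itertools import combinations

        def model(t, a, k):
            return a * np.exp(-k * t)                         # toy pathway readout

        def fisher_information(times, a=1.0, k=0.5, sigma=0.05):
            """Fisher information matrix for (a, k) under i.i.d. Gaussian measurement noise."""
            t = np.asarray(times, dtype=float)
            S = np.column_stack([np.exp(-k * t),              # d model / d a
                                 -a * t * np.exp(-k * t)])    # d model / d k
            return S.T @ S / sigma**2

        candidates = np.linspace(0.5, 10.0, 20)               # feasible measurement times
        best = max(combinations(candidates, 4),
                   key=lambda ts: np.linalg.det(fisher_information(ts)))
        print(np.round(best, 2))                              # D-optimal choice of 4 time points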

  18. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. According to extensive simulations, it is found that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

  19. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. According to extensive simulations, it is found that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  20. Experimental study of an optimized PSP-OSTBC scheme with m-PPM in ultraviolet scattering channel for optical MIMO system.

    PubMed

    Han, Dahai; Gu, Yanjie; Zhang, Min

    2017-08-10

    An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-pulse positions modulation (m-PPM) without the use of a complex decoding algorithm in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modification and coding in practical application. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss with a larger channel capacity, and a higher diversity gain and coding gain with a simple decoding algorithm will be achieved by employing the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.

  1. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural and real scene as we see in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, the fact that more information is displayed requires supporting technologies such as digital compression to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
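
    A minimal sketch of a plain one-level lifting step (split, predict, update) on a 1-D signal, the structure that the proposed coder builds on; the stereo-specific hybrid predictor (disparity compensation plus luminance correction) is not reproduced here.

        import numpy as np

        def lifting_forward(x):
            """One level of the (2, 2) lifting wavelet: split into even/odd, predict, update."""
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            prediction = 0.5 * (even + np.roll(even, -1))         # predict odd samples from neighbors
            detail = odd - prediction
            approx = even + 0.25 * (np.roll(detail, 1) + detail)  # update preserves the running average
            return approx, detail

        def lifting_inverse(approx, detail):
            even = approx - 0.25 * (np.roll(detail, 1) + detail)
            odd = detail + 0.5 * (even + np.roll(even, -1))
            x = np.empty(even.size + odd.size)
            x[0::2], x[1::2] = even, odd
            return x

        signal = np.arange(16, dtype=float) ** 2
        approx, detail = lifting_forward(signal)
        assert np.allclose(lifting_inverse(approx, detail), signal)   # lifting steps invert exactly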

  2. Identification of isomers and control of ionization and dissociation processes using dual-mass-spectrometer scheme and genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Chen, Zhou; Tong, Qiu-Nan; Zhang, Cong-Cong; Hu, Zhan

    2015-04-01

    Identification of acetone and its two isomers, and the control of their ionization and dissociation processes are performed using a dual-mass-spectrometer scheme. The scheme employs two sets of time of flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under the irradiation of identically shaped femtosecond laser pulses. The optimal laser pulses are found using closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers that are obtained with the transform limited pulse, those obtained under the irradiation of the optimal laser pulse show large differences and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is quite effective and useful in studies of two molecules having common mass peaks, which makes a traditional single mass spectrometer unfeasible. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant No. 11374124).

  3. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-Hoc Networks

    PubMed Central

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-01-01

    Using mobile vehicles as “data mules” to collect data generated by a huge number of sensing devices that are widely spread across a smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in the LCODC scheme consists of not only vehicle to device transmission (V2D), but also vehicle to vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled, while the coverage rate is able to reach over 30%. PMID:28420218

  4. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-hoc Networks.

    PubMed

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-04-18

    Using mobile vehicles as "data mules" to collect data generated by a huge number of sensing devices that are widely spread across a smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in the LCODC scheme consists of not only vehicle to device transmission (V2D), but also vehicle to vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled, while the coverage rate is able to reach over 30%.

  5. Optimal control, investment and utilization schemes for energy storage under uncertainty

    NASA Astrophysics Data System (ADS)

    Mirhosseini, Niloufar Sadat

    Energy storage has the potential to offer new means for added flexibility in electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources and energy bill savings for the end users. However, uncertainty about system states and volatility in system dynamics can complicate the question of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resources output, storage technology cost and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model Predictive Control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to optimally interact with the energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here is the basis for a framework which extends from long term investments in storage capacity to short term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment on storage capacity over time to maximize savings during normal and emergency

  6. Optimizing the loss of one-dimensional photonic crystal towards high-sensitivity Bloch-surface-wave sensors under intensity interrogation scheme

    NASA Astrophysics Data System (ADS)

    Kong, Weijing; Wan, Yuhang; Du, Kun; Zhao, Wenhui; Wang, Shuang; Zheng, Zheng

    2016-11-01

    The reflected intensity change of the Bloch-surface-wave (BSW) resonance influenced by the loss of a truncated one-dimensional photonic crystal structure is numerically analyzed and studied in order to enhance the sensitivity of Bloch-surface-wave-based sensors. The finite truncated one-dimensional photonic crystal structure is designed to be able to excite the BSW mode for water (n=1.33) as the external medium and for p-polarized plane wave incident light. The intensity interrogation scheme, which can be operated on a typical Kretschmann prism-coupling configuration by measuring the reflected intensity change of the resonance dip, is investigated to optimize the sensitivity. A figure of merit (FOM) is introduced to measure the performance of the one-dimensional photonic crystal multilayer structure under the scheme. The detection sensitivities are calculated under different device parameters with a refractive index change corresponding to different solutions of glycerol in de-ionized (DI)-water. The results show that the intensity sensitivity curve varies similarly with the FOM curve and the sensitivity of the Bloch-surface-wave sensor is greatly affected by the device loss, where an optimized loss value can be obtained. For the low-loss BSW devices, the intensity interrogation sensing sensitivity may drop sharply from the optimal value. On the other hand, the performance of the detection scheme is less affected by higher device loss. This observation is in accordance with BSW experimental sensing demonstrations as well. The results obtained could be useful for improving the performance of Bloch-surface-wave sensors for the investigated sensing scheme.

  7. Optimal powering schemes for legged robotics

    NASA Astrophysics Data System (ADS)

    Muench, Paul; Bednarz, David; Czerniak, Gregory P.; Cheok, Ka C.

    2010-04-01

    Legged Robots have tremendous mobility, but they can also be very inefficient. These inefficiencies can be due to suboptimal control schemes, among other things. If your goal is to get from point A to point B in the least amount of time, your control scheme will be different from if your goal is to get there using the least amount of energy. In this paper, we seek a balance between these extremes by looking at both efficiency and speed. We model a walking robot as a rimless wheel, and, using Pontryagin's Maximum Principle (PMP), we find an "on-off" control for the model, and describe the switching curve between these control extremes.

  8. Optimal sampling strategies for detecting zoonotic disease epidemics.

    PubMed

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  9. An optimal scheme for top quark mass measurement near the $t\bar{t}$ threshold at future $e^+e^-$ colliders

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Guo; Wan, Xia; Wang, You-Kai

    2018-05-01

    A top quark mass measurement scheme near the $t\bar{t}$ production threshold in future $e^+e^-$ colliders, e.g. the Circular Electron Positron Collider (CEPC), is simulated. A $\chi^2$ fitting method is adopted to determine the number of energy points to be taken and their locations. Our results show that the optimal energy point is located near the largest slope of the cross section v. beam energy plot, and the most efficient scheme is to concentrate all luminosity on this single energy point in the case of one-parameter top mass fitting. This suggests that the so-called data-driven method could be the best choice for future real experimental measurements. Conveniently, the top mass statistical uncertainty can also be calculated directly by the error matrix even without any sampling and fitting. The agreement of the above two optimization methods has been checked. Our conclusion is that by taking 50 fb$^{-1}$ total effective integrated luminosity data, the statistical uncertainty of the top potential subtracted mass can be suppressed to about 7 MeV and the total uncertainty is about 30 MeV. This precision will help to identify the stability of the electroweak vacuum at the Planck scale. Supported by National Science Foundation of China (11405102) and the Fundamental Research Funds for the Central Universities of China (GK201603027, GK201803019)
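
    A minimal toy sketch of the one-parameter $\chi^2$ fitting logic described above, scanning a mass-like parameter against Poisson pseudo-data from a smooth threshold-shaped cross-section model; the cross-section parameterization, energy points and luminosity split are purely illustrative and are not the CEPC threshold-scan values.

        import numpy as np

        def xsec(E, m):
            """Toy threshold cross section in pb: smooth turn-on around E = 2 m (illustrative only)."""
            return 0.2 + 0.5 / (1.0 + np.exp(-(E - 2.0 * m) / 0.4))

        rng = np.random.default_rng(5)
        m_true = 171.0
        energies = np.array([340.0, 342.0, 344.0])            # scan points in GeV (placeholders)
        lumi_per_point = 50.0 / 3.0                           # fb^-1, split evenly (an assumption)
        expected = xsec(energies, m_true) * 1000.0 * lumi_per_point   # pb -> fb conversion gives events
        observed = rng.poisson(expected)

        def chi2(m):
            mu = xsec(energies, m) * 1000.0 * lumi_per_point
            return np.sum((observed - mu) ** 2 / mu)

        scan = np.linspace(170.5, 171.5, 2001)
        values = np.array([chi2(m) for m in scan])
        best = scan[values.argmin()]
        inside = scan[values < values.min() + 1.0]            # delta-chi2 < 1 interval
        print(round(best, 3), round(0.5 * (inside[-1] - inside[0]), 3))   # fitted mass, statistical error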

  10. An optimal sampling approach to modelling whole-body vibration exposure in all-terrain vehicle driving.

    PubMed

    Lü, Xiaoshu; Takala, Esa-Pekka; Toppila, Esko; Marjanen, Ykä; Kaila-Kangas, Leena; Lu, Tao

    2017-08-01

    Exposure to whole-body vibration (WBV) presents an occupational health risk and several safety standards obligate the measurement of WBV. The high cost of direct measurements in large epidemiological studies raises the question of the optimal sampling for estimating WBV exposures, given the large variation in exposure levels at real worksites. This paper presents a new approach to addressing this problem. A daily exposure to WBV was recorded for 9-24 days among 48 all-terrain vehicle drivers. Four data-sets based on root mean squared recordings were obtained from the measurement. The data were modelled using a semi-variogram with spectrum analysis and the optimal sampling scheme was derived. The optimal sampling interval was 140 min. The result was verified and validated in terms of its accuracy and statistical power. Recordings of two to three hours are probably needed to get a sufficiently unbiased daily WBV exposure estimate at real worksites. The developed model is general enough to be applicable to other cumulative exposures or biosignals. Practitioner Summary: Exposure to whole-body vibration (WBV) presents an occupational health risk and safety standards obligate the measurement of WBV. However, direct measurements can be expensive. This paper presents a new approach to addressing this problem. The developed model is general enough to be applicable to other cumulative exposures or biosignals.

  11. Systolic MOLLI T1 mapping with heart-rate-dependent pulse sequence sampling scheme is feasible in patients with atrial fibrillation.

    PubMed

    Zhao, Lei; Li, Songnan; Ma, Xiaohai; Greiser, Andreas; Zhang, Tianjing; An, Jing; Bai, Rong; Dong, Jianzeng; Fan, Zhanming

    2016-03-15

    T1 mapping enables assessment of myocardial characteristics. As the most common type of arrhythmia, atrial fibrillation (AF) is often accompanied by a variety of cardiac pathologies, whereby the irregular and usually rapid ventricle rate of AF may cause inaccurate T1 estimation due to mis-triggering and inadequate magnetization recovery. We hypothesized that systolic T1 mapping with a heart-rate-dependent (HRD) pulse sequence scheme may overcome this issue. 30 patients with AF and 13 healthy volunteers were enrolled and underwent cardiovascular magnetic resonance (CMR) at 3 T. CMR was repeated for 3 patients after electric cardioversion and for 2 volunteers after lowering heart rate (HR). A Modified Look-Locker Inversion Recovery (MOLLI) sequence was acquired before and 15 min after administration of 0.1 mmol/kg gadopentetate dimeglumine. For AF patients, both the fixed 5(3)3/4(1)3(1)2 and the HRD sampling scheme were performed at diastole and systole, respectively. The HRD pulse sequence sampling scheme was 5(n)3/4(n)3(n)2, where n was determined by the heart rate to ensure adequate magnetization recovery. Image quality of T1 maps was assessed. T1 times were measured in myocardium and blood. Extracellular volume fraction (ECV) was calculated. In volunteers with repeated T1 mapping, the myocardial native T1 and ECV generated from the 1st fixed sampling scheme were smaller than from the 1st HRD and 2nd fixed sampling scheme. In healthy volunteers, the overall native T1 times and ECV of the left ventricle (LV) in diastolic T1 maps were greater than in systolic T1 maps (P < 0.01, P < 0.05). In the 3 AF patients that had received electrical cardioversion therapy, the myocardial native T1 times and ECV generated from the fixed sampling scheme were smaller than in the 1st and 2nd HRD sampling scheme (all P < 0.05). In patients with AF (HR: 88 ± 20 bpm, HR fluctuation: 12 ± 9 bpm), more T1 maps with artifact were found in diastole than in systole (P

  12. A proposal of optimal sampling design using a modularity strategy

    NASA Astrophysics Data System (ADS)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis to identify network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  13. Exploring synergistic benefits of Water-Food-Energy Nexus through multi-objective reservoir optimization schemes.

    PubMed

    Uen, Tinn-Shuan; Chang, Fi-John; Zhou, Yanlai; Tsai, Wen-Ping

    2018-08-15

    This study proposed a holistic three-fold scheme that synergistically optimizes the benefits of the Water-Food-Energy (WFE) Nexus by integrating the short/long-term joint operation of a multi-objective reservoir with irrigation ponds in response to urbanization. The three-fold scheme was implemented step by step: (1) optimizing short-term (daily scale) reservoir operation for maximizing hydropower output and final reservoir storage during typhoon seasons; (2) simulating long-term (ten-day scale) water shortage rates in consideration of the availability of irrigation ponds for both agricultural and public sectors during non-typhoon seasons; and (3) promoting the synergistic benefits of the WFE Nexus in a year-round perspective by integrating the short-term optimization and long-term simulation of reservoir operations. The pivotal Shihmen Reservoir and 745 irrigation ponds located in Taoyuan City of Taiwan together with the surrounding urban areas formed the study case. The results indicated that the optimal short-term reservoir operation obtained from the non-dominated sorting genetic algorithm II (NSGA-II) could largely increase hydropower output but just slightly affected water supply. The simulation results of the reservoir coupled with irrigation ponds indicated that such joint operation could significantly reduce agricultural and public water shortage rates by 22.2% and 23.7% in average, respectively, as compared to those of reservoir operation excluding irrigation ponds. The results of year-round short/long-term joint operation showed that water shortage rates could be reduced by 10% at most, the food production rate could be increased by up to 47%, and the hydropower benefit could increase up to 9.33 million USD per year, respectively, in a wet year. Consequently, the proposed methodology could be a viable approach to promoting the synergistic benefits of the WFE Nexus, and the results provided unique insights for stakeholders and policymakers to pursue

  14. Knowledge-based nonuniform sampling in multidimensional NMR.

    PubMed

    Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C

    2011-07-01

    The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
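
    A minimal sketch of envelope matched (exponentially weighted random) sampling for one indirect dimension; the Nyquist grid size, decay constant and coverage fraction are placeholders.

        import numpy as np

        def envelope_matched_schedule(n_grid, n_samples, t2_points, rng=np.random.default_rng(2)):
            """Pick n_samples of the n_grid Nyquist increments with sampling density matched to
            the decaying envelope exp(-t / T2); the first increment is always kept."""
            t = np.arange(1, n_grid)
            weights = np.exp(-t / t2_points)
            weights /= weights.sum()
            rest = rng.choice(t, size=n_samples - 1, replace=False, p=weights)
            return np.sort(np.concatenate(([0], rest)))

        schedule = envelope_matched_schedule(n_grid=512, n_samples=128, t2_points=150)
        print(len(schedule), schedule[:10])        # 128 increments, biased toward early evolution times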

  15. Optimized scheme in coal-fired boiler combustion based on information entropy and modified K-prototypes algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Hui; Zhu, Hongxia; Cui, Yanfeng; Si, Fengqi; Xue, Rui; Xi, Han; Zhang, Jiayu

    2018-06-01

    An integrated combustion optimization scheme is proposed that jointly considers coal-fired boiler combustion efficiency and outlet NOx emissions. Continuous attribute discretization and reduction techniques are handled as optimization preparation by the E-Cluster and C_RED methods, in which the segmentation numbers don't need to be provided in advance and can be continuously adapted to the data characteristics. In order to obtain multi-objective results with a clustering method for mixed data, a modified K-prototypes algorithm is then proposed. This algorithm can be divided into two stages: K-prototypes clustering with self-adaptation of the cluster number, and clustering for multi-objective optimization. Field tests were carried out at a 660 MW coal-fired boiler to provide real data as a case study for controllable attribute discretization and reduction in the boiler system and for obtaining optimization parameters under the [max η_b, min y_NOx] multi-objective rule.

  16. A new configurational bias scheme for sampling supramolecular structures

    NASA Astrophysics Data System (ADS)

    De Gernier, Robin; Curk, Tine; Dubacheva, Galina V.; Richter, Ralf P.; Mognetti, Bortolo M.

    2014-12-01

    We present a new simulation scheme which allows efficient sampling of reconfigurable supramolecular structures made of polymeric constructs functionalized by reactive binding sites. The algorithm is based on the configurational bias scheme of Siepmann and Frenkel and is powered by the possibility of changing the topology of the supramolecular network through a non-local Monte Carlo algorithm. This is accomplished by multi-scale modelling that merges coarse-grained simulations, describing the typical polymer conformations, with experimental results accounting for the free energy terms involved in the reactions of the active sites. We test the new algorithm on a system of DNA-coated colloids for which we compute the hybridisation free energy cost associated with the binding of tethered single-stranded DNAs terminated by short sequences of complementary nucleotides. To demonstrate the versatility of our method, we also consider polymers functionalized by receptors that bind a surface decorated by ligands. In particular, we compute the density of states of adsorbed polymers as a function of the number of ligand-receptor complexes formed. Such a quantity can be used to study the conformational properties of adsorbed polymers, which is useful when engineering adsorption with tailored properties. We successfully compare the results with the predictions of a mean field theory. We believe that the proposed method will be a useful tool for investigating supramolecular structures resulting from direct interactions between functionalized polymers, for which efficient numerical methodologies of investigation are still lacking.

  17. Urine sampling and collection system optimization and testing

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Geating, J. A.; Koesterer, M. G.

    1975-01-01

    A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.

  18. Evolutional Optimization on Material Ordering and Inventory Control of Supply Chain through Incentive Scheme

    NASA Astrophysics Data System (ADS)

    Prasertwattana, Kanit; Shimizu, Yoshiaki; Chiadamrong, Navee

    This paper studied the material ordering and inventory control of supply chain systems. The effect of controlling policies is analyzed under three different configurations of the supply chain systems, and the formulated problem has been solved by using an evolutional optimization method known as Differential Evolution (DE). The numerical results show that the coordinating policy with the incentive scheme outperforms the other policies and can improve the performance of the overall system as well as all members under the concept of supply chain management.

  19. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
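
    A minimal sketch of the two initial-design choices compared above: a Latin hypercube whose points sit at random positions within each stratum versus one built from stratum midpoints, scored here with a simple maximin distance measure. The dimension, sample size, and space-filling measure are illustrative assumptions, not those of the study.

```python
import numpy as np

def initial_lhs(n, d, midpoint=True, seed=None):
    """Initial Latin hypercube design: one point per stratum in each dimension."""
    rng = np.random.default_rng(seed)
    design = np.empty((n, d))
    for j in range(d):
        strata = rng.permutation(n)                  # random pairing of strata across dimensions
        offset = 0.5 if midpoint else rng.random(n)  # stratum midpoints vs. random points
        design[:, j] = (strata + offset) / n
    return design

def min_pairwise_distance(x):
    """Crude space-filling measure: larger is better."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d[np.triu_indices(len(x), k=1)].min()

print(min_pairwise_distance(initial_lhs(20, 2, midpoint=False, seed=0)),
      min_pairwise_distance(initial_lhs(20, 2, midpoint=True, seed=0)))
```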

  20. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
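
    The MSSD criterion mentioned above is compact enough to sketch; the fragment below is a generic Python illustration (spsann itself is an R package), using an arbitrary prediction grid and candidate sample rather than any real survey.

```python
import numpy as np
from scipy.spatial import cKDTree

def mssd(sample_points, prediction_grid):
    """Mean squared shortest distance from each prediction location to its nearest sample point."""
    distances, _ = cKDTree(sample_points).query(prediction_grid, k=1)
    return np.mean(distances ** 2)

# illustrative 50 x 50 prediction grid and a random 30-point candidate sample
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
candidate = np.random.default_rng(0).random((30, 2))
print(mssd(candidate, grid))
```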

  1. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
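
    A balanced ranked set sampling cycle can be sketched as follows; the synthetic population and the noise term used to mimic imperfect (judgment) ranking are illustrative assumptions, not part of the article.

```python
import numpy as np

def ranked_set_sample(population, set_size, n_cycles, rank_noise=0.0, seed=None):
    """Balanced RSS: from the i-th set of `set_size` units in each cycle, measure
    only the unit judged (possibly with error) to have rank i."""
    rng = np.random.default_rng(seed)
    measurements = []
    for _ in range(n_cycles):
        for i in range(set_size):
            units = rng.choice(population, size=set_size, replace=False)
            judged = units + rng.normal(0.0, rank_noise, set_size)  # cheap, error-prone ranking
            measurements.append(units[np.argsort(judged)[i]])       # measure the i-th judged order statistic
    return np.array(measurements)

population = np.random.default_rng(1).normal(10.0, 2.0, 10_000)
rss = ranked_set_sample(population, set_size=4, n_cycles=25, rank_noise=0.5, seed=2)
print(rss.mean(), population.mean())
```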

  2. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  3. Targeted ENO schemes with tailored resolution property for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-11-01

    In this paper, we extend the range of targeted ENO (TENO) schemes (Fu et al. (2016) [18]) by proposing an eighth-order TENO8 scheme. A general formulation to construct the high-order undivided difference τK within the weighting strategy is proposed. With the underlying scale-separation strategy, sixth-order accuracy for τK in the smooth solution regions is designed for good performance and robustness. Furthermore, a unified framework to optimize independently the dispersion and dissipation properties of high-order finite-difference schemes is proposed. The new framework enables tailoring of dispersion and dissipation as function of wavenumber. The optimal linear scheme has minimum dispersion error and a dissipation error that satisfies a dispersion-dissipation relation. Employing the optimal linear scheme, a sixth-order TENO8-opt scheme is constructed. A set of benchmark cases involving strong discontinuities and broadband fluctuations is computed to demonstrate the high-resolution properties of the new schemes.

  4. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    NASA Astrophysics Data System (ADS)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. The M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of

  5. Practical scheme for optimal measurement in quantum interferometric devices

    NASA Astrophysics Data System (ADS)

    Takeoka, Masahiro; Ban, Masashi; Sasaki, Masahide

    2003-06-01

    We apply a Kennedy-type detection scheme, which was originally proposed for a binary communications system, to interferometric sensing devices. We show that the minimum detectable perturbation of the proposed system reaches the ultimate precision bound which is predicted by quantum Neyman-Pearson hypothesis testing. To provide concrete examples, we apply our interferometric scheme to phase shift detection by using coherent and squeezed probe fields.

  6. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    NASA Astrophysics Data System (ADS)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP)—each with multiple Total Maximum Daily Loads (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that the NIMS took 11 iterations (about 11 min) to complete with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near optimal solutions. The best solution obtained among all the GA executions compared had a minimized cost of 67.7 million—marginally higher, but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  7. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    PubMed

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with the conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space. Copyright © 2016 John Wiley & Sons, Ltd.

  8. New Imaging Operation Scheme at VLTI

    NASA Astrophysics Data System (ADS)

    Haubois, Xavier

    2018-04-01

    After PIONIER and GRAVITY, MATISSE will soon complete the set of 4 telescope beam combiners at VLTI. Together with recent developments in the image reconstruction algorithms, the VLTI aims to develop its operation scheme to allow optimized and adaptive UV plane coverage. The combination of spectro-imaging instruments, optimized operation framework and image reconstruction algorithms should lead to an increase of the reliability and quantity of the interferometric images. In this contribution, I will present the status of this new scheme as well as possible synergies with other instruments.

  9. Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.

    PubMed

    Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua

    2016-09-05

    In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate results of pulse peak sampling. Errors in parameter estimation then result; subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the data acquisition precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated through monitoring the change of the statistical power of the sampled data in the proposed scheme. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.

  10. Alternative difference analysis scheme combining R-space EXAFS fit with global optimization XANES fit for X-ray transient absorption spectroscopy.

    PubMed

    Zhan, Fei; Tao, Ye; Zhao, Haifeng

    2017-07-01

    Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for the TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting could quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way where both the core XAS calculation package and the search method in the fitting shell are changeable. The scheme was applied to the TR-XAS difference analysis of Fe(phen)3 spin crossover complex and yielded reliable distance change and excitation population.

  11. Classifier-Guided Sampling for Complex Energy System Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backlund, Peter B.; Eddy, John P.

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
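
    The core filtering idea of classifier-guided sampling can be sketched as follows. This is not the Sandia implementation: a scikit-learn naive Bayes classifier stands in for the Bayesian network classifier, and the binary-variable objective is a placeholder.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB  # stand-in for the Bayesian network classifier

def expensive_objective(x):
    """Placeholder objective over binary design variables (to be minimized)."""
    return -float(np.sum(x * np.arange(1, len(x) + 1)))

def cgs_step(population, objective, rng):
    """One classifier-guided generation: label evaluated designs good/bad, then
    keep only new candidates the classifier deems promising for evaluation."""
    scores = np.array([objective(x) for x in population])
    labels = scores <= np.median(scores)                 # "good" = better half
    clf = BernoulliNB().fit(population, labels)
    candidates = rng.integers(0, 2, size=(200, population.shape[1]))
    keep = clf.predict_proba(candidates)[:, 1] > 0.5     # filter before costly evaluation
    return candidates[keep] if keep.any() else candidates[:10]

rng = np.random.default_rng(0)
population = rng.integers(0, 2, size=(40, 8))
print(len(cgs_step(population, expensive_objective, rng)), "candidates pass the classifier filter")
```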

  12. Intel Xeon Phi accelerated Weather Research and Forecasting (WRF) Goddard microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, J.; Huang, B.; Huang, A. H.-L.

    2014-12-01

    The Weather Research and Forecasting (WRF) model is a numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF development is done in collaboration around the globe, and WRF is used by academic atmospheric scientists, weather forecasters at operational centers, and others. The WRF contains several physics components. The most time-consuming one is the microphysics. One microphysics scheme is the Goddard cloud microphysics scheme, a sophisticated cloud microphysics scheme in the WRF model. The Goddard microphysics scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the Goddard scheme code. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs rather than just kernels as the GPU does. The MIC coprocessor supports all important Intel development tools, so the development environment is one familiar to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the Goddard microphysics scheme on the Xeon Phi 7120P by a factor of 4.7×. In addition, the optimizations reduced the Goddard microphysics scheme's share of the total WRF processing time from 20.0% to 7.5%. Furthermore, the same optimizations

  13. A classification scheme for risk assessment methods.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamp, Jason Edwin; Campbell, Philip LaRoche

    2004-08-01

    This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail, and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses. Those arrangements shift gradually as one moves through the table, each cell being optimal for a particular situation. The intention of this report is to enable informed use of the methods so that the method chosen is optimal for a given situation. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated we use the word 'method' in this report to refer to a 'risk assessment method', though oftentimes we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In Section 2 we provide context for

  14. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorella, S., E-mail: sorella@sissa.it; Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.

  15. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodels and the minimum points of a density function. In this way, progressively more accurate metamodels are constructed. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206

  16. Critical evaluation of sample pretreatment techniques.

    PubMed

    Hyötyläinen, Tuulia

    2009-06-01

    Sample preparation before chromatographic separation is the most time-consuming and error-prone part of the analytical procedure. Therefore, selecting and optimizing an appropriate sample preparation scheme is a key factor in the final success of the analysis, and the judicious choice of an appropriate procedure greatly influences the reliability and accuracy of a given analysis. The main objective of this review is to critically evaluate the applicability, disadvantages, and advantages of various sample preparation techniques. Particular emphasis is placed on extraction techniques suitable for both liquid and solid samples.

  17. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    PubMed

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.

  18. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

    Root traits are increasingly important in the breeding of new crop varieties. For example, longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, based on sensitivity analysis and inverse parameter estimation. This methodology was developed based on a virtual experiment where a root architectural model was used to simulate root system development in a field, parameterized for winter wheat. This information provided the ground truth which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were virtually applied and the aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained based on literature and published data of winter wheat root architectural parameters. Root length density profiles of coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interaction with the other parameters show

  19. An upstream burst-mode equalization scheme for 40 Gb/s TWDM PON based on optimized SOA cascade

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chang, Qingjiang; Gao, Zhensen; Ye, Chenhui; Xiao, Simiao; Huang, Xiaoan; Hu, Xiaofeng; Zhang, Kaibin

    2016-02-01

    We present a novel upstream burst-mode equalization scheme based on an optimized SOA cascade for 40 Gb/s TWDM PON. The power equalizer is placed at the OLT and consists of two SOAs, two circulators, an optical NOT gate, and a variable optical attenuator. The first SOA operates in the linear region and acts as a pre-amplifier to let the second SOA operate in the saturation region. The upstream burst signals are equalized through the second SOA via nonlinear amplification. From theoretical analysis, this scheme gives sufficient dynamic range suppression of up to 16.7 dB without any dynamic control or signal degradation. In addition, a total power budget extension of 9.3 dB for loud packets and 26 dB for soft packets has been achieved to allow longer transmission distance and increased splitting ratio.

  20. [Optimization of acupoint application scheme in the treatment of bronchial asthma based on the orthogonal design method].

    PubMed

    Shi, Kuan; Wu, Wenzhong; Liu, Lanying; Wang, Hesheng; Chen, Dong; Liu, Chengyong; Zhang, Cong

    2017-06-12

    To study the primary and secondary factors among allergic history, the frequency of acupoint application and the duration of acupoint application in the treatment of bronchial asthma, and to optimize the treatment scheme. Eighty patients with bronchial asthma were selected as the subjects of the orthogonal trial. The herbal medicines were the empirical formula of acupoint application (prepared at a ratio of 2:2:1:1:1:1:1:1:1 of semen brassicae, rhizome corydalis, unprocessed radix kansui, asarum sieboldii, ephedra, semen lepidii, syzygium aromaticum, cortex cinnamomi and fructus gleditsiae) and applied on bilateral Feishu (BL 13), Xinshu (BL 15), Geshu (BL 17) and Shenshu (BL 23). Firstly, two groups were divided according to allergic history (40 cases with allergic history and 40 cases without allergic history), and then four subgroups were divided on the basis of the two main groups, 10 cases in each one. By studying three factors at two levels, i.e. allergic history (Factor A: AⅠ: with allergic history; AⅡ: without allergic history), the frequency of acupoint application (Factor B: BⅠ: 4 times; BⅡ: 10 times; in the 4-time group the application was given once every 10 days, and in the 10-time group once every 4 days), and the duration of application (Factor C: CⅠ: 4 h; CⅡ: 8 h), the optimal scheme was screened on the basis of the attack frequency before and after treatment and the score of the asthma quality of life questionnaire (AQLQ) before treatment and 6 months after treatment in the patients of each group. ①The orthogonal trial indicated that the optimal scheme was AⅠBⅡCⅠ, meaning that patients with allergic history were treated with acupoint application 10 times, each retained for 4 h. ②Factor B (frequency of acupoint application) and Factor C (duration of acupoint application) were significant influential factors for the AQLQ scores (both P<0.05). ③The comparison of the attack

  1. Online open-tubular fractionation scheme coupled with push-pull perfusion sampling for profiling extravasation of gold nanoparticles in a mouse tumor model.

    PubMed

    Su, Cheng-Kuan; Tseng, Po-Jen; Lin, Meng-Han; Chiu, Hsien-Ting; del Vall, Andrea; Huang, Yu-Fen; Sun, Yuh-Chang

    2015-07-10

    The extravasation of administered nano-drug carriers is a critical process for determining their distributions in target and non-target organs, as well as their pharmaceutical efficacies and side effects. To evaluate the extravasation behavior of gold nanoparticles (AuNPs), currently the most popular drug delivery system, in a mouse tumor model, in this study we employed push-pull perfusion (PPP) as a means of continuously sampling tumor extracellular AuNPs. To facilitate quantification of the extravasated AuNPs through inductively coupled plasma mass spectrometry, we also developed a novel online open-tubular fractionation scheme to allow interference-free determination of the sampled extracellular AuNPs from the coexisting biological matrix. After optimizing the flow-through volume and flow rate of this proposed fractionation scheme, we found that (i) the system's temporal resolution was 7.5 h(-1), (ii) the stability presented by the coefficient of variation was less than 10% (6-h continuous measurement), and (iii) the detection limits for the administered AuNPs were in the range 0.057-0.068 μg L(-1). Following an intravenous dosage of AuNPs (0.3 mg kg(-1) body weight), in vivo acquired profiles indicated that the pegylated AuNPs (PEG-AuNPs) had greater tendency toward extravasating into the tumor extracellular space. We also observed that the accumulation of nanoparticles in the whole tumor tissues was higher for PEG-AuNPs than for non-pegylated ones. Overall, pegylation appears to promote the extravasation and accumulation of AuNPs for nano-drug delivery applications. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Adaptive sampling of AEM transients

    NASA Astrophysics Data System (ADS)

    Di Massa, Domenico; Florio, Giovanni; Viezzoli, Andrea

    2016-02-01

    This paper focuses on the sampling of the electromagnetic transient as acquired by airborne time-domain electromagnetic (TDEM) systems. Typically, the sampling of the electromagnetic transient is done using a fixed number of gates whose width grows logarithmically (log-gating). The log-gating has two main benefits: improving the signal to noise (S/N) ratio at late times, when the electromagnetic signal has amplitudes equal to or lower than the natural background noise, and ensuring a good resolution at the early times. However, as a result of fixed time gates, the conventional log-gating does not consider any geological variations in the surveyed area, nor the possibly varying characteristics of the measured signal. We show, using synthetic models, how a different, flexible sampling scheme can increase the resolution of resistivity models. We propose a new sampling method, which adapts the gating on the basis of the slope variations in the electromagnetic (EM) transient. The use of such an alternative sampling scheme aims to obtain more accurate inverse models by extracting the geoelectrical information from the measured data in an optimal way.
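
    One way to realize slope-adaptive gating, sketched under the assumption that a densely sampled transient is available: place gate edges so that each gate covers an equal share of the cumulative slope variation of the log-log decay. The synthetic two-regime transient and the specific criterion are illustrative, not the authors' exact algorithm.

```python
import numpy as np

def adaptive_gates(t, signal, n_gates):
    """Gate edges spaced so each gate spans an equal share of cumulative slope variation."""
    log_t, log_s = np.log10(t), np.log10(np.abs(signal))
    slope = np.gradient(log_s, log_t)
    variation = np.cumsum(np.abs(np.gradient(slope, log_t)))
    variation /= variation[-1]
    return np.interp(np.linspace(0.0, 1.0, n_gates + 1), variation, t)

t = np.logspace(-5, -2, 400)               # times in seconds (synthetic)
signal = 1e-3 * t**-1.5 + 5e-7 * t**-0.5   # synthetic two-regime decay
print(adaptive_gates(t, signal, n_gates=20))
```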

  3. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on sampling theorem and Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.

  4. Efficiency of exchange schemes in replica exchange

    NASA Astrophysics Data System (ADS)

    Lingenheil, Martin; Denschlag, Robert; Mathias, Gerald; Tavan, Paul

    2009-08-01

    In replica exchange simulations a fast diffusion of the replicas through the temperature space maximizes the efficiency of the statistical sampling. Here, we compare the diffusion speed as measured by the round trip rates for four exchange algorithms. We find different efficiency profiles with optimal average acceptance probabilities ranging from 8% to 41%. The best performance is determined by benchmark simulations for the most widely used algorithm, which alternately tries to exchange all even and all odd replica pairs. By analytical mathematics we show that the excellent performance of this exchange scheme is due to the high diffusivity of the underlying random walk.
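
    The alternating even/odd pair exchange can be sketched in a few lines; the temperature ladder and instantaneous energies below are placeholders, and the acceptance rule is the standard replica-exchange Metropolis criterion.

```python
import numpy as np

def exchange_sweep(order, energies, betas, sweep_index, rng):
    """Attempt swaps of all even pairs (0-1, 2-3, ...) on even sweeps and all odd
    pairs (1-2, 3-4, ...) on odd sweeps. order[i] is the replica currently at
    temperature slot i; energies holds each replica's instantaneous energy."""
    for i in range(sweep_index % 2, len(betas) - 1, 2):
        delta = (betas[i] - betas[i + 1]) * (energies[order[i]] - energies[order[i + 1]])
        if rng.random() < np.exp(min(0.0, delta)):      # min(1, exp(delta)) acceptance
            order[i], order[i + 1] = order[i + 1], order[i]
    return order

rng = np.random.default_rng(0)
betas = 1.0 / np.linspace(1.0, 2.0, 8)                  # assumed temperature ladder
order = np.arange(8)
for sweep in range(4):
    energies = rng.normal(-100.0, 5.0, 8)               # placeholder energies after MD/MC moves
    order = exchange_sweep(order, energies, betas, sweep, rng)
    print(order)
```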

  5. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    NASA Astrophysics Data System (ADS)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically

  6. An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning

    PubMed Central

    Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco

    2015-01-01

    Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130

  7. Quantum teleportation scheme by selecting one of multiple output ports

    NASA Astrophysics Data System (ADS)

    Ishizaka, Satoshi; Hiroshima, Tohya

    2009-04-01

    The scheme of quantum teleportation, where Bob has multiple (N) output ports and obtains the teleported state by simply selecting one of the N ports, is thoroughly studied. We consider both the deterministic version and probabilistic version of the teleportation scheme aiming to teleport an unknown state of a qubit. Moreover, we consider two cases for each version: (i) the state employed for the teleportation is fixed to a maximally entangled state and (ii) the state is also optimized as well as Alice’s measurement. We analytically determine the optimal protocols for all the four cases and show the corresponding optimal fidelity or optimal success probability. All these protocols can achieve the perfect teleportation in the asymptotic limit of N→∞ . The entanglement properties of the teleportation scheme are also discussed.

  8. Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme

    PubMed Central

    Li, Shanbin; Sauter, Dominique; Xu, Bugong

    2011-01-01

    In this paper, the sensor data is transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than a given threshold value. Based on this send-on-delta scheme, which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
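
    The send-on-delta rule itself takes only a few lines to sketch; the threshold and temperature-like readings below are arbitrary illustrations.

```python
def send_on_delta(samples, delta):
    """Transmit a sample only when it differs from the last transmitted value by more than delta."""
    transmitted, last = [], None
    for k, value in enumerate(samples):
        if last is None or abs(value - last) > delta:
            transmitted.append((k, value))   # (sample index, value) sent over the network
            last = value
    return transmitted

readings = [20.0, 20.1, 20.4, 20.5, 21.2, 21.3, 19.8]
print(send_on_delta(readings, delta=0.5))    # -> [(0, 20.0), (4, 21.2), (6, 19.8)]
```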

  9. Alternative difference analysis scheme combining R -space EXAFS fit with global optimization XANES fit for X-ray transient absorption spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Fei; Tao, Ye; Zhao, Haifeng

    Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for the TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting could quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way where both the core XAS calculation package and the search method in the fitting shell are changeable. The scheme was applied to the TR-XAS difference analysis of Fe(phen)3 spin crossover complex and yielded reliable distance change and excitation population.

  10. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme.

    PubMed

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-04-21

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.

  11. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

    PubMed Central

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-01-01

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. PMID:28430132

  12. A Scheme to Smooth Aggregated Traffic from Sensors with Periodic Reports

    PubMed Central

    Oh, Sungmin; Jang, Ju Wook

    2017-01-01

    The possibility of smoothing aggregated traffic from sensors with varying reporting periods and frame sizes to be carried on an access link is investigated. A straightforward optimization would take O(p^n) time, whereas our heuristic scheme takes O(np) time, where n, p denote the number of sensors and the size of the periods, respectively. Our heuristic scheme performs local optimization sensor by sensor, starting with the smallest to largest periods. This is based on an observation that sensors with larger periods have more choices in offsets to avoid traffic peaks than sensors with smaller periods. A MATLAB simulation shows that our scheme outperforms the known scheme by M. Grenier et al. in a similar situation (aggregating periodic traffic in a controller area network) for almost all possible permutations. The performance of our scheme is very close to the straightforward optimization, which compares all possible permutations. We expect that our scheme would greatly contribute in smoothing the traffic from an ever-increasing number of IoT sensors to the gateway, reducing the burden on the access link to the Internet. PMID:28273831

  13. A Scheme to Smooth Aggregated Traffic from Sensors with Periodic Reports.

    PubMed

    Oh, Sungmin; Jang, Ju Wook

    2017-03-03

    The possibility of smoothing aggregated traffic from sensors with varying reporting periods and frame sizes to be carried on an access link is investigated. A straightforward optimization would take O(p^n) time, whereas our heuristic scheme takes O(np) time, where n, p denote the number of sensors and the size of the periods, respectively. Our heuristic scheme performs local optimization sensor by sensor, starting with the smallest to largest periods. This is based on an observation that sensors with larger periods have more choices in offsets to avoid traffic peaks than sensors with smaller periods. A MATLAB simulation shows that our scheme outperforms the known scheme by M. Grenier et al. in a similar situation (aggregating periodic traffic in a controller area network) for almost all possible permutations. The performance of our scheme is very close to the straightforward optimization, which compares all possible permutations. We expect that our scheme would greatly contribute in smoothing the traffic from an ever-increasing number of IoT sensors to the gateway, reducing the burden on the access link to the Internet.
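
    A sketch of the smallest-period-first greedy idea described above (not the authors' exact procedure): each sensor, taken in order of increasing period, receives the offset that minimizes the current peak of the aggregated load over one hyperperiod. The periods, frame sizes, and slot horizon are illustrative assumptions.

```python
import numpy as np

def schedule_offsets(periods, sizes, horizon):
    """Greedy smoothing: assign offsets sensor by sensor, smallest period first,
    each time picking the offset that minimizes the current aggregated peak."""
    load = np.zeros(horizon)
    offsets = {}
    for idx in np.argsort(periods):
        period, size = periods[idx], sizes[idx]
        peaks = []
        for offset in range(period):          # candidate offsets for this sensor
            trial = load.copy()
            trial[offset::period] += size     # its periodic frames over the horizon
            peaks.append(trial.max())
        best = int(np.argmin(peaks))
        offsets[int(idx)] = best
        load[best::period] += size
    return offsets, load.max()

periods = [4, 8, 8, 16]                       # reporting periods in slots (illustrative)
sizes = [1, 2, 1, 3]                          # frame sizes (illustrative)
print(schedule_offsets(periods, sizes, horizon=16))   # horizon = lcm of the periods
```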

  14. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  15. Optimal tumor sampling for immunostaining of biomarkers in breast carcinoma

    PubMed Central

    2011-01-01

    Introduction: Biomarkers, such as Estrogen Receptor, are used to determine therapy and prognosis in breast carcinoma. Immunostaining assays of biomarker expression have a high rate of inaccuracy; for example, estimates are as high as 20% for Estrogen Receptor. Biomarkers have been shown to be heterogeneously expressed in breast tumors, and this heterogeneity may contribute to the inaccuracy of immunostaining assays. Currently, no evidence-based standards exist for the amount of tumor that must be sampled in order to correct for biomarker heterogeneity. The aim of this study was to determine the optimal number of 20X fields necessary to estimate a representative measurement of expression in a whole tissue section for selected biomarkers: ER, HER-2, AKT, ERK, S6K1, GAPDH, Cytokeratin, and MAP-Tau. Methods: Two collections of whole tissue sections of breast carcinoma were immunostained for the biomarkers. Expression was quantified using the Automated Quantitative Analysis (AQUA) method of quantitative immunofluorescence. Simulated sampling of various numbers of fields (ranging from one to thirty-five) was performed for each marker. The optimal number was selected for each marker via resampling techniques and minimization of prediction error over an independent test set. Results: The optimal number of 20X fields varied by biomarker, ranging from three to fourteen fields. More heterogeneous markers, such as MAP-Tau protein, required a larger sample of 20X fields to produce a representative measurement. Conclusions: The optimal number of 20X fields that must be sampled to produce a representative measurement of biomarker expression varies by marker, with more heterogeneous markers requiring a larger number. The clinical implication of these findings is that breast biopsies consisting of a small number of fields may be inadequate to represent whole tumor biomarker expression for many markers. Additionally, for biomarkers newly introduced into clinical use, especially if

  16. Sample treatment optimization for fish stool metabolomics.

    PubMed

    Hano, Takeshi; Ito, Mana; Ito, Katsutoshi; Uchida, Motoharu

    2018-06-07

    Gut microbiota play an essential role in an organism's health. The fecal metabolite profile reflects these microbiota-mediated physiological changes in various organisms, including fish. Therefore, metabolomics analysis of fish feces should provide insight into the dynamics linking physiology and gut microbiota. However, metabolites are often unstable in aquatic environments, making fecal metabolites difficult to examine in fish. In this study, a novel method using gas chromatography-mass spectrometry (GC-MS) was developed and optimized for the preparation of metabolomics samples from the feces of a marine fish, the red sea bream (Pagrus major). The preparation methodology was optimized with a focus on rinsing frequency and rinsing solvent. Feces (collected within 4 h of excretion) were rinsed three times with sterilized 2.5% NaCl solution or 3.0% artificial seawater (ASW). Among the 86 metabolites identified in the NaCl-rinsed samples, 57 showed superior recovery to that in ASW-rinsed samples, indicating that NaCl is a better rinsing solvent, particularly for amino acids, organic acids, and fatty acids. To evaluate rinsing frequency, fecal samples were rinsed with NaCl solution 0, 1, 3, or 5 times. The results indicate that three or more rinses enabled robust and stable detection of metabolites encapsulated within the solid fecal residue. Furthermore, these data suggest that rinsing is unnecessary when studying sugars, amino acids, and sterols, again highlighting the need for an appropriate rinsing solvent and frequency. This study provides further insight into the use of fecal samples to evaluate and promote fish health during farming and supports the application of this and similar analyses to study the effects of environmental fluctuations and/or contamination. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Impact of Sampling Schemes on Demographic Inference: An Empirical Study in Two Species with Different Mating Systems and Demographic Histories

    PubMed Central

    St. Onge, K. R.; Palmé, A. E.; Wright, S. I.; Lascoux, M.

    2012-01-01

    Most species have at least some level of genetic structure. Recent simulation studies have shown that it is important to consider population structure when sampling individuals to infer past population history. The relevance of the results of these computer simulations for empirical studies, however, remains unclear. In the present study, we use DNA sequence datasets collected from two closely related species with very different histories, the selfing species Capsella rubella and its outcrossing relative C. grandiflora, to assess the impact of different sampling strategies on summary statistics and the inference of historical demography. Sampling strategy did not strongly influence the mean values of Tajima’s D in either species, but it had some impact on the variance. The general conclusions about demographic history were comparable across sampling schemes even when resampled data were analyzed with approximate Bayesian computation (ABC). We used simulations to explore the effects of sampling scheme under different demographic models. We conclude that when sequences from modest numbers of loci (<60) are analyzed, the sampling strategy is generally of limited importance. The same is true under intermediate or high levels of gene flow (4Nm > 2–10) in models in which global expansion is combined with either local expansion or hierarchical population structure. Although we observe a less severe effect of sampling than predicted under some earlier simulation models, our results should not be seen as an encouragement to neglect this issue. In general, a good coverage of the natural range, both within and between populations, will be needed to obtain a reliable reconstruction of a species’s demographic history, and in fact, the effect of sampling scheme on polymorphism patterns may itself provide important information about demographic history. PMID:22870403

  18. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.

  19. A General Investigation of Optimized Atmospheric Sample Duration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Miley, Harry S.

    2012-11-28

    The International Monitoring System (IMS) consists of up to 80 aerosol and xenon monitoring systems spaced around the world that have collection systems sensitive enough to detect nuclear releases from underground nuclear tests at great distances (CTBT 1996; CTBTO 2011). Although a few of the IMS radionuclide stations are closer together than 1,000 km (such as the stations in Kuwait and Iran), many of them are 2,000 km or more apart. In the absence of a scientific basis for optimizing the duration of atmospheric sampling, scientists historically used integration times from 24 hours to 14 days for radionuclides (Thomas et al. 1977). This was entirely adequate in the past because the sources of signals were far away and large, meaning that they were smeared over many days by the time they had travelled 10,000 km. The Fukushima event pointed out the unacceptable delay time (72 hours) between the start of sample acquisition and final data being shipped. A scientific basis for selecting a sample duration time is needed. This report considers plume migration of a non-decaying tracer using archived atmospheric data for 2011 in the HYSPLIT (Draxler and Hess 1998; HYSPLIT 2011) transport model. We present two related results: the temporal duration of the majority of the plume as a function of distance, and the behavior of the maximum plume concentration as a function of sample collection duration and distance. The modeled plume behavior can then be combined with external information about sampler design to optimize sample durations in a sampling network.

  20. Inhibition of viscous fluid fingering: A variational scheme for optimal flow rates

    NASA Astrophysics Data System (ADS)

    Miranda, Jose; Dias, Eduardo; Alvarez-Lacalle, Enrique; Carvalho, Marcio

    2012-11-01

    Conventional viscous fingering flow in radial Hele-Shaw cells employs a constant injection rate, resulting in the emergence of branched interfacial shapes. The search for mechanisms to prevent the development of these bifurcated morphologies is relevant to a number of areas in science and technology. A challenging problem is how best to choose the pumping rate in order to restrain growth of interfacial amplitudes. We use an analytical variational scheme to look for the precise functional form of such an optimal flow rate. We find it increases linearly with time in a specific manner so that interface disturbances are minimized. Experiments and nonlinear numerical simulations support the effectiveness of this particularly simple, but not at all obvious, pattern controlling process. J.A.M., E.O.D. and M.S.C. thank CNPq/Brazil for financial support. E.A.L. acknowledges support from Secretaria de Estado de IDI Spain under project FIS2011-28820-C02-01.

  1. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    NASA Astrophysics Data System (ADS)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-meter transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses
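
    The abstract stops at the regression step; the sketch below illustrates, with entirely hypothetical numbers, the kind of LAI-to-biomass calibration it describes: fit a linear model of clip-harvest biomass on co-located LAI readings so that LAI can later serve as a biomass proxy.

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical co-located measurements: LAI from the canopy analyzer,
# biomass (g/m^2) from clip-harvest plots
lai = rng.uniform(0.2, 2.5, size=24)
biomass = 180.0 * lai + rng.normal(0, 25, size=24)

slope, intercept = np.polyfit(lai, biomass, 1)     # LAI-to-biomass calibration
pred = slope * lai + intercept
r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
print(round(slope, 1), round(intercept, 1), round(r2, 3))
```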

  2. Design and implementation of an optimal laser pulse front tilting scheme for ultrafast electron diffraction in reflection geometry with high temporal resolution.

    PubMed

    Pennacchio, Francesco; Vanacore, Giovanni M; Mancini, Giulia F; Oppermann, Malte; Jayaraman, Rajeswari; Musumeci, Pietro; Baum, Peter; Carbone, Fabrizio

    2017-07-01

    Ultrafast electron diffraction is a powerful technique to investigate out-of-equilibrium atomic dynamics in solids with high temporal resolution. When diffraction is performed in reflection geometry, the main limitation is the mismatch in group velocity between the overlapping pump light and the electron probe pulses, which affects the overall temporal resolution of the experiment. A solution already available in the literature involved pulse front tilt of the pump beam at the sample, providing a sub-picosecond time resolution. However, in the reported optical scheme, the tilted pulse is characterized by a temporal chirp of about 1 ps at 1 mm away from the centre of the beam, which limits the investigation of surface dynamics in large crystals. In this paper, we propose an optimal tilting scheme designed for a radio-frequency-compressed ultrafast electron diffraction setup working in reflection geometry with 30 keV electron pulses containing up to 10^5 electrons/pulse. To characterize our scheme, we performed optical cross-correlation measurements, obtaining an average temporal width of the tilted pulse lower than 250 fs. The calibration of the electron-laser temporal overlap was obtained by monitoring the spatial profile of the electron beam when interacting with the plasma optically induced at the apex of a copper needle (plasma lensing effect). Finally, we report the first time-resolved results obtained on graphite, where the electron-phonon coupling dynamics is observed, showing an overall temporal resolution in the sub-500 fs regime. The successful implementation of this configuration opens the way to directly probe structural dynamics of low-dimensional systems in the sub-picosecond regime, with pulsed electrons.

  3. Design and implementation of an optimal laser pulse front tilting scheme for ultrafast electron diffraction in reflection geometry with high temporal resolution

    PubMed Central

    Pennacchio, Francesco; Vanacore, Giovanni M.; Mancini, Giulia F.; Oppermann, Malte; Jayaraman, Rajeswari; Musumeci, Pietro; Baum, Peter; Carbone, Fabrizio

    2017-01-01

    Ultrafast electron diffraction is a powerful technique to investigate out-of-equilibrium atomic dynamics in solids with high temporal resolution. When diffraction is performed in reflection geometry, the main limitation is the mismatch in group velocity between the overlapping pump light and the electron probe pulses, which affects the overall temporal resolution of the experiment. A solution already available in the literature involved pulse front tilt of the pump beam at the sample, providing a sub-picosecond time resolution. However, in the reported optical scheme, the tilted pulse is characterized by a temporal chirp of about 1 ps at 1 mm away from the centre of the beam, which limits the investigation of surface dynamics in large crystals. In this paper, we propose an optimal tilting scheme designed for a radio-frequency-compressed ultrafast electron diffraction setup working in reflection geometry with 30 keV electron pulses containing up to 10^5 electrons/pulse. To characterize our scheme, we performed optical cross-correlation measurements, obtaining an average temporal width of the tilted pulse lower than 250 fs. The calibration of the electron-laser temporal overlap was obtained by monitoring the spatial profile of the electron beam when interacting with the plasma optically induced at the apex of a copper needle (plasma lensing effect). Finally, we report the first time-resolved results obtained on graphite, where the electron-phonon coupling dynamics is observed, showing an overall temporal resolution in the sub-500 fs regime. The successful implementation of this configuration opens the way to directly probe structural dynamics of low-dimensional systems in the sub-picosecond regime, with pulsed electrons. PMID:28713841

  4. Exploring Sampling in the Detection of Multicategory EEG Signals

    PubMed Central

    Siuly, Siuly; Kabir, Enamul; Wang, Hua; Zhang, Yanchun

    2015-01-01

    The paper presents a framework based on sampling and machine learning techniques for the detection of multicategory EEG signals, in which random sampling (RS) and optimal allocation sampling (OS) are explored. In the proposed framework, before applying the RS and OS schemes, the entire EEG signal of each class is partitioned into several groups based on a particular time period. The RS and OS schemes are used in order to obtain representative observations from each group of each category of EEG data. All of the samples selected by the RS scheme from the groups of each category are then combined into one set, named the RS set. In a similar way, an OS set is obtained for the OS scheme. Eleven statistical features are then extracted from the RS and OS sets separately. Finally, this study employs three well-known classifiers: k-nearest neighbor (k-NN), multinomial logistic regression with a ridge estimator (MLR), and support vector machine (SVM) to evaluate the performance of the RS and OS feature sets. The experimental outcomes demonstrate that the RS scheme represents the EEG signals well and that k-NN with the RS is the optimum choice for detection of multicategory EEG signals. PMID:25977705
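
    As a small illustration of the optimal allocation (OS) idea named in the abstract, the sketch below applies Neyman-style allocation, sampling more heavily from time-period groups with larger within-group variability, to a synthetic signal. The grouping, sample sizes, and signal are hypothetical, not the authors' data or exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def optimal_allocation(groups, n_total):
    """Neyman-style allocation: sample size per group proportional to
    group size times within-group standard deviation."""
    sizes = np.array([len(g) for g in groups], dtype=float)
    stds  = np.array([g.std(ddof=1) for g in groups])
    weights = sizes * stds
    alloc = np.maximum(1, np.round(n_total * weights / weights.sum()).astype(int))
    return [rng.choice(g, size=min(n, len(g)), replace=False)
            for g, n in zip(groups, alloc)]

# hypothetical EEG channel values split into 4 time-period groups
signal = rng.normal(size=4096)
groups = np.array_split(signal, 4)
samples = optimal_allocation(groups, n_total=400)
print([len(s) for s in samples])
```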

  5. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning

    PubMed Central

    Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron

    2014-01-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474

  6. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning.

    PubMed

    Ichnowski, Jeffrey; Prins, Jan F; Alterovitz, Ron

    2014-05-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU's cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot's configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot.

  7. Optimized testing schemes using nucleus progeny, adult MOET siblings, or juvenile MOET pedigrees in dairy cattle closed populations.

    PubMed

    Bondoc, O L; Smith, C

    1993-01-12

    Selection response rates are predicted for nucleus progeny testing, adult multiple ovulation and embryo transfer (MOET) sib testing, and juvenile MOET pedigree testing in closed populations. Deterministic models are adapted to adjust predicted genetic gains for the effects of population size and structure, selection disequilibrium, sampling losses, and inbreeding depression. The improvement schemes were optimized for different numbers of sires used and first lactation females recorded per year. The number of nucleus daughters tested per sire, and of females per MOET full sibship, that maximize the predicted response to selection per year, were determined. Annual genetic gains and inbreeding rates were interpolated to the same planning horizon (20 years) to compare the optimized schemes for a wide range of situations. The predicted maximum genetic gain per year is higher for adult MOET than for juvenile MOET (due to the proportional extra time to collect the embryos needed) and for nucleus progeny testing. Average annual inbreeding rates are much higher for MOET schemes than for nucleus progeny testing. The advantages of adult and juvenile over nucleus progeny testing are little affected by planning horizon, but are higher with more females recorded per year, higher heritability, and higher reproductive and MOET success rates. Comparison of the schemes at the same level of inbreeding is fairer for fixed testing resources. At the same inbreeding level, the genetic advantage of adult MOET was generally maintained, but juvenile MOET then lost its advantage over progeny testing in these closed populations.

  8. Convergence Analysis of the Graph Allen-Cahn Scheme

    DTIC Science & Technology

    2016-02-01

    Luo, Xiyang; Bertozzi, Andrea L. Graph partitioning problems have a wide range of... optimization, convergence and monotonicity are shown for a class of schemes under a graph-independent timestep restriction. We also analyze the effects of... spectral truncation, a common technique used to save computational cost. Convergence of the scheme with spectral truncation is also proved under a

  9. A simple language to script and simulate breeding schemes: the breeding scheme language

    USDA-ARS?s Scientific Manuscript database

    It is difficult for plant breeders to determine an optimal breeding strategy given that the problem involves many factors, such as target trait genetic architecture and breeding resource availability. There are many possible breeding schemes for each breeding program. Although simulation study may b...

  10. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  11. Cross-layer Joint Relay Selection and Power Allocation Scheme for Cooperative Relaying System

    NASA Astrophysics Data System (ADS)

    Zhi, Hui; He, Mengmeng; Wang, Feiyue; Huang, Ziju

    2018-03-01

    A novel cross-layer joint relay selection and power allocation (CL-JRSPA) scheme spanning the physical layer and the data-link layer is proposed for a cooperative relaying system in this paper. Our goal is to find the relay selection and power allocation scheme that maximizes the system achievable rate while satisfying a total transmit power constraint in the physical layer and a statistical delay quality-of-service (QoS) requirement in the data-link layer. Using the concept of effective capacity (EC), the goal can be formulated as a joint relay selection and power allocation (JRSPA) problem that maximizes the EC subject to the total transmit power limitation. We first solve the optimal power allocation (PA) problem with a Lagrange multiplier approach, and then solve the optimal relay selection (RS) problem. Simulation results demonstrate that the CL-JRSPA scheme achieves a larger EC than other schemes while satisfying the delay QoS requirement. In addition, the proposed CL-JRSPA scheme achieves the maximal EC when the relay is located approximately halfway between source and destination, and the EC becomes smaller as the QoS exponent becomes larger.

  12. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    NASA Astrophysics Data System (ADS)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes from the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
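
    For readers unfamiliar with Richardson extrapolation as used above: if a scheme has order p, combining a step-h and a step-h/2 solution as A_ext = (2^p A(h/2) - A(h)) / (2^p - 1) cancels the leading error term and raises the order by one. The toy demonstration below uses forward Euler (order 1) on y' = -y, not the paper's optimized Runge-Kutta schemes.

```python
import numpy as np

def euler(f, y0, t_end, h):
    """Forward Euler (order 1) for y' = f(y) on [0, t_end]."""
    y = y0
    for _ in range(int(round(t_end / h))):
        y += h * f(y)
    return y

f = lambda y: -y
exact = np.exp(-1.0)
for h in (0.1, 0.05, 0.025):
    a_h  = euler(f, 1.0, 1.0, h)
    a_h2 = euler(f, 1.0, 1.0, h / 2)
    p = 1                                      # base order of forward Euler
    a_ext = (2**p * a_h2 - a_h) / (2**p - 1)   # Richardson extrapolation
    print(h, abs(a_h - exact), abs(a_ext - exact))  # extrapolated error is much smaller
```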

  13. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in 2 dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without

  14. A high-precision sampling scheme to assess persistence and transport characteristics of micropollutants in rivers.

    PubMed

    Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter

    2016-01-01

    Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and export to downstream ecosystems is not well understood. In particular, a better knowledge of processes governing their environmental behavior is needed. Although a lot of data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented, which is based on the Lagrangian sampling scheme, but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutants' reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate, that removal processes are highly variable in time and space and this has to be considered for future studies. The high precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties. Copyright © 2015. Published by Elsevier B.V.
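
    As a schematic of the mass-balance bookkeeping described above (not the authors' model, which also accounts for mixing and precise transport velocities), one can compare the load entering a river segment, shifted by the travel time, with the load leaving it; the residual is the net removal. All series and units below are hypothetical.

```python
import numpy as np

def segment_removal(c_up, q_up, c_down, q_down, travel_steps, c_lat=0.0, q_lat=0.0):
    """Net removal per time step along a river segment: load entering
    (upstream record shifted by the travel time, plus lateral inflow)
    minus load leaving (downstream record)."""
    load_up = (c_up * q_up)[:-travel_steps]       # parcels that reach the lower station
    load_down = (c_down * q_down)[travel_steps:]  # the same parcels observed downstream
    return load_up + c_lat * q_lat - load_down

# hypothetical hourly series: concentration in ug/L, discharge in L/s
hours = 24
c_up, c_down = np.full(hours, 1.2), np.full(hours, 1.0)
q_up = q_down = np.full(hours, 500.0)
print(segment_removal(c_up, q_up, c_down, q_down, travel_steps=3).mean())
```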

  15. Optimization of the combined proton acceleration regime with a target composition scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, W. P.; Graduate School, China Academy of Engineering Physics, Beijing 100088; Li, B. W., E-mail: li-baiwen@iapcm.ac.cn

    A target composition scheme to optimize the combined proton acceleration regime is presented and verified by two-dimensional particle-in-cell simulations, using an ultra-intense circularly polarized (CP) laser pulse irradiating an overdense hydrocarbon (CH) target instead of a pure hydrogen (H) one. The combined acceleration regime is a two-stage proton acceleration scheme combining the radiation pressure dominated acceleration (RPDA) stage and the laser wakefield acceleration (LWFA) stage sequentially. Protons are pre-accelerated in the first stage, when the ultra-intense CP laser pulse irradiates the overdense CH target. The wakefield is driven by the laser pulse after it penetrates through the overdense CH target and propagates in the underdense tritium plasma gas. With the pre-acceleration stage, protons can become trapped in the wakefield and accelerated to much higher energy by LWFA. Finally, protons with higher energies (from about 20 GeV up to about 30 GeV) and lower energy spreads (from about 18% down to about 5% in full-width at half-maximum, or FWHM) are generated, as compared to the use of a pure H target. This is because protons can be more stably pre-accelerated in the first RPDA stage when using CH targets. With the increase of the carbon-to-hydrogen density ratio, the energy spread is lower and the maximum proton energy is higher. It is also shown that for the same laser intensity, around 10^22 W cm^-2, using the CH target leads to a higher proton energy than using a pure H target. Additionally, the proton energy can be further increased by employing a longitudinally negative gradient in the background plasma density.

  16. Optimization schemes for the inversion of Bouguer gravity anomalies

    NASA Astrophysics Data System (ADS)

    Zamora, Azucena

    associated with structural changes [16]; therefore, it complements those geophysical methods with the same depth resolution that sample a different physical property (e.g. electromagnetic surveys sampling electric conductivity) or even those with different depth resolution sampling an alternative physical property (e.g. large scale seismic reflection surveys imaging the crust and top upper mantle using seismic velocity fields). In order to improve the resolution of Bouguer gravity anomalies, and reduce their ambiguity and uncertainty for the modeling of the shallow crust, we propose the implementation of primal-dual interior point methods for the optimization of density structure models through the introduction of physical constraints for transitional areas obtained from previously acquired geophysical data sets. This dissertation presents in Chapter 2 an initial forward model implementation for the calculation of Bouguer gravity anomalies in the Porphyry Copper-Molybdenum (Cu-Mo) Copper Flat Mine region located in Sierra County, New Mexico. In Chapter 3, we present a constrained optimization framework (using interior-point methods) for the inversion of 2-D models of Earth structures delineating density contrasts of anomalous bodies in uniform regions and/or boundaries between layers in layered environments. We implement the proposed algorithm using three different synthetic gravitational data sets with varying complexity. Specifically, we improve the 2-dimensional density structure models by getting rid of unacceptable solutions (geologically unfeasible models or those not satisfying the required constraints) given the reduction of the solution space. Chapter 4 shows the results from the implementation of our algorithm for the inversion of gravitational data obtained from the area surrounding the Porphyry Cu-Mo Copper Flat Mine in Sierra County, NM. Information obtained from previous induced polarization surveys and core samples served as physical constraints for the

  17. Optimal Bayesian Adaptive Design for Test-Item Calibration.

    PubMed

    van der Linden, Wim J; Ren, Hao

    2015-06-01

    An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
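
    The abstract names D-optimality as the best-performing criterion; the sketch below shows the generic sequential D-optimal rule for calibrating a single hypothetical 2PL field-test item: assign the item to the examinee whose Fisher-information contribution maximizes the determinant of the accumulated information matrix for the item parameters (a, b). Abilities are treated as fixed point estimates here, a simplification of the full MCMC scheme described in the paper.

```python
import numpy as np

def item_info(theta, a, b):
    """Fisher information for the (a, b) parameters of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    g = np.array([theta - b, -a])          # gradient of the logit with respect to (a, b)
    return p * (1.0 - p) * np.outer(g, g)

def pick_examinee(thetas, a_hat, b_hat, acc_info):
    """D-optimal choice: examinee whose contribution maximizes det(acc_info + contribution)."""
    dets = [np.linalg.det(acc_info + item_info(t, a_hat, b_hat)) for t in thetas]
    return int(np.argmax(dets))

rng = np.random.default_rng(2)
thetas = rng.normal(size=200)              # hypothetical ability estimates
a_hat, b_hat = 1.0, 0.3                    # current estimates for the field-test item
acc = 1e-3 * np.eye(2)                     # small ridge so the determinant is defined
for _ in range(30):                        # assign the item to 30 examinees
    i = pick_examinee(thetas, a_hat, b_hat, acc)
    acc += item_info(thetas[i], a_hat, b_hat)
    thetas = np.delete(thetas, i)
print(np.linalg.det(acc))
```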

  18. Sampling scheme on genetic structure of tree species in fragmented tropical dry forest: an evaluation from landscape genetic simulations

    Treesearch

    Yessica Rico; Marie-Stephanie Samain

    2017-01-01

    Investigating how genetic variation is distributed across the landscape is fundamental to inform forest conservation and restoration. Detecting spatial genetic discontinuities has value for defining management units, germplasm collection, and target sites for reforestation; however, inappropriate sampling schemes can misidentify patterns of genetic structure....

  19. A New Wavelength Optimization and Energy-Saving Scheme Based on Network Coding in Software-Defined WDM-PON Networks

    NASA Astrophysics Data System (ADS)

    Ren, Danping; Wu, Shanshan; Zhang, Lijing

    2016-09-01

    In view of the global control and flexible monitoring characteristics of software-defined networks (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system time delay and energy consumption.

  20. A Comparison of Two Sampling Strategies to Assess Discomycete Diversity in Wet Tropical Forests

    Treesearch

    SHARON A. CANTRELL

    2004-01-01

    Most of the fungal diversity studies that have used a systematic collecting scheme have not included the discomycetes, so optimal sampling methods are not available for this group. In this study, I tested two sampling methods at sites in the Caribbean National Forest, Puerto Rico, and Ebano Verde Reserve, Dominican Republic. For a plot-based sampling method, 10 ×...

  1. Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater

    PubMed Central

    Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal

    2016-01-01

    Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled, and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides the minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016
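
    Spatial simulated annealing, mentioned above, is essentially a Metropolis-style search over point configurations. The sketch below is a generic, hypothetical version that minimizes the mean distance from a fine evaluation grid to its nearest design point on a unit square; it is not the authors' implementation and ignores kriging-variance criteria.

```python
import numpy as np

rng = np.random.default_rng(3)

def mmsd(design, eval_grid):
    """Mean, over a fine evaluation grid, of the distance to the
    nearest design point (the spreading criterion to minimize)."""
    d = np.linalg.norm(eval_grid[:, None, :] - design[None, :, :], axis=2)
    return d.min(axis=1).mean()

def spatial_sa(n_points=20, n_iter=2000, t0=0.1, cooling=0.999):
    gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    design = rng.random((n_points, 2))
    score, temp = mmsd(design, grid), t0
    for _ in range(n_iter):
        cand = design.copy()
        i = rng.integers(n_points)
        cand[i] = np.clip(cand[i] + rng.normal(scale=0.05, size=2), 0, 1)  # move one point
        s = mmsd(cand, grid)
        if s < score or rng.random() < np.exp((score - s) / temp):        # Metropolis accept
            design, score = cand, s
        temp *= cooling
    return design, score

design, score = spatial_sa()
print(round(score, 4))
```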

  2. An optimal output feedback gain variation scheme for the control of plants exhibiting gross parameter changes

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.

    1987-01-01

    A concept for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes was developed. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include derivation of necessary conditions for optimality for the general problem formulation, and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories which are arbitrarily close to the unconstrained gain solution for each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.

  3. Optimal satellite sampling to resolve global-scale dynamics in the I-T system

    NASA Astrophysics Data System (ADS)

    Rowland, D. E.; Zesta, E.; Connor, H. K.; Pfaff, R. F., Jr.

    2016-12-01

    The recent Decadal Survey highlighted the need for multipoint measurements of ion-neutral coupling processes to study the pathways by which solar wind energy drives dynamics in the I-T system. The emphasis in the Decadal Survey is on global-scale dynamics and processes, and in particular, mission concepts making use of multiple identical spacecraft in low earth orbit were considered for the GDC and DYNAMIC missions. This presentation will provide quantitative assessments of the optimal spacecraft sampling needed to significantly advance our knowledge of I-T dynamics on the global scale. We will examine storm time and quiet time conditions as simulated by global circulation models, and determine how well various candidate satellite constellations and satellite schemes can quantify the plasma and neutral convection patterns and global-scale distributions of plasma density, neutral density, and composition, and their response to changes in the IMF. While the global circulation models are data-starved, and do not contain all the physics that we might expect to observe with a global-scale constellation mission, they are nonetheless an excellent "starting point" for discussions of the implementation of such a mission. The result will be of great utility for the design of future missions, such as GDC, to study the global-scale dynamics of the I-T system.

  4. Dispersion-relation-preserving finite difference schemes for computational acoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Webb, Jay C.

    1993-01-01

    Time-marching dispersion-relation-preserving (DRP) schemes can be constructed by optimizing the finite difference approximations of the space and time derivatives in wave number and frequency space. A set of radiation and outflow boundary conditions compatible with the DRP schemes is constructed, and a sequence of numerical simulations is conducted to test the effectiveness of the DRP schemes and the radiation and outflow boundary conditions. Close agreement with the exact solutions is obtained.

  5. Optimal CCD readout by digital correlated double sampling

    NASA Astrophysics Data System (ADS)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
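
    A minimal numerical sketch of the DCDS operation analyzed above: oversample the reset (reference) level and the video (signal) level, filter each window (a plain boxcar average here), and take the difference so the frozen reset offset cancels. The pixel model and noise figures are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def dcds_pixel(signal_e, n_samples=64, reset_noise=30.0, read_noise=5.0):
    """One pixel read-out with digital correlated double sampling:
    the same reset offset corrupts both windows and cancels in the difference."""
    reset_level = rng.normal(0.0, reset_noise)            # kTC noise, frozen for this pixel
    reset_win  = reset_level + rng.normal(0, read_noise, n_samples)
    signal_win = reset_level + signal_e + rng.normal(0, read_noise, n_samples)
    return signal_win.mean() - reset_win.mean()           # simple boxcar digital filter

reads = np.array([dcds_pixel(1000.0) for _ in range(5000)])
print(reads.mean(), reads.std())   # ~1000 e-, std ~ read_noise * sqrt(2 / n_samples)
```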

  6. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE PAGES

    Dupuis, Paul; Johnson, Dane

    2017-11-17

    Abstract Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.

  7. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Paul; Johnson, Dane

    Abstract Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.

  8. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

    The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components, such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, whereas subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation), which have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach that unifies turbulence and moist convection components produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme we optimized for Intel Many Integrated Core (MIC), which ushers in a new era of supercomputing speed, performance, and compatibility and allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature: they included vectorization of the code to utilize the vector units inside each CPU, and memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimizations improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.

  9. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
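
    A rough Monte Carlo illustration of the trade-off the article formalizes: extra free draws improve the chance of identifying the better payoff distribution, but once a per-draw cost (or any opportunity cost) is attached, the net expected payoff peaks at a finite sample size. The distributions and cost below are hypothetical, not the normative model derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def expected_net_payoff(n_per_option, draw_cost=0.02, n_sim=20_000):
    """Expected final payoff, minus sampling cost, when the decision maker
    samples each option n times and then picks the higher sample mean."""
    a = rng.normal(1.0, 2.0, size=(n_sim, n_per_option))   # option A payoffs
    b = rng.normal(1.2, 2.0, size=(n_sim, n_per_option))   # option B payoffs (truly better)
    pick_b = b.mean(axis=1) > a.mean(axis=1)
    final = np.where(pick_b, 1.2, 1.0)                     # true expected payoff of the choice
    return final.mean() - draw_cost * 2 * n_per_option

for n in (1, 2, 5, 10, 20, 40):
    print(n, round(expected_net_payoff(n), 3))             # peaks at a moderate sample size
```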

  10. 163 years of refinement: the British Geological Survey sample registration scheme

    NASA Astrophysics Data System (ADS)

    Howe, M. P.

    2011-12-01

    The British Geological Survey manages the largest UK geoscience samples collection, including: - 15,000 onshore boreholes, including over 250 km of drillcore - Vibrocores, gravity cores and grab samples from over 32,000 UK marine sample stations, and 640 boreholes - Over 3 million UK fossils, including a "type and stratigraphic" reference collection of 250,000 fossils, 30,000 of which are "type, figured or cited" - A comprehensive microfossil collection, including many borehole samples - 290 km of drillcore and 4.5 million cuttings samples from over 8,000 UK continental shelf hydrocarbon wells - Over one million mineralogical and petrological samples, including 200,000 thin sections. The current registration scheme was introduced in 1848 and is similar to that used by Charles Darwin on the Beagle. Every Survey collector or geologist has been issued a unique prefix code of one or more letters, and these were handwritten on preprinted numbers, arranged in books of 1 - 5,000 and 5,001 to 10,000. Similar labels are now computer printed. Other prefix codes are used for corporate collections, such as borehole samples, thin sections, microfossils, macrofossil sections, museum reference fossils, display-quality rock samples and fossil casts. Such numbers convey significant immediate information to the curator, without the need to consult detailed registers. The registration numbers have been recorded in a series of over 1,000 registers, complete with metadata including sample ID, locality, horizon, collector and date. Citations are added as appropriate. Parent-child relationships are noted when re-registering subsamples. For example, a borehole sample BDA1001 could have been subsampled for a petrological thin section and off-cut (E14159), a fossil thin section (PF365), micropalynological slides (MPA273), one of which included a new holotype (MPK111), and a figured macrofossil (GSE1314). All main corporate collections now have publicly available online databases, such as Palaeo

  11. Spectral gap optimization of order parameters for sampling complex molecular systems

    PubMed Central

    Tiwary, Pratyush; Berne, B. J.

    2016-01-01

    In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365

  12. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    NASA Astrophysics Data System (ADS)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

    Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stage together, thus removing the need for storage and operation of the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring data analysis on compressed data remain accurate.
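
    A self-contained toy version of the compressed-sensing idea described above (not the Narada deployment): a signal that is sparse in a cosine basis is observed at a few random time samples and reconstructed by orthogonal matching pursuit. Sizes and sparsity are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

def omp(A, y, k):
    """Orthogonal matching pursuit for a k-sparse coefficient vector."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))     # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s_hat = np.zeros(A.shape[1])
    s_hat[support] = coef
    return s_hat

N, M, K = 256, 64, 5                        # signal length, measurements, sparsity
n = np.arange(N)
# orthonormal cosine (DCT-II) basis: columns are basis vectors
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
Psi[:, 0] = 1.0 / np.sqrt(N)
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.normal(size=K)    # sparse spectrum
x = Psi @ s                                 # full acceleration-like record
idx = np.sort(rng.choice(N, M, replace=False))             # random sample times
y = x[idx]                                  # compressive measurements
s_hat = omp(Psi[idx, :], y, K)
print(np.linalg.norm(Psi @ s_hat - x) / np.linalg.norm(x)) # small reconstruction error
```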

  13. Enhanced Conformational Sampling in Molecular Dynamics Simulations of Solvated Peptides: Fragment-Based Local Elevation Umbrella Sampling.

    PubMed

    Hansen, Halvor S; Daura, Xavier; Hünenberger, Philippe H

    2010-09-14

    A new method, fragment-based local elevation umbrella sampling (FB-LEUS), is proposed to enhance the conformational sampling in explicit-solvent molecular dynamics (MD) simulations of solvated polymers. The method is derived from the local elevation umbrella sampling (LEUS) method [Hansen and Hünenberger, J. Comput. Chem. 2010, 31, 1-23], which combines the local elevation (LE) conformational searching and the umbrella sampling (US) conformational sampling approaches into a single scheme. In LEUS, an initial (relatively short) LE build-up (searching) phase is used to construct an optimized (grid-based) biasing potential within a subspace of conformationally relevant degrees of freedom, which is then frozen and used in a (comparatively longer) US sampling phase. This combination dramatically enhances the sampling power of MD simulations but, due to computational and memory costs, is only applicable to relevant subspaces of low dimensionalities. As an attempt to expand the scope of the LEUS approach to solvated polymers with more than a few relevant degrees of freedom, the FB-LEUS scheme involves an US sampling phase that relies on a superposition of low-dimensionality biasing potentials optimized using LEUS at the fragment level. The feasibility of this approach is tested using polyalanine (poly-Ala) and polyvaline (poly-Val) oligopeptides. Two-dimensional biasing potentials are preoptimized at the monopeptide level, and subsequently applied to all dihedral-angle pairs within oligopeptides of 4, 6, 8, or 10 residues. Two types of fragment-based biasing potentials are distinguished: (i) the basin-filling (BF) potentials act so as to "fill" free-energy basins up to a prescribed free-energy level above the global minimum; (ii) the valley-digging (VD) potentials act so as to "dig" valleys between the (four) free-energy minima of the two-dimensional maps, preserving barriers (relative to linearly interpolated free-energy changes) of a prescribed magnitude

  14. Finding an Optimal Thermo-Mechanical Processing Scheme for a Gum-Type Ti-Nb-Zr-Fe-O Alloy

    NASA Astrophysics Data System (ADS)

    Nocivin, Anna; Cojocaru, Vasile Danut; Raducanu, Doina; Cinca, Ion; Angelescu, Maria Lucia; Dan, Ioan; Serban, Nicolae; Cojocaru, Mirela

    2017-09-01

    A gum-type alloy was subjected to a thermo-mechanical processing scheme to establish a suitable process for obtaining superior structural and behavioural characteristics. Three processes were proposed: a homogenization treatment, a cold-rolling process and a solution treatment with three heating temperatures: 1073 K (800 °C), 1173 K (900 °C) and 1273 K (1000 °C). The results of all three proposed processes were analyzed using X-ray diffraction and scanning electron microscopy imaging to establish and compare the structural modifications. The mechanical behaviour was characterized with micro-hardness and tensile strength tests. The optimal results were obtained for solution treatment at 1073 K.

  15. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    PubMed

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
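
    A back-of-the-envelope sketch of the sampling-size question the method addresses, assuming roughly binomial counting statistics and a hypothetical average number of synapses per disector site; the authors' actual protocol is more elaborate.

        import math

        def sites_needed(p, rel_error, synapses_per_site=5.0):
            """Approximate number of disector sites so that the standard error of the
            estimated proportion p equals rel_error * p (binomial approximation)."""
            n_synapses = (1.0 - p) / (p * rel_error ** 2)   # from SE = sqrt(p (1 - p) / n)
            return math.ceil(n_synapses / synapses_per_site)

        # e.g. a pathway contributing ~0.2% of synapses, targeting 25% relative error
        print(sites_needed(p=0.002, rel_error=0.25))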

  16. A Lifetime Maximization Relay Selection Scheme in Wireless Body Area Networks.

    PubMed

    Zhang, Yu; Zhang, Bing; Zhang, Shi

    2017-06-02

    Network lifetime is one of the most important metrics in Wireless Body Area Networks (WBANs). In this paper, a relay selection scheme is proposed under the topology constraints specified in the IEEE 802.15.6 standard to maximize the lifetime of WBANs through formulating and solving an optimization problem where the relay selection of each node acts as an optimization variable. Considering the diversity of the sensor nodes in WBANs, the optimization problem takes not only the energy consumption rate but also the energy difference among sensor nodes into account to improve the network lifetime performance. Since it is Non-deterministic Polynomial-hard (NP-hard) and intractable, a heuristic solution is then designed to rapidly address the optimization. The simulation results indicate that the proposed relay selection scheme has better performance in network lifetime compared with existing algorithms and that the heuristic solution has low time complexity with only a negligible performance degradation gap from the optimal value. Furthermore, we also conduct simulations based on a general WBAN model to comprehensively illustrate the advantages of the proposed algorithm. At the end of the evaluation, we validate the feasibility of our proposed scheme via an implementation discussion.

  17. Optimizing Performance of a Microwave Salinity Mapper: STARRS L-Band Radiometer Enhancements

    DTIC Science & Technology

    2007-05-04

    Contribution Number NRL/JA/7330-05-5313. ...setts at Amherst, along with Quadrant Engineering Inc. (now ProSensing Inc.) and funding from the National... Thus, sampling schemes can be optimized for particular applications. Development of STARRS by Quadrant Engineering began under Naval Research... performance to be optimized. As we show later, a drawback of this approach not found in analog... ture, computed as the mean temperature from four...

  18. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    PubMed

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  19. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli.

    PubMed

    Westfall, Jacob; Kenny, David A; Judd, Charles M

    2014-10-01

    Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.

  20. Benefits of incorporating the adaptive dynamic range optimization amplification scheme into an assistive listening device for people with mild or moderate hearing loss.

    PubMed

    Chang, Hung-Yue; Luo, Ching-Hsing; Lo, Tun-Shin; Chen, Hsiao-Chuan; Huang, Kuo-You; Liao, Wen-Huei; Su, Mao-Chang; Liu, Shu-Yu; Wang, Nan-Mai

    2017-08-28

    This study investigated whether a self-designed assistive listening device (ALD) that incorporates an adaptive dynamic range optimization (ADRO) amplification strategy can surpass a commercially available monaurally worn linear ALD, SM100. Both subjective and objective measurements were implemented. Mandarin Hearing-In-Noise Test (MHINT) scores were the objective measurement, whereas participant satisfaction was the subjective measurement. The comparison was performed in a mixed design (i.e., subjects' hearing status being mild or moderate, quiet versus noisy, and linear versus ADRO scheme). The participants were two groups of hearing-impaired subjects, nine with mild and eight with moderate hearing loss. The results of the ADRO system revealed a significant difference in the MHINT sentence reception threshold (SRT) in noisy environments between monaurally aided and unaided conditions, whereas the linear system did not. The benchmark results showed that the ADRO scheme is effectively beneficial to people who experience mild or moderate hearing loss in noisy environments. The satisfaction rating regarding overall speech quality indicated that the participants were satisfied with the speech quality of both ADRO and linear schemes in quiet environments, and they were more satisfied with ADRO than with the linear scheme in noisy environments.

  1. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    PubMed

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of cost functions of individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward in time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time for the learning algorithm. The development is based on the observation that, in the event-triggered feedback, the sampling instants are dynamic and result in variable interevent times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  2. Evaluation of an Optimal Epidemiological Typing Scheme for Legionella pneumophila with Whole-Genome Sequence Data Using Validation Guidelines

    PubMed Central

    Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R.; Afshar, Baharak; Underwood, Anthony; Harrison, Timothy G.

    2016-01-01

    Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current “gold standard” typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard “typing panel,” previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. PMID:27280420

  3. Evaluation of an Optimal Epidemiological Typing Scheme for Legionella pneumophila with Whole-Genome Sequence Data Using Validation Guidelines.

    PubMed

    David, Sophia; Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R; Afshar, Baharak; Underwood, Anthony; Fry, Norman K; Parkhill, Julian; Harrison, Timothy G

    2016-08-01

    Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current "gold standard" typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard "typing panel," previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. Copyright © 2016 David et al.

  4. Factorizable Schemes for the Equations of Fluid Flow

    NASA Technical Reports Server (NTRS)

    Sidilkover, David

    1999-01-01

    We present an upwind high-resolution factorizable (UHF) discrete scheme for the compressible Euler equations that makes it possible to distinguish between full-potential and advection factors at the discrete level. The scheme approximates the equations in their general conservative form and is related to the family of genuinely multidimensional upwind schemes developed previously and demonstrated to have good shock-capturing capabilities. A unique property of this scheme, in addition to the aforementioned features, is its factorizability, which facilitates the construction of optimally efficient multigrid solvers. This is done through a relaxation procedure that exploits the factorizability property.

  5. A genetic fuzzy analytical hierarchy process based projection pursuit method for selecting schemes of water transportation projects

    NASA Astrophysics Data System (ADS)

    Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming

    2006-10-01

    The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix A it produces is relatively small, and the result obtained is both stable and accurate; therefore FPRM-PP can be widely used in the optimal selection of different multi-factor decision-making schemes.

  6. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.

  7. A new family of high-order compact upwind difference schemes with good spectral resolution

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Yao, Zhaohui; He, Feng; Shen, M. Y.

    2007-12-01

    This paper presents a new family of high-order compact upwind difference schemes. Unknowns included in the proposed schemes are not only the values of the function but also those of its first and higher derivatives. Derivative terms in the schemes appear only on the upwind side of the stencil. One can calculate all the first derivatives exactly as one solves explicit schemes when the boundary conditions of the problem are non-periodic. When the proposed schemes are applied to periodic problems, only periodic bi-diagonal matrix inversions or periodic block-bi-diagonal matrix inversions are required. Resolution optimization is used to enhance the spectral representation of the first derivative, and this produces a scheme with the highest spectral accuracy among all known compact schemes. For non-periodic boundary conditions, boundary schemes constructed in virtue of the assistant scheme make the schemes not only possess stability for any selective length scale on every point in the computational domain but also satisfy the principle of optimal resolution. Also, an improved shock-capturing method is developed. Finally, both the effectiveness of the new hybrid method and the accuracy of the proposed schemes are verified by executing four benchmark test cases.
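
    For orientation, a minimal sketch of a classical fourth-order compact (Padé) first-derivative scheme on a periodic grid is given below; the upwind, resolution-optimized family proposed in the paper is more involved, so treat this only as the baseline it improves upon.

        import numpy as np

        def compact_first_derivative(f, h):
            """Classical fourth-order compact (Pade) scheme on a periodic grid:
            f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h."""
            n = len(f)
            A = np.zeros((n, n))
            for i in range(n):
                A[i, (i - 1) % n] = 1.0
                A[i, i] = 4.0
                A[i, (i + 1) % n] = 1.0
            rhs = 3.0 * (np.roll(f, -1) - np.roll(f, 1)) / h
            return np.linalg.solve(A, rhs)           # cyclic tridiagonal system (dense solve here)

        x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
        err = np.max(np.abs(compact_first_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)))
        print(err)                                   # a few 1e-6 at this resolution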

  8. Optimization of techniques for multiple platform testing in small, precious samples such as human chorionic villus sampling.

    PubMed

    Pisarska, Margareta D; Akhlaghpour, Marzieh; Lee, Bora; Barlow, Gillian M; Xu, Ning; Wang, Erica T; Mackey, Aaron J; Farber, Charles R; Rich, Stephen S; Rotter, Jerome I; Chen, Yii-der I; Goodarzi, Mark O; Guller, Seth; Williams, John

    2016-11-01

    Multiple testing to understand global changes in gene expression based on genetic and epigenetic modifications is evolving. Chorionic villi, obtained for prenatal testing, are limited in quantity, but can be used to understand ongoing human pregnancies. However, optimal storage, processing and utilization of CVS for multiple platform testing have not been established. Leftover CVS samples were flash-frozen or preserved in RNAlater. Modifications to standard isolation kits were performed to isolate quality DNA and RNA from samples as small as 2-5 mg. RNAlater samples had significantly higher RNA yields and quality and were successfully used in microarray and RNA-sequencing (RNA-seq). RNA-seq libraries generated using 200 versus 800 ng RNA showed similar biological coefficients of variation. RNAlater samples had lower DNA yields and quality, which improved by heating the elution buffer to 70 °C. Purification of DNA was not necessary for bisulfite-conversion and genome-wide methylation profiling. CVS cells were propagated and continue to express genes found in freshly isolated chorionic villi. CVS samples preserved in RNAlater are superior. Our optimized techniques provide specimens for genetic, epigenetic and gene expression studies from a single small sample which can be used to develop diagnostics and treatments using a systems biology approach in the prenatal period. © 2016 John Wiley & Sons, Ltd.

  9. Optimized Quasi-Interpolators for Image Reconstruction.

    PubMed

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  10. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  11. Unconditionally energy stable time stepping scheme for Cahn–Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for numerical evaluation of the algorithm and reproduction of the presented results. Highlights: extension of Eyre's convex–concave splitting scheme to multiphase systems; efficient solution of spinodal decomposition in multi-component systems; efficient solution of the least-perimeter periodic space partitioning problem; development of a penalization strategy to avoid trivial solutions; presentation of a MATLAB implementation of the introduced algorithm.

  12. Multi-Hierarchical Gray Correlation Analysis Applied in the Selection of Green Building Design Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Chuanghong

    2018-02-01

    As a sustainable form of ecological construction, green building is increasingly promoted and advocated in society. In the survey and design phase of a project, evaluating and selecting the green building design scheme against a scientific and reasonable evaluation index system can substantially improve the ecological benefits of green building projects. Based on the new Green Building Evaluation Standard, which came into effect on January 1, 2015, an evaluation index system for green building design schemes is constructed taking into account the evaluation contents related to the design scheme. Experts experienced in construction scheme optimization were organized to score each evaluation index and determine its weight through the AHP method. The correlation degree between each candidate scheme and the ideal scheme was calculated using a multilevel gray relational analysis model, and the optimal scheme was then determined. The feasibility and practicability of the evaluation method are verified with examples.
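
    A minimal sketch of the gray relational ranking step described above, with made-up index scores and AHP weights standing in for the paper's expert data.

        import numpy as np

        def gray_relational_degree(scores, weights, rho=0.5):
            """scores: (n_schemes, n_indexes) matrix of benefit-type index values;
            returns the weighted gray relational degree of each scheme."""
            norm = scores / scores.max(axis=0)             # normalize each index
            delta = np.abs(norm - norm.max(axis=0))        # deviation from the ideal scheme
            coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
            return coeff @ weights

        scores = np.array([[80, 70, 90],                   # hypothetical index scores
                           [85, 75, 70],
                           [78, 88, 82]], dtype=float)
        weights = np.array([0.5, 0.3, 0.2])                # hypothetical AHP weights
        degrees = gray_relational_degree(scores, weights)
        print(degrees, "best scheme:", int(np.argmax(degrees)))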

  13. Study on the design schemes of the air-conditioning system in a gymnasium

    NASA Astrophysics Data System (ADS)

    Zhang, Yujin; Wu, Xinwei; Zhang, Jing; Pan, Zhixin

    2017-08-01

    To design the air-conditioning system of a gymnasium successfully, the cooling and heating source schemes are studied by analyzing the surrounding environment and energy conditions of the project, together with the initial investment and operating costs; the analysis indicates that an air-source heat pump system is the best choice for the project. The indoor air-conditioning schemes are also studied systematically, and the scheme for each area is optimized. The optimized design schemes account for year-round operating conditions and ensure indoor air quality and energy saving, providing a reference for air-conditioning system design in similar buildings.

  14. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    USGS Publications Warehouse

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.

  15. Genetic and economic evaluation of Japanese Black (Wagyu) cattle breeding schemes.

    PubMed

    Kahi, A K; Hirooka, H

    2005-09-01

    Deterministic simulation was used to evaluate 10 breeding schemes for genetic gain and profitability in the context of maximizing returns from investment in Japanese Black cattle breeding. A breeding objective that integrated the cow-calf and feedlot segments was considered. Ten breeding schemes that differed in the records available for use as selection criteria were defined. The schemes ranged from one that used carcass traits currently available to Japanese Black cattle breeders (Scheme 1) to one that also included linear measurements and male and female reproduction traits (Scheme 10). The latter scheme represented the highest level of performance recording. In all breeding schemes, sires were chosen from the proportion selected during the first selection stage (performance testing), modeling a two-stage selection process. The effect on genetic gain and profitability of varying test capacity and number of progeny per sire and of ultrasound scanning of live animals was examined for all breeding schemes. Breeding schemes that selected young bulls during performance testing based on additional individual traits and information on carcass traits from their relatives generated additional genetic gain and profitability. Increasing test capacity resulted in an increase in genetic gain in all schemes. Profitability was optimal in Schemes 2 (similar to Scheme 1, but with selection of young bulls also based on information on carcass traits from their relatives) through 10 when 900 to 1,000 places were available for performance testing. Similarly, as the number of progeny used in the selection of sires increased, genetic gain first increased sharply and then gradually in all schemes. Profit was optimal across all breeding schemes when sires were selected based on information from 150 to 200 progeny. Additional genetic gain and profitability were generated in each breeding scheme with ultrasound scanning of live animals for carcass traits. Ultrasound scanning of live

  16. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    PubMed

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.

  17. A 3D Optimal Interpolation Assimilation Scheme of HF Radar Current Data into a Numerical Ocean Model

    NASA Astrophysics Data System (ADS)

    Ragnoli, Emanuele; Zhuk, Sergiy; Donncha, Fearghal O.; Suits, Frank; Hartnett, Michael

    2013-04-01

    In this work a technique for the 3D assimilation of ocean surface current measurements into a numerical ocean model based on data from High Frequency Radar (HFR) systems is presented. The technique combines supplementary forcing at the surface with an Ekman-layer projection of the correction at depth. Optimal interpolation through BLUE (Best Linear Unbiased Estimator) of the model-predicted velocity and HFR observations is computed in order to derive a supplementary forcing applied at the surface boundary. At depth, the assimilation is propagated using an additional Ekman pumping (vertical velocity) based on the correction obtained from BLUE. In this work an HFR data assimilation system for hydrodynamic modelling of Galway Bay in Ireland is developed; it demonstrates the viability of adopting data assimilation techniques to improve the performance of numerical models in regions characterized by significant wind-driven flows. A network of CODAR Seasonde high frequency radars (HFR) deployed within Galway Bay, on the West Coast of Ireland, provides flow measurements adopted for this study. This system provides real-time synoptic measurements of both ocean surface currents and ocean surface waves in regions of the bay where radials from two or more radars intersect. Radar systems have a number of unique advantages in ocean modelling data assimilation schemes, namely, the ability to provide two-dimensional mapping of surface currents at resolutions that capture the complex structure related to coastal topography and the intrinsic instability scales of coastal circulation at a relatively low cost. The radar system used in this study operates at a frequency of 25 MHz, which provides a sampling range of 25 km at a spatial resolution of 300 m. A detailed dataset of HFR-observed velocities is collected at 60-minute intervals for a period chosen for comparison due to frequent occurrences of highly energetic, storm-force events. In conjunction with this, a
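
    A minimal sketch of the BLUE analysis step at the surface; the background and observation error covariances and the observation operator below are illustrative placeholders rather than the Galway Bay configuration.

        import numpy as np

        def blue_analysis(xb, y, H, B, R):
            """Best Linear Unbiased Estimator: xa = xb + K (y - H xb),
            with gain K = B H^T (H B H^T + R)^{-1}."""
            K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
            return xb + K @ (y - H @ xb)

        rng = np.random.default_rng(2)
        n, m = 100, 20                                     # model surface grid, radar obs
        xb = rng.normal(size=n)                            # background surface current
        H = np.zeros((m, n))
        H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0   # point observation operator
        dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        B = np.exp(-(dist / 10.0) ** 2)                    # Gaussian background covariance
        R = 0.1 * np.eye(m)                                # observation-error covariance
        y = H @ rng.normal(size=n) + 0.1 * rng.normal(size=m)    # synthetic observations
        print(blue_analysis(xb, y, H, B, R).shape)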

  18. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    PubMed

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
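
    Under simplifying assumptions (stratified sampling, known stratum sizes and standard deviations, equal per-unit costs), the classical Neyman-type allocation sketched below conveys the flavour of the optimal allocation derived in the article; the IST-specific variance terms are not reproduced.

        import numpy as np

        def neyman_allocation(N_h, S_h, n_total):
            """Allocate n_total sample units across strata in proportion to N_h * S_h,
            which minimizes the variance of the stratified mean for a fixed total size."""
            N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
            shares = N_h * S_h / np.sum(N_h * S_h)
            return np.maximum(1, np.round(shares * n_total)).astype(int)

        # hypothetical strata of students with guessed stratum standard deviations
        print(neyman_allocation(N_h=[5000, 3000, 2000], S_h=[2.0, 3.5, 1.0], n_total=600))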

  19. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
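
    A minimal sketch of the greedy "most-occurring variable first" Horner evaluation that the Monte Carlo tree search improves upon, with the polynomial represented as a dictionary from exponent tuples to coefficients; the tree search itself is not shown.

        from collections import Counter

        def horner_eval(terms, x):
            """Evaluate a multivariate polynomial {exponent tuple: coefficient} by
            recursively factoring out the variable occurring in the most terms."""
            terms = {e: c for e, c in terms.items() if c != 0}
            if not terms:
                return 0.0
            if all(sum(e) == 0 for e in terms):
                return sum(terms.values())
            counts = Counter(i for e in terms for i, p in enumerate(e) if p > 0)
            var = counts.most_common(1)[0][0]
            quotient, remainder = {}, {}
            for e, c in terms.items():
                if e[var] > 0:
                    e2 = list(e)
                    e2[var] -= 1
                    quotient[tuple(e2)] = quotient.get(tuple(e2), 0) + c
                else:
                    remainder[e] = remainder.get(e, 0) + c
            return x[var] * horner_eval(quotient, x) + horner_eval(remainder, x)   # P = x*Q + R

        # P(x, y) = 3 x^2 y + 2 x y + y + 5 evaluated at (2, 3): expected 56
        print(horner_eval({(2, 1): 3, (1, 1): 2, (0, 1): 1, (0, 0): 5}, (2.0, 3.0)))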

  20. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    PubMed

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.

  1. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
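
    A minimal sketch of the canonical firefly update on a generic objective; the spline-fitting objective, the De Boor knot refinement, and the singular value decomposition step from the paper are not reproduced, and all parameter values are illustrative.

        import numpy as np

        def firefly_minimize(f, dim, n=20, iters=200, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
            """Canonical firefly algorithm: every firefly moves toward each brighter
            (lower-objective) one with attractiveness beta0 * exp(-gamma * r^2) plus noise."""
            rng = np.random.default_rng(seed)
            X = rng.uniform(-1.0, 1.0, size=(n, dim))
            vals = np.array([f(x) for x in X])
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if vals[j] < vals[i]:                      # firefly j is brighter
                            r2 = np.sum((X[i] - X[j]) ** 2)
                            X[i] += (beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                                     + alpha * rng.uniform(-0.5, 0.5, dim))
                            vals[i] = f(X[i])
                alpha *= 0.98                                      # damp the random walk
            best = int(np.argmin(vals))
            return X[best], vals[best]

        # toy usage: recover three parameters of a shifted quadratic
        x_best, f_best = firefly_minimize(lambda x: np.sum((x - 0.3) ** 2), dim=3)
        print(x_best, f_best)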

  2. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  3. An Efficient Offloading Scheme For MEC System Considering Delay and Energy Consumption

    NASA Astrophysics Data System (ADS)

    Sun, Yanhua; Hao, Zhe; Zhang, Yanhua

    2018-01-01

    With the increasing number of mobile devices, mobile edge computing (MEC), which provides cloud computing capabilities proximate to mobile devices in 5G networks, has been envisioned as a promising paradigm to enhance user experience. In this paper, we investigate an offloading scheme that jointly considers delay and energy consumption (JCDE) for MEC systems in 5G heterogeneous networks. An optimization problem is formulated to minimize both the delay and the energy consumption of the offloading system, taking into account the delay and energy consumption of transmitting and computing tasks. We adopt an iterative greedy algorithm to solve the optimization problem. Furthermore, simulations were carried out to validate the utility and effectiveness of the proposed scheme, and the effect of parameter variations on the system is analysed as well. Numerical results demonstrate the delay and energy-efficiency gains of the proposed scheme compared with a previously published scheme.

  4. Optimally estimating the sample mean from the sample size, median, mid-range, and/or mid-quartile range.

    PubMed

    Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun

    2018-06-01

    The era of big data is coming, and evidence-based medicine is attracting increasing attention to improve decision making in medical practice via integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimation of a treatment's effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis but some trials use the median, the minimum and maximum values, or sometimes the first and third quartiles to report the results. Thus, to pool results in a consistent format, researchers need to transform that information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the well-known method proposed by Hozo et al. We address this issue by incorporating the sample size through a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve the existing ones significantly but also retain their simplicity. The real-data application indicates that the proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
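
    A small sketch of the idea for the minimum/median/maximum scenario; the sample-size-dependent weight below follows the smoothly changing form described in the article, but the exact constants should be treated as an assumption and checked against the paper before use.

        def mean_from_min_med_max(a, m, b, n):
            """Estimate the sample mean from the minimum a, median m, maximum b and
            sample size n; the weight shifts smoothly from the mid-range toward the
            median as n grows (constants are illustrative, not authoritative)."""
            w = 4.0 / (4.0 + n ** 0.75)          # weight on the mid-range term
            return w * (a + b) / 2.0 + (1.0 - w) * m

        # e.g. a trial reporting min = 10, median = 25, max = 70 for n = 40 patients
        print(round(mean_from_min_med_max(10, 25, 70, 40), 2))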

  5. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
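
    A minimal simulated-annealing sketch for thinning a dense set of candidate sample points: a fixed-size subset is perturbed by swapping points in and out and moves are accepted by the Metropolis rule, with the mean distance to the nearest retained point as an illustrative objective; the spatial-variation weighting used in the study is omitted.

        import numpy as np

        def mean_nearest_distance(points, subset_idx):
            """Mean distance from every candidate point to its nearest retained point."""
            d = np.linalg.norm(points[:, None, :] - points[subset_idx][None, :, :], axis=2)
            return d.min(axis=1).mean()

        def anneal_subset(points, k, iters=3000, T0=1.0, cooling=0.999, seed=0):
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(points), k, replace=False)
            cost, T = mean_nearest_distance(points, idx), T0
            for _ in range(iters):
                trial = idx.copy()
                trial[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(len(points)), trial))
                new_cost = mean_nearest_distance(points, trial)
                if new_cost < cost or rng.random() < np.exp((cost - new_cost) / T):
                    idx, cost = trial, new_cost          # Metropolis acceptance rule
                T *= cooling                             # geometric cooling schedule
            return idx, cost

        pts = np.random.default_rng(1).uniform(size=(200, 2))    # dense candidate points
        kept, cost = anneal_subset(pts, k=30)
        print(cost)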

  6. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  7. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has elaborated spatial descriptions and hydrological behaviors. This trend, however, is accompanied by increased model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
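
    For orientation, a minimal GLUE sketch on a toy one-parameter model is given below; the heuristic-optimization sampling proposed in the abstract would replace the plain Monte Carlo draw, and the informal likelihood and behavioral threshold are illustrative choices.

        import numpy as np

        def glue(model, obs, prior_sampler, n_samples=5000, behavioral_fraction=0.1, seed=0):
            """GLUE: sample parameters from the prior, score each run with an informal
            likelihood, keep the behavioral runs, and form prediction bounds from them."""
            rng = np.random.default_rng(seed)
            thetas = prior_sampler(rng, n_samples)
            sims = np.array([model(t) for t in thetas])
            likelihood = 1.0 / np.mean((sims - obs) ** 2, axis=1)   # inverse-error likelihood
            keep = likelihood >= np.quantile(likelihood, 1.0 - behavioral_fraction)
            bounds = np.percentile(sims[keep], [5, 95], axis=0)     # pointwise 90% band
            return thetas[keep], likelihood[keep], bounds

        t = np.linspace(0.0, 1.0, 50)
        toy_model = lambda k: np.exp(-k * t)                        # one-parameter "model"
        obs = toy_model(2.0) + 0.02 * np.random.default_rng(1).normal(size=t.size)
        params, weights, bounds = glue(toy_model, obs, lambda rng, n: rng.uniform(0.5, 5.0, n))
        print(params.min(), params.max())                           # behavioral parameter range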

  8. Damage identification in beams using speckle shearography and an optimal spatial sampling

    NASA Astrophysics Data System (ADS)

    Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.

    2016-10-01

    Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
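
    A small sketch of the underlying step, central finite differences of a measured rotation field with the spatial sampling step left as a free parameter; the criterion used in the paper to choose that step optimally is not reproduced.

        import numpy as np

        def curvature_from_rotations(theta, dx, step=1):
            """Approximate modal curvature as the derivative of the rotation field using
            central differences over `step` grid points; a larger step smooths measurement
            noise at the cost of spatial resolution."""
            kappa = np.full_like(theta, np.nan)
            kappa[step:-step] = (theta[2 * step:] - theta[:-2 * step]) / (2 * step * dx)
            return kappa

        x = np.linspace(0.0, 1.0, 401)                      # beam coordinate (m)
        rotation = np.cos(np.pi * x) + 1e-3 * np.random.default_rng(0).normal(size=x.size)
        kappa = curvature_from_rotations(rotation, dx=x[1] - x[0], step=8)
        print(np.nanmax(np.abs(kappa)))                     # damage would show as a local anomaly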

  9. Optimal Sampling to Provide User-Specific Climate Information.

    NASA Astrophysics Data System (ADS)

    Panturat, Suwanna

    The weather-related problems of socio-economic importance selected in this study as representative of three different levels of user groups are: (i) a regional problem concerned with air pollution plumes that lead to acid rain in the northeastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study aims at designing optimal sampling networks based on customer value systems and at abstracting from data sets the information that is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models are used to interpret climate variability in terms of the variables of importance to the user: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production, and (iii) the AGEHYD streamflow model selected as "a black box" for reservoir management. A state-of-the-art nonlinear programming (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of the sensors. Statistical quantities considered in determining sensor locations include the Bayes risk, the chi-squared value, the probability of a Type I error (alpha), the probability of a Type II error (beta), and the noncentrality parameter delta^2. Moreover, the number of years required to detect a climate change resulting in a given bushel-per-acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of ambient sulfur dioxide to less than a certain percent of the mean is found; and finally the policy of maintaining pre-storm flood pools at selected levels is

  10. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
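
    A minimal sketch of sinusoidally weighted Poisson-gap sampling in one dimension; the deterministic gap-equation variant proposed in the paper replaces the random Poisson draw below with its expected value, and the adjustment loop shown here is a simplification.

        import numpy as np

        def poisson_gap_schedule(n_grid, n_samples, seed=0, max_tries=10000):
            """Pick n_samples points on a Nyquist grid of n_grid points, with gaps drawn
            from a Poisson distribution whose mean is small at the start of the schedule
            and grows sinusoidally toward the end; lambda is tuned until the count fits."""
            rng = np.random.default_rng(seed)
            lam = 1.0
            for _ in range(max_tries):
                pts, pos = [], 0
                while pos < n_grid:
                    pts.append(pos)
                    weight = np.sin(0.5 * np.pi * (pos + 0.5) / n_grid)
                    pos += 1 + rng.poisson(lam * weight)
                if len(pts) == n_samples:
                    return np.array(pts)
                lam *= 1.02 if len(pts) > n_samples else 0.98
            raise RuntimeError("could not match the requested number of samples")

        print(poisson_gap_schedule(n_grid=256, n_samples=64))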

  11. Simultaneous beam sampling and aperture shape optimization for SPORT.

    PubMed

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case

  12. Simultaneous beam sampling and aperture shape optimization for SPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a

  13. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.

    PubMed

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between

  14. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    PubMed Central

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between

  15. Fourth order scheme for wavelet based solution of Black-Scholes equation

    NASA Astrophysics Data System (ADS)

    Finěk, Václav

    2017-12-01

    The present paper is devoted to the numerical solution of the Black-Scholes equation for pricing European options. We apply the Crank-Nicolson scheme with Richardson extrapolation for the time discretization and Hermite cubic spline wavelets with four vanishing moments for the space discretization. This scheme is fourth-order accurate both in time and in space. Computational results indicate that the Crank-Nicolson scheme with Richardson extrapolation significantly decreases the amount of computational work. We also show numerically that the optimal convergence rate for the scheme is obtained without using a startup procedure, despite the data irregularities in the model.
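
    To make the time-stepping idea above concrete, the following sketch prices a European call with the Crank-Nicolson scheme and then applies Richardson extrapolation in time by combining runs with nt and 2*nt steps. A plain finite-difference grid in the asset price stands in for the paper's wavelet space discretization, and all parameter values (strike, rate, volatility, grid sizes) are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_banded

def bs_crank_nicolson(K, r, sigma, T, s_max=300.0, ns=200, nt=200):
    """Crank-Nicolson pricing of a European call on a uniform S-grid.
    Returns the grid and the option values at t = 0."""
    ds, dt = s_max / ns, T / nt
    s = np.linspace(0.0, s_max, ns + 1)
    v = np.maximum(s - K, 0.0)                      # payoff at maturity
    i = np.arange(1, ns)
    a = 0.25 * dt * (sigma**2 * i**2 - r * i)       # sub-diagonal
    b = -0.5 * dt * (sigma**2 * i**2 + r)           # diagonal
    c = 0.25 * dt * (sigma**2 * i**2 + r * i)       # super-diagonal
    ab = np.zeros((3, ns - 1))                      # banded matrix of (I - 0.5*dt*L)
    ab[0, 1:] = -c[:-1]
    ab[1, :] = 1.0 - b
    ab[2, :-1] = -a[1:]
    for n in range(nt):
        t = T - (n + 1) * dt
        rhs = a * v[:-2] + (1.0 + b) * v[1:-1] + c * v[2:]
        # Dirichlet boundaries: V(0) = 0, V(s_max) = s_max - K*exp(-r*(T-t))
        v0, vmax = 0.0, s_max - K * np.exp(-r * (T - t))
        rhs[0] += a[0] * v0
        rhs[-1] += c[-1] * vmax
        v[1:-1] = solve_banded((1, 1), ab, rhs)
        v[0], v[-1] = v0, vmax
    return s, v

# Richardson extrapolation in time: combine a run with nt steps and one
# with 2*nt steps to cancel the leading O(dt^2) error term.
s, v_coarse = bs_crank_nicolson(100.0, 0.05, 0.2, 1.0, nt=100)
_, v_fine = bs_crank_nicolson(100.0, 0.05, 0.2, 1.0, nt=200)
v_extrap = (4.0 * v_fine - v_coarse) / 3.0
print(np.interp(100.0, s, v_extrap))    # at-the-money price estimate
```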

  16. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources thus so called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality due to different interpolation schemes with emphasis on ELF mesh optimization procedures and we argue that the optimal fitting should be based on preliminary log-log scaling data transforms by which the non-uniformity of sampled data distribution may be considerably reduced. The transformed data are then interpolated by local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that proposed technique gives the most accurate results and also that its computational time is short. Thus, it is feasible using this simple method to address practical problems associated with interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
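
    The following Python sketch illustrates the log-log transform plus monotone interpolation strategy described above. SciPy's PchipInterpolator is used as a readily available monotonicity-preserving stand-in for the Steffen spline (which SciPy does not expose), and the sample data are purely illustrative; the authors' own implementation is in C++.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def loglog_monotone_interp(x, y):
    """Interpolate heterogeneous positive data (x, y) after a log-log
    transform, using a local monotonicity-preserving spline so that no
    spurious oscillations appear between data points."""
    lx, ly = np.log(x), np.log(y)
    spline = PchipInterpolator(lx, ly)          # stand-in for the Steffen spline
    return lambda xq: np.exp(spline(np.log(xq)))

# Sparse, non-uniform samples of a smooth, strictly positive function.
x = np.array([0.1, 0.3, 1.0, 5.0, 20.0, 100.0])
y = 1.0 / (1.0 + x**2) + 1e-3
f = loglog_monotone_interp(x, y)
print(f(np.array([0.5, 2.0, 50.0])))
```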

  17. A Suboptimal Power-Saving Transmission Scheme in Multiple Component Carrier Networks

    NASA Astrophysics Data System (ADS)

    Chung, Yao-Liang; Tsai, Zsehong

    Power consumption due to transmissions in base stations (BSs) has been a major contributor to communication-related CO2 emissions. A power optimization model is developed in this study with respect to radio resource allocation and activation in a multiple Component Carrier (CC) environment. We formulate and solve the power-minimization problem of the BS transceivers for multiple-CC networks with carrier aggregation, while maintaining the overall system and respective users' utilities above minimum levels. The optimized power consumption based on this model can be viewed as a lower bound on that of other algorithms employed in practice. A suboptimal scheme with low computational complexity is proposed. Numerical results show that the power consumption of our scheme is much lower than that of the conventional scheme in which all CCs are always active, when both schemes maintain the same required utilities.

  18. Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples

    NASA Astrophysics Data System (ADS)

    Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav

    2014-05-01

    Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in waters with the ultra-low background liquid scintillation spectrometer Quantulus 1220, the sample/scintillant ratio was optimized, appropriate scintillation cocktails were chosen and compared in terms of efficiency, background and minimal detectable activity (MDA), and the effects of chemi- and photoluminescence and of the scintillant/vial combination were assessed. The ASTM D4107-08 (2006) method had been successfully applied in our laboratory for two years. During our last sample preparation, a serious quench effect in the sample count rates, possibly caused by DMSO contamination, was noticed. The goal of this paper is to present the development in our laboratory of the direct method proposed by Pujol and Sanchez-Cabeza (1999), which turned out to be faster and simpler than the ASTM method while we address the problem of neutralizing DMSO in the apparatus. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. To test the optimization of the system for this method, tritium levels were determined in Danube river samples and in several samples within an intercomparison with the Ruđer Bošković Institute (IRB).

  19. Multiple burn fuel-optimal orbit transfers: Numerical trajectory computation and neighboring optimal feedback guidance

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.

    1995-01-01

    This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.

  20. Feasibility of Stochastic Voltage/VAr Optimization Considering Renewable Energy Resources for Smart Grid

    NASA Astrophysics Data System (ADS)

    Momoh, James A.; Salkuti, Surender Reddy

    2016-06-01

    This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem including load demand and Renewable Energy Resources (RERs) variation. The RERs introduce stochastic behavior into the system inputs. Voltage/VAr control is one of the important challenges and a prime means of handling power system complexity and reliability; hence it is a fundamental requirement for all utility companies. A robust and efficient Voltage/VAr optimization technique is needed to meet the peak demand and to reduce system losses. Voltages beyond the limits may damage costly substation devices as well as equipment at the consumer end. The RERs, in particular, introduce additional disturbances, and some of them are not even capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RER environment. This paper aims at the development of an optimal scheme for Voltage/VAr control involving RERs. In this paper, the Latin Hypercube Sampling (LHS) method is used to cover the full range of variables by maximally satisfying the marginal distributions. A backward scenario reduction technique is used to reduce the number of scenarios effectively while maximally retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS) considering load demand and RER variation.
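
    A minimal sketch of the Latin Hypercube Sampling step mentioned above is given below: each dimension of the unit cube is stratified so that every stratum receives exactly one point, and the points are then mapped to assumed marginal distributions for load and wind output. The marginals, their parameters, and the sample count are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

def latin_hypercube(n_samples, n_dims, rng=None):
    """Basic Latin hypercube sample on the unit cube: each dimension is
    split into n_samples equal strata and exactly one point falls in
    every stratum, so the marginal distributions are covered evenly."""
    rng = np.random.default_rng() if rng is None else rng
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])   # decouple the dimensions
    return u

# Illustrative scenario generation: a load factor and a wind output,
# mapped from the unit cube to assumed normal / Weibull marginals.
u = latin_hypercube(100, 2)
load = stats.norm(loc=1.0, scale=0.1).ppf(u[:, 0])
wind = stats.weibull_min(c=2.0, scale=8.0).ppf(u[:, 1])
print(load.mean(), wind.mean())
```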

  1. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.

  2. Dynamics of hepatitis C under optimal therapy and sampling based analysis

    NASA Astrophysics Data System (ADS)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2013-08-01

    We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.

  3. XAFSmass: a program for calculating the optimal mass of XAFS samples

    NASA Astrophysics Data System (ADS)

    Klementiev, K.; Chernikov, R.

    2016-05-01

    We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows based program XAFSmass: 1) it is truly platform independent, as provided by Python language, 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.

  4. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    NASA Astrophysics Data System (ADS)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed were performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the `dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  5. A Novel, Simplified Scheme for Plastics Identification: "JCE" Classroom Activity 104

    ERIC Educational Resources Information Center

    Harris, Mary E.; Walker, Barbara

    2010-01-01

    In this Activity, students identify samples of seven types of recyclable plastic by using a flowchart scheme. The flowchart procedure includes making density comparisons of the plastic samples in water and alcohol and observing physical changes of plastic samples subjected to boiling water temperatures and exposure to acetone. This scheme is…

  6. Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Lermusiaux, Pierre F. J.

    2016-04-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.

  7. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
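
    The core idea of the adaptive particle sampling step, drawing more Monte Carlo histories for high-intensity spots, can be sketched as follows. This is a simplified, CPU-side illustration under assumed spot weights, not the authors' GPU library; the floor fraction that keeps low-weight spots from being starved of statistics is an assumption.

```python
import numpy as np

def allocate_particles(spot_weights, total_particles, floor_fraction=0.01):
    """Allocate Monte Carlo histories to pencil-beam spots in proportion
    to their current optimized intensities, while keeping a small floor
    so that low-weight spots still receive some statistics."""
    w = np.asarray(spot_weights, dtype=float)
    w = np.maximum(w, floor_fraction * w.max())
    shares = w / w.sum()
    n = np.floor(shares * total_particles).astype(int)
    # Distribute the remainder to the spots with the largest fractional parts.
    remainder = total_particles - n.sum()
    order = np.argsort(shares * total_particles - n)[::-1]
    n[order[:remainder]] += 1
    return n

weights = np.array([0.0, 0.2, 1.5, 3.0, 0.05])   # illustrative spot intensities
print(allocate_particles(weights, 100000))
```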

  8. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  9. A New Approach to Integrate GPU-based Monte Carlo Simulation into Inverse Treatment Plan Optimization for Proton Therapy

    PubMed Central

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2016-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456

  10. A pyramid scheme for three-dimensional diffusion equations on polyhedral meshes

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Hang, Xudeng; Yuan, Guangwei

    2017-12-01

    In this paper, a new cell-centered finite volume scheme is proposed for three-dimensional diffusion equations on polyhedral meshes, which is called the pyramid scheme (P-scheme). The scheme is designed for polyhedral cells with nonplanar cell-faces. The normal flux on a nonplanar cell-face is discretized on a planar face, which is determined by a simple optimization procedure. The resulting discrete form of the normal flux involves only cell-centered and cell-vertex unknowns, and is free of face-centered unknowns. In the case of hexahedral meshes with skewed nonplanar cell-faces, a quite simple expression is obtained for the discrete normal flux. Compared with the second-order accurate O-scheme [31], the P-scheme is more robust and the discretization cost is reduced remarkably. Numerical results are presented to show the performance of the P-scheme on various kinds of distorted meshes. In particular, the P-scheme is shown to be second-order accurate.

  11. Experimental investigation on ignition schemes of partially covered cavities in a supersonic flow

    NASA Astrophysics Data System (ADS)

    Cai, Zun; Sun, Mingbo; Wang, Hongbo; Wang, Zhenguo

    2016-04-01

    In this study, ignition schemes for a partially covered cavity in a scramjet combustor were investigated under inflow conditions of Ma = 2.1 with stagnation pressure P0 = 0.7 MPa and stagnation temperature T0 = 947 K. The results reveal that the ignition scheme of the partially covered cavity has a great impact on the ignition and flame stabilization process. There always exists an optimized global equivalence ratio for a fixed ignition scheme, and the optimized global equivalence ratio for ignition in the partially covered cavity is lower than that of the uncovered cavity. For tandem dual-cavities, ignition in the partially covered cavity could be enhanced by optimizing the global equivalence ratio. However, ignition in the partially covered cavity would be degraded by further increasing the global equivalence ratio. The global equivalence ratio and the jet penetration height are strongly coupled with the combustion flow-field. For multi-cavities, it was found that fuel injected on the opposite side could hardly be ignited after ignition in the partially covered cavity, even with the optimized global equivalence ratio. It is possible to realize ignition enhancement in the partially covered cavity by optimizing the global equivalence ratio, but this is not beneficial for thrust increment during the steady combustion process.

  12. Accuracy of the weighted essentially non-oscillatory conservative finite difference schemes

    NASA Astrophysics Data System (ADS)

    Don, Wai-Sun; Borges, Rafael

    2013-10-01

    In the reconstruction step of (2r-1)-order weighted essentially non-oscillatory conservative finite difference schemes (WENO) for solving hyperbolic conservation laws, nonlinear weights α_k and ω_k, such as the WENO-JS weights by Jiang et al. and the WENO-Z weights by Borges et al., are designed to recover the formal (2r-1) order (optimal order) of the upwinded central finite difference scheme when the solution is sufficiently smooth. The smoothness of the solution is determined by the lower-order local smoothness indicators β_k in each substencil. These nonlinear weight formulations share two important free parameters: the power p, which controls the amount of numerical dissipation, and the sensitivity ε, which is added to β_k to avoid a division by zero in the denominator of α_k. However, ε also plays a role in the order of accuracy of WENO schemes, especially in the presence of critical points. It was recently shown that, for any design order (2r-1), ε should be of Ω(Δx^2) (Ω(Δx^m) means that ε ⩾ C Δx^m for some C independent of Δx, as Δx → 0) for the WENO-JS scheme to achieve the optimal order, regardless of critical points. In this paper, we derive an alternative proof of the sufficient condition using special properties of β_k. Moreover, it was unknown whether the WENO-Z scheme should obey the same condition on ε. Here, using the same special properties of β_k, we prove that in fact the optimal order of the WENO-Z scheme can be guaranteed with a much weaker condition ε = Ω(Δx^m), where m(r,p) ⩾ 2 is the optimal sensitivity order, regardless of critical points. Both theoretical results are confirmed numerically on smooth functions with arbitrary orders of critical points. This is a highly desirable feature, as illustrated with the Lax problem and the Mach 3 shock-density wave interaction of the one-dimensional Euler equations, for a smaller ε allows better essentially non-oscillatory shock capturing as it does not over-dominate over the size of
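
    For readers unfamiliar with the quantities named above, the sketch below computes a single fifth-order (r = 3) WENO-Z reconstruction, showing where the smoothness indicators β_k, the sensitivity ε and the power p enter. The stencil coefficients are the standard published ones; the default ε and p values here are illustrative rather than the Ω(Δx^m) choice discussed in the abstract.

```python
import numpy as np

def weno5_z_reconstruct(f, eps=1e-40, p=2):
    """Fifth-order WENO-Z reconstruction of the left-biased interface
    value f_{i+1/2} from the five stencil values f = (f_{i-2}, ..., f_{i+2}).
    eps guards the division by the smoothness indicators; p controls the
    amount of numerical dissipation."""
    fm2, fm1, f0, fp1, fp2 = f
    # Jiang-Shu smoothness indicators of the three substencils.
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    d = np.array([0.1, 0.6, 0.3])              # ideal (optimal) weights
    tau5 = abs(b0 - b2)                        # global smoothness measure (WENO-Z)
    alpha = d * (1.0 + (tau5 / (np.array([b0, b1, b2]) + eps))**p)
    w = alpha / alpha.sum()
    # Third-order candidate reconstructions on each substencil.
    q0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0
    q1 = (-fm1 + 5*f0 + 2*fp1) / 6.0
    q2 = (2*f0 + 5*fp1 - fp2) / 6.0
    return w @ np.array([q0, q1, q2])

# Smooth data: the nonlinear weights should stay close to the ideal ones.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) * 0.1
print(weno5_z_reconstruct(np.sin(x)))
```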

  13. Optimal Congestion Management in Electricity Market Using Particle Swarm Optimization with Time Varying Acceleration Coefficients

    NASA Astrophysics Data System (ADS)

    Boonyaritdachochai, Panida; Boonchuay, Chanwit; Ongsakul, Weerakorn

    2010-06-01

    This paper proposes an optimal power redispatching approach for congestion management in a deregulated electricity market. Generator sensitivities are used to select the generators to be redispatched, which reduces the number of participating generators. The power adjustment cost and total redispatched power are minimized by particle swarm optimization with time-varying acceleration coefficients (PSO-TVAC). The IEEE 30-bus and IEEE 118-bus systems are used to illustrate the proposed approach. Test results show that the proposed optimization scheme provides the lowest adjustment cost and redispatched power compared to the other schemes. The proposed approach is useful for the system operator to manage transmission congestion.
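
    The sketch below shows a generic particle swarm optimizer with time-varying acceleration coefficients of the kind referred to above (cognitive coefficient decaying, social coefficient growing over the run), applied to a toy quadratic "adjustment cost". The coefficient ranges, inertia schedule and test function are common illustrative choices, not the paper's congestion-management formulation.

```python
import numpy as np

def pso_tvac(cost, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal PSO with time-varying acceleration coefficients: the
    cognitive coefficient c1 decays from 2.5 to 0.5 while the social
    coefficient c2 grows from 0.5 to 2.5 over the run."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for t in range(n_iter):
        frac = t / n_iter
        c1 = 2.5 - 2.0 * frac          # cognitive term shrinks
        c2 = 0.5 + 2.0 * frac          # social term grows
        w = 0.9 - 0.5 * frac           # inertia weight
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(cost, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy "adjustment cost": quadratic cost of redispatching two generators.
best, best_cost = pso_tvac(lambda p: 3*p[0]**2 + 5*p[1]**2 + 2*p[0]*p[1],
                           bounds=[(-1.0, 1.0), (-1.0, 1.0)])
print(best, best_cost)
```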

  14. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. This technique has mostly been applied in the uncompressed spatial domain, and it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves a bit rate as low as that of the original BTC algorithm.

  15. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
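
    A minimal simulation of the "basic" two-stage idea described above is sketched below: half the herd-size-based sample is scored first, sampling stops early if the interim prevalence estimate is clearly above or below a pass/fail threshold, and otherwise the second half is scored. The threshold, margin, herd prevalence and sample size are illustrative assumptions, not the published Welfare Quality values.

```python
import numpy as np

def two_stage_classify(herd_prev, full_n, threshold=0.15, margin=0.05, rng=None):
    """Simulate the basic two-stage sequential scheme: score half the
    sample, stop early if the interim prevalence is clearly above or
    below the pass/fail threshold, otherwise score the second half."""
    rng = np.random.default_rng() if rng is None else rng
    half = full_n // 2
    lame1 = rng.binomial(half, herd_prev)
    p1 = lame1 / half
    if abs(p1 - threshold) > margin:           # confident: stop early
        return p1 > threshold, half
    lame2 = rng.binomial(full_n - half, herd_prev)
    p = (lame1 + lame2) / full_n               # full-sample estimate
    return p > threshold, full_n

# Average sample size and classification rate over many simulated farms.
rng = np.random.default_rng(1)
results = [two_stage_classify(0.25, 60, rng=rng) for _ in range(10000)]
fails = np.mean([r[0] for r in results])
mean_n = np.mean([r[1] for r in results])
print(f"classified 'bad' in {fails:.2%} of runs, mean sample size {mean_n:.1f}")
```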

  16. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In the transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increased cracking probability. Thanks to the drying step optimization, large-size spinel samples were obtained.

  17. A Study on a Centralized Under-Voltage Load Shedding Scheme Considering the Load Characteristics

    NASA Astrophysics Data System (ADS)

    Deng, Jiyu; Liu, Junyong

    Under-voltage load shedding is an important measure for maintaining voltage stability. Aiming at the optimal load shedding problem considering the load characteristics, it is first pointed out, using the equivalent Thevenin circuit, that the traditional under-voltage load shedding scheme based on a static load model may make the analysis inaccurate. Then, a dynamic voltage stability margin indicator is derived from local measurements. The derived indicator reflects the voltage change of the key area in an approximately linear way, so the dimensions of the optimization problem are greatly reduced by using this indicator. Finally, a mathematical model of the centralized load shedding scheme is built with the indicator, considering the load characteristics, and HSPPSO is introduced to solve the optimization problem. Simulation results on the IEEE-39 system show that the proposed scheme displays good adaptability in solving under-voltage load shedding considering dynamic load characteristics.

  18. A user-driven treadmill control scheme for simulating overground locomotion.

    PubMed

    Kim, Jonghyun; Stanley, Christopher J; Curatalo, Lindsey A; Park, Hyung-Soon

    2012-01-01

    Treadmill-based locomotor training should simulate overground walking as closely as possible for optimal skill transfer. The constant speed of a standard treadmill encourages automaticity rather than engagement and fails to simulate the variable speeds encountered during real-world walking. To address this limitation, this paper proposes a user-driven treadmill velocity control scheme that allows the user to experience natural fluctuations in walking velocity with minimal unwanted inertial force due to acceleration/deceleration of the treadmill belt. A smart estimation limiter in the scheme effectively attenuates the inertial force during velocity changes. The proposed scheme requires measurement of pelvic and swing foot motions, and is developed for a treadmill of typical belt length (1.5 m). The proposed scheme is quantitatively evaluated here with four healthy subjects by comparing it with the most advanced control scheme identified in the literature.

  19. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    PubMed

    Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem

    2018-01-01

    In this paper, a hybrid heuristic scheme based on two different basis functions, i.e. the log-sigmoid and the Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared and found in close agreement with both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.
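
    To illustrate the trial-solution-plus-global-error-minimization idea described above, the sketch below fits a small log-sigmoid trial solution to a toy nonlinear first-order ODE and minimizes the summed squared residual with SciPy's differential_evolution, used here as a stand-in for the GA + interior-point hybrid. The test equation, network size, parameter bounds and collocation grid are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy problem standing in for a nonlinear heat-transfer equation:
# y'(x) = -y(x)^2, y(0) = 1 on [0, 1]; exact solution y = 1/(1 + x).
x = np.linspace(0.0, 1.0, 25)                 # collocation points

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def trial(params, x):
    """Trial solution built from 3 log-sigmoid terms; the form
    1 + x*N(x) enforces the initial condition y(0) = 1 exactly."""
    a, w, b = params.reshape(3, 3)
    net = (a[:, None] * sigmoid(w[:, None] * x + b[:, None])).sum(axis=0)
    return 1.0 + x * net

def fitness(params):
    """Global error: sum of squared ODE residuals at the collocation
    points, with a finite-difference estimate of the derivative."""
    y = trial(params, x)
    residual = np.gradient(y, x) + y**2
    return float(np.sum(residual**2))

result = differential_evolution(fitness, bounds=[(-5, 5)] * 9, seed=0,
                                maxiter=300, tol=1e-8)
y_approx = trial(result.x, x)
print(np.max(np.abs(y_approx - 1.0 / (1.0 + x))))   # max error vs exact
```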

  20. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    NASA Astrophysics Data System (ADS)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ± 2 ° . This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  1. Sparse Learning with Stochastic Composite Optimization.

    PubMed

    Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei

    2017-06-01

    In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning that aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions at the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitations of online-to-batch conversion. Even when the objective function is strongly convex, their high-probability bounds can only attain O(√(log(1/δ)/T)), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel, powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experimental results show that our methods can outperform the existing methods in sparse learning while improving the high-probability bound to approximately O(log(log(T)/δ)/λT).

  2. Constrained H1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361

  3. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
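
    The "optimal allocation" referred to in conclusion (3) is usually the Neyman allocation, sketched below for assumed strata sizes and soil-moisture standard deviations; both the numbers and the omission of the finite population correction are simplifying assumptions for illustration.

```python
import numpy as np

def neyman_allocation(strata_sizes, strata_sds, total_n):
    """Optimal (Neyman) allocation: sample sizes proportional to
    N_h * S_h, which minimizes the variance of the stratified mean
    for a fixed total sample size."""
    weights = np.asarray(strata_sizes) * np.asarray(strata_sds)
    n_h = np.maximum(1, np.round(total_n * weights / weights.sum()))
    return n_h.astype(int)

def stratified_mean_variance(strata_sizes, strata_sds, n_h):
    """Variance of the stratified estimate of the field mean
    (finite population correction omitted for brevity)."""
    W = np.asarray(strata_sizes) / np.sum(strata_sizes)
    return float(np.sum(W**2 * np.asarray(strata_sds)**2 / n_h))

# Illustrative strata (e.g. depth layers or texture classes): sizes in
# grid cells and assumed soil-moisture standard deviations (vol. %).
sizes = np.array([400, 250, 150])
sds = np.array([2.0, 4.5, 6.0])
n_h = neyman_allocation(sizes, sds, total_n=30)
print(n_h, stratified_mean_variance(sizes, sds, n_h))
```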

  4. Programming scheme based optimization of hybrid 4T-2R OxRAM NVSRAM

    NASA Astrophysics Data System (ADS)

    Majumdar, Swatilekha; Kingra, Sandeep Kaur; Suri, Manan

    2017-09-01

    In this paper, we present a novel single-cycle programming scheme for 4T-2R NVSRAM, exploiting pulse-engineered input signals. OxRAM devices based on a 3 nm thick bi-layer active switching oxide and a 90 nm CMOS technology node were used for all simulations. The cell design is implemented for real-time non-volatility rather than last-bit or power-down non-volatility. A detailed analysis of the proposed single-cycle, parallel RRAM device programming scheme is presented in comparison to the two-cycle sequential RRAM programming used for similar 4T-2R NVSRAM bit-cells. The proposed single-cycle programming scheme coupled with the 4T-2R architecture leads to several benefits, such as the possibility of unconventional transistor sizing, 50% lower latency, a 20% improvement in SNM, and ∼20× lower energy requirements, compared with the two-cycle programming approach.

  5. Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Barker, W. Howard

    2004-07-01

    The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling and averaging of those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of roughly 3.
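
    A toy sketch of the McICA pairing, and of the first variance-reduction idea of restricting samples to cloudy subcolumns, is given below; the one-column flux function, the number of g-points and the cloud field are all placeholder assumptions, not part of any real radiation scheme.

```python
import numpy as np

def mcica_flux(g_point_flux, subcolumn_clouds, n_g=16, rng=None):
    """Sketch of the McICA pairing: each spectral g-point is evaluated
    for a single randomly drawn cloudy subcolumn rather than averaged
    over all subcolumns, giving an unbiased but noisy domain mean.

    g_point_flux(g, cloud_profile) -> flux  (user-supplied 1D solver)
    subcolumn_clouds: array (n_sub, n_lev) of subcolumn cloud states."""
    rng = np.random.default_rng() if rng is None else rng
    n_sub = subcolumn_clouds.shape[0]
    # Restrict sampling to cloudy subcolumns (variance reduction #1).
    cloudy = np.where(subcolumn_clouds.any(axis=1))[0]
    total = 0.0
    for g in range(n_g):
        idx = rng.choice(cloudy) if cloudy.size else rng.integers(n_sub)
        total += g_point_flux(g, subcolumn_clouds[idx])
    return total

# Toy flux function and cloud field, purely illustrative.
rng = np.random.default_rng(0)
clouds = (rng.random((20, 40)) > 0.8).astype(float)
flux = mcica_flux(lambda g, c: 10.0 - 0.1 * c.sum() - 0.05 * g, clouds, rng=rng)
print(flux)
```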

  6. Optimal reconstruction of the states in qutrit systems

    NASA Astrophysics Data System (ADS)

    Yan, Fei; Yang, Ming; Cao, Zhuo-Liang

    2010-10-01

    Based on mutually unbiased measurements, an optimal tomographic scheme for the multiqutrit states is presented explicitly. Because the reconstruction process of states based on mutually unbiased states is free of information waste, we refer to our scheme as the optimal scheme. By optimal we mean that the number of the required conditional operations reaches the minimum in this tomographic scheme for the states of qutrit systems. Special attention will be paid to how those different mutually unbiased measurements are realized; that is, how to decompose each transformation that connects each mutually unbiased basis with the standard computational basis. It is found that all those transformations can be decomposed into several basic implementable single- and two-qutrit unitary operations. For the three-qutrit system, there exist five different mutually unbiased-bases structures with different entanglement properties, so we introduce the concept of physical complexity to minimize the number of nonlocal operations needed over the five different structures. This scheme is helpful for experimental scientists to realize the most economical reconstruction of quantum states in qutrit systems.

  7. SU-E-T-175: Clinical Evaluations of Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chi, Y; Li, Y; Tian, Z

    2015-06-15

    Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). It is the objective of this study to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used and the particle number sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, plan dose re-computed by MC, and dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method, it was found that the PTV D95% was reduced by 4.60%-6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.

  8. Classification and Evaluation of Coherent Synchronous Sampled-Data Telemetry Systems

    NASA Technical Reports Server (NTRS)

    Viterbi, Andrew

    1961-01-01

    This paper analyzes the various types of continuous wave and pulse modulation for the transmission of sampled data over channels perturbed by white gaussian noise. Optimal coherent synchronous detection schemes for all the different modulation methods are shown to belong to one of two general classes: linear synchronous detection and correlation detection. The figures of merit, mean-square signal-to-error ratio and bandwidth occupancy, are determined for each system and compared.

  9. Resource optimization scheme for multimedia-enabled wireless mesh networks.

    PubMed

    Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md Jalil; Suh, Doug Young

    2014-08-08

    Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to a network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs enabled with multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered the resource underutilization caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct the optimal connected mesh backbone topology with a minimum number of links and relay nodes which satisfies the given end-to-end QoS demands for multimedia traffic and identifies extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies prove that our heuristic algorithm provides near-optimal results and saves about 20% of resources from being wasted by QoS-provisioning routing and random topology deployment.

  10. Resource Optimization Scheme for Multimedia-Enabled Wireless Mesh Networks

    PubMed Central

    Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md. Jalil; Suh, Doug Young

    2014-01-01

    Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to a network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs enabled with multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered the resource underutilization caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct the optimal connected mesh backbone topology with a minimum number of links and relay nodes which satisfies the given end-to-end QoS demands for multimedia traffic and identifies extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies prove that our heuristic algorithm provides near-optimal results and saves about 20% of resources from being wasted by QoS-provisioning routing and random topology deployment. PMID:25111241

  11. Design Method of Digital Optimal Control Scheme and Multiple Paralleled Bridge Type Current Amplifier for Generating Gradient Magnetic Fields in MRI Systems

    NASA Astrophysics Data System (ADS)

    Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo

    This paper deals with a digital control scheme for a multiple paralleled high-frequency switching current amplifier with four-quadrant chopper for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track a highly precise current pattern in the Gradient Coils (GC), the proposed current amplifier cancels the switching current ripples in the GC against each other and uses optimally designed switching gate pulse patterns that are not influenced by the large filter current ripple amplitude. The optimal control implementation and linear control theory fit together well in GC current amplifiers and yield excellent characteristics. The digital control system can be realized easily through digital control implementation on DSPs or microprocessors. Multiple parallel-operating microprocessors realize a GC current-pattern-tracking amplifier with two or more paralleled stages and an optimal control design, and excellent results are obtained for improving the image quality of MRI systems.

  12. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter. It can also model the stochastic information exchange among decentralized controllers, to name just a few applications. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.

  13. Multiple indicator cokriging with application to optimal sampling for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Dowd, Peter A.

    2005-02-01

    A probabilistic solution to the problem of spatial interpolation of a variable at an unsampled location consists of estimating the local cumulative distribution function (cdf) of the variable at that location from values measured at neighbouring locations. As this distribution is conditional to the data available at neighbouring locations it incorporates the uncertainty of the value of the variable at the unsampled location. Geostatistics provides a non-parametric solution to such problems via the various forms of indicator kriging. In a least squares sense indicator cokriging is theoretically the best estimator but in practice its use has been inhibited by problems such as an increased number of violations of order relations constraints when compared with simpler forms of indicator kriging. In this paper, we describe a methodology and an accompanying computer program for estimating a vector of indicators by simple indicator cokriging, i.e. simultaneous estimation of the cdf for K different thresholds, {F(u,z_k), k=1,…,K}, by solving a unique cokriging system for each location at which an estimate is required. This approach produces a variance-covariance matrix of the estimated vector of indicators which is used to fit a model to the estimated local cdf by logistic regression. This model is used to correct any violations of order relations and automatically ensures that all order relations are satisfied, i.e. the estimated cumulative distribution function, F̂(u,z_k), is such that F̂(u,z_k) ∈ [0,1] for all z_k, and F̂(u,z_k) ≤ F̂(u,z_k') for z_k < z_k'. A case study, in which an optimal spatial sampling strategy is required for environmental monitoring, is used to demonstrate the methodology and the use of the program. The purpose of the sampling is to determine the spatial extent of a specified phase (e.g. areal distribution of a pollutant or contaminant). The sampling is sequential with a new sample being added to an existing network at each step of the procedure. The optimal

  14. Improved scheme for Cross-track Infrared Sounder geolocation assessment and optimization

    NASA Astrophysics Data System (ADS)

    Wang, Likun; Zhang, Bin; Tremblay, Denis; Han, Yong

    2017-01-01

    An improved scheme for Cross-track Infrared Sounder (CrIS) geolocation assessment for all scan angles (from -48.5° to 48.5°) is developed in this study. The method uses spatially collocated radiance measurements from the Visible Infrared Imaging Radiometer Suite (VIIRS) image band I5 to evaluate the geolocation performance of the CrIS Sensor Data Records (SDR) by taking advantage of its high spatial resolution (375 m at nadir) and accurate geolocation. The basic idea is to perturb CrIS line-of-sight vectors along the in-track and cross-track directions to find a position where the CrIS and VIIRS data match more closely. The perturbation angles at this best-matched position are then used to evaluate the CrIS geolocation accuracy. More importantly, the new method is capable of performing postlaunch on-orbit geometric calibration by optimizing mapping angle parameters based on the assessment results and thus can be further extended to the following CrIS sensors on new satellites. Finally, the proposed method is employed to evaluate the CrIS geolocation accuracy on the current Suomi National Polar-orbiting Partnership satellite. The error characteristics are revealed along the scan positions in the in-track and cross-track directions. It is found that there are relatively large errors (approximately 4 km) in the cross-track direction close to the end of scan positions. With newly updated mapping angles, the geolocation accuracy is greatly improved for all scan positions (less than 0.3 km). This makes CrIS and VIIRS spatially aligned and thus benefits applications that need combined CrIS and VIIRS measurements and products.

  15. Optimization of a Sample Processing Protocol for Recovery of ...

    EPA Pesticide Factsheets

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps.

  16. Optical realization of optimal symmetric real state quantum cloning machine

    NASA Astrophysics Data System (ADS)

    Hu, Gui-Yu; Zhang, Wen-Hai; Ye, Liu

    2010-01-01

    We present an experimentally uniform linear optical scheme to implement the optimal 1→2 symmetric and optimal 1→3 symmetric economical real state quantum cloning machines for the polarization state of a single photon. The scheme requires single-photon sources and a two-photon polarization entangled state as input states. It also involves linear optical elements and three-photon coincidence detection. We then consider a realistic realization of the scheme using parametric down-conversion as the photon resource. It is shown that, under certain conditions, the scheme is feasible with current experimental technology.

  17. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
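
    The following Python sketch, not taken from the paper, illustrates one plausible way to build a localized random sampling mask: a pixel is chosen uniformly at random and its neighbours are then included with a probability that decays with distance. The image size, number of centres, and Gaussian-like decay law are assumptions made for the illustration.

        # Sketch of a "localized random sampling" mask: random centres, with nearby
        # pixels measured with a probability that falls off with distance.
        import numpy as np

        rng = np.random.default_rng(1)

        def localized_random_mask(shape=(64, 64), n_centres=40, radius=5, scale=2.0):
            mask = np.zeros(shape, dtype=bool)
            rows, cols = shape
            for _ in range(n_centres):
                r0, c0 = rng.integers(0, rows), rng.integers(0, cols)
                mask[r0, c0] = True
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        r, c = r0 + dr, c0 + dc
                        if 0 <= r < rows and 0 <= c < cols:
                            d = np.hypot(dr, dc)
                            # nearby pixels are sampled with distance-dependent probability
                            if rng.random() < np.exp(-(d / scale) ** 2):
                                mask[r, c] = True
            return mask

        mask = localized_random_mask()
        print(f"sampling {mask.mean():.1%} of the pixels")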

  18. Characterization and optimization of polycrystalline Si70%Ge30% for surface micromachined thermopiles in human body applications

    NASA Astrophysics Data System (ADS)

    Wang, Ziyang; Fiorini, Paolo; Leonov, Vladimir; Van Hoof, Chris

    2009-09-01

    This paper presents the material characterization methods, characterization results and the optimization scheme for polycrystalline Si70%Ge30% (poly-SiGe) from the perspective of its application in a surface micromachined thermopile. Due to its comparative advantages, such as lower thermal conductivity and ease of processing, over other materials, poly-SiGe is chosen to fabricate a surface micromachined thermopile and eventually a wearable thermoelectric generator (TEG) to be used on a human body. To enable optimal design of advanced thermocouple microstructures, poly-SiGe sample materials prepared by two different techniques, namely low-pressure chemical vapor deposition (LPCVD) with in situ doping and rapid thermal chemical vapor deposition (RTCVD) with ion implantation, have been characterized. Relevant material properties, including electrical resistivity, Seebeck coefficient, thermal conductivity and specific contact resistance, have been reported. For the determination of thermal conductivity, a novel surface-micromachined test structure based on the Seebeck effect is designed, fabricated and measured. Compared to the traditional test structures, it is more advantageous for sample materials with a relatively large Seebeck coefficient, such as poly-SiGe. Based on the characterization results, a further optimization scheme is suggested to allow independent respective optimization of the figure of merit and the specific contact resistance.

  19. Recovery Schemes for Primitive Variables in General-relativistic Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Siegel, Daniel M.; Mösta, Philipp; Desai, Dhruv; Wu, Samantha

    2018-05-01

    General-relativistic magnetohydrodynamic (GRMHD) simulations are an important tool to study a variety of astrophysical systems such as neutron star mergers, core-collapse supernovae, and accretion onto compact objects. A conservative GRMHD scheme numerically evolves a set of conservation equations for “conserved” quantities and requires the computation of certain primitive variables at every time step. This recovery procedure constitutes a core part of any conservative GRMHD scheme and it is closely tied to the equation of state (EOS) of the fluid. In the quest to include nuclear physics, weak interactions, and neutrino physics, state-of-the-art GRMHD simulations employ finite-temperature, composition-dependent EOSs. While different schemes have individually been proposed, the recovery problem still remains a major source of error, failure, and inefficiency in GRMHD simulations with advanced microphysics. The strengths and weaknesses of the different schemes when compared to each other remain unclear. Here we present the first systematic comparison of various recovery schemes used in different dynamical spacetime GRMHD codes for both analytic and tabulated microphysical EOSs. We assess the schemes in terms of (i) speed, (ii) accuracy, and (iii) robustness. We find large variations among the different schemes and that there is not a single ideal scheme. While the computationally most efficient schemes are less robust, the most robust schemes are computationally less efficient. More robust schemes may require an order of magnitude more calls to the EOS, which are computationally expensive. We propose an optimal strategy of an efficient three-dimensional Newton–Raphson scheme and a slower but more robust one-dimensional scheme as a fall-back.
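
    The proposed strategy of pairing a fast solver with a robust fall-back can be sketched generically as follows (Python); the residual used here is an arbitrary stand-in, since the actual recovery equations depend on the conserved variables and the equation of state.

        # Generic sketch of a "fast scheme with robust fall-back": try Newton-Raphson first
        # and, if it stalls or leaves the physical bracket, fall back to bisection.
        import math

        def solve_with_fallback(residual, d_residual, x0, lo, hi, tol=1e-12, max_newton=30):
            # Fast path: Newton-Raphson
            x = x0
            for _ in range(max_newton):
                f = residual(x)
                if abs(f) < tol:
                    return x, "newton"
                df = d_residual(x)
                if df == 0.0:
                    break
                x_new = x - f / df
                if not math.isfinite(x_new) or not (lo <= x_new <= hi):
                    break                      # left the physical bracket: give up on Newton
                x = x_new
            # Robust path: bisection on the bracketing interval [lo, hi]
            a, b = lo, hi
            fa = residual(a)
            for _ in range(200):
                m = 0.5 * (a + b)
                fm = residual(m)
                if abs(fm) < tol:
                    return m, "bisection"
                if fa * fm < 0.0:
                    b = m
                else:
                    a, fa = m, fm
            return 0.5 * (a + b), "bisection"

        # Example: recover x from a stand-in nonlinear relation
        root, method = solve_with_fallback(lambda x: x**3 - 2.0, lambda x: 3 * x**2,
                                           x0=5.0, lo=0.0, hi=10.0)
        print(method, root)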

  20. Optimal Control for Quantum Driving of Two-Level Systems

    NASA Astrophysics Data System (ADS)

    Qi, Xiao-Qiu

    2018-01-01

    In this paper, the optimal quantum control of two-level systems is studied by the decompositions of SU(2). Using the Pontryagin maximum principle, the minimum time of quantum control is analyzed in detail. The solution scheme of the optimal control function is given in the general case. Finally, two specific cases, which can be applied in many quantum systems, are used to illustrate the scheme, while the corresponding optimal control functions are obtained.

  1. Matching soil salinization and cropping systems in communally managed irrigation schemes

    NASA Astrophysics Data System (ADS)

    Malota, Mphatso; Mchenga, Joshua

    2018-03-01

    Occurrence of soil salinization in irrigation schemes can be a good indicator of the need to introduce highly salt-tolerant crops in those schemes. This study assessed the level of soil salinization in the communally managed 233 ha Nkhate irrigation scheme in the Lower Shire Valley region of Malawi. Soil samples were collected within the 0-0.4 m soil depth from eight randomly selected irrigation blocks. Irrigation water samples were also collected from five randomly selected locations along the Nkhate River, which supplies irrigation water to the scheme. Salinity of both the soil and the irrigation water samples was determined using an electrical conductivity (EC) meter. Analysis of the results indicated that the irrigation water was suitable for irrigation purposes even for crops with very low salinity tolerance (ECi < 2 dS/m). However, root-zone soil salinity profiles showed that leaching of salts was not adequate and that the leaching requirement for the scheme needs to be re-examined and consistently adhered to during irrigation operation. The study concluded that the cropping system at the scheme needs to be adjusted to match the prevailing soil and irrigation water salinity levels.

  2. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method starting from the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible and that the optimization is fast when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads for a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Optimal probes for withdrawal of uncontaminated fluid samples

    NASA Astrophysics Data System (ADS)

    Sherwood, J. D.

    2005-08-01

    Withdrawal of fluid by a composite probe pushed against the face z = 0 of a porous half-space z > 0 is modeled assuming incompressible Darcy flow. The probe is circular, of radius a, with an inner sampling section of radius αa and a concentric outer guard probe occupying αa < r < a. The rock is saturated with fluid 1 down to a depth βa, and with fluid 2 at depths z > βa; the two fluids have the same viscosity. It is assumed that the interface between the two fluids is sharp and remains so as it moves through the rock. The pressure in the probe is lower than that of the pore fluid in the rock, so that the fluid interface is convected with the fluids towards the probe. This idealized axisymmetric problem is solved numerically, and it is shown that an analysis based on far-field spherical flow towards a point sink is a good approximation when the nondimensional depth of fluid 1 is large, i.e., β ≫ 1. The inner sampling probe eventually produces pure fluid 2, and this technique has been proposed for sampling pore fluids in rock surrounding an oil well [A. Hrametz, C. Gardner, M. Wais, and M. Proett, U.S. Patent No. 6,301,959 B1 (16 October 2001)]. Fluid 1 is drilling fluid filtrate, which has displaced the original pore fluid (fluid 2), a pure sample of which is required. The time required to collect an uncontaminated sample of original pore fluid can be minimized by a suitable choice of the probe geometry α [J. Sherwood, J. Fitzgerald and B. Hill, U.S. Patent No. 6,719,049 B2 (13 April 2004)]. It is shown that the optimal choice of α depends on the depth of filtrate invasion β and the volume of sample required.

  4. Optimization of Evans blue quantitation in limited rat tissue samples

    PubMed Central

    Wang, Hwai-Lee; Lai, Ted Weita

    2014-01-01

    Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration based on a standard 96-well plate reader. First, ethanol ensured a consistent optic path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements as a result of biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful in the detection of EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting. PMID:25300427

  5. Optimization of Evans blue quantitation in limited rat tissue samples

    NASA Astrophysics Data System (ADS)

    Wang, Hwai-Lee; Lai, Ted Weita

    2014-10-01

    Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration based on a standard 96-well plate reader. First, ethanol ensured a consistent optic path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements as a result of biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful in the detection of EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting.

  6. Renormalization scheme dependence of high-order perturbative QCD predictions

    NASA Astrophysics Data System (ADS)

    Ma, Yang; Wu, Xing-Gang

    2018-02-01

    Conventionally, one adopts the typical momentum flow of a physical observable as the renormalization scale for its perturbative QCD (pQCD) approximant. This simple treatment leads to renormalization scheme-and-scale ambiguities, because the scheme and scale dependence of the strong coupling and of the perturbative coefficients does not exactly cancel at any fixed order. It is believed that those ambiguities will be softened by including more higher-order terms. In this paper, to show how the renormalization scheme dependence changes when more loop terms are included, we discuss the sensitivity of the pQCD prediction to the scheme parameters by using the scheme-dependent {β_{m≥2}} terms. We adopt two four-loop examples, e+e- → hadrons and τ decays into hadrons, for detailed analysis. Our results show that under conventional scale setting, including more and more loop terms does not reduce the scheme dependence of the pQCD prediction as efficiently as it reduces the scale dependence. Thus a proper scale-setting approach is important for reducing the scheme dependence. We observe that the principle of minimum sensitivity could be such a scale-setting approach, which provides a practical way to achieve an optimal scheme and scale by requiring the pQCD approximant to be independent of the "unphysical" theoretical conventions.

  7. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    NASA Astrophysics Data System (ADS)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out during the lifecycle of a Subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, a STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to do a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes stabilized. Furthermore, to obtain the parameterization schemes that optimally categorize STC structure, a verification using Cyclone Phase Space is assessed. Consequently, the combination of parameterizations including the Tiedtke cumulus schemes were again the best in categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  8. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate, optimal wavelet, mean+/-SD, 5.46+/-1.01%; worst wavelet 12.76+/-2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
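
    As a rough sketch of the distortion-versus-compression trade-off described above (with a fixed, non-optimized mother wavelet rather than the paper's signal-dependent wavelet, and simple coefficient thresholding rather than embedded zerotree coding), the following Python example assumes the PyWavelets package is available.

        # Compress a 1-D signal by zeroing all but the largest wavelet coefficients
        # and measure the resulting distortion for a fixed Daubechies wavelet.
        import numpy as np
        import pywt   # PyWavelets

        rng = np.random.default_rng(2)
        t = np.linspace(0, 1, 1024)
        signal = (np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
                  + 0.05 * rng.standard_normal(t.size))

        coeffs = pywt.wavedec(signal, "db4", level=5)
        flat = np.concatenate(coeffs)
        keep = int(0.1 * flat.size)                      # keep ~10% of coefficients
        threshold = np.sort(np.abs(flat))[-keep]

        coeffs_thresh = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
        reconstructed = pywt.waverec(coeffs_thresh, "db4")[: signal.size]

        distortion = np.linalg.norm(signal - reconstructed) / np.linalg.norm(signal)
        print(f"relative distortion with db4 at 10% coefficients kept: {distortion:.3%}")

    Repeating this for different mother wavelets (or for a parameterized wavelet family, as the paper does) turns the distortion at a fixed compression rate into the selection criterion.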

  9. Optimized probability sampling of study sites to improve generalizability in a multisite intervention trial.

    PubMed

    Kraschnewski, Jennifer L; Keyserling, Thomas C; Bangdiwala, Shrikant I; Gizlice, Ziya; Garcia, Beverly A; Johnston, Larry F; Gustafson, Alison; Petrovic, Lindsay; Glasgow, Russell E; Samuel-Hodge, Carmen D

    2010-01-01

    Studies of type 2 translation, the adaption of evidence-based interventions to real-world settings, should include representative study sites and staff to improve external validity. Sites for such studies are, however, often selected by convenience sampling, which limits generalizability. We used an optimized probability sampling protocol to select an unbiased, representative sample of study sites to prepare for a randomized trial of a weight loss intervention. We invited North Carolina health departments within 200 miles of the research center to participate (N = 81). Of the 43 health departments that were eligible, 30 were interested in participating. To select a representative and feasible sample of 6 health departments that met inclusion criteria, we generated all combinations of 6 from the 30 health departments that were eligible and interested. From the subset of combinations that met inclusion criteria, we selected 1 at random. Of 593,775 possible combinations of 6 counties, 15,177 (3%) met inclusion criteria. Sites in the selected subset were similar to all eligible sites in terms of health department characteristics and county demographics. Optimized probability sampling improved generalizability by ensuring an unbiased and representative sample of study sites.
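
    The selection step can be sketched in a few lines of Python; the inclusion rule below is a placeholder, since the study's actual criteria involved feasibility and county characteristics.

        # Sketch of the site-selection idea: enumerate all combinations of 6 sites out of
        # the 30 interested ones, keep those meeting the inclusion criteria, pick one at random.
        import itertools
        import random

        random.seed(0)
        interested_sites = list(range(30))                  # 30 interested health departments

        def meets_inclusion_criteria(combo):
            # placeholder rule standing in for the study's feasibility criteria
            return sum(combo) % 7 == 0

        eligible_combos = [c for c in itertools.combinations(interested_sites, 6)
                           if meets_inclusion_criteria(c)]
        selected = random.choice(eligible_combos)
        print(len(eligible_combos), "feasible combinations; selected:", selected)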

  10. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    NASA Astrophysics Data System (ADS)

    Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal

    Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands careful control of every step, from sampling to the measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for those relating to parameters specific to PerkinElmer instruments, most of the results presented here can be extended to other counters.
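
    A small Python sketch of window optimization follows. It assumes the common liquid-scintillation figure of merit FOM = E^2/B (detection efficiency squared over background count rate), which the abstract does not spell out, and uses entirely synthetic spectra.

        # Choose a counting energy window by maximizing an assumed figure of merit E^2/B.
        import numpy as np

        channels = np.arange(0, 200)                               # hypothetical energy channels
        source_spectrum = np.exp(-0.5 * ((channels - 40) / 15.0) ** 2)   # low-energy beta-like peak
        source_spectrum /= source_spectrum.sum()
        background = 0.02 + 0.0005 * channels                      # slowly rising background per channel

        best = None
        for lo in range(0, 150, 5):
            for hi in range(lo + 10, 200, 5):
                window = slice(lo, hi)
                efficiency = source_spectrum[window].sum()         # fraction of source counts in window
                bkg = background[window].sum()
                fom = efficiency ** 2 / bkg
                if best is None or fom > best[0]:
                    best = (fom, lo, hi)

        print(f"best window: channels {best[1]}-{best[2]}, FOM = {best[0]:.3f}")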

  11. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    NASA Astrophysics Data System (ADS)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the use of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
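
    A minimal Python implementation of the non-overlapping Allan deviation is sketched below for an evenly sampled series; the synthetic data (white noise plus a slow drift) merely stand in for real analyzer output.

        # Non-overlapping Allan deviation for an evenly sampled time series, useful for
        # deciding how long to average and how often to calibrate a continuous analyzer.
        import numpy as np

        def allan_deviation(y, dt, taus):
            """y: measurements at fixed spacing dt; taus: averaging times (multiples of dt)."""
            adevs = []
            for tau in taus:
                m = int(round(tau / dt))            # samples per averaging block
                n_blocks = len(y) // m
                if n_blocks < 2:
                    adevs.append(np.nan)
                    continue
                block_means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
                avar = 0.5 * np.mean(np.diff(block_means) ** 2)   # Allan variance
                adevs.append(np.sqrt(avar))
            return np.array(adevs)

        rng = np.random.default_rng(4)
        dt = 1.0                                    # seconds between samples
        t = np.arange(0, 20000) * dt
        y = 400.0 + 0.1 * rng.standard_normal(t.size) + 1e-5 * t   # white noise + linear drift
        taus = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500, 1000], dtype=float)
        for tau, adev in zip(taus, allan_deviation(y, dt, taus)):
            print(f"tau = {tau:6.0f} s  Allan deviation = {adev:.4f}")

    For pure white noise the Allan deviation keeps falling with averaging time; when drift dominates it levels off and rises again, and that turnover is what suggests the averaging time and calibration interval.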

  12. Parameter Tuning and Calibration of RegCM3 with MIT-Emanuel Cumulus Parameterization Scheme over CORDEX East Asian Domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Liwei; Qian, Yun; Zhou, Tianjun

    2014-10-01

    In this study, we calibrated the performance of the regional climate model RegCM3 with the Massachusetts Institute of Technology (MIT)-Emanuel cumulus parameterization scheme over the CORDEX East Asia domain by tuning seven selected parameters through the multiple very fast simulated annealing (MVFSA) sampling method. The seven parameters were selected based on previous studies, which customized RegCM3 with the MIT-Emanuel scheme in three different ways using sensitivity experiments. The responses of the model results to the seven parameters were investigated. Since the monthly total rainfall is constrained, the simulated spatial pattern of rainfall and the probability density function (PDF) distribution of daily rainfall rates are significantly improved in the optimal simulation. Sensitivity analysis suggests that the parameter "relative humidity criterion" (RH), which has not been considered in the default simulation, has the largest effect on the model results. The responses of total rainfall over different regions to RH were examined. Positive responses of total rainfall to RH are found over the northern equatorial western Pacific, which are contributed by the positive responses of explicit rainfall. Following an increase of RH, increases in the low-level convergence and the associated increases in cloud water favor an increase of the explicit rainfall. The identified optimal parameters constrained by the total rainfall have positive effects on the low-level circulation and the surface air temperature. Furthermore, the optimized parameters based on the extreme case are suitable for a normal case and for the model's new version with a mixed convection scheme.

  13. Optimization and validation of sample preparation for metagenomic sequencing of viruses in clinical samples.

    PubMed

    Lewandowska, Dagmara W; Zagordi, Osvaldo; Geissberger, Fabienne-Desirée; Kufner, Verena; Schmutz, Stefan; Böni, Jürg; Metzner, Karin J; Trkola, Alexandra; Huber, Michael

    2017-08-08

    Sequence-specific PCR is the most common approach for virus identification in diagnostic laboratories. However, as specific PCR only detects pre-defined targets, novel virus strains or viruses not included in routine test panels will be missed. Recently, advances in high-throughput sequencing have allowed virus-sequence-independent identification of entire virus populations in clinical samples, yet standardized protocols are needed to allow broad application in clinical diagnostics. Here, we describe a comprehensive sample preparation protocol for high-throughput metagenomic virus sequencing using random amplification of total nucleic acids from clinical samples. In order to optimize metagenomic sequencing for application in virus diagnostics, we tested different enrichment and amplification procedures on plasma samples spiked with RNA and DNA viruses. A protocol including filtration, nuclease digestion, and random amplification of RNA and DNA in separate reactions provided the best results, allowing reliable recovery of viral genomes and a good correlation of the relative number of sequencing reads with the virus input. We further validated our method by sequencing a multiplexed viral pathogen reagent containing a range of human viruses from different virus families. Our method proved successful in detecting the majority of the included viruses with high read numbers and compared well to other protocols in the field validated against the same reference reagent. Our sequencing protocol works not only with plasma but also with other clinical samples such as urine and throat swabs. The workflow for virus metagenomic sequencing that we established proved successful in detecting a variety of viruses in different clinical samples. Our protocol supplements existing virus-specific detection strategies, providing opportunities to identify atypical and novel viruses commonly not accounted for in routine diagnostic panels.

  14. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Consensus Classification Using Non-Optimized Classifiers.

    PubMed

    Brownfield, Brett; Lemos, Tony; Kalivas, John H

    2018-04-03

    Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available with some being complex, such as neural networks, and others are simple, such as k nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample are then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically classifying the sample by each method, that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments by three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars for three classes measured at 13 unique chemical and physical variables. In all cases, fusion of nonoptimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
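
    The fusion step mentioned above can be as simple as a majority vote over the labels produced by the individual, non-optimized classifiers; the Python fragment below is a generic sketch with placeholder labels, not the authors' code.

        # Fuse per-classifier class assignments for one sample by majority vote.
        from collections import Counter

        def majority_vote(labels):
            """labels: class assigned to one sample by each classifier (or tuning-parameter setting)."""
            return Counter(labels).most_common(1)[0][0]

        # one sample, classified by five different (non-optimized) classifiers
        per_classifier_labels = ["lager", "ale", "lager", "lager", "stout"]
        print(majority_vote(per_classifier_labels))      # -> "lager"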

  16. Palladium-based Mass-Tag Cell Barcoding with a Doublet-Filtering Scheme and Single Cell Deconvolution Algorithm

    PubMed Central

    Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.

    2015-01-01

    SUMMARY Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231

  17. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle the color image channels independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for color images is developed to recover edges and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.

  18. On optimal designs of transparent WDM networks with 1 + 1 protection leveraged by all-optical XOR network coding schemes

    NASA Astrophysics Data System (ADS)

    Dao, Thanh Hai

    2018-01-01

    Network coding techniques are seen as a new dimension for improving network performance thanks to their capability of utilizing network resources more efficiently. Indeed, the application of network coding to failure recovery in optical networks marks a major departure from traditional protection schemes, as it can potentially achieve both rapid recovery and capacity improvement, challenging the prevailing wisdom of trading capacity efficiency for recovery speed and vice versa. In this context, the maturing of all-optical XOR technologies appears to be a good match for the need for more efficient protection in transparent optical networks. In addressing this opportunity, we propose to use a practical all-optical XOR network coding scheme to leverage the conventional 1 + 1 optical path protection in transparent WDM optical networks. The network coding-assisted protection solution combines the protection flows of two demands sharing the same destination node in supportive conditions, paving the way for reducing the backup capacity. A novel mathematical model taking into account the operation of the new protection scheme for optimal network design is formulated as an integer linear program. Numerical results based on extensive simulations on realistic topologies, the COST239 and NSFNET networks, are presented to highlight the benefits of our proposal compared to the conventional approach in terms of wavelength resource efficiency and network throughput.
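
    The protection idea can be illustrated with a toy example: if a single protection flow carries the XOR of two demands that share a destination, either demand can be recovered from the protection flow and the other demand's working copy. The Python fragment below uses an electronic XOR purely as a stand-in for the all-optical XOR gate assumed in the paper.

        # Toy of XOR-coded 1 + 1 protection for two demands sharing a destination.
        def xor_bytes(x, y):
            return bytes(a ^ b for a, b in zip(x, y))

        payload_a = b"demand A payload"
        payload_b = b"demand B payload"
        protection_flow = xor_bytes(payload_a, payload_b)     # one shared backup stream

        # working path of A fails; destination still has B (working) and the protection flow
        recovered_a = xor_bytes(protection_flow, payload_b)
        assert recovered_a == payload_a
        print(recovered_a)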

  19. A Secure and Privacy-Preserving Navigation Scheme Using Spatial Crowdsourcing in Fog-Based VANETs

    PubMed Central

    Wang, Lingling; Liu, Guozhu; Sun, Lijun

    2017-01-01

    Fog-based VANETs (vehicular ad hoc networks) are a new paradigm of vehicular ad hoc networks with the advantages of both vehicular cloud and fog computing. Real-time navigation schemes based on fog-based VANETs can improve navigation performance efficiently. In this paper, we propose a secure and privacy-preserving navigation scheme using vehicular spatial crowdsourcing based on fog-based VANETs. Fog nodes are used to generate and release the crowdsourcing tasks, and cooperatively find the optimal route according to the real-time traffic information collected by vehicles in their coverage areas. Meanwhile, the vehicle performing the crowdsourcing task can get a reasonable reward. The querying vehicle can retrieve the navigation results from each fog node successively when entering its coverage area, and follow the optimal route to the next fog node until it reaches the desired destination. Our scheme fulfills the security and privacy requirements of authentication, confidentiality and conditional privacy preservation. Some cryptographic primitives, including the Elgamal encryption algorithm, AES, randomized anonymous credentials and group signatures, are adopted to achieve this goal. Finally, we analyze the security and the efficiency of the proposed scheme. PMID:28338620

  20. A Secure and Privacy-Preserving Navigation Scheme Using Spatial Crowdsourcing in Fog-Based VANETs.

    PubMed

    Wang, Lingling; Liu, Guozhu; Sun, Lijun

    2017-03-24

    Fog-based VANETs (vehicular ad hoc networks) are a new paradigm of vehicular ad hoc networks with the advantages of both vehicular cloud and fog computing. Real-time navigation schemes based on fog-based VANETs can improve navigation performance efficiently. In this paper, we propose a secure and privacy-preserving navigation scheme using vehicular spatial crowdsourcing based on fog-based VANETs. Fog nodes are used to generate and release the crowdsourcing tasks, and cooperatively find the optimal route according to the real-time traffic information collected by vehicles in their coverage areas. Meanwhile, the vehicle performing the crowdsourcing task can get a reasonable reward. The querying vehicle can retrieve the navigation results from each fog node successively when entering its coverage area, and follow the optimal route to the next fog node until it reaches the desired destination. Our scheme fulfills the security and privacy requirements of authentication, confidentiality and conditional privacy preservation. Some cryptographic primitives, including the Elgamal encryption algorithm, AES, randomized anonymous credentials and group signatures, are adopted to achieve this goal. Finally, we analyze the security and the efficiency of the proposed scheme.

  1. Estimating effect of environmental contaminants on women's subfecundity for the MoBa study data with an outcome-dependent sampling scheme

    PubMed Central

    Ding, Jieli; Zhou, Haibo; Liu, Yanyan; Cai, Jianwen; Longnecker, Matthew P.

    2014-01-01

    Motivated by the need arising from our ongoing environmental study in the Norwegian Mother and Child Cohort (MoBa) study, we consider an outcome-dependent sampling (ODS) scheme for failure-time data with censoring. Like the case-cohort design, the ODS design enriches the observed sample by selectively including certain failure subjects. We present an estimated maximum semiparametric empirical likelihood estimation (EMSELE) under the proportional hazards model framework. The asymptotic properties of the proposed estimator were derived. Simulation studies were conducted to evaluate the small-sample performance of our proposed method. Our analyses show that the proposed estimator and design are more efficient than the current default approach and other competing approaches. Applying the proposed approach to the data set from the MoBa study, we found a significant effect of an environmental contaminant on fecundability. PMID:24812419

  2. The modified scheme of optimized in simulations Cherenkov type high-power microwave oscillator without guiding magnetic field

    NASA Astrophysics Data System (ADS)

    Guo, Li M.; Shu, T.; Li, Zhi Q.; Ju, Jin C.

    2017-12-01

    The compactness and miniaturization of high-power microwave (HPM) systems are drawing more and more attention. Based on this demand, HPM generators without a guiding magnetic field are being developed. This paper presents an X-band Cherenkov-type HPM oscillator without a guiding magnetic field. In particle-in-cell simulations, this oscillator achieves an efficiency of 40%. When the diode voltage and current are 620 kV and 9.0 kA, respectively, a TEM-mode microwave is generated with a power of 2.2 GW and a frequency of 9.1 GHz. In this oscillator, electrons are modulated in both the longitudinal and radial directions, and the radial modulation has a significant effect on the energy conversion efficiency. As analyzed in this paper, the different radial modulation effects depend on the phase matching differences between the microwave and the electrons. The modified scheme achieves, in simulations, a structure with efficient longitudinal beam-wave interaction and optimized radial modulation.

  3. Optimal number of features as a function of sample size for various classification rules.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R

    2005-04-15

    Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study are considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as a resource for those working with small-sample classification. For the companion website, please
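
    The peaking behaviour described above can be reproduced with a small simulation; the sketch below (not the paper's setup) uses a nearest-centroid rule on synthetic Gaussian data in which only the first few features are informative, so the designed-classifier error first falls and then rises as noise features are added.

        # Peaking phenomenon: designed-classifier error versus number of features at fixed sample size.
        import numpy as np

        rng = np.random.default_rng(5)
        n_train, n_test, n_informative, n_repeats = 20, 2000, 5, 50

        def class_means(d):
            mu = np.zeros(d)
            mu[:min(d, n_informative)] = 0.8        # only the first few features separate the classes
            return -mu, mu

        for d in (2, 5, 10, 20, 40, 60):
            errs = []
            for _ in range(n_repeats):
                mu0, mu1 = class_means(d)
                x0 = rng.standard_normal((n_train // 2, d)) + mu0
                x1 = rng.standard_normal((n_train // 2, d)) + mu1
                c0, c1 = x0.mean(axis=0), x1.mean(axis=0)      # designed classifier: estimated centroids
                xt = np.vstack([rng.standard_normal((n_test, d)) + mu0,
                                rng.standard_normal((n_test, d)) + mu1])
                yt = np.r_[np.zeros(n_test), np.ones(n_test)]
                pred = (np.linalg.norm(xt - c1, axis=1) < np.linalg.norm(xt - c0, axis=1)).astype(float)
                errs.append(np.mean(pred != yt))
            print(f"{d:3d} features: error = {np.mean(errs):.3f}")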

  4. Evaluating noninvasive genetic sampling techniques to estimate large carnivore abundance.

    PubMed

    Mumma, Matthew A; Zieminski, Chris; Fuller, Todd K; Mahoney, Shane P; Waits, Lisette P

    2015-09-01

    Monitoring large carnivores is difficult because of intrinsically low densities and can be dangerous if physical capture is required. Noninvasive genetic sampling (NGS) is a safe and cost-effective alternative to physical capture. We evaluated the utility of two NGS methods (scat detection dogs and hair sampling) to obtain genetic samples for abundance estimation of coyotes, black bears and Canada lynx in three areas of Newfoundland, Canada. We calculated abundance estimates using program capwire, compared sampling costs, and the cost/sample for each method relative to species and study site, and performed simulations to determine the sampling intensity necessary to achieve abundance estimates with coefficients of variation (CV) of <10%. Scat sampling was effective for both coyotes and bears and hair snags effectively sampled bears in two of three study sites. Rub pads were ineffective in sampling coyotes and lynx. The precision of abundance estimates was dependent upon the number of captures/individual. Our simulations suggested that ~3.4 captures/individual will result in a < 10% CV for abundance estimates when populations are small (23-39), but fewer captures/individual may be sufficient for larger populations. We found scat sampling was more cost-effective for sampling multiple species, but suggest that hair sampling may be less expensive at study sites with limited road access for bears. Given the dependence of sampling scheme on species and study site, the optimal sampling scheme is likely to be study-specific warranting pilot studies in most circumstances. © 2015 John Wiley & Sons Ltd.

  5. PSO-tuned PID controller for coupled tank system via priority-based fitness scheme

    NASA Astrophysics Data System (ADS)

    Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal

    2015-05-01

    The industrial applications of the Coupled Tank System (CTS) are widespread, especially in chemical process industries. The overall process requires liquids to be pumped, stored in a tank and pumped again to another tank. Nevertheless, the level of liquid in each tank needs to be controlled and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two methods of the Particle Swarm Optimization (PSO) algorithm are tested in optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness Scheme in Particle Swarm Optimization (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). It has been demonstrated that the implementation of PSO via the Priority-based Fitness Scheme (PFPSO) is a potential technique to control the desired liquid level and improve the system performance compared with standard PSO.
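
    The abstract does not give the PSO update equations, the priority-based fitness scheme, or the tank model; the sketch below uses the standard global-best PSO velocity/position update and a toy first-order single-tank level model as stand-ins, tuning the PID gains against an integral-of-absolute-error cost. Model constants, gain bounds and PSO parameters are assumptions.

```python
# Sketch: standard global-best PSO tuning PID gains (Kp, Ki, Kd) on a toy
# first-order tank level model. Model and PSO constants are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def step_cost(gains, dt=0.1, t_end=50.0, setpoint=1.0):
    kp, ki, kd = gains
    level, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(int(t_end / dt)):
        err = setpoint - level
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = np.clip(kp * err + ki * integ + kd * deriv, 0.0, 5.0)  # pump command
        level += dt * (u - 0.5 * level)   # toy tank: inflow u, outflow ~ level
        cost += abs(err) * dt             # IAE cost
        prev_err = err
    return cost

n_particles, n_iter, dim = 20, 60, 3
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration constants (assumed)
pos = rng.uniform(0.0, 5.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_cost = pos.copy(), np.array([step_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 5.0)
    cost = np.array([step_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("tuned PID gains (Kp, Ki, Kd):", gbest)
```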

  6. Characterizing the optimal flux space of genome-scale metabolic reconstructions through modified latin-hypercube sampling.

    PubMed

    Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek

    2016-03-01

    Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) by providing a more uniform coverage for a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device to validate the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus to improve them.
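
    For orientation, the two building blocks named above (Latin-hypercube sampling and PCA) can be combined in a few lines. The sketch below is generic: it samples a box-bounded region with scipy's Latin hypercube sampler and extracts the dominant directions of variability with PCA. The flux bounds are placeholders, and the paper's modified LHS that borders the FBA-optimal space is not reproduced.

```python
# Sketch: Latin-hypercube sampling of a box-bounded flux region followed by PCA
# to identify dominant directions of variability. Bounds are illustrative.
import numpy as np
from scipy.stats import qmc

n_fluxes, n_samples = 50, 500
lower = np.zeros(n_fluxes)               # assumed flux bounds
upper = np.ones(n_fluxes) * 10.0

sampler = qmc.LatinHypercube(d=n_fluxes, seed=0)
unit = sampler.random(n=n_samples)       # samples in [0, 1)^d
fluxes = qmc.scale(unit, lower, upper)   # rescale to flux bounds

# PCA via SVD of the centred sample matrix
centred = fluxes - fluxes.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 7 components:", explained[:7].sum())
```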

  7. Multiobjective design of aquifer monitoring networks for optimal spatial prediction and geostatistical parameter estimation

    NASA Astrophysics Data System (ADS)

    Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.

    2013-06-01

    Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. Results indicate that
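
    To make the data assimilation component above concrete, the sketch below shows one stochastic ensemble Kalman filter analysis step of the generic kind used to condition parameter/state ensembles on new measurements. The state size, observation operator, error statistics and data are all placeholders; the study's coupling to the MOEA and its specific K/H parameterization are not reproduced.

```python
# Sketch: one stochastic EnKF analysis step with perturbed observations.
import numpy as np

rng = np.random.default_rng(2)
n_state, n_obs, n_ens = 100, 5, 50

X = rng.normal(size=(n_state, n_ens))                  # forecast ensemble (one member per column)
obs_idx = np.linspace(0, n_state - 1, n_obs, dtype=int)
H = np.zeros((n_obs, n_state)); H[np.arange(n_obs), obs_idx] = 1.0   # point observations
R = 0.1 * np.eye(n_obs)                                # observation error covariance
y = rng.normal(size=n_obs)                             # new measurements (placeholder)

A = X - X.mean(axis=1, keepdims=True)                  # ensemble anomalies
P = A @ A.T / (n_ens - 1)                              # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain

Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed observations
Xa = X + K @ (Y - H @ X)                               # analysis ensemble
print("mean analysis spread:", Xa.std(axis=1).mean())
```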

  8. An Improved Evolutionary Programming with Voting and Elitist Dispersal Scheme

    NASA Astrophysics Data System (ADS)

    Maity, Sayan; Gunjan, Kumar; Das, Swagatam

    Although initially conceived for evolving finite state machines, Evolutionary Programming (EP), in its present form, is largely used as a powerful real-parameter optimizer. For function optimization, EP mainly relies on its mutation operators. Over the past few years several mutation operators have been proposed to improve the performance of EP on a wide variety of numerical benchmarks. However, unlike real-coded GAs, there has been no fitness-induced bias in parent selection for mutation in EP; that is, the i-th population member is selected deterministically for mutation and creation of the i-th offspring in each generation. In this article we present an improved EP variant called Evolutionary Programming with Voting and Elitist Dispersal (EPVE). The scheme encompasses a voting process which not only gives importance to the best solutions but also considers those solutions that are converging fast. By introducing the Elitist Dispersal Scheme we maintain elitism by keeping the potential solutions intact while the other solutions are perturbed so that they can escape local minima. By applying these two techniques we are able to explore regions not visited so far that may contain optima. Comparison with the recent and best-known versions of EP over 25 benchmark functions from the CEC (Congress on Evolutionary Computation) 2005 test suite for real-parameter optimization reflects the superiority of the new scheme in terms of final accuracy, speed, and robustness.

  9. Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability

    DTIC Science & Technology

    2015-07-01

    12th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP12), Vancouver, Canada, July 12-15, 2015. Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability. Marwan M. Harajli, Graduate Student, Dept. of Civil and Environ... criterion is usually the failure probability. In this paper, we examine the buffered failure probability as an attractive alternative to the failure probability.
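
    Only a fragment of this abstract survives, so the sketch below shows the standard importance-sampling estimator of a small failure probability with a mean-shifted Gaussian proposal, for orientation only. The limit-state function and the proposal shift are illustrative; the buffered failure probability studied in the paper is a different (CVaR-based) quantity and is not implemented here.

```python
# Sketch: importance sampling of a small failure probability P[g(X) <= 0]
# with a mean-shifted Gaussian proposal. Toy limit-state, assumed parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 20_000
beta = 3.5                               # assumed reliability index of the toy problem

def g(x):                                # toy limit-state: failure when g <= 0
    return beta - x

# crude Monte Carlo
x_mc = rng.standard_normal(n)
p_mc = np.mean(g(x_mc) <= 0.0)

# importance sampling with proposal N(beta, 1) centred near the failure region
x_is = rng.normal(loc=beta, scale=1.0, size=n)
w = stats.norm.pdf(x_is) / stats.norm.pdf(x_is, loc=beta)   # likelihood ratios
p_is = np.mean((g(x_is) <= 0.0) * w)

print(f"crude MC: {p_mc:.2e}, importance sampling: {p_is:.2e}, "
      f"exact: {stats.norm.cdf(-beta):.2e}")
```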

  10. A second order derivative scheme based on Bregman algorithm class

    NASA Astrophysics Data System (ADS)

    Campagna, Rosanna; Crisci, Serena; Cuomo, Salvatore; Galletti, Ardelio; Marcellino, Livia

    2016-10-01

    The algorithms based on the Bregman iterative regularization are known for efficiently solving convex constrained optimization problems. In this paper, we introduce a second-order derivative scheme for the class of Bregman algorithms. Its convergence and stability properties are investigated by means of numerical evidence. Moreover, we apply the proposed scheme to an isotropic Total Variation (TV) problem arising in Magnetic Resonance Image (MRI) denoising. Experimental results confirm that our algorithm has good performance in terms of denoising quality, effectiveness and robustness.

  11. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, M; Li, R; Xing, L

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the apertures and updating the beam angles along the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose are reduced by 10%, 7%, 24% and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of the column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly

  12. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
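
    The MCMC machinery referred to above can be illustrated with a minimal random-walk Metropolis sampler for a process-rate parameter vector given pseudo-observations. The forward model, prior, noise level and step size below are placeholders; this is not the BOSS scheme itself, only the generic sampler it builds on.

```python
# Sketch: random-walk Metropolis sampling of a process-rate parameter vector theta.
# Forward model, prior and tuning constants are placeholders (assumptions).
import numpy as np

rng = np.random.default_rng(4)

def forward_model(theta, t):
    a, b = theta                      # placeholder power-law process-rate model
    return a * t**b

t_obs = np.linspace(0.1, 1.0, 20)
y_obs = forward_model([2.0, 1.5], t_obs) + rng.normal(0, 0.05, t_obs.size)
sigma = 0.05                          # assumed observation error

def log_post(theta):
    if np.any(np.asarray(theta) <= 0):          # flat prior on theta > 0 (assumed)
        return -np.inf
    resid = y_obs - forward_model(theta, t_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain, step = [], 0.05
for _ in range(20_000):
    prop = theta + rng.normal(0, step, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain)[5_000:]                 # discard burn-in
print("posterior means:", chain.mean(axis=0))
```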

  13. Performance of laboratories analysing welding fume on filter samples: results from the WASP proficiency testing scheme.

    PubMed

    Stacey, Peter; Butler, Owen

    2008-06-01

    This paper emphasizes the need for occupational hygiene professionals to require evidence of the quality of welding fume data from analytical laboratories. The measurement of metals in welding fume using atomic spectrometric techniques is a complex analysis often requiring specialist digestion procedures. The results from a trial programme testing the proficiency of laboratories in the Workplace Analysis Scheme for Proficiency (WASP) to measure potentially harmful metals in several different types of welding fume showed that most laboratories underestimated the mass of analyte on the filters. The average recovery was 70-80% of the target value and >20% of reported recoveries for some of the more difficult welding fume matrices were <50%. This level of under-reporting has significant implications for any health or hygiene studies of the exposure of welders to toxic metals for the types of fumes included in this study. Good laboratory performance in measuring spiked WASP filter samples containing soluble metal salts did not guarantee good performance when measuring the more complex welding fume trial filter samples. Consistent rather than erratic error predominated, suggesting that the main analytical factor contributing to the differences between the target values and the results was the effectiveness of the sample preparation procedures used by participating laboratories. It is concluded that, with practice and regular participation in WASP, performance can improve over time.

  14. Optimization and Control of Cyber-Physical Vehicle Systems

    PubMed Central

    Bradley, Justin M.; Atkins, Ella M.

    2015-01-01

    A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined. PMID:26378541

  15. Optimization and Control of Cyber-Physical Vehicle Systems.

    PubMed

    Bradley, Justin M; Atkins, Ella M

    2015-09-11

    A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined.

  16. Multi-level optimization of a beam-like space truss utilizing a continuum model

    NASA Technical Reports Server (NTRS)

    Yates, K.; Gurdal, Z.; Thangjitham, S.

    1992-01-01

    A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.

  17. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

    In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show the application of our proposed scheme. PMID:27136550

  18. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    PubMed

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
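
    The abstract does not give the authors' new functional form; for orientation, the sketch below evaluates the conventional EDS reference-state energy, a log-sum-exp "soft minimum" over the end-state energies with smoothness parameter s and per-state energy offsets, which the accelerated variant in this paper modifies. All numerical values are illustrative assumptions.

```python
# Sketch: conventional EDS reference-state energy (log-sum-exp over end states).
# The accelerated EDS construction of the paper is different and not shown here.
import numpy as np
from scipy.special import logsumexp

def eds_reference_energy(V, E_off, beta, s):
    """V: end-state energies at one configuration; E_off: energy offsets."""
    V, E_off = np.asarray(V), np.asarray(E_off)
    return -logsumexp(-beta * s * (V - E_off)) / (beta * s)

# Example with two end states: small s flattens barriers between the states,
# s = 1 approaches a soft minimum of the offset end-state energies.
beta = 1.0 / (0.0083145 * 300.0)     # 1/(k_B T) in mol/kJ at 300 K
print(eds_reference_energy([10.0, 25.0], [0.0, 12.0], beta, s=0.05))
```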

  19. Quantum money with nearly optimal error tolerance

    NASA Astrophysics Data System (ADS)

    Amiri, Ryan; Arrazola, Juan Miguel

    2017-06-01

    We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and they tolerate noise up to 23%, which we conjecture reaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Last, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.

  20. Towards a true aerosol-and-cloud retrieval scheme

    NASA Astrophysics Data System (ADS)

    Thomas, Gareth; Poulsen, Caroline; Povey, Adam; McGarragh, Greg; Jerg, Matthias; Siddans, Richard; Grainger, Don

    2014-05-01

    The Optimal Retrieval of Aerosol and Cloud (ORAC) - formerly the Oxford-RAL Aerosol and Cloud retrieval - offers a framework that can provide consistent and well characterised properties of both aerosols and clouds from a range of imaging satellite instruments. However, several practical issues stand in the way of achieving the potential of this combined scheme; in particular, the sometimes conflicting priorities and requirements of aerosol and cloud retrieval problems, and the question of the unambiguous identification of aerosol and cloud pixels. This presentation will present recent developments made to the ORAC scheme for both aerosol and cloud, and detail how these are being integrated into a single retrieval framework. The implementation of a probabilistic method for pixel identification will also be presented, for both cloud detection and aerosol/cloud type selection. The method is based on Bayesian methods applied to the optimal estimation retrieval output of ORAC and is particularly aimed at providing additional information in the so-called "twilight zone", where pixels cannot be unambiguously identified as either aerosol or cloud and traditional cloud or aerosol products do not provide results.

  1. Neuro-genetic system for optimization of GMI samples sensitivity.

    PubMed

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Numerical pricing of options using high-order compact finite difference schemes

    NASA Astrophysics Data System (ADS)

    Tangman, D. Y.; Gopaul, A.; Bhuruth, M.

    2008-09-01

    We consider high-order compact (HOC) schemes for quasilinear parabolic partial differential equations to discretise the Black-Scholes PDE for the numerical pricing of European and American options. We show that for the heat equation with smooth initial conditions, the HOC schemes attain clear fourth-order convergence but fail if non-smooth payoff conditions are used. To restore the fourth-order convergence, we use a grid stretching that concentrates grid nodes at the strike price for European options. For an American option, an efficient procedure is also described to compute the option price, Greeks and the optimal exercise curve. Comparisons with a fourth-order non-compact scheme are also done. However, fourth-order convergence is not experienced with this strategy. To improve the convergence rate for American options, we discuss the use of a front-fixing transformation with the HOC scheme. We also show that the HOC scheme with grid stretching along the asset price dimension gives accurate numerical solutions for European options under stochastic volatility.

  3. Energy management of three-dimensional minimum-time intercept. [for aircraft flight optimization

    NASA Technical Reports Server (NTRS)

    Kelley, H. J.; Cliff, E. M.; Visser, H. G.

    1985-01-01

    A real-time computer algorithm to control and optimize aircraft flight profiles is described and applied to a three-dimensional minimum-time intercept mission. The proposed scheme has roots in two well known techniques: singular perturbations and neighboring-optimal guidance. Use of singular-perturbation ideas is made in terms of the assumed trajectory-family structure. A heading/energy family of prestored point-mass-model state-Euler solutions is used as the baseline in this scheme. The next step is to generate a near-optimal guidance law that will transfer the aircraft to the vicinity of this reference family. The control commands fed to the autopilot (bank angle and load factor) consist of the reference controls plus correction terms which are linear combinations of the altitude and path-angle deviations from reference values, weighted by a set of precalculated gains. In this respect the proposed scheme resembles neighboring-optimal guidance. However, in contrast to the neighboring-optimal guidance scheme, the reference control and state variables as well as the feedback gains are stored as functions of energy and heading in the present approach. Some numerical results comparing open-loop optimal and approximate feedback solutions are presented.

  4. Design and Analysis of a Dynamic Mobility Management Scheme for Wireless Mesh Network

    PubMed Central

    Roy, Sudipta

    2013-01-01

    Seamless mobility management of the mesh clients (MCs) in wireless mesh network (WMN) has drawn a lot of attention from the research community. A number of mobility management schemes such as mesh network with mobility management (MEMO), mesh mobility management (M3), and wireless mesh mobility management (WMM) have been proposed. The common problem with these schemes is that they impose uniform criteria on all the MCs for sending route update message irrespective of their distinct characteristics. This paper proposes a session-to-mobility ratio (SMR) based dynamic mobility management scheme for handling both internet and intranet traffic. To reduce the total communication cost, this scheme considers each MC's session and mobility characteristics by dynamically determining optimal threshold SMR value for each MC. A numerical analysis of the proposed scheme has been carried out. Comparison with other schemes shows that the proposed scheme outperforms MEMO, M3, and WMM with respect to total cost. PMID:24311982

  5. A Miniaturized Spectrometer for Optimized Selection of Subsurface Samples for Future MSR Missions

    NASA Astrophysics Data System (ADS)

    De Sanctis, M. C.; Altieri, F.; De Angelis, S.; Ferrari, M.; Frigeri, A.; Biondi, D.; Novi, S.; Antonacci, F.; Gabrieli, R.; Paolinetti, R.; Villa, F.; Ammannito, A.; Mugnuolo, R.; Pirrotta, S.

    2018-04-01

    We present the concept of a miniaturized spectrometer based on the ExoMars2020/Ma_MISS experiment. Coupled with a drill tool, it will allow an assessment of subsurface composition and optimize the selection of martian samples with a high astrobiological potential.

  6. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  7. A robust and high precision optimal explicit guidance scheme for solid motor propelled launch vehicles with thrust and drag uncertainty

    NASA Astrophysics Data System (ADS)

    Maity, Arnab; Padhi, Radhakant; Mallaram, Sanjeev; Mallikarjuna Rao, G.; Manickavasagam, M.

    2016-10-01

    A new nonlinear optimal and explicit guidance law is presented in this paper for launch vehicles propelled by solid motors. It can ensure very high terminal precision despite not having exact knowledge of the thrust-time curve a priori. This work was motivated by its use for a carrier launch vehicle in a hypersonic mission, which demands an extremely narrow terminal accuracy window for the launch vehicle for successful initiation of operation of the hypersonic vehicle. The proposed explicit guidance scheme, which computes the optimal guidance command online, ensures the required stringent final conditions with high precision at the injection point. A key feature of the proposed guidance law is an innovative extension of the recently developed model predictive static programming guidance with flexible final time. A penalty function approach is also followed to meet the input and output inequality constraints throughout the vehicle trajectory. In this paper, the guidance law has been successfully validated in nonlinear six-degree-of-freedom simulation studies by designing an inner-loop autopilot as well, which significantly enhances confidence in its usefulness. In addition to excellent nominal results, the proposed guidance has been found to be robust for perturbed cases as well.

  8. Microbiological assessment along the fish production chain of the Norwegian pelagic fisheries sector--Results from a spot sampling programme.

    PubMed

    Svanevik, Cecilie Smith; Roiha, Irja Sunde; Levsen, Arne; Lunestad, Bjørn Tore

    2015-10-01

    Microbes play an important role in the degradation of fish products, thus better knowledge of the microbiological conditions throughout the fish production chain may help to optimise product quality and resource utilisation. This paper presents the results of a ten-year spot sampling programme (2005-2014) of the commercially most important pelagic fish species harvested in Norway. Fish, surface, and storage water samples were collected from fishing vessels and processing factories. In total, 1,181 samples were assessed with respect to microbiological quality, hygiene and food safety. We introduce a quality and safety assessment scheme for fresh pelagic fish recommending limits for heterotrophic plate counts (HPC), thermotolerant coliforms, enterococci and Listeria monocytogenes. According to the scheme, in 25 of 41 samplings, sub-optimal conditions were found with respect to quality, whereas in 21 and 9 samplings, samples were not in compliance concerning hygiene and food safety, respectively. The present study has revealed that the quality of pelagic fish can be optimised by improving the hygiene conditions at some critical points in an early phase of the production chain. Thus, the proposed assessment scheme may provide a useful tool for the industry to optimise quality and maintain consumer safety of pelagic fishery products. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Optimal flight trajectories in the presence of windshear, 1984-86

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1986-01-01

    Optimal flight trajectories were determined in the presence of windshear and guidance schemes were developed for near-optimum flight in a windshear. Windshear is characterized by a sharp change in wind intensity and direction over a relatively small region of space. This problem is important in the takeoff and landing of both civilian and military airplanes and is key to aircraft safety. The topics covered in reference to takeoff problems are: equations of motion, problem formulation, algorithms, optimal flight trajectories, advanced guidance schemes, simplified guidance schemes, and piloting strategies.

  10. [Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].

    PubMed

    Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang

    2011-12-01

    To investigate the method of "multi-activity-index evaluation and combination optimization of multiple components" for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, an evolutionary optimization algorithm, and a validation experiment, we optimized the combination of Jiangzhi granules based on the activity indexes of serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissue, and the ratio of liver tissue to body weight. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) was more reasonable and objective for multi-activity-index evaluation, as it reflected both the ordering information of the activity indexes and the objective sample data. The LASSO model accurately captured the relationship between different combinations of Jiangzhi granules and the comprehensive activity indexes. The optimized combination of Jiangzhi granules showed better comprehensive activity index values than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.

  11. An explicit scheme for ohmic dissipation with smoothed particle magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Yusuke; Iwasaki, Kazunari; Inutsuka, Shu-ichiro

    2013-09-01

    In this paper, we present an explicit scheme for Ohmic dissipation with smoothed particle magnetohydrodynamics (SPMHD). We propose an SPH discretization of Ohmic dissipation and solve the Ohmic dissipation part of the induction equation with the super-time-stepping method (STS), which allows us to take a longer time step than the Courant-Friedrichs-Lewy stability condition. Our scheme is second-order accurate in space and first-order accurate in time. Our numerical experiments show that the optimal choice of the STS parameters for Ohmic dissipation in SPMHD is νsts ˜ 0.01 and Nsts ˜ 5.
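
    As a rough illustration of the STS idea, the sketch below evaluates the commonly used substep formula (in the form attributed to Alexiades et al.) for a damping parameter ν and N substeps per superstep, plugging in the ν ~ 0.01 and N ~ 5 quoted above; whether this exact form matches the paper's implementation is an assumption, and the explicit diffusive step dt_expl is a placeholder.

```python
# Sketch: super-time-stepping (STS) substeps, commonly used form; the superstep
# covers more physical time than N explicit (CFL-limited) diffusive steps.
import numpy as np

def sts_substeps(dt_expl, nu, N):
    j = np.arange(1, N + 1)
    return dt_expl / ((nu - 1.0) * np.cos((2.0 * j - 1.0) * np.pi / (2.0 * N))
                      + 1.0 + nu)

dt_expl = 1.0e-3          # explicit (CFL-limited) diffusive time step, assumed
tau = sts_substeps(dt_expl, nu=0.01, N=5)
print("superstep length:", tau.sum(), "vs N*dt_expl =", 5 * dt_expl)
```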

  12. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2017-04-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selection of an optimal subset of features from a large number of available features in a given problem domain is a difficult search problem: for n features, the total number of possible feature subsets is 2^n, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from the benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier; a support vector machine is used as the classifier. From the experimental results, it is also observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
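
    The basic set-up described above (a binary feature mask evolved against a classifier's cross-validated accuracy) can be sketched as follows. This is a generic binary GA on synthetic data in the spirit of the GA variant compared in the paper; the data generator, GA constants and classifier settings are all illustrative assumptions, not the study's configuration.

```python
# Sketch: binary-encoded GA selecting a feature subset, with cross-validated
# linear-SVM accuracy as the fitness. Synthetic stand-in data (380 x 50).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=380, n_features=50, n_informative=10,
                           random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="linear"), X[:, mask.astype(bool)], y, cv=3).mean()

pop_size, n_gen, p_mut = 20, 15, 0.02
pop = rng.integers(0, 2, (pop_size, X.shape[1]))
for _ in range(n_gen):
    fit = np.array([fitness(ind) for ind in pop])
    # binary tournament selection
    parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: fit[i])
                   for _ in range(pop_size)]]
    # single-point crossover
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, X.shape[1])
        children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
    # bit-flip mutation
    flip = rng.random(children.shape) < p_mut
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```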

  13. The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes

    NASA Technical Reports Server (NTRS)

    Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark

    2000-01-01

    Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log10(L_[OII]/W) ≳ 35 [or radio luminosity log10(L_151/(W/Hz/sr)) ≳ 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle θ_trans ≈ 53°. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in θ_trans and/or a gradual increase in the fraction of lightly-reddened (0 ≲ A_V ≲ 5) lines-of-sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low-luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.

  14. Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.

    PubMed

    Hu, Sudeng; Wang, Hanli; Kwong, Sam

    2012-04-01

    In this paper, we investigate the issues of quality smoothness and bit-rate smoothness during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (Q(p)) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal Q(p) clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed Q(p) clip scheme can achieve excellent performance in quality smoothness and buffer regulation.

  15. Approximate optimal guidance for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Feeley, T. S.; Speyer, J. L.

    1993-01-01

    A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton - Jacobi - Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of Hamilton - Jacobi - Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme to alternative numerical iterative optimization schemes because of the unreliable convergence properties of these iterative guidance schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and by parallel processing. Even if the approximate solution is not nearly optimal, when using this technique the zeroth-order solution always provides a path which satisfies the terminal constraints. Results for two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method which is an iterative second-order technique.

  16. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  17. Data traffic reduction schemes for sparse Cholesky factorizations

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1988-01-01

    Load distribution schemes are presented which minimize the total data traffic in the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems with local and shared memory. The total data traffic in factoring an n x n sparse, symmetric, positive definite matrix representing an n-vertex regular 2-D grid graph using n^alpha (alpha <= 1) processors is shown to be O(n^(1 + alpha/2)); it is O(n^(3/2)) when n^alpha (alpha >= 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal. The schemes allow efficient use of up to O(n) processors before the total data traffic reaches the maximum value of O(n^(3/2)). The partitioning employed within the scheme allows a better utilization of the data accessed from shared memory than that of previously published methods.

  18. Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.

    PubMed

    Singh, Sanjeet

    2016-08-01

    The Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme in India for poverty alleviation through rural employment generation. This paper aims to evaluate and rank the performance of the states in India under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of states in India. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilized their resources and generated outputs during the financial year 2013-14. The relative performance evaluation has been made under the assumption of constant returns to scale and also under variable returns to scale to assess the impact of scale on performance. The results indicate that the main sources of inefficiency are both the technical and the managerial practices adopted. Eleven states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure technical or managerially efficient. It has been found that for some states it is necessary to alter the scheme size to perform at par with the best-performing states. For inefficient states, optimal input and output targets along with the resource savings and output gains are calculated. The analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure and a total amount of $780 million could have been saved in a single year. Most of the inefficient states perform poorly when it comes to the participation of women and disadvantaged sections (SC&ST) in the scheme. In order to catch up with the performance of the best-performing states, inefficient states on average need to enhance

  19. A joint precoding scheme for indoor downlink multi-user MIMO VLC systems

    NASA Astrophysics Data System (ADS)

    Zhao, Qiong; Fan, Yangyu; Kang, Bochao

    2017-11-01

    In this study, we aim to improve the system performance and reduce the implementation complexity of precoding schemes for visible light communication (VLC) systems. By incorporating the power-method algorithm and the block diagonalization (BD) algorithm, we propose a joint precoding scheme for indoor downlink multi-user multi-input-multi-output (MU-MIMO) VLC systems. In this scheme, we first apply the BD algorithm to eliminate the co-channel interference (CCI) among users. Second, the power-method algorithm is used to search for the precoding weight of each user based on the optimality criterion of signal to interference plus noise ratio (SINR) maximization. Finally, the optical power restrictions of VLC systems are taken into account to constrain the precoding weight matrix. Comprehensive computer simulations in two scenarios indicate that the proposed scheme always has better bit error rate (BER) performance and lower computational complexity than the traditional scheme.
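
    The BD step mentioned above can be sketched generically: each user's precoder is drawn from the null space of the other users' stacked channel matrices, which removes the inter-user interference by construction. The dimensions and random channels below are illustrative assumptions; the paper's power-method SINR optimization and the optical power constraints are omitted.

```python
# Sketch: block-diagonalization (BD) precoding for a toy multi-user MIMO downlink.
import numpy as np

rng = np.random.default_rng(6)
n_tx, n_rx, n_users = 8, 2, 3
H = [rng.normal(size=(n_rx, n_tx)) for _ in range(n_users)]   # per-user channels

precoders = []
for k in range(n_users):
    H_others = np.vstack([H[j] for j in range(n_users) if j != k])
    # right singular vectors beyond the rank of H_others span its null space
    _, s, vt = np.linalg.svd(H_others)
    null_basis = vt[H_others.shape[0]:].T        # columns: null-space directions
    precoders.append(null_basis[:, :n_rx])       # keep n_rx streams per user

# interference check: the other users' channels see (numerically) zero signal
leak = max(np.abs(H[j] @ precoders[k]).max()
           for k in range(n_users) for j in range(n_users) if j != k)
print("max residual inter-user leakage:", leak)
```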

  20. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    PubMed

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  1. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    Life test sampling plan is a technique, which consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time, and reduce testing cost; but it also can positively affect the image of the product, and thus attract more consumers to buy it. This paper develops the frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.

  2. Multiple crack detection in 3D using a stable XFEM and global optimization

    NASA Astrophysics Data System (ADS)

    Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.

    2018-02-01

    A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.

  3. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within a 3% difference in fluence map and a 1% difference in dose from the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation
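
    The alternating structure described above can be sketched as follows (Python); `mc_dose` and `optimize_fluence` are hypothetical placeholders for the GPU dose engine and the fluence-map optimizer, and the particle budgets are illustrative values, not those of the paper.

      # Sketch of the iterative MC/optimization loop: beamlets with small
      # optimized intensities receive fewer particles in the next MC pass.
      import numpy as np

      def iterative_mc_plan(n_beamlets, mc_dose, optimize_fluence,
                            total_particles=1e7, n_iters=5, floor=1e3):
          particles = np.full(n_beamlets, total_particles / n_beamlets)
          intensities = np.ones(n_beamlets)
          for _ in range(n_iters):
              doses = mc_dose(particles)              # hypothetical MC dose engine
              intensities = optimize_fluence(doses)   # hypothetical plan optimizer
              weights = np.clip(intensities, 0, None)
              if weights.sum() == 0:
                  weights = np.ones(n_beamlets)
              weights = weights / weights.sum()
              # Re-allocate the particle budget, keeping a minimum per beamlet.
              particles = np.maximum(weights * total_particles, floor)
          return intensities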

  4. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    PubMed

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.
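
    The weighting idea behind such biased-sampling corrections can be illustrated with a minimal sketch (Python); the strata and sampling fractions below are hypothetical, and the paper's estimating equation is more specific than plain inverse-probability weighting.

      # Toy inverse-probability weights for an outcome-dependent sample:
      # subjects retained with probability p receive weight 1/p, so the
      # weighted sample again represents the full cohort.
      sampling_fractions = {"case": 1.0, "noncase": 0.2}   # hypothetical design

      def ods_weight(is_case: bool) -> float:
          return 1.0 / sampling_fractions["case" if is_case else "noncase"]

      # Example: each sampled non-case stands in for 5 cohort members.
      print(ods_weight(True), ods_weight(False))   # 1.0  5.0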

  5. A New Scheme for the Design of Hilbert Transform Pairs of Biorthogonal Wavelet Bases

    NASA Astrophysics Data System (ADS)

    Shi, Hongli; Luo, Shuqian

    2010-12-01

    In designing the Hilbert transform pairs of biorthogonal wavelet bases, it has been shown that the requirements of the equal-magnitude responses and the half-sample phase offset on the lowpass filters are the necessary and sufficient condition. In this paper, the relationship between the phase offset and the vanishing moment difference of biorthogonal scaling filters is derived, which implies a simple way to choose the vanishing moments so that the phase response requirement can be satisfied structurally. The magnitude response requirement is approximately achieved by a constrained optimization procedure, where the objective function and constraints are all expressed in terms of the auxiliary filters of scaling filters rather than the scaling filters directly. Generally, the calculation burden in the design implementation will be less than that of the current schemes. The integral of magnitude response difference between the primal and dual scaling filters has been chosen as the objective function, which expresses the magnitude response requirements in the whole frequency range. Two design examples illustrate that the biorthogonal wavelet bases designed by the proposed scheme are very close to Hilbert transform pairs.

  6. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of

  7. Research on crude oil storage and transportation based on optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Xuhua

    2018-04-01

    At present, optimization theory and methods are widely used in the scheduling and optimal operation of complex production systems. In this work, the theoretical research results are implemented on the C++Builder 6 development platform. A simulation and intelligent decision system for crude oil storage and transportation inventory scheduling is designed. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes. It can optimize the scheduling scheme of a crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of a floating-roof oil storage tank is developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling costs of the storage and transportation system can be reduced by 9.1%. Therefore, this method can realize safe and stable operation of the crude oil storage and transportation system.

  8. Optimization of Crew Shielding Requirement in Reactor-Powered Lunar Surface Missions

    NASA Technical Reports Server (NTRS)

    Barghouty, A. F.

    2007-01-01

    On the surface of the moon, and not only during heightened solar activities, the radiation environment is such that crew protection will be required for missions lasting in excess of six months. This study focuses on estimating the optimized crew shielding requirement for lunar surface missions with a nuclear option. Simple, transport-simulation based dose-depth relations of the three radiation sources (galactic, solar, and fission) are employed in a one-dimensional optimization scheme. The scheme is developed to estimate the total required mass of lunar regolith separating reactor from crew. The scheme was applied to both solar maximum and minimum conditions. It is shown that savings of up to 30% in regolith mass can be realized. It is argued, however, that inherent variation and uncertainty, mainly in lunar regolith attenuation properties and in the radiation quality factor, can easily defeat this and similar optimization schemes.
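
    A back-of-the-envelope version of the one-dimensional estimate can be written down directly (Python); the dose rates, dose limit, and attenuation length below are purely illustrative assumptions, not mission values.

      # Toy 1-D shielding estimate: find the smallest regolith thickness for
      # which the exponentially attenuated fission dose plus the galactic and
      # solar background stays under a hypothetical crew dose limit.
      from math import log

      dose_limit = 0.5                 # Sv/yr, hypothetical crew limit
      dose_background = 0.2            # Sv/yr, galactic + solar contribution (assumed fixed)
      dose_fission_unshielded = 50.0   # Sv/yr at the crew location with no regolith
      attenuation_length = 0.25        # m of regolith per e-folding, hypothetical

      thickness = attenuation_length * log(dose_fission_unshielded
                                           / (dose_limit - dose_background))
      print(f"required regolith thickness ~ {thickness:.2f} m")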

  9. Optimization of Crew Shielding Requirement in Reactor-Powered Lunar Surface Missions

    NASA Technical Reports Server (NTRS)

    Barghouty, Abdulnasser F.

    2007-01-01

    On the surface of the moon, and not only during heightened solar activities, the radiation environment is such that crew protection will be required for missions lasting in excess of six months. This study focuses on estimating the optimized crew shielding requirement for lunar surface missions with a nuclear option. Simple, transport-simulation based dose-depth relations of the three (galactic, solar, and fission) radiation sources are employed in a 1-dimensional optimization scheme. The scheme is developed to estimate the total required mass of lunar regolith separating reactor from crew. The scheme was applied to both solar maximum and minimum conditions. It is shown that savings of up to 30% in regolith mass can be realized. It is argued, however, that inherent variation and uncertainty, mainly in lunar regolith attenuation properties in addition to the radiation quality factor, can easily defeat this and similar optimization schemes.

  10. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    PubMed

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with ¹²C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with ¹³C-dansylation. Equal amounts of the ¹²C
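
    The normalization step amounts to simple arithmetic, sketched below (Python); the concentrations and the target amount are made-up numbers, used only to show how sample volumes would be scaled so each sample contributes equally to the pool.

      # Toy normalization: given LC-UV total concentrations of labeled
      # metabolites, choose volumes so every sample contributes the same
      # amount (concentration x volume) to the pooled standard.
      concentrations = {"urine_A": 8.0, "urine_B": 4.0, "urine_C": 2.0}  # arbitrary units
      target_amount = 20.0                                               # hypothetical

      volumes = {s: target_amount / c for s, c in concentrations.items()}
      print(volumes)   # {'urine_A': 2.5, 'urine_B': 5.0, 'urine_C': 10.0}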

  11. Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.

    2017-10-01

    We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
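
    The variable-fixing step can be sketched as follows (Python); this is a toy reading of the idea for ±1 spin variables, not the authors' implementation, and the sampler and clamping routine are left as hypothetical placeholders.

      # Freeze variables whose value agrees across most low-energy samples,
      # then re-run the heuristic on the smaller, less connected subproblem.
      import numpy as np

      def persistent_fixings(samples, agreement=0.9):
          """samples: (n_samples, n_vars) array with entries in {-1, +1}."""
          mean_spin = samples.mean(axis=0)
          # |mean| >= 2*agreement - 1 means at least `agreement` of the samples agree.
          return {i: int(np.sign(m)) for i, m in enumerate(mean_spin)
                  if abs(m) >= 2 * agreement - 1}

      # Usage sketch:
      # samples = run_sampler(problem, n_starts=100)   # hypothetical multistart sampler
      # fixed = persistent_fixings(samples)
      # reduced_problem = clamp(problem, fixed)        # hypothetical reduction step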

  12. Learning automata-based solutions to the nonlinear fractional knapsack problem with applications to optimal resource allocation.

    PubMed

    Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin

    2007-02-01

    This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fraction of each web site that is successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found--even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.

  13. Optimizing the Betts-Miller-Janjic cumulus parameterization with Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.-L.

    2015-10-01

    The schemes of cumulus parameterization are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. The schemes all provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfills such purposes in the Weather Research and Forecasting (WRF) model. The National Centers for Environmental Prediction (NCEP) has tried to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, this scheme is very suitable for parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization capabilities, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves the performance by 2.4x and 17.0x, respectively.

  14. Optimization of Pulse Sequences in MRI Scheme

    NASA Astrophysics Data System (ADS)

    Roy, Subhankar; Hu, Jianping; Ummal Momeen, M.

    2018-04-01

    Magnetic resonance imaging (MRI) has a wide range of applications in imaging the human body. In this work we have solved the Bloch equations for different magnetic field gradients along the transverse direction. We have modified the magnetic field components based on the relaxation terms and solved the field gradient as well as the field components for both off-pulse and on-pulse configurations. In particular we focus on different pulse sequences and optimize them to realize the best possible output. We have analyzed the field components along the transverse direction because the rotating magnetization that emits the imaging signal lies in the xy plane.

  15. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
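
    A stripped-down version of the sampler is sketched below (Python); it assumes only box bounds and a generic objective with a relative near-optimal tolerance, and uses a crude bracket-and-shrink step in place of the full slice-sampling machinery described above, so it illustrates the idea rather than reproducing the authors' algorithm.

      # Hit-and-run over a near-optimal region {x : f(x) <= (1 + tol) * f_opt,
      # lower <= x <= upper}: pick a random direction, then sample a feasible
      # step along that line by bracketing and shrinking around the current point.
      import numpy as np

      def hit_and_run(f, x0, f_opt, tol, lower, upper, n_points=500, seed=0):
          rng = np.random.default_rng(seed)
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)

          def feasible(x):
              return np.all(x >= lower) and np.all(x <= upper) and f(x) <= (1 + tol) * f_opt

          x, alternatives = np.asarray(x0, float), []
          for _ in range(n_points):
              d = rng.normal(size=x.size)
              d /= np.linalg.norm(d)
              lo, hi = -1.0, 1.0                      # bracket the feasible segment
              while feasible(x + lo * d):
                  lo *= 2.0
              while feasible(x + hi * d):
                  hi *= 2.0
              for _ in range(50):                     # shrink toward the current point
                  t = rng.uniform(lo, hi)
                  if feasible(x + t * d):
                      x = x + t * d
                      break
                  lo, hi = (t, hi) if t < 0 else (lo, t)
              alternatives.append(x.copy())
          return np.array(alternatives)

      # Example use with a toy objective (an assumption, not the water-resources model):
      # samples = hit_and_run(lambda z: 1.0 + np.sum((z - 0.2) ** 2), x0=0.2 * np.ones(3),
      #                       f_opt=1.0, tol=0.5, lower=-np.ones(3), upper=np.ones(3))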

  16. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently without PC support. This scheme is based on a parallel hardware structure that uses a DSP and a field programmable gate array (FPGA) to realize 3-D imaging. In this integrated scheme of 3-D imaging, phase measurement profilometry is adopted. To realize pipelined processing of the fringe projection, image acquisition, and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS real-time operating system (RTOS). The RTOS provides a preemptive kernel and a powerful configuration tool, with which we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme can reach a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.

  17. Evaluation and optimization of microbial DNA extraction from fecal samples of wild Antarctic bird species

    PubMed Central

    Eriksson, Per; Mourkas, Evangelos; González-Acuna, Daniel; Olsen, Björn; Ellström, Patrik

    2017-01-01

    ABSTRACT Introduction: Advances in the development of nucleic acid-based methods have dramatically facilitated studies of host–microbial interactions. Fecal DNA analysis can provide information about the host’s microbiota and gastrointestinal pathogen burden. Numerous studies have been conducted in mammals, yet birds are less well studied. Avian fecal DNA extraction has proved challenging, partly due to the mixture of fecal and urinary excretions and the deficiency of optimized protocols. This study presents an evaluation of the performance in avian fecal DNA extraction of six commercial kits from different bird species, focusing on penguins. Material and methods: Six DNA extraction kits were first tested according to the manufacturers’ instructions using mallard feces. The kit giving the highest DNA yield was selected for further optimization and evaluation using Antarctic bird feces. Results: Penguin feces constitute a challenging sample type: most of the DNA extraction kits failed to yield acceptable amounts of DNA. The QIAamp cador Pathogen kit (Qiagen) performed the best in the initial investigation. Further optimization of the protocol resulted in good yields of high-quality DNA from seven bird species of different avian orders. Conclusion: This study presents an optimized approach to DNA extraction from challenging avian fecal samples. PMID:29152162

  18. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    PubMed

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges in extracting the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except for ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and interday precision with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. An implementation of particle swarm optimization to evaluate optimal under-voltage load shedding in competitive electricity markets

    NASA Astrophysics Data System (ADS)

    Hosseini-Bioki, M. M.; Rashidinejad, M.; Abdollahi, A.

    2013-11-01

    Load shedding is a crucial issue in power systems, especially in a restructured electricity environment. Market-driven load shedding in reregulated power systems, associated with security as well as reliability, is investigated in this paper. A technoeconomic multi-objective function is introduced to reveal an optimal load shedding scheme considering maximum social welfare. The proposed optimization problem includes maximizing GENCOs' and loads' profits as well as the maximum loadability limit under normal and contingency conditions. Particle swarm optimization (PSO), a heuristic optimization technique, is utilized to find an optimal load shedding scheme. In a market-driven structure, generators offer their bidding blocks while the dispatchable loads bid their price-responsive demands. An independent system operator (ISO) derives a market clearing price (MCP) while rescheduling the amount of generating power in both pre-contingency and post-contingency conditions. The proposed methodology is developed on a 3-bus system and then applied to a modified IEEE 30-bus test system. The obtained results show the effectiveness of the proposed methodology in implementing optimal load shedding that satisfies social welfare while maintaining the voltage stability margin (VSM), through technoeconomic analyses.
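
    For readers unfamiliar with PSO itself, a generic, self-contained loop of the kind invoked above is sketched below (Python); the objective is a placeholder rather than the technoeconomic load-shedding function, and all parameters are conventional defaults, not values from the paper.

      # Generic particle swarm optimization: each particle is pulled toward its
      # personal best and the swarm's global best position.
      import numpy as np

      def pso(objective, dim, n_particles=30, n_iters=200,
              w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0), seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))       # positions
          v = np.zeros_like(x)                              # velocities
          pbest = x.copy()
          pbest_val = np.array([objective(p) for p in x])
          gbest = pbest[pbest_val.argmin()].copy()
          for _ in range(n_iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([objective(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[pbest_val.argmin()].copy()
          return gbest, pbest_val.min()

      # Toy usage (placeholder objective, not a load-shedding cost):
      best_x, best_val = pso(lambda z: np.sum((z - 0.3) ** 2), dim=5)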

  20. Improved Scheme of Modified Gaussian Deconvolution for Reflectance Spectra of Lunar Soils

    NASA Technical Reports Server (NTRS)

    Hiroi, T.; Pieters, C. M.; Noble, S. K.

    2000-01-01

    In our continuing effort for deconvolving reflectance spectra of lunar soils using the modified Gaussian model, a new scheme has been developed, including a new form of continuum. All the parameters are optimized with certain constraints.

  1. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme has two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to solve the parameter dependence of marker diameter and marker number. Second, we propose a novel algorithm to solve the tracking of fiducial markers by reducing the tracking problem to an incomplete point set registration problem. Because a global optimization of the point set registration is performed, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves accurate tracking, almost identical to the current best semi-automatic scheme in IMOD. Furthermore, our scheme is fully automatic, depends on fewer parameters (it requires only a rough value of the marker diameter) and does not require any manual interaction, providing the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Simplified two-dimensional microwave imaging scheme using metamaterial-loaded Vivaldi antenna

    NASA Astrophysics Data System (ADS)

    Johari, Esha; Akhter, Zubair; Bhaskar, Manoj; Akhtar, M. Jaleel

    2017-03-01

    In this paper, a highly efficient, low-cost scheme for two-dimensional microwave imaging is proposed. To this end, the AZIM (anisotropic zero index metamaterial) cell-loaded Vivaldi antenna is designed and tested as effective electromagnetic radiation beam source required in the microwave imaging scheme. The designed antenna is first individually tested in the anechoic chamber, and its directivity along with the radiation pattern is obtained. The measurement setup for the imaging here involves a vector network analyzer, the AZIM cell-loaded ultra-wideband Vivaldi antenna, and other associated microwave components. The potential of the designed antenna for the microwave imaging is tested by first obtaining the two-dimensional reflectivity images of metallic samples of different shapes placed in front of the antenna, using the proposed scheme. In the next step, these sets of samples are hidden behind wooden blocks of different thicknesses and the reflectivity image of the test media is reconstructed by using the proposed scheme. Finally, the reflectivity images of various dielectric samples (Teflon, Plexiglas, permanent magnet moving coil) along with the copper sheet placed on a piece of cardboard are reconstructed by using the proposed setup. The images obtained for each case are plotted and compared with the actual objects, and a close match is observed which shows the applicability of the proposed scheme for through-wall imaging and the detection of concealed objects.

  3. Numerical study of read scheme in one-selector one-resistor crossbar array

    NASA Astrophysics Data System (ADS)

    Kim, Sungho; Kim, Hee-Dong; Choi, Sung-Jin

    2015-12-01

    A comprehensive numerical circuit analysis of read schemes of a one selector-one resistance change memory (1S1R) crossbar array is carried out. Three schemes-the ground, V/2, and V/3 schemes-are compared with each other in terms of sensing margin and power consumption. Without the aid of a complex analytical approach or SPICE-based simulation, a simple numerical iteration method is developed to simulate entire current flows and node voltages within a crossbar array. Understanding such phenomena is essential in successfully evaluating the electrical specifications of selectors for suppressing intrinsic drawbacks of crossbar arrays, such as sneaky current paths and series line resistance problems. This method provides a quantitative tool for the accurate analysis of crossbar arrays and provides guidelines for developing an optimal read scheme, array configuration, and selector device specifications.

  4. Optimization of sampling parameters for collection and preconcentration of alveolar air by needle traps.

    PubMed

    Filipiak, Wojciech; Filipiak, Anna; Ager, Clemens; Wiesenhofer, Helmut; Amann, Anton

    2012-06-01

    The approach for breath-VOCs' collection and preconcentration by applying needle traps was developed and optimized. The alveolar air was collected from only a few exhalations under visual control of expired CO(2) into a large gas-tight glass syringe and then warmed up to 45 °C for a short time to avoid condensation. Subsequently, a specially constructed sampling device equipped with Bronkhorst® electronic flow controllers was used for automated adsorption. This sampling device allows time-saving collection of expired/inspired air in parallel onto three different needle traps as well as improvement of sensitivity and reproducibility of NT-GC-MS analysis by collection of relatively large (up to 150 ml) volume of exhaled breath. It was shown that the collection of alveolar air derived from only a few exhalations into a large syringe followed by automated adsorption on needle traps yields better results than manual sorption by up/down cycles with a 1 ml syringe, mostly due to avoided condensation and electronically controlled stable sample flow rate. The optimal profile and composition of needle traps consists of 2 cm Carbopack X and 1 cm Carboxen 1000, allowing highly efficient VOCs' enrichment, while injection by a fast expansive flow technique requires no modifications in instrumentation and fully automated GC-MS analysis can be performed with a commercially available autosampler. This optimized analytical procedure considerably facilitates the collection and enrichment of alveolar air, and is therefore suitable for application at the bedside of critically ill patients in an intensive care unit. Due to its simplicity it can replace the time-consuming sampling of sufficient breath volume by numerous up/down cycles with a 1 ml syringe.

  5. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    NASA Astrophysics Data System (ADS)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M_* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  6. Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo

    2016-01-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134

  7. Modular and configurable optimal sequence alignment software: Cola.

    PubMed

    Zamani, Neda; Sundström, Görel; Höppner, Marc P; Grabherr, Manfred G

    2014-01-01

    The fundamental challenge in optimally aligning homologous sequences is to define a scoring scheme that best reflects the underlying biological processes. Maximising the overall number of matches in the alignment does not always reflect the patterns by which nucleotides mutate. Efficiently implemented algorithms that can be parameterised to accommodate more complex non-linear scoring schemes are thus desirable. We present Cola, alignment software that implements different optimal alignment algorithms, also allowing for scoring contiguous matches of nucleotides in a nonlinear manner. The latter places more emphasis on short, highly conserved motifs, and less on the surrounding nucleotides, which can be more diverged. To illustrate the differences, we report results from aligning 14,100 sequences from 3' untranslated regions of human genes to 25 of their mammalian counterparts, where we found that a nonlinear scoring scheme is more consistent than a linear scheme in detecting short, conserved motifs. Cola is freely available under the LGPL from https://github.com/nedaz/cola.
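
    The effect of a nonlinear scheme can be illustrated by scoring an already-aligned pair of sequences (Python); this toy scorer is not Cola's algorithm, and the exponent and match weight are arbitrary choices used only to show how contiguous runs are rewarded more than scattered matches.

      # Score contiguous match runs: exponent = 1 recovers the usual linear
      # match count, exponent > 1 rewards long uninterrupted runs superlinearly.
      def match_runs(a, b):
          runs, length = [], 0
          for x, y in zip(a, b):
              if x == y and x != '-':
                  length += 1
              else:
                  if length:
                      runs.append(length)
                  length = 0
          if length:
              runs.append(length)
          return runs

      def run_score(a, b, exponent=1.5, match=1.0):
          return sum(match * L ** exponent for L in match_runs(a, b))

      conserved = ("ACGTACGT", "ACGTACGT")   # one run of 8 matches
      scattered = ("ACGTACGT", "AGGTTCGA")   # runs of 1, 2, and 2 matches
      print(run_score(*conserved), run_score(*scattered))   # ~22.6 vs ~6.7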

  8. Performance tuning Weather Research and Forecasting (WRF) Goddard longwave radiative transfer scheme on Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The next-generation mesoscale numerical weather prediction system, the Weather Research and Forecasting (WRF) model, is designed for dual use in forecasting and research. WRF offers multiple physics options that can be combined in any way. One of the physics options is radiance computation. The major source of energy for the earth's climate is solar radiation. Thus, it is imperative to accurately model the horizontal and vertical distribution of the heating. The Goddard solar radiative transfer model includes the absorption due to water vapor, ozone, oxygen, carbon dioxide, clouds and aerosols. The model computes the interactions among the absorption and scattering by clouds, aerosols, molecules and the surface. Finally, fluxes are integrated over the entire longwave spectrum. In this paper, we present our results of optimizing the Goddard longwave radiative transfer scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The optimizations improved the performance of the original Goddard longwave radiative transfer scheme on the Xeon Phi 7120P by a factor of 2.2x. Furthermore, the same optimizations improved the performance of the Goddard longwave radiative transfer scheme on a dual socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 2.1x compared to the original Goddard longwave radiative transfer scheme code.

  9. Optimally cloned binary coherent states

    NASA Astrophysics Data System (ADS)

    Müller, C. R.; Leuchs, G.; Marquardt, Ch.; Andersen, U. L.

    2017-10-01

    Binary coherent state alphabets can be represented in a two-dimensional Hilbert space. We capitalize on this formal connection between the otherwise distinct domains of qubits and continuous variable states to map binary phase-shift keyed coherent states onto the Bloch sphere and to derive their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to that of the optimal cloner.

  10. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    PubMed

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  11. Efficient Secure and Privacy-Preserving Route Reporting Scheme for VANETs

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanfei; Pei, Qianwen; Dai, Feifei; Zhang, Lei

    2017-10-01

    A vehicular ad-hoc network (VANET) is a core component of an intelligent traffic management system and can provide a variety of applications such as accident prediction and route reporting. Due to the problems caused by traffic congestion, route reporting has become a promising application that can help a driver obtain an optimal route and save travel time. Before enjoying the convenience of route reporting, however, security and privacy-preserving issues need to be addressed. In this paper, we propose a new secure and privacy-preserving route reporting scheme for VANETs. In our scheme, only an authenticated vehicle can use the route reporting service provided by the traffic management center. Further, a vehicle may receive the response from the traffic management center with low latency and without violating its privacy. Experimental results show that our scheme is much more efficient than the existing one.

  12. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related

  13. Optimization for Peptide Sample Preparation for Urine Peptidomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan

    2014-02-25

    proteins when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis due to the optimized sample clean up that provided improved experimental inference from the confidently identified peptides.

  14. Evaluating the Performance of Single and Double Moment Microphysics Schemes During a Synoptic-Scale Snowfall Event

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.

    2011-01-01

    Increases in computing resources have allowed for the utilization of high-resolution weather forecast models capable of resolving cloud microphysical and precipitation processes among varying numbers of hydrometeor categories. Several microphysics schemes are currently available within the Weather Research and Forecasting (WRF) model, ranging from single-moment predictions of precipitation content to double-moment predictions that include a prediction of particle number concentrations. Each scheme incorporates several assumptions related to the size distribution, shape, and fall speed relationships of ice crystals in order to simulate cold-cloud processes and resulting precipitation. Field campaign data offer a means of evaluating the assumptions present within each scheme. The Canadian CloudSat/CALIPSO Validation Project (C3VP) represented collaboration among the CloudSat, CALIPSO, and NASA Global Precipitation Measurement mission communities to observe cold season precipitation processes relevant to forecast model evaluation and the eventual development of satellite retrievals of cloud properties and precipitation rates. During the C3VP campaign, widespread snowfall occurred on 22 January 2007, sampled by aircraft and surface instrumentation that provided particle size distributions, ice water content, and fall speed estimations along with traditional surface measurements of temperature and precipitation. In this study, four single-moment and two double-moment microphysics schemes were utilized to generate hypothetical WRF forecasts of the event, with C3VP data used in evaluation of their varying assumptions. It is shown that schemes incorporating flexibility in size distribution parameters and density assumptions are preferable to those using fixed constants, and that a double-moment representation of the snow category may be beneficial when representing the effects of aggregation. These results may guide forecast centers in optimal configurations of their forecast models.

  15. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes.

    PubMed

    Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C

    2008-01-10

    Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures x time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity
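
    One common way to quantify how well a matrix maps to a low-dimensional subspace is an entropy over its normalized singular values; the sketch below (Python) is offered in that spirit as an illustration and may differ in detail from the entropy measure actually used in the paper.

      # SVD-based entropy: near 0 when a few components explain the matrix
      # (low "complexity"), near 1 when variance is spread over many components.
      import numpy as np

      def svd_entropy(X):
          s = np.linalg.svd(np.asarray(X, dtype=float), compute_uv=False)
          p = s ** 2 / np.sum(s ** 2)          # fraction of variance per component
          p = p[p > 0]
          return float(-np.sum(p * np.log(p)) / np.log(len(s)))   # scaled to [0, 1]

      rng = np.random.default_rng(0)
      low_rank = np.outer(rng.normal(size=50), rng.normal(size=20))   # ~rank 1
      noise = rng.normal(size=(50, 20))                               # full rank
      print(svd_entropy(low_rank), svd_entropy(noise))                # small vs large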

  16. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel
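
    As a purely generic illustration of why Gauss-Seidel and SOR sweeps converge faster than Jacobi iteration (this is a textbook linear-system example, not the non-LTE transfer code itself), one can compare residuals after the same number of sweeps on a 1-D Laplacian system; the relaxation parameter below is simply chosen close to the classical optimum for this matrix.

      # Jacobi vs. SOR (omega = 1 gives Gauss-Seidel): SOR reuses values already
      # updated within the current sweep, which damps errors much faster.
      import numpy as np

      def jacobi(A, b, n_sweeps):
          x, d = np.zeros_like(b), np.diag(A)
          for _ in range(n_sweeps):
              x = (b - (A @ x - d * x)) / d
          return x

      def sor(A, b, n_sweeps, omega=1.0):
          x, n = np.zeros_like(b), len(b)
          for _ in range(n_sweeps):
              for i in range(n):
                  sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                  x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])
          return x

      n = 50
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
      b = np.ones(n)
      for name, x in [("Jacobi", jacobi(A, b, 200)),
                      ("Gauss-Seidel", sor(A, b, 200, omega=1.0)),
                      ("SOR", sor(A, b, 200, omega=1.88))]:
          print(name, np.linalg.norm(A @ x - b))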

  17. Fuzzy Adaptive Decentralized Optimal Control for Strict Feedback Nonlinear Large-Scale Systems.

    PubMed

    Sun, Kangkang; Sui, Shuai; Tong, Shaocheng

    2018-04-01

    This paper considers the optimal decentralized fuzzy adaptive control design problem for a class of interconnected large-scale nonlinear systems in strict feedback form and with unknown nonlinear functions. Fuzzy logic systems are introduced to learn the unknown dynamics and cost functions, respectively, and a state estimator is developed. By applying the state estimator and the backstepping recursive design algorithm, a decentralized feedforward controller is established. By using the backstepping decentralized feedforward control scheme, the considered interconnected large-scale nonlinear system in strict feedback form is changed into an equivalent affine large-scale nonlinear system. Subsequently, an optimal decentralized fuzzy adaptive control scheme is constructed. The whole optimal decentralized fuzzy adaptive controller is composed of a decentralized feedforward control and an optimal decentralized control. It is proved that the developed optimal decentralized controller can ensure that all the variables of the control system are uniformly ultimately bounded and that the cost functions are minimized. Two simulation examples are provided to illustrate the validity of the developed optimal decentralized fuzzy adaptive control scheme.

  18. Low-thrust trajectory optimization of asteroid sample return mission with multiple revolutions and moon gravity assists

    NASA Astrophysics Data System (ADS)

    Tang, Gao; Jiang, FanHuag; Li, JunFeng

    2015-11-01

    Near-Earth asteroids have gained a lot of interest and the development in low-thrust propulsion technology makes complex deep space exploration missions possible. A mission from low-Earth orbit using low-thrust electric propulsion system to rendezvous with near-Earth asteroid and bring sample back is investigated. By dividing the mission into five segments, the complex mission is solved separately. Then different methods are used to find optimal trajectories for every segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumption to escape from the Earth. To avoid possible numerical difficulty of indirect methods, a direct method to parameterize the switching moment and direction of thrust vector is proposed. To maximize the mass of sample, optimal control theory and homotopic approach are applied to find the optimal trajectory. Direct methods of finding proper time to brake the spacecraft using Moon gravity assist are also proposed. Practical techniques including both direct and indirect methods are investigated to optimize trajectories for different segments and they can be easily extended to other missions and more precise dynamic model.

  19. An adaptive Cartesian control scheme for manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of an auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.

  20. Passive tracking scheme for a single stationary observer

    NASA Astrophysics Data System (ADS)

    Chan, Y. T.; Rea, Terry

    2001-08-01

    While there are many techniques for Bearings-Only Tracking (BOT) in the ocean environment, they do not apply directly to the land situation. Generally, for tactical reasons, the land observer platform is stationary; but, it has two sensors, visual and infrared, for measuring bearings and a laser range finder (LRF) for measuring range. There is a requirement to develop a new BOT data fusion scheme that fuses the two sets of bearing readings and, together with a single LRF measurement, produces a unique track. This paper first develops a parameterized solution for the target speeds, prior to the occurrence of the LRF measurement, when the problem is unobservable. At, and after, the LRF measurement, a BOT formulated as a least squares (LS) estimator then produces a unique LS estimate of the target states. Bearing readings from the other sensor serve as instrumental variables in a data fusion setting to eliminate the bias in the BOT estimator. The result is a recursive, unbiased, and decentralized data fusion scheme. Results from two simulation experiments have corroborated the theoretical development and show that the scheme is optimal.

  1. Fast rerouting schemes for protected mobile IP over MPLS networks

    NASA Astrophysics Data System (ADS)

    Wen, Chih-Chao; Chang, Sheng-Yi; Chen, Huan; Chen, Kim-Joan

    2005-10-01

    Fast rerouting is a critical traffic engineering operation in MPLS networks. To implement the Mobile IP service over an MPLS network, the fast rerouting operation can be used to enhance availability and survivability. MPLS can protect the critical LSP tunnel between the Home Agent (HA) and the Foreign Agent (FA) using the fast rerouting scheme. In this paper, we propose a simple but efficient algorithm to address the triangle routing problem for Mobile IP over MPLS networks. We consider this routing issue as a link weighting and capacity assignment (LW-CA) problem. The derived solution is used to plan the fast restoration mechanism that protects against link or node failure. In this paper, we first model the LW-CA problem as a mixed integer optimization problem. Our goal is to minimize the call blocking probability on the most congested working trunk for the mobile IP connections. Many existing network topologies are used to evaluate the performance of our scheme. Results show that our proposed scheme obtains the best performance, in terms of the smallest blocking probability, compared to other schemes.

  2. Phase II Evaluation of Clinical Coding Schemes

    PubMed Central

    Campbell, James R.; Carpenter, Paul; Sneiderman, Charles; Cohn, Simon; Chute, Christopher G.; Warren, Judith

    1997-01-01

    Objective: To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). Methods: The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for “parent” and “child” codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. Results: SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p

  3. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    PubMed

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01) with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
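
    As a purely illustrative sketch (the parameter values below are hypothetical, and this is not the paper's NONMEM/Bayesian estimation workflow), the following compares the exposure (AUC) obtained from densely sampled two-compartment concentrations with the AUC recovered from the four reported post-infusion sampling times:

      import numpy as np
      from scipy.integrate import solve_ivp

      # hypothetical two-compartment parameters (L/h and L) and a 0.5-h infusion
      CL, V1, Q, V2 = 25.0, 15.0, 30.0, 20.0
      dose, t_inf = 280.0, 0.5                        # mg, h

      def rhs(t, y):
          a1, a2 = y                                  # amounts in central / peripheral compartments
          r_in = dose / t_inf if t <= t_inf else 0.0
          da1 = r_in - (CL / V1) * a1 - (Q / V1) * a1 + (Q / V2) * a2
          da2 = (Q / V1) * a1 - (Q / V2) * a2
          return [da1, da2]

      def trapz(y, x):
          # simple trapezoidal AUC
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      t_dense = np.linspace(0.0, 8.0, 2001)
      sol = solve_ivp(rhs, (0.0, 8.0), [0.0, 0.0], t_eval=t_dense, max_step=0.01)
      conc = sol.y[0] / V1                            # central concentration, mg/L

      t_sparse = t_inf + np.array([0.08, 0.61, 2.0, 4.0])   # reported post-infusion times
      c_sparse = np.interp(t_sparse, t_dense, conc)

      print(f"AUC, dense grid   : {trapz(conc, t_dense):6.1f} mg*h/L")
      print(f"AUC, 4-point grid : {trapz(c_sparse, t_sparse):6.1f} mg*h/L")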

  4. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT

  5. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or the internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which supports a two-stage method to estimate the impact of the relevant parameters when parameter estimates are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides operational steps to improve the accuracy of parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  6. A more accurate scheme for calculating Earth's skin temperature

    NASA Astrophysics Data System (ADS)

    Tsuang, Ben-Jei; Tu, Chia-Ying; Tsai, Jeng-Lin; Dracup, John A.; Arpe, Klaus; Meyers, Tilden

    2009-02-01

    The theoretical framework of the vertical discretization of a ground column for calculating Earth’s skin temperature is presented. The suggested discretization is derived from the evenly heat-content discretization with the optimal effective thickness for layer-temperature simulation. For the same level number, the suggested discretization is more accurate in skin temperature as well as surface ground heat flux simulations than those used in some state-of-the-art models. A proposed scheme (“op(3,2,0)”) can reduce the normalized root-mean-square error (or RMSE/STD ratio) of the calculated surface ground heat flux of a cropland site significantly to 2% (or 0.9 W m⁻²), from 11% (or 5 W m⁻²) by a 5-layer scheme used in ECMWF, from 19% (or 8 W m⁻²) by a 5-layer scheme used in ECHAM, and from 74% (or 32 W m⁻²) by a single-layer scheme used in the UCLA GCM. Better accuracy can be achieved by including more layers to the vertical discretization. Similar improvements are expected for other locations with different land types since the numerical error is inherited into the models for all the land types. The proposed scheme can be easily implemented into state-of-the-art climate models for the temperature simulation of snow, ice and soil.

  7. On the optimization of discrete structures with aeroelastic constraints

    NASA Technical Reports Server (NTRS)

    Mcintosh, S. C., Jr.; Ashley, H.

    1978-01-01

    The paper deals with the problem of dynamic structural optimization where constraints relating to flutter of a wing (or other dynamic aeroelastic performance) are imposed along with conditions of a more conventional nature such as those relating to stress under load, deflection, minimum dimensions of structural elements, etc. The discussion is limited to a flutter problem for a linear system with a finite number of degrees of freedom and a single constraint involving aeroelastic stability, and the structure motion is assumed to be a simple harmonic time function. Three search schemes are applied to the minimum-weight redesign of a particular wing: the first scheme relies on the method of feasible directions, while the other two are derived from necessary conditions for a local optimum so that they can be referred to as optimality-criteria schemes. The results suggest that a heuristic redesign algorithm involving an optimality criterion may be best suited for treating multiple constraints with large numbers of design variables.

  8. 'Just another incentive scheme': a qualitative interview study of a local pay-for-performance scheme for primary care.

    PubMed

    Hackett, Julia; Glidewell, Liz; West, Robert; Carder, Paul; Doran, Tim; Foy, Robbie

    2014-10-25

    A range of policy initiatives have addressed inequalities in healthcare and health outcomes. Local pay-for-performance schemes for primary care have been advocated as means of enhancing clinical ownership of the quality agenda and better targeting local need compared with national schemes such as the UK Quality and Outcomes Framework (QOF). We investigated whether professionals' experience of a local scheme in one English National Health Service (NHS) former primary care trust (PCT) differed from that of the national QOF in relation to the goal of reducing inequalities. We conducted retrospective semi-structured interviews with primary care professionals implementing the scheme and those involved in its development. We purposively sampled practices with varying levels of population socio-economic deprivation and achievement. Interviews explored perceptions of the scheme and indicators, likely mechanisms of influence on practice, perceived benefits and harms, and how future schemes could be improved. We used a framework approach to analysis. Thirty-eight professionals from 16 general practices and six professionals involved in developing local indicators participated. Our findings cover four themes: ownership, credibility of the indicators, influences on behaviour, and exacerbated tensions. We found little evidence that the scheme engendered any distinctive sense of ownership or experiences different from the national scheme. Although the indicators and their evidence base were seldom actively questioned, doubts were expressed about their focus on health promotion given that eventual benefits relied upon patient action and availability of local resources. Whilst practices serving more affluent populations reported status and patient benefit as motivators for participating in the scheme, those serving more deprived populations highlighted financial reward. The scheme exacerbated tensions between patient and professional consultation agendas, general practitioners

  9. Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.

    PubMed

    Yuan, Haidong

    2016-10-14

    Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in the quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an order of O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.

  10. Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BILISOLY, ROGER L.; MCKENNA, SEAN A.

    2003-01-01

    Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no
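
    The idea of ranking candidate designs by the sum or product of the eigenvalues of the prediction-error covariance can be sketched with a simple-kriging calculation on a reference grid. The covariance model, site dimensions, and transect layouts below are illustrative assumptions, not those of the report:

      import numpy as np

      def exp_cov(a, b, sill=1.0, rng_m=20.0):
          # isotropic exponential covariance between two point sets (n x 2, m x 2)
          d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
          return sill * np.exp(-d / rng_m)

      def prediction_error_cov(design, grid, nugget=1e-6):
          # simple-kriging error covariance on the reference grid for a given design
          c_dd = exp_cov(design, design) + nugget * np.eye(len(design))
          c_gd = exp_cov(grid, design)
          c_gg = exp_cov(grid, grid)
          return c_gg - c_gd @ np.linalg.solve(c_dd, c_gd.T)

      def design_scores(design, grid):
          ev = np.clip(np.linalg.eigvalsh(prediction_error_cov(design, grid)), 1e-12, None)
          return ev.sum(), np.sum(np.log(ev))   # sum and log-product of the eigenvalues

      # reference grid over a 100 m x 100 m site
      gx, gy = np.meshgrid(np.linspace(0, 100, 15), np.linspace(0, 100, 15))
      grid = np.column_stack([gx.ravel(), gy.ravel()])

      # design A: one straight transect; design B: two parallel transects, same sample count
      tA = np.column_stack([np.linspace(0, 100, 30), np.full(30, 50.0)])
      tB = np.vstack([np.column_stack([np.linspace(0, 100, 15), np.full(15, 33.0)]),
                      np.column_stack([np.linspace(0, 100, 15), np.full(15, 66.0)])])

      for name, d in [("single transect", tA), ("two transects", tB)]:
          s, logp = design_scores(d, grid)
          print(f"{name:15s}  sum(eig) = {s:8.2f}   log-prod(eig) = {logp:8.2f}")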

  11. Towards an optimal flow: Density-of-states-informed replica-exchange simulations

    DOE PAGES

    Vogel, Thomas; Perez, Danny

    2015-11-05

    Replica exchange (RE) is one of the most popular enhanced-sampling simulation techniques in use today. Despite widespread successes, RE simulations can sometimes fail to converge in practical amounts of time, e.g., when sampling around phase transitions, or when a few hard-to-find configurations dominate the statistical averages. We introduce a generalized RE scheme, density-of-states-informed RE, that addresses some of these challenges. The key feature of our approach is to inform the simulation with readily available, but commonly unused, information on the density of states of the system as the RE simulation proceeds. This enables two improvements, namely, the introduction of resampling moves that actively move the system towards equilibrium and the continual adaptation of the optimal temperature set. As a consequence of these two innovations, we show that the configuration flow in temperature space is optimized and that the overall convergence of RE simulations can be dramatically accelerated.
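
    A minimal generic parallel-tempering (replica-exchange) sketch on a one-dimensional double-well potential illustrates the basic ingredients the paper builds on, local Metropolis moves plus neighbour swaps; it does not implement the density-of-states-informed resampling or temperature adaptation described above, and all numbers are toy values:

      import numpy as np

      rng = np.random.default_rng(0)

      def U(x):                       # 1-D double-well potential with a barrier at x = 0
          return (x**2 - 1.0)**2

      betas = np.array([8.0, 4.0, 2.0, 1.0, 0.5])   # inverse temperatures, coldest first
      x = np.zeros(len(betas))                       # one walker per temperature
      step, n_sweeps, swap_every = 0.4, 20000, 10
      samples_cold = []

      for sweep in range(n_sweeps):
          # local Metropolis move in each replica (vectorized over replicas)
          prop = x + step * rng.standard_normal(len(x))
          acc = rng.random(len(x)) < np.exp(-betas * (U(prop) - U(x)))
          x = np.where(acc, prop, x)

          # attempt configuration exchanges between neighbouring temperatures
          if sweep % swap_every == 0:
              for i in range(len(betas) - 1):
                  delta = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
                  if rng.random() < np.exp(min(0.0, delta)):
                      x[i], x[i + 1] = x[i + 1], x[i]

          samples_cold.append(x[0])                  # record the coldest replica

      print("fraction of cold-replica samples in the right-hand well:",
            np.mean(np.array(samples_cold) > 0))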

  12. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off petrol against electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974
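
    A toy sketch of a genetic algorithm with an adaptive mutation rate is shown below; the cost function is a hypothetical stand-in for the fuzzy-logic fuel/electricity objective, not the controller described in the paper:

      import numpy as np

      rng = np.random.default_rng(1)

      def cost(split):
          # toy stand-in objective: penalise deviation from a "best" petrol/electric
          # split that varies across driving segments (hypothetical target profile)
          target = np.linspace(0.2, 0.8, split.size)
          return float(np.sum((split - target) ** 2))

      pop_size, n_genes, n_gen = 40, 10, 200
      pop = rng.random((pop_size, n_genes))          # each gene: petrol fraction in [0, 1]
      mut_rate = 0.2

      for gen in range(n_gen):
          fitness = np.array([cost(ind) for ind in pop])
          order = np.argsort(fitness)
          parents = pop[order[: pop_size // 2]]      # truncation selection

          # uniform crossover between randomly paired parents
          idx = rng.integers(0, len(parents), (pop_size, 2))
          mask = rng.random((pop_size, n_genes)) < 0.5
          children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])

          # adaptive mutation: shrink the rate as the population converges
          spread = fitness.std() / (fitness.mean() + 1e-12)
          mut_rate = float(np.clip(0.5 * spread, 0.01, 0.3))
          mutate = rng.random(children.shape) < mut_rate
          children = np.clip(children + mutate * rng.normal(0, 0.1, children.shape), 0, 1)

          children[0] = pop[order[0]]                # elitism: keep the best individual
          pop = children

      print("best cost after optimisation:", cost(pop[0]))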

  13. Systematic ultrasound-guided saturation and template biopsy of the prostate: indications and advantages of extended sampling.

    PubMed

    Isbarn, Hendrik; Briganti, Alberto; De Visschere, Pieter J L; Fütterer, Jurgen J; Ghadjar, Pirus; Giannarini, Gianluca; Ost, Piet; Ploussard, Guillaume; Sooriakumaran, Prasanna; Surcel, Christian I; van Oort, Inge M; Yossepowitch, Ofer; van den Bergh, Roderick C N

    2015-04-01

    Prostate biopsy (PB) is the gold standard for the diagnosis of prostate cancer (PCa). However, the optimal number of biopsy cores remains debatable. We sought to compare contemporary standard (10-12 cores) vs. saturation (≥18 cores) schemes on initial as well as repeat PB. A non-systematic review of the literature was performed from 2000 through 2013. Studies of highest evidence (randomized controlled trials, prospective non-randomized studies, and retrospective reports of high quality) comparing standard vs saturation schemes on initial and repeat PB were evaluated. Outcome measures were overall PCa detection rate, detection rate of insignificant PCa, and procedure-associated morbidity. On initial PB, there is growing evidence that a saturation scheme is associated with a higher PCa detection rate compared to a standard one in men with lower PSA levels (<10 ng/ml), larger prostates (>40 cc), or lower PSA density values (<0.25 ng/ml/cc). However, these cut-offs are not uniform and differ among studies. Detection rates of insignificant PCa do not differ in a significant fashion between standard and saturation biopsies. On repeat PB, PCa detection rate is likewise higher with saturation protocols. Estimates of insignificant PCa vary widely due to differing definitions of insignificant disease. However, the rates of insignificant PCa appear to be comparable for the schemes in patients with only one prior negative biopsy, while saturation biopsy seems to detect more cases of insignificant PCa compared to standard biopsy in men with two or more prior negative biopsies. Very extensive sampling is associated with a high rate of acute urinary retention, whereas other severe adverse events, such as sepsis, appear not to occur more frequently with saturation schemes. Current evidence suggests that saturation schemes are associated with a higher PCa detection rate compared to standard ones on initial PB in men with lower PSA levels or larger prostates, and on repeat PB. Since

  14. Corrections to the General (2,4) and (4,4) FDTD Schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meierbachtol, Collin S.; Smith, William S.; Shao, Xuan-Min

    The sampling weights associated with two general higher-order FDTD schemes were derived by Smith, et al. and published in an IEEE Transactions on Antennas and Propagation article in 2012. Inconsistencies between governing equations and their resulting solutions were discovered within the article. In an effort to track down the root cause of these inconsistencies, the full three-dimensional, higher-order FDTD dispersion relation was re-derived using Mathematica. During this process, two errors were identified in the article. Both errors are highlighted in this document. The corrected sampling weights are also provided. Finally, the original stability limits provided for both schemes are corrected and presented in a more precise form. It is recommended that any future implementations of the two general higher-order schemes provided in the Smith, et al. 2012 article should instead use the sampling weights and stability conditions listed in this document.

  15. Implementing two optimal economical quantum cloning with superconducting quantum interference devices in a cavity

    NASA Astrophysics Data System (ADS)

    Ye, Liu; Hu, GuiYu; Li, AiXia

    2011-01-01

    We propose a unified scheme to implement the optimal 1 → 3 economical phase-covariant quantum cloning and optimal 1 → 3 economical real state cloning with superconducting quantum interference devices (SQUIDs) in a cavity. During this process, no transfer of quantum information between the SQUIDs and cavity is required. The cavity field is only virtually excited. The scheme is insensitive to cavity decay. Therefore, the scheme can be experimentally realized in the range of current cavity QED techniques.

  16. Army ants algorithm for rare event sampling of delocalized nonadiabatic transitions by trajectory surface hopping and the estimation of sampling errors by the bootstrap method.

    PubMed

    Nangia, Shikha; Jasper, Ahren W; Miller, Thomas F; Truhlar, Donald G

    2004-02-22

    The most widely used algorithm for Monte Carlo sampling of electronic transitions in trajectory surface hopping (TSH) calculations is the so-called anteater algorithm, which is inefficient for sampling low-probability nonadiabatic events. We present a new sampling scheme (called the army ants algorithm) for carrying out TSH calculations that is applicable to systems with any strength of coupling. The army ants algorithm is a form of rare event sampling whose efficiency is controlled by an input parameter. By choosing a suitable value of the input parameter the army ants algorithm can be reduced to the anteater algorithm (which is efficient for strongly coupled cases), and by optimizing the parameter the army ants algorithm may be efficiently applied to systems with low-probability events. To demonstrate the efficiency of the army ants algorithm, we performed atom-diatom scattering calculations on a model system involving weakly coupled electronic states. Fully converged quantum mechanical calculations were performed, and the probabilities for nonadiabatic reaction and nonreactive deexcitation (quenching) were found to be on the order of 10^-8. For such low-probability events the anteater sampling scheme requires a large number of trajectories (approximately 10^10) to obtain good statistics and converged semiclassical results. In contrast, by using the new army ants algorithm, converged results were obtained by running 10^5 trajectories. Furthermore, the results were found to be in excellent agreement with the quantum mechanical results. Sampling errors were estimated using the bootstrap method, which is validated for use with the army ants algorithm. (c) 2004 American Institute of Physics.
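
    A simplified analogue of the contrast between the anteater and army ants approaches can be illustrated with importance-sampled rare-event estimation plus bootstrap error bars; the hop probabilities and trajectory counts below are toy values, and the code does not implement trajectory surface hopping itself:

      import numpy as np

      rng = np.random.default_rng(2)
      p, n_steps, n_traj = 0.01, 3, 100_000      # per-step hop probability (toy), trajectories
      true_prob = p ** n_steps

      # "anteater"-style naive sampling: pick one branch at random at every step
      hits = (rng.random((n_traj, n_steps)) < p).all(axis=1)
      naive_est = hits.mean()

      # "army ants"-flavoured weighted sampling: take the rare branch with an inflated
      # probability q and carry the likelihood-ratio weight, so rare paths are well sampled
      q = 0.5
      take_rare = rng.random((n_traj, n_steps)) < q
      weights = np.where(take_rare, p / q, (1 - p) / (1 - q)).prod(axis=1)
      weighted_samples = weights * take_rare.all(axis=1)
      weighted_est = weighted_samples.mean()

      def bootstrap_se(samples, n_boot=200):
          # bootstrap standard error of the mean, as used in the paper to assess sampling error
          means = [rng.choice(samples, samples.size, replace=True).mean() for _ in range(n_boot)]
          return float(np.std(means))

      print(f"true probability  : {true_prob:.2e}")
      print(f"naive estimate    : {naive_est:.2e} +/- {bootstrap_se(hits.astype(float)):.1e}")
      print(f"weighted estimate : {weighted_est:.2e} +/- {bootstrap_se(weighted_samples):.1e}")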

  17. Implementation of 5-layer thermal diffusion scheme in weather research and forecasting model with Intel Many Integrated Cores

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2014-10-01

    For weather forecasting and research, the Weather Research and Forecasting (WRF) model has been developed, consisting of several components such as dynamic solvers and physical simulation modules. WRF includes several Land-Surface Models (LSMs). The LSMs use atmospheric information, the radiative and precipitation forcing from the surface layer scheme, the radiation scheme, and the microphysics/convective scheme, together with the land's state variables and land-surface properties, to provide heat and moisture fluxes over land and sea-ice points. The WRF 5-layer thermal diffusion simulation is an LSM based on the MM5 5-layer soil temperature model with an energy budget that includes radiation, sensible, and latent heat flux. The WRF LSMs are very suitable for massively parallel computation as there are no interactions among horizontal grid points. The features of the Intel Many Integrated Core (MIC) architecture, which are essential for efficient parallelization and vectorization, allow us to optimize this WRF 5-layer thermal diffusion scheme. In this work, we present the results of the computing performance on this scheme with the Intel MIC architecture. Our results show that the MIC-based optimization improved the performance of the first version of multi-threaded code on Xeon Phi 5110P by a factor of 2.1x. Accordingly, the same CPU-based optimizations improved the performance on Intel Xeon E5-2603 by a factor of 1.6x as compared to the first version of multi-threaded code.

  18. A fast CT reconstruction scheme for a general multi-core PC.

    PubMed

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphic card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.

  19. A Fast CT Reconstruction Scheme for a General Multi-Core PC

    PubMed Central

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphic card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors. PMID:18256731

  20. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.

  1. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl; Tijssen, Rob H.N.; Senneville, Baudouin D. de

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  2. Quantum Communication Using Coherent Rejection Sampling

    NASA Astrophysics Data System (ADS)

    Anshu, Anurag; Devabathini, Vamsi Krishna; Jain, Rahul

    2017-09-01

    Compression of a message up to the information it carries is key to many tasks involved in classical and quantum information theory. Schumacher [B. Schumacher, Phys. Rev. A 51, 2738 (1995), 10.1103/PhysRevA.51.2738] provided one of the first quantum compression schemes and several more general schemes have been developed ever since [M. Horodecki, J. Oppenheim, and A. Winter, Commun. Math. Phys. 269, 107 (2007), 10.1007/s00220-006-0118-x; I. Devetak and J. Yard, Phys. Rev. Lett. 100, 230501 (2008), 10.1103/PhysRevLett.100.230501; A. Abeyesinghe, I. Devetak, P. Hayden, and A. Winter, Proc. R. Soc. A 465, 2537 (2009), 10.1098/rspa.2009.0202]. However, the one-shot characterization of these quantum tasks is still under development, and often lacks a direct connection with analogous classical tasks. Here we show a new technique for the compression of quantum messages with the aid of entanglement. We devise a new tool that we call the convex split lemma, which is a coherent quantum analogue of the widely used rejection sampling procedure in classical communication protocols. As a consequence, we exhibit new explicit protocols with tight communication cost for quantum state merging, quantum state splitting, and quantum state redistribution (up to a certain optimization in the latter case). We also present a port-based teleportation scheme which uses fewer ports in the presence of information about the input.

  3. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    NASA Astrophysics Data System (ADS)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  4. A sampling scheme to assess persistence and transport characteristics of xenobiotics within an urban river section

    NASA Astrophysics Data System (ADS)

    Schwientek, Marc; Guillet, Gaelle; Kuch, Bertram; Rügner, Hermann; Grathwohl, Peter

    2014-05-01

    Xenobiotic contaminants such as pharmaceuticals or personal care products typically are continuously introduced into the receiving water bodies via wastewater treatment plant (WWTP) outfalls and, episodically, via combined sewer overflows in the case of precipitation events. Little is known about how these chemicals behave in the environment and how they affect ecosystems and human health. Examples of traditional persistent organic pollutants reveal that they may still be present in the environment even decades after they have been released. In this study a sampling strategy was developed which gives valuable insights into the environmental behaviour of xenobiotic chemicals. The method is based on the Lagrangian sampling scheme by which a parcel of water is sampled repeatedly as it moves downstream while chemical, physical, and hydrologic processes altering the characteristics of the water mass can be investigated. The Steinlach is a tributary of the River Neckar in Southwest Germany with a catchment area of 140 km². It receives the effluents of a WWTP with 99,000 inhabitant equivalents 4 km upstream of its mouth. The varying flow rate of effluents induces temporal patterns of electrical conductivity in the river water which make it possible to track parcels of water along the subsequent urban river section. These parcels of water were sampled a) close to the outlet of the WWTP and b) 4 km downstream at the confluence with the Neckar. Sampling was repeated at a 15 min interval over a complete diurnal cycle and 2 h composite samples were prepared. A model-based analysis demonstrated, on the one hand, that substances behaved reactively to a varying extent along the studied river section. On the other hand, it revealed that the observed degradation rates are likely dependent on the time of day. Some chemicals were degraded mainly during daytime (e.g. the disinfectant Triclosan or the phosphorous flame retardant TDCP), others as well during nighttime (e.g. the musk fragrance
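
    The core mass-balance step of such a Lagrangian scheme, estimating an apparent first-order loss rate from paired upstream/downstream composite samples and the parcel travel time, can be sketched as follows (the concentrations and travel time are hypothetical, and dilution along the reach is neglected):

      import numpy as np

      # hypothetical 2-h composite concentrations (ng/L) of one compound, sampled in the
      # same parcel of water at the WWTP outfall and 4 km downstream
      c_upstream   = np.array([420.0, 450.0, 400.0, 380.0])
      c_downstream = np.array([300.0, 410.0, 250.0, 340.0])
      travel_time_h = 3.5                                   # parcel travel time over the reach

      # apparent first-order loss rate within the parcel (negative would mean net gain)
      k = np.log(c_upstream / c_downstream) / travel_time_h
      half_life_h = np.log(2) / k

      for i, (ki, hl) in enumerate(zip(k, half_life_h)):
          print(f"composite {i}: k = {ki:6.3f} 1/h   half-life = {hl:5.1f} h")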

  5. Statistical Analysis for Collision-free Boson Sampling.

    PubMed

    Huang, He-Liang; Zhong, Han-Sen; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su

    2017-11-10

    Boson sampling is strongly believed to be intractable for classical computers but solvable with photons in linear optics, which has attracted widespread attention as a rapid way to demonstrate quantum supremacy. However, because its solution is mathematically unverifiable, certifying the experimental results is a major difficulty in boson sampling experiments. Here, we develop a statistical analysis scheme to experimentally certify collision-free boson sampling. Numerical simulations are performed to show the feasibility and practicability of our scheme, and the effects of realistic experimental conditions are also considered, demonstrating that our proposed scheme is experimentally friendly. Moreover, our broad approach is expected to be generally applicable to the investigation of multi-particle coherent dynamics beyond boson sampling.

  6. Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system

    NASA Astrophysics Data System (ADS)

    He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang

    2016-08-01

    In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.

  7. Receding horizon online optimization for torque control of gasoline engines.

    PubMed

    Kang, Mingxin; Shen, Tielong

    2016-11-01

    This paper proposes a model-based nonlinear receding horizon optimal control scheme for the engine torque tracking problem. The controller design directly employs a nonlinear model derived from the mean-value modeling principle of engine systems, without any linearizing reformulation, and the online optimization is achieved by applying the Continuation/GMRES (generalized minimum residual) approach. Several receding horizon control schemes are designed to investigate the effects of the integral action and integral gain selection. Simulation analyses and experimental validations are implemented to demonstrate the real-time optimization performance and control effects of the proposed torque tracking controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
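
    The receding-horizon principle, optimize a short control horizon, apply the first move, shift, and repeat, can be sketched for a toy first-order torque model as below; this uses a generic bounded optimizer rather than the Continuation/GMRES solver of the paper, and all gains and limits are hypothetical:

      import numpy as np
      from scipy.optimize import minimize

      # toy first-order torque dynamics: tau' = (-tau + K*u) / T  (hypothetical gains)
      K, T, dt, N = 120.0, 0.3, 0.02, 15          # gain, time constant (s), step (s), horizon

      def rollout(tau0, u_seq):
          tau, traj = tau0, []
          for u in u_seq:
              tau = tau + dt * (-tau + K * u) / T
              traj.append(tau)
          return np.array(traj)

      def horizon_cost(u_seq, tau0, ref):
          traj = rollout(tau0, u_seq)
          # tracking error plus a small penalty on control moves
          return np.sum((traj - ref) ** 2) + 0.05 * np.sum(np.diff(u_seq, prepend=u_seq[0]) ** 2)

      tau, u_prev, history = 0.0, np.zeros(N), []
      for k in range(150):                        # closed-loop simulation
          ref = 80.0 if k * dt < 1.5 else 50.0    # torque reference step (N m)
          res = minimize(horizon_cost, u_prev, args=(tau, np.full(N, ref)),
                         bounds=[(0.0, 1.0)] * N, method="L-BFGS-B")
          u = res.x[0]                            # apply only the first optimized move
          tau = tau + dt * (-tau + K * u) / T
          u_prev = np.roll(res.x, -1)             # warm-start the next optimization
          history.append((k * dt, ref, tau))

      print("final tracking error (N m):", abs(history[-1][1] - history[-1][2]))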

  8. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

  9. Neural-Dynamic-Method-Based Dual-Arm CMG Scheme With Time-Varying Constraints Applied to Humanoid Robots.

    PubMed

    Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing

    2015-12-01

    We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.

  10. Distribution path robust optimization of electric vehicle with multiple distribution centers

    PubMed Central

    Hao, Wei; He, Ruichun; Jia, Xiaoyan; Pan, Fuquan; Fan, Jing; Xiong, Ruiqi

    2018-01-01

    To identify electric vehicle (EV) distribution paths with high robustness, insensitivity to uncertainty factors, and detailed road-by-road schemes, optimization of the distribution path problem of EVs with multiple distribution centers and considering the charging facilities is necessary. With the minimum transport time as the goal, a robust optimization model of EV distribution paths with adjustable robustness is established based on Bertsimas' theory of robust discrete optimization. An enhanced three-segment genetic algorithm is also developed to solve the model, such that the optimal distribution scheme initially contains all road-by-road path data using the three-segment mixed coding and decoding method. During genetic manipulation, different crossover and mutation operations are carried out on different chromosomes, while, during population evolution, infeasible solutions are naturally avoided. A part of the road network of Xifeng District in Qingyang City is taken as an example to test the model and the algorithm in this study, and the concrete transportation paths are utilized in the final distribution scheme. Therefore, more robust EV distribution paths with multiple distribution centers can be obtained using the robust optimization model. PMID:29518169

  11. Optimization of municipal solid waste collection and transportation routes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Swapan, E-mail: swapan2009sajal@gmail.com; Bhattacharyya, Bidyut Kr., E-mail: bidyut53@yahoo.co.in

    2015-09-15

    Highlights: • Profitable integrated solid waste management system. • Optimal municipal waste collection scheme between the sources and waste collection centres. • Optimal path calculation between waste collection centres and transfer stations. • Optimal waste routing between the transfer stations and processing plants. - Abstract: Optimization of municipal solid waste (MSW) collection and transportation through source separation has become one of the major concerns in MSW management system design, because existing MSW management systems suffer from high collection and transportation costs. Generally, the different waste sources scatter throughout a city in a heterogeneous way, which increases the waste collection and transportation cost in the waste management system. Therefore, a shortest waste collection and transportation strategy can effectively reduce waste collection and transportation cost. In this paper, we propose an optimal MSW collection and transportation scheme that focuses on the problem of minimizing the length of each waste collection and transportation route. We first formulate the MSW collection and transportation problem as a mixed integer program. Moreover, we propose a heuristic solution for the waste collection and transportation problem that can provide an optimal way for waste collection and transportation. Extensive simulations and real testbed results show that the proposed solution can significantly improve MSW performance. Results show that the proposed scheme is able to reduce the total waste collection path length by more than 30%.
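
    The paper formulates the problem as a mixed integer program and solves it heuristically; as an illustration of the route-length-minimization idea only (not the authors' formulation), the sketch below builds a single collection route with a nearest-neighbour construction and improves it with 2-opt on synthetic coordinates:

      import numpy as np

      rng = np.random.default_rng(3)
      points = rng.random((25, 2)) * 10.0          # collection points (km); point 0 is the depot

      def route_length(route, pts):
          return float(np.sum(np.linalg.norm(np.diff(pts[route], axis=0), axis=1)))

      # nearest-neighbour construction starting from the depot
      unvisited, route = list(range(1, len(points))), [0]
      while unvisited:
          last = points[route[-1]]
          nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - last))
          route.append(nxt)
          unvisited.remove(nxt)
      route.append(0)                               # return to the depot

      # 2-opt improvement: reverse segments while doing so shortens the route
      improved = True
      while improved:
          improved = False
          for i in range(1, len(route) - 2):
              for j in range(i + 1, len(route) - 1):
                  candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                  if route_length(candidate, points) < route_length(route, points) - 1e-9:
                      route, improved = candidate, True

      print("route length (km):", round(route_length(route, points), 2))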

  12. Optimal weighting in fNL constraints from large scale structure in an idealised case

    NASA Astrophysics Data System (ADS)

    Slosar, Anže

    2009-03-01

    We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism expanded around fiducial model with fNL = 0 and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable and that simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.

  13. Hexagonal Pixels and Indexing Scheme for Binary Images

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G.

    2004-01-01

    A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid and an associated tree-structured pixel-indexing scheme keyed to the level of resolution have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches of line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis. Optionally, one can then rotate the rectangular image by 90°, then again sample onto the hexagonal grid and check for line segments at angles of 0°, 60°
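
    The resampling step alone (not the tree-structured indexing) can be sketched by evaluating the nearest rectangular pixel at each hexagonal cell centre, with the hexagonal cell width set to one third of the rectangular cell width as suggested above; the test image and grid layout are illustrative assumptions:

      import numpy as np

      # toy binary image on a rectangular grid (1 = bright)
      img = np.zeros((60, 60), dtype=np.uint8)
      img[10:50, 28:32] = 1                         # a vertical bright line segment

      def hex_centers(shape, w):
          # centres of an offset hexagonal sampling grid of cell width w covering the image
          h, v = w, w * np.sqrt(3) / 2.0            # horizontal pitch, vertical pitch
          rows, cols = int(shape[0] / v) + 1, int(shape[1] / h) + 1
          centers = []
          for r in range(rows):
              x_offset = 0.5 * h if r % 2 else 0.0  # alternate rows are shifted half a cell
              for c in range(cols):
                  centers.append((r * v, c * h + x_offset))
          return np.array(centers)

      # hexagonal cell width of one third of the rectangular cell width
      centers = hex_centers(img.shape, w=1.0 / 3.0)
      rows = np.clip(np.round(centers[:, 0]).astype(int), 0, img.shape[0] - 1)
      cols = np.clip(np.round(centers[:, 1]).astype(int), 0, img.shape[1] - 1)
      hex_values = img[rows, cols]                  # nearest rectangular pixel per hex centre

      print("hexagonal cells:", len(hex_values), " bright cells:", int(hex_values.sum()))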

  14. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

    This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ∼ c_0 2^(-c_1 R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
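
    A minimal quadtree decomposition of the polygon-on-background image model illustrates the pruning of uniform blocks; it omits the rate-distortion-optimized bit allocation and the joint coding of neighbours that the full scheme uses, and the image is a synthetic example:

      import numpy as np

      def quadtree_leaves(img, x=0, y=0, size=None):
          # recursively split the block until it is uniform; return the leaf blocks
          if size is None:
              size = img.shape[0]
          block = img[y:y + size, x:x + size]
          if size == 1 or block.min() == block.max():   # prune: uniform block becomes one leaf
              return [(x, y, size, int(block[0, 0]))]
          half = size // 2
          leaves = []
          for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
              leaves += quadtree_leaves(img, x + dx, y + dy, half)
          return leaves

      # white polygon-shaped object on a black background (the image model in the abstract)
      n = 64
      img = np.zeros((n, n), dtype=np.uint8)
      yy, xx = np.mgrid[0:n, 0:n]
      img[(xx + yy > 40) & (xx < 50) & (yy < 55)] = 1

      leaves = quadtree_leaves(img)
      print(f"{n * n} pixels represented by {len(leaves)} quadtree leaves")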

  15. Pricing schemes for new drugs: a welfare analysis.

    PubMed

    Levaggi, Rosella

    2014-02-01

    Drug price regulation is acquiring increasing significance in the investment choices of the pharmaceutical sector. The overall objective is to determine an optimal trade-off between the incentives for innovation, consumer protection, and value for money. However, price regulation is itself a source of distortion. In this study, we examine the welfare properties of listing through a bargaining process and of value-based pricing schemes. The latter are superior instruments to uncertain listing processes for maximising total welfare, but the distribution of the benefits between consumers and the industry depends on the rate of rebate chosen by the regulator. However, through an appropriate choice, it is always possible to define a value-based pricing scheme with risk sharing, which both consumers and the industry prefer to an uncertain bargaining process. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. An atomistic simulation scheme for modeling crystal formation from solution.

    PubMed

    Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk

    2006-01-14

    We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method this time-length scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites and, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is based on assuming full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as prototypes for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.

  17. Optimizing cord blood sample cryopreservation.

    PubMed

    Harris, David T

    2012-03-01

    Cord blood (CB) banking is becoming more and more commonplace throughout the medical community, both in the USA and elsewhere. It is now generally recognized that storage of CB samples in multiple aliquots is the preferred approach to banking because it allows the greatest number of uses of the sample. However, it is unclear which are the best methodologies for cryopreservation and storage of the sample aliquots. In the current study we analyzed variables that could affect these processes. CB were processed into mononuclear cells (MNC) and frozen in commercially available human serum albumin (HSA) or autologous CB plasma using cryovials of various sizes and cryobags. The bacteriophage phiX174 was used as a model virus to test for cross-contamination. We observed that cryopreservation of CB in HSA, undiluted autologous human plasma and 50% diluted plasma was equivalent in terms of cell recovery and cell viability. We also found that cryopreservation of CB samples in either cryovials or cryobags displayed equivalent thermal characteristics. Finally, we demonstrated that overwrapping the CB storage container in an impermeable plastic sheathing was sufficient to prevent cross-sample viral contamination during prolonged storage in the liquid phase of liquid nitrogen dewar storage. CB may be cryopreserved in either vials or bags without concern for temperature stability. Sample overwrapping is sufficient to prevent microbiologic contamination of the samples while in liquid-phase liquid nitrogen storage.

  18. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most of the physical properties of the earth follow a power law (fractal distribution). Thus, the selection of an initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated to invert band-limited seismic data with well control. We used a fractal-based probability density function which uses the mean, variance and Hurst coefficient of the model space to draw the initial model. Further, this initial model is used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
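
    A rough Python sketch of the idea of drawing an initial model from power-law (fractal) statistics controlled by a Hurst coefficient is given below, using spectral synthesis. The exponent relation, function name and parameter values are our own assumptions for illustration, not the authors' exact sampler.

```python
import numpy as np

def fractal_model_sample(n, hurst, mean, std, rng=None):
    """Draw one initial-model realization whose spectrum follows a power law
    controlled by the Hurst coefficient (spectral-synthesis sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    beta = 2 * hurst + 1                  # assumed 1-D spectral exponent
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                   # avoid division by zero at DC
    amplitude = freqs ** (-beta / 2.0)
    phase = rng.uniform(0, 2 * np.pi, len(freqs))
    spectrum = amplitude * np.exp(1j * phase)
    series = np.fft.irfft(spectrum, n)
    series = (series - series.mean()) / series.std()   # standardize
    return mean + std * series            # impose target mean and spread

# Hypothetical acoustic-impedance-like model: 256 samples, Hurst = 0.7
model0 = fractal_model_sample(n=256, hurst=0.7, mean=2500.0, std=150.0)
```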

  19. Universal scheme for finite-probability perfect transfer of arbitrary multispin states through spin chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Zhong-Xiao, E-mail: zxman@mail.qfnu.edu.cn; An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn; Xia, Yun-Jie, E-mail: yjxia@mail.qfnu.edu.cn

    In combination with the theories of open system and quantum recovering measurement, we propose a quantum state transfer scheme using spin chains by performing two sequential operations: a projective measurement on the spins of the ‘environment’ followed by suitably designed quantum recovering measurements on the spins of interest. The scheme allows perfect transfer of arbitrary multispin states through multiple parallel spin chains with finite probability. Our scheme is universal in the sense that it is state-independent and applicable to any model possessing spin–spin interactions. We also present possible methods to implement the required measurements taking into account the current experimental technologies. As applications, we consider two typical models for which the probabilities of perfect state transfer are found to be reasonably high at optimally chosen moments during the time evolution. - Highlights: • Scheme that can achieve perfect quantum state transfer is devised. • The scheme is state-independent and applicable to any spin-interaction models. • The scheme allows perfect transfer of arbitrary multispin states. • Applications to two typical models are considered in detail.

  20. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System.

    PubMed

    Duan, Litian; Wang, Zizhong John; Duan, Fu

    2016-11-16

    In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation. After that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly optimizes these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme and a larger effective interrogation range.

  1. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System

    PubMed Central

    Duan, Litian; Wang, Zizhong John; Duan, Fu

    2016-01-01

    In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation. After that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly optimizes these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme and a larger effective interrogation range. PMID:27854342

  2. Near-optimal protocols in complex nonequilibrium transformations

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...

    2016-08-29

    The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.

  3. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    PubMed

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than those on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  4. Optimization of LC-Orbitrap-HRMS acquisition and MZmine 2 data processing for nontarget screening of environmental samples using design of experiments.

    PubMed

    Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias

    2016-11-01

    Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10^5 in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable for environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.

  5. Salivary hormone and immune responses to three resistance exercise schemes in elite female athletes.

    PubMed

    Nunes, João A; Crewther, Blair T; Ugrinowitsch, Carlos; Tricoli, Valmor; Viveiros, Luís; de Rose, Dante; Aoki, Marcelo S

    2011-08-01

    This study examined the salivary hormone and immune responses of elite female athletes to 3 different resistance exercise schemes. Fourteen female basketball players each performed an endurance scheme (ES-4 sets of 12 reps, 60% of 1 repetition maximum (1RM) load, 1-minute rest periods), a strength-hypertrophy scheme (SHS-1 set of 5RM, 1 set of 4RM, 1 set of 3RM, 1 set of 2RM, and 1 set of 1RM with 3-minute rest periods, followed by 3 sets of 10RM with 2-minute rest periods) and a power scheme (PS-3 sets of 10 reps, 50% 1RM load, 3-minute rest periods) using the same exercises (bench press, squat, and biceps curl). Saliva samples were collected at 07:30 hours, pre-exercise (Pre) at 09:30 hours, postexercise (Post), and at 17:30 hours. Matching samples were also taken on a nonexercising control day. The samples were analyzed for testosterone, cortisol (C), and immunoglobulin A concentrations. The total volume of load lifted differed among the 3 schemes (SHS > ES > PS, p < 0.05). Postexercise C concentrations increased after all schemes, compared to control values (p < 0.05). In the SHS, the postexercise C response was also greater than pre-exercise data (p < 0.05). The current findings confirm that high-volume resistance exercise schemes can stimulate greater C secretion because of higher metabolic demand. In terms of practical applications, acute changes in C may be used to evaluate the metabolic demands of different resistance exercise schemes, or as a tool for monitoring training strain.

  6. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel, covering the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained and appropriate comparisons against a reference system designed for no channel error were made.
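
    For reference on the transform stage only, the sketch below applies a 2-D block cosine transform with a plain uniform quantizer; it omits the channel model, the channel-optimized quantizer design, and the steepest-descent bit allocation that are the actual subject of the paper. Block size and quantization step are arbitrary illustrative values.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_quantize(image, block=8, step=16.0):
    """Minimal 2-D block cosine transform codec: per-block DCT, uniform
    quantization of the coefficients, inverse DCT (illustration only)."""
    h, w = image.shape
    recon = np.zeros_like(image, dtype=float)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = image[i:i + block, j:j + block].astype(float)
            coeffs = dctn(blk, norm='ortho')
            q = np.round(coeffs / step) * step      # uniform quantizer
            recon[i:i + block, j:j + block] = idctn(q, norm='ortho')
    return recon

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
rec = block_dct_quantize(img)
mse = np.mean((img - rec) ** 2)                     # distortion estimate
```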

  7. Development of advanced control schemes for telerobot manipulators

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1991-01-01

    To study space applications of telerobotics, Goddard Space Flight Center (NASA) has recently built a testbed composed mainly of a pair of redundant slave arms having seven degrees of freedom and a master hand controller system. The mathematical developments required for the computerized simulation study and motion control of the slave arms are presented. The slave arm forward kinematic transformation is presented which is derived using the D-H notation and is then reduced to its most simplified form suitable for real-time control applications. The vector cross product method is then applied to obtain the slave arm Jacobian matrix. Using the developed forward kinematic transformation and quaternions representation of the slave arm end-effector orientation, computer simulation is conducted to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. In addition, the equivalence between Cartesian velocities and quaternion is also verified using computer simulation. The motion control of the slave arm is examined. Three control schemes, the joint-space adaptive control scheme, the Cartesian adaptive control scheme, and the hybrid position/force control scheme are proposed for controlling the motion of the slave arm end-effector. Development of the Cartesian adaptive control scheme is presented and some preliminary results of the remaining control schemes are presented and discussed.
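
    The vector cross product method mentioned above can be illustrated with a short sketch: for revolute joints, column i of the geometric Jacobian stacks z_i × (p_e − p_i) over z_i, with all quantities expressed in the base frame. The function below is a generic illustration under that assumption, not the testbed's seven-degree-of-freedom implementation.

```python
import numpy as np

def jacobian_cross_product(joint_axes, joint_origins, end_effector_pos):
    """Build the 6 x N geometric Jacobian of a serial arm with revolute
    joints by the vector cross product method."""
    n = len(joint_axes)
    J = np.zeros((6, n))
    for i in range(n):
        z_i = np.asarray(joint_axes[i], dtype=float)     # joint i axis (base frame)
        p_i = np.asarray(joint_origins[i], dtype=float)  # joint i origin (base frame)
        J[:3, i] = np.cross(z_i, end_effector_pos - p_i) # linear-velocity part
        J[3:, i] = z_i                                   # angular-velocity part
    return J

# Planar 2-link example: both joints rotate about the base z-axis
axes = [[0, 0, 1], [0, 0, 1]]
origins = [[0, 0, 0], [1, 0, 0]]
J = jacobian_cross_product(axes, origins, np.array([2.0, 0.0, 0.0]))
```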

  8. Optimal Sensor Allocation for Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann

    2004-01-01

    Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc.) to measure the system parameters. Efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as the weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes the fault diagnosability, subject to specified weight, volume, power, and cost constraints, is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
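
    The paper formulates sensor allocation as a constrained, knapsack-like optimization solved with Lagrangian relaxation; as a much simpler stand-in, the sketch below greedily picks sensors by diagnosability-gain-per-cost under a single cost budget. The data layout and field names are hypothetical.

```python
def greedy_sensor_selection(sensors, budget):
    """Pick sensors by gain-per-cost ratio until the budget is exhausted
    (simple knapsack-style heuristic; the paper itself uses Lagrangian
    relaxation over multiple weight/volume/power/cost constraints).

    sensors : list of dicts with 'name', 'gain' (diagnosability contribution)
              and 'cost' fields -- illustrative structure only.
    """
    chosen, spent = [], 0.0
    for s in sorted(sensors, key=lambda s: s['gain'] / s['cost'], reverse=True):
        if spent + s['cost'] <= budget:
            chosen.append(s['name'])
            spent += s['cost']
    return chosen, spent

candidates = [
    {'name': 'temp-1',  'gain': 0.40, 'cost': 2.0},
    {'name': 'press-1', 'gain': 0.35, 'cost': 1.0},
    {'name': 'vib-1',   'gain': 0.50, 'cost': 4.0},
]
selected, total_cost = greedy_sensor_selection(candidates, budget=3.0)
```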

  9. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
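
    A toy illustration of the prediction-correction idea for a scalar time-varying quadratic is sketched below: the problem is sampled every h seconds, the minimizer drift is predicted by a finite difference, and one gradient step corrects the estimate. This is only in the spirit of the GTT algorithm; the drift estimate, step size and test function are our simplifications, not the paper's derivation.

```python
import numpy as np

def track_time_varying_quadratic(b_of_t, h=0.1, t_end=10.0, alpha=0.5):
    """Track the minimizer of f(x; t) = 0.5 * (x - b(t))**2, sampled every
    h seconds, with a finite-difference prediction step followed by one
    gradient correction step (rough prediction-correction sketch)."""
    ts = np.arange(0.0, t_end, h)
    x = 0.0
    b_prev = b_of_t(0.0)
    errors = []
    for t in ts:
        b_now = b_of_t(t)
        x = x + (b_now - b_prev)       # prediction: shift by estimated drift
        x = x - alpha * (x - b_now)    # correction: one gradient step on f(., t)
        errors.append(abs(x - b_now))
        b_prev = b_now
    return np.array(errors)

# Track a sinusoidally drifting optimum sampled at h = 0.05
err = track_time_varying_quadratic(lambda t: np.sin(t), h=0.05)
```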

  10. Novel MDM-PON scheme utilizing self-homodyne detection for high-speed/capacity access networks.

    PubMed

    Chen, Yuanxiang; Li, Juhao; Zhu, Paikun; Wu, Zhongying; Zhou, Peng; Tian, Yu; Ren, Fang; Yu, Jinyi; Ge, Dawei; Chen, Jingbiao; He, Yongqi; Chen, Zhangyuan

    2015-12-14

    In this paper, we propose a cost-effective, energy-saving mode-division-multiplexing passive optical network (MDM-PON) scheme utilizing self-homodyne detection for high-speed/capacity access networks based on low modal-crosstalk few-mode fiber (FMF) and all-fiber mode multiplexer/demultiplexer (MUX/DEMUX). In the proposed scheme, one of the spatial modes is used to transmit a portion of the signal carrier (namely a pilot-tone) as the local oscillator (LO), while the others are used for signal-bearing channels. At the receiver, the pilot-tone and the signal can be separated without strong crosstalk and sent to the receiver for coherent detection. The spectral efficiency (SE) is significantly enhanced when multiple spatial channels are used. Meanwhile, the self-homodyne detection scheme can effectively suppress laser phase noise, which relaxes the requirement on the laser linewidth at the optical line terminal or optical network units (OLT/ONUs). The digital signal processing (DSP) at the receiver is also simplified since it removes the need for frequency offset compensation and complex phase correction, which reduces the computational complexity and energy consumption. Polarization division multiplexing (PDM) that offers doubled SE is also supported by the scheme. The proposed scheme is scalable to multi-wavelength application when wavelength MUX/DEMUX is utilized. Utilizing the proposed scheme, we demonstrate a proof-of-concept 4 × 40-Gb/s orthogonal frequency division multiplexing (OFDM) transmission over 55-km FMF using low modal-crosstalk two-mode FMF and MUX/DEMUX with error-free operation. Compared with the back-to-back case, less than 1-dB Q-factor penalty is observed for the four channels after 55-km FMF transmission. Signal power and pilot-tone power are also optimized to achieve the optimal transmission performance.

  11. Initial results on computational performance of Intel Many Integrated Core (MIC) architecture: implementation of the Weather and Research Forecasting (WRF) Purdue-Lin microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. In this paper, we will discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for Xeon Phi. In particular, getting good performance required utilizing multiple cores, wide vector operations, and efficient use of memory. The results show that the optimizations improved performance of the original code on Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.

  12. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    USGS Publications Warehouse

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  13. A frequency-based window width optimized two-dimensional S-Transform profilometry

    NASA Astrophysics Data System (ADS)

    Zhong, Min; Chen, Feng; Xiao, Chao

    2017-11-01

    A new scheme, a frequency-based window-width-optimized two-dimensional S-transform profilometry, is proposed, in which parameters pu and pv are introduced to control the width of a two-dimensional Gaussian window. Unlike the standard two-dimensional S-transform, which uses a Gaussian window with width proportional to the reciprocal local frequency of the tested signal, the window width of the optimized two-dimensional S-transform varies with the pu-th (pv-th) power of the reciprocal local frequency fx (fy) in the x (y) direction. The paper gives a detailed theoretical analysis of the optimized two-dimensional S-transform in fringe analysis, as well as the characteristics of the modified Gaussian window. Simulations are applied to evaluate the proposed scheme; the results show that the new scheme has better noise-reduction ability and can extract the phase distribution more precisely than the standard two-dimensional S-transform, even when the surface of the measured object varies sharply. Finally, the proposed scheme is demonstrated on three-dimensional surface reconstruction for a complex plastic cat mask to show its effectiveness.
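
    The modified window can be written down directly: a 2-D Gaussian whose widths scale as the pu-th and pv-th powers of the reciprocal local frequencies, rather than strictly as 1/fx and 1/fy. The sketch below evaluates such a window; the normalization and the default exponents are illustrative choices, not the paper's calibrated values.

```python
import numpy as np

def optimized_gauss_window(x, y, fx, fy, pu=0.8, pv=0.8):
    """2-D Gaussian window whose widths scale as the pu-th (pv-th) power of
    the reciprocal local frequencies fx (fy), instead of being strictly
    proportional to 1/fx and 1/fy as in the standard S-transform."""
    sigma_x = 1.0 / (abs(fx) ** pu)    # window width in the x direction
    sigma_y = 1.0 / (abs(fy) ** pv)    # window width in the y direction
    norm = 1.0 / (2 * np.pi * sigma_x * sigma_y)
    return norm * np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))

# Evaluate the window on a small grid for local frequencies (0.2, 0.1)
xs, ys = np.meshgrid(np.linspace(-10, 10, 64), np.linspace(-10, 10, 64))
w = optimized_gauss_window(xs, ys, fx=0.2, fy=0.1)
```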

  14. Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System

    NASA Astrophysics Data System (ADS)

    Agarwal, Ruchi; Singh, Sanjeev

    2017-12-01

    The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high voltage direct current system. The proposed scheme is a hybrid combination of the golden section search and successive linear search methods. The paper aims at reducing the number of current sensors and optimizing the controller. The voltage and current controller parameters are selected for optimization due to their impact on power quality. The proposed algorithm optimizes an objective function composed of current harmonic distortion, power factor, and DC voltage ripple. The detailed designs and modeling of the complete system are discussed and its simulation is carried out in the MATLAB-Simulink environment. The obtained results are presented to demonstrate the effectiveness of the proposed scheme under different transient conditions such as load perturbation, non-linear load condition, voltage sag condition, and tapped load fault under one phase open condition at both points-of-common coupling.
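
    Golden section search, one half of the proposed hybrid, shrinks a bracketing interval by the inverse golden ratio at each step for a unimodal cost. The sketch below is the generic textbook routine applied to a mock single-gain cost function; the paper's actual objective combines current harmonic distortion, power factor and DC voltage ripple over several controller parameters.

```python
def golden_section_search(f, a, b, tol=1e-5):
    """Locate the minimizer of a unimodal scalar function f on [a, b] by
    golden section search (generic sketch)."""
    gr = (5 ** 0.5 - 1) / 2          # inverse golden ratio, ~0.618
    c = b - gr * (b - a)
    d = a + gr * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c              # keep [a, d]; old c becomes new d
            c = b - gr * (b - a)
        else:
            a, c = c, d              # keep [c, b]; old d becomes new c
            d = a + gr * (b - a)
    return (a + b) / 2

# Example: tune a single gain against a mock ripple-plus-distortion cost
cost = lambda k: (k - 2.3) ** 2 + 0.1
k_opt = golden_section_search(cost, 0.0, 10.0)   # converges near 2.3
```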

  15. A Scheme for Regrouping WISC-R Subtests.

    ERIC Educational Resources Information Center

    Groff, Martin G.; Hubble, Larry M.

    1984-01-01

    Reviews WISC-R factor analytic findings for developing a scheme for regrouping WISC-R subtests, consisting of verbal comprehension and spatial subtests. Subtests comprising these groupings are shown to have more common variance than specific variance and cluster together consistently across the samples of WISC-R scores. (Author/JAC)

  16. The controlled growth method - A tool for structural optimization

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Sobieszczanski-Sobieski, J.

    1981-01-01

    An adaptive design variable linking scheme in an NLP-based optimization algorithm is proposed and evaluated for feasibility of application. The present scheme, based on an intuitive effectiveness measure for each variable, differs from existing methodology in that a single dominant variable controls the growth of all others in a prescribed optimization cycle. The proposed method is implemented for truss assemblies and a wing box structure for stress, displacement and frequency constraints. Substantial reduction in computational time, even more so for structures under multiple load conditions, coupled with a minimal accompanying loss in accuracy, vindicates the algorithm.

  17. Sensorless optimal sinusoidal brushless direct current for hard disk drives

    NASA Astrophysics Data System (ADS)

    Soh, C. S.; Bi, C.

    2009-04-01

    Initiated by the availability of digital signal processors and emergence of new applications, market demands for permanent magnet synchronous motors have been surging. As its back-emf is sinusoidal, the drive current should also be sinusoidal for reducing the torque ripple. However, in applications like hard disk drives, brushless direct current (BLDC) drive is adopted instead of sinusoidal drive for simplification. The adoption, however, comes at the expense of increased harmonics, losses, torque pulsations, and acoustics. In this paper, we propose a sensorless optimal sinusoidal BLDC drive. First and foremost, the derivation for an optimal sinusoidal drive is presented, and a power angle control scheme is proposed to achieve an optimal sinusoidal BLDC. The scheme maintains linear relationship between the motor speed and drive voltage. In an attempt to execute the sensorless drive, an innovative power angle measurement scheme is devised, which takes advantage of the freewheeling diodes and measures the power angle through the detection of diode voltage drops. The objectives as laid out will be presented and discussed in this paper, supported by derivations, simulations, and experimental results. The proposed scheme is straightforward, brings about the benefits of sensorless sinusoidal drive, negates the need for current sensors by utilizing the freewheeling diodes, and does not incur additional cost.

  18. Convergence Analysis of Triangular MAC Schemes for Two Dimensional Stokes Equations

    PubMed Central

    Wang, Ming; Zhong, Lin

    2015-01-01

    In this paper, we consider the use of H(div) elements in the velocity–pressure formulation to discretize Stokes equations in two dimensions. We address the error estimate of the element pair RT0–P0, which is known to be suboptimal, and render the error estimate optimal by the symmetry of the grids and by the superconvergence result of the Lagrange interpolant. By enlarging RT0 such that it becomes a modified BDM-type element, we develop a new discretization BDM1b–P0. We, therefore, generalize the classical MAC scheme on rectangular grids to triangular grids and retain all the desirable properties of the MAC scheme: exact divergence-free, solver-friendly, and local conservation of physical quantities. Further, we prove that the proposed discretization BDM1b–P0 achieves the optimal convergence rate for both velocity and pressure on general quasi-uniform grids, and a one-and-a-half order convergence rate for the vorticity and a recovered pressure. We demonstrate the validity of theories developed here by numerical experiments. PMID:26041948

  19. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.

    PubMed

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène

    2016-06-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L⁻¹ and 10% for 10 mBq L⁻¹. While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L⁻¹, a conservative experimental estimate is rather 5 mBq L⁻¹, corresponding to 0.14 fg g⁻¹. The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    PubMed

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.

  1. Quantum Communication Using Coherent Rejection Sampling.

    PubMed

    Anshu, Anurag; Devabathini, Vamsi Krishna; Jain, Rahul

    2017-09-22

    Compression of a message up to the information it carries is key to many tasks involved in classical and quantum information theory. Schumacher [B. Schumacher, Phys. Rev. A 51, 2738 (1995)] provided one of the first quantum compression schemes, and several more general schemes have been developed ever since [M. Horodecki, J. Oppenheim, and A. Winter, Commun. Math. Phys. 269, 107 (2007); I. Devetak and J. Yard, Phys. Rev. Lett. 100, 230501 (2008); A. Abeyesinghe, I. Devetak, P. Hayden, and A. Winter, Proc. R. Soc. A 465, 2537 (2009)]. However, the one-shot characterization of these quantum tasks is still under development, and often lacks a direct connection with analogous classical tasks. Here we show a new technique for the compression of quantum messages with the aid of entanglement. We devise a new tool that we call the convex split lemma, which is a coherent quantum analogue of the widely used rejection sampling procedure in classical communication protocols. As a consequence, we exhibit new explicit protocols with tight communication cost for quantum state merging, quantum state splitting, and quantum state redistribution (up to a certain optimization in the latter case). We also present a port-based teleportation scheme which uses fewer ports in the presence of information about the input.

  2. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    PubMed

    Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher's information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed employing a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
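
    A compact sketch of the design idea, under stated assumptions: a one-compartment oral-absorption model stands in for the PK system, the Fisher information matrix is built from finite-difference sensitivities with additive Gaussian noise, and the minimax time set is found by brute force over a candidate grid instead of the paper's genetic algorithm. Parameter values, noise level and grid are all hypothetical.

```python
import itertools
import numpy as np

def conc(t, ka, ke, V, dose=100.0):
    """One-compartment oral-absorption PK model (hypothetical stand-in)."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fisher_information(times, theta, sigma=0.1, eps=1e-4):
    """Fisher information for additive Gaussian noise; sensitivities by
    central finite differences with respect to the parameters."""
    theta = np.asarray(theta, dtype=float)
    S = np.zeros((len(times), len(theta)))
    for j in range(len(theta)):
        up, dn = theta.copy(), theta.copy()
        up[j] += eps
        dn[j] -= eps
        S[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * eps)
    return S.T @ S / sigma ** 2

def minimax_design(candidate_times, n_samples, theta):
    """Choose the sampling times minimizing the largest parameter variance
    (diagonal of the inverse FIM); brute-force enumeration here, where the
    paper employs a genetic algorithm."""
    best, best_cost = None, np.inf
    for combo in itertools.combinations(candidate_times, n_samples):
        fim = fisher_information(np.array(combo), theta)
        try:
            cost = np.max(np.diag(np.linalg.inv(fim)))
        except np.linalg.LinAlgError:
            continue
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

theta0 = [1.2, 0.2, 30.0]            # ka (1/h), ke (1/h), V (L): made-up values
grid = np.arange(0.5, 12.5, 0.5)     # candidate sampling times (hours)
times, worst_var = minimax_design(grid, 4, theta0)
```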

  3. Development of the Brief Romantic Relationship Interaction Coding Scheme (BRRICS)

    PubMed Central

    Humbad, Mikhila N.; Donnellan, M. Brent; Klump, Kelly L.; Burt, S. Alexandra

    2012-01-01

    Although observational studies of romantic relationships are common, many existing coding schemes require considerable amounts of time and resources to implement. The current study presents a new coding scheme, the Brief Romantic Relationship Interaction Coding Scheme (BRRICS), designed to assess various aspects of romantic relationship both quickly and efficiently. The BRRICS consists of four individual coding dimensions assessing positive and negative affect in each member of the dyad, as well as four codes assessing specific components of the dyadic interaction (i.e., positive reciprocity, demand-withdraw pattern, negative reciprocity, and overall satisfaction). Concurrent associations with measures of marital adjustment and conflict were evaluated in a sample of 118 married couples participating in the Michigan State University Twin Registry. Couples were asked to discuss common conflicts in their marriage while being videotaped. Undergraduate coders used the BRRICS to rate these interactions. The BRRICS scales were correlated in expected directions with self-reports of marital adjustment, as well as children’s perception of the severity and frequency of marital conflict. Based on these results, the BRRICS may be an efficient tool for researchers with large samples of observational data who are interested in coding global aspects of the relationship but do not have the resources to use labor intensive schemes. PMID:21875192

  4. Wavenumber-extended high-order oscillation control finite volume schemes for multi-dimensional aeroacoustic computations

    NASA Astrophysics Data System (ADS)

    Kim, Sungtae; Lee, Soogab; Kim, Kyu Hong

    2008-04-01

    A new numerical method toward accurate and efficient aeroacoustic computations of multi-dimensional compressible flows has been developed. The core idea of the developed scheme is to unite the advantages of the wavenumber-extended optimized scheme and M-AUSMPW+/MLP schemes by predicting a physical distribution of flow variables more accurately in multi-space dimensions. The wavenumber-extended optimization procedure for the finite volume approach based on the conservative requirement is newly proposed for accuracy enhancement, which is required to capture the acoustic portion of the solution in the smooth region. Furthermore, a new mechanism for distinguishing between continuous and discontinuous regions, based on the Gibbs phenomenon at discontinuities, is introduced to eliminate the excessive numerical dissipation in the continuous region by the restricted application of MLP according to the decision of the distinguishing function. To investigate the effectiveness of the developed method, a sequence of benchmark simulations such as spherical wave propagation, nonlinear wave propagation, the shock tube problem and the vortex preservation test problem is executed. Also, through more realistic shock-vortex interaction and muzzle blast flow problems, the utility of the new method for aeroacoustic applications is verified by comparison with previous numerical and experimental results.

  5. Evaluating optimal therapy robustness by virtual expansion of a sample population, with a case study in cancer immunotherapy

    PubMed Central

    Barish, Syndi; Ochs, Michael F.; Sontag, Eduardo D.; Gevertz, Jana L.

    2017-01-01

    Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (i) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (ii) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists. PMID:28716945
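
    The virtual-expansion step can be pictured with a small nonparametric bootstrap: subjects are resampled with replacement and each resampled group contributes one aggregate time course, to which a model would then be refit and the therapy re-optimized. The sketch below shows only the resampling; array shapes, sizes and the synthetic data are illustrative, not the VEPART pipeline itself.

```python
import numpy as np

def virtual_populations(sample_data, n_pops=500, rng=None):
    """Create virtual populations by nonparametric bootstrap: resample the
    experimental subjects with replacement and keep the aggregate (mean)
    time course of each resampled population.

    sample_data : array of shape (n_subjects, n_timepoints)
    """
    rng = np.random.default_rng() if rng is None else rng
    n_subjects = sample_data.shape[0]
    pops = []
    for _ in range(n_pops):
        idx = rng.integers(0, n_subjects, n_subjects)  # resample with replacement
        pops.append(sample_data[idx].mean(axis=0))     # aggregate time course
    return np.array(pops)

# Example: 8 subjects measured at 6 time points (synthetic data)
data = np.random.default_rng(1).normal(100, 15, size=(8, 6))
virtual = virtual_populations(data, n_pops=200)
```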

  6. A hybrid framework for coupling arbitrary summation-by-parts schemes on general meshes

    NASA Astrophysics Data System (ADS)

    Lundquist, Tomas; Malan, Arnaud; Nordström, Jan

    2018-06-01

    We develop a general interface procedure to couple both structured and unstructured parts of a hybrid mesh in a non-collocated, multi-block fashion. The target is to gain optimal computational efficiency in fluid dynamics simulations involving complex geometries. While guaranteeing stability, the proposed procedure is optimized for accuracy and requires minimal algorithmic modifications to already existing schemes. Initial numerical investigations confirm considerable efficiency gains compared to non-hybrid calculations of up to an order of magnitude.

  7. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable

  8. The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model

    NASA Astrophysics Data System (ADS)

    Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan

    2016-05-01

    Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.

  9. Optimal eavesdropping in cryptography with three-dimensional quantum states.

    PubMed

    Bruss, D; Macchiavello, C

    2002-03-25

    We study optimal eavesdropping in quantum cryptography with three-dimensional systems, and show that this scheme is more secure against symmetric attacks than protocols using two-dimensional states. We generalize the corresponding eavesdropping transformation to arbitrary dimensions, and discuss the connection with optimal quantum cloning.

  10. Data traffic reduction schemes for Cholesky factorization on asynchronous multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1989-01-01

    Communication requirements of Cholesky factorization of dense and sparse symmetric, positive definite matrices are analyzed. The communication requirement is characterized by the data traffic generated on multiprocessor systems with local and shared memory. Lower bound proofs are given to show that, when the load is uniformly distributed, the data traffic associated with factoring an n × n dense matrix using n^α (α ≤ 2) processors is Ω(n^(2+α/2)). For n × n sparse matrices representing a √n × √n regular grid graph, the data traffic is shown to be Ω(n^(1+α/2)), α ≤ 1. Partitioning schemes that are variations of the block assignment scheme are described, and it is shown that the data traffic generated by these schemes is asymptotically optimal. The schemes allow efficient use of up to O(n²) processors in the dense case and up to O(n) processors in the sparse case before the total data traffic reaches the maximum values of O(n³) and O(n^(3/2)), respectively. It is shown that the block-based partitioning schemes allow better utilization of the data accessed from shared memory, and thus lower data traffic, than those based on column-wise wrap-around assignment schemes.

  11. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    PubMed

    Xun-Ping, W; An, Z

    2017-07-27

    Objective To optimize and simplify the survey method of Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which took plant abundance as the auxiliary variable, was explored, and an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region was performed. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points for each layer were calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, the comparison study among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA was performed. Results The method (SOPA) proposed in this study had the minimal absolute error of 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
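
    As a generic illustration of stratifying on an auxiliary variable, the sketch below bins grid cells by plant abundance into quantile strata and allocates sampling points Neyman-style (stratum size times stratum standard deviation). The SOPA strategy instead derives its per-stratum sample sizes from the Hammond-McCullagh equation and places points by MDI, neither of which is reproduced here; all names and numbers below are illustrative.

```python
import numpy as np

def stratified_allocation(aux, n_total=50, n_strata=5):
    """Stratify cells by an auxiliary variable (e.g. plant abundance) and
    allocate sample points proportionally to stratum size times stratum
    standard deviation (Neyman-like stand-in allocation)."""
    edges = np.quantile(aux, np.linspace(0, 1, n_strata + 1))
    labels = np.clip(np.searchsorted(edges, aux, side='right') - 1,
                     0, n_strata - 1)
    weights = np.array([
        (labels == s).sum() * (aux[labels == s].std() + 1e-9)
        for s in range(n_strata)
    ])
    weights /= weights.sum()
    n_per_stratum = np.maximum(1, np.round(weights * n_total).astype(int))
    return labels, n_per_stratum

# Synthetic plant-abundance values for 2500 grid cells
abundance = np.random.default_rng(2).gamma(shape=2.0, scale=3.0, size=2500)
strata, allocation = stratified_allocation(abundance)
```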

  12. Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The land-surface model (LSM) is one physics process in the Weather Research and Forecasting (WRF) model. The LSM includes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land points and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate the computation of this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core processor structure well suited to efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on the Xeon Phi coprocessor 7120P improves the performance by 2.3x and 11.7x compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, respectively.

  13. Optical scheme for simulating post-quantum nonlocality distillation.

    PubMed

    Chu, Wen-Jing; Yang, Ming; Pan, Guo-Zhu; Yang, Qing; Cao, Zhuo-Liang

    2016-11-28

    An optical scheme for simulating nonlocality distillation is proposed in the post-quantum regime. The nonlocal boxes are simulated by measurements on appropriately pre- and post-selected polarization-entangled photon pairs, i.e. post-quantum nonlocality is simulated by exploiting the fair-sampling loophole in a Bell test. Mod 2 addition on the outputs of two nonlocal boxes, combined with pre- and post-selection operations, constitutes the key operation of simulating nonlocality distillation. This scheme provides a possible tool for the experimental study of nonlocality in the post-quantum regime and of the exact physical principle precisely distinguishing physically realizable correlations from nonphysical ones.

  14. A Parameter Estimation Scheme for Multiscale Kalman Smoother (MKS) Algorithm Used in Precipitation Data Fusion

    NASA Technical Reports Server (NTRS)

    Wang, Shugong; Liang, Xu

    2013-01-01

    A new approach is presented in this paper to effectively obtain parameter estimations for the Multiscale Kalman Smoother (MKS) algorithm. This new approach has demonstrated promising potentials in deriving better data products based on data of different spatial scales and precisions. Our new approach employs a multi-objective (MO) parameter estimation scheme (called MO scheme hereafter), rather than using the conventional maximum likelihood scheme (called ML scheme) to estimate the MKS parameters. Unlike the ML scheme, the MO scheme is not simply built on strict statistical assumptions related to prediction errors and observation errors, rather, it directly associates the fused data of multiple scales with multiple objective functions in searching best parameter estimations for MKS through optimization. In the MO scheme, objective functions are defined to facilitate consistency among the fused data at multiscales and the input data at their original scales in terms of spatial patterns and magnitudes. The new approach is evaluated through a Monte Carlo experiment and a series of comparison analyses using synthetic precipitation data. Our results show that the MKS fused precipitation performs better using the MO scheme than that using the ML scheme. Particularly, improvements are significant compared to that using the ML scheme for the fused precipitation associated with fine spatial resolutions. This is mainly due to having more criteria and constraints involved in the MO scheme than those included in the ML scheme. The weakness of the original ML scheme that blindly puts more weights onto the data associated with finer resolutions is overcome in our new approach.

  15. A Target Coverage Scheduling Scheme Based on Genetic Algorithms in Directional Sensor Networks

    PubMed Central

    Gil, Joon-Min; Han, Youn-Hee

    2011-01-01

    As a promising tool for monitoring the physical world, directional sensor networks (DSNs) consisting of a large number of directional sensors are attracting increasing attention. As directional sensors in DSNs have limited battery power and restricted angles of sensing range, maximizing the network lifetime while monitoring all the targets in a given area remains a challenge. A major technique to conserve the energy of directional sensors is to use a node wake-up scheduling protocol by which some sensors remain active to provide sensing services, while the others are inactive to conserve their energy. In this paper, we first address a Maximum Set Covers for DSNs (MSCD) problem, which is known to be NP-complete, and present a greedy algorithm-based target coverage scheduling scheme that can solve this problem by heuristics. This scheme is used as a baseline for comparison. We then propose a target coverage scheduling scheme based on a genetic algorithm that can find the optimal cover sets to extend the network lifetime while monitoring all targets by the evolutionary global search technique. To verify and evaluate these schemes, we conducted simulations and showed that the schemes can contribute to extending the network lifetime. Simulation results indicated that the genetic algorithm-based scheduling scheme had better performance than the greedy algorithm-based scheme in terms of maximizing network lifetime. PMID:22319387

  16. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    PubMed Central

    2013-01-01

    Background Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light traps to sample specimens from the Culicoides obsoletus species complex on a 14 hectare field during 16 nights in 2009. Findings The large number of traps and catch nights enabled us to simulate a series of samples consisting of different numbers of traps (1-15) on each night. We also varied the number of catch nights when simulating the sampling, and sampled with increasing minimum distances between traps. We used resampling to generate a distribution of different mean and median abundance in each sample. Finally, we used the hypergeometric distribution to estimate the probability of falsely detecting absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. Conclusions Despite spatial clustering in vector abundance, we found no effect of increasing the distance between traps. We found that 18 traps were generally required to reach 90% probability of a true positive catch when sampling just one night. But when sampling over two nights the same probability level was obtained with just three traps per night. The results are useful for the design of vector monitoring programmes on fields with grazing animals. PMID:23705770
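
    The hypergeometric calculation can be illustrated as follows (made-up numbers, not the study's data): if n of the N candidate trap positions would yield a positive catch on a given night, the probability that t randomly placed traps all miss is P(X = 0) for X ~ Hypergeometric(N, n, t).

```python
# Hedged illustration of the false-absence probability with a hypergeometric model.
from scipy.stats import hypergeom

N = 45   # candidate trap positions on the field (illustrative)
n = 30   # positions that would have yielded a positive catch (illustrative)
for t in (1, 3, 6, 18):
    p_false_absence = hypergeom.pmf(0, N, n, t)   # all t traps negative
    print(f"{t:2d} traps: P(all negative) = {p_false_absence:.3f}, "
          f"P(true positive) = {1 - p_false_absence:.3f}")
```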

  17. A Novel Scheme for an Energy Efficient Internet of Things Based on Wireless Sensor Networks.

    PubMed

    Rani, Shalli; Talwar, Rajneesh; Malhotra, Jyoteesh; Ahmed, Syed Hassan; Sarkar, Mahasweta; Song, Houbing

    2015-11-12

    One of the emerging networking standards that bridges the gap between the physical world and the cyber one is the Internet of Things (IoT). In the IoT, smart objects communicate with each other, data are gathered, and certain requests of users are satisfied by different queried data. The development of energy-efficient schemes for the IoT is a challenging issue because the IoT becomes more complex owing to its large scale, and current wireless sensor network techniques cannot be applied directly to the IoT. To achieve a green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme introduces: (1) a hierarchical network design; (2) a model for the energy-efficient IoT; and (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy efficient and flexible than traditional WSN schemes, and consequently it can be implemented for efficient communication in the IoT.

  18. Modeling and Analysis of Hybrid Cellular/WLAN Systems with Integrated Service-Based Vertical Handoff Schemes

    NASA Astrophysics Data System (ADS)

    Xia, Weiwei; Shen, Lianfeng

    We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.

  19. Competition and the Reference Pricing Scheme for pharmaceuticals.

    PubMed

    Ghislandi, Simone

    2011-12-01

    By introducing n (>1) firms with infinite cross-price elasticity (i.e. generic drugs), we explore the effects of competition on the optimal pricing strategies under a Reference Pricing Scheme (RPS). A two-stage model repeated an infinite number of times is presented. When stage 1 is competitive, the equilibrium in pure strategies exists and is efficient only if the reference price (R) does not depend on the price of the branded product. When generics collude, the way R is designed is crucial for both the stability of the cartel among generics and the collusive prices in equilibrium. An optimally designed RPS must set R as a function only of the infinitely elastic side of the market and should provide the right incentives for competition. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. An artifact caused by undersampling optimal trees in supermatrix analyses of locally sampled characters.

    PubMed

    Simmons, Mark P; Goloboff, Pablo A

    2013-10-01

    Empirical and simulated examples are used to demonstrate an artifact caused by undersampling optimal trees in data matrices that consist mostly or entirely of locally sampled (as opposed to globally, for most or all terminals) characters. The artifact is that unsupported clades consisting entirely of terminals scored for the same locally sampled partition may be resolved and assigned high resampling support, despite their being properly unsupported (i.e., not resolved in the strict consensus of all optimal trees). This artifact occurs despite application of random-addition sequences for stepwise terminal addition. The artifact is not necessarily obviated with thorough conventional branch swapping methods (even tree-bisection-reconnection) when just a single tree is held, as is sometimes implemented in parsimony bootstrap pseudoreplicates, and in every GARLI, PhyML, and RAxML pseudoreplicate and search for the most likely tree for the matrix as a whole. Hence GARLI-, RAxML-, and PhyML-based likelihood results require extra scrutiny, particularly when they provide high resolution and support for clades that are entirely unsupported by methods that perform more thorough searches, as in most parsimony analyses. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Sample size for positive and negative predictive value in diagnostic research using case–control designs

    PubMed Central

    Steinberg, David M.; Fine, Jason; Chappell, Rick

    2009-01-01

    Important properties of diagnostic methods are their sensitivity, specificity, and positive and negative predictive values (PPV and NPV). These methods are typically assessed via case–control samples, which include one cohort of cases known to have the disease and a second control cohort of disease-free subjects. Such studies give direct estimates of sensitivity and specificity but only indirect estimates of PPV and NPV, which also depend on the disease prevalence in the tested population. The motivating example arises in assay testing, where usage is contemplated in populations with known prevalences. Further instances include biomarker development, where subjects are selected from a population with known prevalence and assessment of PPV and NPV is crucial, and the assessment of diagnostic imaging procedures for rare diseases, where case–control studies may be the only feasible designs. We develop formulas for optimal allocation of the sample between the case and control cohorts and for computing sample size when the goal of the study is to prove that the test procedure exceeds pre-stated bounds for PPV and/or NPV. Surprisingly, the optimal sampling schemes for many purposes are highly unbalanced, even when information is desired on both PPV and NPV. PMID:18556677
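
    The dependence of PPV and NPV on prevalence, which is why case-control data give only indirect estimates, follows from Bayes' rule; the short sketch below uses illustrative sensitivity and specificity values only.

```python
# PPV and NPV from sensitivity, specificity, and prevalence (Bayes' rule),
# illustrating how the same test gives very different predictive values in
# populations with different prevalences. Values are illustrative only.
def ppv_npv(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

for prev in (0.01, 0.10, 0.50):
    ppv, npv = ppv_npv(sens=0.90, spec=0.95, prev=prev)
    print(f"prevalence={prev:.2f}: PPV={ppv:.3f}, NPV={npv:.3f}")
```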

  2. An Automated Scheme for the Large-Scale Survey of Herbig-Haro Objects

    NASA Astrophysics Data System (ADS)

    Deng, Licai; Yang, Ji; Zheng, Zhongyuan; Jiang, Zhaoji

    2001-04-01

    Owing to their spectral properties, Herbig-Haro (HH) objects can be discovered using photometric methods through a combination of filters, sampling the characteristic spectral lines and the nearby continuum. The data are commonly processed through direct visual inspection of the images. To make data reduction more efficient and the results more uniform and complete, an automated searching scheme for HH objects is developed to manipulate the images using IRAF. This approach helps to extract images with only intrinsic HH emissions. By using this scheme, the pointlike stellar sources and extended nebulous sources with continuum emission can be eliminated from the original images. The objects with only characteristic HH emission become prominent and can be easily picked up. In this paper our scheme is illustrated by a sample field and has been applied to our surveys for HH objects.
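
    A minimal sketch of the core image operation, assuming it amounts to subtracting a scaled continuum frame from the line-filter frame and thresholding the residual (the actual pipeline is implemented in IRAF), is given below; all array names and parameters are hypothetical.

```python
# Minimal sketch of line-minus-continuum candidate extraction (an assumption
# about the core operation, not the authors' IRAF pipeline).
import numpy as np

def hh_candidates(line_img, cont_img, scale, nsigma=5.0):
    """Subtract scaled continuum, then flag pixels well above the residual noise."""
    residual = line_img - scale * cont_img                # stars/continuum sources cancel
    sky = np.median(residual)
    sigma = 1.4826 * np.median(np.abs(residual - sky))    # robust noise estimate
    return residual, residual > sky + nsigma * sigma      # boolean candidate mask

# Toy example with synthetic 100x100 frames:
rng = np.random.default_rng(0)
cont = rng.normal(100.0, 5.0, (100, 100))
line = cont * 0.8 + rng.normal(0.0, 5.0, (100, 100))
line[40:43, 60:63] += 200.0                               # an emission-only "HH knot"
_, mask = hh_candidates(line, cont, scale=0.8)
print(mask.sum(), "candidate pixels")
```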

  3. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    PubMed

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.

  4. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    PubMed Central

    Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification. PMID:25152923

  5. Design scheme for optical manufacturing support system of TMT M3 prototype

    NASA Astrophysics Data System (ADS)

    Hu, Haifei; Luo, Xiao

    2014-09-01

    The Thirty Meter Telescope's Tertiary Mirror Cell Assembly (TMT M3-CA) will be manufactured at the Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP). To reduce the risk of fabricating the TMT M3, a prototype made of Zerodur with a diameter-to-thickness (d/t) ratio of 72 is planned to be polished. Here the focus is on the design scheme of the prototype's optical manufacturing support system. First, the number of support points was estimated; then a structural design scheme for the equal-force polishing support system was drawn up; and finally, layout optimization of the support points was carried out. Owing to its high performance and efficiency, this work will be beneficial to the manufacturing of large thin mirrors.

  6. Signal enhancement for the sensitivity-limited solid state NMR experiments using a continuous, non-uniform acquisition scheme

    NASA Astrophysics Data System (ADS)

    Qiang, Wei

    2011-12-01

    We describe a sampling scheme for two-dimensional (2D) solid-state NMR experiments that can be readily applied to sensitivity-limited samples. The sampling scheme utilizes a continuous, non-uniform sampling profile for the indirect dimension, i.e., the acquisition number decreases as a function of the evolution time (t1) in the indirect dimension. For a beta amyloid (Aβ) fibril sample, we observed an overall 40-50% signal enhancement as measured by cross-peak volume, while the cross-peak linewidths remained comparable to the linewidths obtained with regular sampling and processing strategies. Both linear and Gaussian decay functions for the acquisition numbers result in a similar percentage of signal increase. In addition, we demonstrated that this sampling approach can be applied with different dipolar recoupling approaches such as radiofrequency-assisted diffusion (RAD) and finite-pulse radio-frequency-driven recoupling (fpRFDR). This sampling scheme is especially suitable for sensitivity-limited samples that require long signal averaging for each t1 point, for instance biological membrane proteins where only a small fraction of the sample is isotopically labeled.
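
    A simple way to generate such a schedule is sketched below (illustrative parameters, not the acquisition code used in the study): the number of scans per t1 increment decays linearly or as a Gaussian while the total number of scans stays comparable to a uniform schedule.

```python
# Sketch of a continuous, non-uniform acquisition schedule: scans per t1
# increment decay linearly or as a Gaussian (illustrative parameters).
import numpy as np

def scan_schedule(n_t1, total_scans, profile="gaussian", floor=4):
    t = np.linspace(0.0, 1.0, n_t1)
    if profile == "linear":
        w = 1.0 - 0.8 * t                       # decay to 20% of the initial value
    else:
        w = np.exp(-(t / 0.5) ** 2)             # Gaussian decay
    scans = np.maximum(np.round(total_scans * w / w.sum()), floor).astype(int)
    return scans

sched = scan_schedule(n_t1=64, total_scans=64 * 128)
print(sched[:8], "...", sched[-8:], "total =", sched.sum())
```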

  7. Hybrid Quantum Mechanics/Molecular Mechanics Solvation Scheme for Computing Free Energies of Reactions at Metal-Water Interfaces.

    PubMed

    Faheem, Muhammad; Heyden, Andreas

    2014-08-12

    We report the development of a quantum mechanics/molecular mechanics free energy perturbation (QM/MM-FEP) method for modeling chemical reactions at metal-water interfaces. This novel solvation scheme combines plane-wave density functional theory (DFT), periodic electrostatic embedded cluster method (PEECM) calculations using Gaussian-type orbitals, and classical molecular dynamics (MD) simulations to obtain a free energy description of a complex metal-water system. We derive a potential of mean force (PMF) of the reaction system within the QM/MM framework. A fixed-size, finite ensemble of MM conformations is used to permit precise evaluation of the PMF of the QM coordinates and its gradient defined within this ensemble. Local conformations of adsorbed reaction moieties are optimized using sequential MD-sampling and QM-optimization steps. An approximate reaction coordinate is constructed using a number of interpolated states, and the free energy difference between adjacent states is calculated using the QM/MM-FEP method. By avoiding on-the-fly QM calculations and by circumventing the challenges associated with statistical averaging during MD sampling, a computational speedup of multiple orders of magnitude is realized. The method is systematically validated against the results of ab initio QM calculations and demonstrated for C-C cleavage in double-dehydrogenated ethylene glycol on a Pt(111) model surface.
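
    The free energy difference between adjacent interpolated states can be illustrated with the generic Zwanzig (exponential averaging) estimator below; this is a textbook formula evaluated on synthetic energy differences, not the paper's full QM/MM workflow.

```python
# Generic Zwanzig free energy perturbation estimator between adjacent states,
# evaluated over a fixed ensemble of configurations (illustrative only).
import numpy as np

def fep_delta_f(dU, kT):
    """dU[j] = U_{i+1}(x_j) - U_i(x_j) over configurations x_j sampled in state i."""
    # Log-sum-exp form of -kT * ln < exp(-dU/kT) > for numerical stability.
    m = dU.min()
    return -kT * (np.log(np.mean(np.exp(-(dU - m) / kT))) - m / kT)

rng = np.random.default_rng(1)
dU = rng.normal(2.0, 0.5, size=5000)            # synthetic energy gaps, kcal/mol
print(fep_delta_f(dU, kT=0.593))                # kT at ~298 K in kcal/mol
```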

  8. info-gibbs: a motif discovery algorithm that directly optimizes information content during sampling.

    PubMed

    Defrance, Matthieu; van Helden, Jacques

    2009-10-15

    Discovering cis-regulatory elements in genome sequence remains a challenging issue. Several methods rely on the optimization of some target scoring function. The information content (IC) or relative entropy of the motif has proven to be a good estimator of transcription factor DNA binding affinity. However, these information-based metrics are usually used as a posteriori statistics rather than during the motif search process itself. We introduce here info-gibbs, a Gibbs sampling algorithm that efficiently optimizes the IC or the log-likelihood ratio (LLR) of the motif while keeping computation time low. The method compares well with existing methods like MEME, BioProspector, Gibbs or GAME on both synthetic and biological datasets. Our study shows that motif discovery techniques can be enhanced by directly focusing the search on the motif IC or the motif LLR. http://rsat.ulb.ac.be/rsat/info-gibbs
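
    The information content being optimized can be computed from a set of aligned sites as in the generic sketch below (uniform background assumed; not the tool's source code).

```python
# Information content (relative entropy) of a motif built from aligned sites,
# the quantity that info-gibbs optimizes during sampling (generic computation).
import numpy as np

def motif_ic(sites, background=None, pseudo=0.5):
    alphabet = "ACGT"
    bg = np.full(4, 0.25) if background is None else np.asarray(background)
    counts = np.full((len(sites[0]), 4), pseudo)
    for s in sites:
        for j, base in enumerate(s):
            counts[j, alphabet.index(base)] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return float(np.sum(freqs * np.log2(freqs / bg)))   # bits summed over columns

print(motif_ic(["TTGACA", "TTGACT", "TTTACA", "CTGACA"]))
```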

  9. Perceptual Optimization of DCT Color Quantization Matrices

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
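
    The basic operation being optimized, dividing block-DCT coefficients by a quantization matrix and rounding, is sketched below with a made-up matrix Q rather than a perceptually optimized one.

```python
# Minimal sketch of block-DCT quantization with a quantization matrix Q:
# larger Q entries discard more detail at the corresponding frequencies.
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, Q):
    return np.round(dctn(block, norm="ortho") / Q)

def dequantize_block(qcoeffs, Q):
    return idctn(qcoeffs * Q, norm="ortho")

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)
Q = 8.0 + 4.0 * (np.arange(8)[:, None] + np.arange(8)[None, :])  # coarser at high freq.
recon = dequantize_block(quantize_block(block, Q), Q)
print("max abs error:", np.abs(block - recon).max())
```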

  10. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    PubMed Central

    Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula

    2015-01-01

    Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567

  11. Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model

    NASA Astrophysics Data System (ADS)

    Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon

    Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component, which makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In the proposed scheme, watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame along the peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression, and achieves similarity values ranging from 17 to 22. In addition, the proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.

  12. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.

  13. A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena

    NASA Technical Reports Server (NTRS)

    Zingg, David W.

    1996-01-01

    This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.

  14. Quantum optimal control with automatic differentiation using graphics processors

    NASA Astrophysics Data System (ADS)

    Leung, Nelson; Abdelhafez, Mohamed; Chakram, Srivatsan; Naik, Ravi; Groszkowski, Peter; Koch, Jens; Schuster, David

    We implement quantum optimal control based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them into the optimization process with ease. We will describe efficient techniques to optimally control weakly anharmonic systems that are commonly encountered in circuit QED, including coupled superconducting transmon qubits and multi-cavity circuit QED systems. These systems allow for a rich variety of control schemes that quantum optimal control is well suited to explore.

  15. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration
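
    The grouping of truncation errors by scheme order can be illustrated on a toy problem (this is not the MPTRAC setup): the sketch below compares the global error of order-1, order-2, and order-4 explicit Runge-Kutta steps on dy/dt = -y.

```python
# Toy comparison of global error at t = 5 for explicit Runge-Kutta schemes of
# order 1, 2, and 4 on dy/dt = -y with y(0) = 1 (illustrative only).
import numpy as np

def step_rk(f, t, y, h, order):
    if order == 1:                               # explicit Euler
        return y + h * f(t, y)
    if order == 2:                               # midpoint method
        return y + h * f(t + h / 2, y + h / 2 * f(t, y))
    k1 = f(t, y)                                 # classical RK4
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: -y
for order in (1, 2, 4):
    y, h, n = 1.0, 0.1, 50
    for i in range(n):
        y = step_rk(f, i * h, y, h, order)
    print(f"order {order}: |error| = {abs(y - np.exp(-5.0)):.2e}")
```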

  16. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    NASA Astrophysics Data System (ADS)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines, one needs to devise numerical schemes that relax global synchronizations across PEs. These asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that, for a range of optimized parameters, the computation time and error for AT schemes are less than for their synchronous counterparts. The stability of the AT schemes, which depends on the history and the random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.

  17. Optimal implicit 2-D finite differences to model wave propagation in poroelastic media

    NASA Astrophysics Data System (ADS)

    Itzá, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2016-08-01

    Numerical modeling of seismic waves in heterogeneous porous reservoir rocks is an important tool for the interpretation of seismic surveys in reservoir engineering. We apply globally optimal implicit staggered-grid finite differences (FD) to model 2-D wave propagation in heterogeneous poroelastic media at a low-frequency range (<10 kHz). We validate the numerical solution by comparing it to an analytical transient solution, obtaining clear seismic wavefields including fast P, slow P, and S waves (for a porous medium saturated with fluid). The numerical dispersion and stability conditions are derived using von Neumann analysis, showing that over a wide range of porous materials the Courant condition governs the stability and that this optimal implicit scheme improves the stability of explicit schemes. High-order explicit FD can be replaced by lower-order optimal implicit FD, so the computational cost is reduced while the accuracy is maintained. Here, we compute weights for the optimal implicit FD scheme to attain an accuracy of γ = 10⁻⁸. The implicit spatial differentiation involves solving tridiagonal linear systems of equations through Thomas' algorithm.
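
    Thomas' algorithm for the tridiagonal systems arising from the implicit spatial differentiation is the standard textbook recursion; a generic implementation is sketched below.

```python
# Thomas' algorithm for a tridiagonal system (generic textbook implementation).
import numpy as np

def thomas(a, b, c, d):
    """Solve Ax = d where A has sub-, main-, and super-diagonals a, b, c."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve:
n = 6
a = np.r_[0.0, np.full(n - 1, -1.0)]    # sub-diagonal (a[0] unused)
b = np.full(n, 2.0)
c = np.r_[np.full(n - 1, -1.0), 0.0]    # super-diagonal (c[-1] unused)
d = np.arange(1.0, n + 1.0)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))
```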

  18. Ecologically-friendly operation scheme for the Jinping cascaded reservoirs in the Yalongjiang River, China

    NASA Astrophysics Data System (ADS)

    Chen, Duan; Chen, Qiuwen; Li, Ruonan; Blanckaert, Koen; Cai, Desuo

    2014-06-01

    Ecologically-friendly reservoir operation procedures aim to conserve key ecosystem properties in the rivers, while minimizing the sacrifice of socioeconomic interests. This study focused on the Jinping cascaded reservoirs as a case study. An optimization model was developed to explore a balance between the ecological flow requirement (EFR) of a target fish species ( Schizothorax chongi) in the dewatered natural channel section, and annual power production. The EFR for the channel was determined by the Tennant method and a fish habitat model, respectively. The optimization model was solved by using an adaptive real-coded genetic algorithm. Several operation scenarios corresponding to the ecological flow series were evaluated using the optimization model. Through comparisons, an optimal operational scheme, which combines relatively low power production loss with a preferred ecological flow regime in the dewatered channel, is proposed for the cascaded reservoirs. Under the recommended scheme, the discharge into the Dahewan river reach in the dry season ranges from 36 to 50 m3/s. This will enable at least 50% of the target fish habitats in the channel to be conserved, at a cost of only 2.5% annual power production loss. The study demonstrates that the use of EFRs is an efficient approach to the optimization of reservoir operation in an ecologically friendly way. Similar modeling, for other important fish species and ecosystem functions, supplemented by field validation of results, is needed in order to secure the long-term conservation of the affected river ecosystem.

  19. Scale-Free Compact Routing Schemes in Networks of Low Doubling Dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konjevod, Goran; Richa, Andréa W.; Xia, Donglin

    In this work, we consider compact routing schemes in networks of low doubling dimension, where the doubling dimension is the least value α such that any ball in the network can be covered by at most 2^α balls of half radius. There are two variants of routing-scheme design: (i) labeled (name-dependent) routing, in which the designer is allowed to rename the nodes so that the names (labels) can contain additional routing information, for example, topological information; and (ii) name-independent routing, which works on top of the arbitrary original node names in the network, that is, the node names are independent of the routing scheme. In this article, given any constant ε ∈ (0, 1) and an n-node edge-weighted network of doubling dimension α ∈ O(log log n), we present (a) a (1 + ε)-stretch labeled compact routing scheme with ⌈log n⌉-bit routing labels, O(log² n / log log n)-bit packet headers, and ((1/ε)^O(α) log³ n)-bit routing information at each node; and (b) a (9 + ε)-stretch name-independent compact routing scheme with O(log² n / log log n)-bit packet headers and ((1/ε)^O(α) log³ n)-bit routing information at each node. In addition, we prove a lower bound: any name-independent routing scheme with o(n^((ε/60)²)) bits of storage at each node has stretch no less than 9 - ε for any ε ∈ (0, 8). Therefore, our name-independent routing scheme achieves asymptotically optimal stretch with polylogarithmic storage at each node and packet headers. Note that both schemes are scale-free in the sense that their space requirements do not depend on the normalized diameter Δ of the network. Finally, we also present a simpler non-scale-free (9 + ε)-stretch name-independent compact routing scheme with improved space requirements if Δ is polynomial in n.

  20. Scale-Free Compact Routing Schemes in Networks of Low Doubling Dimension

    DOE PAGES

    Konjevod, Goran; Richa, Andréa W.; Xia, Donglin

    2016-06-15

    In this work, we consider compact routing schemes in networks of low doubling dimension, where the doubling dimension is the least value α such that any ball in the network can be covered by at most 2^α balls of half radius. There are two variants of routing-scheme design: (i) labeled (name-dependent) routing, in which the designer is allowed to rename the nodes so that the names (labels) can contain additional routing information, for example, topological information; and (ii) name-independent routing, which works on top of the arbitrary original node names in the network, that is, the node names are independent of the routing scheme. In this article, given any constant ε ∈ (0, 1) and an n-node edge-weighted network of doubling dimension α ∈ O(log log n), we present (a) a (1 + ε)-stretch labeled compact routing scheme with ⌈log n⌉-bit routing labels, O(log² n / log log n)-bit packet headers, and ((1/ε)^O(α) log³ n)-bit routing information at each node; and (b) a (9 + ε)-stretch name-independent compact routing scheme with O(log² n / log log n)-bit packet headers and ((1/ε)^O(α) log³ n)-bit routing information at each node. In addition, we prove a lower bound: any name-independent routing scheme with o(n^((ε/60)²)) bits of storage at each node has stretch no less than 9 - ε for any ε ∈ (0, 8). Therefore, our name-independent routing scheme achieves asymptotically optimal stretch with polylogarithmic storage at each node and packet headers. Note that both schemes are scale-free in the sense that their space requirements do not depend on the normalized diameter Δ of the network. Finally, we also present a simpler non-scale-free (9 + ε)-stretch name-independent compact routing scheme with improved space requirements if Δ is polynomial in n.

  1. A Decentralized Event-Triggered Dissipative Control Scheme for Systems With Multiple Sensors to Sample the System Outputs.

    PubMed

    Zhang, Xian-Ming; Han, Qing-Long

    2016-12-01

    This paper is concerned with decentralized event-triggered dissipative control for systems in which the entries of the system outputs have different physical properties. Depending on these physical properties, the entries of the system outputs are grouped into multiple nodes. A number of sensors are used to sample the signals from the different nodes. A decentralized event-triggering scheme is introduced to select those sampled-data packets that need to be transmitted, so that communication resources can be saved significantly while preserving the prescribed closed-loop performance. First, in order to organize the decentralized data packets transmitted from the sensor nodes, a data packet processor (DPP) is used to generate a new signal to be held by the zero-order hold once the signal stored by the DPP is updated at some time instant. Second, under the mechanism of the DPP, the resulting closed-loop system is modeled as a linear system with an interval time-varying delay. A sufficient condition is derived such that the closed-loop system is asymptotically stable and strictly (Q₀, S₀, R₀)-dissipative, where Q₀, S₀, and R₀ are real matrices of appropriate dimensions with Q₀ and R₀ symmetric. Third, suitable output-based controllers can be designed based on solutions to a set of linear matrix inequalities. Finally, two examples are given to demonstrate the effectiveness of the proposed method.

  2. LHCb trigger streams optimization

    NASA Astrophysics Data System (ADS)

    Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.

    2017-10-01

    The LHCb experiment stores around 10¹¹ collision events per year. A typical physics analysis deals with a final sample of up to 10⁷ events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams in order to increase the efficiency of user analysis jobs that read these data. The scheme efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and it will be applied to data recorded in 2017.

  3. Simulation and modeling of the temporal performance of path-based restoration schemes in planar mesh networks

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Manish; McCaughan, Leon; Olkhovets, Anatoli; Korotky, Steven K.

    2006-12-01

    We formulate an analytic framework for the restoration performance of path-based restoration schemes in planar mesh networks. We analyze various switch architectures and signaling schemes and model their total restoration interval. We also evaluate the network global expectation value of the time to restore a demand as a function of network parameters. We analyze a wide range of nominally capacity-optimal planar mesh networks and find our analytic model to be in good agreement with numerical simulation data.

  4. Optimization of groundwater sampling approach under various hydrogeological conditions using a numerical simulation model

    NASA Astrophysics Data System (ADS)

    Qi, Shengqi; Hou, Deyi; Luo, Jian

    2017-09-01

    This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match well with field drawdown, and calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) the presence of a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.

  5. Adaptive power allocation schemes based on IAFS algorithm for OFDM-based cognitive radio systems

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Zhao, Xiaohui; Liang, Cong; Ding, Xu

    2017-01-01

    In cognitive radio (CR) systems, reasonable power allocation can increase the transmission rate of CR users, or secondary users (SUs), as much as possible while ensuring normal communication among primary users (PUs). This study proposes an optimal power allocation scheme for an OFDM-based CR system with one SU subject to multiple PU interference constraints. The scheme is based on an improved artificial fish swarm (IAFS) algorithm, which combines the advantages of the conventional artificial fish swarm (AFS) algorithm and the particle swarm optimisation (PSO) algorithm. Simulations comparing the IAFS algorithm with other intelligent algorithms illustrate its superiority; as a result, the proposed scheme performs better than the power allocation algorithms proposed in previous studies for the same scenario. Furthermore, the proposed scheme can obtain a higher transmission data rate under the multiple PU interference constraints and the total power constraint of the SU than the other works mentioned.

  6. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it; Alfonso, L.

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
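
    One generic way to realize a maximum-information, minimum-redundancy selection is the greedy sketch below, which scores each candidate cross section by the entropy of its (discretized) simulated water-level series minus a redundancy penalty based on mutual information with the already selected sections; this is an illustration of the concept, not the paper's exact criterion, and `series` and `beta` are assumed placeholders.

```python
# Hedged sketch of greedy max-information / min-redundancy site selection
# from discretized series (a generic illustration of the concept).
import numpy as np

def entropy(x, bins=8):
    p = np.histogram(x, bins=bins)[0].astype(float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(x, y, bins=8):
    pxy = np.histogram2d(x, y, bins=bins)[0].astype(float)
    pxy /= pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

def greedy_select(series, k, beta=1.0):
    """series: dict location -> 1D array of simulated water levels."""
    chosen = []
    while len(chosen) < k:
        def score(loc):
            redundancy = max((mutual_info(series[loc], series[c]) for c in chosen),
                             default=0.0)
            return entropy(series[loc]) - beta * redundancy
        candidates = [loc for loc in series if loc not in chosen]
        chosen.append(max(candidates, key=score))
    return chosen

# chosen = greedy_select(series, k=10)   # series: {location: np.array of levels}
```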

  7. Understanding the types of fraud in claims to South African medical schemes.

    PubMed

    Legotlo, T G; Mutezo, A

    2018-03-28

    Medical schemes play a significant role in funding private healthcare in South Africa (SA). However, the sector is negatively affected by the high rate of fraudulent claims. To identify the types of fraudulent activities committed in SA medical scheme claims. A cross-sectional qualitative study was conducted, adopting a case study strategy. A sample of 15 employees was purposively selected from a single medical scheme administration company in SA. Semi-structured interviews were conducted to collect data from study participants. A thematic analysis of the data was done using ATLAS.ti software (ATLAS.ti Scientific Software Development, Germany). The study population comprised the 17 companies that administer medical schemes in SA. Data were collected from 15 study participants, who were selected from the medical scheme administrator chosen as a case study. The study found that medical schemes were defrauded in numerous ways. The perpetrators of this type of fraud include healthcare service providers, medical scheme members, employees, brokers and syndicates. Medical schemes are mostly defrauded by the submission of false claims by service providers and syndicates. Fraud committed by medical scheme members encompasses the sharing of medical scheme benefits with non-members (card farming) and non-disclosure of pre-existing conditions at the application stage. The study concluded that perpetrators of fraud have found several ways of defrauding SA medical schemes regarding claims. Understanding and identifying the types of fraud events facing medical schemes is the initial step towards establishing methods to mitigate this risk. Future studies should examine strategies to manage fraudulent medical scheme claims.

  8. Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.

    PubMed

    Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin

    2012-06-10

    The digital pixel driving scheme makes organic light-emitting diode (OLED) microdisplays more immune to pixel luminance variations and simplifies the circuit architecture and design flow compared with the analog pixel driving scheme. Additionally, it is easily applied in fully digital systems. However, the data bottleneck becomes a notable problem as the numbers of pixels and gray levels grow dramatically. This paper discusses the ability of digital driving to achieve kilo-gray levels for megapixel displays. The optimal scan strategy is proposed for creating ultra-high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray-level linearity. A 1280×1024×3 OLED-on-silicon microdisplay with 4096 gray levels is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in a 0.35 μm 3.3 V-6 V dual-voltage, one-polysilicon-layer, four-metal-layer (1P4M) complementary metal-oxide semiconductor (CMOS) process with a custom top metal. The design aspects of the optimal scan controller are also discussed. The test results show that the gray-level linearity of the correction schemes for the optimal scan strategy is acceptable to the human eye.

  9. Public attitudes toward stuttering in Turkey: probability versus convenience sampling.

    PubMed

    Ozdemir, R Sertan; St Louis, Kenneth O; Topbaş, Seyhun

    2011-12-01

    A Turkish translation of the Public Opinion Survey of Human Attributes-Stuttering (POSHA-S) was used to compare probability versus convenience sampling to measure public attitudes toward stuttering. A convenience sample of adults in Eskişehir, Turkey was compared with two replicates of a school-based, probability cluster sampling scheme. The two replicates of the probability sampling scheme yielded similar demographic samples, both of which were different from the convenience sample. Components of subscores on the POSHA-S were significantly different in more than half of the comparisons between convenience and probability samples, indicating important differences in public attitudes. If POSHA-S users intend to generalize to specific geographic areas, results of this study indicate that probability sampling is a better research strategy than convenience sampling. The reader will be able to: (1) discuss the difference between convenience sampling and probability sampling; (2) describe a school-based probability sampling scheme; and (3) describe differences in POSHA-S results from convenience sampling versus probability sampling. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. The solution of the optimization problem of small energy complexes using linear programming methods

    NASA Astrophysics Data System (ADS)

    Ivanin, O. A.; Director, L. B.

    2016-11-01

    Linear programming methods were used to solve the optimization problem of schemes and operation modes of distributed generation energy complexes. The applicability conditions of the simplex method, as applied to energy complexes including renewable energy installations (solar, wind), diesel generators, and energy storage, are considered. An analysis of decomposition algorithms for various schemes of energy complexes was carried out. The results of optimization calculations for energy complexes operated autonomously and as part of a distribution grid are presented.
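
    A toy example of the kind of linear program involved is sketched below (illustrative data and structure only): hourly diesel output and battery discharge are chosen to meet residual demand after solar at minimum cost, using scipy's linprog.

```python
# Toy linear program for a small autonomous energy complex (illustrative only).
import numpy as np
from scipy.optimize import linprog

demand = np.array([30.0, 50.0, 80.0, 60.0])     # kWh per hour
solar = np.array([0.0, 40.0, 35.0, 5.0])        # kWh per hour
residual = np.maximum(demand - solar, 0.0)

T = len(demand)
fuel_cost, batt_cost = 0.30, 0.05               # $/kWh
diesel_cap, batt_energy = 60.0, 40.0            # hourly cap, stored energy

# Variables x = [diesel_0..diesel_3, discharge_0..discharge_3]
c = np.r_[np.full(T, fuel_cost), np.full(T, batt_cost)]

# Meet residual demand each hour: diesel_t + discharge_t >= residual_t
A_ub = np.hstack([-np.eye(T), -np.eye(T)])
b_ub = -residual
# Total battery discharge limited by the stored energy.
A_ub = np.vstack([A_ub, np.r_[np.zeros(T), np.ones(T)]])
b_ub = np.r_[b_ub, batt_energy]

bounds = [(0, diesel_cap)] * T + [(0, None)] * T
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x.round(1), "cost =", round(res.fun, 2))
```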

  11. Simultaneous optimization of limited sampling points for pharmacokinetic analysis of amrubicin and amrubicinol in cancer patients.

    PubMed

    Makino, Yoshinori; Watanabe, Michiko; Makihara, Reiko Ando; Nokihara, Hiroshi; Yamamoto, Noboru; Ohe, Yuichiro; Sugiyama, Erika; Sato, Hitoshi; Hayashi, Yoshikazu

    2016-09-01

    Limited sampling points for both amrubicin (AMR) and its active metabolite amrubicinol (AMR-OH) were simultaneously optimized using Akaike's information criterion (AIC) calculated by pharmacokinetic modeling. In this pharmacokinetic study, 40 mg/m² of AMR was administered as a 5-min infusion on three consecutive days to 21 Japanese lung cancer patients. Blood samples were taken at 0, 0.08, 0.25, 0.5, 1, 2, 4, 8 and 24 h after drug infusion, and AMR and AMR-OH concentrations in plasma were quantitated using high-performance liquid chromatography. The pharmacokinetic profile of AMR was characterized using a three-compartment model and that of AMR-OH using a one-compartment model following a first-order absorption process. These pharmacokinetic profiles were then integrated into one pharmacokinetic model for simultaneous fitting of AMR and AMR-OH. After fitting to the pharmacokinetic model, 65 combinations of four sampling points from the concentration profiles were evaluated for their AICs. Stepwise regression analysis was applied to select the sampling points for AMR and AMR-OH that best predict the area under the concentration-time curves (AUCs). Of the three combinations that yielded favorable AIC values, 0.25, 2, 4 and 8 h yielded the best AUC prediction for both AMR (R² = 0.977) and AMR-OH (R² = 0.886). The prediction error for AUC was less than 15%. The optimal limited sampling points for AMR and AMR-OH after AMR infusion were found to be 0.25, 2, 4 and 8 h, enabling less frequent blood sampling in further expanded pharmacokinetic studies for both AMR and AMR-OH. © 2016 John Wiley & Sons Australia, Ltd.
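
    The subset-evaluation idea can be illustrated as below (synthetic one-compartment profiles and ordinary least squares, not the study's three-compartment model): each 4-point subset is scored by the AIC of a regression predicting the reference AUC from the concentrations at those times.

```python
# Hedged sketch of limited-sampling-point selection by AIC on synthetic data.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
times = np.array([0.08, 0.25, 0.5, 1, 2, 4, 8, 24])
n_pat = 40
ke = rng.lognormal(np.log(0.2), 0.3, n_pat)       # per-patient elimination rate
dose_v = rng.lognormal(np.log(10.0), 0.2, n_pat)  # dose / volume of distribution
conc = dose_v[:, None] * np.exp(-ke[:, None] * times[None, :])
conc *= rng.lognormal(0.0, 0.1, conc.shape)       # residual variability
# Reference AUC by the trapezoidal rule over the full profile.
auc_ref = np.sum((conc[:, 1:] + conc[:, :-1]) / 2 * np.diff(times), axis=1)

def aic_of_subset(idx):
    X = np.column_stack([np.ones(n_pat), conc[:, list(idx)]])
    beta, *_ = np.linalg.lstsq(X, auc_ref, rcond=None)
    rss = np.sum((auc_ref - X @ beta) ** 2)
    return n_pat * np.log(rss / n_pat) + 2 * X.shape[1]

best = sorted(combinations(range(len(times)), 4), key=aic_of_subset)[:3]
print([tuple(times[list(i)]) for i in best])
```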

  12. Statistical inference for the additive hazards model under outcome-dependent sampling.

    PubMed

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interests to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the risk of radon exposure to cancer.

  13. Statistical inference for the additive hazards model under outcome-dependent sampling

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo

    2015-01-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interests to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the risk of radon exposure to cancer. PMID:26379363

  14. Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.

    PubMed

    Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk

    2015-08-01

    The purpose of this study was to evaluate grouping schemes for exposure to total dust in cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken from nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performances of different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job and factory combinations according to average dust exposure showed the best prediction performance and highest elasticity among various grouping schemes. Our findings suggest a grouping method based on ranking of job, and factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  15. Efficiently sphere-decodable physical layer transmission schemes for wireless storage networks

    NASA Astrophysics Data System (ADS)

    Lu, Hsiao-Feng Francis; Barreal, Amaro; Karpuk, David; Hollanti, Camilla

    2016-12-01

    Three transmission schemes over a new type of multiple-access channel (MAC) model with inter-source communication links are proposed and investigated in this paper. This new channel model is well motivated by, e.g., wireless distributed storage networks, where communication to repair a lost node takes place from helper nodes to a repairing node over a wireless channel. Since in many wireless networks nodes can come and go in an arbitrary manner, there must be an inherent capability of inter-node communication between every pair of nodes. Assuming that communication is possible between every pair of helper nodes, the newly proposed schemes are based on various smart time-sharing and relaying strategies. In other words, certain helper nodes will be regarded as relays, thereby converting the conventional uncooperative multiple-access channel to a multiple-access relay channel (MARC). The diversity-multiplexing gain tradeoff (DMT) of the system together with efficient sphere-decodability and low structural complexity in terms of the number of antennas required at each end is used as the main design objectives. While the optimal DMT for the new channel model is fully open, it is shown that the proposed schemes outperform the DMT of the simple time-sharing protocol and, in some cases, even the optimal uncooperative MAC DMT. While using a wireless distributed storage network as a motivating example throughout the paper, the MAC transmission techniques proposed here are completely general and as such applicable to any MAC communication with inter-source communication links.

  16. The Importance of Reagent Lot Registration in External Quality Assurance/Proficiency Testing Schemes.

    PubMed

    Stavelin, Anne; Riksheim, Berit Oddny; Christensen, Nina Gade; Sandberg, Sverre

    2016-05-01

    Providers of external quality assurance (EQA)/proficiency testing schemes have traditionally focused on evaluation of measurement procedures and participant performance and little attention has been given to reagent lot variation. The aim of the present study was to show the importance of reagent lot registration and evaluation in EQA schemes. Results from the Noklus (Norwegian Quality Improvement of Primary Care Laboratories) urine albumin/creatinine ratio (ACR) and prothrombin time international normalized ratio (INR) point-of-care EQA schemes from 2009-2015 were used as examples in this study. The between-participant CV for Afinion ACR increased from 6%-7% to 11% in 3 consecutive surveys. This increase was caused by differences between albumin reagent lots that were also observed when fresh urine samples were used. For the INR scheme, the CoaguChek INR results increased with the production date of the reagent lots, with reagent lot medians increasing from 2.0 to 2.5 INR and from 2.7 to 3.3 INR (from the oldest to the newest reagent lot) for 2 control levels, respectively. These differences in lot medians were not observed when native patient samples were used. Presenting results from different reagent lots in EQA feedback reports can give helpful information to the participants that may explain their deviant EQA results. Information regarding whether the reagent lot differences found in the schemes can affect patient samples is important and should be communicated to the participants as well as to the manufacturers. EQA providers should consider registering and evaluating results from reagent lots. © 2016 American Association for Clinical Chemistry.

  17. A Novel Scheme for an Energy Efficient Internet of Things Based on Wireless Sensor Networks

    PubMed Central

    Rani, Shalli; Talwar, Rajneesh; Malhotra, Jyoteesh; Ahmed, Syed Hassan; Sarkar, Mahasweta; Song, Houbing

    2015-01-01

    One of the emerging networking standards that bridges the gap between the physical world and the cyber world is the Internet of Things (IoT). In the IoT, smart objects communicate with each other, data are gathered, and various user requests are satisfied by querying the gathered data. Developing energy-efficient schemes for the IoT is a challenging issue: as the IoT becomes more complex due to its large scale, the current techniques of wireless sensor networks (WSNs) cannot be applied to it directly. To achieve a green networked IoT, this paper addresses energy efficiency by proposing a novel deployment scheme. The scheme introduces: (1) a hierarchical network design; (2) a model for the energy-efficient IoT; and (3) a minimum-energy-consumption transmission algorithm that implements the optimal model. The simulation results show that the new scheme is more energy efficient and flexible than traditional WSN schemes, and consequently it can be implemented for efficient communication in the IoT. PMID:26569260
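
    To make the minimum-energy-transmission idea concrete, the sketch below uses the standard first-order radio model common in WSN analysis to decide whether a direct transmission or a hop through an intermediate (cluster-head) node costs less energy. The abstract does not give the paper's actual energy model or algorithm, so the constants and the decision rule are illustrative assumptions only.

    ```python
    # Minimal sketch: direct vs. relayed transmission under the first-order
    # radio model (free-space path loss). Constants are assumed, not the paper's.
    E_ELEC = 50e-9      # J/bit, electronics energy (assumed)
    EPS_FS = 10e-12     # J/bit/m^2, free-space amplifier energy (assumed)

    def tx_energy(bits: int, distance_m: float) -> float:
        """Energy to transmit `bits` over `distance_m`."""
        return bits * (E_ELEC + EPS_FS * distance_m ** 2)

    def rx_energy(bits: int) -> float:
        """Energy to receive `bits`."""
        return bits * E_ELEC

    def cheaper_via_relay(bits: int, d_direct: float, d_to_relay: float,
                          d_relay_to_sink: float) -> bool:
        """True if relaying through an intermediate node costs less total energy
        than a single direct transmission to the sink."""
        direct = tx_energy(bits, d_direct)
        relayed = (tx_energy(bits, d_to_relay) + rx_energy(bits)
                   + tx_energy(bits, d_relay_to_sink))
        return relayed < direct

    print(cheaper_via_relay(4000, d_direct=200.0, d_to_relay=80.0,
                            d_relay_to_sink=90.0))  # True for these distances
    ```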

  18. Direct model-based predictive control scheme without cost function for voltage source inverters with reduced common-mode voltage

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin

    2018-04-01

    This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitively evaluating a cost function. To regulate the output currents while keeping the CMV in the range of -Vdc/6 to +Vdc/6, the developed method uses, as finite control resources, only the non-zero voltage vectors, excluding the zero voltage vectors, which produce CMVs of ±Vdc/2 in the VSI. In model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method applies two non-zero voltage vectors within one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method has calculated the future reference voltage vector, saving the effort of repeatedly evaluating a cost function. The two non-zero voltage vectors are then optimally allocated so that the output current approaches the reference current as closely as possible. Thus, low CMV, rapid current-following capability, and satisfactory output current ripple performance are attained by the developed method. Simulation and experimental results verify the effectiveness of the developed method.
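
    The core idea of approximating a predicted reference voltage vector with only two active (non-zero) vectors within one sampling period can be illustrated as below. This is a brute-force search over vector pairs and time splits, given here only to show the geometry of the problem; it is not the paper's direct, cost-function-free selection rule.

    ```python
    # Minimal sketch: pick two active vectors of a two-level VSI and a duty split
    # so their time-weighted average best tracks a reference voltage vector.
    import cmath
    import numpy as np

    VDC = 1.0  # DC-link voltage (normalized, assumed)
    # Six active vectors of a two-level VSI in the alpha-beta plane.
    ACTIVE_VECTORS = [(2.0 / 3.0) * VDC * cmath.exp(1j * k * cmath.pi / 3.0)
                      for k in range(6)]

    def best_two_vector_split(v_ref: complex, steps: int = 101):
        """Return (vector_a, vector_b, duty_a, error) minimizing
        |v_ref - (duty_a * vector_a + (1 - duty_a) * vector_b)|."""
        best = (None, None, 0.0, float("inf"))
        for a in ACTIVE_VECTORS:
            for b in ACTIVE_VECTORS:
                for duty in np.linspace(0.0, 1.0, steps):
                    err = abs(v_ref - (duty * a + (1.0 - duty) * b))
                    if err < best[3]:
                        best = (a, b, duty, err)
        return best

    va, vb, duty, err = best_two_vector_split(0.4 * cmath.exp(1j * 0.3))
    print(f"duty_a={duty:.2f}, tracking error={err:.4f}")
    ```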

  19. Optimal parameter estimation with a fixed rate of abstention

    NASA Astrophysics Data System (ADS)

    Gendra, B.; Ronco-Bonvehi, E.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.

    2013-07-01

    The problems of optimally estimating a phase, a direction, and the orientation of a Cartesian frame (or trihedron) with general pure states are addressed. Special emphasis is put on estimation schemes that allow for inconclusive answers or abstention. It is shown that such schemes enable drastic improvements, up to the extent of attaining the Heisenberg limit in some cases, and the required amount of abstention is quantified. A general mathematical framework to deal with the asymptotic limit of many qubits or large angular momentum is introduced and used to obtain analytical results for all the relevant cases under consideration. Parameter estimation with abstention is also formulated as a semidefinite programming problem, for which very efficient numerical optimization techniques exist.
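
    To illustrate how a measurement optimization with a fixed abstention (inconclusive-outcome) rate can be cast as a semidefinite program, the sketch below solves a toy two-state qubit discrimination problem with an inconclusive POVM element. This is only an analogue of the semidefinite formulation mentioned in the abstract, not the paper's phase, direction, or frame estimation problems; it assumes cvxpy and numpy are available.

    ```python
    # Minimal sketch: maximize the success probability of discriminating two
    # pure qubit states subject to a fixed abstention probability Q, as an SDP.
    import numpy as np
    import cvxpy as cp

    def bloch_state(theta: float) -> np.ndarray:
        """Pure qubit density matrix for cos(t/2)|0> + sin(t/2)|1>."""
        v = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
        return np.outer(v, v)

    rho0, rho1 = bloch_state(0.0), bloch_state(0.6)   # two candidate states
    Q = 0.3                                           # fixed abstention rate

    # POVM elements: answer "0", answer "1", abstain.
    P0 = cp.Variable((2, 2), symmetric=True)
    P1 = cp.Variable((2, 2), symmetric=True)
    Pa = cp.Variable((2, 2), symmetric=True)
    rho_avg = 0.5 * (rho0 + rho1)

    constraints = [P0 >> 0, P1 >> 0, Pa >> 0,
                   P0 + P1 + Pa == np.eye(2),
                   cp.trace(Pa @ rho_avg) == Q]       # abstention probability fixed
    success = 0.5 * cp.trace(P0 @ rho0) + 0.5 * cp.trace(P1 @ rho1)
    prob = cp.Problem(cp.Maximize(success), constraints)
    prob.solve()
    print(f"max conditional success = {prob.value / (1.0 - Q):.3f}")
    ```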

  20. Using Soil Apparent Electrical Conductivity to Optimize Sampling of Soil Penetration Resistance and to Improve the Estimations of Spatial Patterns of Soil Compaction

    PubMed Central

    Siqueira, Glécio Machado; Dafonte, Jorge Dafonte; Bueno Lema, Javier; Valcárcel Armesto, Montserrat; Silva, Ênio Farias França e

    2014-01-01

    This study presents a combined application of an EM38DD sensor for assessing soil apparent electrical conductivity (ECa) and a dual-sensor vertical penetrometer Veris-3000 for measuring soil electrical conductivity (ECveris) and soil resistance to penetration (PR). The measurements were made in a 6 ha field in Northwestern Spain, cropped with forage maize under no-tillage, after sowing. The objective was to use the ECa data to improve the estimation of soil PR. First, the ECa data were used to design an optimized sampling scheme for soil PR at 40 points. Correlation analysis then showed a significant negative relationship between soil PR and ECa, ranging from −0.36 to −0.70 for the studied soil layers. The spatial dependence of soil PR was best described by spherical models in most soil layers. However, below 0.50 m the spatial pattern of soil PR showed a pure nugget effect, which could be due to the limited number of PR data available in these layers, as the values of this parameter were often above the measuring range of the equipment (5.5 MPa). The use of ECa as a secondary variable slightly improved the estimation of PR by universal cokriging compared with kriging. PMID:25610899
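
    The geostatistical building blocks used in the study, a spherical variogram model and kriging weights obtained from the kriging system, can be sketched as below. The nugget, sill, and range values are illustrative assumptions, not the fitted parameters of the soil PR layers in the paper, and the sketch uses ordinary kriging rather than the universal cokriging applied there.

    ```python
    # Minimal sketch: spherical variogram model and ordinary-kriging weights
    # obtained by solving the kriging system [[Gamma, 1],[1^T, 0]] [w; mu] = [g0; 1].
    import numpy as np

    def spherical_variogram(h, nugget=0.1, sill=1.0, rng=50.0):
        """gamma(h) = nugget + (sill - nugget) * (1.5 h/a - 0.5 (h/a)^3) for h <= a,
        gamma(h) = sill for h > a, and gamma(0) = 0."""
        h = np.asarray(h, dtype=float)
        s = np.minimum(h / rng, 1.0)
        return np.where(h == 0, 0.0, nugget + (sill - nugget) * (1.5 * s - 0.5 * s ** 3))

    def ordinary_kriging_weights(sample_xy: np.ndarray, target_xy: np.ndarray):
        """Kriging weights and Lagrange multiplier for one estimation point."""
        n = len(sample_xy)
        d = np.linalg.norm(sample_xy[:, None, :] - sample_xy[None, :, :], axis=-1)
        lhs = np.zeros((n + 1, n + 1))
        lhs[:n, :n] = spherical_variogram(d)
        lhs[:n, n] = lhs[n, :n] = 1.0
        g0 = spherical_variogram(np.linalg.norm(sample_xy - target_xy, axis=1))
        w = np.linalg.solve(lhs, np.append(g0, 1.0))
        return w[:n], w[n]

    pts = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0], [25.0, 25.0]])
    weights, mu = ordinary_kriging_weights(pts, np.array([10.0, 10.0]))
    print(weights.round(3), weights.sum().round(3))   # weights sum to 1
    ```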