An optimization-based sampling scheme for phylogenetic trees.
Misra, Navodit; Blelloch, Guy; Ravi, R; Schwartz, Russell
2011-11-01
Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for heated versions of some important special cases. We demonstrate the efficiency and versatility of the method by an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model.
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, and possibly even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The
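As a rough illustration of the MMSD criterion described in this abstract, the following is a minimal simulated-annealing sketch: one candidate sampling location is perturbed per step, and moves are accepted with the usual Metropolis rule under a geometric cooling law. All data are synthetic and the function names are illustrative; this is not the MSANOS implementation.

```python
import numpy as np

def mmsd(samples, eval_pts):
    # mean of the shortest distances from each evaluation point to its nearest sample
    d = np.linalg.norm(eval_pts[:, None, :] - samples[None, :, :], axis=2)
    return d.min(axis=1).mean()

def anneal_mmsd(samples, eval_pts, n_iter=2000, t0=1.0, cooling=0.995, seed=0):
    rng = np.random.default_rng(seed)
    cur, cur_cost = samples.copy(), mmsd(samples, eval_pts)
    best, best_cost = cur.copy(), cur_cost
    t = t0
    for _ in range(n_iter):
        cand = cur.copy()
        i = rng.integers(len(cand))
        # perturb one sampling location, clipped to the field boundary
        cand[i] = np.clip(cand[i] + rng.normal(0.0, 0.05, 2), 0.0, 1.0)
        cost = mmsd(cand, eval_pts)
        if cost < cur_cost or rng.random() < np.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand.copy(), cost
        t *= cooling  # geometric cooling law
    return best, best_cost

# toy example: spread 10 sample locations over the unit square
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20)), -1).reshape(-1, 2)
rng = np.random.default_rng(1)
init = rng.random((10, 2))
opt, cost = anneal_mmsd(init, grid)
```

The MWMSD variant would only change the objective, weighting each evaluation point by the local ECa gradient before averaging.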
Sampling scheme optimization for diffuse optical tomography based on data and image space rankings
NASA Astrophysics Data System (ADS)
Sabir, Sohail; Kim, Changhwan; Cho, Sanghoon; Heo, Duchang; Kim, Kee Hyun; Ye, Jong Chul; Cho, Seungryong
2016-10-01
We present a methodology for the optimization of sampling schemes in diffuse optical tomography (DOT). The proposed method exploits the singular value decomposition (SVD) of the sensitivity matrix, or weight matrix, in DOT. Two mathematical metrics are introduced to assess and determine the optimum source-detector measurement configuration in terms of data correlation and image-space resolution. The key idea of the work is to weight each data measurement (row of the sensitivity matrix), and similarly each unknown image basis (column of the sensitivity matrix), according to its contribution to the rank of the sensitivity matrix. The proposed metrics offer a perspective on data sampling and provide an efficient way of optimizing sampling schemes in DOT. We evaluated various acquisition geometries often used in DOT by use of the proposed metrics. By iteratively selecting an optimal sparse set of data measurements, we showed that one can design a DOT scanning protocol that provides essentially the same image quality at a much reduced sampling density.
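One simple way to realize the row/column weighting idea in this abstract is through SVD-based leverage-style scores: each measurement (row) and each image basis (column) is scored by its energy in the leading singular subspace. This is a generic sketch of the idea on a random matrix, not the paper's exact metrics.

```python
import numpy as np

def measurement_scores(J, k=None):
    # Score each row (measurement) and column (image basis) of the
    # sensitivity matrix J by its energy in the top-k singular vectors.
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    if k is None:
        k = int(np.sum(s > s[0] * 1e-10))  # numerical rank
    row_scores = (U[:, :k] ** 2).sum(axis=1)   # data-space ranking
    col_scores = (Vt[:k, :] ** 2).sum(axis=0)  # image-space ranking
    return row_scores, col_scores

rng = np.random.default_rng(0)
J = rng.standard_normal((40, 25))  # hypothetical 40 measurements x 25 voxels
r, c = measurement_scores(J)
keep = np.argsort(r)[::-1][:20]    # retain the 20 highest-scoring measurements
```

Because the columns of U are orthonormal, the row scores sum to the rank k, so they behave like a budget distributed over the measurements.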
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as a research object, thirteen sample sets from different regions were arranged surrounding the road network, the spatial configuration of which was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by the terrain analysis. Based on the results of optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perception model on the basis of neural network approach was implemented. The comparison between these two models was carried out then. The results revealed that the proposed approach was practicable in optimizing soil sampling scheme. The optimal configuration was capable of gaining soil-landscape knowledge exactly, and the accuracy of optimal configuration was better than that of original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of road network, historical samples, and digital elevation data, which provided an effective means as well as a theoretical basis for determining the sampling configuration and displaying spatial distribution of soil organic matter with low cost and high efficiency.
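The multiple linear regression step described above, predicting a soil attribute from terrain covariates at optimized sample sites, can be sketched with ordinary least squares. The covariates, coefficients, and sample counts here are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical terrain covariates at 13 optimized sample sites:
# slope, plane curvature, profile curvature, wetness index, stream power, sediment transport
rng = np.random.default_rng(2)
X = rng.standard_normal((13, 6))
beta_true = np.array([1.5, -0.8, 0.3, 0.6, -0.2, 0.1])
y = X @ beta_true + 4.0 + rng.normal(0, 0.05, 13)  # synthetic soil organic matter

A = np.column_stack([np.ones(len(X)), X])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
```

A multilayer perceptron, as compared in the study, would replace the linear map `A @ coef` with a learned nonlinear function of the same covariates.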
Yin, Jingjing; Samawi, Hani; Linder, Daniel
2016-07-01
A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject as either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, defined as the maximum of (sensitivity + specificity − 1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on the two sampling schemes are derived analytically, and we prove that the estimators based on ranked set sampling are more efficient than those based on simple random sampling, with both estimators being asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed to illustrate the proposed method.
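The Youden-index cut-off estimation described here reduces to maximizing J(c) = F_healthy(c) − F_diseased(c), where each CDF is smoothed with a Gaussian kernel. The sketch below uses simple random samples and a Silverman-style bandwidth; the ranked-set-sampling refinement and the asymptotics are beyond this illustration.

```python
import numpy as np
from math import erf

def kde_cdf(x, data, h):
    # Gaussian-kernel smoothed estimate of P(X <= x) at each point of x
    z = (x[:, None] - data[None, :]) / (h * np.sqrt(2.0))
    return 0.5 * (1.0 + np.vectorize(erf)(z)).mean(axis=1)

rng = np.random.default_rng(3)
healthy = rng.normal(0.0, 1.0, 200)   # synthetic biomarker values
diseased = rng.normal(2.0, 1.0, 200)
h_h = 1.06 * healthy.std() * len(healthy) ** -0.2   # Silverman-style bandwidth
h_d = 1.06 * diseased.std() * len(diseased) ** -0.2

grid = np.linspace(-3, 5, 400)
# J(c) = sensitivity + specificity - 1 = F_healthy(c) - F_diseased(c)
youden = kde_cdf(grid, healthy, h_h) - kde_cdf(grid, diseased, h_d)
cutoff = grid[np.argmax(youden)]
```

For two unit-variance normals centered at 0 and 2, the true optimal cut-off is at their midpoint, so the estimate should land near 1.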
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
NASA Astrophysics Data System (ADS)
Yan, Hongyong; Yang, Lei; Li, Xiang-Yang
2016-12-01
High-order staggered-grid finite-difference (SFD) schemes have been widely used to improve the accuracy of wave equation modeling. However, the high-order SFD coefficients for spatial derivatives are usually determined by the Taylor-series expansion (TE) method, which yields high accuracy only at small wavenumbers. Some conventional optimization methods can achieve high accuracy at large wavenumbers, but they hardly guarantee small numerical dispersion error at small wavenumbers. In this paper, we develop new optimal explicit SFD (ESFD) and implicit SFD (ISFD) schemes for wave equation modeling. We first derive the optimal ESFD and ISFD coefficients for the first-order spatial derivatives by applying a combination of the TE and a sampling approximation to the dispersion relation, and then analyze their numerical accuracy. Finally, we perform elastic wave modeling with the ESFD and ISFD schemes based on the TE method and the optimal method, respectively. When the appropriate number of and interval between the sampling points are chosen, these optimal schemes have extremely high accuracy at small wavenumbers, and can also guarantee small numerical dispersion error at large wavenumbers. Numerical accuracy analyses and modeling results demonstrate that the optimal ESFD and ISFD schemes can efficiently suppress numerical dispersion and significantly improve modeling accuracy compared to the TE-based ESFD and ISFD schemes.
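The TE baseline that this abstract contrasts against can be made concrete: the staggered-grid first-derivative coefficients follow from matching odd Taylor terms, and the numerical wavenumber 2 Σ c_m sin((2m−1)kh/2) shows the small-wavenumber accuracy. This sketch covers only the TE side, not the paper's optimal ESFD/ISFD coefficients.

```python
import numpy as np

def te_staggered_coeffs(M):
    # Taylor-series (TE) conditions for a 2M-th order staggered-grid
    # first-derivative stencil with offsets (2m-1)/2 grid spacings:
    #   sum_m c_m (2m-1)^1 = 1,  sum_m c_m (2m-1)^j = 0 for odd j = 3..2M-1
    odd = 2.0 * np.arange(1, M + 1) - 1.0
    A = np.array([odd ** (2 * k + 1) for k in range(M)])
    b = np.zeros(M); b[0] = 1.0
    return np.linalg.solve(A, b)

def numerical_wavenumber(kh, c):
    # effective k*h produced by the stencil for a plane wave of true k*h
    m = np.arange(1, len(c) + 1)
    return 2.0 * np.sum(c * np.sin((2 * m - 1) * kh / 2.0))

c = te_staggered_coeffs(2)  # the classic 4th-order staggered stencil
err_small = abs(numerical_wavenumber(0.1, c) - 0.1)
err_large = abs(numerical_wavenumber(2.5, c) - 2.5)
```

For M=2 the solve reproduces the well-known coefficients 9/8 and −1/24, and the dispersion error is tiny at kh = 0.1 but large at kh = 2.5, exactly the TE weakness the optimized schemes target.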
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
Optimal probabilistic dense coding schemes
NASA Astrophysics Data System (ADS)
Kögler, Roger A.; Neves, Leonardo
2017-04-01
Dense coding with non-maximally entangled states has been investigated in many different scenarios. We revisit this problem for protocols adopting the standard encoding scheme. In this case, the set of possible classical messages cannot be perfectly distinguished due to the non-orthogonality of the quantum states carrying them. So far, the decoding process has been approached in two ways: (i) The message is always inferred, but with an associated (minimum) error; (ii) the message is inferred without error, but only sometimes; in case of failure, nothing else is done. Here, we generalize on these approaches and propose novel optimal probabilistic decoding schemes. The first uses quantum-state separation to increase the distinguishability of the messages with an optimal success probability. This scheme is shown to include (i) and (ii) as special cases and continuously interpolate between them, which enables the decoder to trade-off between the level of confidence desired to identify the received messages and the success probability for doing so. The second scheme, called multistage decoding, applies only for qudits ( d-level quantum systems with d>2) and consists of further attempts in the state identification process in case of failure in the first one. We show that this scheme is advantageous over (ii) as it increases the mutual information between the sender and receiver.
Interpolation-Free Scanning And Sampling Scheme For Tomographic Reconstructions
NASA Astrophysics Data System (ADS)
Donohue, K. D.; Saniie, J.
1987-01-01
In this paper a sampling scheme is developed for computed tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation.
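One simple way to realize the grid-derived angle idea is to take projection directions with rational slope q/p for coprime integers (p, q): along such directions rays pass exactly through Cartesian grid points, at the cost of a per-angle detector spacing of 1/hypot(p, q) grid units. This is an illustrative construction consistent with the abstract, not necessarily the authors' exact angle set.

```python
import numpy as np
from math import gcd, atan2, hypot

def grid_aligned_angles(max_pq=4, spacing=1.0):
    # Angles tan(theta) = q/p with gcd(p, q) = 1: ray samples land on
    # grid points, so back-projection needs no interpolation. Each angle
    # gets its own (denser) sampling rate spacing/hypot(p, q).
    angles = []
    for p in range(1, max_pq + 1):
        for q in range(0, max_pq + 1):
            if gcd(p, q) == 1:
                angles.append((atan2(q, p), spacing / hypot(p, q)))
    angles.append((np.pi / 2, spacing))  # vertical projection
    return sorted(angles)

for theta, dt in grid_aligned_angles(2):
    print(f"theta = {np.degrees(theta):6.2f} deg, sample spacing = {dt:.3f}")
```

Larger max_pq gives a finer angular set at the price of finer per-angle sampling, which mirrors the resolution-versus-load trade-off discussed in the abstract.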
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
Accelerated failure time model under general biased sampling scheme.
Kim, Jane Paik; Sit, Tony; Ying, Zhiliang
2016-07-01
Right-censored time-to-event data are sometimes observed from a (sub)cohort of patients whose survival times can be subject to outcome-dependent sampling schemes. In this paper, we propose a unified estimation method for semiparametric accelerated failure time models under general biased sampling schemes. The proposed estimator of the regression covariates is developed upon a bias-offsetting weighting scheme and is proved to be consistent and asymptotically normally distributed. Large sample properties for the estimator are also derived. Using rank-based monotone estimating functions for the regression parameters, we find that the estimating equations can be easily solved via convex optimization. The methods are confirmed through simulations and illustrated by application to real datasets under various sampling schemes, including length-biased sampling, the case-cohort design, and its variants.
Why sampling scheme matters: the effect of sampling scheme on landscape genetic results
Michael K. Schwartz; Kevin S. McKelvey
2008-01-01
There has been a recent trend in genetic studies of wild populations where researchers have changed their sampling schemes from sampling pre-defined populations to sampling individuals uniformly across landscapes. This reflects the fact that many species under study are continuously distributed rather than clumped into obvious "populations". Once individual...
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
NASA Technical Reports Server (NTRS)
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite-difference scheme modified by us: 4th-order Runge-Kutta time stepping with 4th-order pentadiagonal compact spatial discretization having maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
[Study on optimal model of hypothetical work injury insurance scheme].
Ye, Chi-yu; Dong, Heng-jin; Wu, Yuan; Duan, Sheng-nan; Liu, Xiao-fang; You, Hua; Hu, Hui-mei; Wang, Lin-hao; Zhang, Xing; Wang, Jing
2013-12-01
To explore an optimal model of a hypothetical work injury insurance scheme that is in line with the wishes of workers, based on the problems in the implementation of work injury insurance in China, and to provide useful information for relevant policy makers. Multistage cluster sampling was used to select subjects: first, 9 small, medium, and large enterprises were selected from three cities (counties) in Zhejiang Province, China, according to economic development, transportation, and cooperation; then, 31 workshops were randomly selected from the 9 enterprises. Face-to-face interviews were conducted by trained interviewers using a pre-designed questionnaire among all workers in the 31 workshops. After optimization of the hypothetical work injury insurance scheme, the willingness to participate in the scheme increased from 73.87% to 80.96%; the average willingness to pay for the scheme increased from 2.21% (51.77 yuan) to 2.38% (54.93 yuan) of monthly wage; the median willingness to pay increased from 1% to 1.2% of monthly wage, but decreased from 35 yuan to 30 yuan in absolute terms. The optimal model of the hypothetical work injury insurance scheme covers all national and provincial statutory occupational diseases and work accidents, as well as consultations about occupational diseases. The scheme is supposed to be implemented nationwide by the National Social Security Department, without regional differences. The premium is borne by the state, enterprises, and individuals, and an independent insurance fund is kept in a lifetime personal account for each insured individual. The premium is not refunded in any event. Compensation for occupational diseases or work accidents is unrelated to the enterprises of the insured workers but related to the length of insurance. The insurance becomes effective one year after enrollment, but takes effect immediately if an occupational disease or accident occurs. The optimal model of hypothetical work injury insurance
An optimized spectral difference scheme for CAA problems
NASA Astrophysics Data System (ADS)
Gao, Junhui; Yang, Zhigang; Li, Xiaodong
2012-05-01
In the implementation of the spectral difference (SD) method, the conserved variables at the flux points are calculated from the solution points using extrapolation or interpolation schemes. The errors incurred in using extrapolation and interpolation would result in instability. On the other hand, the difference between the left and right conserved variables at the edge interface will introduce dissipation to the SD method when applying a Riemann solver to compute the flux at the element interface. In this paper, an optimization of the extrapolation and interpolation schemes for the fourth-order SD method on quadrilateral elements is carried out in wavenumber space by minimizing their dispersion error over a selected band of wavenumbers. The optimized coefficients of the extrapolation and interpolation are presented, and the dispersion errors of the original and optimized schemes are plotted and compared. An improvement of the dispersion error over the resolvable wavenumber range of the SD method is obtained. The stability of the optimized fourth-order SD scheme is analyzed. It is found that the stability of the 4th-order scheme with Chebyshev-Gauss-Lobatto flux points, which is originally weakly unstable, has been improved through the optimization. The weak instability is eliminated completely if an additional second-order filter is applied on selected flux points. One- and two-dimensional linear wave propagation analyses are carried out for the optimized scheme. It is found that in the resolvable wavenumber range the new SD scheme is less dispersive and less dissipative than the original scheme, and the new scheme is less anisotropic for 2D wave propagation. The optimized SD solver is validated with four computational aeroacoustics (CAA) workshop benchmark problems. The numerical results with the optimized schemes agree much better with the analytical data than those with the original schemes.
Optimal Symmetric Ternary Quantum Encryption Schemes
NASA Astrophysics Data System (ADS)
Wang, Yu-qi; She, Kun; Huang, Ru-fen; Ouyang, Zhong
2016-11-01
In this paper, we present definitions of the orthogonality and the orthogonal rate of an encryption operator, and we provide a verification process for the former. Then, four improved ternary quantum encryption schemes are constructed. Compared with Scheme 1 (see Section 2.3), these four schemes demonstrate significant improvements in terms of calculation and execution efficiency. In particular, with the orthogonal rate ε as the security parameter, Scheme 3 (see Section 4.1) shows the highest level of security among them. Through custom interpolation functions, the ternary secret key source, composed of the digits 0, 1, and 2, is constructed. Finally, we discuss the security of both the ternary encryption operator and the secret key source; both show a high level of security and high execution efficiency.
Selecting optimal partitioning schemes for phylogenomic datasets.
Lanfear, Robert; Calcott, Brett; Kainer, David; Mayer, Christoph; Stamatakis, Alexandros
2014-04-17
Partitioning involves estimating independent models of molecular evolution for different subsets of sites in a sequence alignment, and has been shown to improve phylogenetic inference. Current methods for estimating best-fit partitioning schemes, however, are only computationally feasible with datasets of fewer than 100 loci. This is a problem because datasets with thousands of loci are increasingly common in phylogenetics. We develop two novel methods for estimating best-fit partitioning schemes on large phylogenomic datasets: strict and relaxed hierarchical clustering. These methods use information from the underlying data to cluster together similar subsets of sites in an alignment, and build on clustering approaches that have been proposed elsewhere. We compare the performance of our methods to each other, and to existing methods for selecting partitioning schemes. We demonstrate that while strict hierarchical clustering has the best computational efficiency on very large datasets, relaxed hierarchical clustering provides scalable efficiency and returns dramatically better partitioning schemes as assessed by common criteria such as AICc and BIC scores. These two methods provide the best current approaches to inferring partitioning schemes for very large datasets. We provide free open-source implementations of the methods in the PartitionFinder software. We hope that the use of these methods will help to improve the inferences made from large phylogenomic datasets.
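The hierarchical clustering idea in this abstract, greedily merging subsets of sites while an information criterion keeps improving, can be sketched with a toy stand-in model. Here each subset is scored with a per-group Gaussian fit to synthetic "site rates" instead of a real substitution model; this is a schematic of the clustering loop, not the PartitionFinder algorithm.

```python
import numpy as np

def bic(groups):
    # BIC for a toy model: each group gets its own Gaussian (2 params),
    # a stand-in for fitting one substitution model per subset.
    n = sum(len(g) for g in groups)
    ll = 0.0
    for g in groups:
        mu, sd = g.mean(), max(g.std(), 1e-6)
        ll += np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (g - mu)**2 / (2 * sd**2))
    return 2 * len(groups) * np.log(n) - 2 * ll

def greedy_merge(groups):
    # strict-hierarchical flavour: repeatedly merge the pair of subsets
    # that lowers BIC the most; stop when no merge improves it
    groups = [np.asarray(g, float) for g in groups]
    while len(groups) > 1:
        cur, best = bic(groups), None
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                trial = [g for t, g in enumerate(groups) if t not in (i, j)]
                trial.append(np.concatenate([groups[i], groups[j]]))
                d = bic(trial) - cur
                if d < 0 and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is None:
            break
        _, i, j = best
        merged = np.concatenate([groups[i], groups[j]])
        groups = [g for t, g in enumerate(groups) if t not in (i, j)] + [merged]
    return groups

rng = np.random.default_rng(4)
# four loci: two slow-evolving and two fast-evolving (synthetic site rates)
loci = [rng.normal(0.1, 0.02, 50), rng.normal(0.1, 0.02, 50),
        rng.normal(0.9, 0.05, 50), rng.normal(0.9, 0.05, 50)]
scheme = greedy_merge(loci)
```

On this toy input the loop should merge the two slow loci and the two fast loci, then stop, because pooling slow with fast would cost far more likelihood than the parameter penalty saves.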
Optimized Multilocus Sequence Typing (MLST) Scheme for Trypanosoma cruzi
Diosque, Patricio; Tomasini, Nicolás; Lauthier, Juan José; Messenger, Louisa Alexandra; Monje Rumi, María Mercedes; Ragone, Paula Gabriela; Alberti-D'Amato, Anahí Maitén; Pérez Brandán, Cecilia; Barnabé, Christian; Tibayrenc, Michel; Lewis, Michael David; Llewellyn, Martin Stephen; Miles, Michael Alexander; Yeo, Matthew
2014-01-01
Trypanosoma cruzi, the aetiological agent of Chagas disease, possesses extensive genetic diversity. This has led to the development of a plethora of molecular typing methods for the identification of both the known major genetic lineages and for more fine-scale characterization of different multilocus genotypes within these major lineages. Whole genome sequencing applied to large sample sizes is not currently viable and multilocus enzyme electrophoresis, the previous gold standard for T. cruzi typing, is laborious and time consuming. In the present work, we present an optimized Multilocus Sequence Typing (MLST) scheme, based on the combined analysis of two recently proposed MLST approaches. Here, thirteen concatenated gene fragments were applied to a panel of T. cruzi reference strains encompassing all known genetic lineages. Concatenation of 13 fragments allowed assignment of all strains to the predicted Discrete Typing Units (DTUs), or near-clades, with the exception of one strain that was an outlier for TcV, due to apparent loss of heterozygosity in one fragment. Monophyly for all DTUs, along with robust bootstrap support, was restored when this fragment was subsequently excluded from the analysis. All possible combinations of loci were assessed against predefined criteria with the objective of selecting the most appropriate combination of between two and twelve fragments for an optimized MLST scheme. The optimum combination consisted of 7 loci and discriminated between all reference strains in the panel, with the majority supported by robust bootstrap values. Additionally, a reduced panel of just 4 gene fragments displayed high bootstrap values for DTU assignment and discriminated 21 out of 25 genotypes. We propose that the seven-fragment MLST scheme could be used as a gold standard for T. cruzi typing, against which other typing approaches, particularly single locus approaches or systematic PCR assays based on amplicon size, could be compared. PMID:25167160
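The combinatorial step described above, assessing all combinations of loci and selecting the smallest set that still discriminates every reference strain, can be sketched directly. The allele profiles below are entirely hypothetical; they only illustrate the search, not real T. cruzi data.

```python
from itertools import combinations

# hypothetical multilocus profiles: strain -> allele at each of 5 loci
profiles = {
    "TcI-a": (1, 1, 2, 1, 3),
    "TcI-b": (1, 2, 2, 1, 3),
    "TcII":  (2, 1, 1, 2, 1),
    "TcIII": (2, 3, 1, 2, 2),
    "TcV":   (3, 3, 1, 1, 2),
}

def smallest_discriminating_set(profiles):
    # try locus combinations from smallest to largest; return the first
    # combination whose restricted profiles are all pairwise distinct
    strains = list(profiles)
    n_loci = len(next(iter(profiles.values())))
    for r in range(1, n_loci + 1):
        for combo in combinations(range(n_loci), r):
            cut = {s: tuple(profiles[s][i] for i in combo) for s in strains}
            if len(set(cut.values())) == len(strains):
                return combo
    return tuple(range(n_loci))

combo = smallest_discriminating_set(profiles)
```

The real scheme additionally filters candidate combinations by criteria such as bootstrap support for DTU assignment, not just raw discrimination.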
Evolutionary algorithms for multiobjective and multimodal optimization of diagnostic schemes.
de Toro, Francisco; Ros, Eduardo; Mota, Sonia; Ortega, Julio
2006-02-01
This paper addresses the optimization of noninvasive diagnostic schemes using evolutionary algorithms in medical applications based on the interpretation of biosignals. A general diagnostic methodology is presented that uses a set of definable characteristics extracted from the biosignal source, followed by the specific diagnostic scheme. In this framework, multiobjective evolutionary algorithms are used to meet not only classification accuracy but also other objectives of medical interest, which can be conflicting. Furthermore, the use of both multimodal and multiobjective evolutionary optimization algorithms provides the medical specialist with different alternatives for configuring the diagnostic scheme. Application examples of this methodology are described for the diagnosis of a specific cardiac disorder: paroxysmal atrial fibrillation.
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more multifaceted, integrated, and complex, traditional single-objective approaches to optimal design become less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent here is to obtain a well-distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated so that the system designer can study and analyze a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics, developed for application in engineering system design. A stochastic function manages the low-level meta-heuristics to increase the likelihood of reaching a globally optimal solution. Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as the low-level meta-heuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis, yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Randomized decision making makes the scheme attractive and easy to implement. Injecting feasible solutions significantly alters the search direction and adds population diversity, helping accomplish the pre-defined goals of the proposed scheme.
Effects of sparse sampling schemes on image quality in low-dose CT
Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena
2013-11-15
Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT), which can potentially reduce the health risk related to radiation dose. In particular, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on image quality. Methods: Data properties of several sampling schemes are analyzed with respect to CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling was realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases showed promising results. These sampling schemes produced images with image quality similar to the reference image, and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic
Global search acceleration in the nested optimization scheme
NASA Astrophysics Data System (ADS)
Grishagin, Vladimir A.; Israfilov, Ruslan A.
2016-06-01
A multidimensional unconstrained global optimization problem with an objective function satisfying a Lipschitz condition is considered. To solve this problem, a dimensionality reduction approach based on the nested optimization scheme is used. This scheme reduces the initial multidimensional problem to a family of one-dimensional subproblems that are themselves Lipschitzian, and thus allows univariate methods to be applied to multidimensional optimization. For two well-known one-dimensional Lipschitz optimization methods, modifications that accelerate the search when the objective function is continuously differentiable in a vicinity of the global minimum are considered and compared. Results of computational experiments on a conventional test class of multiextremal functions confirm the efficiency of the modified methods.
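The nested scheme can be illustrated with Piyavskii's method (one of the classic one-dimensional Lipschitz algorithms) as the univariate solver at both levels: the outer problem minimizes g(x) = min_y f(x, y), which is itself Lipschitzian. A minimal two-dimensional sketch under an assumed known Lipschitz constant L; the acceleration modifications studied in the paper are not reproduced here.

```python
def piyavskii(f, a, b, L, iters=60):
    # One-dimensional Lipschitz global minimisation (Piyavskii-Shubert):
    # repeatedly evaluate f where the sawtooth lower bound is smallest.
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(iters):
        pts = sorted(zip(xs, fs))
        best_lb, best_x = None, None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            x = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)   # cone intersection
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)    # bound value there
            if best_lb is None or lb < best_lb:
                best_lb, best_x = lb, x
        xs.append(best_x)
        fs.append(f(best_x))
    i = min(range(len(fs)), key=fs.__getitem__)
    return xs[i], fs[i]

def nested_minimize(f2d, box, L):
    # Nested scheme: g(x) = min_y f(x, y) is Lipschitz as well, so the
    # outer problem is solved with the same univariate method.
    (ax, bx), (ay, by) = box
    def g(x):
        return piyavskii(lambda y: f2d(x, y), ay, by, L)[1]
    return piyavskii(g, ax, bx, L)
```

Each outer evaluation of g triggers a full inner one-dimensional search, which is exactly the cost structure the paper's accelerated variants aim to reduce.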
A continuous sampling scheme for edge illumination x-ray phase contrast imaging
NASA Astrophysics Data System (ADS)
Hagen, C. K.; Coan, P.; Bravin, A.; Olivo, A.; Diemoz, P. C.
2015-08-01
We discuss an alternative acquisition scheme for edge illumination (EI) x-ray phase contrast imaging based on a continuous scan of the object and compare its performance to that of a previously used scheme, which involved scanning the object in discrete steps rather than continuously. By simulating signals for both continuous and discrete methods under realistic experimental conditions, the effect of the spatial sampling rate is analysed with respect to metrics such as image contrast and accuracy of the retrieved phase shift. Experimental results confirm the theoretical predictions. Despite being limited to a specific example, the results indicate that continuous schemes present advantageous features compared to discrete ones. Not only can they be used to speed up the acquisition but they also prove superior in terms of accurate phase retrieval. The theory and experimental results provided in this study will guide the design of future EI experiments through the implementation of optimized acquisition schemes and sampling rates.
Powered-descent trajectory optimization scheme for Mars landing
NASA Astrophysics Data System (ADS)
Liu, Rongjie; Li, Shihua; Chen, Xisong; Guo, Lei
2013-12-01
This paper presents a trajectory optimization scheme for the powered-descent phase of Mars landing that accounts for disturbance. First, the θ-D method is applied to design a suboptimal control law for the descent model in the absence of disturbance. Second, the disturbance is estimated by a disturbance observer, and the estimate is used as feedforward compensation. A semi-global stability analysis of the composite controller, consisting of the nonlinear suboptimal controller and the disturbance feedforward compensation, is then given. Finally, to verify the effectiveness of the proposed control scheme, an application with relevant simulations of a Mars landing mission is demonstrated.
Attributes mode sampling schemes for international material accountancy verification
Sanborn, J.B.
1982-12-01
This paper addresses the question of detecting falsifications in material balance accountancy reporting by comparing independently measured values to the declared values of a randomly selected sample of items in the material balance. A two-level strategy is considered, consisting of a relatively large number of measurements made at low accuracy, and a smaller number of measurements made at high accuracy. Sampling schemes for both types of measurements are derived, and rigorous proofs supplied that guarantee desired detection probabilities. Sample sizes derived using these methods are sometimes considerably smaller than those calculated previously.
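The core detection-probability calculation behind such schemes is the hypergeometric "attributes" formula: the chance that a random sample of n items from a population of N catches at least one of r falsified items. A minimal single-level sketch; the paper's two-level (low-accuracy/high-accuracy) scheme and its rigorous guarantees are more involved than this.

```python
from math import comb

def detection_probability(N, n, r):
    """P(a sample of n items from N contains >= 1 of r falsified items)."""
    if n + r > N:
        return 1.0          # sample is large enough to force a hit
    return 1.0 - comb(N - r, n) / comb(N, n)

def minimum_sample_size(N, r, target):
    """Smallest n giving at least the target detection probability."""
    for n in range(1, N + 1):
        if detection_probability(N, n, r) >= target:
            return n
    return N
```

For example, catching at least one of 10 falsified items among 100 with 95% probability requires sampling 25 items, noticeably fewer than the ~28 suggested by the with-replacement approximation 1 - 0.9^n.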
Optimal filtering scheme for unsupervised texture feature extraction
NASA Astrophysics Data System (ADS)
Randen, Trygve; Alvestad, Vidar; Husoy, John H.
1996-02-01
In this paper a technique for unsupervised optimal feature extraction and segmentation for textured images is presented. The image is first divided into cells of equal size, and similarity measures on the autocorrelation functions for the cells are estimated. The similarity measures are used for clustering the image into clusters of cells with similar textures. Autocorrelation estimates for each cluster are then estimated, and two-dimensional texture feature extractors using filters, optimal with respect to the Fisher criterion, are constructed. Further, a model for the feature response at and near the texture borders is developed. This model is used to estimate whether the positions of the detected edges in the image are biased, and a scheme for correcting such bias using morphological dilation is devised. The article is concluded with experimental results for the proposed unsupervised texture segmentation scheme.
A piecewise linear approximation scheme for hereditary optimal control problems
NASA Technical Reports Server (NTRS)
Cliff, E. M.; Burns, J. A.
1977-01-01
An approximation scheme based on 'piecewise linear' approximations of L2 spaces is employed to formulate a numerical method for solving quadratic optimal control problems governed by linear retarded functional differential equations. This piecewise linear method is an extension of the so called averaging technique. It is shown that the Riccati equation for the linear approximation is solved by simple transformation of the averaging solution. Thus, the computational requirements are essentially the same. Numerical results are given.
Spatial location weighted optimization scheme for DC optical tomography.
Zhou, Jun; Bai, Jing; He, Ping
2003-01-27
In this paper, a spatial location weighted, gradient-based optimization scheme for reducing the computational burden and increasing the reconstruction precision is presented. The method applies to DC diffusion-based optical tomography, where the reconstruction otherwise suffers from slow convergence. The inverse approach employs a weighted steepest descent method combined with a conjugate gradient method. A reverse differentiation method is used to derive the gradient efficiently. The reconstruction results confirm that the spatial location weighted optimization method offers a more efficient approach to the DC optical imaging problem than the unweighted method does.
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
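The greedy incremental solution mentioned in the abstract can be sketched as follows: starting from one candidate direction, repeatedly add the candidate that maximizes the minimal angular separation to the set chosen so far, using the antipodally symmetric angle (gradients g and -g are equivalent in dMRI). A rough pure-Python illustration, not the authors' MILP or gradient-descent formulations; generating candidates as normalized Gaussian vectors is an assumption.

```python
import math
import random

def angle(u, v):
    # Antipodally symmetric angle between unit vectors, in radians.
    d = abs(sum(a * b for a, b in zip(u, v)))
    return math.acos(min(1.0, d))

def greedy_spherical_code(candidates, k):
    """Greedy discrete-SC heuristic: repeatedly add the candidate with
    the largest minimal angle to the points already chosen."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max(candidates,
                   key=lambda c: min(angle(c, p) for p in chosen))
        chosen.append(best)
    return chosen

def random_unit_vectors(m, seed=0):
    # Uniformly distributed candidate directions on the unit sphere.
    rng = random.Random(seed)
    out = []
    while len(out) < m:
        v = [rng.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in v))
        if n > 1e-9:
            out.append([x / n for x in v])
    return out
```

The greedy pass gives a quick feasible design; the paper's MILP formulation then certifies or improves the minimal separation.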
Optimal Numerical Schemes for Compressible Large Eddy Simulations
NASA Astrophysics Data System (ADS)
Edoh, Ayaboe; Karagozian, Ann; Sankaran, Venkateswaran; Merkle, Charles
2014-11-01
The design of optimal numerical schemes for subgrid scale (SGS) models in LES of reactive flows remains an area of continuing challenge. It has been shown that significant differences in solution can arise due to the choice of the SGS model's numerical scheme and its inherent dissipation properties, which can be exacerbated in combustion computations. This presentation considers the individual roles of artificial dissipation, filtering, secondary conservation (Kinetic Energy Preservation), and collocated versus staggered grid arrangements with respect to the dissipation and dispersion characteristics and their overall impact on the robustness and accuracy for time-dependent simulations of relevance to reacting and non-reacting LES. We utilize von Neumann stability analysis in order to quantify these effects and to determine the relative strengths and weaknesses of the different approaches. Distribution A: Approved for public release, distribution unlimited. Supported by AFOSR (PM: Dr. F. Fahroo).
Sample Size Calculation for Clustered Binary Data with Sign Tests Using Different Weighting Schemes
Ahn, Chul; Hu, Fan; Schucany, William R.
2011-01-01
We propose a sample size calculation approach for testing a proportion using the weighted sign test when binary observations are dependent within a cluster. Sample size formulas are derived with nonparametric methods using three weighting schemes: equal weights to observations, equal weights to clusters, and optimal weights that minimize the variance of the estimator. Sample size formulas are derived incorporating intracluster correlation and the variability in cluster sizes. Simulation studies are conducted to evaluate a finite sample performance of the proposed sample size formulas. Empirical powers are generally close to nominal levels. The number of clusters required increases as the imbalance in cluster size increases and the intracluster correlation increases. The estimator using optimal weights yields the smallest sample size estimate among three estimators. For small values of intracluster correlation the sample size estimates derived from the optimal weight estimator are close to that derived from the estimator assigning equal weights to observations. For large values of intracluster correlation, the optimal weight sample size estimate is close to the sample size estimate assigning equal weights to clusters. PMID:21339864
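The equal-weights-to-observations case reduces, to first order, to a standard one-sample proportion calculation inflated by the design effect 1 + (m - 1)ρ. A back-of-the-envelope sketch assuming a common cluster size m; the paper's nonparametric sign-test derivations and unequal-cluster-size corrections are not reproduced here.

```python
import math

def clusters_needed(p0, p1, z_alpha, z_beta, m, rho):
    """Clusters needed to distinguish proportions p0 and p1 with a
    one-sample test: independent-observation sample size inflated by
    the design effect, then spread over clusters of size m."""
    n_indep = ((z_alpha * math.sqrt(p0 * (1 - p0))
                + z_beta * math.sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
    deff = 1 + (m - 1) * rho   # variance inflation per cluster
    return math.ceil(n_indep * deff / m)
```

As the abstract describes, the required number of clusters grows with the intracluster correlation: for p0 = 0.5, p1 = 0.7, m = 5 at the usual 5%/80% levels, ρ = 0 needs 10 clusters while ρ = 0.5 needs 28.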
Optimizing passive acoustic sampling of bats in forests
Froidevaux, Jérémy S P; Zellweger, Florian; Bollmann, Kurt; Obrist, Martin K
2014-01-01
Passive acoustic methods are increasingly used in biodiversity research and monitoring programs because they are cost-effective and permit the collection of large datasets. However, the accuracy of the results depends on the bioacoustic characteristics of the focal taxa and their habitat use. In particular, this applies to bats which exhibit distinct activity patterns in three-dimensionally structured habitats such as forests. We assessed the performance of 21 acoustic sampling schemes with three temporal sampling patterns and seven sampling designs. Acoustic sampling was performed in 32 forest plots, each containing three microhabitats: forest ground, canopy, and forest gap. We compared bat activity, species richness, and sampling effort using species accumulation curves fitted with the clench equation. In addition, we estimated the sampling costs to undertake the best sampling schemes. We recorded a total of 145,433 echolocation call sequences of 16 bat species. Our results indicated that to generate the best outcome, it was necessary to sample all three microhabitats of a given forest location simultaneously throughout the entire night. Sampling only the forest gaps and the forest ground simultaneously was the second best choice and proved to be a viable alternative when the number of available detectors is limited. When assessing bat species richness at the 1-km2 scale, the implementation of these sampling schemes at three to four forest locations yielded highest labor cost-benefit ratios but increasing equipment costs. Our study illustrates that multiple passive acoustic sampling schemes require testing based on the target taxa and habitat complexity and should be performed with reference to cost-benefit ratios. Choosing a standardized and replicated sampling scheme is particularly important to optimize the level of precision in inventories, especially when rare or elusive species are expected. PMID:25558363
Variational scheme towards an optimal lifting drive in fluid adhesion.
Dias, Eduardo O; Miranda, José A
2012-10-01
One way of determining the adhesive strength of liquids is provided by a probe-tack test, which measures the force or energy required to pull apart two parallel flat plates separated by a thin fluid film. The vast majority of the existing theoretical and experimental works in fluid adhesion use very viscous fluids, and consider a linear drive L(t)∼Vt with constant lifting plate velocity V. This implies a given energy cost and large lifting force magnitude. One challenging question in this field pertains to what would be the optimal time-dependent drive Lopt(t) for which the adhesion energy would be minimized. We use a variational scheme to systematically search for such Lopt(t). By employing an optimal lifting drive, in addition to saving energy, we verify a significant decrease in the adhesion force peak. The effectiveness of the proposed lifting procedure is checked for both Newtonian and power-law fluids.
Noise and Nonlinear Estimation with Optimal Schemes in DTI
Özcan, Alpay
2010-01-01
In general, the estimation of diffusion properties in diffusion tensor imaging (DTI) experiments is accomplished via least squares estimation (LSE). The technique requires applying the logarithm to the measurements, which causes poor propagation of errors. Moreover, the way noise enters the equations invalidates the least squares estimate as the best linear unbiased estimate. Nonlinear estimation (NE), despite its longer computation time, has neither of these problems. However, the conditions and optimization methods developed in the past are all based on the coefficient matrix obtained in the LSE setup. In this manuscript, nonlinear estimation for DTI is analyzed to demonstrate that any result obtained relatively easily in a linear-algebra setup about the coefficient matrix can be applied to the more complicated NE framework. Data obtained earlier using non-optimal and optimized diffusion gradient schemes are processed with NE. In comparison with LSE, the results show significant improvements, especially for the optimization criterion. However, NE does not resolve the conflicts and ambiguities displayed by LSE methods. PMID:20655681
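The contrast between log-linearized LSE and direct nonlinear fitting can be shown on the simplest mono-exponential signal model S(b) = S0·exp(-b·d), a stand-in for the full tensor model. A sketch under assumed b-values; a grid search stands in for a proper Gauss-Newton solver.

```python
import math

def lse_log_linear(bvals, signals):
    # Log-linearised least squares for S(b) = S0 * exp(-b d):
    # regress log S on b. The log transform distorts the noise,
    # which is the error-propagation problem the abstract describes.
    ys = [math.log(s) for s in signals]
    n = len(bvals)
    bm = sum(bvals) / n
    ym = sum(ys) / n
    slope = (sum((b - bm) * (y - ym) for b, y in zip(bvals, ys))
             / sum((b - bm) ** 2 for b in bvals))
    return -slope              # estimated diffusivity d

def nonlinear_fit(bvals, signals, d_grid):
    # Nonlinear estimation: minimise squared error in the signal
    # domain directly; for each trial d the best S0 is closed-form.
    def sse(d):
        num = sum(s * math.exp(-b * d) for b, s in zip(bvals, signals))
        den = sum(math.exp(-2 * b * d) for b in bvals)
        s0 = num / den
        return sum((s - s0 * math.exp(-b * d)) ** 2
                   for b, s in zip(bvals, signals))
    return min(d_grid, key=sse)
```

On noiseless data both estimators recover d; under noise they differ, because NE weights the residuals in the signal domain rather than the log domain.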
A new configurational bias scheme for sampling supramolecular structures
De Gernier, Robin; Mognetti, Bortolo M.; Curk, Tine; Dubacheva, Galina V.; Richter, Ralf P.
2014-12-28
We present a new simulation scheme that allows efficient sampling of reconfigurable supramolecular structures made of polymeric constructs functionalized by reactive binding sites. The algorithm is based on the configurational bias scheme of Siepmann and Frenkel and is powered by the possibility of changing the topology of the supramolecular network with a non-local Monte Carlo move. This is accomplished by multi-scale modelling that merges coarse-grained simulations, describing the typical polymer conformations, with experimental results accounting for the free energy terms involved in the reactions of the active sites. We test the new algorithm on a system of DNA-coated colloids, for which we compute the hybridisation free energy cost associated with the binding of tethered single-stranded DNAs terminated by short sequences of complementary nucleotides. To demonstrate the versatility of our method, we also consider polymers functionalized by receptors that bind a surface decorated by ligands. In particular, we compute the density of states of adsorbed polymers as a function of the number of ligand–receptor complexes formed. Such a quantity can be used to study the conformational properties of adsorbed polymers, which is useful when engineering adsorption with tailored properties. We successfully compare the results with the predictions of a mean field theory. We believe that the proposed method will be a useful tool for investigating supramolecular structures resulting from direct interactions between functionalized polymers, for which efficient numerical methodologies are still lacking.
Towards optimal sampling schedules for integral pumping tests
NASA Astrophysics Data System (ADS)
Leschik, Sebastian; Bayer-Raich, Marti; Musolff, Andreas; Schirmer, Mario
2011-06-01
Conventional point sampling may miss plumes in groundwater due to an insufficient density of sampling locations. The integral pumping test (IPT) method overcomes this problem by increasing the sampled volume. One or more wells are pumped for a long duration (several days) and samples are taken during pumping. The obtained concentration-time series are used for the estimation of average aquifer concentrations Cav and mass flow rates MCP. Although the IPT method is a well accepted approach for the characterization of contaminated sites, no substantiated guideline for the design of IPT sampling schedules (optimal number of samples and optimal sampling times) is available. This study provides a first step towards optimal IPT sampling schedules by a detailed investigation of 30 high-frequency concentration-time series. Different sampling schedules were tested by modifying the original concentration-time series. The results reveal that the relative error in the Cav estimation increases with a reduced number of samples and higher variability of the investigated concentration-time series. Maximum errors of up to 22% were observed for sampling schedules with the lowest number of samples of three. The sampling scheme that relies on constant time intervals ∆t between different samples yielded the lowest errors.
Sampling and reconstruction schemes for biomagnetic sensor arrays.
Naddeo, Adele; Della Penna, Stefania; Nappi, Ciro; Vardaci, Emanuele; Pizzella, Vittorio
2002-09-21
In this paper we generalize the approach of Ahonen et al (1993 IEEE Trans. Biomed. Eng. 40 859-69) to two-dimensional non-uniform sampling. The focus is on two main topics: (1) searching for the optimal sensor configuration on a planar measurement surface; and (2) reconstructing the magnetic field (a continuous function) from a discrete set of data points recorded with a finite number of sensors. A reconstruction formula for Bz is derived in the framework of the multidimensional Papoulis generalized sampling expansion (Papoulis A 1977 IEEE Trans. Circuits Syst. 24 652-4, Cheung K F 1993 Advanced Topics in Shannon Sampling and Interpolation Theory (New York: Springer) pp 85-119) in a particular case. Application of these considerations to the design of biomagnetic sensor arrays is also discussed.
Regulatory schemes to achieve optimal flux partitioning in bacterial metabolism
NASA Astrophysics Data System (ADS)
Tang, Lei-Han; Yang, Zhu; Hui, Sheng; Kim, Pan-Jun; Li, Xue-Fei; Hwa, Terence
2012-02-01
The flux balance analysis (FBA) offers a way to compute the optimal performance of a given metabolic network when the maximum incoming fluxes of nutrient molecules and other essential ingredients for biosynthesis are specified. Here we report a theoretical and computational analysis of the network structure and regulatory interactions in an E. coli cell. An automated scheme is devised to simplify the network topology and to enumerate the independent flux degrees of freedom. The network organization revealed by the scheme enables a detailed interpretation of the three layers of metabolic regulation known in the literature: i) independent transcriptional regulation of biosynthesis and salvage pathways to render the network tree-like under a given nutrient condition; ii) allosteric end-product inhibition of enzyme activity at entry points of synthesis pathways for metabolic flux partitioning according to consumption; iii) homeostasis of currency and carrier compounds to maintain sufficient supply of global commodities. Using the amino-acid synthesis pathways as an example, we show that the FBA result can be reproduced with suitable implementation of the three classes of regulatory interactions with literature evidence.
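The FBA computation referred to above is a linear program. With S the stoichiometric matrix, v the vector of reaction fluxes, and c the objective coefficients (typically the biomass reaction), a generic statement is:

```latex
\max_{v \in \mathbb{R}^{n}} \; c^{\top} v
\qquad \text{subject to} \qquad
S\,v = 0, \qquad l_{i} \le v_{i} \le u_{i}, \quad i = 1, \dots, n,
```

where the steady-state constraint Sv = 0 enforces mass balance on internal metabolites, and the bounds l, u encode nutrient uptake limits and reaction irreversibility. The three regulatory layers discussed in the abstract act to steer the cell toward this optimum rather than appearing inside the program itself.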
Optimized contraction scheme for tensor-network states
NASA Astrophysics Data System (ADS)
Xie, Z. Y.; Liao, H. J.; Huang, R. Z.; Xie, H. D.; Chen, J.; Liu, Z. Y.; Xiang, T.
2017-07-01
In the tensor-network framework, the expectation values of two-dimensional quantum states are evaluated by contracting a double-layer tensor network constructed from initial and final tensor-network states. The computational cost of carrying out this contraction is generally very high, which limits the largest bond dimension of tensor-network states that can be accurately studied to a relatively small value. We propose an optimized contraction scheme to solve this problem by mapping the double-layer tensor network onto an intersected single-layer tensor network. This reduces greatly the bond dimensions of local tensors to be contracted and improves dramatically the efficiency and accuracy of the evaluation of expectation values of tensor-network states. It almost doubles the largest bond dimension of tensor-network states whose physical properties can be efficiently and reliably calculated, and it extends significantly the application scope of tensor-network methods.
An optimal performance control scheme for a 3D crane
NASA Astrophysics Data System (ADS)
Maghsoudi, Mohammad Javad; Mohamed, Z.; Husain, A. R.; Tokhi, M. O.
2016-01-01
This paper presents an optimal performance control scheme for a three-dimensional (3D) crane system incorporating a Zero Vibration shaper, considering two control objectives concurrently. The control objectives are fast and accurate positioning of a trolley and minimum sway of a payload. A complete mathematical model of a lab-scaled 3D crane is simulated in Simulink. With a specific cost function, the proposed controller is designed to cater for both control objectives, similar to a skilled operator. Simulation and experimental studies on a 3D crane show that the proposed controller performs better than a sequentially tuned PID-PID anti-swing controller. The controller provides better position responses with satisfactory payload sway in both the rail and trolley axes. Experiments with different payloads and cable lengths show that the proposed controller is robust to changes in payload, with satisfactory responses.
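The Zero Vibration shaper mentioned above follows a standard two-impulse design; the sketch below uses the textbook ZV formulas with assumed, illustrative sway-mode parameters rather than the paper's crane model.

```python
import math

# Zero Vibration (ZV) input shaper: two impulses that cancel the residual
# oscillation of a lightly damped mode (e.g., payload sway).
def zv_shaper(wn, zeta):
    """Return [(A1, t1), (A2, t2)] for natural freq wn [rad/s], damping zeta."""
    wd = wn * math.sqrt(1.0 - zeta**2)                      # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    # Impulse amplitudes sum to 1, second impulse lands half a damped period later.
    return [(1.0 / (1.0 + K), 0.0), (K / (1.0 + K), math.pi / wd)]

impulses = zv_shaper(wn=2.0, zeta=0.05)   # assumed sway-mode parameters
print(impulses)
```

Convolving the operator's (or controller's) command with these two impulses yields the shaped command that suppresses sway at the modeled frequency.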
Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil
2015-03-17
An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of the container orientation has been demonstrated. This scheme utilizes an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and collection of the Raman photons just beneath the container. In the case of either backscattering or transmission measurements, the Raman sampling volume for an internal sample varies when the orientation of an oval container changes; therefore, the Raman intensities of the acquired spectra are inconsistent. In the APO scheme, the generated Raman photons traverse the same bottom section of the container, so the Raman sampling volumes remain relatively consistent under the same variation. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations, and the accuracies of the determination of the alcohol concentrations were then compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the observed optimal offset distance. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.
NASA Astrophysics Data System (ADS)
Liu, Hongcheng; Dong, Peng; Xing, Lei
2017-08-01
ℓ2,1-minimization-based sparse optimization was previously employed to solve the beam angle optimization (BAO) problem in intensity-modulated radiation therapy (IMRT) planning. This technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, gFCP improves both plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
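The ℓ2,1 baseline that the paper compares against relies on a group soft-thresholding (proximal) step, sketched below. The beam groups and weights are hypothetical, and the paper's gFCP penalty is nonconvex and would need a different weighting; this only shows the convex ℓ2,1 case.

```python
import math

# Proximal operator of lam * ||x||_{2,1}: shrink each group of coordinates
# toward zero; whole groups (candidate beams) with small norm are zeroed out.
def prox_l21(x, groups, lam):
    """x_g <- max(0, 1 - lam/||x_g||) * x_g for each index group g."""
    out = list(x)
    for g in groups:
        norm = math.sqrt(sum(x[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * x[i]
    return out

# Beamlet weights for two hypothetical candidate beams (groups of indices):
x = [3.0, 4.0, 0.1, 0.1]
groups = [[0, 1], [2, 3]]
print(prox_l21(x, groups, lam=1.0))   # the weak second beam is zeroed out
```

Iterating this step inside a proximal-gradient loop is what produces the group-sparse (few-beam) solutions that BAO exploits.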
A global optimization algorithm for simulation-based problems via the extended DIRECT scheme
NASA Astrophysics Data System (ADS)
Liu, Haitao; Xu, Shengli; Wang, Xiaofang; Wu, Junnan; Song, Yang
2015-11-01
This article presents a global optimization algorithm that extends the DIviding RECTangles (DIRECT) scheme to handle problems with computationally expensive simulations efficiently. The new optimization strategy generalizes the regular partition scheme of DIRECT to a flexible irregular partition scheme in order to utilize information from irregular points. A metamodelling technique is introduced to work with the flexible partition scheme to speed up convergence, which is meaningful for simulation-based problems. Comparative results on eight representative benchmark problems and an engineering application against some existing global optimization algorithms indicate that the proposed global optimization strategy is promising for simulation-based problems in terms of efficiency and accuracy.
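The divide-and-sample idea behind DIRECT can be illustrated in a drastically simplified one-dimensional form: keep a list of intervals, score each by its center value minus a size bonus (favoring both good and large intervals), and trisect the best one. The constant k below is an assumption of this toy; actual DIRECT avoids any such Lipschitz-like constant by selecting all "potentially optimal" rectangles.

```python
# Simplified, 1-D cousin of DIRECT: trisect the most promising interval.
def trisect_search(f, lo, hi, iters=40, k=1.0):
    intervals = [(lo, hi)]
    best_x, best_f = (lo + hi) / 2.0, f((lo + hi) / 2.0)
    for _ in range(iters):
        # pick the interval whose center value minus k*size is smallest
        a, b = min(intervals,
                   key=lambda ab: f((ab[0] + ab[1]) / 2) - k * (ab[1] - ab[0]))
        intervals.remove((a, b))
        w = (b - a) / 3.0
        for c, d in ((a, a + w), (a + w, a + 2 * w), (a + 2 * w, b)):
            intervals.append((c, d))
            x = (c + d) / 2.0
            if f(x) < best_f:
                best_x, best_f = x, f(x)
    return best_x, best_f

x, fx = trisect_search(lambda x: (x - 0.7) ** 2, 0.0, 1.0)
print(x, fx)
```

The size bonus keeps large unexplored intervals in play (global search) while small intervals with good centers get refined (local search), which is the balance DIRECT formalizes.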
Diffusion spectrum MRI using body-centered-cubic and half-sphere sampling schemes.
Kuo, Li-Wei; Chiang, Wen-Yang; Yeh, Fang-Cheng; Wedeen, Van Jay; Tseng, Wen-Yih Isaac
2013-01-15
The optimum sequence parameters of diffusion spectrum MRI (DSI) on clinical scanners were investigated previously. However, the scan time of approximately 30 min is still too long for patient studies. Additionally, a relatively large sampling interval in the diffusion-encoding space may cause an aliasing artifact in the probability density function when the Fourier transform is undertaken, leading to estimation error in fiber orientations. Therefore, this study proposed a non-Cartesian sampling scheme, body-centered-cubic (BCC), to avoid the aliasing artifact present in the conventional Cartesian grid sampling scheme (GRID). Furthermore, the accuracy of DSI using half-sphere sampling schemes, i.e., GRID102 and BCC91, was investigated by comparison with the corresponding full-sphere sampling schemes, GRID203 and BCC181, respectively. In the results, a smaller deviation angle and lower angular dispersion were obtained using the BCC sampling scheme. The half-sphere sampling schemes yielded angular precision and accuracy comparable to the full-sphere sampling schemes. The optimum b_max was approximately 4750 s/mm² for GRID and 4500 s/mm² for BCC. In conclusion, the BCC sampling scheme could be implemented as a useful alternative to the GRID sampling scheme. The combination of BCC and half-sphere sampling schemes, that is, BCC91, may potentially reduce the scan time of DSI from 30 min to approximately 14 min while maintaining precision and accuracy.
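The two lattices can be sketched by enumerating q-space sample points inside a sphere; the radius and spacing below are illustrative toy values, whereas the paper's schemes (GRID203, BCC181, etc.) fix specific point counts.

```python
# Count q-space samples inside a sphere of radius R for a Cartesian grid
# (GRID) versus a body-centered-cubic (BCC) lattice at the same spacing a.
def grid_points(R, a):
    n = int(R // a) + 1
    pts = [(i * a, j * a, k * a)
           for i in range(-n, n + 1)
           for j in range(-n, n + 1)
           for k in range(-n, n + 1)]
    return [p for p in pts if p[0]**2 + p[1]**2 + p[2]**2 <= R**2]

def bcc_points(R, a):
    # BCC = cubic lattice plus a second lattice shifted by (a/2, a/2, a/2)
    base = grid_points(R, a)
    shifted = [(x + a / 2, y + a / 2, z + a / 2) for (x, y, z) in grid_points(R + a, a)]
    return base + [p for p in shifted if p[0]**2 + p[1]**2 + p[2]**2 <= R**2]

g, b = grid_points(3.0, 1.0), bcc_points(3.0, 1.0)
print(len(g), len(b))   # BCC packs more samples at the same spacing
```

The denser, more isotropic packing of BCC is what lets it trade sample count against aliasing more favorably than a plain Cartesian grid.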
NASA Astrophysics Data System (ADS)
Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.
2014-12-01
Carbon monoxide (CO) is an urban pollutant generated by internal combustion engines which contributes to the formation of ground level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40% from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high frequency temporal sampling and increases the cost and complexity of the measurement system.
Optimization of filtering schemes for broadband astro-combs.
Chang, Guoqing; Li, Chih-Hao; Phillips, David F; Szentgyorgyi, Andrew; Walsworth, Ronald L; Kärtner, Franz X
2012-10-22
To realize a broadband, large-line-spacing astro-comb, suitable for wavelength calibration of astrophysical spectrographs, from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three additional components: (1) one or more filter cavities to multiply the source-comb's repetition rate and thus line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb. In this paper we analyze the interplay of Fabry-Perot (FP) filter cavities with power amplifiers and nonlinear broadening fiber in the design of astro-combs optimized for radial-velocity (RV) calibration accuracy. We present analytic and numeric models and use them to evaluate a variety of FP filtering schemes (labeled as identical, co-prime, fraction-prime, and conjugate cavities), coupled to chirped-pulse amplification (CPA). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase RV error for spectrograph calibration. In general, filtering with two cavities prior to the CPA fiber amplifier outperforms an amplifier placed between the two cavities. In particular, filtering with conjugate cavities is able to provide <1 cm/s RV calibration error with >300 nm wavelength coverage. Such superior performance will facilitate the search for and characterization of Earth-like exoplanets, which requires <10 cm/s RV calibration error.
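The Fabry-Perot filtering at the heart of the scheme can be sketched with the Airy transmission function; the mirror reflectivity and the 2x line-spacing multiplication below are illustrative assumptions, not the paper's cavity designs.

```python
import math

# Airy transmission of a Fabry-Perot filter cavity applied to comb lines.
def fp_transmission(f, fsr, R=0.99):
    """Power transmission at frequency offset f for a cavity of free spectral range fsr."""
    F = 4.0 * R / (1.0 - R) ** 2                  # coefficient of finesse
    return 1.0 / (1.0 + F * math.sin(math.pi * f / fsr) ** 2)

f_rep = 1.0                                       # source-comb line spacing (arb. units)
fsr = 2.0 * f_rep                                 # cavity passes every 2nd comb line
kept = fp_transmission(0.0, fsr)                  # resonant comb line
suppressed = fp_transmission(f_rep, fsr)          # adjacent line, half an FSR away
print(kept, suppressed)
```

The abstract's point is that nonlinear phase accumulated in the downstream amplifier can partially undo this suppression of the rejected lines, which is why the ordering of cavities and amplifier matters.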
Quantum Optimal Multiple Assignment Scheme for Realizing General Access Structure of Secret Sharing
NASA Astrophysics Data System (ADS)
Matsumoto, Ryutaroh
The multiple assignment scheme assigns one or more shares to a single participant so that any access structure can be realized by classical secret sharing schemes. We propose its quantum version, including ramp secret sharing schemes. We then propose an integer optimization approach to minimize the average share size.
NASA Astrophysics Data System (ADS)
Li, Y.; Han, B.; Métivier, L.; Brossier, R.
2016-09-01
We investigate an optimal fourth-order staggered-grid finite-difference scheme for 3D frequency-domain viscoelastic wave modeling. An anti-lumped-mass strategy is incorporated to minimize the numerical dispersion. The optimal finite-difference coefficients and the mass weighting coefficients are obtained by minimizing the misfit between the normalized phase velocities and unity. An iterative damped least-squares method, the Levenberg-Marquardt algorithm, is utilized for the optimization. Dispersion analysis shows that the optimal fourth-order scheme presents less grid dispersion and anisotropy than the conventional fourth-order scheme for different Poisson's ratios. Moreover, only 3.7 grid points per minimum shear wavelength are required to keep the error of the group velocities below 1%. The memory cost is then greatly reduced due to the coarser sampling. A parallel iterative method named CARP-CG is used to solve the large ill-conditioned linear system for the frequency-domain modeling. Validations are conducted against both the analytic viscoacoustic and viscoelastic solutions. Compared with the conventional fourth-order scheme, the optimal scheme generates wavefields with smaller errors under the same discretization setups. Profiles of the wavefields are presented to confirm the better agreement between the optimal results and the analytic solutions.
Initial data sampling in design optimization
NASA Astrophysics Data System (ADS)
Southall, Hugh L.; O'Donnell, Terry H.
2011-06-01
Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a Latin square, a statistical sampling technique which samples a square grid such that there is a single sample in any given row and column. The Latin hypercube is a generalization to any number of dimensions. However, a standard random Latin hypercube can result in initial data sets which may be highly correlated and may not have good space-filling properties. There are techniques which address these issues. We describe and use one technique in this paper.
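A basic random Latin hypercube, as described above, can be sketched in a few lines; as the abstract notes, this plain version has no space-filling guarantee, and maximin or correlation-reducing variants improve on it.

```python
import random

# Random Latin hypercube: n samples in d dimensions in [0, 1)^d, with
# exactly one sample per axis-aligned bin in every dimension.
def latin_hypercube(n, d, rng=random):
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                                    # bin index per sample
        cols.append([(p + rng.random()) / n for p in perm])  # jitter within bin
    return [tuple(col[i] for col in cols) for i in range(n)]

pts = latin_hypercube(5, 2)
print(pts)
```

Each dimension's five bins contain exactly one point, which is the Latin square property generalized to the hypercube; the random jitter avoids placing samples on a regular sub-grid.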
Optimization of light collection scheme for forward hadronic calorimeter for STAR experiment at RHIC
NASA Astrophysics Data System (ADS)
Sergeeva, Maria
2013-10-01
We present the results of the optimization of a light collection scheme for a prototype of a sampling compensated hadronic calorimeter for the upgrade of the STAR detector at RHIC (BNL). The absolute light yield and uniformity of light collection were measured with a full-scale calorimeter tower for different types of reflecting materials, realistic mechanical tolerances for tower assembly, and different types of coupling between WLS bars and photodetectors. Measurements were performed with conventional PMTs and silicon photomultipliers. The results of these measurements were used to evaluate the influence of the light collection scheme on the response of the calorimeter using GEANT4 MC. A large prototype of this calorimeter is presently under construction, with the beam test scheduled for early next year at FNAL.
Optimal design of a hybridization scheme with a fuel cell using genetic optimization
NASA Astrophysics Data System (ADS)
Rodriguez, Marco A.
The fuel cell is one of the most dependable "green power" technologies, readily available for immediate application. It enables direct conversion of hydrogen and other gases into electric energy without any pollution of the environment. However, efficient power generation is a strictly stationary process that cannot operate under a dynamic environment. Consequently, the fuel cell becomes practical only within a specially designed hybridization scheme, capable of power storage and power management functions. The resultant technology can be utilized to its full potential only when both the fuel cell element and the entire hybridization scheme are optimally designed. Design optimization in engineering is among the most complex computational tasks due to the multidimensionality, nonlinearity, discontinuity, and presence of constraints in the underlying optimization problem. This research aims at the optimal utilization of fuel cell technology through the use of genetic optimization and advanced computing. This study implements genetic optimization in the definition of optimum hybridization rules for a PEM fuel cell/supercapacitor power system. PEM fuel cells exhibit high energy density, but they are not intended for pulsating power draw applications. They work better in steady-state operation and thus are often hybridized. In a hybrid system, the fuel cell provides power during steady-state operation, while capacitors or batteries augment the power of the fuel cell during power surges. Capacitors and batteries can also be recharged when the motor is acting as a generator. Making analogies to driving cycles, three hybrid system operating modes are investigated: 'Flat' mode, 'Uphill' mode, and 'Downhill' mode. In the process of discovering the switching rules for these three modes, we also generate a model of a 30 W PEM fuel cell. This study also proposes the optimum design of a 30 W PEM fuel cell. The PEM fuel cell model and the hybridization's switching rules are postulated.
Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield
Robert B. Thomas
1986-01-01
Abstract - SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes, which cannot provide such unbiased estimates. Sampling probabilities are based on a sediment rating function, which promotes greater sampling intensity during periods of high...
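The unbiasedness of variable-probability sampling of this kind rests on the Hansen-Hurwitz form y_i / p_i. The sketch below uses made-up ratings and yields, not a real sediment record, and shows only the one-draw identity rather than SALT's full list-time mechanics.

```python
# Variable-probability (PPS) sampling in the SALT spirit: draw a period with
# probability proportional to an auxiliary rating, estimate the total yield
# by y_i / p_i.
ratings = [1.0, 4.0, 10.0, 5.0]          # auxiliary size measure per period
yields = [0.8, 4.5, 11.0, 4.7]           # true (unknown in practice) yields
p = [r / sum(ratings) for r in ratings]  # selection probabilities

true_total = sum(yields)
# Unbiasedness in one line: E[y_i / p_i] = sum_i p_i * (y_i / p_i) = total.
expectation = sum(pi * (yi / pi) for pi, yi in zip(p, yields))
print(true_total, expectation)           # equal up to floating-point rounding
```

Because the rating tracks sediment concentration, high-yield periods are sampled more intensively, which is exactly the variance-reduction mechanism the abstract describes.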
SEARCH, blackbox optimization, and sample complexity
Kargupta, H.; Goldberg, D.E.
1996-05-01
The SEARCH (Search Envisioned As Relation and Class Hierarchizing) framework developed elsewhere (Kargupta, 1995; Kargupta and Goldberg, 1995) offered an alternate perspective on blackbox optimization -- optimization in the presence of little domain knowledge. The SEARCH framework investigates the conditions essential for transcending the limits of random enumerative search using a framework developed in terms of relations, classes, and partial ordering. This paper presents a summary of some of the main results of that work. A closed-form bound on the sample complexity in terms of the cardinality of the relation space, the class space, the desired solution quality, and the reliability is presented. This also leads to the identification of the class of order-k delineable problems that can be solved with polynomial sample complexity. These results are applicable to any blackbox search algorithm, including evolutionary optimization techniques.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
An optimized quantum information splitting scheme with multiple controllers
NASA Astrophysics Data System (ADS)
Jiang, Min
2016-12-01
We propose an efficient scheme for splitting multi-qudit information with the cooperative control of multiple agents. Each controller is assigned one controlling qudit and can monitor the state sharing of all the multi-qudit information. Compared with existing schemes, our scheme requires less resource consumption and achieves higher communication efficiency. In addition, our proposal involves only generalized Bell-state measurements, single-qudit measurements, one-qudit gates, and a unitary-reduction operation, which makes it flexible and achievable for physical implementation.
Design of optimally smoothing multistage schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Van Leer, Bram; Lee, Wen-Tzong; Roe, Philip L.; Powell, Kenneth G.; Tai, Chang-Hsien
1992-01-01
A recently derived local preconditioning of the Euler equations is shown to be useful in developing multistage schemes suited for multigrid use. The effect of the preconditioning matrix on the spatial Euler operator is to equalize the characteristic speeds. When applied to the discretized Euler equations, the preconditioning has the effect of strongly clustering the operator's eigenvalues in the complex plane. This makes possible the development of explicit marching schemes that effectively damp most high-frequency Fourier modes, as desired in multigrid applications. The technique is the same as developed earlier for scalar convection schemes: placement of the zeros of the amplification factor of the multistage scheme in locations where eigenvalues corresponding to high-frequency modes abound.
Inferring speciation and extinction rates under different sampling schemes.
Höhna, Sebastian; Stadler, Tanja; Ronquist, Fredrik; Britton, Tom
2011-09-01
The birth-death process is widely used in phylogenetics to model speciation and extinction. Recent studies have shown that the inferred rates are sensitive to assumptions about the sampling probability of lineages. Here, we examine the effect of the method used to sample lineages. Whereas previous studies have assumed random sampling (RS), we consider two extreme cases of biased sampling: "diversified sampling" (DS), where tips are selected to maximize diversity, and "cluster sampling" (CS), where sample diversity is minimized. DS appears to be standard practice, for example, in analyses of higher taxa, whereas CS may occur under special circumstances, for example, in studies of geographically defined floras or faunas. Using both simulations and analyses of empirical data, we show that inferred rates may be heavily biased if the sampling strategy is not modeled correctly. In particular, when a diversified sample is treated as if it were a random or complete sample, the extinction rate is severely underestimated, often close to 0. Such dramatic errors may lead to serious consequences, for example, if estimated rates are used in assessing the vulnerability of threatened species to extinction. Using Bayesian model testing across 18 empirical data sets, we show that DS is commonly a better fit to the data than complete, random, or cluster sampling. Inappropriate modeling of the sampling method may at least partly explain anomalous results that have previously been attributed to variation over time in birth and death rates.
Effect of control sampling rates on model-based manipulator control schemes
NASA Technical Reports Server (NTRS)
Khosla, P. K.
1987-01-01
The effect of changing the control sampling period on the performance of the computed-torque and independent joint control schemes is discussed. While the former utilizes the complete dynamics model of the manipulator, the latter assumes a decoupled and linear model of the manipulator dynamics. Researchers discuss the design of controller gains for both the computed-torque and the independent joint control schemes and establish a framework for comparing their trajectory tracking performance. Experiments show that within each scheme the trajectory tracking accuracy varies slightly with the change of the sampling rate. However, at low sampling rates the computed-torque scheme outperforms the independent joint control scheme. Based on experimental results, researchers also conclusively establish the importance of high sampling rates as they result in an increased stiffness of the system.
Optimization schemes for the inversion of Bouguer gravity anomalies
NASA Astrophysics Data System (ADS)
Zamora, Azucena
associated with structural changes [16]; therefore, it complements those geophysical methods with the same depth resolution that sample a different physical property (e.g. electromagnetic surveys sampling electric conductivity) or even those with different depth resolution sampling an alternative physical property (e.g. large scale seismic reflection surveys imaging the crust and top upper mantle using seismic velocity fields). In order to improve the resolution of Bouguer gravity anomalies, and reduce their ambiguity and uncertainty for the modeling of the shallow crust, we propose the implementation of primal-dual interior point methods for the optimization of density structure models through the introduction of physical constraints for transitional areas obtained from previously acquired geophysical data sets. This dissertation presents in Chapter 2 an initial forward model implementation for the calculation of Bouguer gravity anomalies in the Porphyry Copper-Molybdenum (Cu-Mo) Copper Flat Mine region located in Sierra County, New Mexico. In Chapter 3, we present a constrained optimization framework (using interior-point methods) for the inversion of 2-D models of Earth structures delineating density contrasts of anomalous bodies in uniform regions and/or boundaries between layers in layered environments. We implement the proposed algorithm using three different synthetic gravitational data sets of varying complexity. Specifically, we improve the 2-dimensional density structure models by eliminating unacceptable solutions (geologically unfeasible models or those not satisfying the required constraints) given the reduction of the solution space. Chapter 4 shows the results from the implementation of our algorithm for the inversion of gravitational data obtained from the area surrounding the Porphyry Cu-Mo Copper Flat Mine in Sierra County, NM.
Information obtained from previous induced polarization surveys and core samples served as physical constraints for the
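The forward model underlying this kind of inversion can be reduced to its simplest body, the vertical gravity anomaly of a buried sphere. The density contrast, radius, and depth below are hypothetical illustration values, not Copper Flat parameters.

```python
import math

# Vertical gravity anomaly of a buried sphere of density contrast drho [kg/m^3],
# radius [m], at depth [m], observed along a surface profile at offset x [m].
G = 6.674e-11                                # gravitational constant, m^3 kg^-1 s^-2

def sphere_gz(x, depth, radius, drho):
    """Return the anomaly in mGal at surface offset x."""
    mass = (4.0 / 3.0) * math.pi * radius**3 * drho
    gz = G * mass * depth / (x**2 + depth**2) ** 1.5   # m/s^2
    return gz * 1e5                                    # 1 mGal = 1e-5 m/s^2

profile = [sphere_gz(x, depth=200.0, radius=100.0, drho=500.0)
           for x in range(-500, 501, 100)]
print(profile)    # peaks directly above the body and decays symmetrically
```

An inversion scheme like the one described above repeatedly evaluates such a forward model (with prisms rather than spheres) while the interior-point method adjusts the density structure subject to the physical constraints.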
Optimal convergence rate of the explicit finite difference scheme for American option valuation
NASA Astrophysics Data System (ADS)
Hu, Bei; Liang, Jin; Jiang, Lishang
2009-08-01
An optimal convergence rate O(Δx) for an explicit finite difference scheme for a variational inequality problem is obtained under the stability condition using completely PDE methods. As a corollary, a binomial tree scheme for an American put option (where …) is convergent unconditionally with the rate O((Δt)^(1/2)).
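The binomial tree scheme whose convergence rate is analyzed above can be sketched as a standard Cox-Ross-Rubinstein tree with an early-exercise check; the parameter values below are illustrative only.

```python
import math

# Cox-Ross-Rubinstein binomial tree for an American put.
def american_put(S0, K, r, sigma, T, n):
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at the n-step nodes
    vals = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction with the early-exercise check at every node
    for i in range(n - 1, -1, -1):
        vals = [max(disc * (p * vals[j + 1] + (1 - p) * vals[j]),   # continue
                    K - S0 * u**j * d**(i - j))                      # exercise
                for j in range(i + 1)]
    return vals[0]

price = american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500)
print(round(price, 4))
```

Halving Δt (doubling n) should shrink the error roughly by a factor of √2, in line with the O((Δt)^(1/2)) rate established in the paper.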
Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression
NASA Astrophysics Data System (ADS)
Horng, Ming-Huwi
Vector quantization is a powerful technique in digital image compression applications. Traditional, widely used methods such as the Linde-Buzo-Gray (LBG) algorithm often generate a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we applied a new swarm algorithm, honey bee mating optimization, to construct the codebook for vector quantization. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of two other methods, the LBG and PSO-LBG algorithms. Experimental results showed that the proposed HBMO-LBG algorithm is more reliable and the reconstructed images have higher quality than those generated by the other two methods.
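The LBG baseline that the swarm methods are compared against is essentially Lloyd's algorithm applied to codebook training; a one-dimensional toy version with made-up training data is sketched below (real image VQ quantizes blocks of pixels as vectors).

```python
# One-dimensional Lloyd/LBG codebook training.
def lbg(data, k, iters=20):
    # spread initialization (a common alternative is codeword splitting); k >= 2
    codebook = [data[i * (len(data) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for x in data:                        # nearest-codeword assignment
            cells[min(range(k), key=lambda i: (x - codebook[i]) ** 2)].append(x)
        # centroid update; keep the old codeword if a cell ends up empty
        codebook = [sum(c) / len(c) if c else codebook[i] for i, c in enumerate(cells)]
    return codebook

data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95, 2.0, 2.1]
cb = lbg(data, k=3)
print(sorted(cb))   # one codeword settles on each natural cluster
```

The local-optimum sensitivity of this iteration (it only ever descends from its initialization) is precisely what motivates global heuristics such as PSO-LBG and HBMO-LBG.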
Optimization of reference library used in content-based medical image retrieval scheme
Park, Sang Cheol; Sukthankar, Rahul; Mummert, Lily; Satyanarayanan, Mahadev; Zheng Bin
2007-11-15
Building an optimal image reference library is a critical step in developing the interactive computer-aided detection and diagnosis (I-CAD) systems of medical images using content-based image retrieval (CBIR) schemes. In this study, the authors conducted two experiments to investigate (1) the relationship between I-CAD performance and size of reference library and (2) a new reference selection strategy to optimize the library and improve I-CAD performance. The authors assembled a reference library that includes 3153 regions of interest (ROI) depicting either malignant masses (1592) or CAD-cued false-positive regions (1561) and an independent testing data set including 200 masses and 200 false-positive regions. A CBIR scheme using a distance-weighted K-nearest neighbor algorithm is applied to retrieve references that are considered similar to the testing sample from the library. The area under receiver operating characteristic curve (A_z) is used as an index to evaluate the I-CAD performance. In the first experiment, the authors systematically increased reference library size and tested I-CAD performance. The result indicates that scheme performance improves initially from A_z=0.715 to 0.874 and then plateaus when the library size reaches approximately half of its maximum capacity. In the second experiment, based on the hypothesis that a ROI should be removed if it performs poorly compared to a group of similar ROIs in a large and diverse reference library, the authors applied a new strategy to identify 'poorly effective' references. By removing 174 identified ROIs from the reference library, I-CAD performance significantly increases to A_z=0.914 (p<0.01). The study demonstrates that increasing reference library size and removing poorly effective references can significantly improve I-CAD performance.
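The distance-weighted K-nearest-neighbor decision the CBIR scheme relies on can be sketched as follows; the two-dimensional feature vectors and labels are toy stand-ins for the ROI features, not the study's actual descriptors.

```python
import math

# Distance-weighted k-NN retrieval/decision over a reference library of
# (feature_vector, label) pairs, label 1 = malignant mass, 0 = false positive.
def knn_score(query, library, k=3):
    """Return a weighted malignancy score in [0, 1] from the k nearest references."""
    nbrs = sorted(library, key=lambda ref: math.dist(query, ref[0]))[:k]
    w = [1.0 / (1e-6 + math.dist(query, f)) for f, _ in nbrs]   # closer -> heavier
    return sum(wi * label for wi, (_, label) in zip(w, nbrs)) / sum(w)

library = [((0.9, 0.8), 1), ((0.85, 0.9), 1),   # malignant-mass ROIs
           ((0.1, 0.2), 0), ((0.2, 0.1), 0)]    # false-positive ROIs
print(knn_score((0.8, 0.85), library))          # close to 1: looks malignant
```

The study's library-pruning idea corresponds to dropping references whose retrieval repeatedly pulls such scores toward the wrong label.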
Hybrid optimization schemes for simulation-based problems.
Fowler, Katie; Gray, Genetha Anne; Griffin, Joshua D.
2010-05-01
The inclusion of computer simulations in the study and design of complex engineering systems has created a need for efficient approaches to simulation-based optimization. For example, in water resources management problems, optimization problems regularly consist of objective functions and constraints that rely on output from a PDE-based simulator. Various assumptions can be made to simplify either the objective function or the physical system so that gradient-based methods apply; however, the incorporation of realistic objective functions can be accomplished given the availability of derivative-free optimization methods. A wide variety of derivative-free methods exist, and each method has both advantages and disadvantages. Therefore, to address such problems, we propose a hybrid approach, which combines beneficial elements of multiple methods in order to search the design space more efficiently. Specifically, in this paper, we illustrate the capabilities of two novel algorithms: one which hybridizes pattern search optimization with Gaussian process emulation, and the other which hybridizes pattern search and a genetic algorithm. We describe the hybrid methods and give some numerical results for a hydrological application which illustrate that the hybrids find an optimal solution under conditions for which traditional search methods fail.
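The pattern-search ingredient of both hybrids can be sketched as a minimal derivative-free compass search; the quadratic objective below is a stand-in for the PDE-based simulator output.

```python
# Compass (pattern) search: poll the 2*d coordinate directions, accept the
# first improving point, and contract the step when no direction improves.
def compass_search(f, x, step=1.0, tol=1e-6):
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:                 # accept the first improving poll point
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # contract the pattern on failure
    return x, fx

x, fx = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2, [0.0, 0.0])
print(x, fx)
```

The hybrids described above replace some of these expensive polls with predictions from a Gaussian-process emulator, or interleave the poll step with genetic-algorithm exploration.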
Resource Optimization Scheme for Multimedia-Enabled Wireless Mesh Networks
Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md. Jalil; Suh, Doug Young
2014-01-01
Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to the network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs carrying multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered the resource underutilization caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct the optimal connected mesh backbone topology with a minimum number of links and relay nodes which satisfies the given end-to-end QoS demands for multimedia traffic and identifies extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies prove that our heuristic algorithm provides near-optimal results and saves about 20% of resources from being wasted by QoS-provisioning routing and random topology deployment. PMID:25111241
Design of Multishell Sampling Schemes with Uniform Coverage in Diffusion MRI
Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid
2017-01-01
Purpose: In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. A careful design is central to the success of multishell acquisition and reconstruction techniques. Methods: The design of multishell acquisition is still an open and active field of research. In this work, we provide a general method to design multishell acquisitions with uniform angular coverage, based on a generalization of electrostatic repulsion to multiple shells. Results: We evaluate the impact of our method using simulations, on the angular resolution in one- and two-fiber-bundle configurations. Compared to more commonly used radial sampling, our method improves the angular resolution as well as fiber crossing discrimination. Discussion: We propose a novel method to design sampling schemes with optimal angular coverage and show its positive impact on angular resolution in diffusion MRI. PMID:23625329
Efficient multiobjective optimization scheme for large scale structures
NASA Astrophysics Data System (ADS)
Grandhi, Ramana V.; Bharatram, Geetha; Venkayya, V. B.
1992-09-01
This paper presents a multiobjective optimization algorithm for the efficient design of large-scale structures. The algorithm is based on generalized compound scaling techniques to reach the intersection of multiple functions. Multiple objective functions are treated similarly to behavior constraints; thus, any number of objectives can be handled in the formulation. Pseudo-targets on the objectives are generated at each iteration when computing the scale factors. The algorithm develops a partial Pareto set. The method is computationally efficient because it does not solve many single-objective optimization problems to reach the Pareto set. Its computational efficiency is compared with that of other multiobjective optimization methods, such as the weighting method and the global criterion method. Truss, plate, and wing structure design cases with stress and frequency considerations are presented to demonstrate the effectiveness of the method.
Sampling scheme for pyrethroids on multiple surfaces on commercial aircraft
MOHAN, KRISHNAN R.; WEISEL, CLIFFORD P.
2015-01-01
A wipe sampler for the collection of permethrin from soft and hard surfaces has been developed for use in aircraft. “Disinsection,” or the application of pesticides (predominantly pyrethroids) inside commercial aircraft, is routinely required by some countries and is done on an as-needed basis by airlines, resulting in potential dermal and inhalation pesticide exposures for crew and passengers. A wipe method using filter paper and water was evaluated for both soft and hard aircraft surfaces. Permethrin was analyzed by GC/MS after ultrasonication extraction from the sampling medium into hexane and volume reduction. Recoveries, based on spraying known levels of permethrin, were 80–100% from table trays, seat handles, and rugs, and 40–50% from seat cushions. The wipe sampler is easy to use, requires minimal training, is compatible with regulations on what can be brought through security for use on commercial aircraft, and is readily adaptable for use in residential and other settings. PMID:19756041
Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks
Robinson, Y. Harold; Rajaram, M.
2015-01-01
A mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds optimal loop-free paths that solve the link-disjoint path problem in a MANET, and is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, an energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
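As a rough illustration of the PSO component, the sketch below implements the canonical velocity/position update (inertia plus cognitive and social pulls) on a toy objective. The coefficients and the sphere function are conventional textbook choices, not values from the paper, and no CTRNN is involved here.

```python
import random

def pso(f, dim, n=20, iters=100, seed=1):
    """Minimize f over R^dim with a basic particle swarm (textbook parameters)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]  # positions
    V = [[0.0] * dim for _ in range(n)]                               # velocities
    P = [x[:] for x in X]                                             # personal bests
    pbest = [f(x) for x in X]
    g = min(range(n), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                                      # global best
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

best, val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
print(val < 0.1)
```

In EMPSO the quantity being optimized would be a path-quality measure rather than this toy sphere function.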
Constrained optimization schemes for geophysical inversion of seismic data
NASA Astrophysics Data System (ADS)
Sosa Aguirre, Uram Anibal
Many experimental techniques in geophysics advance the understanding of Earth processes by estimating and interpreting Earth structure (e.g., velocity and/or density structure). These techniques use different types of geophysical data, which can be collected and analyzed separately, sometimes resulting in inconsistent models of the Earth depending on data quality, methods, and assumptions made. This dissertation presents two approaches for geophysical inversion of seismic data based on constrained optimization. In one approach we expand a one-dimensional (1-D) joint inversion least-squares (LSQ) algorithm by introducing a constrained optimization methodology, and then use the 1-D inversion results to produce 3-D Earth velocity structure models. In the second approach, we provide a unified constrained optimization framework for solving a 1-D inverse wave propagation problem. In Chapter 2 we present a constrained optimization framework for joint inversion that characterizes 1-D Earth structure using seismic shear wave velocities as the model parameter. We create two synthetic geophysical data sets sensitive to shear velocities, namely receiver functions and surface wave dispersion. We validate our approach by comparing our numerical results with a traditional unconstrained method, and we also test the robustness of our approach in the presence of noise. Chapter 3 extends this framework to include an interpolation technique for creating 3-D Earth velocity structure models of the Rio Grande Rift region. Chapter 5 introduces the joint inversion of multiple data sets by adding delay travel time information in a synthetic setup, leaving open the possibility of including more data sets. Finally, in Chapter 4 we pose a 1-D inverse full-waveform propagation problem as a PDE-constrained optimization program, where we invert for the material properties in terms of shear wave velocities throughout the physical domain. We facilitate the implementation and comparison of different
A new sampling scheme for tropical forest monitoring using satellite imagery
Frederic Achard; Tim Richards; Javier Gallego
2000-01-01
At the global level, a sampling scheme for tropical forest change assessment, using high-resolution satellite images, has been defined using sampling units independent of any particular satellite sensor. For this purpose, a hexagonal tessellation of 3,600 km² cells has been chosen as the sampling frame.
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias toward the high signal-energy portions of the binarized image geometry after Fourier transformation (i.e., in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry.
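The weighting strategy the abstract critiques can be sketched in one dimension: take the DFT magnitude of the binarized geometry, normalize it into a probability distribution over k-space locations, and draw distinct sample locations from it. This is an illustration of the general signal-energy biasing idea, not the authors' optimized design; the toy "channel" image and sample count are assumptions.

```python
import cmath
import random

def kspace_weights(binary_image):
    """|DFT| of a binarized geometry, normalized: the signal-energy
    weighting that biased random sampling schemes draw from."""
    N = len(binary_image)
    mags = []
    for k in range(N):
        s = sum(binary_image[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
        mags.append(abs(s))
    total = sum(mags)
    return [m / total for m in mags]

def weighted_sample(weights, m, seed=0):
    """Draw m distinct k-space lines, each round with probability
    proportional to the remaining weights (sampling without replacement)."""
    rng = random.Random(seed)
    idx = list(range(len(weights)))
    w = weights[:]
    chosen = []
    for _ in range(m):
        r = rng.random() * sum(w)
        acc = 0.0
        for i, wi in zip(idx, w):
            acc += wi
            if r <= acc:
                chosen.append(i)
                j = idx.index(i)
                idx.pop(j)
                w.pop(j)
                break
    return sorted(chosen)

image = [0] * 8 + [1] * 16 + [0] * 8   # a 1D "channel" geometry
picks = weighted_sample(kspace_weights(image), 8)
print(picks)
```

Low-frequency lines carry most of the energy of this blocky geometry, so they dominate the draw; the paper's point is that this energy-maximizing bias is not optimal at low sub-sampling rates.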
Design of optimally smoothing multi-stage schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Van Leer, Bram; Tai, Chang-Hsien; Powell, Kenneth G.
1989-01-01
In this paper, a method is developed for designing multi-stage schemes that give optimal damping of high frequencies for a given spatial-differencing operator. The objective of the method is to design schemes that combine well with multi-grid acceleration. The schemes are tested on a nonlinear scalar equation, and compared to Runge-Kutta schemes with the maximum stable time-step. The optimally smoothing schemes perform better than the Runge-Kutta schemes, even on a single grid. The analysis is extended to the Euler equations in one space dimension by use of 'characteristic time-stepping', which preconditions the equations, removing stiffness due to variations among characteristic speeds. Convergence rates independent of the number of cells in the finest grid are achieved for transonic flow with and without a shock. Characteristic time-stepping is shown to be preferable to local time-stepping, although use of the optimally damping schemes appears to enhance the performance of local time-stepping. The extension of the analysis to the two-dimensional Euler equations is hampered by the lack of a model for characteristic time-stepping in two dimensions. Some results for local time-stepping are presented.
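This kind of analysis can be sketched for a Jameson-style multistage scheme applied to first-order upwind differencing of linear advection: the amplification factor g(z) is a polynomial in the Fourier symbol z(θ) of the spatial operator, and high-frequency damping is read off from |g| over the upper half of the wavenumber range. The stage coefficients below are a common textbook choice, not the optimized ones derived in the paper.

```python
import cmath
import math

def stage_gain(alphas, z):
    """Amplification factor of the multistage scheme
    u^(k) = u^n + alpha_k * dt * L u^(k-1), evaluated by Horner's rule:
    g(z) = 1 + a_m z (1 + a_{m-1} z (1 + ...))."""
    g = 1.0 + 0j
    for a in alphas:          # iterate in stage order, a_1 first
        g = 1 + a * z * g
    return g

def upwind_symbol(theta, cfl):
    # Fourier symbol of first-order upwind differencing for linear advection,
    # scaled by the CFL number: z(theta) = -cfl * (1 - exp(-i*theta))
    return -cfl * (1 - cmath.exp(-1j * theta))

def high_freq_damping(alphas, cfl, thetas=None):
    """Worst-case |g| over the high-frequency range theta in [pi/2, pi]."""
    thetas = thetas or [math.pi / 2 + i * math.pi / 2 / 16 for i in range(17)]
    return max(abs(stage_gain(alphas, upwind_symbol(t, cfl))) for t in thetas)

classic = [1 / 3, 1 / 2, 1.0]   # a common 3-stage choice (not the optimized one)
print(high_freq_damping(classic, cfl=0.5))
```

The paper's design problem is to pick the alpha_k (and the number of stages) to minimize exactly this kind of high-frequency damping measure, so the scheme pairs well with multigrid smoothing.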
Effect of different sampling schemes on the spatial placement of conservation reserves in Utah, USA
Bassett, S.D.; Edwards, T.C.
2003-01-01
We evaluated the effect that three different sampling schemes, used to organize spatially explicit biological information, had on the spatial placement of conservation reserves in Utah, USA. The three sampling schemes consisted of a hexagon representation developed by the EPA/EMAP program (statistical basis), watershed boundaries (ecological), and the current county boundaries of Utah (socio-political). Four decision criteria were used to estimate effects: amount of area, length of edge, lowest number of contiguous reserves, and greatest number of terrestrial vertebrate species covered. A fifth evaluation criterion was the effect each sampling scheme had on the ability of the modeled conservation reserves to cover the six major ecoregions found in Utah. Of the three sampling schemes, county boundaries covered the greatest number of species, but also created the longest length of edge and greatest number of reserves. Watersheds maximized species coverage using the least amount of area. Hexagons and watersheds provided the least amount of edge and fewest number of reserves. Although there were differences in area, edge, and number of reserves among the sampling schemes, all three schemes covered all the major ecoregions in Utah and their inclusive biodiversity. © 2003 Elsevier Science Ltd. All rights reserved.
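Reserve-selection problems of this kind are often approximated with a greedy set-cover heuristic: repeatedly add the sampling unit that covers the most still-uncovered species. The sketch below uses made-up units and species purely for illustration; it is a standard baseline, not the procedure used in the study.

```python
def greedy_reserves(units, species_of):
    """Greedy set-cover heuristic for reserve selection: repeatedly pick the
    sampling unit covering the most still-uncovered species."""
    uncovered = set().union(*(species_of[u] for u in units))
    chosen = []
    while uncovered:
        best = max(units, key=lambda u: len(species_of[u] & uncovered))
        if not species_of[best] & uncovered:
            break                      # remaining species are uncoverable
        chosen.append(best)
        uncovered -= species_of[best]
    return chosen

# Hypothetical sampling units (e.g., hexagons) and the species each contains
species_of = {
    "hex1": {"lynx", "elk"},
    "hex2": {"elk", "trout", "owl"},
    "hex3": {"owl", "pika"},
}
print(greedy_reserves(list(species_of), species_of))
```

Swapping the unit set (hexagons, watersheds, counties) while holding the selection rule fixed is exactly the comparison the study performs with its richer criteria (area, edge, contiguity).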
RD-optimized competition scheme for efficient motion prediction
NASA Astrophysics Data System (ADS)
Jung, J.; Laroche, G.; Pesquet, B.
2007-01-01
H.264/MPEG4-AVC is the latest video codec provided by the Joint Video Team, which gathers ITU-T and ISO/IEC experts. Technically, there are no drastic changes compared to its predecessors H.263 and MPEG-4 part 2. It nevertheless significantly reduces the bitrate and seems to be progressively adopted by the market. The gain mainly results from the addition of efficient motion compensation tools: variable block sizes, multiple reference frames, 1/4-pel motion accuracy, and powerful Skip and Direct modes. A close study of the distribution of bits in the bitstream reveals that motion information can represent up to 40% of the total. As a consequence, reducing the cost of motion information is a priority for future enhancements. This paper proposes a competition-based scheme for motion prediction. It affects the selection of motion vectors, based on a modified rate-distortion criterion, for the Inter modes and for the Skip mode. Combined spatial and temporal predictors take advantage of temporal redundancies where the spatial median usually fails. An average 7% bitrate saving compared to a standard H.264/MPEG4-AVC codec is reported. In addition, on-the-fly adaptation of the set of predictors is proposed, and preliminary results are provided.
Optimal subgrid scheme for shell models of turbulence
NASA Astrophysics Data System (ADS)
Biferale, Luca; Mailybaev, Alexei A.; Parisi, Giorgio
2017-04-01
We discuss a theoretical framework to define an optimal subgrid closure for shell models of turbulence. The closure is based on the ansatz that consecutive shell multipliers are short-range correlated, following the third hypothesis of Kolmogorov formulated for similar quantities for the original three-dimensional Navier-Stokes turbulence. We also propose a series of systematic approximations to the optimal model by assuming different degrees of correlations across scales among amplitudes and phases of consecutive multipliers. We show numerically that such low-order closures work well, reproducing all known properties of the large-scale dynamics including anomalous scaling. We found small but systematic discrepancies only for a range of scales close to the subgrid threshold, which do not tend to disappear by increasing the order of the approximation. We speculate that the lack of convergence might be due to a structural instability, at least for the evolution of very fast degrees of freedom at small scales. Connections with similar problems for large eddy simulations of the three-dimensional Navier-Stokes equations are also discussed.
NASA Astrophysics Data System (ADS)
Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang
2017-01-01
This paper investigates revenue-neutral tradable credit charge and reward schemes, without initial credit allocations, that can reassign network traffic flow patterns to optimize congestion and emissions. First, we prove the existence of the proposed schemes and further decentralize the minimum-emission flow pattern to user equilibrium; we also design a solution method for the proposed credit scheme for the minimum-emission problem. Second, we investigate revenue-neutral tradable credit charge and reward schemes without initial credit allocations for the bi-objective problem of obtaining Pareto system-optimal flow patterns for congestion and emissions, and show that the corresponding solutions lie in the polyhedron defined by a system of inequalities and equalities. Finally, a numerical example based on a simple traffic network is used to obtain the proposed credit schemes and verify that they are revenue-neutral.
Optimal Runge-Kutta Schemes for High-order Spatial and Temporal Discretizations
2015-06-01
This work highlights that, for unsteady problems, both dissipation and dispersion errors must be accounted for when selecting optimal Runge-Kutta time integrators, taking von Neumann analysis of the schemes into account. Specifically, von Neumann analysis is performed to categorize the dissipation and dispersion errors of a broad class of high-order temporal and spatial schemes.
Zou Xubo; Mathis, W.
2005-08-15
We propose a scheme to realize optimal universal quantum cloning of the polarization state of photons in the context of microwave cavity quantum electrodynamics. The scheme is based on the resonant interaction of three-level λ-type atoms with two cavity modes. The operation requires the atoms to fly one by one through the cavity. The interaction time between each atom and the cavity is appropriately controlled by using a velocity selector. The scheme is deterministic and is feasible with current experimental technology.
NASA Astrophysics Data System (ADS)
Toulorge, T.; Desmet, W.
2012-02-01
We study the performance of methods of lines combining discontinuous Galerkin spatial discretizations and explicit Runge-Kutta time integrators, with the aim of deriving optimal Runge-Kutta schemes for wave propagation applications. We review relevant Runge-Kutta methods from literature, and consider schemes of order q from 3 to 4, and number of stages up to q + 4, for optimization. From a user point of view, the problem of the computational efficiency involves the choice of the best combination of mesh and numerical method; two scenarios are defined. In the first one, the element size is totally free, and a 8-stage, fourth-order Runge-Kutta scheme is found to minimize a cost measure depending on both accuracy and stability. In the second one, the elements are assumed to be constrained to such a small size by geometrical features of the computational domain, that accuracy is disregarded. We then derive one 7-stage, third-order scheme and one 8-stage, fourth-order scheme that maximize the stability limit. The performance of the three new schemes is thoroughly analyzed, and the benefits are illustrated with two examples. For each of these Runge-Kutta methods, we provide the coefficients for a 2N-storage implementation, along with the information needed by the user to employ them optimally.
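A 2N-storage implementation keeps only the solution and one accumulated update in memory, whatever the number of stages. The sketch below uses Williamson's classic three-stage, third-order low-storage coefficients (not the new optimized schemes from the paper) on the test equation u' = -u; for non-autonomous right-hand sides the stage times would also have to be tracked.

```python
import math

# Williamson low-storage RK3 coefficients: only U and dU are stored,
# giving the "2N" memory footprint for N degrees of freedom.
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]

def rk3_2n_step(f, u, t, dt):
    """One low-storage step: dU = A_k*dU + dt*f(U); U = U + B_k*dU per stage.
    (t is held fixed here, which is exact for autonomous right-hand sides.)"""
    du = 0.0
    for a, b in zip(A, B):
        du = a * du + dt * f(t, u)
        u = u + b * du
    return u

u, dt = 1.0, 0.1
u = rk3_2n_step(lambda t, y: -y, u, 0.0, dt)
print(abs(u - math.exp(-dt)))   # third-order accurate: tiny one-step error
```

The paper's optimized 7- and 8-stage schemes would be used identically, just with longer A and B vectors.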
NASA Astrophysics Data System (ADS)
Su, Yonggang; Tang, Chen; Chen, Xia; Li, Biyuan; Xu, Wenjun; Lei, Zhenkun
2017-01-01
We propose an image encryption scheme using chaotic phase masks and cascaded Fresnel transform holography based on a constrained optimization algorithm. In the proposed encryption scheme, the chaotic phase masks are generated by the Henon map, and the initial conditions and parameters of the Henon map serve as the main secret keys during the encryption and decryption process. With the help of multiple chaotic phase masks, the original image can be encrypted into the form of a hologram. The constrained optimization algorithm makes it possible to retrieve the original image from only a single-frame hologram. The use of chaotic phase masks makes key management and transmission very convenient. In addition, the geometric parameters of the optical system serve as additional keys, which can improve the security level of the proposed scheme. Comprehensive security analysis performed on the proposed encryption scheme demonstrates that the scheme has high resistance against various potential attacks. Moreover, the proposed encryption scheme can be used to encrypt video information, and simulations performed on a video in AVI format have verified the feasibility of the scheme for video encryption.
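The Henon-map phase-mask generation can be sketched as follows: iterate the map past a transient, then fold the chaotic orbit into phases in [0, 2π). The normalization and burn-in length are illustrative choices, not the paper's exact construction; the map's extreme sensitivity to the initial condition is what makes (x0, y0, a, b) usable as secret keys.

```python
import math

def henon_phase_mask(n, x0=0.1, y0=0.3, a=1.4, b=0.3, burn=100):
    """Chaotic phase mask from the Henon map; (x0, y0, a, b) act as the key.
    Normalization into [0, 2*pi) is an illustrative choice."""
    x, y = x0, y0
    for _ in range(burn):                  # discard the transient
        x, y = 1 - a * x * x + y, b * x
    mask = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        # fold the attractor (|x| < ~1.5) into a phase in [0, 2*pi)
        mask.append(2 * math.pi * ((x + 1.5) / 3.0 % 1.0))
    return mask

m1 = henon_phase_mask(256)
m2 = henon_phase_mask(256, x0=0.1000001)   # tiny key change, different mask
print(m1 != m2)
```

In the full scheme such masks multiply the complex field before each Fresnel propagation step; decryption fails unless the exact key regenerates the same masks.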
Performance analysis and optimization of finite-difference schemes for wave propagation problems
NASA Astrophysics Data System (ADS)
Pirozzoli, Sergio
2007-03-01
In the present paper, we gauge the performance of finite-difference schemes with Runge-Kutta time integration for wave propagation problems by rigorously defining appropriate cost and error metrics in a simple setting represented by the linear advection equation. Optimal values of the grid spacing and of the time step are obtained as a result of a cost minimization (for given error level) procedure. The theory suggests superior performance of high-order schemes when highly accurate solutions are sought, and even more so in several space dimensions. The analysis of the global discretization error shows the occurrence of two (approximately independent) sources of error, associated with the space and time discretizations. The performance of finite-difference schemes can then be improved by trying to minimize the two contributions separately. General guidelines for the design of problem-tailored, optimized schemes are provided, suggesting that significant reductions of the computational cost are in principle possible. The application of the analysis to wave propagation problems in a two-dimensional environment demonstrates that the analysis carried out for the scalar case directly applies to the propagation of monochromatic sound waves. For problems of sound propagation involving disparate length-scales the analysis still provides useful insight for the optimal exploitation of computational resources; however, the actual advantage provided by optimized schemes is not as evident as in the single-scale, scalar case.
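The space-discretization half of such an error analysis is conveniently expressed through the modified wavenumber: the Fourier symbol of a finite-difference first-derivative stencil, compared against the exact value kh. The sketch below checks that a fourth-order central stencil tracks the exact wavenumber more closely than a second-order one at a moderately resolved wave; it is generic von Neumann bookkeeping, not the paper's full cost model.

```python
import math

def modified_wavenumber(theta, stencil):
    """Imaginary part of the Fourier symbol of an antisymmetric FD stencil
    d/dx ~ (1/h) * sum_j c_j u_{i+j}; the exact operator would give theta = k*h."""
    return sum(c * math.sin(j * theta) for j, c in stencil)

# (offset, coefficient) pairs for standard central differences
central2 = [(1, 0.5), (-1, -0.5)]
central4 = [(1, 2 / 3), (-1, -2 / 3), (2, -1 / 12), (-2, 1 / 12)]

theta = 0.5   # a moderately resolved wave, k*h = 0.5
e2 = abs(modified_wavenumber(theta, central2) - theta)
e4 = abs(modified_wavenumber(theta, central4) - theta)
print(e4 < e2)
```

Combining curves like these with the amplification factor of the Runge-Kutta integrator is what lets the paper split the global error into (approximately independent) space and time contributions.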
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR Part 761 (Polychlorinated Biphenyls: Manufacturing, Processing, Distribution in Commerce, and Use Prohibitions), Sampling Non-Porous Surfaces under § 761.79(b)(3). § 761.316, Protection of Environment: Interpreting PCB concentration measurements resulting from this sampling scheme.
Optimizing the monitoring scheme for groundwater quality in the Lusatian mining region
NASA Astrophysics Data System (ADS)
Zimmermann, Beate; Hildmann, Christian; Haubold-Rosar, Michael
2014-05-01
Opencast lignite mining always requires the lowering of the groundwater table. In Lusatia, intensive mining activities during the GDR era were associated with low groundwater levels in large parts of the region. Pyrite (iron sulfide) oxidation in the aerated sediments is the cause of a continuous regional groundwater pollution with sulfates, acids, iron, and other metals. The contaminated groundwater poses a danger to surface water bodies and may also affect soil quality. Due to the decline of mining activities after German reunification, groundwater levels have begun to recover towards the pre-mining stage, which aggravates the environmental risks. Given the relevance of the problem and the need for effective remediation measures, it is mandatory to know the temporal and spatial distribution of potential pollutants. The reliability of these space-time models, in turn, depends on a well-designed groundwater monitoring scheme. So far, the groundwater monitoring network in the Lusatian mining region represents a purposive sample in space and time with great variations in the density of monitoring wells. Moreover, groundwater quality in some of the areas that face pronounced increases in groundwater levels is currently not monitored at all. We therefore aim to optimize the monitoring network based on the existing information, taking into account practical aspects such as the land-use-dependent need for remedial action. This contribution discusses the usefulness of approaches for optimizing spatio-temporal mapping with regard to groundwater pollution by iron and aluminum in the Lusatian mining region.
GRASS: A Gradient-Based Random Sampling Scheme for Milano Retinex.
Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo
2017-06-01
Retinex is an early and famous theory attempting to estimate the human color sensation derived from an observed scene. When applied to a digital image, the original implementation of retinex estimates the color sensation by modifying the pixel channel intensities with respect to a local reference white, selected from a set of random paths. The spatial search for the local reference white influences the final estimation. The recent algorithm energy-driven termite retinex (ETR), as well as its predecessor termite retinex, introduced a new path-based, image-aware sampling scheme, where the paths depend on local visual properties of the input image. Precisely, the ETR paths transit over pixels with high gradient magnitude, which have been proved to be important for the formation of color sensation. Such a sampling method enables the visit of image portions effectively relevant to the estimation of the color sensation, while it reduces the analysis of pixels with less essential and/or redundant data, i.e., the flat image regions. While the ETR sampling scheme is very effective in detecting image pixels salient for the color sensation, its computational complexity can be a limitation. In this paper, we present a novel Gradient-based RAndom Sampling Scheme that inherits from ETR the image-aware sampling principles but has lower computational complexity while achieving similar performance. Moreover, the new sampling scheme can be interpreted both as a path-based scanning and as a 2D sampling.
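A stripped-down version of gradient-weighted sampling can be sketched directly: compute a gradient-magnitude map and draw pixels with probability proportional to it, so edges are visited often and flat regions rarely. The `eps` floor and the test image below are illustrative assumptions; GRASS itself is more elaborate than this.

```python
import random

def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a grayscale image (list of rows)."""
    H, W = len(img), len(img[0])
    g = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            gx = img[y][min(x + 1, W - 1)] - img[y][x]
            gy = img[min(y + 1, H - 1)][x] - img[y][x]
            g[y][x] = (gx * gx + gy * gy) ** 0.5
    return g

def grass_like_sample(img, m, eps=1e-6, seed=0):
    """Sample m pixels with probability ~ gradient magnitude (a simplified
    GRASS-style 2D sampling; `eps` keeps flat regions reachable).
    Not the authors' exact scheme."""
    rng = random.Random(seed)
    g = gradient_magnitude(img)
    pix = [(x, y) for y in range(len(img)) for x in range(len(img[0]))]
    wts = [g[y][x] + eps for x, y in pix]
    return rng.choices(pix, weights=wts, k=m)

# Flat image with one bright square: samples should cluster near its edges
img = [[0.0] * 16 for _ in range(16)]
for y in range(6, 10):
    for x in range(6, 10):
        img[y][x] = 1.0
samples = grass_like_sample(img, 200)
near_edge = sum(1 for x, y in samples if 4 <= x <= 10 and 4 <= y <= 10)
print(near_edge / len(samples) > 0.9)
```

This captures the principle the abstract describes: the estimator spends its budget where gradients (and hence color-sensation cues) are, without tracing explicit termite-style paths.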
A numerical scheme for optimal transition paths of stochastic chemical kinetic systems
Liu Di
2008-10-01
We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified to be the most probable paths according to the Large Deviation Theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve the optimal transition paths. Applications to Gene Regulatory Networks such as the toggle switch model and the Lactose Operon Model in Escherichia coli are presented as numerical examples.
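A bare-bones Minimum Action Method can be sketched as gradient descent on a discretized action functional with pinned endpoints. The double-well drift, grid sizes, and step sizes below are illustrative; the paper's modified MAM handles large-deviation rate functions of chemical jump processes, which this diffusion-style Freidlin-Wentzell functional does not capture.

```python
def minimum_action_path(b, db, x0, x1, N=50, T=5.0, steps=2000, lr=0.01):
    """Gradient descent on the discretized action
    S = 0.5 * sum_i ((phi_{i+1}-phi_i)/dt - b(phi_i))^2 * dt
    with fixed endpoints phi_0 = x0, phi_N = x1.
    A toy stand-in for the paper's modified Minimum Action Method."""
    dt = T / N
    # straight-line initial path between the two metastable states
    phi = [x0 + (x1 - x0) * i / N for i in range(N + 1)]
    for _ in range(steps):
        v = [(phi[i + 1] - phi[i]) / dt - b(phi[i]) for i in range(N)]
        for j in range(1, N):   # endpoints stay pinned
            grad = v[j - 1] - v[j] * (1 + dt * db(phi[j]))
            phi[j] -= lr * grad
    action = 0.5 * sum(((phi[i + 1] - phi[i]) / dt - b(phi[i])) ** 2
                       for i in range(N)) * dt
    return phi, action

b = lambda x: x - x ** 3          # double-well drift, stable states at -1 and +1
db = lambda x: 1 - 3 * x * x
path, S = minimum_action_path(b, db, -1.0, 1.0)
print(path[0], path[-1], S >= 0)
```

Descending the action drives the path toward the most probable transition trajectory between the two wells; the endpoints never move, only the interior points relax.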
Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction
2016-01-01
1 Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction William F. Moulder, James D. Krieger, Denise T. Maurais-Galejs, Huy...reconstruction. The array topology samples the scene on a regular grid of phase centers, using a tiling of Boundary Arrays (BAs). Following a simple correction...the sampled data can then be processed with the well-known and highly efficient monostatic FFT imaging algorithm. In this work, the approach is
Simultaneous optimization of dose distributions and fractionation schemes in particle radiotherapy
Unkelbach, Jan; Zeng, Chuan; Engelsman, Martijn
2013-09-15
Purpose: The paper considers the fractionation problem in intensity modulated proton therapy (IMPT). Conventionally, IMPT fields are optimized independently of the fractionation scheme. In this work, we discuss the simultaneous optimization of fractionation scheme and pencil beam intensities. Methods: This is performed by allowing for distinct pencil beam intensities in each fraction, which are optimized using objective and constraint functions based on biologically equivalent dose (BED). The paper presents a model that mimics an IMPT treatment with a single incident beam direction for which the optimal fractionation scheme can be determined despite the nonconvexity of the BED-based treatment planning problem. Results: For this model, it is shown that a small α/β ratio in the tumor gives rise to a hypofractionated treatment, whereas a large α/β ratio gives rise to hyperfractionation. It is further demonstrated that, for intermediate α/β ratios in the tumor, a nonuniform fractionation scheme emerges, in which it is optimal to deliver different dose distributions in subsequent fractions. The intuitive explanation for this phenomenon is as follows: By varying the dose distribution in the tumor between fractions, the same total BED can be achieved with a lower physical dose. If it is possible to achieve this dose variation in the tumor without varying the dose in the normal tissue (which would have an adverse effect), the reduction in physical dose may lead to a net reduction of the normal tissue BED. For proton therapy, this is indeed possible to some degree because the entrance dose is mostly independent of the range of the proton pencil beam. Conclusions: The paper provides conceptual insight into the interdependence of optimal fractionation schemes and the spatial optimization of dose distributions. It demonstrates the emergence of nonuniform fractionation schemes that arise from the standard BED model when IMPT fields and fractionation scheme are optimized
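The BED model underlying the optimization is simple to state: n fractions of dose d give BED = nd(1 + d/(α/β)). The sketch below compares a hypofractionated and a conventional schedule delivering the same physical dose; the schedule and α/β values are generic textbook numbers, not from the paper.

```python
def bed(n, d, ab):
    """Biologically equivalent dose of n fractions of d Gy,
    for a tissue with alpha/beta ratio `ab` (linear-quadratic model)."""
    return n * d * (1 + d / ab)

# Two schedules delivering the same 60 Gy physical dose (illustrative values):
hypo = bed(n=15, d=4.0, ab=10.0)   # hypofractionated, tumor alpha/beta = 10
conv = bed(n=30, d=2.0, ab=10.0)   # conventional, tumor alpha/beta = 10
print(round(hypo, 6), round(conv, 6))

# For a late-responding normal tissue (alpha/beta = 3) the gap widens further:
print(round(bed(15, 4.0, 3.0), 6), round(bed(30, 2.0, 3.0), 6))
```

Because the d/(α/β) term rewards large fraction doses more strongly when α/β is small, tumors with small α/β favor hypofractionation while normal-tissue sparing pulls the other way, which is the trade-off the paper's joint optimization resolves.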
NASA Astrophysics Data System (ADS)
Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann
2013-04-01
In recent years, much attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness of uncertainty estimation depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on its ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimates of hydrological models; (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution Metropolis algorithm (SCEMUA), differential evolution adaptive Metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulations using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in West Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and on the uncertainty results for model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of the outputs of a hydrological model H. Inputs to these models are specially identified representative variables (precipitation and flows of past events). The trained machine learning models are then employed to predict the model output uncertainty specific to the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best, since there is no basis for comparison). A solution could be to form a committee of all models U and
Optimization of sampling parameters for standardized exhaled breath sampling.
Doran, Sophie; Romano, Andrea; Hanna, George B
2017-09-05
The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and of the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are a 500 ml sample
Muñoz Maniega, Susana; Bastin, Mark E; Armitage, Paul A
2008-05-01
The choice of the number (N) and orientation of diffusion sampling gradients required to measure accurately the water diffusion tensor remains contentious. Monte Carlo studies have suggested that between 20 and 30 uniformly distributed sampling orientations are required to provide robust estimates of water diffusion parameters. These simulations have not, however, taken into account what effect random subject motion, specifically rotation, might have on optimised gradient schemes, a problem which is especially relevant to clinical diffusion tensor MRI (DT-MRI). Here this question is investigated using Monte Carlo simulations of icosahedral sampling schemes and in vivo data. These polyhedra-based schemes, which have the advantage that large N can be created from optimised subsets of smaller N, appear to be ideal for the study of restless subjects since, if scanning needs to be terminated prematurely, it should be possible to identify a subset of images that have been acquired with a near-optimised sampling scheme. The simulations and in vivo data show that as N increases, the rotational variance of fractional anisotropy (FA) estimates becomes progressively less dependent on the magnitude of subject rotation (alpha), while higher FA values are progressively underestimated as alpha increases. These data indicate that for large subject rotations the B-matrix should be recalculated to provide accurate diffusion anisotropy information.
An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
Optimized finite-difference (DRP) schemes perform poorly for decaying or growing oscillations
NASA Astrophysics Data System (ADS)
Brambley, E. J.
2016-11-01
Computational aeroacoustics often uses finite-difference schemes optimized to require relatively few points per wavelength; such optimized schemes are often called Dispersion Relation Preserving (DRP). Similar techniques are also used outside aeroacoustics. Here the question is posed: what is the equivalent of points per wavelength for growing or decaying waves, and how well are such waves resolved numerically? Such non-constant-amplitude waves are common in aeroacoustics, such as the exponential decay caused by acoustic linings, the O(1/r) decay of an expanding spherical wave, and the decay of high-azimuthal-order modes in the radial direction towards the centre of a cylindrical duct. It is shown that optimized spatial derivatives perform poorly for waves that are not of constant amplitude, underperforming maximal-order schemes. An equivalent criterion to points per wavelength is proposed for non-constant-amplitude oscillations, reducing to the standard definition for constant-amplitude oscillations and valid even for pure growth or decay with no oscillation. Using this definition, coherent statements about the points per wavelength necessary for a given accuracy can be made for maximal-order schemes applied to non-constant-amplitude oscillations. These features are illustrated through a numerical example of a one-dimensional wave propagating through a damping region.
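The points-per-wavelength notion for constant-amplitude waves can be made concrete with the simplest (unoptimized) second-order central difference, whose modified wavenumber is sin(kh)/h. This is only background to the abstract, not Brambley's DRP analysis:

```python
import numpy as np

def dispersion_error(ppw):
    """Relative phase error of the 2nd-order central difference d/dx
    for a constant-amplitude wave resolved with `ppw` points per wavelength."""
    kh = 2.0 * np.pi / ppw   # nondimensional wavenumber
    k_star_h = np.sin(kh)    # modified wavenumber of (f[i+1] - f[i-1]) / 2h
    return abs(k_star_h - kh) / kh

# Resolution improves rapidly with more points per wavelength:
# dispersion_error(4) is a few tens of percent, dispersion_error(32) below 1%.
```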
Tandem polymer solar cells: simulation and optimization through a multiscale scheme.
Wei, Fanan; Yao, Ligang; Lan, Fei; Li, Guangyong; Liu, Lianqing
2017-01-01
In this paper, polymer solar cells with a tandem structure were investigated and optimized using a multiscale simulation scheme. In the proposed multiscale simulation, multiple aspects - optical calculation, mesoscale simulation, device scale simulation and optimal power conversion efficiency searching modules - were studied together to give an optimal result. Through the simulation work, dependencies of device performance on the tandem structures were clarified by tuning the thickness, donor/acceptor weight ratio as well as the donor-acceptor distribution in both active layers of the two sub-cells. Finally, employing searching algorithms, we optimized the power conversion efficiency of the tandem polymer solar cells and located the optimal device structure parameters. With the proposed multiscale simulation strategy, poly(3-hexylthiophene)/phenyl-C61-butyric acid methyl ester and (poly[2,6-(4,4-bis-(2-ethylhexyl)-4H-cyclopenta[2,1-b;3,4-b]dithiophene)-alt-4,7-(2,1,3-benzothiadiazole)])/phenyl-C61-butyric acid methyl ester based tandem solar cells were simulated and optimized as an example. Two configurations with different sub-cell sequences in the tandem photovoltaic device were tested and compared. The comparison of the simulation results between the two configurations demonstrated that the balance between the two sub-cells is of critical importance for tandem organic photovoltaics to achieve high performance. Consistency between the optimization results and the reported experimental results proved the effectiveness of the proposed simulation scheme.
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms.
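A classical ingredient behind such variance-minimizing two-phase designs is Neyman allocation, which over-samples strata with higher variability. The sketch below is the textbook formula with hypothetical stratum numbers, not the authors' specific variance derivation:

```python
def neyman_allocation(stratum_sizes, stratum_sds, total_n):
    """Second-phase sample sizes minimizing the variance of a stratified
    estimator: sample each stratum in proportion to (size x SD)."""
    weights = [size * sd for size, sd in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [total_n * w / total for w in weights]

# Hypothetical numbers: discordant classifications are rarer but far more
# variable, so the optimal design over-samples them.
alloc = neyman_allocation([900, 100], [0.1, 0.5], total_n=140)
# concordant stratum: 90 units; discordant stratum: 50 units
```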
TEM10 homodyne detection as an optimal small-displacement and tilt-measurement scheme
NASA Astrophysics Data System (ADS)
Delaubert, V.; Treps, N.; Lassen, M.; Harb, C. C.; Fabre, C.; Lam, P. K.; Bachor, H.-A.
2006-11-01
We present a detailed description of small displacement and tilt measurements of a Gaussian beam using split detectors and TEM10 homodyne detectors. Theoretical analysis and an experimental demonstration of measurements of these two conjugate variables are given. A comparison between the experimental efficiency of each scheme proves that the standard split detection is only 64% efficient relative to the TEM10 homodyne detection, which is optimal for beam displacement and tilt. We also demonstrate experimentally that squeezed light in the appropriate spatial modes allows measurements beyond the quantum noise limit for both types of detectors. Finally, we explain how to choose the detection scheme best adapted to a given application.
K-Optimal Gradient Encoding Scheme for Fourth-Order Tensor-Based Diffusion Profile Imaging.
Alipoor, Mohammad; Gu, Irene Yu-Hua; Mehnert, Andrew; Maier, Stephan E; Starck, Göran
2015-01-01
The design of an optimal gradient encoding scheme (GES) is a fundamental problem in diffusion MRI. It is well studied for the case of second-order tensor imaging (Gaussian diffusion). However, it has not been investigated for the wide range of non-Gaussian diffusion models. The optimal GES is the one that minimizes the variance of the estimated parameters. Such a GES can be realized by minimizing the condition number of the design matrix (K-optimal design). In this paper, we propose a new approach to solve the K-optimal GES design problem for fourth-order tensor-based diffusion profile imaging. The problem is a nonconvex experiment design problem. Using convex relaxation, we reformulate it as a tractable semidefinite programming problem. Solving this problem leads to several theoretical properties of K-optimal design: (i) the odd moments of the K-optimal design must be zero; (ii) the even moments of the K-optimal design are proportional to the total number of measurements; (iii) the K-optimal design is not unique, in general; and (iv) the proposed method can be used to compute the K-optimal design for an arbitrary number of measurements. Our Monte Carlo simulations support the theoretical results and show that, in comparison with existing designs, the K-optimal design leads to the minimum signal deviation.
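Minimizing the condition number of the design matrix can be illustrated for the simpler second-order tensor case (the paper itself treats fourth-order tensors): a well-spread gradient set conditions the least-squares tensor fit far better than a clustered one. The direction sets below are illustrative:

```python
import numpy as np

def dti_design_matrix(dirs):
    """Design matrix mapping the 6 unique elements of a 2nd-order diffusion
    tensor to measurements along unit gradient directions."""
    g = np.asarray(dirs, float)
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    x, y, z = g.T
    return np.column_stack([x * x, y * y, z * z, 2 * x * y, 2 * x * z, 2 * y * z])

phi = (1 + 5 ** 0.5) / 2  # golden ratio: icosahedral 6-direction scheme
spread = [(1, phi, 0), (-1, phi, 0), (0, 1, phi),
          (0, -1, phi), (phi, 0, 1), (phi, 0, -1)]
clustered = [(0.1, 0, 1), (-0.1, 0, 1), (0, 0.1, 1),
             (0, -0.1, 1), (0.1, 0.1, 1), (-0.1, -0.1, 1)]

cond_spread = np.linalg.cond(dti_design_matrix(spread))
cond_clustered = np.linalg.cond(dti_design_matrix(clustered))
# cond_spread is small; cond_clustered is far larger, so noise is amplified
```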
Optimal Sampling Strategies for Oceanic Applications
2009-01-01
The Bluelink ocean data assimilation system (BODAS; Oke et al. 2005, 2008) that underpins BRAN is based on Ensemble Optimal Interpolation (EnOI). Oke, P. R., G. B. Brassington, D. A. Griffin and A. Schiller, 2008: The Bluelink Ocean Data Assimilation System (BODAS). Ocean Modelling, 20, 46-70.
Sample size and optimal sample design in tuberculosis surveys
Sánchez-Crespo, J. L.
1967-01-01
Tuberculosis surveys sponsored by the World Health Organization have been carried out in different communities during the last few years. Apart from the main epidemiological findings, these surveys have provided basic statistical data for use in the planning of future investigations. In this paper an attempt is made to determine the sample size desirable in future surveys that include one of the following examinations: tuberculin test, direct microscopy, and X-ray examination. The optimum cluster sizes are found to be 100-150 children under 5 years of age in the tuberculin test, at least 200 eligible persons in the examination for excretors of tubercle bacilli (direct microscopy) and at least 500 eligible persons in the examination for persons with radiological evidence of pulmonary tuberculosis (X-ray). Modifications of the optimum sample size in combined surveys are discussed. PMID:5300008
Three-dimensional acoustic wave equation modeling based on the optimal finite-difference scheme
NASA Astrophysics Data System (ADS)
Cai, Xiao-Hui; Liu, Yang; Ren, Zhi-Ming; Wang, Jian-Min; Chen, Zhi-De; Chen, Ke-Yang; Wang, Cheng
2015-09-01
Generally, finite-difference (FD) coefficients can be obtained by using Taylor series expansion (TE) or by optimization methods that minimize the dispersion error. However, the TE-based FD method only achieves high modeling precision over a limited range of wavenumbers, and produces large numerical dispersion beyond this range. The optimal FD scheme based on least squares (LS) can guarantee high precision over a larger range of wavenumbers and obtain the best optimization solution at small computational cost. We extend the LS-based optimal FD scheme from two-dimensional (2D) forward modeling to three-dimensional (3D) and develop a 3D acoustic optimal FD method with high efficiency, a wide range of high accuracy, and adaptability to parallel computing. Dispersion analysis and forward modeling demonstrate that the developed FD method suppresses numerical dispersion. Finally, we apply the developed FD method to source and receiver wavefield extrapolation in 3D reverse time migration (RTM). To decrease the computation time and storage requirements, the 3D RTM is implemented by combining efficient boundary storage with checkpointing strategies on GPU. 3D RTM imaging results suggest that the 3D optimal FD method has higher precision than conventional methods.
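The TE-versus-LS contrast can be sketched for a five-point first-derivative stencil in 1D: Taylor coefficients are exact as kh → 0, while an LS fit over a target wavenumber band trades that for lower dispersion across the band. This is a toy illustration, not the paper's 3D scheme:

```python
import numpy as np

def mod_wavenumber(a, kh):
    """Modified wavenumber k*h of the antisymmetric 5-point stencil
    (a[0]*(f[i+1]-f[i-1]) + a[1]*(f[i+2]-f[i-2])) / h."""
    return 2.0 * (a[0] * np.sin(kh) + a[1] * np.sin(2 * kh))

taylor = np.array([2 / 3, -1 / 12])  # classical 4th-order TE coefficients

def ls_coeffs(band=2.0, n=400):
    """Coefficients minimizing the L2 dispersion error for kh in (0, band]."""
    kh = np.linspace(1e-3, band, n)
    A = np.column_stack([2 * np.sin(kh), 2 * np.sin(2 * kh)])
    a, *_ = np.linalg.lstsq(A, kh, rcond=None)
    return a

optimized = ls_coeffs()
high = np.linspace(1.5, 2.0, 50)  # poorly resolved wavenumbers
err = lambda a: np.max(np.abs(mod_wavenumber(a, high) - high))
# err(optimized) < err(taylor): the LS scheme keeps dispersion lower near kh ~ 2
```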
Efficient low-storage Runge-Kutta schemes with optimized stability regions
NASA Astrophysics Data System (ADS)
Niegemann, Jens; Diehl, Richard; Busch, Kurt
2012-01-01
A variety of numerical calculations, especially when considering wave propagation, are based on the method-of-lines, where time-dependent partial differential equations (PDEs) are first discretized in space. For the remaining time-integration, low-storage Runge-Kutta schemes are particularly popular due to their efficiency and their reduced memory requirements. In this work, we present a numerical approach to generate new low-storage Runge-Kutta (LSRK) schemes with optimized stability regions for advection-dominated problems. Adapted to the spectral shape of a given physical problem, those methods are found to yield significant performance improvements over previously known LSRK schemes. As a concrete example, we present time-domain calculations of Maxwell's equations in fully three-dimensional systems, discretized by a discontinuous Galerkin approach.
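A minimal example of the low-storage idea is Williamson's classic two-register, three-stage scheme (a standard textbook LSRK, not one of the optimized-stability-region schemes the paper derives); every stage overwrites the same two registers k and u:

```python
def lsrk3_step(f, u, t, dt):
    """One step of Williamson's two-register, 3-stage, 3rd-order
    low-storage Runge-Kutta scheme; only k and u are stored."""
    A = (0.0, -5.0 / 9.0, -153.0 / 128.0)
    B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)
    C = (0.0, 1.0 / 3.0, 3.0 / 4.0)
    k = 0.0
    for a, b, c in zip(A, B, C):
        k = a * k + dt * f(t + c * dt, u)  # accumulate into register k
        u = u + b * k                      # update register u in place
    return u

# One step on du/dt = u from u(0) = 1 reproduces exp(0.1) to about 4e-6
u1 = lsrk3_step(lambda t, u: u, 1.0, 0.0, 0.1)
```

For a linear autonomous problem this reproduces the cubic stability polynomial 1 + z + z²/2 + z³/6, confirming third-order accuracy with only two storage registers.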
Sample preparation optimization in fecal metabolic profiling.
Deda, Olga; Chatziioannou, Anastasia Chrysovalantou; Fasoula, Stella; Palachanis, Dimitris; Raikos, Νicolaos; Theodoridis, Georgios A; Gika, Helen G
2017-03-15
Metabolomic analysis of feces can provide useful insight into the metabolic status, the health/disease state of the human/animal, and the symbiosis with the gut microbiome. As a result, there has recently been increased interest in the application of holistic analysis of feces for biomarker discovery. For metabolomics applications, the sample preparation process used prior to the analysis of fecal samples is of high importance, as it greatly affects the obtained metabolic profile, especially since feces, as a matrix, vary widely in their physicochemical characteristics and molecular content. However, there is still little information in the literature and no universal approach to sample treatment for fecal metabolic profiling. The scope of the present work was to study the conditions for sample preparation of rat feces with the ultimate goal of acquiring comprehensive metabolic profiles, either untargeted by NMR spectroscopy and GC-MS or targeted by HILIC-MS/MS. A fecal sample pooled from male and female Wistar rats was extracted under various conditions by modifying the pH value, the nature of the organic solvent, and the sample weight to solvent volume ratio. It was found that the 1/2 (wf/vs) ratio provided the highest number of metabolites under neutral and basic conditions in both untargeted profiling techniques. Concerning LC-MS profiles, neutral acetonitrile and propanol provided higher signals and wide metabolite coverage, though extraction efficiency is metabolite dependent. Copyright © 2016 Elsevier B.V. All rights reserved.
The optimal sampling strategy for unfamiliar prey.
Sherratt, Thomas N
2011-07-01
Precisely how predators solve the problem of sampling unfamiliar prey types is central to our understanding of the evolution of a variety of antipredator defenses, ranging from Müllerian mimicry to polymorphism. When predators encounter a novel prey item, they must decide whether to take a risk and attack it, thereby gaining a potential meal and valuable information, or avoid such prey altogether. Moreover, if predators initially attack the unfamiliar prey, then at some point(s) they should decide to cease sampling if evidence mounts that the type is on average unprofitable to attack. Here, I cast this problem as a "two-armed bandit," the standard metaphor for exploration-exploitation trade-offs. I assume that as predators encounter and attack unfamiliar prey they use Bayesian inference to update both their beliefs as to the likelihood that individuals of this type are chemically defended, and the probability of seeing the prey type in the future. I concurrently use dynamic programming to identify the critical informational states at which predators should cease sampling. The model explains why predators sample more unprofitable prey before complete rejection when the prey type is common, and why predators exhibit neophobia when the unfamiliar prey type is perceived to be rare. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
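The Bayesian-updating ingredient of this model can be sketched with a Beta-Bernoulli belief about the probability that the unfamiliar prey type is defended. The paper's full treatment couples such updates with dynamic programming over informational states; the simple posterior-mean stopping threshold below is only illustrative:

```python
def update(alpha, beta, defended):
    """Beta(alpha, beta) posterior over the defense probability
    after attacking one more prey item."""
    return (alpha + 1, beta) if defended else (alpha, beta + 1)

def should_stop(alpha, beta, threshold=0.8):
    """Cease sampling once the posterior mean defense probability is high
    (illustrative rule; the paper derives the stopping states optimally)."""
    return alpha / (alpha + beta) > threshold

# Start from a uniform prior and attack four defended items in a row:
a, b = 1, 1
for _ in range(4):
    a, b = update(a, b, defended=True)
# posterior mean is 5/6 > 0.8, so the predator ceases sampling
```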
Optimization of Dengue Epidemics: A Test Case with Different Discretization Schemes
NASA Astrophysics Data System (ADS)
Rodrigues, Helena Sofia; Monteiro, M. Teresa T.; Torres, Delfim F. M.
2009-09-01
The incidence of the Dengue epidemic disease has grown in recent decades. In this paper an application of optimal control to Dengue epidemics is presented. The mathematical model includes the dynamics of the Dengue mosquito, the affected persons, the people's motivation to combat the mosquito, and the inherent social costs of the disease, such as the costs associated with ill individuals and with education and sanitation campaigns. The dynamic model comprises a set of nonlinear ordinary differential equations. The problem was discretized through Euler and Runge-Kutta schemes and solved using nonlinear optimization packages. The computational results as well as the main conclusions are presented.
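The two discretizations the authors compare behave very differently in accuracy. A minimal sketch on a scalar test equation (not the Dengue model itself) shows forward Euler's O(dt) global error against classical fourth-order Runge-Kutta:

```python
import math

def euler_step(f, u, t, dt):
    """Forward Euler: first-order accurate."""
    return u + dt * f(t, u)

def rk4_step(f, u, t, dt):
    """Classical Runge-Kutta: fourth-order accurate."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, u0, t0, t1, n):
    u, t, dt = u0, t0, (t1 - t0) / n
    for _ in range(n):
        u, t = step(f, u, t, dt), t + dt
    return u

f = lambda t, u: -u  # test equation du/dt = -u, exact solution exp(-t)
exact = math.exp(-1.0)
err_euler = abs(integrate(euler_step, f, 1.0, 0.0, 1.0, 10) - exact)
err_rk4 = abs(integrate(rk4_step, f, 1.0, 0.0, 1.0, 10) - exact)
# err_rk4 is several orders of magnitude smaller than err_euler at the same dt
```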
Tank waste remediation system optimized processing strategy with an altered treatment scheme
Slaathaug, E.J.
1996-03-01
This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy with an altered treatment scheme performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility.
High-order sampling schemes for path integrals and Gaussian chain simulations of polymers
Müser, Martin H.; Müller, Marcus
2015-05-07
In this work, we demonstrate that path-integral schemes, derived in the context of many-body quantum systems, benefit the simulation of Gaussian chains representing polymers. Specifically, we show how to decrease discretization corrections with little extra computation from the usual O(1/P^2) to O(1/P^4), where P is the number of beads representing the chains. As a consequence, high-order integrators necessitate much smaller P than those commonly used. Particular emphasis is placed on the questions of how to maintain this rate of convergence for open polymers and for polymers confined by a hard wall, as well as how to ensure efficient sampling. The advantages of the high-order sampling schemes are illustrated by studying the surface tension of a polymer melt and the interface tension in a binary homopolymer blend.
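The jump from O(1/P^2) to O(1/P^4) corrections can be mimicked in one line with Richardson extrapolation of a trapezoidal estimate. This is a generic illustration of raising the convergence order of a discretization, not the paper's path-integral factorization:

```python
import math

def trapezoid(f, P):
    """Trapezoidal estimate of the integral of f over [0, 1] with P slices;
    the discretization error is O(1/P^2)."""
    h = 1.0 / P
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, P)) + 0.5 * f(1.0))

def extrapolated(f, P):
    """Combine P- and 2P-slice estimates to cancel the O(1/P^2) term,
    leaving an O(1/P^4) error."""
    return (4.0 * trapezoid(f, 2 * P) - trapezoid(f, P)) / 3.0

exact = math.e - 1.0  # integral of exp over [0, 1]
err2 = abs(trapezoid(math.exp, 8) - exact)
err4 = abs(extrapolated(math.exp, 8) - exact)
# err4 is far smaller than err2 at the same resolution
```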
The optimization on flow scheme of helium liquefier with genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, H. R.; Xiong, L. Y.; Peng, N.; Liu, L. Q.
2017-01-01
There are several ways to organize the flow scheme of a helium liquefier, such as arranging the expanders in parallel (reverse Brayton stage) or in series (modified Brayton stages). In this paper, the inlet mass flows and temperatures of the expanders in the Collins cycle are optimized using a genetic algorithm (GA). Results show that the maximum liquefaction rate can be obtained when the system operates at the optimal parameters. However, the reliability of the system is poor due to the high wheel speed of the first turbine. The study shows that the scheme in which expanders are arranged in series with heat exchangers between them has higher operational reliability but lower plant efficiency under the same conditions. Considering both liquefaction rate and system stability, another flow scheme is put forward in the hope of resolving this dilemma. The three configurations are compared from different aspects: economic cost, heat exchanger size, system reliability, and exergy efficiency. In addition, the effect of the heat capacity ratio on heat transfer efficiency is discussed. A conclusion on choosing the liquefier configuration is given at the end, which is useful for the optimal design of helium liquefiers.
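A minimal GA of the kind used for such flow-scheme parameter searches can be sketched as follows. The fitness function here is a hypothetical smooth surrogate; the paper's objective is the liquefaction rate as a function of expander mass flows and temperatures:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=40, generations=60,
                     mutation=0.1, seed=0):
    """Elitist genetic algorithm maximizing `fitness` over the box `bounds`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # elitism: the best half always survives
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # averaging crossover
            child = [min(max(c + rng.gauss(0, mutation), lo), hi)  # mutation
                     for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical surrogate objective peaking at (1, -2):
best = genetic_optimize(lambda x: -(x[0] - 1) ** 2 - (x[1] + 2) ** 2,
                        [(-5, 5), (-5, 5)])
```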
NASA Astrophysics Data System (ADS)
Khayyer, Abbas; Gotoh, Hitoshi; Shimizu, Yuma
2017-03-01
The paper provides a comparative investigation of the accuracy and conservation properties of two particle regularization schemes, namely the Dynamic Stabilization (DS) [1] and generalized Particle Shifting (PS) [2] schemes, in simulations of both internal and free-surface flows in the ISPH (Incompressible SPH) context. The paper also presents an Optimized PS (OPS) scheme for accurate and consistent implementation of particle shifting for free-surface flows. In contrast to PS, the OPS does not contain any tuning parameters for the free surface, consistently resulting in perfect elimination of shifting normal to an interface, and resolves the unphysical discontinuity beneath the interface seen in PS results.
Comparison of rainfall sampling schemes using a calibrated stochastic rainfall generator
Welles, E.
1994-12-31
Accurate rainfall measurements are critical to river flow predictions. Areal and gauge rainfall measurements create different descriptions of the same storms. The purpose of this study is to characterize those differences. A stochastic rainfall generator was calibrated using an automatic search algorithm, with statistics describing several rainfall characteristics of interest used in the error function. The calibrated model was then used to generate storms that were exhaustively sampled, sparsely sampled, and sampled areally on 4 x 4 km grids. The sparsely sampled rainfall was also kriged to 4 x 4 km blocks. The differences between the four schemes were characterized by comparing statistics computed from each of the sampling methods, and the possibility of predicting areal statistics from gauge statistics was explored. It was found that areally measured storms appeared to move more slowly, appeared larger, appeared less intense, and had shallower intensity gradients.
Vonderheide, Anne P; Kauffman, Peter E; Hieber, Thomas E; Brisbin, Judith A; Melnyk, Lisa Jo; Morgan, Jeffrey N
2009-03-25
Analysis of an individual's total daily food intake may be used to determine aggregate dietary ingestion of given compounds. However, the resulting composite sample represents a complex mixture, and measurement of such can often prove to be difficult. In this work, an analytical scheme was developed for the determination of 12 select pyrethroid pesticides in dietary samples. In the first phase of the study, several cleanup steps were investigated for their effectiveness in removing interferences in samples with a range of fat content (1-10%). Food samples were homogenized in the laboratory, and preparatory techniques were evaluated through recoveries from fortified samples. The selected final procedure consisted of a lyophilization step prior to sample extraction. A sequential 2-fold cleanup procedure of the extract included diatomaceous earth for removal of lipid components followed with a combination of deactivated alumina and C(18) for the simultaneous removal of polar and nonpolar interferences. Recoveries from fortified composite diet samples (10 microg kg(-1)) ranged from 50.2 to 147%. In the second phase of this work, three instrumental techniques [gas chromatography-microelectron capture detection (GC-microECD), GC-quadrupole mass spectrometry (GC-quadrupole-MS), and GC-ion trap-MS/MS] were compared for greatest sensitivity. GC-quadrupole-MS operated in selective ion monitoring (SIM) mode proved to be most sensitive, yielding method detection limits of approximately 1 microg kg(-1). The developed extraction/instrumental scheme was applied to samples collected in an exposure measurement field study. The samples were fortified and analyte recoveries were acceptable (75.9-125%); however, compounds coextracted from the food matrix prevented quantitation of four of the pyrethroid analytes in two of the samples considered.
Xing, Changhu; Jensen, Colby; Folsom, Charles; Ban, Heng; Marshall, Douglas W.
2014-01-01
In the guarded cut-bar technique, a guard surrounding the measured sample and reference (meter) bars is temperature controlled to carefully regulate heat losses from the sample and reference bars. Guarding is typically carried out by matching the temperature profiles between the guard and the test stack of sample and meter bars. Problems arise in matching the profiles, especially when the thermal conductivities of the meter bars and of the sample differ, as is usually the case. In a previous numerical study, the applied guarding condition (guard temperature profile) was found to be an important factor in measurement accuracy. Different from the linear-matched or isothermal schemes recommended in the literature, the optimal guarding condition depends on the system geometry and the thermal conductivity ratio of sample to meter bar. To validate the numerical results, an experimental study was performed to investigate the resulting error under different guarding conditions using stainless steel 304 as both the sample and the meter bars. The optimal guarding condition was further verified on a certified reference material, Pyroceram 9606, and on 99.95% pure iron, whose thermal conductivities are much smaller and much larger, respectively, than that of the stainless steel meter bars. Additionally, measurements were performed using three different inert gases to show the effect of the insulation's effective thermal conductivity on measurement error, revealing that low-conductivity argon gas gives the lowest error sensitivity when deviating from the optimal condition. The results of this study provide a general guideline for this specific measurement method and for other methods requiring optimal guarding or insulation.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
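The trade-off that motivates these designs is easy to see numerically: a fixed design sized for an optimistic effect loses power rapidly when the true effect is smaller. The sketch below uses a normal-approximation power formula for a two-sided, two-sample z-test with illustrative effect sizes; it is not the paper's optimality criterion.

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sample(delta, n_per_arm, sigma=1.0, z_alpha=1.959963984540054):
    """Approximate power of a two-sided, two-sample z-test (normal approximation)."""
    ncp = delta / (sigma * math.sqrt(2.0 / n_per_arm))
    return 1.0 - normal_cdf(z_alpha - ncp)

# n = 64 per arm targets ~80% power at an optimistic effect size of 0.5;
# n = 130 per arm targets ~80% power at a conservative effect size of 0.35.
for delta in (0.30, 0.35, 0.40, 0.50):
    print(delta, round(power_two_sample(delta, 64), 3),
                 round(power_two_sample(delta, 130), 3))
```

The optimistic design drops well below 80% power once the true effect shrinks toward 0.35, which is the gap that group sequential and sample size re-estimation designs try to close.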
A genetic algorithm based multi-objective shape optimization scheme for cementless femoral implant.
Chanda, Souptick; Gupta, Sanjay; Kumar Pratihar, Dilip
2015-03-01
The shape and geometry of femoral implant influence implant-induced periprosthetic bone resorption and implant-bone interface stresses, which are potential causes of aseptic loosening in cementless total hip arthroplasty (THA). Development of a shape optimization scheme is necessary to achieve a trade-off between these two conflicting objectives. The objective of this study was to develop a novel multi-objective custom-based shape optimization scheme for cementless femoral implant by integrating finite element (FE) analysis and a multi-objective genetic algorithm (GA). The FE model of a proximal femur was based on a subject-specific CT-scan dataset. Eighteen parameters describing the nature of four key sections of the implant were identified as design variables. Two objective functions, one based on implant-bone interface failure criterion, and the other based on resorbed proximal bone mass fraction (BMF), were formulated. The results predicted by the two objective functions were found to be contradictory; a reduction in the proximal bone resorption was accompanied by a greater chance of interface failure. The resorbed proximal BMF was found to be between 23% and 27% for the trade-off geometries as compared to ∼39% for a generic implant. Moreover, the overall chances of interface failure have been minimized for the optimal designs, compared to the generic implant. The adaptive bone remodeling was also found to be minimal for the optimally designed implants and, further with remodeling, the chances of interface debonding increased only marginally.
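The trade-off geometries mentioned above are the non-dominated (Pareto) solutions of the two-objective search. A minimal, generic sketch of the Pareto filtering step is shown below; the objective values are hypothetical stand-ins for (interface failure index, resorbed bone mass fraction), both minimized.

```python
def pareto_front(points):
    """Keep the points not dominated by any other point (both objectives minimized)."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

# Hypothetical (interface failure index, resorbed bone mass fraction) pairs
designs = [(0.30, 0.23), (0.25, 0.27), (0.40, 0.22), (0.35, 0.39), (0.28, 0.25)]
print(pareto_front(designs))  # (0.35, 0.39) is dominated and drops out
```

A multi-objective GA such as the one in the study maintains and refines exactly this kind of non-dominated set across generations instead of collapsing the two objectives into one score.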
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
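As a concrete reference point, the non-overlapped Allan deviation can be computed in a few lines; for pure white noise it falls as 1/sqrt(tau), the regime in which further averaging still helps. This is a generic stdlib sketch, not Picarro code.

```python
import math
import random

def allan_deviation(y, m):
    """Non-overlapped Allan deviation of series y at averaging factor m (tau = m*tau0)."""
    k = len(y) // m
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
    avar = sum((means[i + 1] - means[i]) ** 2 for i in range(k - 1)) / (2.0 * (k - 1))
    return math.sqrt(avar)

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(100000)]
for m in (1, 10, 100):
    print(m, round(allan_deviation(white, m), 4))  # ~1.0, ~0.32, ~0.10
```

On a log-log Allan plot, white noise gives a -1/2 slope; a flattening or upturn marks the averaging time at which drift starts to dominate, which is what sets the optimal calibration interval.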
McGregor, David A.
1993-07-01
The purpose of the Human Genome Project is outlined followed by a discussion of electrophoresis in slab gels and capillaries and its application to deoxyribonucleic acid (DNA). Techniques used to modify electroosmotic flow in capillaries are addressed. Several separation and detection schemes for DNA via gel and capillary electrophoresis are described. Emphasis is placed on the elucidation of DNA fragment size in real time and shortening separation times to approximate real time monitoring. The migration of DNA fragment bands through a slab gel can be monitored by UV absorption at 254 nm and imaged by a charge coupled device (CCD) camera. Background correction and immediate viewing of band positions to interactively change the field program in pulsed-field gel electrophoresis are possible throughout the separation. The use of absorption removes the need for staining or radioisotope labeling thereby simplifying sample preparation and reducing hazardous waste generation. This leaves the DNA in its native state and further analysis can be performed without de-staining. The optimization of several parameters considerably reduces total analysis time. DNA from 2 kb to 850 kb can be separated in 3 hours on a 7 cm gel with interactive control of the pulse time, which is 10 times faster than the use of a constant field program. The separation of ΦX174RF DNA-HaeIII fragments is studied in a 0.5% methyl cellulose polymer solution as a function of temperature and applied voltage. The migration times decreased with both increasing temperature and increasing field strength, as expected. The relative migration rates of the fragments do not change with temperature but are affected by the applied field. Conditions were established for the separation of the 271/281 bp fragments, even without the addition of intercalating agents. At 700 V/cm and 20°C, all fragments are separated in less than 4 minutes with an average plate number of 2.5 million per meter.
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-01-01
Purpose: Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve efficacy of image feature selection and classifier optimization in developing a CAD scheme of mammographic masses. Methods: An image dataset including 1600 regions of interest (ROIs) in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions) was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as the features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) A sequential floating forward selection (SFFS) method, (3) A genetic algorithm (GA), and (4) A sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Results: Among these four methods, SFFS has highest efficacy, which takes 3%–5% of computational time as compared to GA approach, and yields the highest performance level with the area under a receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that except using GA, including the new texture features computed from the dilated mass segments improved the AUC
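Of the four methods compared, sequential forward selection (SFS) is the simplest to state: greedily add whichever feature most improves the classifier score until no candidate helps. The schematic sketch below uses a toy scoring function standing in for the cross-validated ANN; all names and values are hypothetical. SFFS extends this loop with backward "floating" removal steps.

```python
def sequential_forward_selection(features, score, k_max):
    """Greedily grow a feature subset while the score keeps improving."""
    selected = []
    while len(selected) < k_max:
        candidates = [f for f in features if f not in selected]
        best = max(candidates, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the score
        selected.append(best)
    return selected

# Toy score: features 0 and 3 are informative; every extra feature costs a little
def toy_score(subset):
    return sum(0.4 if f in (0, 3) else -0.05 for f in subset)

print(sequential_forward_selection(list(range(6)), toy_score, 4))  # [0, 3]
```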
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for several management tasks. Planning the number and spatial distribution of pressure observations is known as sampling design, and it has traditionally been addressed in the context of model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, detect anomalies and bursts, and guarantee service quality. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management, has been addressed through optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation identifies network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters that create the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
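The WDN-oriented and sampling-oriented indices both adapt the classical graph-theoretic modularity, which rewards partitions whose edges fall mostly inside modules. A generic Newman-modularity sketch on a toy network is shown below (hypothetical edge list; the paper's variants additionally weight pipes by the technical task).

```python
def modularity(edges, community):
    """Newman's modularity Q for an undirected graph and a node -> community map."""
    m = len(edges)
    internal, degree_sum = {}, {}
    for u, v in edges:
        cu, cv = community[u], community[v]
        degree_sum[cu] = degree_sum.get(cu, 0) + 1
        degree_sum[cv] = degree_sum.get(cv, 0) + 1
        if cu == cv:
            internal[cu] = internal.get(cu, 0) + 1
    return sum(internal.get(c, 0) / m - (degree_sum[c] / (2 * m)) ** 2
               for c in degree_sum)

# Two 3-node cliques joined by a single connecting pipe: a natural 2-module split
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
two_modules = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(round(modularity(edges, two_modules), 3))       # 0.357
print(modularity(edges, {i: 'A' for i in range(6)}))  # a single module scores 0.0
```

The conceptual cut in this toy network is the single edge (2, 3); maximizing modularity identifies it as the candidate location for a gate or flow meter.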
The cognitive mechanisms of optimal sampling.
Lea, Stephen E G; McLaren, Ian P L; Dow, Susan M; Graft, Donald A
2012-02-01
How can animals learn the prey densities available in an environment that changes unpredictably from day to day, and how much effort should they devote to doing so, rather than exploiting what they already know? Using a two-armed bandit situation, we simulated several processes that might explain the trade-off between exploring and exploiting. They included an optimising model, dynamic backward sampling; a dynamic version of the matching law; the Rescorla-Wagner model; a neural network model; and ɛ-greedy and rule of thumb models derived from the study of reinforcement learning in artificial intelligence. Under conditions like those used in published studies of birds' performance under two-armed bandit conditions, all models usually identified the more profitable source of reward, and did so more quickly when the reward probability differential was greater. Only the dynamic programming model switched from exploring to exploiting more quickly when available time in the situation was less. With sessions of equal length presented in blocks, a session-length effect was induced in some of the models by allowing motivational, but not memory, carry-over from one session to the next. The rule of thumb model was the most successful overall, though the neural network model also performed better than the remaining models. Copyright © 2011 Elsevier B.V. All rights reserved.
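The ɛ-greedy rule mentioned above is the easiest of these processes to state: with probability ɛ explore a random arm, otherwise exploit the arm with the higher estimated reward. A minimal two-armed bandit sketch with hypothetical parameters:

```python
import random

def eps_greedy_bandit(p, steps=1000, eps=0.1, seed=1):
    """Two-armed bandit: explore with probability eps, else pick the better estimate."""
    rng = random.Random(seed)
    counts = [0, 0]
    values = [0.0, 0.0]          # running mean reward per arm
    total_reward = 0
    for _ in range(steps):
        if rng.random() < eps or 0 in counts:
            arm = rng.randrange(2)                   # explore
        else:
            arm = 0 if values[0] > values[1] else 1  # exploit
        reward = 1 if rng.random() < p[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return counts, total_reward

counts, total_reward = eps_greedy_bandit([0.2, 0.8])
print(counts, total_reward)  # the richer arm (index 1) is pulled far more often
```

As in the simulations described above, the rule identifies the more profitable source of reward, and it does so faster when the reward probability differential is larger.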
Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P
2015-07-01
Model Based Control (MBC) is one of the energy-optimal controllers used in vector-controlled Induction Motor (IM) drives for adjusting the motor excitation in accordance with torque and speed. MBC offers energy conservation, especially at part-load operation, but it creates ripples in torque and speed during load transitions, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low pass filter, (ii) torque-producing current (iqs) injection in the output of the speed controller, and (iii) a Variable Structure Speed Controller (VSSC). The operation of MBC before and after load transitions is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and for MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC link power, in addition to energy saving during load transitions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Poirier, Vincent
Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method, but at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining the lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing, and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Because the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
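The polling-switch half of the problem can be viewed as a weighted set cover: pick the cheapest set of switches such that every flow traverses at least one polled switch. The brute-force sketch below uses a hypothetical four-flow network with made-up costs; the paper's ILP additionally optimizes the flow routes themselves, which this toy fixes in advance.

```python
from itertools import combinations

# Polling switch selection as weighted set cover (toy, hypothetical data):
# every flow must cross at least one polled switch, at minimum polling cost.
flows = {
    'f1': {'s1', 's2'},   # switches on each flow's (fixed) route
    'f2': {'s2', 's3'},
    'f3': {'s3', 's4'},
    'f4': {'s1', 's4'},
}
cost = {'s1': 3, 's2': 1, 's3': 1, 's4': 3}
switches = sorted(cost)

best = None
for r in range(1, len(switches) + 1):
    for subset in combinations(switches, r):
        covered = all(route & set(subset) for route in flows.values())
        c = sum(cost[s] for s in subset)
        if covered and (best is None or c < best[0]):
            best = (c, subset)
print(best)  # (4, ('s1', 's3'))
```

Note that the cheap pair {s2, s3} fails to cover flow f4, which is why the optimum mixes one cheap and one expensive switch; controlling routing (the paper's other lever) could reroute f4 through a cheap switch instead.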
Optimal integral sliding mode control scheme based on pseudospectral method for robotic manipulators
NASA Astrophysics Data System (ADS)
Liu, Rongjie; Li, Shihua
2014-06-01
For a multi-input multi-output nonlinear system, an optimal integral sliding mode control scheme based on the pseudospectral method is proposed in this paper, and the controller is applied to rigid robotic manipulators with constraints. First, a general form of integral sliding mode is designed with the aim of restraining disturbances. Then, the pseudospectral method is adopted to deal with the constrained optimal control problem. Combining the benefits of both methods, an optimal integral sliding mode controller is obtained. The stability analysis shows that the controller guarantees the stability of the robotic manipulator system, and simulations show the effectiveness of the proposed method.
An optimized node-disjoint multipath routing scheme in mobile ad hoc
NASA Astrophysics Data System (ADS)
Yu, Yang; Liang, Mangui; Liu, Zhiyu
2016-02-01
In mobile ad hoc networks (MANETs), link failures occur frequently because of node mobility and the use of unreliable wireless channels for data transmission. A multipath routing strategy can cope with traffic overloads while balancing network resource consumption. In this paper, an optimized node-disjoint multipath routing (ONMR) protocol based on the ad hoc on-demand distance vector (AODV) protocol is proposed to establish effective multiple paths that enhance network reliability and robustness. The scheme combines the characteristics of the reverse AODV (R-AODV) strategy and an on-demand node-disjoint multipath routing protocol to determine available node-disjoint routes with minimum routing control overhead. Meanwhile, it adds a backup routing strategy to make data salvage more efficient in case of link failure. The results obtained through various simulations show the effectiveness of the proposed scheme in terms of route availability, control overhead and packet delivery ratio.
Tzanos, C.P.
1981-12-01
Maximum cladding temperatures in heterogeneous liquid-metal fast breeder reactors (LMFBRs) can be reduced if the flow allocation between core and blanket assemblies is continuously varied during burnup. An analytical model has been developed that optimizes the time variation of the flow such that the reduction in maximum cladding temperatures is maximized. In addition, the concept of continuously varying the flow allocation between core and blanket assemblies has been evaluated for different fuel management schemes in a low sodium void reactivity 3000-MW heterogeneous LMFBR. This evaluation shows that the reduction in maximum cladding midwall temperatures is small (about 10°C) if the reactor is partially refueled at the end of each burnup cycle (cycle length of one year), and this reduction is increased to 20°C if a straight burn fuel scheme is used with a core and internal blanket fuel residence time of two years.
Optimization of sampled imaging system with baseband response squeeze model
NASA Astrophysics Data System (ADS)
Yang, Huaidong; Chen, Kexin; Huang, Xingyue; He, Qingsheng; Jin, Guofan
2008-03-01
When evaluating or designing a sampled imager, a comprehensive analysis is necessary, and a trade-off among the optics, the photoelectric detector and the display technique is inevitable. A new method for sampled imaging system evaluation and optimization is developed in this paper. By extending the MTF to sampled imaging systems, inseparable detector parameters are taken into account and the relations among optics, detector and display are revealed. To measure the artifacts of sampling, the Baseband Response Squeeze model, which imposes a penalty for undersampling, is clarified. Taking the squeezed baseband response and its cutoff frequency as the criterion, the method is suitable not only for evaluating but also for optimizing sampled imaging systems oriented either to a single task or to multiple tasks. The method is applied to optimize a typical sampled imaging system; a sensitivity analysis of various detector parameters is performed and the resulting guidelines are given.
Optimizing sparse sampling for 2D electronic spectroscopy
NASA Astrophysics Data System (ADS)
Roeding, Sebastian; Klimovich, Nikita; Brixner, Tobias
2017-02-01
We present a new data acquisition concept using optimized non-uniform sampling and compressed sensing reconstruction in order to substantially decrease the acquisition times in action-based multidimensional electronic spectroscopy. For this we acquire a regularly sampled reference data set at a fixed population time and use a genetic algorithm to optimize a reduced non-uniform sampling pattern. We then apply the optimal sampling for data acquisition at all other population times. Furthermore, we show how to transform two-dimensional (2D) spectra into a joint 4D time-frequency von Neumann representation. This leads to increased sparsity compared to the Fourier domain and to improved reconstruction. We demonstrate this approach by recovering transient dynamics in the 2D spectrum of a cresyl violet sample using just 25% of the originally sampled data points.
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.
2009-01-01
Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference, temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme is assessed by comparing the computational results with other numerical schemes and experimental data.
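For context, the standard (unoptimized) second-order backward-difference scheme referenced above can be sketched on the linear test equation y' = λy; the implicit step is solved in closed form, and one backward Euler step bootstraps the two-level history. This is plain BDF2, not the BDF2OPT variant or the dual-time-stepping solver discussed in the abstract.

```python
import math

def bdf2(lam, y0, h, steps):
    """BDF2 for y' = lam * y; the implicit step is solved in closed form."""
    y_prev, y = y0, y0 / (1.0 - h * lam)   # bootstrap with one backward Euler step
    for _ in range(steps - 1):
        # (3/2) y_{n+1} - 2 y_n + (1/2) y_{n-1} = h * lam * y_{n+1}
        y_prev, y = y, (2.0 * y - 0.5 * y_prev) / (1.5 - h * lam)
    return y

lam, T = -1.0, 1.0
exact = math.exp(lam * T)
errs = {n: abs(bdf2(lam, 1.0, T / n, n) - exact) for n in (10, 20, 40)}
for n in (10, 20, 40):
    print(n, errs[n])  # the error shrinks roughly fourfold per halving of h
```

The optimized BDF2OPT coefficients trade a small amount of this truncation error for better stability properties, which is where the long-time-integration difficulties described above arise.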
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-01-01
This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours while the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. This provides an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
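The LHS step mentioned above guarantees exactly one sample in each of n equal-probability strata per input variable, which is why it covers the feasible region with far fewer runs than plain random sampling. A minimal stdlib sketch (illustrative, not the study's implementation):

```python
import random

def latin_hypercube(n, dim, seed=42):
    """n points in [0, 1)^dim with exactly one point per stratum in each dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dim):
        col = [(i + rng.random()) / n for i in range(n)]  # one draw per stratum
        rng.shuffle(col)                                  # decouple the dimensions
        cols.append(col)
    return list(zip(*cols))

samples = latin_hypercube(10, 2)
for d in range(2):
    # every stratum [i/10, (i+1)/10) is hit exactly once in each dimension
    print(sorted(int(x[d] * 10) for x in samples))
```

In the surrogate workflow, each such point would be scaled to the feasible ranges of the decision variables, run through the groundwater simulator, and the (input, drawdown) pairs fed to the kriging fit.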
Optimal sample size allocation for Welch's test in one-way heteroscedastic ANOVA.
Shieh, Gwowen; Jan, Show-Li
2015-06-01
The determination of an adequate sample size is a vital aspect in the planning stage of research studies. A prudent strategy should incorporate all of the critical factors and cost considerations into sample size calculations. This study concerns the allocation schemes of group sizes for Welch's test in a one-way heteroscedastic ANOVA. Optimal allocation approaches are presented for minimizing the total cost while maintaining adequate power and for maximizing power performance for a fixed cost. The commonly recommended ratio of sample sizes is proportional to the ratio of the population standard deviations or the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Detailed numerical investigations have shown that these usual allocation methods generally do not give the optimal solution. The suggested procedures are illustrated using an example of the cost-efficiency evaluation in multidisciplinary pain centers.
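The two textbook allocation rules that the paper evaluates (and finds generally suboptimal) are easy to state in code: n1/n2 equals the standard-deviation ratio, or that ratio times sqrt(c2/c1) when unit sampling costs differ. The numbers below are hypothetical.

```python
import math

def sd_ratio_rule(sigma1, sigma2):
    """Commonly recommended n1/n2: proportional to the standard-deviation ratio."""
    return sigma1 / sigma2

def cost_ratio_rule(sigma1, sigma2, c1, c2):
    """Cost-aware rule: n1/n2 = (sigma1/sigma2) * sqrt(c2/c1) for unit costs c1, c2."""
    return (sigma1 / sigma2) * math.sqrt(c2 / c1)

def sizes_for_budget(ratio, c1, c2, budget):
    """Split a total budget c1*n1 + c2*n2 = budget subject to n1 = ratio * n2."""
    n2 = budget / (c1 * ratio + c2)
    return ratio * n2, n2

# Hypothetical: group 1 is twice as variable; group 2 costs 4x as much to sample
r = cost_ratio_rule(2.0, 1.0, 1.0, 4.0)
print(r)                                    # 4.0
print(sizes_for_budget(r, 1.0, 4.0, 80.0))  # (40.0, 10.0)
```

The paper's point is that these closed-form ratios are only heuristics: the truly optimal allocation for Welch's test must be found numerically against the power and cost criteria.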
Sampling optimization for printer characterization by direct search.
Bianco, Simone; Schettini, Raimondo
2012-12-01
Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.
In-depth analysis of sampling optimization methods
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Kim, Myoungsoo; Habets, Boris; Buhl, Stefan; Guhlemann, Steffen; Rößiger, Martin; Bellmann, Enrico; Kim, Seop
2016-03-01
High-order overlay and alignment models require good coverage of overlay or alignment marks on the wafer, but dense sampling plans are not feasible for throughput reasons. Sampling plan optimization has therefore become a key issue. We analyze the different methods for sampling optimization and discuss the different knobs for fine-tuning the methods to the constraints of high-volume manufacturing. We propose a method to judge sampling plan quality with respect to overlay performance, run-to-run stability, and dispositioning criteria, using a number of use cases from the most advanced lithography processes.
Cao, Tong; Chen, Liao; Yu, Yu; Zhang, Xinliang
2014-12-29
We propose and experimentally demonstrate a novel scheme that simultaneously realizes wavelength-preserving and phase-preserving amplitude noise compression of a 40 Gb/s distorted non-return-to-zero differential-phase-shift keying (NRZ-DPSK) signal. In the scheme, two semiconductor optical amplifiers (SOAs) are exploited: the first (SOA1) generates the inverted signal based on the SOA's transient cross-phase modulation (T-XPM) effect, and the second (SOA2) regenerates the distorted NRZ-DPSK signal using the SOA's cross-gain compression (XGC) effect. In the experiment, bit error ratio (BER) measurements show that the power penalties of constructive and destructive demodulation at a BER of 10^{-9} are -1.75 and -1.01 dB, respectively. As the nonlinear effects and the requirements of the two SOAs are completely different, their quantum-well (QW) structures have been optimized separately. A theoretical model combining QW band structure calculation with the SOA's dynamic model is used to optimize the SOAs, in which both the interband effect (carrier density variation) and the intraband effect (carrier temperature variation) are taken into account. For SOA1, we choose a tensile-strained QW structure and a large optical confinement factor to enhance the T-XPM effect. For SOA2, a compressively strained QW structure is selected to reduce the impact of excess phase noise induced by amplitude fluctuations. Exploiting the optimized QW SOAs, better amplitude regeneration performance is demonstrated through numerical simulation. The proposed scheme is intrinsically stable compared with interferometer structures and can be integrated on a chip, making it a practical candidate for all-optical amplitude regeneration of high-speed NRZ-DPSK signals.
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, the sensor data are transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than a given threshold. Based on this send-on-delta scheme, one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves resource utilization at the cost of a graceful degradation in fault estimation performance. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
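The send-on-delta rule itself is a few lines of logic: transmit only when the reading has moved more than the threshold away from the last transmitted value. A minimal sketch, with a made-up drifting signal for illustration:

```python
def send_on_delta(readings, threshold):
    """Event-triggered transmission: a reading is sent only when it differs
    from the last *transmitted* value (not the last reading) by more than
    the threshold. Returns the (time index, value) pairs that were sent."""
    sent = []
    last_sent = None
    for t, value in enumerate(readings):
        if last_sent is None or abs(value - last_sent) > threshold:
            sent.append((t, value))
            last_sent = value
    return sent

# slowly drifting signal: only significant changes are transmitted
events = send_on_delta([0.0, 0.1, 0.3, 0.35, 1.0, 1.05, 0.2], threshold=0.25)
# → [(0, 0.0), (2, 0.3), (4, 1.0), (6, 0.2)]
```

The filter on the estimator side then has to cope with the irregularly spaced updates this produces, which is what the modified Kalman-filter form in the paper addresses.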
NASA Astrophysics Data System (ADS)
Prasertwattana, Kanit; Shimizu, Yoshiaki; Chiadamrong, Navee
This paper studied the material ordering and inventory control of supply chain systems. The effect of the controlling policies is analyzed under three different configurations of the supply chain system, and the formulated problem is solved using an evolutionary optimization method known as Differential Evolution (DE). The numerical results show that the coordinating policy with the incentive scheme outperforms the other policies and can improve the performance of the overall system, as well as of all members, under the concept of supply chain management.
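Differential Evolution is compact enough to sketch in full. The version below is the classic DE/rand/1/bin variant (mutation with a scaled difference vector, binomial crossover, greedy selection); the quadratic test objective is a stand-in, not the paper's ordering-cost model.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin minimizer over box-constrained variables."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the feasible box
                else:
                    v = pop[i][j]
                trial.append(v)
            f_trial = f(trial)
            if f_trial <= cost[i]:  # greedy replacement
                pop[i], cost[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# toy stand-in objective: quadratic deviation from an ideal order quantity
best_x, best_f = differential_evolution(
    lambda x: sum((xi - 3.0) ** 2 for xi in x), [(-10.0, 10.0)] * 3)
```

In the paper's setting, `f` would be the simulated total cost of the supply chain under a candidate ordering policy, evaluated per configuration.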
Optimal control, investment and utilization schemes for energy storage under uncertainty
NASA Astrophysics Data System (ADS)
Mirhosseini, Niloufar Sadat
Energy storage has the potential to offer new means for added flexibility on the electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources and energy bill savings for the end users. However, uncertainty about system states and volatility in system dynamics can complicate the question of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resources output, storage technology cost and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model Predictive Control and discretized dynamic programming, along with a new decomposition algorithm are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to optimally interact with energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here are the basis for a framework which extends from long term investments in storage capacity to short term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment on storage capacity over time to maximize savings during normal and emergency
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2017-03-01
This paper presents an approximate optimal control of nonlinear continuous-time systems in affine form by using adaptive dynamic programming (ADP) with event-sampled state and input vectors. The knowledge of the system dynamics is relaxed by using a neural network (NN) identifier with event-sampled inputs. The value function, which becomes an approximate solution to the Hamilton-Jacobi-Bellman equation, is generated by using an event-sampled NN approximator. Subsequently, the NN identifier and the approximated value function are utilized to obtain the optimal control policy. Both the identifier and value function approximator weights are tuned only at the event-sampled instants, leading to an aperiodic update scheme. A novel adaptive event sampling condition is designed to determine the sampling instants, such that the approximation accuracy and the stability are maintained. A positive lower bound on the minimum inter-sample time is guaranteed to avoid an accumulation point, and the dependence of the inter-sample time on the NN weight estimates is analyzed. Local ultimate boundedness of the resulting nonlinear impulsive dynamical closed-loop system is shown. Finally, a numerical example is utilized to evaluate the performance of the near-optimal design. The net result is the design of an event-sampled ADP-based controller for nonlinear continuous-time systems.
NASA Astrophysics Data System (ADS)
Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry
2017-04-01
Root traits are increasingly important in the breeding of new crop varieties; e.g., longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information, such as root length density (coring), root counts per area (trenches), or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, based on sensitivity analysis and inverse parameter estimation. This methodology was developed in a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This information provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were virtually applied and the aggregated information was computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interaction with the other parameters show
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of
Optimal feedback scheme and universal time scaling for Hamiltonian parameter estimation.
Yuan, Haidong; Fung, Chi-Hang Fred
2015-09-11
Time is a valuable resource and it is expected that a longer time period should lead to better precision in Hamiltonian parameter estimation. However, recent studies in quantum metrology have shown that in certain cases more time may even lead to worse estimations, which puts this intuition into question. In this Letter we show that by including feedback controls this intuition can be restored. By deriving asymptotically optimal feedback controls we quantify the maximal improvement feedback controls can provide in Hamiltonian parameter estimation and show a universal time scaling for the precision limit under the optimal feedback scheme. Our study reveals an intriguing connection between noncommutativity in the dynamics and the gain of feedback controls in Hamiltonian parameter estimation.
Optimization technology of 9/7 wavelet lifting scheme on DSP
NASA Astrophysics Data System (ADS)
Chen, Zhengzhang; Yang, Xiaoyuan; Yang, Rui
2007-12-01
Nowadays, the wavelet transform has become one of the most effective transforms in image processing, especially the biorthogonal 9/7 wavelet filters proposed by Daubechies, which perform well in image compression. This paper studies implementation and optimization technologies for the 9/7 wavelet lifting scheme on a DSP platform, including carrying out the wavelet lifting steps in fixed point instead of time-consuming floating-point operations, adopting pipelining to improve the iteration procedure, reducing the number of multiplications by simplifying the normalization step of the two-dimensional wavelet transform, and improving the storage format and ordering of wavelet coefficients to reduce memory consumption. Experimental results show that these implementation and optimization technologies can improve the efficiency of the wavelet lifting algorithm by more than 30 times, establishing a technical foundation for developing a real-time remote sensing image compression system.
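The 9/7 lifting scheme referred to above is a fixed four-step ladder (predict, update, predict, update) followed by a scaling step. A floating-point reference sketch follows; the paper's DSP version is fixed-point, and the clamped boundary extension used here is a simplification of the usual symmetric extension, chosen so that the forward and inverse transforms round-trip exactly.

```python
# CDF 9/7 lifting coefficients (Daubechies/Sweldens factorization)
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
ZETA = 1.149604398

def _at(seq, i):
    """Clamped boundary extension: indices past the ends repeat the edge."""
    return seq[min(max(i, 0), len(seq) - 1)]

def fwd97(x):
    """One level of the 9/7 forward lifting transform on an even-length
    signal. Returns (approximation coefficients, detail coefficients)."""
    s, d = list(x[0::2]), list(x[1::2])   # split into even / odd samples
    n = len(d)
    d = [d[i] + ALPHA * (s[i] + _at(s, i + 1)) for i in range(n)]  # predict 1
    s = [s[i] + BETA  * (_at(d, i - 1) + d[i]) for i in range(n)]  # update 1
    d = [d[i] + GAMMA * (s[i] + _at(s, i + 1)) for i in range(n)]  # predict 2
    s = [s[i] + DELTA * (_at(d, i - 1) + d[i]) for i in range(n)]  # update 2
    return [v * ZETA for v in s], [v / ZETA for v in d]            # scale

def inv97(s, d):
    """Inverse transform: undo the lifting steps in reverse order."""
    s = [v / ZETA for v in s]
    d = [v * ZETA for v in d]
    n = len(d)
    s = [s[i] - DELTA * (_at(d, i - 1) + d[i]) for i in range(n)]
    d = [d[i] - GAMMA * (s[i] + _at(s, i + 1)) for i in range(n)]
    s = [s[i] - BETA  * (_at(d, i - 1) + d[i]) for i in range(n)]
    d = [d[i] - ALPHA * (s[i] + _at(s, i + 1)) for i in range(n)]
    x = [0.0] * (2 * n)
    x[0::2], x[1::2] = s, d   # interleave back into one signal
    return x
```

Because each step adds (or subtracts) a quantity computed only from the other channel, every step is exactly invertible regardless of the boundary rule, which is also what makes the fixed-point DSP variant in the paper feasible.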
Optimal Sampling Strategies for Detecting Zoonotic Disease Epidemics
Ferguson, Jake M.; Langebrake, Jessica B.; Cannataro, Vincent L.; Garcia, Andres J.; Hamman, Elizabeth A.; Martcheva, Maia; Osenberg, Craig W.
2014-01-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests. PMID:24968100
Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.
Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong
2014-01-01
Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
Optimal sample sizes for Welch's test under various allocation and cost considerations.
Jan, Show-Li; Shieh, Gwowen
2011-12-01
The issue of the sample size necessary to ensure adequate statistical power has been the focus of considerable attention in scientific research. Conventional presentations of sample size determination do not consider budgetary and participant allocation scheme constraints, although there is some discussion in the literature. The introduction of additional allocation and cost concerns complicates study design, although the resulting procedure permits a practical treatment of sample size planning. This article presents exact techniques for optimizing sample size determination in the context of Welch's (Biometrika, 29, 350-362, 1938) test of the difference between two means under various design and cost considerations. The allocation schemes include cases in which (1) the ratio of group sizes is given and (2) one sample size is specified. The cost implications suggest optimally assigning subjects (1) to attain maximum power for a fixed cost and (2) to meet a designated power level for the least cost. The proposed methods provide useful alternatives to the conventional procedures and can be readily implemented with the developed R and SAS programs that are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
NASA Astrophysics Data System (ADS)
Subramanian, Nithya
the laminate stiffness matrix implements a square fiber model with a fiber volume fraction sample. The calculations to establish the expected values of constraints and fitness values use the Classical Laminate Theory. The non-deterministic constraints enforced include the probability of satisfying the Tsai-Hill failure criterion and the maximum strain limit. The results from a deterministic optimization, optimization under uncertainty using Monte Carlo sampling and Population-Based Sampling are studied. Also, the work investigates the effectiveness of running the fitness analyses in parallel and the sampling scheme in parallel. Overall, the work conducted for this thesis demonstrated the efficacy of the GA with Population-Based Sampling for the focus problem and established improvements over previous implementations of the GA with PBS.
The dependence of optimal fractionation schemes on the spatial dose distribution
NASA Astrophysics Data System (ADS)
Unkelbach, Jan; Craft, David; Salari, Ehsan; Ramakrishnan, Jagdish; Bortfeld, Thomas
2013-01-01
We consider the fractionation problem in radiation therapy. Tumor sites in which the dose-limiting organ at risk (OAR) receives a substantially lower dose than the tumor bear potential for hypofractionation, even if the α/β-ratio of the tumor is larger than the α/β-ratio of the OAR. In this work, we analyze the interdependence of the optimal fractionation scheme and the spatial dose distribution in the OAR. In particular, we derive a criterion under which a hypofractionation regimen is indicated for both a parallel and a serial OAR. The approach is based on the concept of the biologically effective dose (BED). For a hypothetical homogeneously irradiated OAR, it has been shown that hypofractionation is suggested by the BED model if the α/β-ratio of the OAR is larger than the α/β-ratio of the tumor times the sparing factor, i.e. the ratio of the dose received by the OAR to that received by the tumor. In this work, we generalize this result to inhomogeneous dose distributions in the OAR. For a parallel OAR, we determine the optimal fractionation scheme by minimizing the integral BED in the OAR for a fixed BED in the tumor. For a serial structure, we minimize the maximum BED in the OAR. This leads to analytical expressions for an effective sparing factor for the OAR, which provides a criterion for hypofractionation. The implications of the model are discussed for lung tumor treatments. It is shown that the model supports hypofractionation for small tumors treated with rotation therapy, i.e. highly conformal techniques where a large volume of lung tissue is exposed to low but nonzero dose. For larger tumors, the model suggests hyperfractionation. We further discuss several non-intuitive interdependencies between optimal fractionation and the spatial dose distribution. For instance, lowering the dose in the lung via proton therapy does not necessarily provide a biological rationale for hypofractionation.
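The BED model and the homogeneous-OAR criterion stated above are one-liners. The sketch below encodes them directly; the numeric α/β-ratios and sparing factors are illustrative assumptions (the sparing factor is taken as dose_OAR / dose_tumor, consistent with the criterion as stated), and the paper's contribution, the effective sparing factor for inhomogeneous dose, is not reproduced here.

```python
def bed(total_dose, n_fractions, alpha_beta):
    """Biologically effective dose for n equal fractions of size d:
    BED = n * d * (1 + d / (alpha/beta))."""
    d = total_dose / n_fractions
    return total_dose * (1.0 + d / alpha_beta)

def hypofractionation_indicated(ab_tumor, ab_oar, sparing):
    """Homogeneous-OAR criterion: hypofractionation is suggested when the
    OAR's alpha/beta-ratio exceeds the tumor's alpha/beta-ratio times the
    sparing factor (dose_OAR / dose_tumor)."""
    return ab_oar > ab_tumor * sparing

# conventional 60 Gy in 30 fractions to a tumor with alpha/beta = 10 Gy
tumor_bed = bed(60.0, 30, 10.0)   # 60 * (1 + 2/10) = 72 Gy_10
```

With ab_tumor = 10 Gy and ab_oar = 3 Gy, a well-spared OAR (sparing 0.2) satisfies 3 > 10 * 0.2, so hypofractionation is indicated, while a poorly spared one (sparing 0.5) does not, matching the intuition in the abstract.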
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
Optimized angle selection for radial sampled NMR experiments
NASA Astrophysics Data System (ADS)
Gledhill, John M.; Joshua Wand, A.
2008-12-01
Sparse sampling offers tremendous potential for overcoming the time limitations imposed by traditional Cartesian sampling of indirectly detected dimensions of multidimensional NMR data. Unfortunately, several otherwise appealing implementations are accompanied by spectral artifacts that have the potential to contaminate the spectrum with false peak intensity. In radial sampling of linked time evolution periods, the artifacts are easily identified and removed from the spectrum if a sufficient set of radial sampling angles is employed. Robust implementation of the radial sampling approach therefore requires optimization of the set of radial sampling angles collected. Here we describe several methods for such optimization. The approaches described take advantage of various aspects of the general simultaneous multidimensional Fourier transform in the analysis of multidimensional NMR data. Radially sampled data are primarily contaminated by ridges extending from authentic peaks. Numerical methods are described that definitively identify artifactual intensity and the optimal set of sampling angles necessary to eliminate it under a variety of scenarios. The algorithms are tested with both simulated and experimentally obtained triple resonance data.
Optimal Food Safety Sampling Under a Budget Constraint.
Powell, Mark R
2014-01-01
Much of the literature regarding food safety sampling plans implicitly assumes that all lots entering commerce are tested. In practice, however, only a fraction of lots may be tested due to a budget constraint. In such a case, there is a tradeoff between the number of lots tested and the number of samples per lot. To illustrate this tradeoff, a simple model is presented in which the optimal number of samples per lot depends on the prevalence of sample units that do not conform to microbiological specifications and the relative costs of sampling a lot and of drawing and testing a sample unit from a lot. The assumed objective is to maximize the number of nonconforming lots that are rejected subject to a food safety sampling budget constraint. If the ratio of the cost per lot to the cost per sample unit is substantial, the optimal number of samples per lot increases as prevalence decreases. However, if the ratio of the cost per lot to the cost per sample unit is sufficiently small, the optimal number of samples per lot reduces to one (i.e., simple random sampling), regardless of prevalence. In practice, the cost per sample unit may be large relative to the cost per lot due to the expense of laboratory testing and other factors. Designing effective compliance assurance measures depends on economic, legal, and other factors in addition to microbiology and statistics.
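The budget tradeoff described above is easy to make concrete: with n samples per lot, a nonconforming lot is detected with probability 1 - (1 - p)^n, but each extra sample shrinks the number of lots the budget can cover. The sketch below is a simplified reading of that model, with illustrative costs and budget rather than the paper's parameterization.

```python
def expected_rejections(n, budget, prevalence, cost_lot, cost_sample):
    """Expected number of nonconforming lots rejected when each tested lot
    gets n sample units and total testing cost is capped by the budget.
    A lot is rejected if at least one of its n samples is nonconforming."""
    lots_tested = budget / (cost_lot + n * cost_sample)
    p_detect = 1.0 - (1.0 - prevalence) ** n
    return lots_tested * p_detect

def optimal_samples_per_lot(budget, prevalence, cost_lot, cost_sample,
                            n_max=50):
    """Brute-force search over n for the budget-constrained optimum."""
    return max(range(1, n_max + 1),
               key=lambda n: expected_rejections(n, budget, prevalence,
                                                 cost_lot, cost_sample))
```

The two regimes in the abstract fall out directly: when the per-lot cost is negligible relative to the per-sample cost, the optimum is n = 1 (simple random sampling across many lots); when the per-lot cost dominates, it pays to draw many samples from each of fewer lots.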
An adapted fan volume sampling scheme for 3-D algebraic reconstruction in linear tomosynthesis
NASA Astrophysics Data System (ADS)
Bleuet, P.; Guillemaud, R.; Desbat, L.; Magnin, I.
2002-10-01
We study the reconstruction process when the X-ray source translates along a finite straight line, whether or not the detector moves. This process, called linear tomosynthesis, induces a limited angle of view, which makes the vertical spatial resolution poor. To improve this resolution, we use iterative algebraic reconstruction methods, which are commonly used for tomographic reconstruction from a reduced number of projections. With noisy projections, such algorithms produce poor-quality reconstructions. To prevent this, we use a first piece of prior knowledge about the object, a piecewise smoothness constraint. To reduce the computation time associated with both the reconstruction and regularization processes, we introduce a second, geometrical piece of prior knowledge based on the linear trajectory of the X-ray source. This linear source trajectory allows us to reconstruct a series of two-dimensional (2-D) planes in a fan organization of the volume. Using this adapted fan volume sampling scheme, we reduce the computation time by transforming the initial three-dimensional (3-D) problem into a series of 2-D problems. The algorithm becomes directly parallelizable, and focusing on a particular region of interest becomes easier. The regularization process can easily be implemented with this scheme. We test the algorithm using experimental projections. The quality of the reconstructed object is preserved, while the computation time is considerably reduced, even without any parallelization of the algorithm.
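The iterative algebraic reconstruction family referred to above can be illustrated with a Kaczmarz-style sweep, a standard ART instance: each ray gives a linear equation a_i . x = b_i, and the estimate is repeatedly projected onto each equation's hyperplane. The tiny 2x2 "volume" and ray geometry below are illustrative only, not the paper's fan-organized sampling or its regularization.

```python
def art_reconstruct(A, b, iterations=200, relax=1.0):
    """Kaczmarz-style ART: cycle through the ray equations a_i . x = b_i,
    correcting x along each row by its (relaxed) residual."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iterations):
        for a_i, b_i in zip(A, b):
            norm = sum(a * a for a in a_i)
            if norm == 0.0:
                continue
            residual = (b_i - sum(a * v for a, v in zip(a_i, x))) / norm
            x = [v + relax * residual * a for v, a in zip(x, a_i)]
    return x

# tiny 2x2 "volume" probed by two horizontal and two vertical ray sums
A = [[1, 1, 0, 0],   # row sums
     [0, 0, 1, 1],
     [1, 0, 1, 0],   # column sums
     [0, 1, 0, 1]]
truth = [1.0, 2.0, 3.0, 4.0]
b = [sum(a * t for a, t in zip(row, truth)) for row in A]
x = art_reconstruct(A, b)
```

With noisy projections these sweeps fit the noise, which is exactly why the paper adds a piecewise-smoothness prior on top of the basic iteration.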
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for successful construction of urban rivers, so studies on optimizing schemes for the natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were taken as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for the natural ecology planning of urban rivers could be made by the ANP method. This method can provide references for the sustainable development and construction of urban rivers, and is also suitable for optimizing schemes for urban green space planning and design.
Urine sampling and collection system optimization and testing
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Geating, J. A.; Koesterer, M. G.
1975-01-01
A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.
Optimized method for dissolved hydrogen sampling in groundwater.
Alter, Marcus D; Steiof, Martin
2005-06-01
Dissolved hydrogen concentrations are used to characterize redox conditions of contaminated aquifers. The currently accepted and recommended bubble strip method for hydrogen sampling (Wiedemeier et al., 1998) requires relatively long sampling times and immediate field analysis. In this study we present methods for optimized sampling and for sample storage. The bubble strip sampling method was examined for various flow rates, bubble sizes (headspace volume in the sampling bulb) and two different H2 concentrations, and the results were compared to a theoretical equilibration model. Turbulent flow in the sampling bulb was optimized for gas transfer by reducing the inlet diameter. Extraction with a 5 mL headspace volume and flow rates higher than 100 mL/min resulted in 95-100% equilibrium within 10-15 min. To investigate storage, gas samples from the sampling bulb were kept in headspace vials for varying periods. Hydrogen samples (4.5 ppmv, corresponding to 3.5 nM in the liquid phase) could be stored for up to 48 h and 72 h with recovery rates of 100.1+/-2.6% and 94.6+/-3.2%, respectively. These results demonstrate that samples can be stored for 2-3 days before laboratory analysis. The optimized method was tested at a field site contaminated with chlorinated solvents. Duplicate gas samples were stored in headspace vials and analyzed after 24 h. Concentrations were measured in the range of 2.5-8.0 nM, corresponding to known concentrations in reduced aquifers.
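The equilibration model referenced above is, in its simplest form, a first-order gas-transfer law. The sketch below assumes that form, with a rate constant back-calculated from the reported 95% equilibrium at 10 min; this is an illustrative value, not a parameter fitted in the study:

```python
import math

# First-order gas-transfer form of the equilibration model: the headspace
# reaches a fraction f(t) = 1 - exp(-k t) of equilibrium after time t.
# k below is back-calculated from the reported 95% equilibrium at 10 min
# (an illustrative assumption, not a fitted parameter from the study).

def equilibration_fraction(t_min, k_per_min):
    return 1.0 - math.exp(-k_per_min * t_min)

def time_to_fraction(f, k_per_min):
    return -math.log(1.0 - f) / k_per_min

k = math.log(20.0) / 10.0                 # rate giving 95% at t = 10 min
print(equilibration_fraction(10.0, k))    # 0.95 by construction
print(time_to_fraction(0.99, k))          # about 15.4 min to reach 99%
```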
2014-11-01
content (i.e., low-pass response): 1) compare damping character of artificial dissipation and filtering; 2) formulate filter as an equivalent ... artificial dissipation scheme - consequence of filter damping for stiff problems; 3) insight on achieving "ideal" low-pass response for general ... require very high order for low-pass response - overly dissipative for small time-steps. Implicit filters can be efficiently designed for low-pass
Osai, L.N.
1983-03-01
The Kolo Creek field is a 5 × 10 km faulted rollover structure with the E2.0 reservoir as the main oil-bearing sand. The reservoir is a 200-ft thick, complex, deltaic sandstone package with a 1.9-tcf gas cap underlain by a 200-ft thick oil rim containing ca. 440 × 10⁶ bbl STOIIP. The sand is penetrated by 34 wells, 25 of which are completed as producers. To date a 16% drop in pressure has occurred. A reservoir engineering study, based on the early pressure decline, led to the implementation of a water injection scheme. Immediately prior to the initial phase of the scheme, cores were taken in 2 wells. These cores, sidewall samples from other wells, and the detailed correlation made possible by a denser well pattern have resulted in a realistic geologic model. This model will influence the optimal location of future injection and production wells based on the structural and sedimentologic characteristics of the reservoir.
Menezes, Angela; Woods, Kate; Chanthongthip, Anisone; Dittrich, Sabine; Opoku-Boateng, Agatha; Kimuli, Maimuna; Chalker, Victoria
2016-01-01
Background: Rapid typing of Leptospira is currently impaired by the requirement for time-consuming culture of leptospires. The objective of this study was to develop an assay that provides multilocus sequence typing (MLST) data directly from patient specimens while minimising costs for subsequent sequencing. Methodology and Findings: An existing PCR-based MLST scheme was modified by designing nested primers including anchors for facilitated subsequent sequencing. The assay was applied to various specimen types from patients diagnosed with leptospirosis between 2014 and 2015 in the United Kingdom (UK) and the Lao People's Democratic Republic (Lao PDR). Of 44 clinical samples (23 serum, 6 whole blood, 3 buffy coat, 12 urine) PCR positive for pathogenic Leptospira spp., at least one allele was amplified in 22 samples (50%) and used for phylogenetic inference. Full allelic profiles were obtained from ten specimens, representing all sample types (23%). No nonspecific amplicons were observed in any of the samples. Of twelve PCR-positive urine specimens, three gave full allelic profiles (25%) and two a partial profile. Phylogenetic analysis allowed for species assignment. The predominant species detected was L. interrogans (10/14 and 7/8 from the UK and Lao PDR, respectively). All other species were detected in samples from only one country (Lao PDR: L. borgpetersenii [1/8]; UK: L. kirschneri [1/14], L. santarosai [1/14], L. weilii [2/14]). Conclusion: Typing information for pathogenic Leptospira spp. was obtained directly from a variety of clinical samples using a modified MLST assay. This assay negates the need for time-consuming culture of Leptospira prior to typing and will be of use both in surveillance, as single alleles enable species determination, and in outbreaks, for the rapid identification of clusters. PMID:27654037
Michaelis-Menten reaction scheme as a unified approach towards the optimal restart problem
NASA Astrophysics Data System (ADS)
Rotbart, Tal; Reuveni, Shlomi; Urbakh, Michael
2015-12-01
We study the effect of restart, and retry, on the mean completion time of a generic process. The need to do so arises in various branches of the sciences and we show that it can naturally be addressed by taking advantage of the classical reaction scheme of Michaelis and Menten. Stopping a process in its midst—only to start it all over again—may prolong, leave unchanged, or even shorten the time taken for its completion. Here we are interested in the optimal restart problem, i.e., in finding a restart rate which brings the mean completion time of a process to a minimum. We derive the governing equation for this problem and show that it is exactly solvable in cases of particular interest. We then continue to discover regimes at which solutions to the problem take on universal, details independent forms which further give rise to optimal scaling laws. The formalism we develop, and the results obtained, can be utilized when optimizing stochastic search processes and randomized computer algorithms. An immediate connection with kinetic proofreading is also noted and discussed.
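The restart effect is easy to reproduce by Monte Carlo. The sketch below assumes a heavy-tailed (log-normal) completion time and a fixed restart deadline tau; both are illustrative choices, not the paper's analytical setting:

```python
import math, random

# Monte Carlo illustration of restart shortening the mean completion time.
# Completion times are drawn from a heavy-tailed log-normal (sigma = 2);
# with restart, the process is abandoned and redrawn after a deadline tau.
# The distribution and tau are illustrative assumptions.

def draw_time(rng):
    return math.exp(2.0 * rng.gauss(0.0, 1.0))      # log-normal(0, 2)

def completion_time(rng, tau=None):
    total = 0.0
    while True:
        t = draw_time(rng)
        if tau is None or t < tau:
            return total + t                         # finished before deadline
        total += tau                                 # deadline hit: restart

rng = random.Random(42)
n = 100_000
no_restart = sum(completion_time(rng) for _ in range(n)) / n
with_restart = sum(completion_time(rng, tau=1.0) for _ in range(n)) / n
print(no_restart, with_restart)    # restart mean is several times smaller
```

For a memoryless (exponential) completion time the mean would be unchanged by restart; the gain here comes from the heavy tail, in line with the universal criteria the paper derives.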
NASA Astrophysics Data System (ADS)
Liu, Feng; Beck, Barbara L.; Fitzsimmons, Jeffrey R.; Blackband, Stephen J.; Crozier, Stuart
2005-11-01
In this paper, numerical simulations are used in an attempt to find optimal source profiles for high frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches have been proposed for the determination of the drives for individual elements in the RF resonator. The first method is an iterative optimization technique with a kernel for the evaluation of RF fields inside an imaging plane of a human head model using pre-characterized sensitivity profiles of the individual rungs of a resonator; the second method is a regularization-based technique. In the second approach, a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the first optimization method is more flexible as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for the determination of multi-element RF drives in a variety of applications.
Alessandri, Angelo; Gaggero, Mauro; Zoppoli, Riccardo
2012-06-01
Optimal control for systems described by partial differential equations is investigated by proposing a methodology to design feedback controllers in approximate form. The approximation stems from constraining the control law to take on a fixed structure, where a finite number of free parameters can be suitably chosen. The original infinite-dimensional optimization problem is then reduced to a mathematical programming one of finite dimension that consists in optimizing the parameters. The solution of such a problem is performed by using sequential quadratic programming. Linear combinations of fixed and parameterized basis functions are used as the structure for the control law, thus giving rise to two different finite-dimensional approximation schemes. The proposed paradigm is general since it allows one to treat problems with distributed and boundary controls within the same approximation framework. It can be applied to systems described by either linear or nonlinear elliptic, parabolic, and hyperbolic equations in arbitrary multidimensional domains. Simulation results obtained in two case studies show the potentials of the proposed approach as compared with dynamic programming.
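A minimal illustration of the fixed-structure idea, on a toy scalar system rather than a PDE, with a crude random search standing in for sequential quadratic programming; the system, cost, and horizon are all assumptions for the sketch:

```python
import random

# Fixed-structure control sketch on a toy scalar system (not a PDE):
# constrain the law to u = t1*x + t2*x**3, so the infinite-dimensional
# control problem reduces to choosing the parameters (t1, t2). Random
# search stands in for sequential quadratic programming; the system,
# cost and horizon are illustrative assumptions.

def cost(theta, x0=1.0, dt=0.05, steps=100, a=0.5):
    t1, t2 = theta
    x, total = x0, 0.0
    for _ in range(steps):
        u = t1 * x + t2 * x ** 3
        total += dt * (x * x + u * u)     # running quadratic cost
        x += dt * (a * x + u)             # open loop is unstable (a > 0)
        if abs(x) > 1e6:                  # diverged: hopeless parameters
            return float("inf")
    return total

rng = random.Random(0)
best, best_cost = (0.0, 0.0), cost((0.0, 0.0))
for _ in range(2000):
    cand = (rng.uniform(-3.0, 0.0), rng.uniform(-3.0, 3.0))
    c = cost(cand)
    if c < best_cost:
        best, best_cost = cand, c
print(best, best_cost, cost((0.0, 0.0)))   # feedback beats no control
```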
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture; it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect, and supports all important Intel development tools. Thus, the development environment is a familiar one to a vast number of CPU developers, although getting maximum performance out of the MIC requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
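spsann itself is an R package; the following Python toy only mirrors the loop described above: perturb one point at a time, evaluate the MSSD criterion, accept worse moves with an exponentially decaying probability, and shrink the maximum perturbation distance linearly. Grid size, schedule constants, and seed are illustrative assumptions:

```python
import math, random

# Toy spatial simulated annealing for the MSSD criterion (a sketch of the
# idea, not the spsann implementation). MSSD = mean squared distance from
# each prediction node to its nearest sample point.

def mssd(samples, grid):
    return sum(min((sx - gx) ** 2 + (sy - gy) ** 2 for sx, sy in samples)
               for gx, gy in grid) / len(grid)

def anneal(n_samples=8, n_iter=2500, t0=0.05, seed=1):
    rng = random.Random(seed)
    grid = [(i / 14, j / 14) for i in range(15) for j in range(15)]
    pts = [(rng.random(), rng.random()) for _ in range(n_samples)]
    energy = mssd(pts, grid)
    for k in range(n_iter):
        dmax = 0.5 * (1 - k / n_iter)            # perturbation shrinks linearly
        temp = t0 * math.exp(-5.0 * k / n_iter)  # temperature decays exponentially
        i = rng.randrange(n_samples)
        cand = list(pts)
        cand[i] = (min(1.0, max(0.0, pts[i][0] + rng.uniform(-dmax, dmax))),
                   min(1.0, max(0.0, pts[i][1] + rng.uniform(-dmax, dmax))))
        e = mssd(cand, grid)
        if e < energy or rng.random() < math.exp((energy - e) / temp):
            pts, energy = cand, e                # accept (possibly worse) move
    return pts, energy

pts, energy = anneal()
print(energy)   # well below the ~0.05 MSSD of a typical random 8-point layout
```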
White, S L; Smith, W C; Fisher, L F; Gatlin, C L; Hanasono, G K; Jordan, W H
1998-01-01
Proton pump inhibitors and H2-receptor antagonists suppress gastric acid secretion and secondarily induce hypergastrinemia. Sustained hypergastrinemia has a trophic effect on stomach fundic mucosa, including enterochromaffin-like (ECL) cell hypertrophy and hyperplasia. Histomorphometric quantitation of the pharmacologic gastric effects was conducted on 10 male and 10 female rats treated orally with LY307640 sodium, a proton pump inhibitor, at daily doses of 25, 60, 130, or 300 mg/kg for 3 mo. Histologic sections of glandular stomach, stained for chromogranin A, were evaluated by image analysis to determine stomach mucosal thickness, mucosal and nonmucosal (submucosa and muscularis) area, gastric glandular area, ECL cell number/area and cross-sectional area. Total mucosal and nonmucosal tissue volumes per animal were derived from glandular stomach volumetric and area data. Daily oral doses of compound LY307640 sodium caused slight to moderate dose-related mucosal hypertrophy and ECL cell hypertrophy and hyperplasia in all treatment groups as compared with controls. All observed effects were prominent in both sexes but were generally greater in females. The morphometric sampling schemes were explored to optimize the data collection efficiency for future studies. A comparison between the sampling schemes used in this study and alternative schemes was conducted by estimating the probability of detecting a specific percentage of change between the male control and high-dose groups based on Tukey's trend test. The sampling scheme analysis indicated that mucosal thickness and mass had been oversampled. ECL cell density quantitation efficiency would have been increased by sampling the basal mucosa only for short-term studies. The ECL cell size sampling scheme was deemed appropriate for this type of study.
Stacey, Peter; Butler, Owen
2008-06-01
This paper emphasizes the need for occupational hygiene professionals to require evidence of the quality of welding fume data from analytical laboratories. The measurement of metals in welding fume using atomic spectrometric techniques is a complex analysis, often requiring specialist digestion procedures. The results from a trial programme testing the proficiency of laboratories in the Workplace Analysis Scheme for Proficiency (WASP) at measuring potentially harmful metals in several different types of welding fume showed that most laboratories underestimated the mass of analyte on the filters. The average recovery was 70-80% of the target value, and >20% of reported recoveries for some of the more difficult welding fume matrices were <50%. This level of under-reporting has significant implications for any health or hygiene studies of the exposure of welders to toxic metals for the types of fumes included in this study. Good laboratory performance in measuring spiked WASP filter samples containing soluble metal salts did not guarantee good performance when measuring the more complex welding fume trial filter samples. Consistent rather than erratic error predominated, suggesting that the main analytical factor contributing to the differences between the target values and results was the effectiveness of the sample preparation procedures used by participating laboratories. It is concluded that, with practice and regular participation in WASP, performance can improve over time.
Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP
NASA Astrophysics Data System (ADS)
Roshani, E.; Berg, A. A.; Lindsay, J.
2013-12-01
The Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both the pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6 week period for soil moisture and several other parameters, simultaneously with remotely sensed imaging of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique, to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: (A) sampling sites should be accessible to the crew on the ground, (B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and (C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is included to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture categories
Layered HEVC/H.265 video transmission scheme based on hierarchical QAM optimization
NASA Astrophysics Data System (ADS)
Feng, Weidong; Zhou, Cheng; Xiong, Chengyi; Chen, Shaobo; Wang, Junxi
2015-12-01
High Efficiency Video Coding (HEVC) is the state-of-the-art video compression standard; it fully supports scalability features and is able to generate layered video streams with unequal importance. Unfortunately, when the base layer (BL), which is of greater importance to the stream, is lost during transmission, the enhancement layer (EL) that depends on the base layer must be discarded by the receiver. Obviously, using the same transmission strategy for the BL and EL is unreasonable. This paper proposes an unequal error protection (UEP) system using hierarchical quadrature amplitude modulation (HQAM): the BL data, with high priority, are mapped into the most reliable HQAM mode, and the EL data, with low priority, are mapped into an HQAM mode with higher transmission efficiency. Simulations on a scalable HEVC codec show that the proposed optimized video transmission system is more attractive than the traditional equal error protection (EEP) scheme because it effectively balances transmission efficiency and reconstructed video quality.
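The hierarchical QAM idea can be sketched with a one-axis toy constellation: BL bits choose the sign (well separated), EL bits choose the inner or outer amplitude (closely spaced). The constellation spacing and noise level are illustrative assumptions, not the paper's parameters:

```python
import random

# One-axis toy of hierarchical QAM for unequal error protection: the BL bit
# selects the sign (separation 2*alpha), the EL bit selects the inner/outer
# amplitude (separation 1). alpha and the noise level are illustrative.

def modulate(bl, el, alpha=2.0):
    magnitude = alpha + (0.5 if el else -0.5)   # outer point for el = 1
    return magnitude if bl else -magnitude

def demodulate(y, alpha=2.0):
    bl = 1 if y > 0 else 0
    el = 1 if abs(y) > alpha else 0
    return bl, el

rng = random.Random(7)
n, sigma = 100_000, 1.0
bl_err = el_err = 0
for _ in range(n):
    bl, el = rng.randrange(2), rng.randrange(2)
    rbl, rel = demodulate(modulate(bl, el) + rng.gauss(0.0, sigma))
    bl_err += rbl != bl
    el_err += rel != el
print(bl_err / n, el_err / n)   # BL error rate is far below the EL rate
```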
NASA Astrophysics Data System (ADS)
Xue, Lulin; Pan, Zaitao
2008-05-01
Carbon exchange between the atmosphere and the terrestrial ecosystem is a key component affecting climate change. Because in situ measurements are not dense enough to resolve the spatial variation of CO2 exchange on various scales, the variation has mainly been simulated by numerical ecosystem models. These models contain large uncertainties in estimating CO2 exchange owing to their incorporation of a number of empirical parameters on different scales. This study applied a global optimization algorithm and an ensemble approach to a surface CO2 flux scheme to (1) identify sensitive photosynthetic and respirational parameters, and (2) optimize the sensitive parameters in the modeling sense and improve model skill. The photosynthetic and respirational parameters of corn (a C4 species) and soybean (a C3 species) in the NCAR land surface model (LSM) are calibrated against observations from the AmeriFlux site at Bondville, IL during the 1999 and 2000 growing seasons. Results showed that the most sensitive parameters are the maximum carboxylation rate at 25°C and its temperature sensitivity parameter (Vcmax25 and avc), the quantum efficiency at 25°C (Qe25), the temperature sensitivity parameter for maintenance respiration (arm), and the temperature sensitivity parameter for microbial respiration (amr). After adopting the calibrated parameter values, simulated seasonally averaged CO2 fluxes were improved for both the C4 and the C3 crops (relative bias reduced from 0.09 to -0.02 for the C4 case and from 0.28 to -0.01 for the C3 case). An updated scheme incorporating the new parameters and a revised flux-integration treatment is also proposed.
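A calibration of this kind reduces to minimizing model-observation misfit over parameter values. The toy below recovers a single temperature-sensitivity parameter of an assumed Q10-style respiration model from synthetic noisy fluxes; the model form, values, and grid search are illustrative, not the NCAR LSM parameterization or the authors' optimization algorithm:

```python
import random

# Toy parameter calibration: recover the temperature-sensitivity parameter
# q10 of an assumed Q10 respiration model R = r25 * q10**((T - 25) / 10)
# from synthetic noisy "observations" by grid search on squared error.

def respiration(T, r25, q10):
    return r25 * q10 ** ((T - 25.0) / 10.0)

rng = random.Random(11)
temps = [10.0 + 2.0 * i for i in range(11)]                 # 10..30 deg C
obs = [respiration(T, 2.0, 2.1) + rng.gauss(0.0, 0.05) for T in temps]

def sse(q10):
    return sum((respiration(T, 2.0, q10) - y) ** 2 for T, y in zip(temps, obs))

grid = [1.0 + 0.01 * k for k in range(201)]                 # q10 in [1, 3]
q10_hat = min(grid, key=sse)
print(q10_hat)   # close to the true value 2.1
```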
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases, during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300 m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses
163 years of refinement: the British Geological Survey sample registration scheme
NASA Astrophysics Data System (ADS)
Howe, M. P.
2011-12-01
The British Geological Survey manages the largest UK geoscience sample collection, including: - 15,000 onshore boreholes, including over 250 km of drillcore - Vibrocores, gravity cores and grab samples from over 32,000 UK marine sample stations and 640 boreholes - Over 3 million UK fossils, including a "type and stratigraphic" reference collection of 250,000 fossils, 30,000 of which are "type, figured or cited" - A comprehensive microfossil collection, including many borehole samples - 290 km of drillcore and 4.5 million cuttings samples from over 8000 UK continental shelf hydrocarbon wells - Over one million mineralogical and petrological samples, including 200,000 thin sections. The current registration scheme was introduced in 1848 and is similar to that used by Charles Darwin on the Beagle. Every Survey collector or geologist has been issued with a unique prefix code of one or more letters, and these were handwritten on preprinted numbers, arranged in books of 1-5,000 and 5,001-10,000. Similar labels are now computer printed. Other prefix codes are used for corporate collections, such as borehole samples, thin sections, microfossils, macrofossil sections, museum reference fossils, display quality rock samples and fossil casts. Such numbers convey significant immediate information to the curator, without the need to consult detailed registers. The registration numbers have been recorded in a series of over 1,000 registers, complete with metadata including sample ID, locality, horizon, collector and date. Citations are added as appropriate. Parent-child relationships are noted when re-registering sub-subsamples. For example, a borehole sample BDA1001 could have been subsampled for a petrological thin section and off-cut (E14159), a fossil thin section (PF365), and micropalynological slides (MPA273), one of which included a new holotype (MPK111), and a figured macrofossil (GSE1314). All main corporate collections now have publicly available online databases, such as Palaeo
Optimized Sample Handling Strategy for Metabolic Profiling of Human Feces.
Gratton, Jasmine; Phetcharaburanin, Jutarop; Mullish, Benjamin H; Williams, Horace R T; Thursz, Mark; Nicholson, Jeremy K; Holmes, Elaine; Marchesi, Julian R; Li, Jia V
2016-05-03
Fecal metabolites are being increasingly studied to unravel the host-gut microbial metabolic interactions. However, there are currently no guidelines for fecal sample collection and storage based on a systematic evaluation of the effect of time, storage temperature, storage duration, and sampling strategy. Here we derive an optimized protocol for fecal sample handling with the aim of maximizing metabolic stability and minimizing sample degradation. Samples obtained from five healthy individuals were analyzed to assess topographical homogeneity of feces and to evaluate storage duration-, temperature-, and freeze-thaw cycle-induced metabolic changes in crude stool and fecal water using a (1)H NMR spectroscopy-based metabolic profiling approach. Interindividual variation was much greater than that attributable to storage conditions. Individual stool samples were found to be heterogeneous and spot sampling resulted in a high degree of metabolic variation. Crude fecal samples were remarkably unstable over time and exhibited distinct metabolic profiles at different storage temperatures. Microbial fermentation was the dominant driver in time-related changes observed in fecal samples stored at room temperature and this fermentative process was reduced when stored at 4 °C. Crude fecal samples frozen at -20 °C manifested elevated amino acids and nicotinate and depleted short chain fatty acids compared to crude fecal control samples. The relative concentrations of branched-chain and aromatic amino acids significantly increased in the freeze-thawed crude fecal samples, suggesting a release of microbial intracellular contents. The metabolic profiles of fecal water samples were more stable compared to crude samples. Our recommendation is that intact fecal samples should be collected, kept at 4 °C or on ice during transportation, and extracted ideally within 1 h of collection, or a maximum of 24 h. Fecal water samples should be extracted from a representative amount (∼15 g
Singal, Ashok K.
2014-07-01
We examine the consistency of the unified scheme of Fanaroff-Riley type II radio galaxies and quasars with their observed number and size distributions in the 3CRR sample. We separate the low-excitation galaxies from the high-excitation ones, as the former may not harbor a quasar within and thus may not partake in the unified scheme models. In the updated 3CRR sample, at low redshifts (z < 0.5), the relative number and luminosity distributions of high-excitation galaxies and quasars roughly match the expectations from the orientation-based unified scheme model. However, a foreshortening in the observed sizes of quasars, which is a necessary prediction of the orientation-based model, is not seen with respect to radio galaxies even when the low-excitation galaxies are excluded. This dashes the hope that the unified scheme might still work if one includes only the high-excitation galaxies.
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance tradeoff. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance tradeoff to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to
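The bias-variance trade-off described here can be illustrated with a toy simulation (all numbers are hypothetical, not taken from the paper): a per-field model reproduces the overlay signature without bias but passes metrology noise straight through, while a single wafer-mean model is biased but averages the noise down.

```python
import random
import statistics

rng = random.Random(42)
true_offsets = [0.0, 0.5, 1.0, 1.5, 2.0]   # hypothetical per-field overlay (nm)
sigma = 1.0                                 # metrology noise (nm)

def simulate_wafer():
    """One wafer's measured overlay: true signature plus metrology noise."""
    return [t + rng.gauss(0.0, sigma) for t in true_offsets]

def mse(model, n_wafers=2000):
    """Mean squared error of a fitted model against the true signature,
    averaged over many simulated wafers."""
    errs = []
    for _ in range(n_wafers):
        fit = model(simulate_wafer())
        errs.extend((f - t) ** 2 for f, t in zip(fit, true_offsets))
    return statistics.fmean(errs)

# 'High order' model: one parameter per field (unbiased, noise-sensitive).
per_field = lambda w: w
# 'Low order' model: a single wafer mean (biased, but averages out noise).
global_mean = lambda w: [statistics.fmean(w)] * len(w)

print(round(mse(per_field), 2), round(mse(global_mean), 2))
```

With this noise level the low-order model attains the lower MSE; shrinking `sigma` relative to the signature reverses the ranking, which is the trade-off the paper characterizes.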
Relevance of sampling schemes in light of Ruelle's linear response theory
NASA Astrophysics Data System (ADS)
Lucarini, Valerio; Kuna, Tobias; Wouters, Jeroen; Faranda, Davide
2012-05-01
We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that using a general functional decomposition for space-time dependent forcings, we can define elementary susceptibilities that allow us to construct the linear response of the system to general perturbations. Starting from the definition of SRB measure, we then study the consequence of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows us to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results in terms of strategies for analysing the outputs of numerical experiments by providing a critical review of a formula proposed by Reick.
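In standard notation, the causal response setup discussed above can be summarized as follows; this is a hedged sketch with generic symbols (observable A, causal Green's function G_A, forcing f), not a verbatim reproduction of Ruelle's formula:

```latex
% First-order response of the expectation of an observable A to a
% time-dependent forcing f, with a causal Green's function G_A,
% and the associated susceptibility (Fourier-transformed response):
\delta\!\left\langle A \right\rangle(t)
  = \int_{0}^{\infty} G_{A}(\tau)\, f(t-\tau)\, \mathrm{d}\tau ,
\qquad
\chi_{A}(\omega)
  = \int_{0}^{\infty} G_{A}(\tau)\, e^{i\omega\tau}\, \mathrm{d}\tau .
```

The paper's point is that the numerical estimate of the left-hand side depends on how the sampling times are chosen relative to the perturbation.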
Accelerated Simplified Swarm Optimization with Exploitation Search Scheme for Data Clustering
Yeh, Wei-Chang; Lai, Chyh-Ming
2015-01-01
Data clustering is commonly employed in many disciplines. The aim of clustering is to partition a set of data into clusters, in which objects within the same cluster are similar and dissimilar to objects that belong to different clusters. Over the past decade, evolutionary algorithms have been commonly used to solve clustering problems. This study presents a novel algorithm based on simplified swarm optimization, an emerging population-based stochastic optimization approach with the advantages of simplicity, efficiency, and flexibility. The approach combines variable vibrating search (VVS) and rapid centralized strategy (RCS) to deal with the clustering problem. VVS is an exploitation search scheme that refines the quality of solutions by searching the extreme points near the global best position. RCS is developed to accelerate the convergence rate of the algorithm by using the arithmetic average. To empirically evaluate the performance of the proposed algorithm, experiments are conducted on 12 benchmark datasets, and the results are compared with recent works. Statistical analysis indicates that the proposed algorithm is competitive in terms of solution quality. PMID:26348483
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
NASA Astrophysics Data System (ADS)
Sorella, S.; Devaux, N.; Dagrada, M.; Mazzola, G.; Casula, M.
2015-12-01
We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.
NASA Astrophysics Data System (ADS)
Kojima, Sadaoki; Zhe, Zhang; Sawada, Hiroshi; Firex Team
2015-11-01
In Fast Ignition Inertial Confinement Fusion, optimization of the relativistic electron beam (REB) accelerated by a high-intensity laser pulse is critical for efficient core heating. The high-energy tail of the electron spectrum is generated by the laser interaction with a long-scale-length plasma and does not efficiently couple to a fuel core. In the cone-in-shell scheme, long-scale-length plasmas can be produced inside the cone by the pedestal of a high-intensity laser, radiation heating of the inner cone wall, and the shock wave from an implosion core. We have investigated the relation between the presence of pre-plasma inside the cone and the REB energy distribution using the Gekko XII and 2kJ-PW LFEX laser at the Institute of Laser Engineering. The condition of the inner cone wall was monitored using VISAR and SOP systems on a cone-in-shell implosion. The generation of the REB was measured with an electron energy analyzer and a hard x-ray spectrometer on a separate shot by injecting the LFEX laser into an imploded target. The results show a strong correlation between the preheat and high-energy tail generation. Optimization of cone-wall thickness for fast ignition will be discussed. This work is supported by NIFS, MEXT/JSPS KAKENHI Grant and JSPS Fellows (Grant Number 14J06592).
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.
1987-01-01
A concept was developed for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include the derivation of necessary conditions for optimality for the general problem formulation, and for several simplified cases. The question of the existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories arbitrarily close to the unconstrained gain solution at each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.
NASA Astrophysics Data System (ADS)
Xiao-mei, Gao; Jian-bo, Dai
2017-09-01
To control the deformation of the surrounding rock in a large-section tunnel, the stratum displacement caused by tunnel excavation was analyzed through on-site monitoring, and the existing tunnel pre-reinforcement scheme was optimized. The FLAC3D numerical simulation method was used to verify the feasibility of the optimized pre-reinforcement scheme. Field tests showed that the deformation of the tunnel's surrounding rock remained within the allowable range, and that the deformation control measures for the surrounding rock are reasonable and effective. The research results offer important guidance for similar future projects.
Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua
2017-04-18
Using mobile vehicles as "data mules" to collect data generated by a huge number of sensing devices that are widely spread across a smart city is considered an economical and effective way of obtaining data about smart cities. However, most current research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in the LCODC scheme consists of not only vehicle-to-device (V2D) transmission but also vehicle-to-vehicle (V2V) transmission. Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In particular, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale, real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme decreases the average latency from several hours to around 12 min with respect to schemes where V2V transmission is disabled, while the coverage rate reaches over 30%.
Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.
Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly
2015-09-01
Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.
Optimal allocation of point-count sampling effort
Barker, R.J.; Sauer, J.R.; Link, W.A.
1993-01-01
Both unlimited and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and sampling characteristics of population size. Using information about the relationship between the proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data on the cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, the optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
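As a rough illustration of this trade-off, the sketch below maximizes the expected total count (criterion 3 above) under an assumed exponential detection function p(t) = 1 - exp(-lam*t); the detection rate, travel time, and time budget are hypothetical values, not figures from the paper.

```python
import math

def expected_total_count(t, lam=0.15, travel=5.0, budget=240.0):
    """Expected total count for a survey: the number of points visitable
    within the time budget times the proportion of individuals detected
    per point, assuming exponential detection p(t) = 1 - exp(-lam * t)."""
    n_points = budget / (t + travel)        # counts that fit in the budget
    p_detect = 1.0 - math.exp(-lam * t)     # proportion detected at a point
    return n_points * p_detect

# Grid search over candidate count durations (minutes).
durations = [d / 2 for d in range(1, 41)]   # 0.5 .. 20.0 min
best = max(durations, key=expected_total_count)
print(f"optimal count duration: {best:.1f} min")
```

Longer counts detect a larger fraction of birds per point but allow fewer points per day, so the optimum sits strictly between the two extremes, exactly as the abstract describes for its other criteria as well.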
NASA Astrophysics Data System (ADS)
Schwientek, Marc; Guillet, Gaelle; Kuch, Bertram; Rügner, Hermann; Grathwohl, Peter
2014-05-01
Xenobiotic contaminants such as pharmaceuticals or personal care products are typically introduced continuously into receiving water bodies via wastewater treatment plant (WWTP) outfalls and, episodically, via combined sewer overflows during precipitation events. Little is known about how these chemicals behave in the environment and how they affect ecosystems and human health. Examples of traditional persistent organic pollutants reveal that they may still be present in the environment even decades after release. In this study a sampling strategy was developed which gives valuable insights into the environmental behaviour of xenobiotic chemicals. The method is based on the Lagrangian sampling scheme, by which a parcel of water is sampled repeatedly as it moves downstream, so that the chemical, physical, and hydrologic processes altering the characteristics of the water mass can be investigated. The Steinlach is a tributary of the River Neckar in Southwest Germany with a catchment area of 140 km². It receives the effluents of a WWTP with 99,000 inhabitant equivalents 4 km upstream of its mouth. The varying flow rate of effluents induces temporal patterns of electrical conductivity in the river water which make it possible to track parcels of water along the subsequent urban river section. These parcels of water were sampled (a) close to the outlet of the WWTP and (b) 4 km downstream at the confluence with the Neckar. Sampling was repeated at a 15 min interval over a complete diurnal cycle and 2 h composite samples were prepared. A model-based analysis demonstrated, on the one hand, that substances behaved reactively to a varying extent along the studied river section. On the other hand, it revealed that the observed degradation rates are likely dependent on the time of day. Some chemicals were degraded mainly during daytime (e.g. the disinfectant Triclosan or the phosphorous flame retardant TDCP), others as well during nighttime (e.g. the musk fragrance
The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes
NASA Technical Reports Server (NTRS)
Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark
2000-01-01
Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log(base 10) (L(sub [OII])/W) approximately > 35 [or radio luminosity log(base 10) (L(sub 151)/ W/Hz.sr) approximately > 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle theta(sub trans) approximately equal 53 deg. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in theta(sub trans) and/or a gradual increase in the fraction of lightly-reddened (0 approximately < A(sub V) approximately < 5) lines-of-sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.
[Optimized sample preparation for metabolome studies on Streptomyces coelicolor].
Li, Yihong; Li, Shanshan; Ai, Guomin; Wang, Weishan; Zhang, Buchang; Yang, Keqian
2014-04-01
Streptomycetes produce many antibiotics and are important model microorganisms for scientific research and antibiotic production. Metabolomics is an emerging technological platform for qualitatively and quantitatively analyzing the low molecular weight metabolites in a given organism. Compared to other omics platforms, metabolomics has a greater advantage in monitoring metabolic flux distribution and thus identifying key metabolites related to a target metabolic pathway. The present work aims at establishing a rapid, accurate sample preparation protocol for metabolomics analysis in streptomycetes. Several sample preparation steps, including cell quenching time, cell separation method, and conditions for metabolite extraction and metabolite derivatization, were optimized. Then, the metabolic profiles of Streptomyces coelicolor during different growth stages were analyzed by GC-MS. The optimal sample preparation conditions were as follows: time of low-temperature quenching 4 min, cell separation by fast filtration, time of freeze-thaw 45 s/3 min, and metabolite derivatization at 40 °C for 90 min. Using this optimized protocol, 103 metabolites were identified from a sample of S. coelicolor, distributed across central metabolic pathways (glycolysis, pentose phosphate pathway and citrate cycle) and amino acid, fatty acid, and nucleotide metabolic pathways, etc. By comparing the temporal profiles of these metabolites, the amino acid and fatty acid metabolic pathways were found to stay at a high level during stationary phase; therefore, these pathways may play an important role during the transition between primary and secondary metabolism. An optimized protocol of sample preparation was established and applied to metabolomics analysis of S. coelicolor, and 103 metabolites were identified. The temporal profiles of the metabolites reveal that amino acid and fatty acid metabolic pathways may play an important role in the transition from primary to
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor-based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics, and the information distribution process is modeled as a variable time increment process in which, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process, where the control effectors know only the past information update intervals and the Markov transition mechanism, is almost identical to that obtained with a known and uniform information update interval.
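A toy illustration of the setting: discretize a scalar plant over uncertain sampling intervals and solve the quadratic-cost problem by Riccati value iteration. Averaging the Riccati update over the interval distribution is a certainty-equivalent simplification for illustration, not the paper's full Markov formulation, and all numbers are hypothetical.

```python
import math

# Continuous scalar plant dx/dt = alpha*x + u, discretized over a sampling
# interval h: x_{k+1} = a(h)*x_k + b(h)*u_k.
alpha, q, r = 0.5, 1.0, 0.1
intervals = [0.05, 0.2]    # two possible information-update intervals (s)
probs = [0.7, 0.3]         # assumed stationary probabilities of each interval

def discretize(h):
    a = math.exp(alpha * h)
    return a, (a - 1.0) / alpha

# Value iteration on the scalar Riccati recursion, averaging the update
# over the interval distribution (a certainty-equivalent simplification).
P = 0.0
for _ in range(500):
    P = q + sum(p * (a * a * P - (a * b * P) ** 2 / (r + b * b * P))
                for p, (a, b) in zip(probs, map(discretize, intervals)))

# Feedback gain to apply when the realized interval is h.
gains = {}
for h in intervals:
    a, b = discretize(h)
    gains[h] = a * b * P / (r + b * b * P)
print({h: round(k, 3) for h, k in gains.items()})
```

The longer interval discretizes to a larger open-loop growth factor, so its gain is larger; the paper's result is that scheduling such interval-dependent gains costs little relative to a known, uniform update interval.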
Optimization of the combined proton acceleration regime with a target composition scheme
NASA Astrophysics Data System (ADS)
Yao, W. P.; Li, B. W.; Zheng, C. Y.; Liu, Z. J.; Yan, X. Q.; Qiao, B.
2016-01-01
A target composition scheme to optimize the combined proton acceleration regime is presented and verified by two-dimensional particle-in-cell simulations, using an ultra-intense circularly polarized (CP) laser pulse irradiating an overdense hydrocarbon (CH) target instead of a pure hydrogen (H) one. The combined acceleration regime is a two-stage proton acceleration scheme combining the radiation pressure dominated acceleration (RPDA) stage and the laser wakefield acceleration (LWFA) stage sequentially. Protons are pre-accelerated in the first stage when the ultra-intense CP laser pulse irradiates the overdense CH target. The wakefield is driven by the laser pulse after it penetrates through the overdense CH target and propagates in the underdense tritium plasma gas. With this pre-acceleration stage, protons can be trapped in the wakefield and accelerated to much higher energy by LWFA. Finally, protons with higher energies (from about 20 GeV up to about 30 GeV) and lower energy spreads (from about 18% down to about 5% in full-width at half-maximum, or FWHM) are generated, as compared to the use of a pure H target. This is because protons can be more stably pre-accelerated in the first RPDA stage when using CH targets. As the carbon-to-hydrogen density ratio increases, the energy spread decreases and the maximum proton energy increases. It also shows that for the same laser intensity of around 10^22 W cm^-2, using the CH target leads to a higher proton energy, as compared to the use of a pure H target. Additionally, the proton energy can be further increased by employing a longitudinally negative gradient of the background plasma density.
NASA Astrophysics Data System (ADS)
Izzuan Jaafar, Hazriq; Mohd Ali, Nursabillilah; Mohamed, Z.; Asmiza Selamat, Nur; Faiz Zainal Abidin, Amar; Jamian, J. J.; Kassim, Anuar Mohamed
2013-12-01
This paper presents the development of optimal PID and PD controllers for controlling a nonlinear gantry crane system. The proposed Binary Particle Swarm Optimization (BPSO) algorithm, which uses a Priority-based Fitness Scheme, is adopted to obtain five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses, including trolley displacement and payload oscillation. The dynamic model of the gantry crane system is derived using the Lagrange equation. Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). The proposed technique demonstrates that implementation of the Priority-based Fitness Scheme in BPSO is effective and able to move the trolley as fast as possible to the various desired positions.
Classifier-Guided Sampling for Complex Energy System Optimization
Backlund, Peter B.; Eddy, John P.
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
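The core CGS filtering step, scoring candidate designs with a Bayesian classifier and spending expensive evaluations only on those with a high posterior probability of being promising, can be sketched with a Bernoulli naive Bayes stand-in for the report's Bayesian network classifier. The toy labeling rule, thresholds, and sizes below are all illustrative assumptions:

```python
import random

def train_naive_bayes(designs, labels, n_vars, alpha=1.0):
    """Bernoulli naive Bayes: per-variable P(x_i = 1 | class), Laplace-smoothed."""
    counts = {0: [alpha] * n_vars, 1: [alpha] * n_vars}
    totals = {0: 2 * alpha, 1: 2 * alpha}
    for d, y in zip(designs, labels):
        totals[y] += 1
        for i, bit in enumerate(d):
            counts[y][i] += bit
    priors = {y: totals[y] / (totals[0] + totals[1]) for y in (0, 1)}
    likes = {y: [c / totals[y] for c in counts[y]] for y in (0, 1)}
    return priors, likes

def posterior_good(design, priors, likes):
    """Posterior probability that a candidate design is 'promising' (class 1)."""
    post = {}
    for y in (0, 1):
        p = priors[y]
        for bit, q in zip(design, likes[y]):
            p *= q if bit else (1.0 - q)
        post[y] = p
    return post[1] / (post[0] + post[1])

# Train on already-evaluated designs, then filter a new candidate pool.
rng = random.Random(42)
n_vars = 8
evaluated = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(300)]
labels = [1 if sum(d) >= 5 else 0 for d in evaluated]   # stand-in "promising" label
priors, likes = train_naive_bayes(evaluated, labels, n_vars)
pool = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(100)]
promising = [d for d in pool if posterior_good(d, priors, likes) > 0.5]
```

Only the `promising` subset would be passed to the expensive objective, which is how CGS cuts the evaluation count relative to an unfiltered genetic algorithm.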
Optimizing performance of nonparametric species richness estimators under constrained sampling.
Rajakaruna, Harshana; Drake, D Andrew R; T Chan, Farrah; Bailey, Sarah A
2016-10-01
Understanding the functional relationship between the sample size and the performance of species richness estimators is necessary to optimize limited sampling resources against estimation error. Nonparametric estimators such as Chao and Jackknife demonstrate strong performance, but consensus is lacking as to which estimator performs better under constrained sampling. We explore a method to improve the estimators under such scenarios. The method we propose involves randomly splitting species-abundance data from a single sample into two equally sized samples, and using an appropriate incidence-based estimator to estimate richness. To test this method, we assume a lognormal species-abundance distribution (SAD) with varying coefficients of variation (CV), generate samples using MCMC simulations, and use the expected mean-squared error as the performance criterion of the estimators. We test this method for the Chao, Jackknife, ICE, and ACE estimators. Between abundance-based estimators with the single sample and incidence-based estimators with the split-in-two samples, Chao2 performed the best when CV < 0.65, and incidence-based Jackknife performed the best when CV > 0.65, given that the ratio of sample size to observed species richness is greater than a critical value given by a power function of CV with respect to the abundance of the sampled population. The proposed method increases the performance of the estimators substantially and is more effective when more rare species are in an assemblage. We also show that the splitting method works qualitatively similarly well when the SADs are log series, geometric series, and negative binomial. We demonstrate an application of the proposed method by estimating the richness of zooplankton communities in samples of ballast water. The proposed splitting method is an alternative to sampling a large number of individuals to increase the accuracy of richness estimations; therefore, it is appropriate for a wide range of resource
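The proposed split-in-two procedure can be sketched directly: randomly split one abundance sample into two equal halves, treat the halves as two incidence samples, and apply the bias-corrected Chao2 estimator. The toy community below (roughly lognormal abundances with many rare species) is an illustrative assumption, not the paper's MCMC evaluation:

```python
import math
import random

def chao2_split(individuals, rng):
    """Split one abundance sample into two equal halves at random and apply the
    bias-corrected Chao2 incidence estimator with m = 2 'samples'."""
    pool = list(individuals)
    rng.shuffle(pool)
    half = len(pool) // 2
    s1, s2 = set(pool[:half]), set(pool[half:])
    q1 = len(s1 ^ s2)   # species present in exactly one half
    q2 = len(s1 & s2)   # species present in both halves
    m = 2
    return len(s1 | s2) + ((m - 1) / m) * q1 * (q1 - 1) / (2 * (q2 + 1))

# Toy community with a long tail of rare species (roughly lognormal abundances).
rng = random.Random(7)
community = []
for sp in range(60):
    community += [sp] * max(1, int(math.exp(rng.gauss(1.0, 1.2))))
sample = rng.sample(community, len(community) // 2)   # constrained sampling effort
observed = len(set(sample))
estimate = chao2_split(sample, rng)
```

The correction term grows with the number of species seen in only one half (the "rare" signal), which is why the paper finds the splitting method most effective in assemblages with many rare species.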
Learning approach to sampling optimization: Applications in astrodynamics
NASA Astrophysics Data System (ADS)
Henderson, Troy Allen
A new, novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA) is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models---with varying complexity---of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.
Simultaneous beam sampling and aperture shape optimization for SPORT
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu
2015-02-15
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms (column generation, the subgradient method, and pattern search) was established. The proposed technique was applied to two previously treated clinical cases: a head and
Method optimization for fecal sample collection and fecal DNA extraction.
Mathay, Conny; Hamot, Gael; Henry, Estelle; Georges, Laura; Bellora, Camille; Lebrun, Laura; de Witt, Brian; Ammerlaan, Wim; Buschart, Anna; Wilmes, Paul; Betsou, Fay
2015-04-01
This is the third in a series of publications presenting formal method validation for biospecimen processing in the context of accreditation in laboratories and biobanks. We report here optimization of a stool processing protocol validated for fitness-for-purpose in terms of downstream DNA-based analyses. Stool collection was initially optimized in terms of sample input quantity and supernatant volume using canine stool. Three DNA extraction methods (PerkinElmer MSM I®, Norgen Biotek All-In-One®, MoBio PowerMag®) and six collection container types were evaluated with human stool in terms of DNA quantity and quality, DNA yield, and its reproducibility by spectrophotometry, spectrofluorometry, and quantitative PCR, DNA purity, SPUD assay, and 16S rRNA gene sequence-based taxonomic signatures. The optimal MSM I protocol involves a 0.2 g stool sample and 1000 μL supernatant. The MSM I extraction was superior in terms of DNA quantity and quality when compared to the other two methods tested. Optimal results were obtained with plain Sarstedt tubes (without stabilizer, requiring immediate freezing and storage at -20°C or -80°C) and Genotek tubes (with stabilizer and RT storage) in terms of DNA yields (total, human, bacterial, and double-stranded) according to spectrophotometry and spectrofluorometry, with low yield variability and good DNA purity. No inhibitors were identified at 25 ng/μL. The protocol was reproducible in terms of DNA yield among different stool aliquots. We validated a stool collection method suitable for downstream DNA metagenomic analysis. DNA extraction with the MSM I method using Genotek tubes was considered optimal, with simple logistics in terms of collection and shipment and offers the possibility of automation. Laboratories and biobanks should ensure protocol conditions are systematically recorded in the scope of accreditation.
Optimized robust plasma sampling for glomerular filtration rate studies.
Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L
2012-09-01
In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how a few samples can allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement.
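The few-sample biexponential characterization described above can be illustrated with classic curve peeling in place of full nonlinear regression: fit the terminal exponential to the two latest samples by log-linear regression, subtract it, then fit the fast component to the early residuals. The five-point schedule matches the abstract; the kinetic parameters below are illustrative assumptions:

```python
import math

def peel_biexponential(times, conc):
    """Curve peeling for c(t) = A*exp(-a*t) + B*exp(-b*t): fit the slow (terminal)
    term to the two latest samples, subtract it, and fit the fast term to the
    early residuals by log-linear least squares. Returns (A, a, B, b)."""
    (t1, c1), (t2, c2) = sorted(zip(times, conc))[-2:]
    b = math.log(c1 / c2) / (t2 - t1)
    B = c1 * math.exp(b * t1)
    early = [(t, c - B * math.exp(-b * t)) for t, c in zip(times, conc)
             if t < t1 and c - B * math.exp(-b * t) > 0]
    ts = [t for t, _ in early]
    ys = [math.log(r) for _, r in early]
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    A = math.exp(ybar - slope * tbar)
    return A, -slope, B, b

def auc(A, a, B, b):
    """Area under the clearance curve from 0 to infinity; GFR = dose / AUC."""
    return A / a + B / b

# Synthetic biexponential clearance, sampled at the judicious 5-point schedule.
true = lambda t: 5.0 * math.exp(-0.1 * t) + 2.0 * math.exp(-0.01 * t)
times = [5, 10, 15, 90, 180]
A, a, B, b = peel_biexponential(times, [true(t) for t in times])
area = auc(A, a, B, b)   # true AUC is 5/0.1 + 2/0.01 = 250
```

With samples clustered early (to pin down the fast component) plus two late points (to pin down the terminal slope), five samples recover the full-curve AUC almost exactly, which is the intuition behind the optimized schedules the study identifies.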
Test samples for optimizing STORM super-resolution microscopy.
Metcalf, Daniel J; Edwards, Rebecca; Kumarswami, Neelam; Knight, Alex E
2013-09-06
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way than normal, by building it up molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Determining the Bayesian optimal sampling strategy in a hierarchical system.
Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre
2010-09-01
Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
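The "Bayes risk of a decision rule" calculation can be sketched for a toy series system with Beta-distributed component reliabilities: for each candidate test, compute the expected posterior variance of the system reliability over the predicted test outcome, and choose the test that reduces it most. The component priors below are illustrative assumptions, not the report's multicomponent example:

```python
def series_variance(params):
    """Variance of the product of independent Beta(a, b) component reliabilities,
    via the first two moments E[p] = a/(a+b), E[p^2] = a(a+1)/((a+b)(a+b+1))."""
    m1 = m2 = 1.0
    for a, b in params:
        m1 *= a / (a + b)
        m2 *= a * (a + 1) / ((a + b) * (a + b + 1))
    return m2 - m1 * m1

def expected_variance_after_test(params, j):
    """Preposterior (Bayes-risk-style) variance if one test is spent on component j."""
    a, b = params[j]
    p_pass = a / (a + b)  # posterior predictive probability the test passes
    passed = params[:j] + [(a + 1, b)] + params[j + 1:]
    failed = params[:j] + [(a, b + 1)] + params[j + 1:]
    return p_pass * series_variance(passed) + (1 - p_pass) * series_variance(failed)

def best_component_to_test(params):
    return min(range(len(params)),
               key=lambda j: expected_variance_after_test(params, j))

# Three components in series; the middle one has the least test data.
component_priors = [(20, 2), (3, 1), (50, 5)]
choice = best_component_to_test(component_priors)
```

The data-poor component is selected because testing it shrinks the system-level reliability variance the most per test, which is the essence of the constrained optimal-sampling strategy.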
Tan, Sirui; Huang, Lianjie
2014-11-01
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial, not temporal, dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling that controls dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil in achieving similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent at similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
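A stripped-down version of the coefficient-optimization idea (for a symmetric 1-D second-derivative stencil, rather than the paper's new space-time stencil) can be written as a linear least-squares fit of the stencil's spectral response to the exact -k^2 over a chosen wavenumber band. The stencil half-width and band limit below are illustrative assumptions:

```python
import numpy as np

def optimized_fd_coeffs(M, kmax, npts=200):
    """Fit symmetric second-derivative weights a_1..a_M so that the stencil's
    spectral response 2*sum_m a_m*(cos(m*K) - 1) matches -K^2 for K = k*h
    over (0, kmax], in the least-squares sense."""
    K = np.linspace(1e-3, kmax, npts)
    G = 2.0 * (np.cos(np.outer(K, np.arange(1, M + 1))) - 1.0)
    a, *_ = np.linalg.lstsq(G, -K**2, rcond=None)
    a0 = -2.0 * a.sum()   # center weight from the zero-wavenumber (consistency) condition
    return a0, a

a0, a = optimized_fd_coeffs(M=2, kmax=2.0)
```

For M = 2 the Taylor (4th-order) weights are (4/3, -1/12); the fitted weights give up some Taylor-series accuracy near K = 0 in exchange for a smaller dispersion error across the whole band, which is the essence of the optimized schemes the abstract describes.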
Inhibition of viscous fluid fingering: A variational scheme for optimal flow rates
NASA Astrophysics Data System (ADS)
Miranda, Jose; Dias, Eduardo; Alvarez-Lacalle, Enrique; Carvalho, Marcio
2012-11-01
Conventional viscous fingering flow in radial Hele-Shaw cells employs a constant injection rate, resulting in the emergence of branched interfacial shapes. The search for mechanisms to prevent the development of these bifurcated morphologies is relevant to a number of areas in science and technology. A challenging problem is how best to choose the pumping rate in order to restrain growth of interfacial amplitudes. We use an analytical variational scheme to look for the precise functional form of such an optimal flow rate. We find it increases linearly with time in a specific manner so that interface disturbances are minimized. Experiments and nonlinear numerical simulations support the effectiveness of this particularly simple, but not at all obvious, pattern controlling process. J.A.M., E.O.D. and M.S.C. thank CNPq/Brazil for financial support. E.A.L. acknowledges support from Secretaria de Estado de IDI Spain under project FIS2011-28820-C02-01.
Gizaw, S; van Arendonk, J A M; Valle-Zárate, A; Haile, A; Rischkowsky, B; Dessie, T; Mwai, A O
2014-10-01
A simulation study was conducted to optimize a cooperative village-based sheep breeding scheme for Menz sheep of Ethiopia. Genetic gains and profits were estimated under nine levels of farmers' participation and three scenarios of controlled breeding achieved in the breeding programme, as well as under three cooperative flock sizes, ewe to ram mating ratios and durations of ram use for breeding. Under fully controlled breeding, that is, when there is no gene flow between participating (P) and non-participating (NP) flocks, profits ranged from Birr 36.9 at 90% of participation to Birr 21.3 at 10% of participation. However, genetic progress was not affected adversely. When there was gene flow from the NP to P flocks, profits declined from Birr 28.6 to Birr -3.7 as participation declined from 90 to 10%. Under the two-way gene flow model (i.e. when P and NP flocks are herded mixed in communal grazing areas), NP flocks benefited from the genetic gain achieved in the P flocks, but the benefits declined sharply when participation declined beyond 60%. Our results indicate that a cooperative breeding group can be established with as low as 600 breeding ewes mated at a ratio of 45 ewes to one ram, and the rams being used for breeding for a period of two years. This study showed that farmer cooperation is crucial to effect genetic improvement under smallholder low-input sheep farming systems.
A test of an optimal stomatal conductance scheme within the CABLE Land Surface Model
NASA Astrophysics Data System (ADS)
De Kauwe, M. G.; Kala, J.; Lin, Y.-S.; Pitman, A. J.; Medlyn, B. E.; Duursma, R. A.; Abramowitz, G.; Wang, Y.-P.; Miralles, D. G.
2014-10-01
Stomatal conductance (gs) affects the fluxes of carbon, energy and water between the vegetated land surface and the atmosphere. We test an implementation of an optimal stomatal conductance model within the Community Atmosphere Biosphere Land Exchange (CABLE) land surface model (LSM). In common with many LSMs, CABLE does not differentiate between gs model parameters in relation to plant functional type (PFT), but only in relation to photosynthetic pathway. We therefore constrained the key model parameter "g1", which represents a plant's water-use strategy, by PFT based on a global synthesis of stomatal behaviour. As proof of concept, we also demonstrate that the g1 parameter can be estimated using two long-term average (1960-1990) bioclimatic variables: (i) temperature and (ii) an indirect estimate of annual plant water availability. The new stomatal model, in conjunction with the PFT parameterisations, resulted in a large reduction in annual fluxes of transpiration (~30% compared to the standard CABLE simulations) across evergreen needleleaf, tundra and C4 grass regions. Differences in other regions of the globe were typically small. Model performance when compared to upscaled data products was not degraded, though the new stomatal conductance scheme did not noticeably change existing model-data biases. We conclude that optimisation theory can yield a simple and tractable approach to predicting stomatal conductance in LSMs.
Optimization of sampling pattern and the design of Fourier ptychographic illuminator.
Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan
2015-03-09
Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shortened the acquisition time and improved the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
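The non-uniform sampling idea, placing illumination angles densest where biological samples concentrate their signal energy (low spatial frequencies), can be sketched as a concentric-ring LED layout with geometrically growing ring radii. The ring counts and growth factor below are illustrative assumptions, not the published illuminator design:

```python
import math

def nonuniform_led_pattern(n_rings=6, leds_first_ring=6, r_max=1.0, growth=1.5):
    """Concentric rings whose radii grow geometrically, so LED (frequency-space)
    density is highest near the optical axis, where most sample energy lies."""
    pts = [(0.0, 0.0)]
    step = r_max * (growth - 1.0) / (growth ** n_rings - 1.0)
    radius = 0.0
    for ring in range(n_rings):
        radius += step * growth ** ring
        n = leds_first_ring * (ring + 1)
        for k in range(n):
            theta = 2.0 * math.pi * k / n
            pts.append((radius * math.cos(theta), radius * math.sin(theta)))
    return pts

pts = nonuniform_led_pattern()
inner_fraction = sum(1 for x, y in pts if math.hypot(x, y) <= 0.5) / len(pts)
```

This toy layout places 127 LEDs with nearly half of them inside half the maximum radius (a uniform grid over the same disk would put only about a quarter there), mirroring the abstract's bias toward low spatial frequencies.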
Mielke, Steven L; Truhlar, Donald G
2016-01-21
Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
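The rejection step can be illustrated in one dimension: draw free-particle (Brownian-bridge) paths and accept each with probability equal to a harmonic weight that is bounded by 1, so the accepted paths follow the harmonically weighted measure. The discretization, the mass = hbar = 1 units, and all parameter values below are illustrative assumptions, not the paper's CH4 calculation:

```python
import math
import random

def free_bridge(rng, n, beta):
    """Free-particle path pinned to 0 at both ends of imaginary time beta,
    discretized at n interior points (mass = hbar = 1)."""
    dt = beta / (n + 1)
    w = [0.0]
    for _ in range(n + 1):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    end = w[-1]
    return [w[i] - (i / (n + 1)) * end for i in range(1, n + 1)]

def sample_harmonic_paths(rng, n_paths, n=16, beta=2.0, k=1.0):
    """Rejection sampling: the harmonic weight exp(-0.5*k*dt*sum x^2) is <= 1,
    so accepting a free path with that probability yields the weighted measure."""
    dt = beta / (n + 1)
    accepted, tried = [], 0
    while len(accepted) < n_paths:
        tried += 1
        x = free_bridge(rng, n, beta)
        if rng.random() < math.exp(-0.5 * k * dt * sum(xi * xi for xi in x)):
            accepted.append(x)
    return accepted, len(accepted) / tried

rng = random.Random(3)
_, acc_warm = sample_harmonic_paths(rng, 200, beta=2.0)
_, acc_cold = sample_harmonic_paths(rng, 200, beta=6.0)
```

As in the abstract, rejection from free-particle paths becomes far more expensive as the temperature drops (beta grows), because free paths wander farther from the centroid; that growing rejection cost is exactly what the harmonically guided WPIS importance function is designed to avoid.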
Optimal temperature sampling with SPOTS to improve acoustic predictions
NASA Astrophysics Data System (ADS)
Rike, Erik R.; Delbalzo, Donald R.; Samuels, Brian C.
2003-10-01
The Modular Ocean Data Assimilation System (MODAS) uses optimal interpolation to assimilate data (e.g., XBTs) and to create temperature nowcasts with associated uncertainties. When XBTs are dropped in a uniform grid (during surveys) or in random patterns spaced according to the resources available, their assimilation can lead to nowcast errors in complex littoral regions, especially when only a few measurements are available. To mitigate this, Sensor Placement for Optimal Temperature Sampling (SPOTS) [Rike and DelBalzo, Proc. IEEE Oceans (2003)] was developed to rapidly optimize the placement of a few XBTs and to maximize MODAS accuracy. This work involves high-density in situ data assimilation into MODAS to create a ground-truth temperature field, from which a ground-truth transmission-loss field was computed. Optimal XBT location sets were chosen by SPOTS, based on original MODAS uncertainties, and additional sets were chosen based on subjective choices by an oceanographer. For each XBT set, a MODAS temperature nowcast and associated transmission losses were computed. This work discusses the relationship between temperature uncertainty, temperature error, and acoustic error for the objective SPOTS approach and the subjective oceanographer approach. The SPOTS approach allowed significantly more accurate acoustic calculations, especially when few XBTs were used. [Work sponsored by NAVAIR.]
A General Investigation of Optimized Atmospheric Sample Duration
Eslinger, Paul W.; Miley, Harry S.
2012-11-28
The International Monitoring System (IMS) consists of up to 80 aerosol and xenon monitoring systems spaced around the world, with collection systems sensitive enough to detect nuclear releases from underground nuclear tests at great distances (CTBT 1996; CTBTO 2011). Although a few of the IMS radionuclide stations are closer together than 1,000 km (such as the stations in Kuwait and Iran), many of them are 2,000 km or more apart. In the absence of a scientific basis for optimizing the duration of atmospheric sampling, scientists have historically used integration times from 24 hours to 14 days for radionuclides (Thomas et al. 1977). This was entirely adequate in the past because the sources of signals were far away and large, meaning that they were smeared over many days by the time they had travelled 10,000 km. The Fukushima event pointed out the unacceptable delay time (72 hours) between the start of sample acquisition and the final data being shipped. A scientific basis for selecting a sample duration time is needed. This report considers plume migration of a non-decaying tracer using archived atmospheric data for 2011 in the HYSPLIT (Draxler and Hess 1998; HYSPLIT 2011) transport model. We present two related results: the temporal duration of the majority of the plume as a function of distance, and the behavior of the maximum plume concentration as a function of sample collection duration and distance. The modeled plume behavior can then be combined with external information about sampler design to optimize sample durations in a sampling network.
NASA Astrophysics Data System (ADS)
Zhang, Xiaojia Shelly; de Sturler, Eric; Paulino, Glaucio H.
2017-10-01
We propose an efficient probabilistic method to solve a deterministic problem: a randomized optimization approach that drastically reduces the enormous computational cost of optimizing designs under many load cases, for both continuum and truss topology optimization. Practical structural designs by topology optimization typically involve many load cases, possibly hundreds or more. The optimal design minimizes a (possibly weighted) average of the compliance under each load case (or some other objective). This means that in each optimization step a large finite element problem must be solved for each load case, leading to an enormous computational effort. In contrast, the proposed randomized optimization method with stochastic sampling requires the solution of only a few (e.g., 5 or 6) finite element problems (large linear systems) per optimization step. Based on simulated annealing, we introduce a damping scheme for the randomized approach. Through numerical examples in two and three dimensions, we demonstrate that the stochastic algorithm drastically reduces the computational cost while obtaining similar final topologies and results (e.g., compliance) compared with the standard algorithms. The results indicate that the damping scheme is effective and leads to rapid convergence of the proposed algorithm.
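The stochastic-sampling step can be sketched with the standard Rademacher trace-estimator identity: the average compliance over L load cases equals (1/L) tr(F^T K^{-1} F), which a few random +/-1 combinations of the loads estimate with one linear solve each. The matrices below are random stand-ins for a stiffness matrix and load cases, not finite element models:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_loads = 30, 100
Amat = rng.standard_normal((n_dof, n_dof))
K = Amat @ Amat.T + n_dof * np.eye(n_dof)   # stand-in SPD stiffness matrix
F = rng.standard_normal((n_dof, n_loads))   # one column per load case

def avg_compliance_exact(K, F):
    """One solve per load case: (1/L) * sum_l f_l^T K^{-1} f_l."""
    U = np.linalg.solve(K, F)
    return float(np.einsum('ij,ij->', F, U)) / F.shape[1]

def avg_compliance_sampled(K, F, n_samples, rng):
    """Hutchinson-style estimate: a few random +/-1 combinations of the loads,
    each requiring a single linear solve."""
    total = 0.0
    for _ in range(n_samples):
        xi = rng.choice([-1.0, 1.0], size=F.shape[1])   # Rademacher weights
        f = F @ xi                                      # one combined load case
        total += float(f @ np.linalg.solve(K, f))
    return total / (n_samples * F.shape[1])

exact = avg_compliance_exact(K, F)
estimate = avg_compliance_sampled(K, F, 6, rng)   # 6 solves instead of 100
```

Each optimization step then needs only a handful of solves instead of one per load case; the simulated-annealing-style damping mentioned above is what keeps the noise this injects from destabilizing the design updates.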
Continuous quality control of the blood sampling procedure using a structured observation scheme
Seemann, Tine Lindberg; Nybo, Mads
2016-01-01
Introduction An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. Materials and methods The questionnaire from the EFLM Working Group for the Preanalytical Phase was adapted to local procedures. A pilot study of three months duration was conducted. Based on this, corrective actions were implemented and a follow-up study was conducted. All phlebotomists at the Department of Clinical Biochemistry and Pharmacology were observed. Three blood collections by each phlebotomist were observed at each session conducted at the phlebotomy ward and the hospital wards, respectively. Error frequencies were calculated for the phlebotomy ward and the hospital wards and for the two study phases. Results A total of 126 blood drawings by 39 phlebotomists were observed in the pilot study, while 84 blood drawings by 34 phlebotomists were observed in the follow-up study. In the pilot study, the three major error items were hand hygiene (42% error), mixing of samples (22%), and order of draw (21%). Minor significant differences were found between the two settings. After focus on the major aspects, the follow-up study showed significant improvement for all three items at both settings (P < 0.01, P < 0.01, and P = 0.01, respectively). Conclusion Continuous quality control of the phlebotomy procedure revealed a number of items not conducted in compliance with the local phlebotomy guideline. It supported significant improvements in the adherence to the recommended phlebotomy procedures and facilitated documentation of the phlebotomy quality. PMID:27812302
Jiang, Hai-ming; Xie, Kang; Wang, Ya-fei
2010-05-24
An effective pump scheme for the design of Raman fiber amplifiers with broadband and flat gain spectra is proposed. This novel approach uses a new shooting algorithm, based on a modified Newton-Raphson method and a contraction factor, to solve the two-point boundary value problem of the Raman coupled equations more stably and efficiently. In combination with an improved particle swarm optimization method, which improves the efficiency and convergence rate by introducing a new parameter called the velocity acceptability probability, this scheme optimizes the wavelengths and power levels of the pumps quickly and accurately. Several broadband Raman fiber amplifiers in the C+L band with optimized pump parameters are designed. An amplifier with four pumps is designed to deliver an average on-off gain of 13.3 dB over a bandwidth of 80 nm, with maximum in-band gain ripples of about +/-0.5 dB.
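The shooting idea for a two-point boundary value problem can be illustrated on a toy linear BVP: integrate the ODE from one boundary with a guessed initial slope, then apply damped Newton-Raphson (the damping factor playing the role of a contraction factor) to the terminal mismatch. The ODE y'' = -y with y(0) = 0, y(pi/2) = 1 stands in for the Raman coupled equations purely for illustration:

```python
import math

def integrate(s, n=200):
    """RK4 for y'' = -y from t = 0 with y(0) = 0, y'(0) = s; returns y(pi/2)."""
    h = (math.pi / 2.0) / n
    y, v = 0.0, s
    f = lambda y, v: (v, -y)
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

def shoot(target=1.0, s0=0.5, tol=1e-10, damping=1.0):
    """Newton-Raphson on the unknown initial slope, with a finite-difference
    derivative and a damping (contraction) factor for robustness."""
    s = s0
    for _ in range(50):
        r = integrate(s) - target
        if abs(r) < tol:
            break
        eps = 1e-6
        drds = (integrate(s + eps) - integrate(s)) / eps
        s -= damping * r / drds
    return s

slope = shoot()   # exact answer is y'(0) = 1, since the solution is y = sin(t)
```

For the stiff, coupled Raman equations, a damping factor below 1 keeps the Newton updates contractive when the initial guess is poor, which is the stability benefit the abstract attributes to its modified shooting algorithm.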
Optimal sampling frequency in recording of resistance training exercises.
Bardella, Paolo; Carrasquilla García, Irene; Pozzo, Marco; Tous-Fajardo, Julio; Saez de Villareal, Eduardo; Suarez-Arrones, Luis
2017-03-01
The purpose of this study was to analyse the raw lifting speed collected during four different resistance training exercises to assess the optimal sampling frequency. Eight physically active participants performed sets of Squat Jumps, Countermovement Jumps, Squats and Bench Presses at a maximal lifting speed. A linear encoder was used to measure the instantaneous speed at a 200 Hz sampling rate. Subsequently, the power spectrum of the signal was computed by evaluating its Discrete Fourier Transform. The sampling frequency needed to reconstruct the signals with an error of less than 0.1% was f99.9 = 11.615 ± 2.680 Hz for the exercise exhibiting the largest bandwidth, with the absolute highest individual value being 17.467 Hz. There was no difference between sets in any of the exercises. Using the closest integer sampling frequency value (25 Hz) yielded a reconstruction of the signal up to 99.975 ± 0.025% of its total in the worst case. In conclusion, a sampling rate of 25 Hz or above is more than adequate to record raw speed data and compute power during resistance training exercises, even under the most extreme circumstances during explosive exercises. Higher sampling frequencies provide no increase in the recording precision and may instead have adverse effects on the overall data quality.
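The bandwidth criterion used above, the frequency below which 99.9% of the signal power lies, is straightforward to compute from the power spectrum. The two-tone test signal below is an illustrative stand-in for a recorded lifting-speed trace:

```python
import numpy as np

def f_containing(signal, fs, fraction=0.999):
    """Smallest frequency below which `fraction` of the total signal power lies."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cum = np.cumsum(power) / power.sum()
    return freqs[np.searchsorted(cum, fraction)]

fs = 200                              # encoder sampling rate, Hz
t = np.arange(0, 2, 1.0 / fs)
speed = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 8 * t)
f999 = f_containing(speed, fs)        # 8.0 Hz for this synthetic two-tone signal
```

By the sampling theorem, recording at a rate comfortably above twice f99.9 reconstructs the trace with negligible loss, which is why the study's measured f99.9 of roughly 12-17 Hz supports its recommended 25 Hz minimum.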
Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Thompson, David R.; Hsiang, Kian
2013-01-01
This work was designed to find a way to sample spatiotemporal phenomena optimally (or near-optimally) given limited sensing capability, and to create a model that can be run to estimate uncertainties and covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled by presuming a parametric distribution, and the model was then used to approximate the overall information gain, and consequently the objective function, from each potential sensing action. These candidate sensing actions were then cross-checked against operation costs and feasibility. From this, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains and their subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
Fixed-sample optimization using a probability density function
Barnett, R.N.; Sun, Zhiwei; Lester, W.A. Jr.
1997-12-31
We consider the problem of optimizing parameters in a trial function that is to be used in fixed-node diffusion Monte Carlo calculations. We employ a trial function with a Boys-Handy correlation function and a one-particle basis set of high quality. By employing sample points picked from a positive definite distribution, parameters that determine the nodes of the trial function can be varied without introducing singularities into the optimization. For CH as a test system, we find that a trial function of high quality is obtained and that this trial function yields an improved fixed-node energy. This result sheds light on the important question of how to improve the nodal structure and, thereby, the accuracy of diffusion Monte Carlo.
Rico, Yessica; Samain, Marie-Stephanie
2017-01-01
Investigating how genetic variation is distributed across the landscape is fundamental to inform forest conservation and restoration. Detecting spatial genetic discontinuities has value for defining management units, germplasm collection, and target sites for reforestation; however, inappropriate sampling schemes can misidentify patterns of genetic structure....
Gossner, Martin M.; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W.; Zytynska, Sharon E.
2016-01-01
There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was a strong attractant for beetles and a repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield and quality were not particularly reduced in these samples, additional interactions between the solution and the DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species from ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when genetic analysis
Optimization of a Sample Processing Protocol for Recovery of ...
Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps.
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between
Improved scheme for Cross-track Infrared Sounder geolocation assessment and optimization
NASA Astrophysics Data System (ADS)
Wang, Likun; Zhang, Bin; Tremblay, Denis; Han, Yong
2017-01-01
An improved scheme for Cross-track Infrared Sounder (CrIS) geolocation assessment at all scan angles (from -48.5° to 48.5°) is developed in this study. The method uses spatially collocated radiance measurements from the Visible Infrared Imaging Radiometer Suite (VIIRS) image band I5 to evaluate the geolocation performance of the CrIS Sensor Data Records (SDR), taking advantage of its high spatial resolution (375 m at nadir) and accurate geolocation. The basic idea is to perturb CrIS line-of-sight vectors along the in-track and cross-track directions to find the position where the CrIS and VIIRS data match most closely. The perturbation angles at this best-matched position are then used to evaluate the CrIS geolocation accuracy. More importantly, the new method is capable of performing postlaunch on-orbit geometric calibration by optimizing mapping angle parameters based on the assessment results, and can thus be extended to the CrIS sensors on subsequent satellites. Finally, the proposed method is employed to evaluate the CrIS geolocation accuracy on the current Suomi National Polar-orbiting Partnership satellite. The error characteristics are revealed along the scan positions in the in-track and cross-track directions. Relatively large errors (about 4 km) are found in the cross-track direction close to the end of the scan positions. With the newly updated mapping angles, the geolocation accuracy is greatly improved for all scan positions (to less than 0.3 km). This aligns CrIS and VIIRS spatially and thus benefits applications that combine CrIS and VIIRS measurements and products.
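A toy analogue of this perturbation search can be sketched on gridded images: instead of perturbing line-of-sight angles, the sketch below brute-forces the integer pixel shift that maximizes the correlation between two images (the real method works in angular space with sub-pixel resolution; the image size and shift range are assumptions):

```python
import numpy as np

def best_shift(cris_img, viirs_img, max_shift=3):
    """Brute-force the (in-track, cross-track) integer pixel shift that
    maximizes the correlation between two collocated images."""
    best, best_corr = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # candidate alignment of the reference image
            shifted = np.roll(viirs_img, (dy, dx), axis=(0, 1))
            corr = np.corrcoef(cris_img.ravel(), shifted.ravel())[0, 1]
            if corr > best_corr:
                best, best_corr = (dy, dx), corr
    return best, best_corr
```

Applying a known mislocation and recovering it with `best_shift` is a quick self-check of the idea.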
NASA Astrophysics Data System (ADS)
Sah, B. P.; Hämäläinen, J. M.; Sah, A. K.; Honji, K.; Foli, E. G.; Awudi, C.
2012-07-01
Accurate and reliable estimation of biomass in tropical forest has been a challenging task because a large proportion of forests are difficult to access or inaccessible. For effective implementation of REDD+ and fair benefit sharing, the proper design of field plot sampling schemes therefore plays a significant role in achieving robust biomass estimation. The existing forest inventory protocols using various field plot sampling schemes, including FAO's regular grid concept of sampling for land cover inventory at the national level, are time and human resource intensive. Wall-to-wall LiDAR scanning is a better approach to assess biomass with high precision and spatial resolution, even though it suffers from high costs. Considering the above, in this study a sampling design based on a LiDAR strip sampling scheme was devised for Ghanaian forests to support field plot sampling. Using Top-of-Atmosphere (TOA) reflectance values of satellite data, Land Use classification was carried out in accordance with IPCC definitions, and the resulting classes were further stratified by incorporating existing GIS data on ecological zones in the study area. Employing this result, LiDAR sampling strips were allocated using systematic sampling techniques. The resulting LiDAR strips represented all forest categories, as well as other Land Use classes, with their distribution adequately representing the areal share of each category. In this way, out of a total study area of 15,153 km2, LiDAR scanning was required for only 770 km2 (a sampling intensity of 5.1%). We conclude that this systematic LiDAR sampling design is likely to adequately cover variation in above-ground biomass densities and, together with the Land Use classification produced, serve as sufficient a priori data for designing efficient field plot sampling over the seven ecological zones.
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
A life test sampling plan is a technique consisting of sampling, inspection, and decision making to determine the acceptance or rejection of a batch of products from experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only helps producers save testing time and reduce testing cost, but can also positively affect the image of the product and thus attract more consumers. This paper develops frequentist (non-Bayesian) decision models for determining optimal life test sampling plans with the aim of cost minimization, by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions, each with two unknown parameters, are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
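As a rough illustration of the threshold idea, the sketch below picks the acceptance number for a simple binomial failure model with hypothetical costs. This is a simplified stand-in, not the paper's method: the actual models use exponential and Weibull lifetimes and a richer cost structure (sampling, inspection, warranty, and rejection costs):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k failures among n tested units."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def optimal_acceptance_number(n, p_fail, cost_reject, cost_per_failure):
    """Choose the failure threshold c (accept the batch iff at most c of the
    n tested units fail) that minimizes the expected decision cost."""
    best_c, best_cost = 0, float("inf")
    for c in range(n + 1):
        # probability the rule rejects the batch
        p_reject = sum(binom_pmf(k, n, p_fail) for k in range(c + 1, n + 1))
        # expected number of observed failures in accepted batches
        exp_failures_accepted = sum(k * binom_pmf(k, n, p_fail)
                                    for k in range(c + 1))
        cost = p_reject * cost_reject + exp_failures_accepted * cost_per_failure
        if cost < best_cost:
            best_c, best_cost = c, cost
    return best_c, best_cost
```

Raising the threshold trades rejection cost against failure cost, so an interior optimum appears whenever both costs matter.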
Zhou, Haibo; Xu, Wangli; Zeng, Donglin; Cai, Jianwen
2014-01-01
Multi-phase designs and biased sampling designs are two well-recognized approaches to enhance study efficiency. In this paper, we propose a new and cost-effective sampling design, the two-phase probability-dependent sampling (PDS) design, for studies with a continuous outcome. This design enables investigators to make efficient use of resources by targeting more informative subjects for sampling. We develop a new semiparametric empirical likelihood inference method to take advantage of data obtained through a PDS design. Simulation results indicate that the proposed sampling scheme, coupled with the proposed estimator, is more efficient and more powerful than the existing outcome-dependent sampling design and the simple random sampling design with the same sample size. We illustrate the proposed method with a real data set from an environmental epidemiologic study.
NASA Astrophysics Data System (ADS)
Cai, Fu; Ming, Huiqing; Mi, Na; Xie, Yanbing; Zhang, Yushu; Li, Rongping
2017-04-01
As root water uptake (RWU) is an important link in the water and heat exchange between plants and the ambient air, improving its parameterization is key to enhancing the performance of land surface model simulations. Although different types of RWU functions have been adopted in land surface models, there is no evidence as to which scheme is most applicable to maize farmland ecosystems. Based on the 2007-09 data collected at the farmland ecosystem field station in Jinzhou, the RWU function in the Common Land Model (CoLM) was optimized with scheme options in light of factors determining whether roots absorb water from a certain soil layer (Wx) and whether the baseline cumulative root efficiency required for maximum plant transpiration (Wc) is reached. The sensitivity of the parameters of the optimization scheme was investigated, and the effects of the optimized RWU function on water and heat flux simulation were then evaluated. The results indicate that the model simulation was not sensitive to Wx but was significantly impacted by Wc. With the original model, soil humidity was somewhat underestimated for precipitation-free days; soil temperature was simulated with obvious interannual and seasonal differences and remarkable underestimations for the late maize growth stage; and sensible and latent heat fluxes were overestimated and underestimated, respectively, for years with relatively less precipitation, while both were simulated with high accuracy for years with relatively more precipitation. The optimized RWU process significantly improved CoLM's performance in simulating soil humidity, temperature, sensible heat, and latent heat for dry years. In conclusion, the optimized RWU scheme available for the CoLM model is applicable to the simulation of water and heat flux for maize farmland ecosystems in arid areas.
Optimization of Evans blue quantitation in limited rat tissue samples
NASA Astrophysics Data System (ADS)
Wang, Hwai-Lee; Lai, Ted Weita
2014-10-01
Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration based on a standard 96-well plate reader. First, ethanol ensured a consistent optic path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements as a result of biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful in the detection of EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting.
Optimal CCD readout by digital correlated double sampling
NASA Astrophysics Data System (ADS)
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not yet been modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of the continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can easily be optimized to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results, obtained with both time- and frequency-domain noise generation models for completeness, are presented to validate the theory.
Sampling of soil moisture fields and related errors: implications to the optimal sampling design
NASA Astrophysics Data System (ADS)
Yoo, Chulsang
Adequate knowledge of soil moisture storage, as well as evaporation and transpiration at the land surface, is essential to understanding and predicting the reciprocal influences between land surface processes and weather and climate. Traditional techniques for soil moisture measurement are ground-based, but space-based sampling is becoming available due to recent improvements in remote sensing techniques. A fundamental question regarding soil moisture observation is how to estimate the sampling error for a given sampling scheme [G.R. North, S. Nakamoto, J. Atmos. Ocean Tech. 6 (1989) 985-992; G. Kim, J.B. Valdes, G.R. North, C. Yoo, J. Hydrol., submitted]. In this study we provide the formalism for estimating the sampling errors for ground-based sensors and space-based sensors used both separately and together. A model for soil moisture dynamics by D. Entekhabi, I. Rodriguez-Iturbe [Adv. Water Res. 17 (1994) 35-45] is introduced, and an example application is given for the Little Washita basin using the Washita '92 soil moisture data. We found that a ground-based sensor network is ineffective for large or continental scale observation, and should instead be limited to small-scale intensive observation such as for a preliminary study.
NSECT sinogram sampling optimization by normalized mutual information
NASA Astrophysics Data System (ADS)
Viana, Rodrigo S.; Galarreta-Valverde, Miguel A.; Mekkaoui, Choukri; Yoriyaz, Hélio; Jackowski, Marcel P.
2015-03-01
Neutron Stimulated Emission Computed Tomography (NSECT) is an emerging noninvasive imaging technique that measures the distribution of isotopes from biological tissue using fast-neutron inelastic scattering reaction. As a high-energy neutron beam illuminates the sample, the excited nuclei emit gamma rays whose energies are unique to the emitting nuclei. Tomographic images of each element in the spectrum can then be reconstructed to represent the spatial distribution of elements within the sample using a first generation tomographic scan. NSECT's high radiation dose deposition, however, requires a sampling strategy that can yield maximum image quality under a reasonable radiation dose. In this work, we introduce an NSECT sinogram sampling technique based on the Normalized Mutual Information (NMI) of the reconstructed images. By applying the Radon Transform on the ground-truth image obtained from a carbon-based synthetic phantom, different NSECT sinogram configurations were simulated and compared by using the NMI as a similarity measure. The proposed methodology was also applied on NSECT images acquired using MCNP5 Monte Carlo simulations of the same phantom to validate our strategy. Results show that NMI can be used to robustly predict the quality of the reconstructed NSECT images, leading to an optimal NSECT acquisition and a minimal absorbed dose by the patient.
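The NMI similarity measure used above to rank sinogram configurations can be estimated from a joint intensity histogram of the reconstructed and ground-truth images; a minimal sketch (the bin count is an assumption, and this stands in for however the study actually computed NMI):

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.
    Equals 2 for identical images and approaches 1 for independent ones."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()  # joint probability estimate

    def entropy(p):
        p = p[p > 0]  # ignore empty bins
        return -np.sum(p * np.log2(p))

    # marginal entropies from the row/column sums of the joint histogram
    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy.ravel())
```

Because NMI is bounded and symmetric, it gives a stable score for comparing reconstructions from different sinogram sampling configurations against a reference.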
Optimal probes for withdrawal of uncontaminated fluid samples
NASA Astrophysics Data System (ADS)
Sherwood, J. D.
2005-08-01
Withdrawal of fluid by a composite probe pushed against the face z = 0 of a porous half-space z > 0 is modeled assuming incompressible Darcy flow. The probe is circular, of radius a, with an inner sampling section of radius αa and a concentric outer guard probe αa
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are widely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices with huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and the DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms, so the search for the set of parameters that optimizes a sample's sensitivity is usually empirical and very time-consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities.
Optimizing Collocation of Instrument Measurements and Field Sampling Activities
NASA Astrophysics Data System (ADS)
Bromley, G. T.; Durden, D.; Ayres, E.; Barnett, D.; Krauss, R.; Luo, H.; Meier, C. L.; Metzger, S.
2015-12-01
The National Ecological Observatory Network (NEON) will provide data from automated instrument measurements and manual sampling activities. To reliably infer ecosystem driver-response relationships, two contradictory requirements need to be considered: both types of observations should be representative of the same target area while minimally impacting each other. For this purpose, a simple model was created that determines an optimal area for collocating plot-based manual field sampling activities with respect to the automated measurements. The maximum and minimum distances of the collocation areas were determined from the instrument source area distribution function in combination with sampling densities and a threshold, respectively. Specifically, the maximum distance was taken as the extent from within which 90% of the value observed by an instrument is sourced. Sampling densities were then generated by virtually distributing activity-specific impact estimates across the instrument source area. The minimum distance was determined as the position closest to the instrument location where the sampling density falls below a threshold that ensures <10% impact on the source area informing the instrument measurements. At most sites, a 30 m minimum distance ensured minimal impact of manual field sampling on instrument measurements; however, sensitive sites (e.g., tundra) required a larger minimum distance. To determine how the model responds to uncertainties in its inputs, a numerical sensitivity analysis was conducted based on multivariate error distributions that retain the covariance structure. In 90% of all cases, the model was shown to be robust against 10% (1σ) deviations in its inputs, continuing to yield a minimum distance of 30 m. For the remaining 10% of cases, preliminary results suggest a prominent dependence of the minimum distance on the climate decomposition index, which we use here as a proxy for the sensitivity of an environment to disturbance.
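The annulus computation can be illustrated with hypothetical radial profiles for the source-area weight and the sampling impact. Both curves below are made-up stand-ins, not NEON's actual footprint or impact model; they only demonstrate the two rules (enclose 90% of source weight; stay under a 10% impact cap):

```python
import numpy as np

def collocation_bounds(radii, source_weight, impact, source_frac=0.90, impact_cap=0.10):
    """Toy version of the annulus calculation: the outer bound encloses
    `source_frac` of the instrument's source-area weight; the inner bound is
    the closest radius where the estimated sampling impact drops below the cap."""
    # cumulative fraction of source-area weight out to each radius
    cum = np.cumsum(source_weight) / np.sum(source_weight)
    outer = radii[np.searchsorted(cum, source_frac)]
    # first radius at which the impact estimate falls under the cap
    under_cap = radii[impact < impact_cap]
    inner = under_cap[0] if under_cap.size else radii[-1]
    return inner, outer
```

With an exponentially decaying footprint and a 1/r impact profile, the function returns a sensible annulus (inner bound just past the impact cap, outer bound near the 90% footprint radius).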
NASA Astrophysics Data System (ADS)
Qiu, Yuzhuo
2013-04-01
The optimal weighting scheme and the role of coupling strength against load failures on symmetrically and asymmetrically coupled interdependent networks were investigated. The degree-based weighting scheme was extended to interdependent networks, with the flow dynamics dominated by global redistribution based on weighted betweenness centrality. Through contingency analysis of one-node removal, we demonstrated that an optimal weighting parameter still exists on interdependent networks, but it might shift relative to the case of isolated networks because of the breaking of symmetry. Moreover, both symmetrically and asymmetrically coupled interdependent networks achieve robustness and a better cost configuration against the one-node-removal-induced cascade of load failures more easily when the coupling strength is weaker. Our findings might have great generality for characterizing load-failure-induced cascading dynamics in real-world degree-based weighted interdependent networks.
NASA Astrophysics Data System (ADS)
Heckmann, Tobias; Gegg, Katharina; Becht, Michael
2013-04-01
Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that best explain the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample that is too large will violate the independent-sample assumption, as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the selected independent variables, and hence the result of a single model calculation, will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
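The instability the authors probe, coefficient estimates varying with the random draw of non-event cells, can be illustrated with a toy Monte-Carlo experiment. The synthetic predictors and the plain gradient-ascent logistic fit below are stand-ins for the real terrain variables and stepwise regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy raster: 200 "event" cells (debris-flow initiation) and a large pool
# of "non-event" cells, each with 4 synthetic terrain-derived predictors.
X_event = rng.normal(1.0, 1.0, size=(200, 4))
X_pool = rng.normal(0.0, 1.0, size=(20000, 4))

def fit_logistic(X, y, lr=0.5, iters=300):
    """Minimal logistic regression via gradient ascent on the log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def coef_spread(sample_size, repeats=20):
    """Refit with fresh non-event samples; return mean coefficient std."""
    coefs = []
    for _ in range(repeats):
        idx = rng.choice(len(X_pool), size=sample_size, replace=False)
        X = np.vstack([X_event, X_pool[idx]])
        y = np.r_[np.ones(len(X_event)), np.zeros(sample_size)]
        coefs.append(fit_logistic(X, y))
    return np.asarray(coefs).std(axis=0).mean()

# Small non-event samples make the fitted model strongly sample-dependent.
print(coef_spread(100), coef_spread(5000))
```

The spread shrinks as the non-event sample grows, which is exactly the "too small" half of the trade-off the study quantifies; the "too large" half (autocorrelation) needs spatial structure a toy draw does not have.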
St. Onge, K. R.; Palmé, A. E.; Wright, S. I.; Lascoux, M.
2012-01-01
Most species have at least some level of genetic structure. Recent simulation studies have shown that it is important to consider population structure when sampling individuals to infer past population history. The relevance of the results of these computer simulations for empirical studies, however, remains unclear. In the present study, we use DNA sequence datasets collected from two closely related species with very different histories, the selfing species Capsella rubella and its outcrossing relative C. grandiflora, to assess the impact of different sampling strategies on summary statistics and the inference of historical demography. Sampling strategy did not strongly influence the mean values of Tajima’s D in either species, but it had some impact on the variance. The general conclusions about demographic history were comparable across sampling schemes even when resampled data were analyzed with approximate Bayesian computation (ABC). We used simulations to explore the effects of sampling scheme under different demographic models. We conclude that when sequences from modest numbers of loci (<60) are analyzed, the sampling strategy is generally of limited importance. The same is true under intermediate or high levels of gene flow (4Nm > 2–10) in models in which global expansion is combined with either local expansion or hierarchical population structure. Although we observe a less severe effect of sampling than predicted under some earlier simulation models, our results should not be seen as an encouragement to neglect this issue. In general, a good coverage of the natural range, both within and between populations, will be needed to obtain a reliable reconstruction of a species’s demographic history, and in fact, the effect of sampling scheme on polymorphism patterns may itself provide important information about demographic history. PMID:22870403
NASA Astrophysics Data System (ADS)
Kristoffersen, Anders; Goa, Pål Erik
2011-09-01
The physiological noise in 3D image acquisition is shown to depend strongly on the sampling scheme. Five sampling schemes are considered: Linear, Centric, Segmented, Random and Tuned. Tuned acquisition means that data acquired at k-space positions k and -k are separated by a specific time interval. We model physiological noise as a periodic temporal oscillation with arbitrary spatial amplitude in the physical object and develop a general framework to describe how this is rendered in the reconstructed image. Reconstructed noise can be decomposed into one component that is in phase with the signal (parallel) and one that is 90° out of phase (orthogonal). Only the former has a significant influence on the magnitude of the signal. The study focuses on fMRI using 3D EPI. Each k-space plane is acquired in a single shot in a time much shorter than the period of the physiological noise. The above-mentioned sampling schemes are applied in the slow k-space direction, and noise propagates almost exclusively in this direction. The problem, then, is effectively one-dimensional. Numerical simulations and analytical expressions are presented. 3D noise measurements and 2D measurements with high temporal resolution are conducted. The measurements are performed under breath-hold to isolate the effect of cardiac-induced pulsatile motion. We compare the time-course stability of the sampling schemes and the extent to which noise propagates from a localized source into other parts of the imaging volume. Tuned and Linear acquisitions perform better than Centric, Segmented and Random.
Optimal Sampling to Provide User-Specific Climate Information.
NASA Astrophysics Data System (ADS)
Panturat, Suwanna
The types of weather-related world problems of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes which lead to acid rain in the northeastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study is aimed at designing optimal sampling networks that are based on customer value systems and at abstracting from data sets the information that is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models are used in this study to interpret climate variability in terms of the variables of importance to the user: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production, and (iii) the AGEHYD streamflow model, selected as "a black box" for reservoir management. A state-of-the-art Non-Linear Programming (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of the various sensors. Statistical quantities considered in determining sensor locations include the Bayes risk, the chi-squared value, the probability of a Type I error (alpha), the probability of a Type II error (beta), and the noncentrality parameter δ². Moreover, the number of years required to detect a climate change resulting in a given bushel-per-acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of the ambient sulfur dioxide to less than a certain percent of the mean is found; and finally the policy of maintaining pre-storm flood pools at selected levels is
Optimal sampling and sample preparation for NIR-based prediction of field scale soil properties
NASA Astrophysics Data System (ADS)
Knadel, Maria; Peng, Yi; Schelde, Kirsten; Thomsen, Anton; Deng, Fan; Humlekrog Greve, Mogens
2013-04-01
The representation of local soil variability with acceptable accuracy and precision depends on the spatial sampling strategy and can vary with the soil property. Therefore, soil mapping can be expensive when conventional soil analyses are involved. Visible near-infrared spectroscopy (vis-NIR) is considered a cost-effective method due to labour savings and relative accuracy. However, savings may be offset by the costs associated with the number of samples and sample preparation. The objective of this study was to find the optimal way to predict field scale total organic carbon (TOC) and texture. To optimize the vis-NIR calibrations, the effects of sample preparation and number of samples on the predictive ability of models with regard to the spatial distribution of TOC and texture were investigated. The conditioned Latin hypercube sampling (cLHS) method was used to select 125 sampling locations from an agricultural field in Denmark, using electromagnetic induction (EMI) and digital elevation model (DEM) data. The soil samples were scanned in three states (field moist, air dried, and sieved to 2 mm) with a vis-NIR spectrophotometer (LabSpec 5100, ASD Inc., USA). The Kennard-Stone algorithm was applied to select 50 representative soil spectra for the laboratory analysis of TOC and texture. In order to investigate how to minimize the costs of reference analysis, additional smaller subsets (15, 30 and 40) of samples were selected for calibration. The performance of field calibrations using spectra of soils in the three states, as well as using different numbers of calibration samples, was compared. Final models were then used to predict the remaining 75 samples. Maps of predicted soil properties were generated with Empirical Bayesian Kriging. The results demonstrated that regardless of the state of the scanned soil, the regression models and the final prediction maps were similar for most of the soil properties. Nevertheless, as expected, models based on spectra from field
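The Kennard-Stone selection used above is compact enough to sketch: seed with the two most mutually distant samples, then repeatedly add the sample farthest from everything already chosen. The random 125×10 matrix is a stand-in for the measured spectra:

```python
import numpy as np

def kennard_stone(X, k):
    """Return indices of k rows of X chosen by the Kennard-Stone algorithm."""
    # Pairwise Euclidean distances between all samples.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Seed with the two most distant samples.
    i, j = np.unravel_index(np.argmax(d), d.shape)
    chosen = [int(i), int(j)]
    while len(chosen) < k:
        # For every sample, distance to its nearest already-chosen sample.
        nearest = d[:, chosen].min(axis=1)
        nearest[chosen] = -1.0           # never re-pick a chosen sample
        chosen.append(int(np.argmax(nearest)))
    return chosen

rng = np.random.default_rng(1)
spectra = rng.normal(size=(125, 10))      # stand-in for 125 soil spectra
calibration = kennard_stone(spectra, 50)  # 50 samples sent to the lab
```

The remaining 75 indices would form the prediction set, mirroring the 50/75 split in the study.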
NASA Astrophysics Data System (ADS)
Zhang, Zhijun; Liu, Xinzijian; Chen, Zifei; Zheng, Haifeng; Yan, Kangyu; Liu, Jian
2017-07-01
We show a unified second-order scheme for constructing simple, robust, and accurate algorithms for typical thermostats for configurational sampling for the canonical ensemble. When Langevin dynamics is used, the scheme leads to the BAOAB algorithm that has been recently investigated. We show that the scheme is also useful for other types of thermostats, such as the Andersen thermostat and Nosé-Hoover chain, regardless of whether the thermostat is deterministic or stochastic. In addition to analytical analysis, two 1-dimensional models and three typical real molecular systems that range from the gas phase, clusters, to the condensed phase are used in numerical examples for demonstration. Accuracy may be increased by an order of magnitude for estimating coordinate-dependent properties in molecular dynamics (when the same time interval is used), irrespective of which type of thermostat is applied. The scheme is especially useful for path integral molecular dynamics because it consistently improves the efficiency for evaluating all thermodynamic properties for any type of thermostat.
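The BAOAB splitting for Langevin dynamics referred to above fits in a few lines. The harmonic oscillator here is a toy check that the configurational marginal comes out right (variance kT/k), not one of the molecular systems used in the paper:

```python
import numpy as np

def baoab_step(x, v, force, dt, gamma, kT, mass, rng):
    """One BAOAB step: half kick (B), half drift (A), exact
    Ornstein-Uhlenbeck velocity update (O), half drift (A), half kick (B)."""
    v += 0.5 * dt * force(x) / mass                  # B
    x += 0.5 * dt * v                                # A
    c = np.exp(-gamma * dt)
    v = c * v + np.sqrt((1.0 - c * c) * kT / mass) * rng.standard_normal()  # O
    x += 0.5 * dt * v                                # A
    v += 0.5 * dt * force(x) / mass                  # B
    return x, v

# Harmonic oscillator U(x) = 0.5*k*x^2; exact configurational variance kT/k.
k, kT, mass, dt, gamma = 1.0, 1.0, 1.0, 0.1, 1.0
force = lambda x: -k * x
rng = np.random.default_rng(0)
x, v, xs = 0.0, 0.0, []
for step in range(100000):
    x, v = baoab_step(x, v, force, dt, gamma, kT, mass, rng)
    if step >= 1000:                                 # discard equilibration
        xs.append(x)
print(np.var(xs))                                    # should be close to kT/k = 1.0
```

The accuracy advantage the abstract describes for coordinate-dependent properties shows up in how small the bias of this variance stays even at fairly large dt.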
NASA Astrophysics Data System (ADS)
Jeong, Younkoo; Jayanth, G. R.; Menq, Chia-Hsiang
2007-09-01
The control of tip-to-sample distance in atomic force microscopy (AFM) is achieved through controlling the vertical tip position of the AFM cantilever. In the vertical tip-position control, the required z motion is commanded by laser reading of the vertical tip position in real time and might contain high frequency components depending on the lateral scanning rate and topographical variations of the sample. This paper presents a dual-actuator tip-motion control scheme that enables the AFM tip to track abrupt topographical variations. In the dual-actuator scheme, an additional magnetic mode actuator is employed to achieve high bandwidth tip-motion control while the regular z scanner provides the necessary motion range. This added actuator serves to make the entire cantilever bandwidth available for tip positioning, and thus controls the tip-to-sample distance. A fast programmable electronics board was employed to realize the proposed dual-actuator control scheme, in which model cancellation algorithms were implemented to enlarge the bandwidth of the magnetic actuation and to compensate the lightly damped dynamics of the cantilever. Experiments were conducted to illustrate the capabilities of the proposed dual-actuator tip-motion control in terms of response speed and travel range. It was shown that while the bandwidth of the regular z scanner was merely a small fraction of the cantilever's bandwidth, the dual-actuator control scheme led to a tip-motion control system whose bandwidth was comparable to that of the cantilever, whose dynamics were overdamped, and whose motion range was comparable to that of the z scanner.
Zhang, Zhen; Zhang, Qianwu; Chen, Jian; Li, Yingchun; Song, Yingxiong
2016-06-13
A low-complexity joint symbol synchronization and SFO estimation scheme for asynchronous optical IMDD OFDM systems based on only one training symbol is proposed. Numerical simulations and experimental demonstrations are also undertaken to evaluate the performance of the proposed scheme. The experimental results show that robust and precise symbol synchronization and SFO estimation can be achieved simultaneously at received optical powers as low as -20 dBm in asynchronous OOFDM systems. The SFO estimation accuracy in terms of MSE can be lower than 1 × 10^{-11} for SFOs ranging from -60 ppm to 60 ppm after 25 km SSMF transmission. Optimal system performance can be maintained as long as the cumulative number of frames employed for calculation is less than 50 under the above-mentioned conditions. Meanwhile, the proposed joint scheme has a low level of operational complexity compared with existing methods when symbol synchronization and SFO estimation are considered together. These results can serve as an important reference in practical system designs.
Optimization for Peptide Sample Preparation for Urine Peptidomics
Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan; Dai, Hong; Qian, Weijun; Camp, David G.; Sarwal, Minnie M.
2014-02-25
when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis due to the optimized sample clean up that provided improved experimental inference from the confidently identified peptides.
Programming scheme based optimization of hybrid 4T-2R OxRAM NVSRAM
NASA Astrophysics Data System (ADS)
Majumdar, Swatilekha; Kingra, Sandeep Kaur; Suri, Manan
2017-09-01
In this paper, we present a novel single-cycle programming scheme for 4T-2R NVSRAM, exploiting pulse engineered input signals. OxRAM devices based on 3 nm thick bi-layer active switching oxide and the 90 nm CMOS technology node were used for all simulations. The cell design is implemented for real-time non-volatility rather than last-bit, or power-down non-volatility. Detailed analysis of the proposed single-cycle, parallel RRAM device programming scheme is presented in comparison to the two-cycle sequential RRAM programming used for similar 4T-2R NVSRAM bit-cells. The proposed single-cycle programming scheme coupled with the 4T-2R architecture leads to several benefits, such as the possibility of unconventional transistor sizing, 50% lower latency, 20% improvement in SNM, and ∼20× reduced energy requirements, when compared against the two-cycle programming approach.
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Interpreting PCB concentration... § 761.79(b)(3) § 761.316 Interpreting PCB concentration measurements resulting from this sampling... concentration measured in that sample. If the sample surface concentration is not equal to or lower than...
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing a natural and real scene as we see it in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
NASA Astrophysics Data System (ADS)
Chen, Zhou; Tong, Qiu-Nan; Zhang, Cong-Cong; Hu, Zhan
2015-04-01
Identification of acetone and its two isomers, and the control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two sets of time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under the irradiation of identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under the irradiation of the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is quite effective and useful in studies of two molecules having common mass peaks, which make a traditional single mass spectrometer unsuitable. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant No. 11374124).
Anacker, Tony; Hill, J Grant; Friedrich, Joachim
2016-04-21
Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.
Sampling Scheme and Compressed Sensing Applied to Solid-State NMR Spectroscopy
Lin, Eugene C.; Opella, Stanley J.
2013-01-01
We describe the incorporation of non-uniform sampling (NUS) compressed sensing (CS) into Oriented Sample (OS) solid-state NMR for stationary aligned samples and Magic Angle Spinning (MAS) solid-state NMR for unoriented 'powder' samples. Both simulated and experimental results indicate that 25% to 33% of a full linearly sampled data set is required to reconstruct two- and three-dimensional solid-state NMR spectra with high fidelity. A modest increase in signal-to-noise ratio accompanies the reconstruction. PMID:24140622
Steerable antenna with circular-polarization. 2. Selection of optimal scheme
Abranin, E.P.; Bazelyan, L.L.; Brazhenko, A.I.
1987-11-01
In order to study the sporadic radio emission from the Sun, a polarimeter operating at 25 MHz was developed and constructed. It employs the steerable antenna array of the URAN-1 radio telescope. The results of numerical calculations of compensation schemes, intended for the emission (reception) of circularly polarized waves in an arbitrary direction with the help of crossed dipoles, are presented.
Optimized compact-difference-based finite-volume schemes for linear wave phenomena
Gaitonde, D.; Shang, J.S.
1997-12-01
This paper discusses a numerical method for analyzing linear wave propagation phenomena, with emphasis on electromagnetics in the time domain. The numerical method is based on a compact-difference-based finite-volume approach at higher orders. The scheme is evaluated using a classical fourth-order Runge-Kutta technique.
Lewandowska, Dagmara W; Zagordi, Osvaldo; Geissberger, Fabienne-Desirée; Kufner, Verena; Schmutz, Stefan; Böni, Jürg; Metzner, Karin J; Trkola, Alexandra; Huber, Michael
2017-08-08
Sequence-specific PCR is the most common approach for virus identification in diagnostic laboratories. However, as specific PCR only detects pre-defined targets, novel virus strains or viruses not included in routine test panels will be missed. Recently, advances in high-throughput sequencing allow for virus-sequence-independent identification of entire virus populations in clinical samples, yet standardized protocols are needed to allow broad application in clinical diagnostics. Here, we describe a comprehensive sample preparation protocol for high-throughput metagenomic virus sequencing using random amplification of total nucleic acids from clinical samples. In order to optimize metagenomic sequencing for application in virus diagnostics, we tested different enrichment and amplification procedures on plasma samples spiked with RNA and DNA viruses. A protocol including filtration, nuclease digestion, and random amplification of RNA and DNA in separate reactions provided the best results, allowing reliable recovery of viral genomes and a good correlation of the relative number of sequencing reads with the virus input. We further validated our method by sequencing a multiplexed viral pathogen reagent containing a range of human viruses from different virus families. Our method proved successful in detecting the majority of the included viruses with high read numbers and compared well to other protocols in the field that were validated against the same reference reagent. Our sequencing protocol works not only with plasma but also with other clinical samples such as urine and throat swabs. The workflow for virus metagenomic sequencing that we established proved successful in detecting a variety of viruses in different clinical samples. Our protocol supplements existing virus-specific detection strategies, providing opportunities to identify atypical and novel viruses commonly not accounted for in routine diagnostic panels.
How old is this bird? The age distribution under some phase sampling schemes.
Hautphenne, Sophie; Massaro, Melanie; Taylor, Peter
2017-04-03
In this paper, we use a finite-state continuous-time Markov chain with one absorbing state to model an individual's lifetime. Under this model, the time of death follows a phase-type distribution, and the transient states of the Markov chain are known as phases. We then attempt to provide an answer to the simple question "What is the conditional age distribution of the individual, given its current phase?" We show that the answer depends on how we interpret the question, and in particular, on the phase observation scheme under consideration. We then apply our results to the computation of the age pyramid for the endangered Chatham Island black robin Petroica traversi during the monitoring period 2007-2014.
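The lifetime model described here, time to absorption of a continuous-time Markov chain started in a transient phase, can be simulated directly. The 3-phase sub-generator T and initial distribution alpha below are invented for illustration; the simulated mean lifetime is checked against the closed-form phase-type mean alpha @ inv(-T) @ 1:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3-phase model: T is the sub-generator over transient phases;
# the exit rates -T.sum(axis=1) lead to the absorbing (death) state.
T = np.array([[-1.0,  0.5,  0.2],
              [ 0.0, -0.8,  0.4],
              [ 0.0,  0.0, -0.5]])
alpha = np.array([1.0, 0.0, 0.0])        # every individual starts in phase 0

def sample_lifetime():
    """Run the chain until absorption; return the absorption time."""
    exit_rates = -T.sum(axis=1)
    phase = rng.choice(3, p=alpha)
    t = 0.0
    while True:
        rate = -T[phase, phase]
        t += rng.exponential(1.0 / rate)  # holding time in current phase
        # Jump probabilities: other transient phases, then absorption.
        moves = np.where(np.arange(3) == phase, 0.0, T[phase])
        probs = np.append(moves, exit_rates[phase]) / rate
        nxt = rng.choice(4, p=probs)
        if nxt == 3:                      # absorbed: the individual dies
            return t
        phase = nxt

lifetimes = np.array([sample_lifetime() for _ in range(5000)])
mean_theory = alpha @ np.linalg.inv(-T) @ np.ones(3)  # phase-type mean
print(lifetimes.mean(), mean_theory)
```

Recording the phase path alongside t in such a simulation is one way to examine the conditional age-given-phase question empirically under a chosen observation scheme.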
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars has prompted the development of a new Martian radiative transfer model that can be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the Martian polar caps, we have begun the development of a Martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.
Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani
2012-01-01
Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count, and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime, and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) protocol in terms of energy consumption, balancing and efficiency.
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
NASA Astrophysics Data System (ADS)
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of a simulation-optimization model is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs)—each with multiple Total Maximum Daily Load (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach a near-optimal solution. The best solution obtained among all the GA executions had a minimized cost of 67.7 million—marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.
NASA Astrophysics Data System (ADS)
O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.
2013-04-01
Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that analysis of the compressed data remains accurate.
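The acquisition-plus-compression idea can be sketched with a toy compressed-sensing pipeline: a sparse signal is measured through a random matrix at a sub-Nyquist rate and recovered greedily. The sizes and the orthogonal-matching-pursuit recovery routine below are illustrative choices, not the Narada implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse target signal: N samples with only k nonzero entries.
N, M, k = 128, 64, 4
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

# Random measurement matrix: acquisition and compression in a single step,
# using M << N measurements.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the k best-correlated atoms."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("reconstruction error:", np.linalg.norm(x_hat - x))
```

With Gaussian measurements and this level of sparsity, recovery is essentially exact; in field deployments the sparsifying basis and measurement scheme must be chosen for the structure of the response signals.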
NASA Astrophysics Data System (ADS)
Tatevosyan, A. A.; Tatevosyan, A. S.
2017-08-01
This paper describes a method for studying the rheological characteristics of elastomers using a multi-circuit electrical substitution scheme, synthesized on the basis of experimental data obtained during the mechanical relaxation of loaded test samples at a fixed value of relative deformation. In analyzing the fast and slow stages of the stress-relaxation process in elastomer test specimens with significantly different viscoelastic properties, it is established that the number of relaxation mechanisms in the decomposition of the time dependence into exponentials does not exceed six.
Wu, Zhijun
1996-11-01
This paper discusses a generalization of the function transformation scheme for global energy minimization applied to the molecular conformation problem. A mathematical theory for the method as a special continuation approach to global optimization is established. We show that the method can transform a nonlinear objective function into a class of gradually deformed, but "smoother" or "easier" functions. An optimization procedure can then be applied to the new functions successively, to trace their solutions back to the original function. Two types of transformation are defined: isotropic and anisotropic. We show that both transformations can be applied to a large class of nonlinear partially separable functions including energy functions for molecular conformation. Methods to compute the transformation for these functions are given.
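As a minimal sketch of the continuation idea, assume for illustration a one-dimensional objective whose isotropic Gaussian transform has a closed form; minimizing the smoothed functions successively and tracing the solution back to the original recovers the global minimum from a bad start:

```python
import numpy as np
from scipy.optimize import minimize

# Multi-well stand-in for a conformational energy surface.
def f(x):
    return x**2 - np.cos(6.0 * x)

# Isotropic Gaussian transform <f>_lam(x) = E[f(x + lam*Z)], computed in
# closed form: the quadratic gains +lam^2, the cosine is damped by exp(-18 lam^2).
def f_smooth(x, lam):
    return x**2 + lam**2 - np.cos(6.0 * x) * np.exp(-18.0 * lam**2)

x0 = [2.5]                                 # start in a non-global well
for lam in [1.0, 0.5, 0.25, 0.1, 0.0]:     # deform back to the original f
    x0 = minimize(lambda z: float(f_smooth(z[0], lam)), x0).x
print("continuation minimizer:", float(x0[0]))  # near the global minimum at 0
```

A direct local minimization from the same starting point would stall in the nearest well; the smoothed sequence removes the small-scale wells first and reintroduces them only after the iterate is near the global basin.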
Warwicker, Jim
2004-10-01
Ionizable groups play critical roles in biological processes. Computation of pK(a)s is complicated by model approximations and multiple conformations. Calculated and experimental pK(a)s are compared for relatively inflexible active-site side chains, to develop an empirical model for hydration entropy changes upon charge burial. The modification is found to be generally small, but large for cysteine, consistent with small molecule ionization data and with partial charge distributions in ionized and neutral forms. The hydration model predicts significant entropic contributions for ionizable residue burial, demonstrated for components in the pyruvate dehydrogenase complex. Conformational relaxation in a pH-titration is estimated with a mean-field assessment of maximal side chain solvent accessibility. All ionizable residues interact within a low protein dielectric finite difference (FD) scheme, and more flexible groups also access water-mediated Debye-Hückel (DH) interactions. The DH method tends to match overall pH-dependent stability, while FD can be more accurate for active-site groups. Tolerance for side chain rotamer packing is varied, defining access to DH interactions, and the best fit with experimental pK(a)s obtained. The new (FD/DH) method provides a fast computational framework for making the distinction between buried and solvent-accessible groups that has been qualitatively apparent from previous work, and pK(a) calculations are significantly improved for a mixed set of ionizable residues. Its effectiveness is also demonstrated with computation of the pH-dependence of electrostatic energy, recovering favorable contributions to folded state stability and, in relation to structural genomics, with substantial improvement (reduction of false positives) in active-site identification by electrostatic strain.
Ricci, M; Sciarrino, F; Sias, C; De Martini, F
2004-01-30
By a significant modification of the standard protocol of quantum state teleportation, two processes "forbidden" by quantum mechanics in their exact form, the universal NOT gate and the universal optimal quantum cloning machine, have been implemented contextually and optimally by a fully linear method. In particular, the first experimental demonstration of the tele-UNOT gate, a novel quantum information protocol, has been reported. The experimental results are found in full agreement with theory.
Kweka, Eliningaya J; Mahande, Aneth M
2009-01-01
Background Adult malaria vector sampling is the most important parameter for setting up an intervention and understanding disease dynamics in malaria endemic areas. The intervention will ideally be species-specific according to sampling output. The objective of this study was to evaluate four sampling techniques, namely human landing catch, pit shelter, indoor resting collection and odour-baited entry trap. Methodology These four sampling methods were evaluated simultaneously for thirty days during October 2008, a season of low mosquito density and malaria transmission. The trapping methods were performed in one village to maximize homogeneity in mosquito density. The cattle and man used in the odour-baited entry trap were rotated between the chambers to avoid bias. Results A total of 3,074 mosquitoes were collected. Among these, 1,780 (57.9%) were Anopheles arabiensis and 1,294 (42.1%) were Culex quinquefasciatus. Each trap sampled a different number of mosquitoes: indoor resting collection 335 (10.9%), odour-baited entry trap-cow 1,404 (45.7%), odour-baited entry trap-human 378 (12.3%), pit shelter 562 (18.3%) and human landing catch 395 (12.8%). A general linear model univariate analysis was used; the position of the trapping method had no effect on mosquito density caught (DF = 4, F = 35.596, P = 0.78), and day-to-day variation likewise had no effect on the collected density (DF = 29, F = 4.789, P = 0.09). The sampling techniques themselves had a significant impact on the caught mosquito densities (DF = 4, F = 34.636, P < 0.0001). Wilcoxon pair-wise comparisons showed that human landing catch versus pit shelter was significant (Z = -3.849, P < 0.0001), human landing catch versus indoor resting collection was not significant (Z = -0.502, P = 0.615), human landing catch versus odour-baited entry trap-man was significant (Z = -2.687, P = 0.007), and human landing catch versus odour-baited entry trap-cow was significant (Z = -3.127, P = 0.002). Conclusion Odour-baited traps with
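The pair-wise comparisons reported are Wilcoxon signed-rank tests on paired daily catches; a sketch with hypothetical counts (not the study's data) looks like:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
# Hypothetical paired nightly catches over 30 days for two trapping methods.
hlc = rng.poisson(13, 30)      # human landing catch
pit = rng.poisson(19, 30)      # pit shelter
stat, p = wilcoxon(hlc, pit)   # paired, non-parametric comparison
print(f"W={stat}, p={p:.4f}")
```

The signed-rank test is appropriate here because the two methods were run on the same nights, so each day provides one paired difference.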
Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
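The single-site versus cross-site parameterization contrast can be sketched with a one-parameter light-use-efficiency toy model; the data, parameter values, and model form below are hypothetical, not the CFLUX model itself:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
# Hypothetical data: GPP ~ lue * APAR at four tower sites.
true_lue = [0.041, 0.044, 0.039, 0.046]
sites = []
for lue in true_lue:
    apar = rng.uniform(2, 12, 50)
    gpp = lue * apar + rng.normal(0, 0.02, 50)
    sites.append((apar, gpp))

def sse(lue, data):
    """Sum of squared errors of the one-parameter model over the given sites."""
    return sum(((g - lue * a) ** 2).sum() for a, g in data)

# Cross-site optimization: one parameter fitted against all sites at once.
cross = minimize_scalar(lambda p: sse(p, sites), bounds=(0.01, 0.1),
                        method="bounded").x
# Single-site optimization: a separate parameter per site.
single = [minimize_scalar(lambda p: sse(p, [s]), bounds=(0.01, 0.1),
                          method="bounded").x for s in sites]
print("cross-site:", cross, "single-site:", single)
```

The cross-site parameter lands between the per-site values; the study's finding is that such pooled parameters generalize better to unsampled sites within the biome.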
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2011 CFR
2011-07-01
... composite is the measurement for the entire area. For example, when there is a composite of 10 standard wipe... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20 µg/100 cm2, not just the area in the 10 cm by 10 cm sampled areas. (c) For small surfaces having...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2012 CFR
2012-07-01
... composite is the measurement for the entire area. For example, when there is a composite of 10 standard wipe... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20 µg/100 cm2, not just the area in the 10 cm by 10 cm sampled areas. (c) For small surfaces having...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2014 CFR
2014-07-01
... composite is the measurement for the entire area. For example, when there is a composite of 10 standard wipe... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20 µg/100 cm2, not just the area in the 10 cm by 10 cm sampled areas. (c) For small surfaces having...
Forward flux sampling-type schemes for simulating rare events: efficiency analysis.
Allen, Rosalind J; Frenkel, Daan; ten Wolde, Pieter Rein
2006-05-21
We analyze the efficiency of several simulation methods which we have recently proposed for calculating rate constants for rare events in stochastic dynamical systems in or out of equilibrium. We derive analytical expressions for the computational cost of using these methods and for the statistical error in the final estimate of the rate constant for a given computational cost. These expressions can be used to determine which method to use for a given problem, to optimize the choice of parameters, and to evaluate the significance of the results obtained. We apply the expressions to the two-dimensional nonequilibrium rare event problem proposed by Maier and Stein [Phys. Rev. E 48, 931 (1993)]. For this problem, our analysis gives accurate quantitative predictions for the computational efficiency of the three methods.
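The rate-constant factorization behind these methods, k = (flux through the first interface) x (product of interface-to-interface success probabilities), can be sketched for a one-dimensional toy dynamics; the drift, noise, and interface placements below are arbitrary illustrative choices, not the Maier-Stein problem:

```python
import numpy as np

rng = np.random.default_rng(1)
interfaces = [1.0, 2.0, 3.0, 4.0]        # lambda_0 .. lambda_n; B lies beyond 4

def step(x):
    # Overdamped toy dynamics: pull toward the basin A minimum at x = 0.
    return x - 0.3 * x + 0.8 * rng.standard_normal()

# Stage 1: effective positive flux through lambda_0 out of basin A (x < 0),
# estimated from one long run; store the crossing configurations.
x, in_A, crossings, hits, T = 0.0, True, 0, [], 20000
for _ in range(T):
    x_new = step(x)
    if x_new < 0.0:
        in_A = True                       # must revisit A before the next count
    if in_A and x < interfaces[0] <= x_new:
        crossings += 1
        hits.append(x_new)
        in_A = False
    x = x_new
k = crossings / T                         # flux estimate

# Stage 2: from stored hits, fire trial runs and record the fraction that
# reach the next interface before falling back into A.
for lam_next in interfaces[1:]:
    successes, new_hits = 0, []
    for _ in range(400):
        x = hits[rng.integers(len(hits))]
        while 0.0 <= x < lam_next:
            x = step(x)
        if x >= lam_next:
            successes += 1
            new_hits.append(x)
    k *= successes / 400
    if not new_hits:
        break
    hits = new_hits
print("estimated rate constant:", k)
```

The efficiency analysis in the paper concerns exactly the knobs visible here: the number of trial runs per interface and the interface spacing, which together set the computational cost and the variance of the final estimate.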
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-01-01
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. PMID:28430132
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible, since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic and computation-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R.; Afshar, Baharak; Underwood, Anthony; Harrison, Timothy G.
2016-01-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current “gold standard” typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard “typing panel,” previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. PMID:27280420
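The indices of discrimination quoted above are Simpson/Hunter-Gaston indices: the probability that two isolates drawn at random receive different types. A sketch with hypothetical typing results (not the study's panel):

```python
from collections import Counter

def discrimination_index(types):
    """Hunter-Gaston index of discrimination: probability that two isolates
    drawn at random without replacement receive different types."""
    n = len(types)
    counts = Counter(types).values()
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Illustrative only: 10 isolates typed under two hypothetical schemes.
sbt = ["ST1"] * 5 + ["ST2"] * 3 + ["ST3"] * 2   # common STs dominate
wgs = [f"type{i}" for i in range(10)]           # every isolate resolved
print(discrimination_index(sbt), discrimination_index(wgs))
```

A scheme that lumps many isolates into a few common types scores low, which is exactly why common-ST outbreaks remain unresolved under SBT and why the WGS-based schemes score closer to 1.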
Optimal complexity scalable H.264/AVC video decoding scheme for portable multimedia devices
NASA Astrophysics Data System (ADS)
Lee, Hoyoung; Park, Younghyeon; Jeon, Byeungwoo
2013-07-01
Limited computing resources in portable multimedia devices are an obstacle in real-time video decoding of high resolution and/or high quality video contents. Ordinary H.264/AVC video decoders cannot decode video contents that exceed the limits set by their processing resources. However, in many real applications especially on portable devices, a simplified decoding with some acceptable degradation may be desirable instead of just refusing to decode such contents. For this purpose, a complexity-scalable H.264/AVC video decoding scheme is investigated in this paper. First, several simplified methods of decoding tools that have different characteristics are investigated to reduce decoding complexity and consequential degradation of reconstructed video. Then a complexity scalable H.264/AVC decoding scheme is designed by selectively combining effective simplified methods to achieve the minimum degradation. Experimental results with the H.264/AVC main profile bitstream show that its decoding complexity can be scalably controlled, and reduced by up to 44% without subjective quality loss.
A global earthquake discrimination scheme to optimize ground-motion prediction equation selection
Garcia, Daniel; Wald, David J.; Hearne, Michael
2012-01-01
We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
Metcalfe, H; Milne, A E; Webster, R; Lark, R M; Murdoch, A J; Storkey, J
2016-02-01
Weeds tend to aggregate in patches within fields, and there is evidence that this is partly owing to variation in soil properties. Because the processes driving soil heterogeneity operate at various scales, the strength of the relations between soil properties and weed density would also be expected to be scale-dependent. Quantifying these effects of scale on weed patch dynamics is essential to guide the design of discrete sampling protocols for mapping weed distribution. We developed a general method that uses novel within-field nested sampling and residual maximum-likelihood (REML) estimation to explore scale-dependent relations between weeds and soil properties. We validated the method using a case study of Alopecurus myosuroides in winter wheat. Using REML, we partitioned the variance and covariance into scale-specific components and estimated the correlations between the weed counts and soil properties at each scale. We used variograms to quantify the spatial structure in the data and to map variables by kriging. Our methodology successfully captured the effect of scale on a number of edaphic drivers of weed patchiness. The overall Pearson correlations between A. myosuroides and soil organic matter and clay content were weak and masked the stronger correlations at >50 m. Knowing how the variance was partitioned across the spatial scales, we optimised the sampling design to focus sampling effort at those scales that contributed most to the total variance. The methods have the potential to guide patch spraying of weeds by identifying areas of the field that are vulnerable to weed establishment.
Jakeman, John D.; Narayan, Akil; Zhou, Tao
2017-06-22
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
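The preconditioning step can be sketched for the Legendre family on [-1, 1]: sample from the arcsine (equilibrium) measure and weight each row of the design matrix by the Christoffel function. The sizes below are arbitrary, and the $\ell^1$ solve itself is omitted:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
N = 12                                 # number of basis polynomials

# Orthonormal Legendre values: L_k scaled by sqrt((2k+1)/2) on [-1, 1].
def phi(x):
    V = legendre.legvander(x, N - 1)
    return V * np.sqrt((2 * np.arange(N) + 1) / 2.0)

# Sample from the (arcsine/Chebyshev) equilibrium measure of [-1, 1] ...
x = np.cos(np.pi * rng.random(40))

# ... and precondition each row by the Christoffel function
# lambda_N(x) = 1 / sum_k p_k(x)^2, as in the Christoffel-weighted l1 scheme.
P = phi(x)
w = 1.0 / np.sum(P**2, axis=1)         # Christoffel weights
A = np.sqrt(w)[:, None] * P            # preconditioned design matrix
print(A.shape)
```

A side effect of this weighting is that every row of the preconditioned matrix has unit Euclidean norm, which is what equalizes the influence of samples near the endpoints where high-degree Legendre polynomials grow.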
NASA Astrophysics Data System (ADS)
Akmaev, R. a.
1999-04-01
In Part 1 of this work ([Akmaev, 1999]), an overview of the theory of optimal interpolation (OI) ([Gandin, 1963]) and related techniques of data assimilation based on linear optimal estimation ([Liebelt, 1967]; [Catlin, 1989]; [Mendel, 1995]) is presented. The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information as, for example, the conventional least squares (e.g., [Liebelt, 1967]; [Weisberg, 1985]; [Press et al., 1992]; [Mendel, 1995]).
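The linear optimal estimate at the heart of OI can be written compactly: with background x_b, background error covariance B, observation operator H, observations y, and observation error covariance R, the analysis is x_a = x_b + K(y - H x_b) with gain K = B H^T (H B H^T + R)^(-1). A minimal numerical sketch (toy covariances, not from any dataset):

```python
import numpy as np

def oi_update(x_b, B, H, y, R):
    """One optimal-interpolation (minimum-variance) analysis step."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_b + K @ (y - H @ x_b), K

# Toy 3-point grid with a single observation at the middle point.
x_b = np.array([0.0, 0.0, 0.0])             # background (first guess)
B = np.array([[1.0, 0.5, 0.2],              # background error covariance
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
H = np.array([[0.0, 1.0, 0.0]])             # observe grid point 2 only
y = np.array([1.0])
R = np.array([[0.25]])                      # observation error variance

x_a, K = oi_update(x_b, B, H, y, R)
print(x_a)
```

The unobserved grid points 1 and 3 are updated through their background covariances with the observed point, which is precisely the "fill in data voids" role of OI described above.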
Optimal Scheme for Search State Space and Scheduling on Multiprocessor Systems
NASA Astrophysics Data System (ADS)
Youness, Hassan A.; Sakanushi, Keishi; Takeuchi, Yoshinori; Salem, Ashraf; Wahdan, Abdel-Moneim; Imai, Masaharu
A scheduling algorithm aims to minimize the overall execution time of a program by properly allocating the tasks to the core processors and arranging their execution order such that the precedence constraints among the tasks are preserved. In this paper, we present a new scheduling algorithm that uses geometric analysis of the Task Precedence Graph (TPG) based on the A* search technique, together with a computationally efficient cost function for guiding the search with reduced complexity and pruning techniques, to produce an optimal solution for the allocation/scheduling problem of a parallel application on parallel and multiprocessor architectures. The main goal of this work is to significantly reduce the search space while achieving an optimal or near-optimal solution. We implemented the algorithm on general task-graph problems treated in most related work and obtained the optimal scheduling with a small number of states. The proposed algorithm reduced the exhaustive search by at least 50% of the search space. The viability and potential of the proposed algorithm are demonstrated by an illustrative example.
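A compact illustration of A* over partial schedules follows, with a toy TPG and a deliberately weak but admissible bound (remaining work spread over all processors), not the paper's geometry-based cost function:

```python
import heapq
from itertools import count

# Toy task precedence graph: task -> (duration, predecessors).
tpg = {
    "A": (2, []), "B": (3, ["A"]), "C": (2, ["A"]),
    "D": (4, ["B"]), "E": (2, ["C"]), "F": (1, ["D", "E"]),
}
NPROC = 2
total_work = sum(d for d, _ in tpg.values())

# State: (finished tasks, sorted processor free times, task finish times).
# f = max(partial makespan, total_work / NPROC) is admissible and monotone,
# so the first complete state popped has the optimal makespan.
tick = count()               # tie-breaker so heapq never compares states
heap = [(total_work / NPROC, next(tick), frozenset(), (0.0,) * NPROC, ())]
seen = set()
while heap:
    f, _, done, free, fins = heapq.heappop(heap)
    if len(done) == len(tpg):
        break                               # goal: everything scheduled
    if (done, free, fins) in seen:
        continue
    seen.add((done, free, fins))
    fin = dict(fins)
    for t, (dur, preds) in tpg.items():
        if t in done or any(p not in done for p in preds):
            continue                        # precedence constraint not met
        ready = max((fin[p] for p in preds), default=0.0)
        for proc in range(NPROC):
            begin = max(free[proc], ready)
            nfree = tuple(sorted(begin + dur if i == proc else free[i]
                                 for i in range(NPROC)))
            nfins = tuple(sorted(list(fins) + [(t, begin + dur)]))
            g = max(max(nfree), total_work / NPROC)
            heapq.heappush(heap, (g, next(tick), done | {t}, nfree, nfins))
print("optimal makespan:", max(free))
```

Sorting the processor free times canonicalizes states that differ only by a permutation of identical processors, which is one simple form of the state-space pruning the paper advocates; a tighter cost function (e.g., critical-path bottom levels) prunes far more.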
Estimating optimal sampling unit sizes for satellite surveys
NASA Technical Reports Server (NTRS)
Hallum, C. R.; Perry, C. R., Jr.
1984-01-01
This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that effect minimal costs. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis while conservatively guarding against expected component error variances.
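The trade-off can be sketched with hypothetical closed forms: a Jessen/Smith-type power law for sampling-error variance, a nonsampling-error variance that grows with unit size, and a simple cost model. The exponents and constants below are invented for illustration, not the paper's fitted values:

```python
import numpy as np

def sampling_var(u, n):
    """Sampling-error variance: shrinks with more units n and, per power law,
    with larger unit area u (illustrative Jessen/Smith-type form)."""
    return 0.8 * u**-0.3 / n

def nonsampling_var(u):
    """Nonsampling-error variance: grows with unit size (e.g., more
    misclassified area per segment). Purely illustrative."""
    return 0.002 * u**0.5

def cost(u, n):
    """Fixed overhead per sampled unit plus per-area processing cost."""
    return n * (5.0 + 0.1 * u)

# Smallest-cost (u, n) that still meets a target total error variance.
target = 0.05
u_grid = np.linspace(1.0, 200.0, 400)
best = min(
    (cost(u, n), u, n)
    for u in u_grid
    for n in range(1, 200)
    if sampling_var(u, n) + nonsampling_var(u) <= target
)
print("min cost %.1f at unit size %.1f with n=%d" % best)
```

The structure matches the paper's argument: because nonsampling error rises with unit size while per-unit sampling error falls, the cost-minimizing unit size is interior rather than "as large as possible."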
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
Implicit staggered-grid finite-difference (ISFD) schemes are competitive for their great accuracy and stability, but their coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method's guarantee of great accuracy at small wavenumbers, while retaining the MA method's property of keeping the numerical errors within a limited bound. Thus, it leads to great accuracy for the numerical solution of wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
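For reference, the conventional TE step amounts to solving a small linear system in the half-integer stencil offsets; the sketch below derives the classic explicit staggered-grid first-derivative coefficients (the MA/Remez refinement of the objective, and the implicit part of the scheme, are beyond this sketch):

```python
import numpy as np

def taylor_sfd_coeffs(M):
    """Taylor-expansion (TE) coefficients c_1..c_M for the explicit
    staggered-grid first derivative
        u'(x) ~ (1/h) * sum_m c_m [u(x + (m-1/2)h) - u(x - (m-1/2)h)].
    Matching odd Taylor terms gives the linear system
        2 * sum_m c_m (m-1/2) = 1,   sum_m c_m (m-1/2)^(2k+1) = 0, k >= 1."""
    m = np.arange(1, M + 1) - 0.5                       # half-integer offsets
    A = m[None, :] ** (2 * np.arange(M)[:, None] + 1)   # odd-power moment matrix
    b = np.zeros(M)
    b[0] = 0.5
    return np.linalg.solve(A, b)

c = taylor_sfd_coeffs(2)
print(c)   # the classic 4th-order staggered pair 9/8, -1/24
```

The MTE/MA construction keeps this small-wavenumber matching for the leading conditions and spends the remaining degrees of freedom minimizing the maximum dispersion error over a wavenumber band.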
Optimized linear prediction for radial sampled multidimensional NMR experiments
NASA Astrophysics Data System (ADS)
Gledhill, John M.; Kasinath, Vignesh; Wand, A. Joshua
2011-09-01
Radial sampling in multidimensional NMR experiments offers greatly decreased acquisition times while also providing an avenue for increased sensitivity. Digital resolution remains a concern, however, and depends strongly upon the extent of sampling of the individual radial angles. Truncated time-domain data lead to spurious peaks (artifacts) upon FT and 2D FT. Linear prediction is commonly employed to improve resolution in Cartesian-sampled NMR experiments. Here, we adapt the linear prediction method to radial sampling. Significantly more accurate estimates of the linear prediction coefficients are obtained by combining quadrature frequency components from the multiple-angle spectra. This approach yields significant improvement in both resolution and removal of spurious peaks compared with traditional linear prediction methods applied to radially sampled data. The 'averaging linear prediction' (ALP) method is demonstrated as a general tool for resolution improvement in multidimensional radially sampled experiments.
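The core linear-prediction step being adapted here can be illustrated in a few lines: estimate coefficients by a least-squares fit to the measured points, then recursively extend the truncated signal. This is a generic forward-LP sketch in NumPy (the function names are ours), not the paper's angle-averaged ALP variant.

```python
import numpy as np

def lp_coeffs(x, order):
    """Least-squares forward linear-prediction coefficients a_1..a_p with
    x[n] ~ a_1 x[n-1] + ... + a_p x[n-p], from the over-determined system."""
    N = len(x)
    A = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    return np.linalg.lstsq(A, x[order:], rcond=None)[0]

def lp_extend(x, order, n_extra):
    """Extend a truncated signal by recursive forward prediction."""
    x = np.asarray(x, dtype=float)
    a = lp_coeffs(x, order)
    y = list(x)
    for _ in range(n_extra):
        y.append(sum(a[k] * y[-k - 1] for k in range(order)))
    return np.array(y)

# a noiseless cosine obeys an exact order-2 recursion, so LP extends it exactly
t = np.arange(32)
x = np.cos(0.5 * t)
ext = lp_extend(x, order=2, n_extra=8)
```

On noisy, truncated FIDs the coefficients are only estimates, which is why averaging information across the radial angles, as in ALP, improves their accuracy.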
Su, Cheng-Kuan; Tseng, Po-Jen; Lin, Meng-Han; Chiu, Hsien-Ting; del Vall, Andrea; Huang, Yu-Fen; Sun, Yuh-Chang
2015-07-10
The extravasation of administered nano-drug carriers is a critical process in determining their distribution in target and non-target organs, as well as their pharmaceutical efficacies and side effects. To evaluate the extravasation behavior of gold nanoparticles (AuNPs), currently the most popular drug delivery system, in a mouse tumor model, in this study we employed push-pull perfusion (PPP) as a means of continuously sampling tumor extracellular AuNPs. To facilitate quantification of the extravasated AuNPs through inductively coupled plasma mass spectrometry, we also developed a novel online open-tubular fractionation scheme to allow interference-free determination of the sampled extracellular AuNPs from the coexisting biological matrix. After optimizing the flow-through volume and flow rate of the proposed fractionation scheme, we found that (i) the system's temporal resolution was 7.5 h⁻¹, (ii) the stability, expressed as the coefficient of variation, was less than 10% (6-h continuous measurement), and (iii) the detection limits for the administered AuNPs were in the range 0.057-0.068 μg L⁻¹. Following an intravenous dose of AuNPs (0.3 mg kg⁻¹ body weight), in vivo acquired profiles indicated that the pegylated AuNPs (PEG-AuNPs) had a greater tendency to extravasate into the tumor extracellular space. We also observed that the accumulation of nanoparticles in whole tumor tissues was higher for PEG-AuNPs than for non-pegylated ones. Overall, pegylation appears to promote the extravasation and accumulation of AuNPs for nano-drug delivery applications.
A combinatorial optimization scheme for parameter structure identification in ground water modeling.
Tsai, Frank T C; Sun, Ne-Zheng; Yeh, William W G
2003-01-01
This research develops a methodology for parameter structure identification in ground water modeling. For a given set of observations, parameter structure identification seeks to identify the parameter dimension and its corresponding parameter pattern and values. Voronoi tessellation is used to parameterize the unknown distributed parameter into a number of zones. Accordingly, the parameter structure identification problem is equivalent to finding the number, locations, and values of the basis points associated with the Voronoi tessellation. A genetic algorithm (GA) is combined with a grid search method and a quasi-Newton algorithm to solve the inverse problem. The GA is first used to search for a near-optimal parameter pattern and values. Next, the grid search method and the quasi-Newton algorithm iteratively improve the GA's estimates. Sensitivities of state variables to parameters are calculated by the sensitivity-equation method. MODFLOW and MT3DMS are employed to solve the coupled flow and transport model as well as the derived sensitivity equations. The optimal parameter dimension is determined using criteria based on parameter uncertainty and parameter structure discrimination. Numerical experiments demonstrate the proposed methodology, in which the true transmissivity field is characterized by either a continuous distribution or a zonal distribution. We conclude that the optimized transmissivity zones capture the trend and distribution of the true transmissivity field.
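The Voronoi parameterization step reduces to a nearest-basis-point lookup. A minimal NumPy sketch (hypothetical names; the GA and quasi-Newton search are omitted) of turning basis points into a zoned transmissivity field:

```python
import numpy as np

def voronoi_zonation(grid_xy, basis_xy, basis_values):
    """Piecewise-constant parameter field: each grid cell takes the value
    of its nearest Voronoi basis point."""
    d2 = ((grid_xy[:, None, :] - basis_xy[None, :, :]) ** 2).sum(axis=-1)
    return basis_values[np.argmin(d2, axis=1)]

# toy example: 20 x 20 grid, three transmissivity zones
xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid = np.column_stack([xs.ravel(), ys.ravel()])
basis = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]])
T = voronoi_zonation(grid, basis, np.array([10.0, 50.0, 200.0]))
```

In the paper's framework, the search then moves the basis points and values (and varies their number) to best fit the observations.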
Time-optimal path planning in dynamic flows using level set equations: theory and schemes
NASA Astrophysics Data System (ADS)
Lolla, Tapovan; Lermusiaux, Pierre F. J.; Ueckermann, Mattheus P.; Haley, Patrick J.
2014-10-01
We develop an accurate partial differential equation-based methodology that predicts the time-optimal paths of autonomous vehicles navigating in any continuous, strong, and dynamic ocean currents, obviating the need for heuristics. The goal is to predict a sequence of steering directions so that vehicles can best utilize or avoid currents to minimize their travel time. Inspired by the level set method, we derive and demonstrate that a modified level set equation governs the time-optimal path in any continuous flow. We show that our algorithm is computationally efficient and apply it to a number of experiments. First, we validate our approach through a simple benchmark application in a Rankine vortex flow for which an analytical solution is available. Next, we apply our methodology to more complex, simulated flow fields such as unsteady double-gyre flows driven by wind stress and flows behind a circular island. These examples show that time-optimal paths for multiple vehicles can be planned even in the presence of complex flows in domains with obstacles. Finally, we present and support through illustrations several remarks that describe specific features of our methodology.
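A bare-bones illustration of the kind of evolution equation described, φ_t + F|∇φ| + V·∇φ = 0, with forward Euler in time and first-order Godunov/upwind differencing; this is our sketch under stated assumptions, not the authors' schemes, and the helper name is hypothetical.

```python
import numpy as np

def evolve_front(phi, F, u, v, dx, dt, steps):
    """Forward-Euler, first-order evolution of
        phi_t + F * |grad(phi)| + (u, v) . grad(phi) = 0
    with Godunov differencing for the normal term (valid for F > 0)."""
    for _ in range(steps):
        p = np.pad(phi, 1, mode='edge')
        dxm = (phi - p[1:-1, :-2]) / dx        # backward difference in x
        dxp = (p[1:-1, 2:] - phi) / dx         # forward difference in x
        dym = (phi - p[:-2, 1:-1]) / dx        # backward difference in y
        dyp = (p[2:, 1:-1] - phi) / dx         # forward difference in y
        grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                       np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
        adv = (np.maximum(u, 0) * dxm + np.minimum(u, 0) * dxp +
               np.maximum(v, 0) * dym + np.minimum(v, 0) * dyp)
        phi = phi - dt * (F * grad + adv)
    return phi

# sanity check: with zero flow and unit speed, a circle of radius 0.3
# should expand to radius ~0.5 after t = 0.2
n = 101
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi = evolve_front(np.hypot(X, Y) - 0.3, F=1.0, u=0.0, v=0.0,
                   dx=x[1] - x[0], dt=0.01, steps=20)
radius = np.interp(0.0, phi[n // 2, n // 2 + 1:], x[n // 2 + 1:])
```

In the time-optimal setting, the zero level set is the reachability front; backtracking along its normal history recovers the optimal path, a step not shown here.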
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Finding an optimal DAG scheduling solution is NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on the recently proposed metaheuristic chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more principled. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG), and a super molecule to accelerate convergence. We also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and graphs from real-world problems. PMID:25143977
TOMOGRAPHIC RECONSTRUCTION OF DIFFUSION PROPAGATORS FROM DW-MRI USING OPTIMAL SAMPLING LATTICES
Ye, Wenxing; Entezari, Alireza; Vemuri, Baba C.
2010-01-01
This paper exploits the power of optimal sampling lattices in tomography based reconstruction of the diffusion propagator in diffusion weighted magnetic resonance imaging (DWMRI). Optimal sampling leads to increased accuracy of the tomographic reconstruction approach introduced by Pickalov and Basser [1]. Alternatively, the optimal sampling geometry allows for further reducing the number of samples while maintaining the accuracy of reconstruction of the diffusion propagator. The optimality of the proposed sampling geometry comes from the information theoretic advantages of sphere packing lattices in sampling multidimensional signals. These advantages are in addition to those accrued from the use of the tomographic principle used here for reconstruction. We present comparative results of reconstructions of the diffusion propagator using the Cartesian and the optimal sampling geometry for synthetic and real data sets. PMID:20596298
NASA Astrophysics Data System (ADS)
Gschwend, Daniel A.; Kuntz, Irwin D.
1996-04-01
Strategies for computational association of molecular components entail a compromise between configurational exploration and accurate evaluation. Following the work of Meng et al. [Proteins, 17 (1993) 266], we investigate issues related to sampling and optimization in molecular docking within the context of the DOCK program. An extensive analysis of diverse sampling conditions for six receptor-ligand complexes has enabled us to evaluate the tractability and utility of on-the-fly force-field score minimization, as well as the method for configurational exploration. We find that the sampling scheme in DOCK is extremely robust in its ability to produce configurations near to those experimentally observed. Furthermore, despite the heavy resource demands of refinement, the incorporation of a rigid-body, grid-based simplex minimizer directly into the docking process results in a docking strategy that is more efficient at retrieving experimentally observed configurations than docking in the absence of optimization. We investigate the capacity for further performance enhancement by implementing a degeneracy checking protocol aimed at circumventing redundant optimizations of geometrically similar orientations. Finally, we present methods that assist in the selection of sampling levels appropriate to desired result quality and available computational resources.
Pennacchio, Francesco; Vanacore, Giovanni M.; Mancini, Giulia F.; Oppermann, Malte; Jayaraman, Rajeswari; Musumeci, Pietro; Baum, Peter; Carbone, Fabrizio
2017-01-01
Ultrafast electron diffraction is a powerful technique to investigate out-of-equilibrium atomic dynamics in solids with high temporal resolution. When diffraction is performed in reflection geometry, the main limitation is the mismatch in group velocity between the overlapping pump light and the electron probe pulses, which affects the overall temporal resolution of the experiment. A solution already available in the literature involves pulse-front tilt of the pump beam at the sample, providing sub-picosecond time resolution. However, in the reported optical scheme, the tilted pulse is characterized by a temporal chirp of about 1 ps at 1 mm away from the centre of the beam, which limits the investigation of surface dynamics in large crystals. In this paper, we propose an optimal tilting scheme designed for a radio-frequency-compressed ultrafast electron diffraction setup working in reflection geometry with 30 keV electron pulses containing up to 10⁵ electrons/pulse. To characterize our scheme, we performed optical cross-correlation measurements, obtaining an average temporal width of the tilted pulse below 250 fs. The calibration of the electron-laser temporal overlap was obtained by monitoring the spatial profile of the electron beam when interacting with the plasma optically induced at the apex of a copper needle (plasma lensing effect). Finally, we report the first time-resolved results obtained on graphite, where the electron-phonon coupling dynamics is observed, showing an overall temporal resolution in the sub-500 fs regime. The successful implementation of this configuration opens the way to directly probing the structural dynamics of low-dimensional systems in the sub-picosecond regime with pulsed electrons. PMID:28713841
Optimal satellite sampling to resolve global-scale dynamics in the I-T system
NASA Astrophysics Data System (ADS)
Rowland, D. E.; Zesta, E.; Connor, H. K.; Pfaff, R. F., Jr.
2016-12-01
The recent Decadal Survey highlighted the need for multipoint measurements of ion-neutral coupling processes to study the pathways by which solar wind energy drives dynamics in the I-T system. The emphasis in the Decadal Survey is on global-scale dynamics and processes, and in particular, mission concepts making use of multiple identical spacecraft in low Earth orbit were considered for the GDC and DYNAMIC missions. This presentation will provide quantitative assessments of the optimal spacecraft sampling needed to significantly advance our knowledge of I-T dynamics on the global scale. We will examine storm-time and quiet-time conditions as simulated by global circulation models, and determine how well various candidate satellite constellations and sampling schemes can quantify the plasma and neutral convection patterns and the global-scale distributions of plasma density, neutral density, and composition, as well as their response to changes in the IMF. While the global circulation models are data-starved and do not contain all the physics that we might expect to observe with a global-scale constellation mission, they are nonetheless an excellent "starting point" for discussions of the implementation of such a mission. The results will be of great utility for the design of future missions, such as GDC, to study the global-scale dynamics of the I-T system.
Optimization of insulin pump therapy based on high order run-to-run control scheme.
Tuo, Jianyong; Sun, Huiling; Shen, Dong; Wang, Hui; Wang, Youqing
2015-07-01
The continuous subcutaneous insulin infusion (CSII) pump is widely considered a convenient and promising option for type 1 diabetes mellitus (T1DM) subjects who need exogenous insulin infusion. In standard insulin pump therapy there are two modes of insulin infusion: basal and bolus. Basal-bolus therapy should be individualized and optimized to keep a subject's blood glucose (BG) level within the normal range; however, the optimization procedure is troublesome and perturbs patients considerably. An automatic adjustment method is therefore needed to reduce the burden on patients, and a run-to-run (R2R) control algorithm can handle this significant task. In this study, two kinds of high-order R2R control methods are presented to adjust the basal and bolus insulin simultaneously. For clarity, a second-order R2R control algorithm is first derived and studied. Furthermore, considering the differences between weekdays and weekends, a seventh-order R2R control algorithm is also proposed and tested. To simulate realistic conditions, the proposed method was tested with uncertainties in measurement noise, drifts, meal size, meal time, and snacks. The proposed method converges even with ±60 min random variations in meal timing or ±50% random variations in meal size. The robustness analysis shows that the proposed high-order R2R scheme is highly robust and could be a promising candidate for optimizing insulin pump therapy.
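The R2R idea is simply a cycle-to-cycle correction of the dose by a weighted sum of past errors. A toy second-order sketch under an assumed linear glucose response (the plant model, gains, and target dose below are invented for illustration, not taken from the paper):

```python
def r2r_update(u, recent_errors, gains):
    """High-order run-to-run step: u[k+1] = u[k] + K1*e[k] + K2*e[k-1] + ...
    (recent_errors is ordered newest first)."""
    return u + sum(K * e for K, e in zip(gains, recent_errors))

def bg_error(u, u_opt=24.0, slope=3.0):
    """Invented linear plant: BG deviation grows with the dosing error."""
    return slope * (u - u_opt)

u, hist = 10.0, []                       # start well below the optimal dose
for day in range(30):
    hist.append(bg_error(u))
    u = r2r_update(u, hist[::-1][:2], gains=[-0.2, -0.05])
```

A seventh-order variant, as in the paper, would simply keep seven past errors, e.g. one per weekday, so weekday and weekend patterns are corrected separately.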
Optimal weighting scheme for suppressing cascades and traffic congestion in complex networks.
Yang, Rui; Wang, Wen-Xu; Lai, Ying-Cheng; Chen, Guanrong
2009-02-01
This paper is motivated by the following two related problems in complex networks: (i) control of cascading failures and (ii) mitigation of traffic congestion. Both problems are of significant recent interest as they address, respectively, the security of and efficient information transmission on complex networks. Taking into account typical features of load distribution and weights in real-world networks, we have discovered an optimal solution to both problems. In particular, we shall provide numerical evidence and theoretical analysis that, by choosing a proper weighting parameter, a maximum level of robustness against cascades and traffic congestion can be achieved, which practically rids the network of occurrences of the catastrophic dynamics.
Finding an Optimal Thermo-Mechanical Processing Scheme for a Gum-Type Ti-Nb-Zr-Fe-O Alloy
NASA Astrophysics Data System (ADS)
Nocivin, Anna; Cojocaru, Vasile Danut; Raducanu, Doina; Cinca, Ion; Angelescu, Maria Lucia; Dan, Ioan; Serban, Nicolae; Cojocaru, Mirela
2017-08-01
A gum-type alloy was subjected to a thermo-mechanical processing scheme to establish a suitable process for obtaining superior structural and behavioural characteristics. Three processes were proposed: a homogenization treatment, a cold-rolling process, and a solution treatment at three heating temperatures: 1073 K (800 °C), 1173 K (900 °C), and 1273 K (1000 °C). The results of all three processes were analyzed using X-ray diffraction and scanning electron microscopy imaging to establish and compare the structural modifications. The behavioural characterization was completed with micro-hardness and tensile strength tests. The optimal results were obtained for solution treatment at 1073 K.
Inference for optimal dynamic treatment regimes using an adaptive m-out-of-n bootstrap scheme.
Chakraborty, Bibhas; Laber, Eric B; Zhao, Yingqi
2013-09-01
A dynamic treatment regime consists of a set of decision rules that dictate how to individualize treatment to patients based on available treatment and covariate history. A common method for estimating an optimal dynamic treatment regime from data is Q-learning, which involves nonsmooth operations on the data. This nonsmoothness causes standard asymptotic approaches to inference, such as the bootstrap or Taylor series arguments, to break down if applied without correction. Here, we consider the m-out-of-n bootstrap for constructing confidence intervals for the parameters indexing the optimal dynamic regime. We propose an adaptive choice of m and show that it produces asymptotically correct confidence sets under fixed alternatives. Furthermore, the proposed method has the advantage of being conceptually and computationally much simpler than competing methods possessing the same theoretical property. We provide an extensive simulation study comparing the proposed method with currently available inference procedures. The results suggest that the proposed method delivers nominal coverage while being less conservative than the alternatives. The proposed methods are implemented in the qLearn R package, available on the Comprehensive R Archive Network (http://cran.r-project.org/). Analysis of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study is used as an illustrative example.
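The basic resampling device is easy to sketch: draw subsamples of size m < n and take percentile bounds of the resampled statistic. The adaptive choice of m, which is the paper's contribution, is not shown; the names and the example nonsmooth functional below are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def m_out_of_n_ci(data, stat, m, n_boot=2000, alpha=0.05):
    """Percentile CI from the m-out-of-n bootstrap: resample m < n points
    with replacement and read quantiles of the resampled statistic."""
    reps = np.array([stat(rng.choice(data, size=m, replace=True))
                     for _ in range(n_boot)])
    return tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))

# a nonsmooth functional for which the ordinary n-out-of-n bootstrap
# can be inconsistent near the kink: theta = max(0, mean)
data = rng.normal(0.3, 1.0, size=400)
lo, hi = m_out_of_n_ci(data, lambda s: max(0.0, s.mean()), m=100)
```

Subsampling at rate m = o(n) restores consistency for such nonsmooth targets at the cost of wider intervals, which is why tuning m adaptively matters.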
Comparison of Optimized Soft-Tissue Suppression Schemes for Ultra-short Echo Time (UTE) MRI
Li, Cheng; Magland, Jeremy F.; Rad, Hamidreza Saligheh; Song, Hee Kwon; Wehrli, Felix W.
2011-01-01
Ultra-short echo time (UTE) imaging with soft-tissue suppression reveals short-T2 components (typically hundreds of microseconds to milliseconds) ordinarily not captured or obscured by long-T2 tissue signals on the order of tens of milliseconds or longer. The technique therefore enables visualization and quantification of short-T2 proton signals such as those in highly collagenated connective tissues. This work compares the performance of the three most commonly used long-T2 suppression UTE sequences, i.e. echo subtraction (dual-echo UTE), saturation via dual-band saturation pulses (dual-band UTE), and inversion by adiabatic inversion pulses (IR-UTE) at 3T, via Bloch simulations and experimentally in vivo in the lower extremities of test subjects. For unbiased performance comparison, the acquisition parameters are optimized individually for each sequence to maximize short-T2 SNR and CNR between short- and long-T2 components. Results show excellent short-T2 contrast is achieved with these optimized sequences. A combination of dual-band UTE with dual-echo UTE provides good short-T2 SNR and CNR with less sensitivity to B1 homogeneity. IR-UTE has the lowest short-T2 SNR efficiency but provides highly uniform short-T2 contrast and is well suited for imaging short-T2 species with relatively short T1 such as bone water. PMID:22161636
Optimization of sample size in controlled experiments: the CLAST rule.
Botella, Juan; Ximénez, Carmen; Revuelta, Javier; Suero, Manuel
2006-02-01
Sequential rules are explored in the context of null hypothesis significance testing. Several studies have demonstrated that the fixed-sample stopping rule, in which the sample size used by researchers is determined in advance, is less practical and less efficient than sequential stopping rules. It is proposed that a sequential stopping rule called CLAST (composite limited adaptive sequential test) is a superior variant of COAST (composite open adaptive sequential test), a sequential rule proposed by Frick (1998). Simulation studies are conducted to test the efficiency of the proposed rule in terms of sample size and power. Two statistical tests are used: the one-tailed t test of mean differences with two matched samples, and the chi-square independence test for twofold contingency tables. The results show that the CLAST rule is more efficient than the COAST rule and more realistically reflects the practice of experimental psychology researchers.
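A schematic of a CLAST-style sequential rule, with illustrative boundary values that are not Botella et al.'s: test after each batch of observations, stop early when the p value crosses either boundary, and cap the total sample size (the "limited" feature that distinguishes CLAST from the open-ended COAST).

```python
import numpy as np
from scipy import stats

def clast(sample_more, n_min=10, n_max=60, batch=5,
          p_reject=0.01, p_accept=0.36):
    """Sketch of a composite limited adaptive sequential test on a
    one-sample t test (e.g. matched-pair differences)."""
    data = list(sample_more(n_min))
    while True:
        p = stats.ttest_1samp(data, 0.0).pvalue
        if p <= p_reject:
            return 'reject', len(data)
        if p >= p_accept or len(data) >= n_max:
            return 'retain', len(data)
        data.extend(sample_more(batch))

# simulate matched-pair differences with a true effect
rng = np.random.default_rng(1)
decision, n_used = clast(lambda k: rng.normal(0.8, 1.0, k))
```

Averaged over many replications, such a rule reaches a decision with fewer observations than a fixed-sample test of comparable power, which is the efficiency the abstract measures.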
Washington, Chad W; Ju, Tao; Zipfel, Gregory J; Dacey, Ralph G
2014-03-01
Changing landscapes in neurosurgical training and increasing use of endovascular therapy have led to decreasing exposure in open cerebrovascular neurosurgery. To ensure the effective transition of medical students into competent practitioners, new training paradigms must be developed. Using principles of pattern recognition, we created a classification scheme for middle cerebral artery (MCA) bifurcation aneurysms that allows their categorization into a small number of shape pattern groups. Angiographic data from patients with MCA aneurysms between 1995 and 2012 were used to construct 3-dimensional models. Models were then analyzed and compared objectively by assessing the relationship between the aneurysm sac, parent vessel, and branch vessels. Aneurysms were then grouped on the basis of the similarity of their shape patterns in such a way that the in-class similarities were maximized while the total number of categories was minimized. For each category, a proposed clip strategy was developed. From the analysis of 61 MCA bifurcation aneurysms, 4 shape pattern categories were created that allowed the classification of 56 aneurysms (91.8%). The number of aneurysms allotted to each shape cluster was 10 (16.4%) in category 1, 24 (39.3%) in category 2, 7 (11.5%) in category 3, and 15 (24.6%) in category 4. This study demonstrates that through the use of anatomic visual cues, MCA bifurcation aneurysms can be grouped into a small number of shape patterns with an associated clip solution. Implementing these principles within current neurosurgery training paradigms can provide a tool that allows more efficient transition from novice to cerebrovascular expert.
NASA Astrophysics Data System (ADS)
da Jornada, Felipe H.; Qiu, Diana Y.; Louie, Steven G.
2017-01-01
First-principles calculations based on many-electron perturbation theory methods, such as the ab initio GW and GW plus Bethe-Salpeter equation (GW-BSE) approach, are reliable ways to predict quasiparticle and optical properties of materials, respectively. However, these methods involve more care in treating the electron-electron interaction and are considerably more computationally demanding when applied to systems with reduced dimensionality, since the electronic confinement leads to a slower convergence of sums over the Brillouin zone due to a much more complicated screening environment that manifests in the "head" and "neck" elements of the dielectric matrix. Here we present two schemes to sample the Brillouin zone for GW and GW-BSE calculations: the nonuniform neck subsampling method and the clustered sampling interpolation method, which can respectively be used for a family of single-particle problems, such as GW calculations, and for problems involving the scattering of two-particle states, such as solving the BSE. We tested these methods on several few-layer semiconductors and graphene and show that they perform a much more efficient sampling of the Brillouin zone and yield a two-to-three-order-of-magnitude reduction in computer time. These two methods can be readily incorporated into several ab initio packages that compute electronic and optical properties through the GW and GW-BSE approaches.
Bjurlin, Marc A.; Carter, H. Ballentine; Schellhammer, Paul; Cookson, Michael S.; Gomella, Leonard G.; Troyer, Dean; Wheeler, Thomas M.; Schlossberg, Steven; Penson, David F.; Taneja, Samir S.
2014-01-01
Purpose An optimal prostate biopsy in clinical practice is based on a balance between adequate detection of clinically significant prostate cancers (sensitivity), assuredness regarding the accuracy of negative sampling (negative predictive value [NPV]), limited detection of clinically insignificant cancers, and good concordance with whole-gland surgical pathology results to allow accurate risk stratification and disease localization for treatment selection. Inherent within this optimization is variation of the core number, location, labeling, and processing for pathologic evaluation. To date, there is no consensus in this regard. The purpose of this review is 3-fold: 1. To define the optimal number and location of biopsy cores during primary prostate biopsy among men with suspected prostate cancer, 2. To define the optimal method of labeling prostate biopsy cores for pathologic processing that will provide relevant and necessary clinical information for all potential clinical scenarios, and 3. To determine the maximal number of prostate biopsy cores allowable within a specimen jar that would not preclude accurate histologic evaluation of the tissue. Materials and Methods A bibliographic search covering the period up to July 2012 was conducted using PubMed®. This search yielded approximately 550 articles. Articles were reviewed and categorized based on which of the three objectives of this review was addressed. Data were extracted, analyzed, and summarized. Recommendations based on this literature review and our clinical experience are provided. Results The use of 10-12-core extended-sampling protocols increases cancer detection rates (CDRs) compared to traditional sextant sampling methods and reduces the likelihood that patients will require a repeat biopsy by increasing NPV, ultimately allowing more accurate risk stratification without increasing the likelihood of detecting insignificant cancers. As the number of cores increases above 12 cores, the increase in
Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D
2013-10-01
Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations.
A neighboring optimal feedback control scheme for systems using discontinuous control.
NASA Technical Reports Server (NTRS)
Foerster, R. E.; Flugge-Lotz, I.
1971-01-01
The calculation and implementation of the neighboring optimal feedback control law for multi-input, nonlinear dynamical systems using discontinuous control is discussed. An initialization procedure is described that removes the requirement that the neighboring initial state lie in the neighborhood of the nominal initial state. This procedure is a bootstrap technique for determining the most appropriate control-law gain for the neighboring initial state. The mechanization of the neighboring control law described is closed loop in that the concept of time-to-go is utilized in determining the control-law gains appropriate for each neighboring state. The gains are chosen such that the time-to-go until the next predicted switch time or predicted final time is the same for both the neighboring and nominal trajectories. The procedure described is utilized to solve the minimum-time satellite attitude-acquisition problem.
A ground-state-directed optimization scheme for the Kohn-Sham energy.
Høst, Stinne; Jansík, Branislav; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Helgaker, Trygve
2008-09-21
Kohn-Sham density-functional calculations are used in many branches of science to obtain information about the electronic structure of molecular systems and materials. Unfortunately, the traditional method for optimizing the Kohn-Sham energy suffers from fundamental problems that may lead to divergence or, even worse, convergence to an energy saddle point rather than to the ground-state minimum, in particular for the larger and more complicated electronic systems that are often studied by Kohn-Sham theory nowadays. We here present a novel method for Kohn-Sham energy minimization that does not suffer from the flaws of the conventional approach, combining reliability and efficiency with linear complexity. In particular, the proposed method converges by design to a minimum, avoiding the sometimes spurious solutions of the traditional method and bypassing the need to examine the structure of the provided solution.
Roy, R; Sevick-Muraca, E
1999-05-10
The development of non-invasive, biomedical optical imaging from time-dependent measurements of near-infrared (NIR) light propagation in tissues depends upon two crucial advances: (i) the instrumental tools to enable photon "time-of-flight" measurement within rapid and clinically realistic times, and (ii) the computational tools enabling the reconstruction of interior tissue optical property maps from exterior measurements of photon "time-of-flight" or photon migration. In this contribution, the image reconstruction algorithm is formulated as an optimization problem in which an interior map of tissue optical properties of absorption and fluorescence lifetime is reconstructed from synthetically generated exterior measurements of frequency-domain photon migration (FDPM). The inverse solution is accomplished using a truncated Newton's method with trust region to match synthetic fluorescence FDPM measurements with those predicted by a finite element model. The computational overhead and error associated with computing the gradient numerically are minimized by using modified techniques of reverse automatic differentiation.
Optimal block sampling of routine, non-tumorous gallbladders.
Wong, Newton Acs
2017-03-08
Gallbladders are common specimens in routine histopathological practice and there is, at least in the United Kingdom and Australia, national guidance on how to sample gallbladders without macroscopically-evident, focal lesions/tumours (hereafter referred to as non-tumorous gallbladders).(1) Nonetheless, this author has seen considerable variation in the numbers of blocks used and the parts of the gallbladder sampled, even within one histopathology department. The recently re-issued 'Tissue pathways for gastrointestinal and pancreatobiliary pathology' from the Royal College of Pathologists (RCPath) first recommends sampling of the cystic duct margin and "at least one section each of neck, body and any focal lesion".(1) This recommendation is referenced by a textbook chapter which itself proposes that "cross-sections of the gallbladder fundus and lateral wall should be submitted, along with the sections from the neck of the gallbladder and cystic duct, including its margin".(2)
Optimized design and analysis of sparse-sampling FMRI experiments.
Perrachione, Tyler K; Ghosh, Satrajit S
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
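The first recommendation above, a physiologically informed model incorporating hemodynamic response convolution, can be sketched numerically. The double-gamma HRF parameters, stimulus rate, and TR below are illustrative defaults (SPM-like), not values taken from the paper:

```python
import numpy as np
from math import gamma

def double_gamma_hrf(t, p1=6.0, p2=16.0, ratio=1 / 6.0):
    # Canonical double-gamma hemodynamic response function (illustrative defaults)
    return (t ** (p1 - 1) * np.exp(-t)) / gamma(p1) \
        - ratio * (t ** (p2 - 1) * np.exp(-t)) / gamma(p2)

dt = 0.1
t = np.arange(0.0, 30.0, dt)
hrf = double_gamma_hrf(t)

# Stimulus train: one auditory event every 8 s over a 160 s run
n = int(160 / dt)
stim = np.zeros(n)
stim[::int(8 / dt)] = 1.0

# Predicted BOLD response: stimulus train convolved with the HRF
bold = np.convolve(stim, hrf)[:n] * dt

# Sparse sampling: one volume acquired every TR = 10 s (silent gap in between)
tr = int(10 / dt)
sparse_samples = bold[::tr]
print(len(sparse_samples))  # 16 acquisitions over the run
```

Fitting the sparse samples against this convolved predictor, rather than against an unconvolved boxcar, is the kind of model-error reduction the first recommendation describes.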
Sample of CFD optimization of a centrifugal compressor stage
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
An industrial centrifugal compressor stage is a complicated object for gas-dynamic design when the goal is to achieve maximum efficiency. The authors analyzed results of CFD performance modeling (NUMECA FINE/Turbo calculations). Performance prediction as a whole was modest or poor in all known cases; maximum efficiency prediction, by contrast, was quite satisfactory. Flow structure in stator elements was in good agreement with known data. The intermediate-type stage "3D impeller + vaneless diffuser + return channel" was designed with principles well proven for stages with 2D impellers. CFD calculations of vaneless diffuser candidates demonstrated flow separation in the VLD with constant width. The candidate with a symmetrically tapered inlet part, b3/b2 = 0.73, appeared to be the best. Flow separation takes place in the crossover with the standard configuration; an alternative variant was developed and numerically tested. The experience obtained was formulated as corrected design recommendations. Several impeller candidates were compared by maximum efficiency of the stage; the variant following standard gas-dynamic principles of blade cascade design appeared to be the best. Quasi-3D inviscid calculations were applied to optimize blade velocity diagrams: non-incidence inlet, control of the diffusion factor, and control of average blade load. A "geometric" principle of blade formation, with linear change of blade angles along the blade length, appeared to be less effective. Candidates with different geometry parameters were designed with the 6th version of the math model and compared. The candidate with optimal parameters (number of blades, inlet diameter, and leading-edge meridian position) is 1% more efficient than the stage of the initial design.
NASA Astrophysics Data System (ADS)
Maity, Arnab; Padhi, Radhakant; Mallaram, Sanjeev; Mallikarjuna Rao, G.; Manickavasagam, M.
2016-10-01
A new nonlinear optimal and explicit guidance law is presented in this paper for launch vehicles propelled by solid motors. It can ensure very high terminal precision despite not having exact knowledge of the thrust-time curve a priori. It was motivated by its use for a carrier launch vehicle in a hypersonic mission, which demands an extremely narrow terminal accuracy window for the launch vehicle for successful initiation of operation of the hypersonic vehicle. The proposed explicit guidance scheme, which computes the optimal guidance command online, ensures the required stringent final conditions with high precision at the injection point. A key feature of the proposed guidance law is an innovative extension of the recently developed model predictive static programming guidance with flexible final time. A penalty function approach is also followed to meet the input and output inequality constraints throughout the vehicle trajectory. In this paper, the guidance law has been successfully validated in nonlinear six degree-of-freedom simulation studies by designing an inner-loop autopilot as well, which significantly enhances confidence in its usefulness. In addition to excellent nominal results, the proposed guidance has been found to have good robustness for perturbed cases as well.
A test of an optimal stomatal conductance scheme within the CABLE land surface model
NASA Astrophysics Data System (ADS)
De Kauwe, M. G.; Kala, J.; Lin, Y.-S.; Pitman, A. J.; Medlyn, B. E.; Duursma, R. A.; Abramowitz, G.; Wang, Y.-P.; Miralles, D. G.
2015-02-01
Stomatal conductance (gs) affects the fluxes of carbon, energy and water between the vegetated land surface and the atmosphere. We test an implementation of an optimal stomatal conductance model within the Community Atmosphere Biosphere Land Exchange (CABLE) land surface model (LSM). In common with many LSMs, CABLE does not differentiate between gs model parameters in relation to plant functional type (PFT), but instead only in relation to photosynthetic pathway. We constrained the key model parameter "g1", which represents plant water use strategy, by PFT, based on a global synthesis of stomatal behaviour. As proof of concept, we also demonstrate that the g1 parameter can be estimated using two long-term average (1960-1990) bioclimatic variables: (i) temperature and (ii) an indirect estimate of annual plant water availability. The new stomatal model, in conjunction with PFT parameterisations, resulted in a large reduction in annual fluxes of transpiration (~ 30% compared to the standard CABLE simulations) across evergreen needleleaf, tundra and C4 grass regions. Differences in other regions of the globe were typically small. Model performance against upscaled data products was not degraded, but did not noticeably reduce existing model-data biases. We identified assumptions relating to the coupling of the vegetation to the atmosphere and the parameterisation of the minimum stomatal conductance as areas requiring further investigation in both CABLE and potentially other LSMs. We conclude that optimisation theory can yield a simple and tractable approach to predicting stomatal conductance in LSMs.
Zhang, Xian-Ming; Han, Qing-Long
2016-12-01
This paper is concerned with decentralized event-triggered dissipative control for systems with the entries of the system outputs having different physical properties. Depending on these different physical properties, the entries of the system outputs are grouped into multiple nodes. A number of sensors are used to sample the signals from different nodes. A decentralized event-triggering scheme is introduced to select those necessary sampled-data packets to be transmitted so that communication resources can be saved significantly while preserving the prescribed closed-loop performance. First, in order to organize the decentralized data packets transmitted from the sensor nodes, a data packet processor (DPP) is used to generate a new signal to be held by the zero-order-hold once the signal stored by the DPP is updated at some time instant. Second, under the mechanism of the DPP, the resulting closed-loop system is modeled as a linear system with an interval time-varying delay. A sufficient condition is derived such that the closed-loop system is asymptotically stable and strictly (Q0, S0, R0)-dissipative, where Q0, S0, and R0 are real matrices of appropriate dimensions with Q0 and R0 symmetric. Third, suitable output-based controllers can be designed based on solutions to a set of linear matrix inequalities. Finally, two examples are given to demonstrate the effectiveness of the proposed method.
Optimization conditions of samples saponification for tocopherol analysis.
Souza, Aloisio Henrique Pereira; Gohara, Aline Kirie; Rodrigues, Ângela Claudia; Ströher, Gisely Luzia; Silva, Danielle Cristina; Visentainer, Jesuí Vergílio; Souza, Nilson Evelázio; Matsushita, Makoto
2014-09-01
A 2² full factorial design (two factors at two levels) with duplicates was performed to investigate the influence of the factors agitation time (2 and 4 h) and the percentage of KOH (60% and 80% w/v) in the saponification of samples for the determination of α, β and γ+δ-tocopherols. The study used samples of peanuts (cultivar armadillo), produced and marketed in Maringá, PR. The factors % KOH and agitation time were significant, and an increase in their values contributed negatively to the responses. The interaction effect was not significant for the response δ-tocopherol, and the contribution of this effect to the other responses was positive, but less than 10%. The ANOVA and response surface analysis showed that the most efficient saponification procedure was obtained using a 60% (w/v) solution of KOH and an agitation time of 2 h.
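For reference, effect estimates in a duplicated 2² factorial can be computed as below. The response values are hypothetical, chosen only so that both main effects come out negative, as reported in the study; they are not the paper's data:

```python
import numpy as np

# Coded design matrix for a 2^2 full factorial:
# factor A = agitation time (-1: 2 h, +1: 4 h), factor B = % KOH (-1: 60%, +1: 80%)
A = np.array([-1, +1, -1, +1])
B = np.array([-1, -1, +1, +1])

# Illustrative duplicate responses (hypothetical tocopherol yields), averaged per run
y = np.array([[10.2, 10.0], [8.1, 8.3], [7.9, 8.1], [6.0, 6.2]]).mean(axis=1)

# Effect = mean response at the high level minus mean response at the low level
effect_A = y[A == 1].mean() - y[A == -1].mean()
effect_B = y[B == 1].mean() - y[B == -1].mean()
effect_AB = y[A * B == 1].mean() - y[A * B == -1].mean()
print(effect_A, effect_B, effect_AB)
```

Negative main effects, as here, are what leads to choosing the low levels (2 h, 60% KOH) as the optimal saponification conditions.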
Determination and optimization of spatial samples for distributed measurements.
Huo, Xiaoming; Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong
2010-10-01
There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.
Optimized nested Markov chain Monte Carlo sampling: theory
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
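The composite-move acceptance rule can be illustrated on a toy one-dimensional system. The potentials, temperature, and chain lengths below are arbitrary demonstration choices, not those of the paper; note how the reference-energy difference cancels from the acceptance ratio:

```python
import math
import random

random.seed(0)

# Reference and "full" potentials for a 1-D toy system (illustrative forms)
E_ref = lambda x: 0.5 * x * x
E_full = lambda x: 0.5 * x * x + 0.1 * x ** 4
beta = 1.0

def ref_chain(x, n_steps=10, step=0.5):
    """Sub-chain of ordinary Metropolis moves sampled on the reference potential."""
    for _ in range(n_steps):
        y = x + random.uniform(-step, step)
        if random.random() < math.exp(-beta * min(50.0, E_ref(y) - E_ref(x))):
            x = y
    return x

x, samples = 0.0, []
for _ in range(2000):
    y = ref_chain(x)  # composite move: many reference steps per full-energy evaluation
    # Modified Metropolis criterion for the composite move: only the difference of
    # (full - reference) energies at the endpoints enters the acceptance ratio
    dE = (E_full(y) - E_ref(y)) - (E_full(x) - E_ref(x))
    if random.random() < math.exp(-beta * min(50.0, dE)):
        x = y
    samples.append(x)

mean = sum(samples) / len(samples)
print(abs(mean) < 0.5)  # symmetric potential, so the sample mean should sit near zero
```

Because the full energy is evaluated only at composite-move endpoints, consecutive full-energy evaluations are far apart in the reference chain, which is the decorrelation benefit the abstract describes.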
Optimizing analog-to-digital converters for sampling extracellular potentials.
Artan, N Sertac; Xu, Xiaoxiang; Shi, Wei; Chao, H Jonathan
2012-01-01
In neural implants, an analog-to-digital converter (ADC) provides the delicate interface between the analog signals generated by neurological processes and the digital signal processor that is tasked to interpret these signals, for instance for epileptic seizure detection or limb control. In this paper, we propose a low-power ADC architecture for neural implants that process extracellular potentials. The proposed architecture uses the spike detector that is readily available on most of these implants in a closed loop with an ADC. The spike detector determines whether the current input signal is part of a spike or part of noise, and thereby adaptively determines the instantaneous sampling rate of the ADC. The proposed architecture can reduce the power consumption of a traditional ADC by 62% when sampling extracellular potentials without any significant impact on spike detection accuracy.
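The closed-loop idea, a spike detector gating the instantaneous sampling rate, can be sketched as below. The threshold, rates, and synthetic waveform are illustrative assumptions; a real implant adapts the ADC clock in hardware rather than subsampling in software:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic extracellular trace: baseline noise with occasional spikes (illustrative)
n = 10_000
signal = rng.normal(0.0, 1.0, n)
for i in np.arange(500, n, 1000):
    signal[i:i + 20] += 15.0  # spike waveform, 20 samples wide

# A simple threshold spike detector drives the instantaneous sampling rate:
# keep every sample inside detected spikes, only 1 in 10 otherwise
threshold = 5.0
in_spike = np.abs(signal) > threshold
keep = in_spike | (np.arange(n) % 10 == 0)

compression = 1 - keep.sum() / n
print(f"samples kept: {keep.sum()} of {n} ({compression:.0%} reduction)")
```

Most of the power saving comes from the long inter-spike stretches, where the ADC can run at the low rate without losing information relevant to spike detection.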
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning
2008-01-01
ranging from the income level to age and her preference order over a set of products (e.g. movies in Netflix). The ranking task is to learn a mapping...learners in RankBoost. However, in both cases, the proposed strategy selects the samples which are estimated to produce a faster convergence from the...steps in Section 5. 2. Related Work A number of strategies have been proposed for active learning in the classification framework. Some of those center
Gutiérrez-Cacciabue, Dolores; Teich, Ingrid; Poma, Hugo Ramiro; Cruz, Mercedes Cecilia; Balzarini, Mónica; Rajal, Verónica Beatriz
2014-01-01
Several recreational surface waters in Salta, Argentina, were selected to assess their quality. Seventy percent of the measurements exceeded at least one of the limits established by international legislation, rendering the waters unsuitable for use. To interpret results of complex data, multivariate techniques were applied. Arenales River, due to the variability observed in the data, was divided into two segments, upstream and downstream, representing low- and high-pollution sites, respectively; Cluster Analysis supported that differentiation. Arenales River downstream and Campo Alegre Reservoir were the most different environments, and Vaqueros and La Caldera Rivers were the most similar. Canonical Correlation Analysis allowed exploration of correlations between physicochemical and microbiological variables except in both parts of Arenales River, and Principal Component Analysis allowed identification of relationships among the nine measured variables in all aquatic environments. Variable loadings showed that Arenales River downstream was impacted by industrial and domestic activities, Arenales River upstream was affected by agricultural activities, Campo Alegre Reservoir was disturbed by anthropogenic and ecological effects, and La Caldera and Vaqueros Rivers were influenced by recreational activities. Discriminant Analysis allowed identification of the subgroup of variables responsible for seasonal and spatial variations. Enterococcus, dissolved oxygen, conductivity, E. coli, pH, and fecal coliforms are sufficient to spatially describe the quality of the aquatic environments. Regarding seasonal variations, dissolved oxygen, conductivity, fecal coliforms, and pH can be used to describe water quality during the dry season, while dissolved oxygen, conductivity, total coliforms, E. coli, and Enterococcus serve during the wet season. Thus, the use of multivariate techniques allowed monitoring tasks to be optimized and the costs involved to be minimized. PMID:25190636
Perihilar Cholangiocarcinoma: Number of Nodes Examined and Optimal Lymph Node Prognostic Scheme
Bagante, Fabio; Tran, Thuy; Spolverato, Gaya; Ruzzenente, Andrea; Buttner, Stefan; Ethun, Cecilia G; Koerkamp, Bas Groot; Conci, Simone; Idrees, Kamran; Isom, Chelsea A; Fields, Ryan C; Krasnick, Bradley; Weber, Sharon M; Salem, Ahmed; Martin, Robert CG; Scoggins, Charles; Shen, Perry; Mogal, Harveshp D; Schmidt, Carl; Beal, Eliza; Hatzaras, Ioannis; Vitiello, Gerardo; IJzermans, Jan NM; Maithel, Shishir K; Poultsides, George; Guglielmi, Alfredo; Pawlik, Timothy M
2017-01-01
BACKGROUND The role of routine lymphadenectomy for perihilar cholangiocarcinoma is still controversial and no study has defined the minimum number of lymph nodes examined (TNLE). We sought to assess the prognostic performance of American Joint Committee on Cancer/Union Internationale Contre le Cancer (7th edition) N stage, lymph node ratio, and log odds (LODDS; logarithm of the ratio between metastatic and nonmetastatic nodes) in patients with perihilar cholangiocarcinoma and identify the optimal TNLE to accurately stage patients. METHODS A multi-institutional database was queried to identify 437 patients who underwent hepatectomy for perihilar cholangiocarcinoma between 1995 and 2014. The prognostic abilities of the lymph node staging systems were assessed using Harrell's c-index. A Bayesian model was developed to identify the minimum TNLE. RESULTS One hundred and fifty-eight (36.2%) patients had lymph node metastasis. Median TNLE was 3 (interquartile range, 1 to 7). The LODDS had a slightly better prognostic performance than lymph node ratio and American Joint Committee on Cancer, in particular among patients with <4 TNLE (c-index = 0.568). For 2 TNLE, the Bayesian model showed a poor discriminatory ability to distinguish patients with favorable and poor prognosis. When TNLE was >2, the hazard ratio for N1 patients was statistically significant and the hazard ratio for N1 patients increased from 1.51 with 4 TNLE to 2.10 with 10 TNLE. Although the 5-year overall survival of N1 patients was only slightly affected by TNLE, the 5-year overall survival of N0 patients increased significantly with TNLE. CONCLUSIONS Perihilar cholangiocarcinoma patients undergoing radical resection should ideally have at least 4 lymph nodes harvested to be accurately staged. In addition, although LODDS performed better at determining prognosis among patients with <4 TNLE, both lymph node ratio and LODDS outperformed American Joint Committee on Cancer N stage among
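For reference, LODDS as defined in this abstract (log of the ratio between metastatic and nonmetastatic node counts) can be computed as follows. The additive constant k is a common smoothing convention and an assumption of this sketch; the paper's exact formula is not given in the abstract:

```python
import math

def lodds(positive, examined, k=0.5):
    """Log odds of nodal metastasis: log((positive + k) / (negative + k)).
    The constant k avoids division by zero when all nodes are positive or
    negative; k = 0.5 is a common convention, assumed here."""
    negative = examined - positive
    return math.log((positive + k) / (negative + k))

# Two patients with one positive node each, but different numbers examined (TNLE):
print(round(lodds(1, 2), 3))   # 1 of 2 nodes positive
print(round(lodds(1, 10), 3))  # 1 of 10 nodes positive -> lower LODDS
```

Unlike raw N stage, LODDS distinguishes these two patients, which is why it can retain prognostic value when few nodes are examined.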
Optimized Sampling Strategies For Non-Proliferation Monitoring: Report
Kurzeja, R.; Buckley, R.; Werth, D.; Chiswell, S.
2015-10-20
Concentration data collected from the 2013 H-Canyon effluent reprocessing experiment were reanalyzed to improve the source term estimate. When errors in the model-predicted wind speed and direction were removed, the source term uncertainty was reduced to 30% of the mean. This explained the factor of 30 difference between the source term size derived from data at 5 km and 10 km downwind in terms of the time history of dissolution. The results show a path forward to develop a sampling strategy for quantitative source term calculation.
Müller, Hans-Helge; Pahl, Roman; Schäfer, Helmut
2007-12-01
We propose optimized two-stage designs for genome-wide case-control association studies, using a hypothesis testing paradigm. To save genotyping costs, the complete marker set is genotyped in a sub-sample only (stage I). In stage II, the most promising markers are then genotyped in the remaining sub-sample. In recent publications, two-stage designs were proposed which minimize the overall genotyping costs. To achieve full design optimization, we additionally include sampling costs in both the cost function and the design optimization. The resulting optimal designs differ markedly from those optimized for genotyping costs only (partially optimized designs), and achieve considerable further cost reductions. Compared with partially optimized designs, fully optimized two-stage designs have a higher first-stage sample proportion. Furthermore, the increment of the sample size over the one-stage design, which is necessary in two-stage designs in order to compensate for the loss of power due to partial genotyping, is less pronounced for fully optimized two-stage designs. In addition, we address the scenario where the investigator is interested in gaining as much information as possible but is restricted by a budget. To that end, we develop two-stage designs that maximize the power under a given cost constraint.
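The distinction between genotyping-only and full cost optimization can be made concrete with a simple cost function. The functional form and every number below are illustrative assumptions for the sketch, not the paper's cost model:

```python
def total_cost(n1, n2, m_markers, top_fraction, c_genotype, c_sample):
    """Total cost of a two-stage genome-wide design (simplified sketch).
    Stage I genotypes all m_markers in n1 subjects; stage II genotypes only the
    most promising top_fraction of markers in the remaining n2 subjects.
    Per-subject sampling (recruitment) costs are charged for every subject, the
    term a fully optimized design must also account for."""
    genotyping = (n1 * m_markers * c_genotype
                  + n2 * int(top_fraction * m_markers) * c_genotype)
    sampling = (n1 + n2) * c_sample
    return genotyping + sampling

# Hypothetical scenario: 500k markers, $0.01 per genotype, $50 per recruited subject
one_stage = total_cost(2000, 0, 500_000, 0.0, 0.01, 50.0)
two_stage = total_cost(800, 1400, 500_000, 0.05, 0.01, 50.0)
print(one_stage, two_stage)
```

Note the two-stage design carries a larger total sample (2200 vs. 2000 subjects), mirroring the abstract's point that two-stage designs need extra sample size to offset the power loss from partial genotyping, yet still costs far less overall.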
Optimizing fish sampling for fish - mercury bioaccumulation factors
Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.
2015-01-01
Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
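The core quantities, length normalization followed by the BAF ratio, can be sketched as follows. The fish Hg and length values are hypothetical, and the standard length and unit handling are assumptions for illustration only:

```python
import numpy as np

# Illustrative fish Hg (ppm = mg/kg) and length (mm) data for one site (hypothetical)
length = np.array([250.0, 300.0, 340.0, 410.0, 470.0])
hg_fish = np.array([0.20, 0.28, 0.33, 0.45, 0.55])

# Length-normalize: regress Hg on length, predict Hg at a standard length (350 mm here)
slope, intercept = np.polyfit(length, hg_fish, 1)
hg_std = slope * 350.0 + intercept

# BAF = Hg_fish / Hg_water with consistent units: mg/kg over mg/L gives L/kg
hg_water_ng_per_l = 2.0  # hypothetical water concentration, ng/L
baf = hg_std / (hg_water_ng_per_l * 1e-6)  # ng/L -> mg/L
print(f"length-standardized Hg_fish = {hg_std:.3f} ppm, log10(BAF) = {np.log10(baf):.2f}")
```

Standardizing to one length before taking the ratio is what removes the size-driven spread in Hg_fish that the study found accounted for up to 50% of BAF variability.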
Optimal sampling of visual information for lightness judgments
Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R.
2013-01-01
The variable resolution and limited processing capacity of the human visual system requires us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object’s luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. PMID:23776251
Sampling technique is important for optimal isolation of pharyngeal gonorrhoea.
Mitchell, M; Rane, V; Fairley, C K; Whiley, D M; Bradshaw, C S; Bissessor, M; Chen, M Y
2013-11-01
Culture is insensitive for the detection of pharyngeal gonorrhoea but isolation is pivotal to antimicrobial resistance surveillance. The aim of this study was to ascertain whether recommendations provided to clinicians (doctors and nurses) on pharyngeal swabbing technique could improve gonorrhoea detection rates and to determine which aspects of swabbing technique are important for optimal isolation. This study was undertaken at the Melbourne Sexual Health Centre, Australia. Detection rates among clinicians for pharyngeal gonorrhoea were compared before (June 2006-May 2009) and after (June 2009-June 2012) recommendations on swabbing technique were provided. Associations between detection rates and reported swabbing technique obtained via a clinician questionnaire were examined. The overall yield from testing before and after provision of the recommendations among 28 clinicians was 1.6% (134/8586) and 1.8% (264/15,046) respectively (p=0.17). Significantly higher detection rates were seen following the recommendations among clinicians who reported a change in their swabbing technique in response to the recommendations (2.1% vs. 1.5%; p=0.004), swabbing a larger surface area (2.0% vs. 1.5%; p=0.02), applying more swab pressure (2.5% vs. 1.5%; p<0.001) and a change in the anatomical sites they swabbed (2.2% vs. 1.5%; p=0.002). The predominant change in sites swabbed was an increase in swabbing of the oropharynx: from a median of 0% to 80% of the time. More thorough swabbing improves the isolation of pharyngeal gonorrhoea using culture. Clinicians should receive training to ensure swabbing is performed with sufficient pressure and that it covers an adequate area that includes the oropharynx.
Ghose, Arup K; Ott, Gregory R; Hudkins, Robert L
2017-01-18
At the discovery stage, it is important to understand the drug design concepts for a CNS drug compared to those for a non-CNS drug. Previously, we published on ideal CNS drug property space and defined in detail the physicochemical property distribution of CNS versus non-CNS oral drugs, the application of radar charting (a graphical representation of multiple physicochemical properties used during CNS lead optimization), and a recursive partition classification tree to differentiate between CNS- and non-CNS drugs. The objective of the present study was to further understand the differentiation of physicochemical properties between CNS and non-CNS oral drugs by the development and application of a new CNS scoring scheme: Technically Extended MultiParameter Optimization (TEMPO). In this multiparameter method, we identified eight key physicochemical properties critical for accurately assessing CNS druggability: (1) number of basic amines, (2) carbon-heteroatom (non-carbon, non-hydrogen) ratio, (3) number of aromatic rings, (4) number of chains, (5) number of rotatable bonds, (6) number of H-acceptors, (7) computed octanol/water partition coefficient (AlogP), and (8) number of nonconjugated C atoms in nonaromatic rings. Significant features of the CNS-TEMPO penalty score are the extension of the multiparameter approach to generate an accurate weight factor for each physicochemical property, the use of limits on both sides of the computed property space range during the penalty calculation, and the classification of CNS and non-CNS drug scores. CNS-TEMPO significantly outperformed CNS-MPO and the Schrödinger QikProp CNS parameter (QP_CNS) in evaluating CNS drugs and has been extensively applied in support of CNS lead optimization programs.
NASA Astrophysics Data System (ADS)
Han, Mancheon; Lee, Choong-Ki; Choi, Hyoung Joon
Hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB) is a popular approach in real-material research because it can treat interactions that are not of density-density type. In conventional CT-HYB, one measures the Green's function and finds the self-energy from the Dyson equation. Because this approach requires computing the inverse of the statistical data, the obtained self-energy is very sensitive to statistical noise, and the measurement is unreliable except at low frequencies. Such errors can be suppressed by measuring a special type of higher-order correlation function, a technique previously implemented for density-density-type interactions. With the help of the recently reported worm-sampling measurement, we developed an improved self-energy measurement scheme that can be applied to any type of interaction. As an illustration, we calculated the self-energy for the 3-orbital Hubbard-Kanamori-type Hamiltonian with our newly developed method. This work was supported by NRF of Korea (Grant No. 2011-0018306) and KISTI supercomputing center (Project No. KSC-2015-C3-039)
Optimal Sampling of a Reaction Coordinate in Molecular Dynamics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2005-01-01
Estimating how free energy changes with the state of a system is a central goal in applications of statistical mechanics to problems of chemical or biological interest. From these free energy changes it is possible, for example, to establish which states of the system are stable, what their probabilities are, and how the equilibria between these states are influenced by external conditions. Free energies are also of great utility in determining kinetics of transitions between different states. A variety of methods have been developed to compute free energies of condensed phase systems. Here, I will focus on one class of methods: those that allow for calculating free energy changes along one or several generalized coordinates in the system, often called reaction coordinates or order parameters. Considering that in almost all cases of practical interest a significant computational effort is required to determine free energy changes along such coordinates, it is hardly surprising that the efficiencies of different methods are of great concern. In most cases, the main difficulty is associated with the shape of the free energy along the reaction coordinate. If the free energy changes markedly along this coordinate, Boltzmann sampling of its different values becomes highly non-uniform. This, in turn, may have a considerable, detrimental effect on the performance of many methods for calculating free energies.
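As a minimal illustration of Boltzmann sampling along a reaction coordinate and of recovering the free energy from the sampled distribution via F(q) = -kT ln P(q), consider a toy double-well profile. All values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0

# Double-well free energy profile along a reaction coordinate q (illustrative)
F = lambda q: (q ** 2 - 1.0) ** 2  # wells at q = ±1, barrier of 1 kT at q = 0

# Metropolis sampling of the Boltzmann distribution exp(-F(q)/kT)
q, samples = 0.0, []
for _ in range(100_000):
    q_new = q + rng.uniform(-0.3, 0.3)
    if rng.random() < np.exp(-(F(q_new) - F(q)) / kT):
        q = q_new
    samples.append(q)

# Recover F(q) up to an additive constant from the sampled histogram
hist, edges = np.histogram(samples, bins=40, range=(-2.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
F_est = -kT * np.log(np.clip(hist, 1e-12, None))
F_est -= F_est.min()
barrier = F_est[np.argmin(np.abs(centers))]  # estimate near q = 0
print(barrier)
```

With a barrier of only 1 kT, plain Boltzmann sampling still crosses between wells; the non-uniform sampling problem described above becomes severe once barriers reach several kT, which is what motivates biased-sampling methods.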
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodels and the minimum points of a density function. More accurate metamodels can then be constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
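The core infill loop described above (fit a metamodel, then add its extremum as the next sample point) can be sketched in a few lines. The following is a minimal 1-D illustration with an assumed Gaussian RBF kernel and a toy objective function; the paper's full method also adds minimum points of a density function, which is omitted here:

```python
import numpy as np

def rbf_fit(x, y, eps=1.0):
    """Fit Gaussian RBF interpolation weights on 1-D sample points."""
    phi = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
    return np.linalg.solve(phi, y)

def rbf_eval(xq, x, w, eps=1.0):
    phi = np.exp(-(eps * (xq[:, None] - x[None, :]))**2)
    return phi @ w

# Toy stand-in for an expensive simulation (assumed, not from the paper).
f = lambda x: np.sin(3 * x) + 0.5 * x

x = np.linspace(0.0, 2.0, 5)           # initial space-filling design
y = f(x)
grid = np.linspace(0.0, 2.0, 401)

for _ in range(4):                      # sequential infill iterations
    w = rbf_fit(x, y)
    x_new = grid[np.argmin(rbf_eval(grid, x, w))]  # metamodel extremum
    if np.min(np.abs(x - x_new)) < 1e-6:
        break                           # point already sampled: stop
    x = np.append(x, x_new)
    y = np.append(y, f(x_new))

# Metamodel's estimate of the minimizer after refinement.
best = grid[np.argmin(rbf_eval(grid, x, rbf_fit(x, y)))]
```

Each iteration spends one expensive evaluation where the current metamodel predicts an extremum, concentrating accuracy where the optimizer needs it.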
Westfall, Jacob; Kenny, David A; Judd, Charles M
2014-10-01
Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
Zhou, Zhengwei; Bi, Xiaoming; Wei, Janet; Yang, Hsin-Jung; Dharmakumar, Rohan; Arsanjani, Reza; Bairey Merz, C Noel; Li, Debiao; Sharif, Behzad
2017-02-01
The presence of subendocardial dark-rim artifact (DRA) remains an ongoing challenge in first-pass perfusion (FPP) cardiac magnetic resonance imaging (MRI). We propose a free-breathing FPP imaging scheme with Cartesian sampling that is optimized to minimize the DRA and readily enables near-instantaneous image reconstruction. The proposed FPP method suppresses Gibbs ringing effects, a major underlying factor for the DRA, by "shaping" the underlying point spread function through a two-step process: 1) an undersampled Cartesian sampling scheme that widens the k-space coverage compared to the conventional scheme; and 2) a modified parallel-imaging scheme that incorporates optimized apodization (k-space data filtering) to suppress Gibbs-ringing effects. Healthy volunteer studies (n = 10) were performed to compare the proposed method against the conventional Cartesian technique, both using a saturation-recovery gradient-echo sequence at 3T. Furthermore, FPP imaging studies using the proposed method were performed in infarcted canines (n = 3), and in two symptomatic patients with suspected coronary microvascular dysfunction for assessment of myocardial hypoperfusion. The width of the DRA and the number of DRA-affected myocardial segments were significantly reduced in the proposed method compared to the conventional approach (width: 1.3 vs. 2.9 mm, P < 0.001; number of segments: 2.6 vs. 8.7, P < 0.0001). The number of slices with severe DRA was markedly lower (by 10-fold) for the proposed method. The reader-assigned image quality scores were similar (P = 0.2), although the quantified myocardial signal-to-noise ratio was lower for the proposed method (P < 0.05). Animal studies showed that the proposed method can detect subendocardial perfusion defects, and patient results were consistent with the gold-standard invasive test. The proposed free-breathing Cartesian FPP imaging method significantly reduces the prevalence of severe DRAs compared to the conventional approach.
Lü, Xiaoshu; Takala, Esa-Pekka; Toppila, Esko; Marjanen, Ykä; Kaila-Kangas, Leena; Lu, Tao
2016-12-01
Exposure to whole-body vibration (WBV) presents an occupational health risk, and several safety standards obligate employers to measure WBV. The high cost of direct measurements in large epidemiological studies raises the question of optimal sampling for estimating WBV exposures, given the large variation in exposure levels at real worksites. This paper presents a new approach to addressing this problem. Daily exposure to WBV was recorded for 9-24 days among 48 all-terrain vehicle drivers. Four data-sets based on root mean squared recordings were obtained from the measurements. The data were modelled using a semi-variogram with spectrum analysis, and the optimal sampling scheme was derived. The optimal sampling interval was 140 min. The result was verified and validated in terms of its accuracy and statistical power. Recordings of two to three hours are probably needed to obtain a sufficiently unbiased daily WBV exposure estimate at real worksites. The developed model is general enough to be applicable to other cumulative exposures or biosignals. Practitioner Summary: Exposure to whole-body vibration (WBV) presents an occupational health risk, and safety standards obligate employers to measure WBV. However, direct measurements can be expensive. This paper presents a new approach to addressing this problem. The developed model is general enough to be applicable to other cumulative exposures or biosignals.
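An empirical semi-variogram of the kind used to derive the sampling scheme can be computed directly from an exposure time series. The sketch below uses synthetic data with assumed parameters (a 10-min recording resolution and a sinusoidal daily trend plus noise), not the study's measurements:

```python
import numpy as np

def semivariogram(series, max_lag):
    """Empirical semivariance gamma(h) = mean((z[i+h] - z[i])**2) / 2."""
    z = np.asarray(series, dtype=float)
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h])**2) for h in lags])
    return lags, gamma

# Toy exposure record: periodic trend plus noise, one value per 10 min.
rng = np.random.default_rng(0)
t = np.arange(144)                       # one 24-h day at 10-min resolution
z = np.sin(2 * np.pi * t / 36) + 0.2 * rng.standard_normal(t.size)

lags, gamma = semivariogram(z, max_lag=36)
# gamma rises toward a sill near the lag at which values decorrelate;
# sampling roughly that far apart avoids collecting redundant measurements.
```

Here the structure of gamma over lag distance is what the study's spectrum analysis then uses to pick the 140-min interval.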
Proskurnin, Mikhail A; Volkov, Mikhail E
2008-04-01
The optimization of the optical scheme design of a mode-mismatched dual-beam thermal-lens spectrometer for differential (dual-cell) measurements in the far-field mode is carried out using diffraction thermal-lens theory. A criterion for an expert estimation of the quality of the spectrometer design for differential thermal-lens measurements in analytical chemistry (sensitivity, low limits of detection, and quantification) is also developed. The theoretical calculations agree well with previous papers on differential thermal lensing. Using the example of iron(II) tris-(1,10-phenanthrolinate), it is shown that blank-signal compensation in differential thermal-lens spectrometry decreases the limit of detection by an order of magnitude compared to single-cell measurements. Using an artificial two-component mixture of ferroin and potassium dichromate, it is shown that dual-beam differential thermal-lens spectrometry makes it possible to determine trace components against 900-fold excess amounts of interfering substances.
Teoh, Wei Lin; Khoo, Michael B. C.; Teh, Sin Yin
2013-01-01
Designs of the double sampling (DS) chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shift, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and fair representation of the central tendency, especially for the right-skewed run length distribution. Since the DS chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA and Shewhart charts demonstrate the superiority of the proposed optimal MRL-based DS chart, as the latter requires a smaller sample size on average while maintaining the same detection speed as the two former charts. An example involving added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS chart in reducing the sample size needed. PMID:23935873
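The ARL-versus-MRL point above is easy to reproduce for the simpler Shewhart chart, whose run length is geometric: when the per-sample detection probability is small the distribution is strongly right-skewed, so the MRL sits well below the ARL. A short sketch assuming standard 3-sigma limits on individual observations (not the DS chart itself):

```python
import math
from statistics import NormalDist

def run_length_stats(shift, limit=3.0):
    """ARL and MRL of a Shewhart chart (n = 1) under a mean shift of
    `shift` standard deviations; the run length is geometric with
    signal probability p, so ARL = 1/p and MRL = ceil(log 0.5 / log(1-p))."""
    nd = NormalDist()
    p = nd.cdf(-limit - shift) + 1.0 - nd.cdf(limit - shift)
    arl = 1.0 / p
    mrl = math.ceil(math.log(0.5) / math.log(1.0 - p))
    return arl, mrl

arl0, mrl0 = run_length_stats(0.0)   # in-control: heavy right skew, MRL << ARL
arl1, mrl1 = run_length_stats(2.0)   # large shift: distribution tightens
```

In-control, the ARL is about 370 while the MRL is near 257, which is exactly the skew argument made in the abstract for preferring the MRL.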
Lonsinger, Robert C; Gese, Eric M; Dempsey, Steven J; Kluever, Bryan M; Johnson, Timothy R; Waits, Lisette P
2015-07-01
Noninvasive genetic sampling, or noninvasive DNA sampling (NDS), can be an effective monitoring approach for elusive, wide-ranging species at low densities. However, few studies have attempted to maximize sampling efficiency. We present a model for combining sample accumulation and DNA degradation to identify the most efficient (i.e. minimal cost per successful sample) NDS temporal design for capture-recapture analyses. We use scat accumulation and faecal DNA degradation rates for two sympatric carnivores, kit fox (Vulpes macrotis) and coyote (Canis latrans) across two seasons (summer and winter) in Utah, USA, to demonstrate implementation of this approach. We estimated scat accumulation rates by clearing and surveying transects for scats. We evaluated mitochondrial (mtDNA) and nuclear (nDNA) DNA amplification success for faecal DNA samples under natural field conditions for 20 fresh scats/species/season from <1-112 days. Mean accumulation rates were nearly three times greater for coyotes (0.076 scats/km/day) than foxes (0.029 scats/km/day) across seasons. Across species and seasons, mtDNA amplification success was ≥95% through day 21. Fox nDNA amplification success was ≥70% through day 21 across seasons. Coyote nDNA success was ≥70% through day 21 in winter, but declined to <50% by day 7 in summer. We identified a common temporal sampling frame of approximately 14 days that allowed species to be monitored simultaneously, further reducing time, survey effort and costs. Our results suggest that when conducting repeated surveys for capture-recapture analyses, overall cost-efficiency for NDS may be improved with a temporal design that balances field and laboratory costs along with deposition and degradation rates.
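The cost-efficiency trade-off described above (a longer clearing interval yields more scats per visit, but older scats amplify less often) can be written as a one-line objective and minimized numerically. The cost figures and the exponential decay of amplification success below are illustrative assumptions, not the paper's fitted values; only the accumulation rate echoes the reported coyote estimate:

```python
# Illustrative parameters (hypothetical except the accumulation rate):
ACCUM = 0.076          # scats per km per day (coyote rate from the study)
KM = 50.0              # transect length surveyed per visit
COST_VISIT = 500.0     # field cost of one clearing + collection visit
COST_LAB = 50.0        # lab cost per sample genotyped
HALF_LIFE = 14.0       # days until amplification success halves (assumed)

def success(age_days):
    """Assumed exponential decay of nDNA amplification success with age."""
    return 0.95 * 0.5 ** (age_days / HALF_LIFE)

def cost_per_success(interval):
    """Expected cost per successfully genotyped sample for a given
    clearing interval; scat ages are uniform on [0, interval]."""
    n = ACCUM * KM * interval              # samples collected per visit
    steps = 200                            # numeric integral over ages
    mean_p = sum(success(interval * (i + 0.5) / steps)
                 for i in range(steps)) / steps
    return (COST_VISIT + COST_LAB * n) / (n * mean_p)

best = min(range(1, 61), key=cost_per_success)   # optimal interval in days
```

Short intervals waste the fixed visit cost on few samples; long intervals pay lab costs on degraded samples, so the minimum lands at an intermediate interval, qualitatively matching the paper's ~14-day recommendation.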
NASA Astrophysics Data System (ADS)
Back, Pär-Erik
2007-04-01
A model is presented for estimating the value of information of sampling programs for contaminated soil. The purpose is to calculate the optimal number of samples when the objective is to estimate the mean concentration. A Bayesian risk-cost-benefit decision analysis framework is applied, and the approach is design-based. The model explicitly includes sample uncertainty at a complexity level that can be applied to practical contaminated-land problems with a limited amount of data. Prior information about the contamination level is modelled by probability density functions. The value of information is expressed in monetary terms. The most cost-effective sampling program is the one with the highest expected net value. The model was applied to a metal-contaminated scrap yard in Göteborg, Sweden. The optimal number of samples was determined to be in the range of 16-18 for a remediation unit of 100 m². Sensitivity analysis indicates that the perspective of the decision-maker is important, and that the cost of failure and the future land use are the most important factors to consider. The model can also be applied to other sampling problems, for example, sampling and testing of wastes to meet landfill waste acceptance procedures.
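The value-of-information calculation can be sketched as a small preposterior Monte Carlo: draw the true mean from the prior, simulate a sample mean for each candidate sample size, apply a plug-in remediation decision, and trade misclassification costs against sampling costs. All parameter values below are hypothetical, and the plug-in decision rule is a simplification of the paper's full Bayesian risk-cost-benefit analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical decision problem (units illustrative, not the paper's data):
MU0, SD0 = 80.0, 40.0     # prior on the true mean concentration (mg/kg)
SD_S = 60.0               # sampling + analysis error per sample
LIMIT = 100.0             # regulatory threshold on the mean
C_REMED = 200_000.0       # cost of remediating the unit
C_FAIL = 1_000_000.0      # cost of leaving a contaminated unit in place
C_SAMPLE = 2_000.0        # cost per sample

SIMS = 20_000
truth = rng.normal(MU0, SD0, SIMS)        # preposterior draws of the truth

def expected_cost(n):
    """Expected total cost with n samples and a plug-in decision rule."""
    est = truth + rng.normal(0.0, SD_S / np.sqrt(n), SIMS)  # sample mean
    remediate = est > LIMIT
    cost = np.where(remediate, C_REMED,
                    np.where(truth > LIMIT, C_FAIL, 0.0))
    return cost.mean() + n * C_SAMPLE

costs = {n: expected_cost(n) for n in range(1, 41)}
n_opt = min(costs, key=costs.get)         # highest expected net value
```

More samples shrink the chance of a costly wrong decision but add sampling cost; the optimum is where the marginal value of information equals the marginal sampling cost.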
NASA Astrophysics Data System (ADS)
Shiau, Jenq-Tzong; Wu, Fu-Chun
2007-06-01
The temporal variations of natural flows are essential elements for preserving the ecological health of a river. We address them in this paper through environmental flow schemes that incorporate the intra-annual and interannual variability of the natural flow regime. We present an optimization framework to find the Pareto-optimal solutions for various flow schemes. The proposed framework integrates (1) the range of variability approach for evaluating the hydrologic alterations; (2) the standardized precipitation index approach for establishing the variation criteria for wet, normal, and dry years; (3) a weir operation model for simulating the system of flows; and (4) a multiobjective optimization genetic algorithm for searching for the Pareto-optimal solutions. The proposed framework is applied to the Kaoping diversion weir in Taiwan. The results reveal that the time-varying schemes incorporating intra-annual variability in the environmental flow prescriptions improve the fitness of both ecosystem and human needs. Incorporating interannual flow variability, using different criteria established for the three types of water year, further improves both fitnesses, and this merit may be superimposed on that of incorporating only the intra-annual flow variability. The Pareto-optimal solutions found with a limited range of flows replicate satisfactorily those obtained with a full search range. The limited-range Pareto front may therefore be used as a surrogate for the full-range one if feasible prescriptions are to be found among the regular flows.
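Extracting the Pareto-optimal set from a pool of candidate flow schemes (the non-dominated solutions the genetic algorithm searches for) reduces to a dominance check. A minimal sketch with made-up (ecosystem fitness, human-needs fitness) pairs, both to be maximized:

```python
def pareto_front(points):
    """Non-dominated points when both objectives are to be maximized:
    a point is kept unless some other point is at least as good in both
    objectives (weak dominance)."""
    front = []
    for p in points:
        dominated = any(q != p and q[0] >= p[0] and q[1] >= p[1]
                        for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical candidate schemes: (ecosystem fitness, human-needs fitness)
candidates = [(0.2, 0.9), (0.5, 0.8), (0.5, 0.5),
              (0.7, 0.6), (0.9, 0.2), (0.4, 0.4)]
front = pareto_front(candidates)
```

The surviving points trace the trade-off curve between the two objectives; dominated schemes such as (0.5, 0.5) are discarded because (0.7, 0.6) beats them on both counts.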
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor-of-2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide will establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions, such as extreme pressures associated with small sample sizes.
Pisarska, Margareta D; Akhlaghpour, Marzieh; Lee, Bora; Barlow, Gillian M; Xu, Ning; Wang, Erica T; Mackey, Aaron J; Farber, Charles R; Rich, Stephen S; Rotter, Jerome I; Chen, Yii-der I; Goodarzi, Mark O; Guller, Seth; Williams, John
2016-11-01
Multiple testing to understand global changes in gene expression based on genetic and epigenetic modifications is evolving. Chorionic villi, obtained for prenatal testing, are limited in quantity but can be used to understand ongoing human pregnancies. However, optimal storage, processing and utilization of CVS for multiple platform testing have not been established. Leftover CVS samples were flash-frozen or preserved in RNAlater. Modifications to standard isolation kits were performed to isolate quality DNA and RNA from samples as small as 2-5 mg. RNAlater samples had significantly higher RNA yields and quality and were successfully used in microarray and RNA-sequencing (RNA-seq). RNA-seq libraries generated using 200 versus 800 ng RNA showed similar biological coefficients of variation. RNAlater samples had lower DNA yields and quality, which improved by heating the elution buffer to 70 °C. Purification of DNA was not necessary for bisulfite conversion and genome-wide methylation profiling. CVS cells were propagated and continued to express genes found in freshly isolated chorionic villi. CVS samples preserved in RNAlater are superior. Our optimized techniques provide specimens for genetic, epigenetic and gene expression studies from a single small sample, which can be used to develop diagnostics and treatments using a systems biology approach in the prenatal period. © 2016 John Wiley & Sons, Ltd.
Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S
2014-06-01
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm yet exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to optimize simultaneously the large collection of station parameters and significantly improves
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration-time profile, a population plasma pharmacokinetic model, and the limit of quantification (LOQ) of the BAL method, and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid are ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
Optimal sampling efficiency in Monte Carlo sampling with an approximate potential
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Building on the work of Iftimie et al., Boltzmann sampling of an approximate potential (the 'reference' system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is evaluated at a higher level of approximation (the 'full' system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory (DFT) potentials are discussed.
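The composite-move construction described above can be reproduced on a toy one-dimensional system: run an inner Metropolis chain under the cheap reference potential, then accept or reject the whole excursion using only the endpoint difference of (full minus reference) energies. The potentials below are invented stand-ins (a double well and a perturbed copy of it playing the "reference"), and the isothermal-isobaric aspects of the paper are dropped for brevity:

```python
import math, random

random.seed(1)
BETA = 1.0

def u_full(x):   # "expensive" potential: a double well
    return x**4 - 2.0 * x**2

def u_ref(x):    # cheap approximation driving the inner chain (assumed form)
    return x**4 - 2.0 * x**2 + 0.3 * math.sin(5.0 * x)

def metropolis_step(x, u):
    """One symmetric-proposal Metropolis step under potential u."""
    y = x + random.uniform(-0.5, 0.5)
    if random.random() < math.exp(-BETA * (u(y) - u(x))):
        return y
    return x

x, samples = 0.0, []
for _ in range(20000):
    y = x
    for _ in range(10):                    # inner chain samples u_ref only
        y = metropolis_step(y, u_ref)
    # Composite move: correct for the ref/full mismatch at endpoints only.
    d = (u_full(y) - u_ref(y)) - (u_full(x) - u_ref(x))
    if random.random() < math.exp(-BETA * d):
        x = y
    samples.append(x)

mean_x2 = sum(s * s for s in samples) / len(samples)  # <x^2> under u_full
```

Only two "full" energy evaluations are needed per composite move regardless of the inner chain length, which is exactly the economy the abstract describes; the closer the reference tracks the full potential, the higher the composite acceptance rate.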
Ogungbenro, Kayode; Aarons, Leon
2009-01-01
This paper describes an effective approach for optimizing sampling windows for population pharmacokinetic experiments. Sampling windows have been proposed for population pharmacokinetic experiments conducted in late-phase drug development programs, where patients are enrolled in many centers and out-patient clinic settings. Collection of samples in this uncontrolled environment at fixed times may be problematic and can result in uninformative data. A sampling windows approach is more practicable, as it provides the opportunity to control when samples are collected by allowing some flexibility and yet provides satisfactory parameter estimation. This approach uses D-optimality to specify time intervals around fixed D-optimal time points that result in a specified level of efficiency. The sampling windows have different lengths and achieve two objectives: the joint sampling windows design attains a high specified efficiency level and also reflects the sensitivities of the plasma concentration-time profile to the parameters. It is shown that optimal sampling windows obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in terms of when samples are collected.
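The fixed D-optimal time points around which the windows are built come from maximizing the determinant of the Fisher information matrix. A sketch for an assumed one-compartment bolus model with two parameters (volume and elimination rate), using a plain grid search rather than a dedicated optimal-design package:

```python
import numpy as np
from itertools import combinations

D, V, K = 100.0, 10.0, 0.2     # dose, volume, elimination rate (assumed)

def sens(t):
    """Sensitivity of C(t) = (D/V) * exp(-K*t) w.r.t. (V, K)."""
    c = (D / V) * np.exp(-K * t)
    return np.array([-c / V, -t * c])

def det_fim(times):
    """det of the (unit-variance) Fisher information for these times."""
    J = np.array([sens(t) for t in times])
    return np.linalg.det(J.T @ J)

grid = np.linspace(0.1, 24.0, 120)          # candidate sampling times (h)
t_opt = max(combinations(grid, 2), key=det_fim)
```

For this model the search lands on one sample as early as possible and one near t = 1/K (about 5 h here), the classic pattern for a two-parameter monoexponential; windows are then intervals around those points chosen to keep the design efficiency above a target.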
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
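Conclusion (3) above, that stratified sampling with optimal allocation pays off when strata differ, is easy to quantify with the textbook Neyman-allocation formulas. The stratum weights, means, and standard deviations below are illustrative (mimicking a field with a more variable fine-textured portion), not the paper's data:

```python
# Two strata: weight = areal fraction, mu/sd = moisture mean and SD (vol %).
strata = [
    {"w": 0.6, "mu": 10.0, "sd": 2.0},   # coarse-textured portion
    {"w": 0.4, "mu": 18.0, "sd": 6.0},   # fine-textured, more variable
]
n = 30                                    # total number of samples

mu = sum(s["w"] * s["mu"] for s in strata)
# Population variance = within-strata + between-strata components.
s2_tot = sum(s["w"] * (s["sd"]**2 + (s["mu"] - mu)**2) for s in strata)

var_srs = s2_tot / n                                        # simple random
var_neyman = sum(s["w"] * s["sd"] for s in strata)**2 / n   # Neyman optimal

# Optimal allocation: n_h proportional to W_h * sd_h.
total = sum(s["w"] * s["sd"] for s in strata)
alloc = [round(n * s["w"] * s["sd"] / total) for s in strata]
```

Stratification removes the between-strata component and Neyman allocation concentrates effort in the variable stratum, so the stratified variance is well under half the SRS variance here, mirroring the paper's recommendation when the total sample size can be prespecified.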
A normative inference approach for optimal sample sizes in decisions from experience.
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
"Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.
Sample Optimization for Five Plant-Parasitic Nematodes in an Alfalfa Field
Goodell, P. B.; Ferris, H.
1981-01-01
A data base representing nematode counts and soil weight from 1,936 individual soil cores taken from a 7-ha alfalfa field was used to investigate sample optimization for five plant-parasitic nematodes: Meloidogyne arenaria, Pratylenchus minyus, Merlinius brevidens, Helicotylenchus digonicus, and Paratrichodorus minor. Sample plans were evaluated by the accuracy and reliability of their estimation of the population and by the cost of collecting, processing, and counting the samples. Interactive FORTRAN programs were constructed to simulate four collecting patterns: random; division of the field into square sub-units (cells); and division of the field into rectangular sub-units (strips) running in two directions. Depending on the pattern, sample numbers varied from 1 to 25, with each sample representing from 1 to 50 cores. Each pattern, sample, and core combination was replicated 50 times. Strip stratification north/south was the optimal sampling pattern in this field because it isolated a streak of fine-textured soil. The mathematical optimum was not found because of data range limitations. When practical economic time constraints (5 hr to collect, process, and count nematode samples) are placed on the optimization process, all species estimates deviate no more than 25% from the true mean. If accuracy constraints are placed on the process (no more than 15% deviation from the true field mean), all species except Merlinius required less than 5 hr to complete the sample process. PMID:19300768
Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long
2017-01-01
A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.
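For the minimum-velocity-norm criterion alone, the QP that the recurrent network solves has a closed-form benchmark: minimizing the squared norm of qdot subject to J(q) qdot = xdot gives qdot = pinv(J) @ xdot. The sketch below checks this on a single planar three-link arm with assumed link lengths; the paper's RNN is needed for the harder bi-criteria and joint-limit cases, which have no such closed form:

```python
import numpy as np

# Planar 3-link arm: joint angles q, link lengths L. The end-effector
# Jacobian J(q) is 2x3, so the arm is redundant for a 2-D task.
L = np.array([1.0, 0.8, 0.6])   # link lengths (assumed)

def jacobian(q):
    """Analytic Jacobian of the planar end-effector position."""
    c = np.cumsum(q)             # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

q = np.array([0.3, 0.4, 0.5])            # current configuration
xdot = np.array([0.1, -0.2])             # desired end-effector velocity

# Minimum-velocity-norm (MVN) solution of the equality-constrained QP:
qdot = np.linalg.pinv(jacobian(q)) @ xdot
residual = jacobian(q) @ qdot - xdot     # task-space tracking error
```

The pseudoinverse solution reproduces the commanded task velocity exactly and, among all joint velocities that do so, has the smallest norm, which is precisely the MVN criterion in the velocity-level scheme.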
Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos
2015-02-18
Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in 2 dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without
NASA Astrophysics Data System (ADS)
Lvovs, Aleksandrs; Mutule, Anna
2011-01-01
The paper presents the results of technical and economic calculations performed to assess the validity of using remotely operated disconnectors in the most commonly used 110 kV switchgears, from the standpoint of an optimal reliability level. The paper describes the technical calculations performed (calculations of the reliability level of 110 kV switchgear schemes depending on the type of disconnectors installed) and the economic calculations, which concern the additional costs to the Transmission System Operator and the changes in total customer costs of power supply interruptions.
GLLS for optimally sampled continuous dynamic system modeling: theory and algorithm.
Feng, D; Ho, D; Lau, K K; Siu, W C
1999-04-01
The original generalized linear least squares (GLLS) algorithm was developed for non-uniformly sampled biomedical system parameter estimation using finely sampled instantaneous measurements (D. Feng, S.C. Huang, Z. Wang, D. Ho, An unbiased parametric imaging algorithm for non-uniformly sampled biomedical system parameter estimation, IEEE Trans. Med. Imag. 15 (1996) 512-518). This algorithm is particularly useful for image-wide generation of parametric images with positron emission tomography (PET), as it is computationally efficient and statistically reliable (D. Feng, D. Ho, K. Chen, L.C. Wu, J.K. Wang, R.S. Liu, S.H. Yeh, An evaluation of the algorithms for determining local cerebral metabolic rates of glucose using positron emission tomography dynamic data, IEEE Trans. Med. Imag. 14 (1995) 697-710). However, when dynamic PET image data are sampled according to the optimal image sampling schedule (OISS) to reduce memory and storage space (X. Li, D. Feng, K. Chen, Optimal image sampling schedule: A new effective way to reduce dynamic image storage space and functional image processing time, IEEE Trans. Med. Imag. 15 (1996) 710-718), only a few temporal image frames are recorded (e.g. only four images are recorded for the four-parameter fluoro-deoxy-glucose (FDG) model). These image frames are recorded in terms of accumulated radioactivity counts and, as a result, the direct application of GLLS is not reliable, as instantaneous measurement samples can no longer be approximated by averaging of accumulated measurements over the sampling intervals. In this paper, we extend GLLS to OISS-GLLS, which deals with the fewer accumulated measurement samples obtained from OISS dynamic systems. The theory and algorithm of this new technique are formulated and studied extensively. To investigate the statistical reliability and computational efficiency of OISS-GLLS, a simulation study using dynamic PET data was performed. OISS-GLLS using 4-measurement samples was compared to the non
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
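A minimal sketch of the sample-point-reduction idea described above, assuming a hypothetical 1-D field and a nearest-neighbour prediction error in place of the paper's soil-property kriging. The annealing loop (random swap of one retained site, Metropolis acceptance, cooling schedule) is the generic SA pattern, not the authors' improved variant:

```python
import math
import random

random.seed(0)

# Hypothetical 1-D "field" of soil sample points: (position, measured value).
points = [(x, math.sin(x / 3.0)) for x in range(60)]

def prediction_error(subset):
    """Proxy for prediction accuracy: each retained point predicts its
    nearest neighbours; score is the mean squared prediction error."""
    kept = sorted(subset)
    err = 0.0
    for x, v in points:
        nearest = min(kept, key=lambda k: abs(points[k][0] - x))
        err += (points[nearest][1] - v) ** 2
    return err / len(points)

def anneal(k=10, steps=2000, t0=1.0):
    current = random.sample(range(len(points)), k)
    best, best_err = list(current), prediction_error(current)
    err = best_err
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        cand = list(current)
        cand[random.randrange(k)] = random.randrange(len(points))  # swap one site
        if len(set(cand)) < k:
            continue
        cand_err = prediction_error(cand)
        # Metropolis acceptance: always take improvements, sometimes accept worse.
        if cand_err < err or random.random() < math.exp((err - cand_err) / temp):
            current, err = cand, cand_err
            if err < best_err:
                best, best_err = list(current), err
    return best, best_err

layout, err = anneal()
print(len(layout), round(err, 4))
```

The annealed layout spreads retained sites toward the regions of higher variation, mirroring conclusion (1) of the abstract; a clustered layout (e.g. the first ten points) scores far worse under the same error proxy.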
Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.
Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier
2017-07-10
A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.
NASA Astrophysics Data System (ADS)
Qu, Mengmeng; Jiang, Dazhi; Lu, Lucy X.
2016-11-01
To address the multiscale deformation and fabric development in Earth's ductile lithosphere, micromechanics-based self-consistent homogenization is commonly used to obtain macroscale rheological properties from properties of constituent elements. The homogenization is heavily based on the solution of an Eshelby viscous inclusion in a linear viscous medium and the extension of the solution to nonlinear viscous materials. The homogenization requires repeated numerical evaluation of Eshelby tensors for constituent elements and becomes ever more computationally challenging as the elements are deformed to more elongate or flattened shapes. In this paper, we develop an optimal scheme for evaluating Eshelby tensors, using a combination of a product Gaussian quadrature and the Lebedev quadrature. We first establish, through numerical experiments, an empirical relationship between the inclusion shape and the computational time it takes to evaluate its Eshelby tensors. We then use the relationship to develop an optimal scheme for selecting the most efficient quadrature to obtain the Eshelby tensors. The optimal scheme is applicable to general homogenizations. In this paper, it is implemented in a MATLAB package for investigating the evolution of solitary rigid or deformable inclusions and the development of shape preferred orientations in multi-inclusion systems during deformation. The MATLAB package, upgrading an earlier effort written in MathCad, can be downloaded online.
An algorithm for the weighting matrices in the sampled-data optimal linear regulator problem
NASA Technical Reports Server (NTRS)
Armstrong, E. S.; Caglayan, A. K.
1976-01-01
The sampled-data optimal linear regulator problem provides a means whereby a control designer can use an understanding of continuous optimal regulator design to produce a digital state variable feedback control law which satisfies continuous system performance specifications. A basic difficulty in applying the sampled-data regulator theory is the requirement that certain digital performance index weighting matrices, expressed as complicated functions of system matrices, be computed. Infinite series representations are presented for the weighting matrices of the time-invariant version of the optimal linear sampled-data regulator problem. Error bounds are given for estimating the effect of truncating the series expressions after a finite number of terms, and a method is described for their computer implementation. A numerical example is given to illustrate the results.
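The digital state weighting matrix Qhat = integral_0^T exp(A't) Q exp(At) dt can be evaluated by substituting the exponential series and integrating term by term, much as the abstract describes; the error bounds for truncation are omitted here. A minimal pure-Python sketch (small dense matrices only, no optimized linear algebra):

```python
from math import factorial

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_add(A, B, s=1.0):
    return [[a + s * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def powers(A, n):
    """[I, A, A^2, ..., A^n]."""
    out = [[[float(i == j) for j in range(len(A))] for i in range(len(A))]]
    for _ in range(n):
        out.append(mat_mul(out[-1], A))
    return out

def sampled_q(A, Q, T, terms=12):
    """Truncated series for Qhat = integral_0^T exp(A't) Q exp(At) dt:
    Qhat = sum_{j,k} (A')^j Q A^k T^(j+k+1) / (j! k! (j+k+1))."""
    n = len(A)
    Ap, ATp = powers(A, terms), powers(transpose(A), terms)
    Qhat = [[0.0] * n for _ in range(n)]
    for j in range(terms):
        for k in range(terms):
            coef = T ** (j + k + 1) / (factorial(j) * factorial(k) * (j + k + 1))
            Qhat = mat_add(Qhat, mat_mul(ATp[j], mat_mul(Q, Ap[k])), coef)
    return Qhat

# Scalar sanity check: A = 0 gives Qhat = Q*T exactly.
print(sampled_q([[0.0]], [[2.0]], 0.5))  # [[1.0]]
```

For scalar A = a the series converges to Q*(exp(2aT) - 1)/(2a), which provides a convenient check of the truncation depth.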
Sample size calculation for testing differences between cure rates with the optimal log-rank test.
Wu, Jianrong
2017-01-01
In this article, sample size calculations are developed for use when the main interest is in the differences between the cure rates of two groups. Following the work of Ewell and Ibrahim, the asymptotic distribution of the weighted log-rank test is derived under the local alternative. The optimal log-rank test under the proportional distributions alternative is discussed, and sample size formulas for the optimal and standard log-rank tests are derived. Simulation results show that the proposed formulas provide adequate sample size estimation for trial designs and that the optimal log-rank test is more efficient than the standard log-rank test, particularly when both cure rates and percentages of censoring are small.
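For orientation, the standard (unweighted) log-rank sample size under proportional hazards is given by Schoenfeld's events formula. The sketch below implements that textbook formula, not the article's optimal test or its cure-rate derivation; the `prob_event` argument is a stand-in for the combined effect of cure fractions and censoring:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.8, ratio=1.0):
    """Required number of events for the standard log-rank test under
    proportional hazards (Schoenfeld's formula), two-sided alpha."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    p = ratio / (1 + ratio)            # allocation fraction in group 1
    return ceil((za + zb) ** 2 / (p * (1 - p) * log(hr) ** 2))

def sample_size(hr, prob_event, **kw):
    """Total subjects = events / overall event probability; cured
    (never-failing) patients contribute no events."""
    d = schoenfeld_events(hr, **kw)
    return ceil(d / prob_event)

print(schoenfeld_events(0.7))              # events to detect HR = 0.7 at 80% power
print(sample_size(0.7, prob_event=0.6))
```

With hazard ratio 0.7, 80% power and two-sided alpha of 0.05, the formula gives the familiar 247 events; small event probabilities inflate the subject count sharply, which is the regime where the abstract reports the optimal test gaining the most.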
Mottaz-Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott W.; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.
2008-01-01
Mass spectrometry-based proteomics is a powerful analytical tool for investigating pathogens and their interactions within a host. The sensitivity of such analyses provides broad proteome characterization, but the sample-handling procedures must first be optimized to ensure compatibility with the technique and to maximize the dynamic range of detection. The decision-making process for determining optimal growth conditions, preparation methods, sample analysis methods, and data analysis techniques in our laboratory is discussed herein with consideration of the balance in sensitivity, specificity, and biomass losses during analysis of host-pathogen systems. PMID:19183792
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm, respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
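The flavour of such a normative benchmark can be illustrated with an exact toy computation. The setup below is hypothetical, not the paper's model: two Bernoulli payoff distributions, a final choice by larger sample mean (ties broken at random), and a linear per-draw cost standing in for the opportunity cost of continued exploration:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def expected_payoff(n, p, q):
    """Exact expected payoff of the final choice after n free draws from
    each Bernoulli option: pick the larger sample mean, ties at random."""
    win = tie = 0.0
    for i in range(n + 1):
        for j in range(n + 1):
            pr = binom_pmf(i, n, p) * binom_pmf(j, n, q)
            if i > j:
                win += pr
            elif i == j:
                tie += pr
    p_choose_a = win + 0.5 * tie
    return p_choose_a * p + (1 - p_choose_a) * q

def optimal_sample_size(p, q, cost, n_max=50):
    """Maximise expected payoff minus a per-draw cost (2n draws in total)."""
    return max(range(1, n_max + 1),
               key=lambda n: expected_payoff(n, p, q) - cost * 2 * n)

n_star = optimal_sample_size(0.6, 0.4, cost=0.002)
print(n_star)
```

Larger samples discriminate the two options more reliably but cost more, so the objective is hump-shaped in n; the maximizer plays the role of the "optimal sample size" benchmark discussed in the abstract.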
Chen, Yewei; Lu, Jinmiao; Dong, Min; Wu, Dan; Zhu, Yiqing; Li, Qin; Chen, Chao; Li, Zhiping
2016-12-01
Population pharmacokinetic (popPK) analyses for piperacillin/tazobactam in neonates and infants of less than 2 months of age have been performed by our group previously. The results indicate that a dose of 44.44/5.56 mg/kg piperacillin/tazobactam every 8 or 12 h may not be enough for controlling infection in this population. In order to determine the appropriate dosing regimen and to provide a rationale for the development of dosing guidelines suitable for this population, further popPK studies of piperacillin/tazobactam would need to be conducted. The aim of the present study was to determine the appropriate dosing regimen and optimal sampling schedules in neonates and infants of less than 2 months of age. Pharmacodynamic profiling of piperacillin using Monte Carlo simulation was performed to explore the target attainment probability of different dosing regimens for infections caused by different isolated pathogens. D-optimal designs for piperacillin and tazobactam were conducted separately, and the times that overlapped were chosen as the final sampling scheme for future popPK studies in neonates and young infants of less than 2 months of age. Our findings revealed that compared to the current empirical piperacillin/tazobactam dose regimen (50 mg/kg every 12 h by 5-min infusion in our hospital), the clinical outcome could be improved by increasing doses, increasing administration frequency, and prolonging intravenous infusion in neonates and infants of less than 2 months of age. The following optimal sampling windows were chosen as the final sampling scheme: 0.1-0.11, 0.26-0.29, 0.97-2.62, and 7.95-11.9 h administered every 12 h with 5-min infusion; 0.1-0.12, 0.39-0.56, 2.86-4.95, and 8.91-11.8 h administered every 12 h with 3-h infusion; 0.1-0.11, 0.22-0.29, 0.91-1.96, and 5.56-7.93 h administered every 8 h with 5-min infusion; 0.1-0.11, 0.38-0.48, 2.54-3.82, and 6.86-7.93 h administered every 8 h with 3-h infusion; 0.1-0.11, 0.25-0.28, 0
Yang, Pengyi; Yoo, Paul D; Fernando, Juanita; Zhou, Bing B; Zhang, Zili; Zomaya, Albert Y
2014-03-01
Data sampling is a widely used technique in a broad range of machine learning problems. Traditional sampling approaches generally rely on random resampling from a given dataset. However, these approaches do not take into consideration additional information, such as sample quality and usefulness. We recently proposed a data sampling technique, called sample subset optimization (SSO). The SSO technique relies on a cross-validation procedure for identifying and selecting the most useful samples as subsets. In this paper, we describe the application of SSO techniques to imbalanced and ensemble learning problems, respectively. For imbalanced learning, the SSO technique is employed as an under-sampling technique for identifying a subset of highly discriminative samples in the majority class. In ensemble learning, the SSO technique is utilized as a generic ensemble technique where multiple optimized subsets of samples from each class are selected for building an ensemble classifier. We demonstrate the utilities and advantages of the proposed techniques on a variety of bioinformatics applications where class imbalance, small sample size, and noisy data are prevalent.
Alba, Anna; Morrison, Robert E; Cheeran, Ann; Rovira, Albert; Alvarez, Julio; Perez, Andres M
2017-01-01
Porcine reproductive and respiratory syndrome virus (PRRSv) infection causes a devastating economic impact to the swine industry. Active surveillance is routinely conducted in many swine herds to demonstrate freedom from PRRSv infection. The design of efficient active surveillance sampling schemes is challenging because optimum surveillance strategies may differ depending on infection status, herd structure, management, or resources for conducting sampling. Here, we present an open web-based application, named 'Optisample™', designed to optimize herd sampling strategies to substantiate freedom from infection while also considering the costs of testing. In addition to herd size, expected prevalence, test sensitivity, and desired level of confidence, the model takes into account the presumed risk of pathogen introduction between samples, the structure of the herd, and the process used to select the samples over time. We illustrate the functionality and capacity of 'Optisample™' through its application to active surveillance of PRRSv in hypothetical swine herds under disparate epidemiological situations. Diverse sampling schemes were simulated and compared for each herd to identify effective strategies at low cost. The model results show that to demonstrate freedom from disease, it is important to consider both the epidemiological situation of the herd and the sample selected. The approach illustrated here for PRRSv may be easily extended to other animal disease surveillance systems using the web-based application available at http://stemma.ahc.umn.edu/optisample.
XAFSmass: a program for calculating the optimal mass of XAFS samples
NASA Astrophysics Data System (ADS)
Klementiev, K.; Chernikov, R.
2016-05-01
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows-based program XAFSmass: 1) it is truly platform independent, as provided by the Python language, 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample, and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater
Zahid, Erum; Hussain, Ijaz; Spöck, Gunter; Faisal, Muhammad; Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal
2016-01-01
Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, spatial distribution of sodium concentration in drinking water is modeled and optimized sampling designs for selecting sampling locations is calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e, exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016
NASA Astrophysics Data System (ADS)
Kiesewetter, Simon; Drummond, Peter D.
2017-03-01
A variance reduction method for stochastic integration of Fokker-Planck equations is derived. This unifies the cumulant hierarchy and stochastic equation approaches to obtaining moments, giving a performance superior to either. We show that the brute force method of reducing sampling error by just using more trajectories in a sampled stochastic equation is not the best approach. The alternative of using a hierarchy of moment equations is also not optimal, as it may converge to erroneous answers. Instead, through Bayesian conditioning of the stochastic noise on the requirement that moment equations are satisfied, we obtain improved results with reduced sampling errors for a given number of stochastic trajectories. The method used here converges faster in time-step than Ito-Euler algorithms. This parallel optimized sampling (POS) algorithm is illustrated by several examples, including a bistable nonlinear oscillator case where moment hierarchies fail to converge.
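A much simplified cousin of this idea is moment matching of the sampled noise: conditioning the ensemble of Wiener increments on the lowest moment equations (zero sample mean, unit sample variance) removes the sampling error of the first-moment equation entirely. The sketch below is a hypothetical Ornstein-Uhlenbeck example illustrating the effect, not the paper's POS algorithm:

```python
import random

random.seed(1)

def gaussians(n):
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def moment_match(zs):
    """Shift and rescale an ensemble of noises so its sample mean is exactly 0
    and its sample variance exactly 1 (conditioning the noise on the lowest
    moment equations; a crude stand-in for the Bayesian construction)."""
    n = len(zs)
    m = sum(zs) / n
    centred = [z - m for z in zs]
    s = (sum(z * z for z in centred) / n) ** 0.5
    return [z / s for z in centred]

def simulate(n_traj=200, n_steps=100, dt=0.01, matched=True):
    xs = [1.0] * n_traj              # Ornstein-Uhlenbeck: dx = -x dt + dW
    for _ in range(n_steps):
        zs = gaussians(n_traj)
        if matched:
            zs = moment_match(zs)
        xs = [x - x * dt + z * dt ** 0.5 for x, z in zip(xs, zs)]
    return sum(xs) / n_traj

exact = (1 - 0.01) ** 100            # exact ensemble mean of the Euler map
print(abs(simulate(matched=True) - exact), abs(simulate(matched=False) - exact))
```

Because the matched noise sums to exactly zero at every step, the ensemble mean follows the deterministic first-moment recursion with no sampling error at all, while the raw ensemble mean carries an error of order 1/sqrt(n_traj).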
NASA Astrophysics Data System (ADS)
Kong, Weijing; Wan, Yuhang; Du, Kun; Zhao, Wenhui; Wang, Shuang; Zheng, Zheng
2016-11-01
The change in reflected intensity of the Bloch-surface-wave (BSW) resonance as influenced by the loss of a truncated one-dimensional photonic crystal structure is numerically analyzed and studied in order to enhance the sensitivity of Bloch-surface-wave-based sensors. The finite truncated one-dimensional photonic crystal structure is designed to excite the BSW mode for water (n=1.33) as the external medium and for p-polarized plane-wave incident light. The intensity interrogation scheme, which can be operated on a typical Kretschmann prism-coupling configuration by measuring the reflected intensity change of the resonance dip, is investigated to optimize the sensitivity. A figure of merit (FOM) is introduced to measure the performance of the one-dimensional photonic crystal multilayer structure under the scheme. The detection sensitivities are calculated under different device parameters with a refractive index change corresponding to different solutions of glycerol in de-ionized (DI) water. The results show that the intensity sensitivity curve varies similarly to the FOM curve and that the sensitivity of the Bloch-surface-wave sensor is greatly affected by the device loss, for which an optimized loss value can be obtained. For low-loss BSW devices, the intensity interrogation sensing sensitivity may drop sharply from the optimal value. On the other hand, the performance of the detection scheme is less affected by higher device loss. This observation is in accordance with BSW experimental sensing demonstrations as well. The results obtained could be useful for improving the performance of Bloch-surface-wave sensors for the investigated sensing scheme.
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
An Optimal Spatial Sampling Design for Intra-Urban Population Exposure Assessment.
Kumar, Naresh
2009-02-01
This article offers an optimal spatial sampling design that captures maximum variance with the minimum sample size. The proposed sampling design addresses the weaknesses of the sampling design that Kanaroglou et al. (2005) used for identifying 100 sites for capturing population exposure to NO(2) in Toronto, Canada. Their sampling design suffers from a number of weaknesses and fails to capture the spatial variability in NO(2) effectively. The demand surface they used is spatially autocorrelated and weighted by the population size, which leads to the selection of redundant sites. The location-allocation model (LAM) available with the commercial software packages, which they used to identify their sample sites, is not designed to solve spatial sampling problems using spatially autocorrelated data. A computer application (written in C++) that utilizes a spatial search algorithm was developed to implement the proposed sampling design. This design was implemented in three different urban environments - namely Cleveland, OH; Delhi, India; and Iowa City, IA - to identify optimal sample sites for monitoring airborne particulates.
Hatjimihail, Aristides T
2009-06-09
An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system and estimation of the risk caused by the measurement error. Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure and the state mean time matrices. Optimal sampling time intervals can then be defined as those minimizing a QC-related cost measure while the rr remains acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC-related cost. For the optimization, the reliability analysis of the analytical system and the risk analysis of the measurement error are needed.
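The structure of the optimization can be illustrated with a deliberately crude model. The sketch assumes a single failure mode with constant failure rate and takes the residual risk to be the expected fraction of results reported from an undetected failed state, replacing the state-transition machinery of the abstract with a closed form; all numbers are hypothetical:

```python
from math import exp

def residual_risk(lam, T):
    """Expected fraction of results reported while the system is in an
    undetected failure state, for failure rate lam and QC interval T
    (failures are assumed to be caught at the next QC event):
    rr(T) = 1 - (1 - exp(-lam*T)) / (lam*T)."""
    return 1.0 - (1.0 - exp(-lam * T)) / (lam * T)

def optimal_interval(lam, acceptable, candidates):
    """Longest QC interval (fewest QC events, hence lowest QC-related cost)
    whose residual risk stays within the acceptable bound."""
    ok = [T for T in candidates if residual_risk(lam, T) <= acceptable]
    return max(ok) if ok else None

lam = 1 / 500.0                         # one failure per 500 h on average
candidates = [1, 2, 4, 8, 12, 24, 48, 96]
T = optimal_interval(lam, acceptable=0.01, candidates=candidates)
print(T, round(residual_risk(lam, T), 4))
```

Since rr grows roughly like lam*T/2 for short intervals, the acceptable-risk bound translates directly into a maximum interval, and cost minimization picks the longest interval under that bound.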
Multi-resolution imaging with an optimized number and distribution of sampling points.
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo
2014-05-05
We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis.
Shen, Xiong; Zong, Chao; Zhang, Guoqiang
2012-01-01
Finding the optimal sampling positions for measurement of ventilation rates in a naturally ventilated building using tracer gas is a challenge. Affected by the wind and the opening status, the representative positions inside the building may change dynamically at any time. An optimization procedure using the Response Surface Methodology (RSM) was conducted. In this method, the concentration field inside the building was estimated by a third-order RSM polynomial model. The experimental sampling positions used to develop the model were chosen from the cross-sectional area of a pitched-roof building. The Optimal Design method, which can decrease the bias of the model, was adopted to select these sampling positions. Experiments with a scale model building were conducted in a wind tunnel to obtain observed values at those positions. Finally, models for different cases of opening states and wind conditions were established, and the optimum sampling position was obtained with a desirability level of up to 92% inside the model building. The optimization was further confirmed by another round of experiments.
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2017-09-27
For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied previously. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved.
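In the stratified case, the classical benchmark for optimal allocation is Neyman allocation, n_h proportional to N_h * S_h. The sketch below shows that textbook rule on hypothetical strata; it is not the IST-specific allocation derived in the article, which additionally accounts for the perturbation introduced by the indirect questioning:

```python
def neyman_allocation(n_total, strata):
    """Neyman allocation for stratified sampling: allocate n_h proportional
    to N_h * S_h, minimising the variance of the stratified mean estimator."""
    weights = [N * S for N, S in strata]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical strata: (population size N_h, std dev S_h of the sensitive item).
strata = [(5000, 2.0), (3000, 6.0), (2000, 1.0)]
print(neyman_allocation(600, strata))   # [200, 360, 40]
```

The small but highly variable middle stratum receives the largest share of the sample, which is the efficiency gain over proportional allocation that the simulation studies in the abstract quantify for the IST setting.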
Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability
2015-07-01
Harajli, Marwan M. (Graduate Student, Dept. of Civil and Environ..., Seattle, USA); Royset, Johannes O. (Associate Professor, Operations Research Dept., Naval Postgraduate School, Monterey, USA)
Engineering design is ... criterion is usually the failure probability. In this paper, we examine the buffered failure probability as an attractive alternative to the failure
Validation of genetic algorithm-based optimal sampling for ocean data assimilation
NASA Astrophysics Data System (ADS)
Heaney, Kevin D.; Lermusiaux, Pierre F. J.; Duda, Timothy F.; Haley, Patrick J.
2016-10-01
Regional ocean models are capable of forecasting conditions for usefully long intervals of time (days) provided that initial and ongoing conditions can be measured. In resource-limited circumstances, the placement of sensors in optimal locations is essential. Here, a nonlinear optimization approach to determine optimal adaptive sampling that uses the genetic algorithm (GA) method is presented. The method determines sampling strategies that minimize a user-defined physics-based cost function. The method is evaluated using identical twin experiments, comparing hindcasts from an ensemble of simulations that assimilate data selected using the GA adaptive sampling and other methods. For skill metrics, we employ the reduction of the ensemble root mean square error (RMSE) between the "true" data-assimilative ocean simulation and the different ensembles of data-assimilative hindcasts. A five-glider optimal sampling study is set up for a 400 km × 400 km domain in the Middle Atlantic Bight region, along the New Jersey shelf-break. Results are compared for several ocean and atmospheric forcing conditions.
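The GA ingredient can be sketched independently of the ocean model. The toy below replaces the physics-based cost function with a simple worst-case coverage criterion on a hypothetical 10 x 10 grid; selection, crossover and mutation follow the generic GA pattern rather than the paper's specific operators:

```python
import random

random.seed(7)

GRID = [(x, y) for x in range(10) for y in range(10)]   # 10 x 10 model domain

def cost(positions):
    """Physics-free stand-in for the paper's cost function: worst-case
    squared distance from any grid cell to its nearest sampling platform."""
    return max(min((gx - x) ** 2 + (gy - y) ** 2 for x, y in positions)
               for gx, gy in GRID)

def genetic_sampling(n_platforms=5, pop_size=30, generations=40):
    pop = [random.sample(GRID, n_platforms) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]                  # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = random.sample(list(set(a + b)), n_platforms)  # crossover
            if random.random() < 0.3:                             # mutation
                child[random.randrange(n_platforms)] = random.choice(GRID)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = genetic_sampling()
print(best, cost(best))
```

Because the top half of each generation survives unchanged, the best layout found is never lost, and the evolved five-platform layout covers the domain far better than a clustered placement would.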
NASA Astrophysics Data System (ADS)
Qi, Shengqi; Hou, Deyi; Luo, Jian
2017-09-01
This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match field drawdown data, and the calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator for ensuring groundwater sampling representativeness, in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low-flow purging; however, a high purging rate in a low-permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process and can be used to optimize sampling strategies under various hydrogeological conditions.
A Novel Method of Failure Sample Selection for Electrical Systems Using Ant Colony Optimization
Tian, Shulin; Yang, Chenglin; Liu, Cheng
2016-01-01
The influence of failure propagation is ignored in failure sample selection based on the traditional testability demonstration experiment method. Traditional failure sample selection generally omits some failures during the selection, and this omission can pose serious risks in use because the omitted failures may trigger severe propagation failures. This paper proposes a new failure sample selection method to solve this problem. First, the method uses a directed graph and ant colony optimization (ACO) to obtain a subsequent failure propagation set (SFPS) based on a failure propagation model; we then propose a new failure sample selection method based on the size of the SFPS. Compared with the traditional sampling plan, this method improves the coverage of tested failure samples, increases diagnostic capacity, and decreases the risk of use. PMID:27738424
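The notion of a subsequent failure propagation set can be illustrated with plain graph reachability; the paper builds its SFPS with ant colony optimization, so the sketch below is a simplified stand-in, and the propagation model is hypothetical.

```python
def subsequent_failure_propagation_set(graph, failure):
    """All failures reachable from `failure` in a directed failure-propagation
    graph (adjacency-list dict) - a plain-reachability stand-in for the SFPS
    that the paper constructs with ant colony optimization."""
    seen, stack = set(), [failure]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Hypothetical propagation model: F1 triggers F2 and F3, F2 triggers F4.
propagation = {"F1": ["F2", "F3"], "F2": ["F4"], "F3": [], "F4": []}
sizes = {f: len(subsequent_failure_propagation_set(propagation, f))
         for f in propagation}
```

Failures with large SFPS sizes (here F1) are the ones whose omission from the sample set carries the most propagation risk.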
Optimal sampling of antipsychotic medicines: a pharmacometric approach for clinical practice
Perera, Vidya; Bies, Robert R; Mo, Gary; Dolton, Michael J; Carr, Vaughan J; McLachlan, Andrew J; Day, Richard O; Polasek, Thomas M; Forrest, Alan
2014-01-01
Aim To determine optimal sampling strategies to allow the calculation of clinical pharmacokinetic parameters for selected antipsychotic medicines using a pharmacometric approach. Methods This study utilized previous population pharmacokinetic parameters of the antipsychotic medicines aripiprazole, clozapine, olanzapine, perphenazine, quetiapine, risperidone (including 9-OH risperidone) and ziprasidone. d-optimality was utilized to identify time points which accurately predicted the pharmacokinetic parameters (and expected error) of each drug at steady-state. A standard two stage population approach (STS) with MAP-Bayesian estimation was used to compare area under the concentration–time curves (AUC) generated from sparse optimal time points and rich extensive data. Monte Carlo Simulation (MCS) was used to simulate 1000 patients with population variability in pharmacokinetic parameters. Forward stepwise regression analysis was used to determine the most predictive time points of the AUC for each drug at steady-state. Results Three optimal sampling times were identified for each antipsychotic medicine. For aripiprazole, clozapine, olanzapine, perphenazine, risperidone, 9-OH risperidone, quetiapine and ziprasidone the CV% of the apparent clearance using optimal sampling strategies were 19.5, 8.6, 9.5, 13.5, 12.9, 10.0, 16.0 and 10.7, respectively. Using the MCS and linear regression approach to predict AUC, the recommended sampling windows were 16.5–17.5 h, 10–11 h, 23–24 h, 19–20 h, 16.5–17.5 h, 22.5–23.5 h, 5–6 h and 5.5–6.5 h, respectively. Conclusion This analysis provides important sampling information for future population pharmacokinetic studies and clinical studies investigating the pharmacokinetics of antipsychotic medicines. PMID:24773369
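D-optimality, as used above, chooses sampling times that maximize the determinant of the Fisher information matrix. A minimal brute-force sketch for an IV-bolus one-compartment model follows; the model, dose and parameter values are assumptions for illustration, much simpler than the published population models.

```python
import itertools
import math

def fisher_det(times, dose=100.0, cl=5.0, v=50.0):
    """det(J'J) for an IV-bolus one-compartment model C(t) = (D/V)*exp(-(CL/V)*t),
    where J holds the sensitivities of C with respect to (CL, V) at the
    candidate sampling times."""
    k = cl / v
    rows = []
    for t in times:
        c = dose / v * math.exp(-k * t)
        dc_dcl = -c * t / v                       # dC/dCL
        dc_dv = c * (cl * t / v ** 2 - 1.0 / v)   # dC/dV
        rows.append((dc_dcl, dc_dv))
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    d = sum(r[1] * r[1] for r in rows)
    return a * d - b * b

# Brute-force D-optimal triple over a half-hourly grid spanning one day.
candidates = [0.5 * i for i in range(1, 49)]
best = max(itertools.combinations(candidates, 3), key=fisher_det)
```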
Hirt, Déborah; Van Overmeire, Bart; Treluyer, Jean-Marc; Langhendries, Jean-Paul; Marguglio, Arnaud; Eisinger, Mark J; Schepens, Paul; Urien, Saïk
2008-01-01
AIMS To describe ibuprofen pharmacokinetics in preterm neonates with patent ductus arteriosus (PDA) and to establish relationships between doses, plasma concentrations and ibuprofen efficacy and safety. METHODS Sixty-six neonates were treated with median daily doses of 10, 5 and 5 mg kg^-1 of ibuprofen-lysine by intravenous infusion on 3 consecutive days. A population pharmacokinetic model was developed with NONMEM. Bayesian individual pharmacokinetic estimates were used to calculate areas under the curve (AUC) and to simulate doses. A logistic regression was performed on PDA closure. RESULTS Ibuprofen pharmacokinetics were described by a one-compartment model with linear elimination. Mean population pharmacokinetic estimates with corresponding intersubject variabilities (%) were: elimination clearance CL = 9.49 ml h^-1 (62%) and volume of distribution V = 375 ml (72%). Ibuprofen CL significantly increased with postnatal age (PNA): CL = 9.49*(PNA/96.3)^1.49. AUC after the first dose (AUC1D), the sum of AUC after the three doses (AUC3D) and gestational age were significantly higher in 57 neonates with closing PDA than in nine neonates without PDA closure (P = 0.02). PDA closure was observed in 50% of the neonates when AUC1D < 600 mg l^-1 h (or AUC3D < 900 mg l^-1 h) and in 91% when AUC1D > 600 mg l^-1 h (or AUC3D > 900 mg l^-1 h) (P = 0.006). No correlation between AUC and side-effects could be demonstrated. CONCLUSIONS To achieve these optimal AUCs, irrespective of gestational age, three administrations at 24 h intervals are recommended: 10, 5, 5 mg kg^-1 for neonates younger than 70 h; 14, 7, 7 mg kg^-1 for neonates between 70 and 108 h; and 18, 9, 9 mg kg^-1 for neonates between 108 and 180 h. WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT Ibuprofen is a nonsteroidal anti-inflammatory agent that induces closure of the patent ductus arteriosus in neonates. Few studies of ibuprofen pharmacokinetics have been performed and were limited to small groups of preterm
Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States
NASA Astrophysics Data System (ADS)
Sousan, Sinan Dhia Jameel
This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
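The core OI analysis step combines a model background with observations, weighted by their error covariances. A minimal numpy sketch (the state, covariances and values are illustrative, not from the thesis):

```python
import numpy as np

def optimal_interpolation(x_b, B, y, R, H):
    """One optimal-interpolation analysis step:
    x_a = x_b + K (y - H x_b), with gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_b + K @ (y - H @ x_b)

# Two-cell toy state; a single observation of the first cell.
x_b = np.array([10.0, 12.0])              # background PM2.5 estimates
B = np.array([[4.0, 2.0], [2.0, 4.0]])    # background error covariance
H = np.array([[1.0, 0.0]])                # observation operator: cell 0 only
R = np.array([[1.0]])                     # observation error variance
y = np.array([14.0])
x_a = optimal_interpolation(x_b, B, y, R, H)
```

Note how the correlated background errors spread the single observation's correction into the unobserved cell: x_a works out to [13.2, 13.6].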
Optimization of low-background alpha spectrometers for analysis of thick samples.
Misiaszek, M; Pelczar, K; Wójcik, M; Zuzel, G; Laubenstein, M
2013-11-01
Results of alpha spectrometric measurements performed deep underground and above ground with and without active veto show that the underground measurement of thick samples is the most sensitive method due to significant reduction of the muon-induced background. In addition, the polonium diffusion requires for some samples an appropriate selection of an energy region in the registered spectrum. On the basis of computer simulations the best counting conditions are selected for a thick lead sample in order to optimize the detection limit. Copyright © 2013 Elsevier Ltd. All rights reserved.
Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples
NASA Astrophysics Data System (ADS)
Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav
2014-05-01
Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in waters by the ultra-low background liquid scintillation spectrometer Quantulus 1220, the sample/scintillant ratio was optimized, appropriate scintillation cocktails were chosen and compared in terms of efficiency, background and minimal detectable activity (MDA), and the effects of chemi- and photoluminescence and of the scintillant/vial combination were examined. The ASTM D4107-08 (2006) method had been applied successfully in our laboratory for two years. During our most recent sample preparation, however, we noticed a serious quench effect in the sample count rates that could be a consequence of contamination by DMSO. The goal of this paper is to present the development in our laboratory of the direct method proposed by Pujol and Sanchez-Cabeza (1999), which turned out to be faster and simpler than the ASTM method while we deal with the problem of neutralizing DMSO in the apparatus. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. To test the optimization of the system for this method, tritium levels were determined in Danube river samples and in several samples within an intercomparison with the Ruđer Bošković Institute (IRB).
Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case
Schmerling, Edward; Janson, Lucas; Pavone, Marco
2015-01-01
Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041
ERIC Educational Resources Information Center
Foster, Geraldine R. K.; Tickle, Martin
2013-01-01
Background and objective: Some districts in the United Kingdom (UK), where the level of child dental caries is high and water fluoridation has not been possible, implement school-based fluoridated milk (FM) schemes. However, process variables, such as consent to drink FM and loss of children as they mature, impede the effectiveness of these…
NASA Astrophysics Data System (ADS)
Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal
Searching for low-energy β contaminations in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands fine control of the whole chain, from sampling to the measurement itself. In this paper we therefore focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for results relative to some parameters specific to PerkinElmer, most of the results presented here can be extended to other counters.
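Energy-window optimization by Figure of Merit maximization is commonly done with FoM = E²/B (E: counting efficiency, B: background in the window). A sketch with made-up per-channel spectra, scanning all contiguous windows:

```python
def best_energy_window(channels, eff, bkg):
    """Scan every contiguous [lo, hi] channel window and return the one that
    maximizes the LSC figure of merit FoM = E**2 / B, where E is the summed
    efficiency and B the summed background over the window."""
    best, best_fom = None, -1.0
    for lo in range(len(channels)):
        e_sum = b_sum = 0.0
        for hi in range(lo, len(channels)):
            e_sum += eff[hi]
            b_sum += bkg[hi]
            fom = e_sum ** 2 / b_sum if b_sum > 0 else 0.0
            if fom > best_fom:
                best_fom, best = fom, (channels[lo], channels[hi])
    return best, best_fom

# Hypothetical per-channel efficiency and background spectra.
channels = list(range(8))
eff = [1, 4, 9, 12, 10, 5, 2, 1]   # efficiency contribution per channel, %
bkg = [5, 3, 2, 2, 2, 3, 6, 9]     # background counts per channel
window, fom = best_energy_window(channels, eff, bkg)
```

The optimum trims away channels where background grows faster than efficiency, which is exactly why a narrower window can beat the full spectrum.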
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
Huang, Yanqin; Li, Qilong; Ge, Weiting; Hu, Yue; Cai, Shanrong; Yuan, Ying; Zhang, Suzhan; Zheng, Shu
2016-03-01
The fecal immunochemical test (FIT) that quantifies hemoglobin concentration is reported to be better than qualitative FIT, but the reason for its superiority has not been explained. To evaluate and understand this superiority, a representative randomly selected population (n=2355) in Jiashan County, China, aged 40-74 years, was invited for colorectal cancer screening in 2012. Three fecal samples were collected from each participant by one optimized and two common sampling devices, and then tested by both quantitative and qualitative FITs. Colonoscopy was provided independently to all participants. The performances of five featured screening strategies were compared. A total of 1020 participants were eligible. For screening advanced neoplasia, the positive predictive value (PPV) and the specificity of the strategy that tested one sample dissolved in an optimized device by quantitative FIT [PPV=40.8%, 95% confidence interval (CI): 27.1-54.6; specificity=96.8%, 95% CI: 95.7-98.0] were significantly improved over the strategy that tested one sample dissolved in the common device by qualitative FIT (PPV=14.1%, 95% CI: 8.2-19.9; specificity=87.9%, 95% CI: 85.8-89.9), whereas the sensitivity did not differ (39.2 and 37.3%, P=0.89). A similar disparity in performance was observed between the strategies using qualitative FIT to test one sample dissolved in optimized (PPV=29.5%, 95% CI: 18.1-41.0; specificity=95.3%, 95% CI: 94.0-96.7) versus common sampling devices. High sensitivity for advanced neoplasia was observed in the strategy that tested two samples by qualitative FIT (52.9%, 95% CI: 39.2-66.6). Quantitative FIT is better than qualitative FIT for screening advanced colorectal neoplasia. However, the fecal sampling device might contribute most significantly toward the superiority of quantitative FIT.
A method to optimize sampling locations for measuring indoor air distributions
NASA Astrophysics Data System (ADS)
Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan
2015-02-01
Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected and interpolated to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, the labor costs, and the accuracy of the field interpolation. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained were then interpolated to form field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method has a 32.6% smaller interpolation error than the grid sampling method. We derived the relationship between the interpolation error and the sampling size (the number of sampling points). According to this relationship, the sampling size has an optimal value, and the maximum sampling size can be determined by the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.
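A one-dimensional caricature of the gradient-based idea: rank candidate locations by the local gradient magnitude of a coarse pre-scan and keep the steepest ones. The field values below are hypothetical; the study itself works with 3-D fields and Kriging interpolation.

```python
def gradient_based_samples(field, n):
    """Keep the n interior locations where a coarse 1-D pre-scan changes
    fastest (central-difference gradient magnitude), returned sorted."""
    grads = [abs(field[i + 1] - field[i - 1]) / 2.0 for i in range(1, len(field) - 1)]
    order = sorted(range(1, len(field) - 1),
                   key=lambda i: grads[i - 1], reverse=True)
    return sorted(order[:n])

field = [0, 0, 0, 1, 4, 9, 9, 9, 9]   # a sharp front around indices 3-5
picks = gradient_based_samples(field, 3)
```

The picks cluster on the front, where interpolation from a uniform grid would err most.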
Schmerling, Edward; Janson, Lucas; Pavone, Marco
2015-01-01
In this paper we provide a thorough, rigorous theoretical framework to assess optimality guarantees of sampling-based algorithms for drift control systems: systems that, loosely speaking, can not stop instantaneously due to momentum. We exploit this framework to design and analyze a sampling-based algorithm (the Differential Fast Marching Tree algorithm) that is asymptotically optimal, that is, it is guaranteed to converge, as the number of samples increases, to an optimal solution. In addition, our approach allows us to provide concrete bounds on the rate of this convergence. The focus of this paper is on mixed time/control energy cost functions and on linear affine dynamical systems, which encompass a range of models of interest to applications (e.g., double-integrators) and represent a necessary step to design, via successive linearization, sampling-based and provably-correct algorithms for non-linear drift control systems. Our analysis relies on an original perturbation analysis for two-point boundary value problems, which could be of independent interest. PMID:26997749
Optimizing Spatio-Temporal Sampling Designs of Synchronous, Static, or Clustered Measurements
NASA Astrophysics Data System (ADS)
Helle, Kristina; Pebesma, Edzer
2010-05-01
When sampling spatio-temporal random variables, the cost of a measurement may differ according to the setup of the whole sampling design: static measurements, i.e. repeated measurements at the same location, synchronous measurements or clustered measurements may be cheaper per measurement than completely individual sampling. Such "grouped" measurements may however not be as good as individually chosen ones because of redundancy. Often, the overall cost rather than the total number of measurements is fixed. A sampling design with grouped measurements may allow for a larger number of measurements thus outweighing the drawback of redundancy. The focus of this paper is to include the tradeoff between the number of measurements and the freedom of their location in sampling design optimisation. For simple cases, optimal sampling designs may be fully determined. To predict e.g. the mean over a spatio-temporal field having known covariance, the optimal sampling design often is a grid with density determined by the sampling costs [1, Ch. 15]. For arbitrary objective functions sampling designs can be optimised relocating single measurements, e.g. by Spatial Simulated Annealing [2]. However, this does not allow to take advantage of lower costs when using grouped measurements. We introduce a heuristic that optimises an arbitrary objective function of sampling designs, including static, synchronous, or clustered measurements, to obtain better results at a given sampling budget. Given the cost for a measurement, either within a group or individually, the algorithm first computes affordable sampling design configurations. The number of individual measurements as well as kind and number of grouped measurements are determined. Random locations and dates are assigned to the measurements. Spatial Simulated Annealing is used on each of these initial sampling designs (in parallel) to improve them. In grouped measurements either the whole group is moved or single measurements within the
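Spatial Simulated Annealing, the inner optimizer named above, can be written in skeleton form: perturb the design (relocate one measurement, or a whole synchronous group) and accept worse designs with probability exp(-Δ/T) while cooling. The objective and proposal below are toys, not the paper's grouped-measurement designs.

```python
import math
import random

def spatial_simulated_annealing(objective, design, propose, steps=2000, t0=1.0, seed=0):
    """Skeleton Spatial Simulated Annealing: perturb `design` with `propose`
    and accept worse candidates with probability exp(-delta/T) under a
    geometric cooling schedule; track the best design seen."""
    rng = random.Random(seed)
    current = best = design
    f_cur = f_best = objective(current)
    for step in range(steps):
        temp = t0 * 0.995 ** step                  # geometric cooling
        cand = propose(current, rng)
        f_cand = objective(cand)
        if f_cand <= f_cur or rng.random() < math.exp(-(f_cand - f_cur) / temp):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
    return best, f_best

# Toy objective: place 4 stations on [0, 1] minimizing the largest gap.
def objective(design):
    pts = sorted(design)
    gaps = [pts[0]] + [b - a for a, b in zip(pts, pts[1:])] + [1.0 - pts[-1]]
    return max(gaps)

def propose(design, rng):
    d = list(design)
    d[rng.randrange(len(d))] = rng.random()        # relocate one station
    return tuple(d)

best, f_best = spatial_simulated_annealing(objective, (0.1, 0.2, 0.3, 0.4), propose)
```

Swapping `propose` for a group-move proposal is all it takes to handle static, synchronous, or clustered measurements in this skeleton.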
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
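The qualitative behavior, an optimal trial size that grows with the population but sublinearly, can be reproduced with a toy decision-theoretic utility. The form below (normal-approximation power, fixed per-patient cost) is an assumption for illustration, not the paper's exponential-family utility.

```python
import math

def expected_utility(n, pop_size, delta=0.2, sigma=1.0, cost_per_patient=0.01):
    """Toy utility (an assumed form, not the paper's): a two-arm trial with n
    patients per arm has approximate power Phi(sqrt(n/2)*delta/sigma - 1.96)
    for effect delta; the pop_size - 2n patients outside the trial gain delta
    only if the trial succeeds, and each enrolled patient costs a fixed amount."""
    z = math.sqrt(n / 2.0) * delta / sigma - 1.96
    power = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (pop_size - 2 * n) * delta * power - cost_per_patient * 2 * n

def optimal_n(pop_size):
    """Exhaustively maximize the utility over feasible per-arm sizes."""
    return max(range(1, pop_size // 2), key=lambda n: expected_utility(n, pop_size))

n_small, n_large = optimal_n(10000), optimal_n(40000)
```

Quadrupling the population increases the optimal trial size, but by far less than a factor of four, in line with the square-root asymptotics.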
Kraschnewski, Jennifer L; Keyserling, Thomas C; Bangdiwala, Shrikant I; Gizlice, Ziya; Garcia, Beverly A; Johnston, Larry F; Gustafson, Alison; Petrovic, Lindsay; Glasgow, Russell E; Samuel-Hodge, Carmen D
2010-01-01
Studies of type 2 translation, the adaption of evidence-based interventions to real-world settings, should include representative study sites and staff to improve external validity. Sites for such studies are, however, often selected by convenience sampling, which limits generalizability. We used an optimized probability sampling protocol to select an unbiased, representative sample of study sites to prepare for a randomized trial of a weight loss intervention. We invited North Carolina health departments within 200 miles of the research center to participate (N = 81). Of the 43 health departments that were eligible, 30 were interested in participating. To select a representative and feasible sample of 6 health departments that met inclusion criteria, we generated all combinations of 6 from the 30 health departments that were eligible and interested. From the subset of combinations that met inclusion criteria, we selected 1 at random. Of 593,775 possible combinations of 6 counties, 15,177 (3%) met inclusion criteria. Sites in the selected subset were similar to all eligible sites in terms of health department characteristics and county demographics. Optimized probability sampling improved generalizability by ensuring an unbiased and representative sample of study sites.
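The enumerate-filter-draw protocol described above maps directly onto itertools: generate all combinations, keep those meeting inclusion criteria, and select one at random. The site names, populations and criterion below are hypothetical stand-ins for the health-department data.

```python
import itertools
import random

def sample_eligible_combination(sites, k, meets_criteria, seed=1):
    """Enumerate all C(len(sites), k) subsets, keep those satisfying the
    inclusion criteria, and draw one uniformly at random."""
    eligible = [c for c in itertools.combinations(sites, k) if meets_criteria(c)]
    return random.Random(seed).choice(eligible), len(eligible)

# Hypothetical criterion: combined catchment population within a feasible band.
sites = {f"HD{i:02d}": pop for i, pop in enumerate(
    [12, 35, 8, 50, 22, 17, 41, 9, 28, 33], start=1)}  # populations, thousands

def meets_criteria(combo):
    return 90 <= sum(sites[s] for s in combo) <= 120

chosen, n_eligible = sample_eligible_combination(list(sites), 3, meets_criteria)
```

Because the draw is uniform over the eligible subsets, every feasible combination has the same selection probability, which is what makes the sample unbiased.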
Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik
2016-04-01
The aim of this paper is to contribute to a more rapid determination of a series of samples containing (90)Sr by making the Cherenkov measurement of the daughter nuclide (90)Y more time efficient. There are many instances when optimization of the measurement method is favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, maximizing the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of (90)Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less than when using the same measurement time for all samples. It was found that by optimization the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm.
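The two ingredients of this tradeoff, (90)Y ingrowth after separation and an MDA-driven counting time, can be sketched as follows. The Currie-style MDA expression and all parameter values (efficiency, background, volume) are illustrative assumptions, not the paper's.

```python
import math

Y90_HALF_LIFE_H = 64.1   # half-life of the daughter nuclide (90)Y, in hours

def ingrowth_fraction(t_h):
    """Fraction of the secular-equilibrium (90)Y activity present t_h hours
    after a clean strontium separation: 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-math.log(2.0) / Y90_HALF_LIFE_H * t_h)

def required_count_time(mda_target_bq_l, ingrowth_h, eff=0.6, bkg_cps=0.013, vol_l=0.01):
    """Smallest counting time (minutes) meeting a fixed MDA, using a
    Currie-style expression MDA = (2.71 + 4.65*sqrt(B*t)) / (eff*t*V*f_ingrowth).
    All parameter values are illustrative."""
    f = ingrowth_fraction(ingrowth_h)
    for t_min in range(1, 100000):
        t_s = 60.0 * t_min
        mda = (2.71 + 4.65 * math.sqrt(bkg_cps * t_s)) / (eff * t_s * vol_l * f)
        if mda <= mda_target_bq_l:
            return t_min
    return None

# Waiting longer for ingrowth buys shorter counting times; the paper's model
# optimizes this jointly across a series of samples.
t_48 = required_count_time(1.0, 48)
t_96 = required_count_time(1.0, 96)
```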
NASA Astrophysics Data System (ADS)
Ren, Danping; Wu, Shanshan; Zhang, Lijing
2016-09-01
In view of the global control and flexible monitoring characteristics of software-defined networks (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system time delay and energy consumption.
Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L; Date, Hiroyuki
2015-11-01
Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionations. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose-volume histograms for tumor and normal tissues of organs around the tumor. Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved under the constraint that the radiation effect on the tumor is fixed by a graphical method. Here, the damage effect on the OAR was estimated based on the dose-volume histogram. It was found that the optimization of fractionation scheme incorporating the dose-volume histogram is possible by employing appropriate cell surviving models. The graphical method considering the repopulation of tumor cells and a rectilinear response in the high dose range enables them to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8-32 fractions with a daily dose of 2.2-6.3 Gy. It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., dose-volume histogram) by the graphical method considering the effects on tumor and OARs around the tumor. This method may stipulate a new guideline to optimize the fractionation regimen for physics-guided fractionation.
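Under the plain linear-quadratic model (without the repopulation and high-dose-linearity corrections the authors add), the tumor-isoeffect constraint has a closed form, and the OAR effect for each candidate fraction number follows directly. The α/β values below are common textbook choices, used here only for illustration.

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose of n fractions of d Gy under the
    linear-quadratic model: BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

def isoeffective_regimens(target_bed_tumor, alpha_beta_tumor=10.0,
                          alpha_beta_oar=3.0, n_range=range(1, 41)):
    """For each fraction number n, solve n*d*(1 + d/ab) = BED for the dose per
    fraction d (root of d**2/ab + d - BED/n = 0), then report the OAR BED -
    the quantity one would minimize when picking (n, d)."""
    out = []
    ab = alpha_beta_tumor
    for n in n_range:
        d = (-1.0 + (1.0 + 4.0 * target_bed_tumor / (n * ab)) ** 0.5) * ab / 2.0
        out.append((n, d, bed(n, d, alpha_beta_oar)))
    return out

# Tumor BED of a conventional 30 x 2 Gy course: 30*2*(1 + 2/10) = 72 Gy.
regimens = isoeffective_regimens(72.0)
```

With the OAR's lower α/β, hypofractionated entries (few fractions, large d) carry a much higher OAR BED than the conventional 30-fraction row, which is the tension the paper's graphical method resolves.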
Sample volume optimization for radon-in-water detection by liquid scintillation counting.
Schubert, Michael; Kopitz, Juergen; Chałupnik, Stanisław
2014-08-01
Radon is used as an environmental tracer in a wide range of applications, particularly in aquatic environments. If liquid scintillation counting (LSC) is used as the detection method, the radon has to be transferred from the water sample into a scintillation cocktail. Whereas the volume of the cocktail is generally given by the size of standard LSC vials (20 ml), the water sample volume is not specified. The aim of the study was an optimization of the water sample volume, i.e., its minimization without risking a significant decrease in LSC count rate and hence in counting statistics. An equation is introduced that allows calculation of the ²²²Rn concentration initially present in a water sample as a function of the volumes of the water sample, sample flask headspace and scintillation cocktail, the applicable radon partition coefficient, and the detected count rate. It was shown that water sample volumes exceeding about 900 ml do not result in a significant increase in count rate and hence counting statistics. On the other hand, sample volumes considerably smaller than about 500 ml lead to noticeably lower count rates (and poorer counting statistics). Thus, water sample volumes of about 500-900 ml should be chosen for LSC radon-in-water detection if 20 ml vials are applied.
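The abstract's central equation is not reproduced in the record, so the following is a hedged sketch of the underlying mass balance: radon partitions at equilibrium between the water, the flask headspace, and the cocktail, and the initial water concentration is back-calculated from the activity detected in the cocktail. The partition coefficients, counting efficiency, and volumes below are illustrative placeholders, not the paper's values:

```python
def radon_in_water(count_rate, eff, V_w, V_hs, V_c, K_cw=40.0, K_aw=4.0):
    """Estimate the 222Rn concentration (Bq/L) initially present in a water sample.

    Assumes equilibrium partitioning between cocktail and water
    (K_cw = c_cocktail / c_water) and between headspace air and water
    (K_aw = c_air / c_water); both coefficient values are illustrative.
    count_rate is the net detected rate, eff the counting efficiency,
    and V_w, V_hs, V_c the water, headspace and cocktail volumes (L).
    """
    A_c = count_rate / eff          # activity residing in the cocktail (Bq)
    c_c = A_c / V_c                 # cocktail concentration
    c_w = c_c / K_cw                # equilibrium water concentration
    c_a = K_aw * c_w                # equilibrium headspace concentration
    A_total = c_w * V_w + c_a * V_hs + A_c
    return A_total / V_w            # concentration initially in the water sample
```

The diminishing-returns behavior the study reports follows directly: once V_w dominates the other terms, further increases barely change the cocktail activity per unit concentration.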
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
BILISOLY, ROGER L.; MCKENNA, SEAN A.
2003-01-01
Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no
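The design criterion described above can be sketched for point samples (a transect being a sequential set of such points): compute the prediction error covariance on a reference grid under an assumed spatial covariance model, then score the design by the sum or product of its eigenvalues. The exponential covariance and its parameters below are illustrative assumptions, and the simple-kriging form stands in for whatever estimator the report uses:

```python
import numpy as np

def prediction_error_cov(samples, grid, corr_len=1.0, sigma2=1.0):
    """Simple-kriging prediction error covariance on a reference grid for a
    given sample design (rows are 2D coordinates). Exponential covariance
    model assumed; parameters are illustrative."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sigma2 * np.exp(-d / corr_len)
    K = cov(samples, samples) + 1e-9 * np.eye(len(samples))  # jitter for stability
    k = cov(samples, grid)
    return cov(grid, grid) - k.T @ np.linalg.solve(K, k)

def design_quality(samples, grid):
    """Return the two criteria from the abstract: sum and product of the
    eigenvalues of the prediction error covariance (smaller is better)."""
    ev = np.linalg.eigvalsh(prediction_error_cov(samples, grid))
    ev = np.clip(ev, 0.0, None)
    return ev.sum(), ev.prod()
```

Since the score depends only on sample locations, not observed values, it can be evaluated for any candidate design prior to fieldwork, which is exactly what makes the MPEV reduction usable as a stopping rule.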
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
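The weighted binary matrix sampling idea can be illustrated with a toy sketch: each variable enters a sub-model with an adaptive inclusion probability, the best-performing sub-models update those probabilities, and the variable space shrinks toward the informative set. The error function below is a synthetic stand-in for cross-validated calibration error, and none of this is the authors' code:

```python
import random

def wbms_select(n_var, informative, n_models=200, n_iter=15, seed=0):
    """Toy weighted binary matrix sampling: draw binary sub-models with
    per-variable weights, keep the best 10%, and set each new weight to the
    variable's frequency among those winners (so the space shrinks each step).
    The 'error' rewards including the listed informative variables and
    penalizes model size; purely illustrative."""
    rng = random.Random(seed)
    w = [0.5] * n_var
    for _ in range(n_iter):
        models = [[rng.random() < w[j] for j in range(n_var)] for _ in range(n_models)]
        errors = [sum(1 for j in informative if not m[j]) + 0.1 * sum(m) for m in models]
        best = sorted(range(n_models), key=lambda i: errors[i])[:n_models // 10]
        for j in range(n_var):
            w[j] = sum(models[i][j] for i in best) / len(best)
    return [j for j in range(n_var) if w[j] > 0.5]
```

The two rules from the abstract appear directly: weights (and hence the sampled space) contract every iteration, and they contract toward sub-models that outperformed the previous population.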
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on i-band absolute magnitude (M i ), or, for a small subset of our sample, M i and color (NUV - i). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M i and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R e ), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5× {10}8≤slant {M}* ≤slant 3× {10}11 {M}⊙ {h}-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples
Riediger, Irina N.; Hoffmaster, Alex R.; Biondo, Alexander W.; Ko, Albert I.; Stoddard, Robyn A.
2016-01-01
Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed a strong linearity in a range of concentrations from 106 to 10° leptospires/mL and lower limits of detection ranging from <1 cell /ml for river water to 36 cells/mL for ultrapure water with E. coli as a carrier. In conclusion, we optimized a method to quantify pathogenic Leptospira in environmental waters (river, pond and sewage) which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden. PMID:27487084
Sturkenboom, Marieke G. G.; Mulder, Leonie W.; de Jager, Arthur; van Altena, Richard; Aarnoutse, Rob E.; de Lange, Wiel C. M.; Proost, Johannes H.; Kosterink, Jos G. W.; van der Werf, Tjip S.
2015-01-01
Rifampin, together with isoniazid, has been the backbone of the current first-line treatment of tuberculosis (TB). The ratio of the area under the concentration-time curve from 0 to 24 h (AUC0–24) to the MIC is the best predictive pharmacokinetic-pharmacodynamic parameter for determinations of efficacy. The objective of this study was to develop an optimal sampling procedure based on population pharmacokinetics to predict AUC0–24 values. Patients received rifampin orally once daily as part of their anti-TB treatment. A one-compartmental pharmacokinetic population model with first-order absorption and lag time was developed using observed rifampin plasma concentrations from 55 patients. The population pharmacokinetic model was developed using an iterative two-stage Bayesian procedure and was cross-validated. Optimal sampling strategies were calculated using Monte Carlo simulation (n = 1,000). The geometric mean AUC0–24 value was 41.5 (range, 13.5 to 117) mg · h/liter. The median time to maximum concentration of drug in serum (Tmax) was 2.2 h, ranging from 0.4 to 5.7 h. This wide range indicates that obtaining a concentration level at 2 h (C2) would not capture the peak concentration in a large proportion of the population. Optimal sampling using concentrations at 1, 3, and 8 h postdosing was considered clinically suitable with an r2 value of 0.96, a root mean squared error value of 13.2%, and a prediction bias value of −0.4%. This study showed that the rifampin AUC0–24 in TB patients can be predicted with acceptable accuracy and precision using the developed population pharmacokinetic model with optimal sampling at time points 1, 3, and 8 h. PMID:26055359
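The underlying model, a one-compartment disposition with first-order absorption and a lag time, can be sketched as follows. The parameter values are illustrative, not the fitted population estimates, and the trapezoidal AUC0-24 stands in for the Bayesian estimation used in the paper:

```python
import math

def conc(t, dose=600.0, ka=1.0, ke=0.2, V=50.0, tlag=0.5):
    """Plasma concentration (mg/L) for a one-compartment oral model with
    first-order absorption (ka), elimination (ke), volume V and lag time.
    All parameter values are illustrative assumptions."""
    if t <= tlag:
        return 0.0
    t = t - tlag
    return dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def auc_0_24(n=2400, **kw):
    """Trapezoidal AUC from 0 to 24 h for the model above."""
    h = 24.0 / n
    ts = [i * h for i in range(n + 1)]
    cs = [conc(t, **kw) for t in ts]
    return h * (sum(cs) - 0.5 * (cs[0] + cs[-1]))
```

With a wide Tmax range like the one reported (0.4 to 5.7 h), a single C2 sample can clearly miss the peak, which is why a limited-sampling design at 1, 3, and 8 h plus a population model is used to recover AUC0-24.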
Morley, Shannon M.; Seiner, Brienne N.; Finn, Erin C.; Greenwood, Lawrence R.; Smith, Steven C.; Gregory, Stephanie J.; Haney, Morgan M.; Lucas, Dawn D.; Arrigo, Leah M.; Beacham, Tere A.; Swearingen, Kevin J.; Friese, Judah I.; Douglas, Matthew; Metz, Lori A.
2015-05-01
Mixed fission and activation materials resulting from various nuclear processes and events contain a wide range of isotopes for analysis spanning almost the entire periodic table. In some applications such as environmental monitoring, nuclear waste management, and national security a very limited amount of material is available for analysis and characterization so an integrated analysis scheme is needed to measure multiple radionuclides from one sample. This work describes the production of a complex synthetic sample containing fission products, activation products, and irradiated soil and determines the percent recovery of select isotopes through the integrated chemical separation scheme. Results were determined using gamma energy analysis of separated fractions and demonstrate high yields of Ag (76 ± 6%), Au (94 ± 7%), Cd (59 ± 2%), Co (93 ± 5%), Cs (88 ± 3%), Fe (62 ± 1%), Mn (70 ± 7%), Np (65 ± 5%), Sr (73 ± 2%) and Zn (72 ± 3%). Lower yields (< 25%) were measured for Ga, Ir, Sc, and W. Based on the results of this experiment, a complex synthetic sample can be prepared with low atom/fission ratios and isotopes of interest accurately and precisely measured following an integrated chemical separation method.
Spectral gap optimization of order parameters for sampling complex molecular systems
Tiwary, Pratyush; Berne, B. J.
2016-01-01
In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
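The spectral-gap idea can be illustrated in miniature: given a trial CV, bin its stationary distribution, build a nearest-neighbor transition-rate matrix consistent with that distribution (a crude stand-in for the maximum-caliber construction in SGOOP), and score the CV by the gap between its slow and fast eigenvalues. Everything below is a simplified sketch, not SGOOP itself:

```python
import numpy as np

def spectral_gap(p_stationary, k0=1.0):
    """Toy SGOOP-style score: build a nearest-neighbor rate matrix obeying
    detailed balance with the supplied stationary distribution along a trial
    CV, then return the gap between the slowest nonzero eigenvalues."""
    p = np.asarray(p_stationary, float)
    n = len(p)
    K = np.zeros((n, n))
    for i in range(n - 1):
        K[i, i + 1] = k0 * np.sqrt(p[i] / p[i + 1])   # rate bin i+1 -> bin i
        K[i + 1, i] = k0 * np.sqrt(p[i + 1] / p[i])   # rate bin i -> bin i+1
    np.fill_diagonal(K, -K.sum(axis=0))               # columns sum to zero
    ev = np.sort(np.linalg.eigvals(K).real)[::-1]     # ev[0] = 0 > ev[1] >= ...
    return ev[1] - ev[2]
```

A good CV resolves the metastable states, producing a deep barrier in its stationary distribution and hence a large separation between the one slow relaxation mode and the fast intra-well modes; a poor CV flattens the distribution and closes the gap.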
Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2014-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
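The single-core sampling probability P can be estimated by Monte Carlo once a tumor shape and an error model are assumed. The sketch below uses a sphere and an isotropic Gaussian needle-delivery error with 3.5 mm RMS, a simplification of the paper's registered, irregular tumor surfaces:

```python
import math, random

def hit_probability(tumor_radius, rms_error=3.5, n_trials=100_000, seed=1):
    """Monte Carlo estimate of P(a single core samples the tumor) when the
    needle is aimed at the centroid of a spherical tumor (radius in mm) and
    the 3D delivery error is isotropic Gaussian with the given RMS."""
    rng = random.Random(seed)
    sigma = rms_error / math.sqrt(3.0)   # per-axis std from the 3D RMS error
    hits = 0
    for _ in range(n_trials):
        x, y, z = (rng.gauss(0.0, sigma) for _ in range(3))
        hits += (x * x + y * y + z * z) <= tumor_radius ** 2
    return hits / n_trials
```

Small tumors fall well short of P = 95% under a 3.5 mm RMS error, which is consistent with the paper's finding that most lesions need more than one core to reach the desired sampling probability.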
Damage identification in beams using speckle shearography and an optimal spatial sampling
NASA Astrophysics Data System (ADS)
Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.
2016-10-01
Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
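The noise-amplification argument behind an optimal spatial sampling can be made concrete for the second-order central difference: truncation error grows as h² while measurement noise is amplified as 1/h², so an intermediate step h balances the two. The error model below is the textbook one, not the paper's shearography-specific analysis:

```python
def optimal_spacing(noise, f4_max):
    """Spatial step minimizing e(h) = f4_max*h^2/12 + 4*noise/h^2, the sum of
    the central-difference truncation error and worst-case noise amplification.
    Setting de/dh = 0 gives h* = (48*noise/f4_max)**0.25. Illustrative model."""
    return (48.0 * noise / f4_max) ** 0.25

def curvature(f, i, h):
    """Second-order central difference: modal curvature from rotations/displacements."""
    return (f[i - 1] - 2.0 * f[i] + f[i + 1]) / (h * h)
```

This is why finite differences of shearography data cannot simply use the finest available pitch: halving h below h* quadruples the noise term while the truncation term was already negligible.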
Tavakoli, Rouhollah
2016-01-01
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study the spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least perimeter periodic space partitioning problem. •Developing a penalization strategy to avoid trivial solutions. •Presentation of the MATLAB implementation of the introduced algorithm.
Optimal adaptive group sequential design with flexible timing of sample size determination.
Cui, Lu; Zhang, Lanju; Yang, Bo
2017-04-26
Flexible sample size designs, including group sequential and sample size re-estimation designs, have been used as alternatives to fixed sample size designs to achieve more robust statistical power and better trial efficiency. In this work, a new representation of the sample size re-estimation design suggested by Cui et al. [5,6] is introduced as an adaptive group sequential design with flexible timing of sample size determination. This generalized adaptive group sequential design allows one-time sample size determination either before the start of or in the mid-course of a clinical study. The new approach leads to possible design optimization on an expanded space of design parameters. Its equivalence to the sample size re-estimation design proposed by Cui et al. provides further insight on re-estimation design and helps to address common confusions and misunderstanding. Issues in designing a flexible sample size trial, including design objective, performance evaluation and implementation, are touched upon with an example to illustrate. Copyright © 2017. Published by Elsevier Inc.
Tiwari, P; Xie, Y; Chen, Y; Deasy, J
2014-06-01
Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
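The clustering-based voxel sampling described in the Methods can be sketched as follows. The nearest-seed clustering is a simple stand-in for whatever clustering the authors used, and the influence "signatures" here are plain feature vectors:

```python
import random

def sample_voxels(influence, boundary, rate=0.1, k=5, seed=0):
    """Sketch of influence-clustered voxel sampling: keep every boundary voxel,
    group interior voxels by similarity of their influence-matrix rows
    (nearest of k randomly seeded centroids), and draw a fixed fraction from
    each cluster. Illustrative, not the paper's implementation."""
    rng = random.Random(seed)
    interior = [i for i in range(len(influence)) if i not in boundary]
    seeds = rng.sample(interior, min(k, len(interior)))
    def nearest(i):
        return min(seeds, key=lambda s: sum((a - b) ** 2
                                            for a, b in zip(influence[i], influence[s])))
    clusters = {}
    for i in interior:
        clusters.setdefault(nearest(i), []).append(i)
    keep = set(boundary)
    for members in clusters.values():
        n = max(1, round(rate * len(members)))   # pre-set sampling rate per cluster
        keep.update(rng.sample(members, n))
    return sorted(keep)
```

Keeping all boundary voxels preserves the dose gradients at organ surfaces, while per-cluster sampling ensures every distinct influence pattern in the interior stays represented among the constraints.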
Frazier, M T; Finley, J; Harkness, W; Rajotte, E G
2000-06-01
The introduction of parasitic honey bee mites, the tracheal mite, Acarapis woodi (Rennie), in 1984 and the Varroa mite, Varroa jacobsoni, in 1987 has dramatically increased the winter mortality of honey bee, Apis mellifera L., colonies in many areas of the United States. Some beekeepers have minimized their losses by routinely treating their colonies with menthol, currently the only Environmental Protection Agency-approved and available chemical for tracheal mite control. Menthol is also expensive and can interfere with honey harvesting. Because of inadequate sampling techniques and a lack of information concerning treatment, this routine treatment strategy has increased the possibility that tracheal mites will develop resistance to menthol. It is important to establish economic thresholds and treat colonies with menthol only when treatment is warranted rather than treating all colonies regardless of infestation level. The use of sequential sampling may reduce the amount of time and effort expended in examining individual colonies and determining if treatment is necessary. Sequential sampling also allows statistically based estimates of the percentage of bees in standard Langstroth hives infested with mites while controlling for the possibility of incorrectly assessing the amount of infestation. On average, sequential sampling plans require fewer observations (bees) to reach a decision for specified probabilities of type I and type II errors than are required for fixed sampling plans, especially when the proportion of infested bees is either very low or very high. We developed a sequential sampling decision plan that allows the user to choose specific economic injury levels and the probabilities of making type I and type II errors, which can result in considerable savings in time, labor and expense.
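A sequential sampling decision plan of this kind can be sketched with Wald's sequential probability ratio test. The infestation thresholds and error rates below are illustrative, not the published plan:

```python
import math

def sprt_decision(infested, n, p0=0.1, p1=0.3, alpha=0.05, beta=0.05):
    """Wald SPRT for an infestation proportion: p0 is the acceptable level,
    p1 the economic-injury level warranting treatment, alpha/beta the type I
    and type II error probabilities (all values illustrative).
    Returns 'treat', 'no_treat', or 'continue' (sample another bee)."""
    llr = (infested * math.log(p1 / p0)
           + (n - infested) * math.log((1 - p1) / (1 - p0)))
    if llr >= math.log((1 - beta) / alpha):
        return "treat"
    if llr <= math.log(beta / (1 - alpha)):
        return "no_treat"
    return "continue"
```

Because the log-likelihood ratio drifts quickly when the true proportion is far from both thresholds, heavily infested and nearly clean colonies are classified after very few bees, which is exactly why sequential plans need fewer observations than fixed-size plans at the extremes.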
A two-stage method to determine optimal product sampling considering dynamic potential market.
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in potential market based on the characteristics of independent product and presents a two-stage method to figure out the sampling level. The impact analysis of the key factors on the sampling level shows that the increase of the external coefficient or internal coefficient has a negative influence on the sampling level. And the changing rate of the potential market has no significant influence on the sampling level whereas the repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a whole analysis of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters in the case of inaccuracy of the parameters and to be able to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovational way to estimate the sampling level.
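The external and internal coefficients mentioned above are the innovation and imitation parameters of a Bass-type diffusion model. The toy sketch below (not the paper's model, which adds a dynamic potential market and repeat purchase) shows how free samples seed the adopter pool:

```python
def bass_adoption(m=10000, p=0.01, q=0.4, samples=500, periods=30):
    """Bass-type diffusion sketch: free samples seed the initial adopter pool;
    p is the external (innovation) coefficient and q the internal (imitation)
    coefficient. All parameter values are illustrative."""
    adopters = float(samples)
    history = [adopters]
    for _ in range(periods):
        remaining = m - adopters
        adopters += (p + q * adopters / m) * remaining   # Bass hazard * remaining market
        history.append(adopters)
    return history
```

The sampling-level trade-off in the paper arises because each additional free sample costs money but raises early adoption through the imitation term q*A/m; larger p or q lets word-of-mouth do that work instead, lowering the optimal sampling level, as the impact analysis reports.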
Bontha, J.R.; Golcar, G.R.; Hannigan, N.
2000-08-29
The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft ID and 15-ft height dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.
Optimized sample preparation of endoscopic collected pancreatic fluid for SDS-PAGE analysis.
Paulo, Joao A; Lee, Linda S; Wu, Bechien; Repas, Kathryn; Banks, Peter A; Conwell, Darwin L; Steen, Hanno
2010-07-01
The standardization of methods for human body fluid protein isolation is a critical initial step for proteomic analyses aimed to discover clinically relevant biomarkers. Several caveats have hindered pancreatic fluid proteomics, including the heterogeneity of samples and protein degradation. We aim to optimize sample handling of pancreatic fluid that has been collected using a safe and effective endoscopic collection method (endoscopic pancreatic function test). Using SDS-PAGE protein profiling, we investigate (i) precipitation techniques to maximize protein extraction, (ii) auto-digestion of pancreatic fluid following prolonged exposure to a range of temperatures, (iii) effects of multiple freeze-thaw cycles on protein stability, and (iv) the utility of protease inhibitors. Our experiments revealed that TCA precipitation resulted in the most efficient extraction of protein from pancreatic fluid of the eight methods we investigated. In addition, our data reveal that although auto-digestion of proteins is prevalent at 23 and 37 degrees C, incubation on ice significantly slows such degradation. Similarly, when the sample is maintained on ice, proteolysis is minimal during multiple freeze-thaw cycles. We have also determined the addition of protease inhibitors to be assay-dependent. Our optimized sample preparation strategy can be applied to future proteomic analyses of pancreatic fluid.
Optimized Sample Preparation of Endoscopic (ePFT) Collected Pancreatic Fluid for SDS-PAGE Analysis
Paulo, Joao A.; Lee, Linda S.; Wu, Bechien; Repas, Kathryn; Banks, Peter A.; Conwell, Darwin L.; Steen, Hanno
2011-01-01
The standardization of methods for human body fluid protein isolation is a critical initial step for proteomic analyses aimed to discover clinically-relevant biomarkers. Several caveats have hindered pancreatic fluid proteomics, including the heterogeneity of samples and protein degradation. We aim to optimize sample handling of pancreatic fluid that has been collected using a safe and effective endoscopic collection method (ePFT). Using SDS-PAGE protein profiling, we investigate (1) precipitation techniques to maximize protein extraction, (2) auto-digestion of pancreatic fluid following prolonged exposure to a range of temperatures, (3) effects of multiple freeze-thaw cycles on protein stability, and (4) the utility of protease inhibitors. Our experiments revealed that trichloroacetic acid (TCA) precipitation resulted in the most efficient extraction of protein from pancreatic fluid of the eight methods we investigated. In addition, our data reveal that although auto-digestion of proteins is prevalent at 23°C and 37°C, incubation on ice significantly slows such degradation. Similarly, when the sample is maintained on ice, proteolysis is minimal during multiple freeze-thaw cycles. We have also determined the addition of protease inhibitors to be assay-dependent. Our optimized sample preparation strategy can be applied to future proteomic analyses of pancreatic fluid. PMID:20589857
Beliaeff, B.; Claisse, D.; Smith, P.J.
1995-12-31
In the French Monitoring Network, trace element and organic contaminant concentrations in biota have been measured for 15 years on a quarterly basis at over 80 sites scattered along the French coastline. A reduction in the sampling effort may be needed as a result of budget restrictions. A constant budget, however, would allow the advancement of certain research and development projects, such as the feasibility of new chemical analyses. The basic problem confronting the program sampling design optimization is finding optimal numbers of sites in a given non-heterogeneous area and of sampling events within a year at each site. First, they determine a site-specific cost function integrating analysis, personnel, and computer costs. Then, within-year and between-site variance components are estimated from the results of a linear model which includes a seasonal component. These two steps provide a cost-precision optimum for each contaminant. An example is given using the data from the 4 sites of the Loire estuary. Over all sites, significant U-shaped trends are estimated for Pb, PCBs, ΣDDT and α-HCH, while PAHs show a significant inverted U-shaped curve. For most chemicals the within-year variance appears to be much higher than the between-site variance. This leads to the conclusion that, for this case, reducing the number of sites by a factor of two is preferable economically and in terms of monitoring efficiency to reducing the sampling frequency by the same factor. Further implications for the French Monitoring Network are discussed.
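The two-step optimization described above can be sketched as a grid search: given between-site and within-year variance components and a per-site cost function, choose the number of sites (n) and of sampling events per site (m) that minimize the variance of the network mean under a budget constraint. All numbers below are illustrative placeholders, not the network's estimates:

```python
def optimal_design(budget, c_site=1000.0, c_sample=150.0,
                   var_between=4.0, var_within=9.0):
    """Grid search for the cost-precision optimum: minimize
    Var(mean) = var_between/n + var_within/(n*m)
    subject to n*(c_site + m*c_sample) <= budget.
    Returns (variance, n_sites, samples_per_site); values illustrative."""
    best = None
    for n in range(1, 200):
        for m in range(1, 53):                      # at most weekly sampling
            if n * (c_site + m * c_sample) > budget:
                break
            v = var_between / n + var_within / (n * m)
            if best is None or v < best[0]:
                best = (v, n, m)
    return best
```

The abstract's conclusion falls out of this structure: when within-year variance dominates, cutting sites (which divides both variance terms) hurts precision less per franc saved than cutting the sampling frequency, which only inflates the within-year term.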
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metric uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metric performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about 9 times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metric uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
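The ɛ-dominance relation that gives ɛ-NSGAII its name can be stated compactly. The multiplicative form below is one common variant for minimization objectives, shown only to illustrate the archiving rule that coarsens the Pareto front and keeps the behavioral-set search efficient:

```python
def eps_dominates(a, b, eps=0.05):
    """Multiplicative epsilon-dominance for minimization objectives (one common
    formulation): a epsilon-dominates b if a is at least as good as b in every
    objective once b is relaxed by a factor (1 + eps). The eps value controls
    the resolution of the archived Pareto front."""
    return all(ai <= (1.0 + eps) * bi for ai, bi in zip(a, b))
```

Archiving only solutions that are not ɛ-dominated caps the number of retained points per region of objective space, which is what lets the algorithm maintain a well-spread front without the bookkeeping cost of an exact Pareto archive.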
Optimized Ar(+)-ion milling procedure for TEM cross-section sample preparation.
Dieterle, Levin; Butz, Benjamin; Müller, Erich
2011-11-01
High-quality samples are indispensable for every reliable transmission electron microscopy (TEM) investigation. In order to predict optimized parameters for the final Ar(+)-ion milling preparation step, topographical changes of symmetrical cross-section samples during the sputtering process were modeled by two-dimensional Monte-Carlo simulations. Si was used as the model system because the sputtering yield of Ar(+)-ions on Si is well known and Si is easy to prepare mechanically. The simulations are based on a modified parameterized description of the sputtering yield of Ar(+)-ions on Si summarized from the literature. The formation of a wedge-shaped profile, as commonly observed during double-sector ion milling of cross-section samples, was reproduced by the simulations, independent of the sputtering angle. Moreover, the simulations predict that alternating single-sector ion milling produces wide, plane-parallel sample areas. These findings were validated by a systematic ion-milling study (single-sector vs. double-sector milling at various sputtering angles) using Si cross-section samples as well as two other material-science examples. The presented systematic single-sector ion-milling procedure is applicable to most Ar(+)-ion mills that allow simultaneous milling from both sides of a TEM sample (top and bottom) in an azimuthally restricted sector perpendicular to the central epoxy line of the cross-sectional TEM sample. The procedure is based on alternating milling of the two halves of the TEM sample instead of double-sector milling of the whole sample. Furthermore, various other practical aspects are addressed, such as the dependence of the topographical quality of the final sample on parameters such as epoxy thickness and incident angle.
Analysis of the optimal sampling rate for state estimation in sensor networks with delays.
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo
2017-03-27
When addressing the problem of state estimation in sensor networks, the effects of communications on estimator performance are often neglected. High accuracy requires a high sampling rate, but this leads to higher channel load and longer delays, which in turn worsen estimation performance. This paper studies the problem of determining the optimal sampling rate for state estimation in sensor networks from a theoretical perspective that takes into account traffic generation, a model of network behaviour and the effect of delays. Some theoretical results about Riccati and Lyapunov equations applied to sampled systems are derived, and a solution is obtained for the ideal case of perfect sensor information. This result is also of interest for non-ideal sensors, as in some cases it works as an upper bound on the optimisation solution.
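As a hedged illustration of why longer sample periods hurt estimation, consider the scalar plant x' = ax + w with perfect measurements at each sample (the ideal case mentioned above): the prediction-error variance that accumulates between samples grows with the period h. The plant parameters below are hypothetical, and this scalar sketch is far simpler than the paper's full Riccati/Lyapunov analysis with delays.

```python
import math

def error_growth(a, q, h):
    """Prediction-error variance accumulated over one sample period h for the
    scalar plant x' = a*x + w (process-noise intensity q), assuming the state
    was known exactly at the previous sample (perfect-sensor case).
    This is the exact integral of e^{2a(h-s)} * q over one period."""
    return q * (math.exp(2 * a * h) - 1) / (2 * a)

# Longer sampling periods let more uncertainty accumulate between updates:
h_values = [0.05, 0.1, 0.2, 0.4]
variances = [error_growth(-1.0, 1.0, h) for h in h_values]
# variances is strictly increasing in h
```

In the full problem this benefit of fast sampling is traded off against the channel load and delay it induces.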
Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil.
Silvestri, Erin E; Feldhake, David; Griffin, Dale; Lisle, John; Nichols, Tonya L; Shah, Sanjiv R; Pemberton, Adin; Schaefer, Frank W
2016-11-01
Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries. Copyright © 2016. Published by Elsevier B.V.
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by a drop in viral load below detection levels) for combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and the time required for the viral load to fall below the detection level show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it as an important factor in deciding individual drug regimens.
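Latin hypercube sampling, as used above to generate patient scenarios, stratifies each parameter's range into n equal intervals and draws exactly one value per interval, with intervals paired randomly across parameters. A minimal sketch (the parameter names and ranges are hypothetical, not taken from the paper):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Basic Latin hypercube sample: each parameter's range is split into
    n_samples equal strata, and each stratum contributes exactly one value."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)                      # pair strata randomly across dims
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples   # uniform draw within stratum s
            samples[i][d] = lo + u * (hi - lo)
    return samples

# 100 virtual patients over two hypothetical parameters, e.g. an
# infected-hepatocyte death rate and a virion clearance rate:
patients = latin_hypercube(100, [(0.1, 1.0), (2.0, 8.0)])
```

Compared with plain random sampling, the stratification guarantees that every marginal range is covered even with modest sample counts.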
Stauffer, Eric
2006-09-01
This paper reviews the literature on the analysis of vegetable (and animal) oil residues from fire debris samples. The examination sequence starts with the solvent extraction of the residues from the substrate. The extract is then prepared for instrumental analysis by derivatizing fatty acids (FAs) into fatty acid methyl esters. The analysis is then carried out by gas chromatography or gas chromatography-mass spectrometry. The interpretation of the results is a difficult operation seriously limited by a lack of research on the subject. The present data analysis scheme utilizes FA ratios to determine the presence of vegetable oils and their propensity to self-heat and possibly, to spontaneously ignite. Preliminary work has demonstrated that it is possible to detect chemical compounds specific to an oil that underwent spontaneous ignition. Guidelines to conduct future research in the analysis of vegetable oil residues from fire debris samples are also presented.
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2017-01-01
Equivalence assessment is becoming an increasingly important topic in many application areas including behavioral and social sciences research. Although there exist more powerful tests, the two one-sided tests (TOST) procedure is a technically transparent and widely accepted method for establishing statistical equivalence. Alternatively, a direct…
Chenel, Marylore; Ogungbenro, Kayode; Duval, Vincent; Laveille, Christian; Jochemsen, Roeline; Aarons, Leon
2005-12-01
The objective of this paper is to determine optimal blood sampling time windows for the estimation of pharmacokinetic (PK) parameters by a population approach within the clinical constraints. A population PK model was developed to describe a reference phase II PK dataset. Using this model and the parameter estimates, D-optimal sampling times were determined by optimising the determinant of the population Fisher information matrix (PFIM) using PFIM 1.2 and the modified Fedorov exchange algorithm. Optimal sampling time windows were then determined by allowing the D-optimal windows design to result in a specified level of efficiency when compared to the fixed-times D-optimal design. The best results were obtained when K(a) and IIV on K(a) were fixed. Windows were determined using this approach assuming a 90% level of efficiency and uniform sample distribution. Four optimal sampling time windows were determined as follows: at trough, between 22 h and the next drug administration; between 2 and 4 h after dose for all patients; and, for 1/3 of the patients only, 2 sampling time windows between 4 and 10 h after dose, equal to [4 h-5 h 05 min] and [9 h 10 min-10 h]. This work permitted the determination of an optimal design, with suitable sampling time windows, which was then evaluated by simulations. The sampling time windows will be used to define the sampling schedule in a prospective phase II study.
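D-optimal design, as used above, picks the sampling times that maximize the determinant of the Fisher information matrix. The toy sketch below exhaustively searches candidate time pairs for a hypothetical one-compartment IV bolus model, with sensitivities approximated by finite differences; the model, parameter values, and candidate times are illustrative only and far simpler than the population PFIM computation in the paper.

```python
import itertools
import math

def conc(t, ke, v):
    """One-compartment IV bolus model (hypothetical, dose fixed at 100)."""
    return 100.0 / v * math.exp(-ke * t)

def fisher_det(times, theta=(0.1, 10.0), eps=1e-6):
    """Determinant of the 2x2 Fisher information for (ke, V), assuming unit
    residual variance, so F = J^T J with J the sensitivity matrix."""
    rows = []
    for t in times:
        base = conc(t, *theta)
        grad = []
        for i in range(2):
            bumped = list(theta)
            bumped[i] += eps
            grad.append((conc(t, *bumped) - base) / eps)  # finite difference
        rows.append(grad)
    f00 = sum(r[0] * r[0] for r in rows)
    f01 = sum(r[0] * r[1] for r in rows)
    f11 = sum(r[1] * r[1] for r in rows)
    return f00 * f11 - f01 * f01

candidates = [0.5, 1, 2, 4, 8, 12, 24]
best = max(itertools.combinations(candidates, 2), key=fisher_det)
```

The D-optimal pair combines one early sample (informative for volume) with one late sample (informative for the elimination rate), which mirrors why trough and post-dose windows both appear in the study's design.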
Madisch, Ijad; Wölfel, Roman; Harste, Gabi; Pommer, Heidi; Heim, Albert
2006-09-01
Precise typing of human adenoviruses (HAdV) is fundamental for epidemiology and the detection of infection chains. As only a few of the 51 adenovirus types are associated with life-threatening disseminated diseases in immunodeficient patients, detection of one of these types may have prognostic value and lead to immediate therapeutic intervention. A recently published molecular typing scheme consisting of two steps (sequencing of a generic PCR product closely adjacent to loop 1 of the main neutralization determinant epsilon and, for species HAdV-B, -C, and -D, sequencing of loop 2 [Madisch et al., 2005]) was applied to 119 clinical samples. HAdV DNA was typed unequivocally even in cases of culture-negative samples, for example in immunodeficient patients before HAdV causes high virus loads and disseminated disease. Direct typing results demonstrated the predominance of HAdV-1, -2, -5, and -31 in immunodeficient patients, suggesting the significance of the persistence of these viruses for the pathogenesis of disseminated disease. In contrast, HAdV-3 predominated in immunocompetent patients, and cocirculation of four subtypes was demonstrated. Typing of samples from a conjunctivitis outbreak in multiple military barracks demonstrated various HAdV types (2, 4, 8, 19) and not the suspected unique adenovirus etiology. This suggests that our molecular typing scheme will also be useful for epidemiological investigations. In conclusion, our two-step molecular typing system will permit the precise and rapid typing of clinical HAdV isolates and even of HAdV DNA in clinical samples, without the need for time-consuming virus isolation prior to typing.
Noblet, Vincent; Heinrich, Christian; Heitz, Fabrice; Armspach, Jean-Paul
2005-05-01
This paper deals with topology preservation in three-dimensional (3-D) deformable image registration. This work is a nontrivial extension of earlier work addressing the case of two-dimensional (2-D) topology preserving mappings. In both cases, the deformation map is modeled as a hierarchical displacement field, decomposed on a multiresolution B-spline basis. Topology preservation is enforced by controlling the Jacobian of the transformation. Finding the optimal displacement parameters amounts to solving a constrained optimization problem: the residual energy between the target image and the deformed source image is minimized under constraints on the Jacobian. Unlike the 2-D case, in which simple linear constraints are derived, the 3-D B-spline-based deformable mapping yields a difficult (until now, unsolved) optimization problem. In this paper, we tackle the problem by resorting to interval analysis optimization techniques. Care is taken to keep the computational burden as low as possible. Results on multipatient 3-D MRI registration illustrate the ability of the method to preserve topology on the continuous image domain.
NASA Astrophysics Data System (ADS)
Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.
2016-06-01
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
Experiments Optimized for Magic Angle Spinning and Oriented Sample Solid-State NMR of Proteins
Das, Bibhuti B.; Lin, Eugene C.; Opella, Stanley J.
2013-01-01
Structure determination of proteins by solid-state NMR is rapidly advancing as a result of recent developments in samples, experimental methods, and calculations. There are a number of different solid-state NMR approaches that utilize stationary, aligned samples or magic angle spinning of unoriented 'powder' samples, and depending on the sample and the experimental method they can emphasize the measurement of distances or angles, ideally both, as sources of structural constraints. Multi-dimensional correlation spectroscopy of low-gamma nuclei such as 15N and 13C is an important step for making resonance assignments and measurements of angular restraints in membrane proteins. However, the efficiency of coherence transfer predominantly depends upon the strength of the dipole-dipole interaction, and this can vary from site to site and between sample alignments, for example during the mixing of 13C and 15N magnetization in stationary aligned samples and in magic angle spinning samples. Here, we demonstrate that the efficiency of polarization transfer can be improved by using adiabatic demagnetization and remagnetization techniques on stationary aligned samples, and proton-assisted insensitive nuclei cross-polarization in magic angle spinning samples. The adiabatic cross-polarization technique provides an alternative mechanism for spin-diffusion experiments correlating 15N/15N and 15N/13C chemical shifts over large distances. Improved efficiency in cross-polarization, with 40%-100% sensitivity enhancements, is observed in proteins and single crystals, respectively. We describe solid-state NMR experimental techniques that are optimal for membrane proteins in liquid crystalline phospholipid bilayers under physiological conditions. The techniques are illustrated with data from both single crystals of peptides and of membrane proteins in phospholipid bilayers. PMID:24044695
NASA Astrophysics Data System (ADS)
Mayer, Rulon R.; Waterman, James; Schuler, Jonathon; Scribner, Dean
2003-12-01
To achieve enhanced target discrimination, prototype three-band long wave infrared (LWIR) focal plane arrays (FPA) for missile defense applications have recently been constructed. The cutoff wavelengths, widths, and spectral overlap of the bands are critical parameters for the multicolor sensor design. Previous calculations for sensor design did not account for target and clutter spectral features in determining the optimal band characteristics. The considerable spectral overlap and correlation between the bands, and the attendant reduction in color contrast, is another unexamined issue. To optimize and simulate the projected behavior of three-band sensors, this report examined a hyperspectral LWIR image cube. Our study starts with 30 bands of the LWIR spectra of three man-made targets and natural backgrounds that were binned to 3 bands using weighted band binning. This work achieves optimal binning by using a genetic algorithm approach and the target-to-clutter ratio (TCR) as the optimization criterion. Another approach applies a genetic algorithm to maximize discrimination among the spectral reflectivities in the Non-conventional Exploitation Factors Data System (NEFDS) library. Each candidate band was weighted using a Fermi function to represent four interacting band edges for the three bands. It is found that the choice of target can significantly influence the optimal choice of bands as expressed through the TCR and the Receiver Operating Characteristic curve. This study shows that whitening the image data prominently displays targets relative to backgrounds by increasing color contrast and also maintains color constancy. Three-color images are displayed by assigning red, green, and blue colors directly to the whitened data set. Achieving constant colors of targets and backgrounds over time can greatly aid human viewers in interpreting the images and discriminating targets.
NASA Astrophysics Data System (ADS)
Chu, Jou-Mei
The Fleet Level Environmental Evaluation Tool (FLEET) can assess environmental impacts of various levels of technology and environmental policies on fleet-level carbon emissions and airline operations. FLEET consists of different models that mimic airlines' behaviors and a resource allocation problem that simulates airlines' aircraft deployments on their networks. Additionally, the Multiactors Biofuel Model can conduct biofuel life-cycle assessments, evaluate biofuel developments, and assess the effects of new technology on biofuel production costs and unit carbon emissions. In addition, the European Union (EU) initiated an Emission Trading Scheme (ETS) in the European Economic Area, while the International Civil Aviation Organization (ICAO) is designing a Global Market-Based Measure (GMBM) scheme to limit civil aviation fleet-level carbon emissions after 2021. This work integrates FLEET and the Multiactors Biofuel Model to investigate the interactions between airline operations, biofuel production chains, and environmental policies. The interfaces between the two models are a bio-refinery firm profit maximization problem and a farmer profit maximization problem. The two maximization problems mimic the behaviors of bio-refinery firms and farmers based on environmental policies, airline performance, and biofuel developments. In the current study, limited impacts of biofuels on fleet-level emissions were observed, due to the inconsistency between biofuel demand and feedstock resource distributions and feedstock supplies. Furthermore, the main driving factor for biofuel developments besides newer technologies was identified. Conventional jet fuel prices have complex impacts on biofuel developments because they increase biofuel prices and decrease potential biofuel demands at the same time. In the end, with simplified EU ETS and ICAO GMBM models, the integrated tool shows that the EU ETS model yields lower emissions in a short
1980-03-01
[Only front-matter fragments of this 1980 report survive: table-of-contents entries ("Deployer's Cheating Strategy", "Legal Distribution", "MCPD Distribution", "Cooper's Sample and Search") and a flattened table of expected numbers of sets of illegal missiles by MCPD declaration, where MCPD denotes the Minimum Common Probability of Detection.]
Stemkens, Bjorn; Tijssen, Rob H.N.; Senneville, Baudouin D. de
2015-03-01
Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling proved most robust against undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.
An S/H circuit with parasitics optimized for IF-sampling
NASA Astrophysics Data System (ADS)
Xuqiang, Zheng; Fule, Li; Zhijun, Wang; Weitao, Li; Wen, Jia; Zhihua, Wang; Shigang, Yue
2016-06-01
An IF-sampling S/H is presented, which adopts a flip-around structure, bottom-plate sampling technique and improved input bootstrapped switches. To achieve high sampling linearity over a wide input frequency range, the floating well technique is utilized to optimize the input switches. Besides, techniques of transistor load linearization and layout improvement are proposed to further reduce and linearize the parasitic capacitance. The S/H circuit has been fabricated in a 0.18-μm CMOS process as the front-end of a 14 bit, 250 MS/s pipeline ADC. For a 30 MHz input, the measured SFDR/SNDR of the ADC is 94.7 dB/68.5 dB, which remains over 84.3 dB/65.4 dB for input frequencies up to 400 MHz. The ADC presents excellent dynamic performance at high input frequency, which is mainly attributed to the parasitics-optimized S/H circuit. Project supported by the Shenzhen Project (No. JSGG20150512162029307).
Easton, D.F.; Goldgar, D.E.
1994-09-01
As genes underlying susceptibility to human disease are identified through linkage analysis, it is becoming increasingly clear that genetic heterogeneity is the rule rather than the exception. The focus of the present work is to examine the power and optimal sampling design for localizing a second disease gene when one disease gene has previously been identified. In particular, we examined the case when the unknown locus had lower penetrance, but higher frequency, than the known locus. Three scenarios regarding knowledge about locus 1 were examined: no linkage information (i.e. standard heterogeneity analysis), tight linkage with a known highly polymorphic marker locus, and mutation testing. Exact expected LOD scores (ELODs) were calculated for a number of two-locus genetic models under the 3 scenarios of heterogeneity for nuclear families containing 2, 3 or 4 affected children, with 0 or 1 affected parents. A cost function based upon the cost of ascertaining and genotyping sufficient samples to achieve an ELOD of 3.0 was used to evaluate the designs. As expected, the power and the optimal pedigree sampling strategy were dependent on the underlying model and the heterogeneity testing status. When the known locus had higher penetrance than the unknown locus, three affected siblings with unaffected parents proved to be optimal for all levels of heterogeneity. In general, mutation testing at the first locus provided substantially more power for detecting the second locus than linkage evidence alone. However, when both loci had relatively low penetrance, mutation testing provided little improvement in power, since most families could be expected to be segregating the high-risk allele at both loci.
Optimal selection of gene and ingroup taxon sampling for resolving phylogenetic relationships.
Townsend, Jeffrey P; Lopez-Giraldez, Francesc
2010-07-01
A controversial topic that underlies much of phylogenetic experimental design is the relative utility of increased taxonomic versus character sampling. Conclusions about the relative utility of adding characters or taxa to a current phylogenetic study have subtly hinged upon the appropriateness of the rate of evolution of the characters added for resolution of the phylogeny in question. Clearly, the addition of characters evolving at optimal rates will have much greater impact upon accurate phylogenetic analysis than will the addition of characters with an inappropriate rate of evolution. Development of practical analytical predictions of the asymptotic impact of adding additional taxa would complement computational investigations of the relative utility of these two methods of expanding acquired data. Accordingly, we here formulate a measure of the phylogenetic informativeness of the additional sampling of character states from a new taxon added to the canonical phylogenetic quartet. We derive the optimal rate of evolution for characters assessed in taxa to be sampled and a metric of informativeness based on the rate of evolution of the characters assessed in the new taxon and the distance of the new taxon from the internode of interest. Calculation of the informativeness per base pair of additional character sampling for included taxa versus additional character sampling for novel taxa can be used to estimate cost-effectiveness and optimal efficiency of phylogenetic experimental design. The approach requires estimation of rates of evolution of individual sites based on an alignment of genes orthologous to those to be sequenced, which may be identified in a well-established clade of sister taxa or of related taxa diverging at a deeper phylogenetic scale. Some approximate idea of the potential phylogenetic relationships of taxa to be sequenced is also desirable, such as may be obtained from ribosomal RNA sequence alone. Application to the solution of recalcitrant
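The intuition that an intermediate rate of evolution is optimal can be made concrete with a toy quartet calculation (an illustrative model of our own, not the informativeness metric derived in the paper): a site is phylogenetically useful if it changes on the short internal branch but remains unchanged on the four long pendant branches. Branch lengths below are hypothetical.

```python
import math

def signal_probability(rate, t_internal=0.05, t_pendant=1.0):
    """Toy quartet model: probability (under Poisson substitutions, ignoring
    reversals) that a site changes on the short internal branch and stays
    unchanged on the four long pendant branches."""
    p_change_internal = 1.0 - math.exp(-rate * t_internal)
    p_quiet_pendants = math.exp(-4.0 * rate * t_pendant)
    return p_change_internal * p_quiet_pendants

# Very slow sites never change; very fast sites are randomized by the long
# pendant branches; an intermediate rate maximizes the signal.
rates = [0.01 * k for k in range(1, 501)]
best_rate = max(rates, key=signal_probability)
```

In this toy model the optimum falls near 1/(4 * t_pendant), echoing the paper's point that characters evolving at rates matched to the depth of the internode are the most informative per base pair.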
Forrest, A; Ballow, C H; Nix, D E; Birmingham, M C; Schentag, J J
1993-05-01
Data obtained from 74 acutely ill patients treated in two clinical efficacy trials were used to develop a population model of the pharmacokinetics of intravenous (i.v.) ciprofloxacin. Dosage regimens ranged between 200 mg every 12 h and 400 mg every 8 h. Plasma samples (2 to 19 per patient; mean +/- standard deviation = 7 +/- 5) were obtained and assayed (by high-performance liquid chromatography) for ciprofloxacin. These data and patient covariates were modelled by iterative two-stage analysis, an approach which generates pharmacokinetic parameter values for both the population and each individual patient. The final model was used to implement a maximum a posteriori-Bayesian pharmacokinetic parameter value estimator. Optimal sampling theory was used to determine the best (maximally informative) two-, three-, four-, five-, and six-sample study designs (e.g., optimal sampling strategy 2 [OSS2] was the two-sample strategy) for identifying a patient's pharmacokinetic parameter values. These OSSs and the population model were evaluated by selecting the relatively rich data sets, those with 7 to 10 samples obtained in a single dose interval (n = 29), and comparing the parameter estimates (obtained by the maximum a posteriori-Bayesian estimator) based on each of the OSSs with those obtained by fitting all of the available data from each patient. Distributional clearance and apparent volumes were significantly related to body size (e.g., weight in kilograms or body surface area in meters squared); plasma clearance (CLT in liters per hour) was related to body size and renal function (creatinine clearance [CLCR] in milliliters per minute per 1.73 m2) by the equation CLT = (0.00145 x CLCR + 0.167) x weight. However, only 30% of the variance in CLT was explained by this relationship, and no other patient covariates were significant. Compared with previously published data, this target population had smaller distribution volumes (by 30%; P < 0.01) and CLT (by 44%; P < 0.001) than
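The covariate model quoted above is simple enough to evaluate directly; the sketch below restates the abstract's equation as a function (the example patient values are hypothetical):

```python
def ciprofloxacin_clearance(weight_kg, clcr):
    """Population plasma clearance (liters/hour) from the abstract's covariate
    model: CLT = (0.00145 * CLCR + 0.167) * weight, with CLCR in
    mL/min/1.73 m^2 and weight in kg.  Note the model explains only about
    30% of the variance in CLT, so individual patients vary widely."""
    return (0.00145 * clcr + 0.167) * weight_kg

# A hypothetical 70 kg patient with normal renal function (CLCR ~ 100):
clt = ciprofloxacin_clearance(70, 100)   # ~21.8 L/h
```

The nonzero intercept (0.167 per kg) reflects the nonrenal component of ciprofloxacin elimination: clearance does not fall to zero even as CLCR approaches zero.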
NASA Astrophysics Data System (ADS)
Niwa, Yosuke; Fujii, Yosuke; Sawa, Yousuke; Iida, Yosuke; Ito, Akihiko; Satoh, Masaki; Imasu, Ryoichi; Tsuboi, Kazuhiro; Matsueda, Hidekazu; Saigusa, Nobuko
2017-06-01
A four-dimensional variational method (4D-Var) is a popular technique for source/sink inversions of atmospheric constituents, but it is not without problems. Using an icosahedral-grid transport model and the 4D-Var method, a new atmospheric greenhouse gas (GHG) inversion system has been developed. The system combines offline forward and adjoint models with a quasi-Newton optimization scheme. The new approach is then used to conduct identical-twin experiments to investigate optimal system settings for an atmospheric CO2 inversion problem and to demonstrate the validity of the new inversion system. In this paper, the inversion problem is simplified by assuming the prior flux errors to be reasonably well known and, as a first step, by designing the prior error correlations with a simple function. It is found that a system of forward and adjoint models with smaller model errors but with nonlinearity optimizes comparably to another system that preserves linearity with an exact adjoint relationship. Furthermore, the effectiveness of the prior error correlations is demonstrated: the global error is reduced by about 15% by adding simply designed prior error correlations when 65 weekly flask-sampling observations at ground-based stations are used. With the optimal settings, the new inversion system successfully reproduces the spatiotemporal variations of the surface fluxes, from regional (such as biomass burning) to global scales. The optimization algorithm introduced in the new system does not require decomposition of the matrix that establishes the correlations among the prior flux errors, which allows the prior error covariance matrix to be designed more freely.
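At its core, such a 4D-Var system minimizes a background-plus-observation cost function with a quasi-Newton scheme, with the adjoint model supplying the gradient. A toy sketch under strong simplifications (a linear observation operator H, identity error covariances, and SciPy's L-BFGS-B standing in for the system's actual optimizer):

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear "transport" operator H, truth, observations (all hypothetical)
rng = np.random.default_rng(0)
n, m = 10, 6
H = rng.normal(size=(m, n))
y = H @ rng.normal(size=n)   # synthetic observations (identical-twin style)
xb = np.zeros(n)             # background (prior) state
B_inv = np.eye(n)            # inverse background-error covariance
R_inv = np.eye(m)            # inverse observation-error covariance

def cost(x):
    db, dy = x - xb, H @ x - y
    return 0.5 * db @ B_inv @ db + 0.5 * dy @ R_inv @ dy

def grad(x):
    # The adjoint (here simply H.T) provides the gradient of the obs term
    return B_inv @ (x - xb) + H.T @ (R_inv @ (H @ x - y))

res = minimize(cost, xb, jac=grad, method="L-BFGS-B")
```

The point of the quasi-Newton choice is that only cost and gradient evaluations are needed; no decomposition of the prior error covariance matrix is required.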
NASA Astrophysics Data System (ADS)
Santaren, D.; Peylin, P.; Bacour, C.; Ciais, P.; Longdoz, B.
2014-12-01
Terrestrial ecosystem models can provide major insights into the responses of Earth's ecosystems to environmental changes and rising levels of atmospheric CO2. To achieve this goal, biosphere models need mechanistic formulations of the processes that drive ecosystem functioning from diurnal to decadal timescales. However, the resulting complexity of the model equations is associated with unknown or poorly calibrated parameters that limit the accuracy of long-term simulations of carbon and water fluxes and their interannual variations. In this study, we develop a data assimilation framework to constrain the parameters of a mechanistic land surface model (ORCHIDEE) with eddy-covariance observations of CO2 and latent heat fluxes made during the years 2001-2004 at the temperate beech forest site of Hesse, in eastern France. As a first technical issue, we show that for a complex process-based model such as ORCHIDEE, with many (28) parameters to be retrieved, a Monte Carlo approach (a genetic algorithm, GA) provides more reliable optimal parameter values than a gradient-based minimization algorithm (a variational scheme). The GA locates the global minimum more efficiently, whereas the variational scheme often converges to local minima. The ORCHIDEE model is then optimized for each year and for the whole 2001-2004 period. We first find that a reduced set (<10) of parameters can be tightly constrained by the eddy-covariance observations, with a typical error reduction of 90%. We then show that including contrasting weather regimes (dry in 2003 and wet in 2002) is necessary to constrain a few specific parameters (such as the temperature dependence of photosynthetic activity). Furthermore, we find that parameters inverted from 4 years of flux measurements successfully improve the model fit to the data on several timescales (from monthly to interannual), resulting in a typical modeling efficiency of 92% over the 2001-2004 period (Nash-Sutcliffe criterion).
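The contrast drawn here between a genetic algorithm and a gradient-based scheme can be illustrated on a multimodal cost surface, where a GA's population search is less prone to stalling in local minima. A minimal sketch (the Rastrigin function stands in for the ORCHIDEE model-data misfit; the GA operators and settings are illustrative, not those used in the study):

```python
import numpy as np

def rastrigin(x):
    """Multimodal test function standing in for a model-data misfit."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def genetic_minimize(f, dim, pop=60, gens=200, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(pop, dim))        # random initial population
    for _ in range(gens):
        fit = np.array([f(xi) for xi in x])
        elite = x[np.argsort(fit)[: pop // 4]]     # selection: keep best quarter
        parents = elite[rng.integers(len(elite), size=(pop, 2))]
        x = parents.mean(axis=1)                   # crossover: blend two parents
        x += rng.normal(scale=sigma, size=x.shape) # mutation: Gaussian noise
        x[0] = elite[0]                            # elitism: preserve the best
    fit = np.array([f(xi) for xi in x])
    return x[np.argmin(fit)]

best = genetic_minimize(rastrigin, dim=4)
```

A gradient descent started from a random point on this surface would typically stop at the nearest local minimum; the GA's population spread lets it escape such basins.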
Joshi, M D; O'Donnell, J N; Venkatesan, N; Chang, J; Nguyen, H; Rhodes, N J; Pais, G; Chapman, R L; Griffin, B; Scheetz, M H
2017-07-04
A translational need exists to understand and predict vancomycin-induced kidney toxicity. We describe: (i) a vancomycin high-performance liquid chromatography (HPLC) method for rat plasma and kidney tissue homogenate; (ii) a rat pharmacokinetic (PK) study to demonstrate utility; and (iii) a catheter retention study to enable future preclinical studies. Rat plasma and pup kidney tissue homogenate were analyzed via HPLC for vancomycin concentrations ranging from 3 to 75 and 15.1 to 75.5 μg/mL, respectively, using a Kinetex Biphenyl column and gradient elution of water with 0.1% formic acid:acetonitrile (70:30 v/v). Sprague-Dawley rats (n = 10) receiving 150 mg/kg of vancomycin intraperitoneally had plasma sampled for PK. Finally, a catheter retention study was performed on polyurethane catheters to assess adsorption. Precision was <6.1% for all intra-assay and interassay HPLC measurements, with >96.3% analyte recovery. A two-compartment model fit the plasma data well, facilitating PK exposure estimates. Vancomycin was heterogeneously retained by the polyurethane catheters. © 2017 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of the American Society for Clinical Pharmacology and Therapeutics.
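A two-compartment model with IV bolus input has the standard bi-exponential closed form C(t) = A·e^(-αt) + B·e^(-βt), from which exposure metrics follow. A sketch of that curve (all parameter values below are illustrative, not the fitted rat PK estimates from this study):

```python
import numpy as np

def two_compartment_iv(t, dose, v1, k10, k12, k21):
    """Plasma concentration after an IV bolus for a two-compartment model,
    via the standard macro-constant solution C(t) = A*exp(-a*t) + B*exp(-b*t)."""
    s = k10 + k12 + k21
    disc = np.sqrt(s**2 - 4 * k10 * k21)
    alpha, beta = 0.5 * (s + disc), 0.5 * (s - disc)
    c0 = dose / v1                               # initial concentration
    A = c0 * (alpha - k21) / (alpha - beta)
    B = c0 * (k21 - beta) / (alpha - beta)
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

t = np.linspace(0, 12, 50)   # hours; parameters below are hypothetical
conc = two_compartment_iv(t, dose=150, v1=0.5, k10=0.4, k12=0.3, k21=0.2)
```

Fitting such a curve to the sampled plasma concentrations yields the micro-rate constants, from which exposures such as AUC (= A/α + B/β for a bolus) can be estimated.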