Automatic optimization of metrology sampling scheme for advanced process control
NASA Astrophysics Data System (ADS)
Chue, Chuei-Fu; Huang, Chun-Yen; Shih, Chiang-Lin
2011-03-01
In order to ensure long-term profitability, driving down operational costs and improving the yield of a DRAM manufacturing process are continuous efforts. This includes optimal utilization of the capital equipment. The cost of metrology needed to ensure yield contributes to the overall costs. As the shrinking of device dimensions continues, the costs of metrology are increasing because the associated tightening of on-product specifications requires more metrology effort. The cost-of-ownership reduction is usually tackled by increasing the throughput and availability of metrology systems. However, this is not the only way to reduce metrology effort. In this paper, we discuss how the costs of metrology can be reduced by optimizing the recipes in terms of the sampling layout, thereby eliminating metrology that does not contribute to yield. We discuss results of sampling scheme optimization for on-product overlay control of two DRAM manufacturing processes at Nanya Technology Corporation. For a 6x DRAM production process, we show that the reduction of metrology waste can be as high as 27% and overlay can be improved by 36%, compared with a baseline sampling scheme. For a 4x DRAM process, with tighter overlay specs, a gain of ca. 0.5 nm in on-product overlay could be achieved without increasing the metrology effort relative to the original sampling plan.
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme allows one to reduce the number of sampling points without decreasing, and sometimes even while increasing, the accuracy of mapping the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD that uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the MSANOS software, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The
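The MMSD criterion described above lends itself to a compact illustration. The following sketch is not the authors' MSANOS code; the grid, iteration count, and geometric cooling law are assumptions. It optimizes a small sampling layout by spatial simulated annealing, perturbing one sample location at a time and accepting worse layouts with the usual Metropolis probability:

```python
import random
import math

def mmsd(samples, grid):
    """Mean of the shortest distances from each grid point to its nearest sample."""
    total = 0.0
    for g in grid:
        total += min(math.dist(g, s) for s in samples)
    return total / len(grid)

def anneal_mmsd(grid, n_samples, iters=2000, t0=1.0, cooling=0.995, seed=0):
    """Toy spatial simulated annealing: move one sample at a time,
    accepting worse layouts with Boltzmann probability exp(-dC/t)."""
    rng = random.Random(seed)
    samples = rng.sample(grid, n_samples)
    cost = mmsd(samples, grid)
    t = t0
    for _ in range(iters):
        i = rng.randrange(n_samples)
        candidate = samples.copy()
        candidate[i] = rng.choice(grid)  # propose relocating one sample
        c = mmsd(candidate, grid)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            samples, cost = candidate, c
        t *= cooling  # geometric cooling law (an assumption)
    return samples, cost
```

MSANOS additionally handles sampling constraints, weighting functions, and kriging-variance criteria, which this sketch omits.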
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index, and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on a neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074
Yin, Jingjing; Samawi, Hani; Linder, Daniel
2016-07-01
A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject as either diseased or healthy. The cut-off point is usually unknown, however, and needs to be estimated by some optimization criterion. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, defined as the maximum of (sensitivity + specificity - 1), directly measures the largest total diagnostic accuracy a biomarker can achieve. It is therefore desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurements can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on the two sampling schemes are derived analytically, and we prove that the estimators based on ranked set sampling are more efficient than those based on simple random sampling and that both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed to illustrate the proposed method. PMID:26756282
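The Youden-index cut-off itself can be illustrated concretely. The sketch below is plain Gaussian kernel smoothing on a simple random sample, not the paper's ranked-set estimator; the bandwidth, grid resolution, and function names are assumptions. It scans candidate cut-offs c and maximizes J(c) = sensitivity(c) + specificity(c) - 1, with both terms taken from smooth KDE-based distribution functions:

```python
import math

def gaussian_kde_cdf(data, x, h):
    """CDF of a Gaussian kernel density estimate at x, with bandwidth h."""
    return sum(0.5 * (1 + math.erf((x - d) / (h * math.sqrt(2)))) for d in data) / len(data)

def youden_cutoff(healthy, diseased, h=0.5, grid=200):
    """Grid search for the cut-off maximizing the Youden index J(c)."""
    lo = min(healthy + diseased)
    hi = max(healthy + diseased)
    best_c, best_j = lo, -1.0
    for k in range(grid + 1):
        c = lo + (hi - lo) * k / grid
        spec = gaussian_kde_cdf(healthy, c, h)        # P(marker <= c | healthy)
        sens = 1 - gaussian_kde_cdf(diseased, c, h)   # P(marker > c | diseased)
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j
```

For well-separated groups the estimated cut-off falls between the two clusters and J approaches 1.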
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
Accelerated failure time model under general biased sampling scheme.
Kim, Jane Paik; Sit, Tony; Ying, Zhiliang
2016-07-01
Right-censored time-to-event data are sometimes observed from a (sub)cohort of patients whose survival times can be subject to outcome-dependent sampling schemes. In this paper, we propose a unified estimation method for semiparametric accelerated failure time models under general biased sampling schemes. The proposed estimator of the regression covariates is developed upon a bias-offsetting weighting scheme and is proved to be consistent and asymptotically normally distributed. Large-sample properties of the estimator are also derived. Using rank-based monotone estimating functions for the regression parameters, we find that the estimating equations can be easily solved via convex optimization. The methods are confirmed through simulations and illustrated by application to real datasets with various sampling schemes, including length-biased sampling, the case-cohort design, and its variants. PMID:26941240
Evolutionary Algorithm for Optimal Vaccination Scheme
NASA Astrophysics Data System (ADS)
Parousis-Orthodoxou, K. J.; Vlachos, D. S.
2014-03-01
The following work uses the dynamic capabilities of an evolutionary algorithm to obtain an optimal immunization strategy for a user-specified network. The algorithm uses a basic genetic algorithm with crossover and mutation operators to locate certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spread of the disease.
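The approach above can be sketched compactly. This illustration is not the authors' implementation: for speed it scores an immunization set by the size of the largest remaining connected component (a common cheap proxy for SIR outbreak size) instead of running full SIR simulations, and the population size, generation count, and mutation rate are assumptions:

```python
import random

def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed` nodes
    (cheap proxy for the final SIR outbreak size)."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def ga_immunize(adj, budget, pop=20, gens=40, seed=1):
    """Basic GA: individuals are immunization sets; crossover samples from the
    union of two parents, mutation swaps one node at random."""
    rng = random.Random(seed)
    nodes = list(adj)
    def fitness(ind):
        return -largest_component(adj, ind)  # smaller outbreak is better
    popn = [set(rng.sample(nodes, budget)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        survivors = popn[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            mix = list(a | b)
            rng.shuffle(mix)
            child = set(mix[:budget])
            if rng.random() < 0.3:  # mutation: replace one node
                child.discard(rng.choice(list(child)))
                while len(child) < budget:
                    child.add(rng.choice(nodes))
            children.append(child)
        popn = survivors + children
    return max(popn, key=fitness)
```

On a graph made of two hub-and-spoke stars joined at the hubs, the GA quickly learns to immunize hub nodes, which fragments the network.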
Max, N.
1992-12-17
Radiosity algorithms for global illumination, either "gathering" or "shooting" versions, depend on the calculation of form factors. It is possible to calculate the form factors analytically, but this is difficult when occlusion is involved, so sampling methods are usually preferred. The necessary visibility information can be obtained by ray tracing in the sampled directions. However, area coherence makes it more efficient to project and scan-convert the scene onto a number of planes, for example, the faces of a hemicube. The hemicube faces have traditionally been divided into equal square pixels, but more general subdivisions are practical, and can reduce the variance of the form factor estimates. The hemicube estimates of form factors are based on a finite set of sample directions. We obtain several optimal arrangements of sample directions, which minimize the variance of this estimate. Four approaches are changing the size of the pixels, the shape of the pixels, the shape of the hemicube, or using non-uniform pixel grids. The best approach reduces the variance by 43%. The variance calculation is based on the assumption that the errors in the estimate are caused by the projections of single edges of polygonal patches, and that the positions and orientations of these edges are random.
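The quantity being estimated can be made concrete with the standard hemicube delta form factors (a sketch; the grid resolution is an assumption, and the variance-optimal pixel layouts of the paper are not reproduced here). For a pixel of area dA at (x, y) on the top face z = 1 of the unit hemicube, the delta form factor is dF = dA / (pi * (x^2 + y^2 + 1)^2), and a patch's form factor is the sum of dF over the pixels it covers:

```python
import math

def top_face_delta_form_factors(n):
    """Delta form factors for an n x n uniform pixel grid on the hemicube
    top face z = 1, with x, y in [-1, 1] (midpoint rule per pixel)."""
    da = (2.0 / n) ** 2
    dff = []
    for i in range(n):
        for j in range(n):
            x = -1 + (i + 0.5) * 2.0 / n
            y = -1 + (j + 0.5) * 2.0 / n
            dff.append(da / (math.pi * (x * x + y * y + 1) ** 2))
    return dff
```

Summed over all five faces the delta form factors total 1; the top face alone accounts for slightly more than half of that total, which the test below checks via analytic bounds (the inscribed and circumscribed disks give 0.5 and 2/3).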
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
NASA Technical Reports Server (NTRS)
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite-difference scheme modified by the authors: fourth-order Runge-Kutta time stepping combined with a fourth-order pentadiagonal compact spatial discretization having maximum resolution characteristics. The problems of category 1 are solved with the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved with the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved with the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for categories 1 and 2, respectively.
Optimized entanglement purification schemes for modular based quantum computers
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
The choice of entanglement purification scheme strongly depends on the fidelities of quantum gates and measurements, as well as on the imperfection of the initial entanglement. For instance, the purification scheme optimal at low gate fidelities may not necessarily be the optimal scheme at higher gate fidelities. We employ an evolutionary algorithm that efficiently optimizes the entanglement purification circuit for given system parameters. Such optimized purification schemes will boost the performance of entanglement purification and consequently enhance the fidelity of teleportation-based non-local coupling gates, an indispensable building block for modular-based quantum computers. In addition, we study how these optimized purification schemes affect the resource overhead caused by error correction in modular-based quantum computers.
An optimized spectral difference scheme for CAA problems
NASA Astrophysics Data System (ADS)
Gao, Junhui; Yang, Zhigang; Li, Xiaodong
2012-05-01
In the implementation of the spectral difference (SD) method, the conserved variables at the flux points are calculated from the solution points using extrapolation or interpolation schemes. The errors incurred in the extrapolation and interpolation can result in instability. On the other hand, the difference between the left and right conserved variables at the edge interface introduces dissipation into the SD method when a Riemann solver is applied to compute the flux at the element interface. In this paper, an optimization of the extrapolation and interpolation schemes for the fourth-order SD method on quadrilateral elements is carried out in wavenumber space by minimizing their dispersion error over a selected band of wavenumbers. The optimized coefficients of the extrapolation and interpolation are presented, and the dispersion errors of the original and optimized schemes are plotted and compared. An improvement of the dispersion error over the resolvable wavenumber range of the SD method is obtained. The stability of the optimized fourth-order SD scheme is also analyzed. It is found that the stability of the fourth-order scheme with Chebyshev-Gauss-Lobatto flux points, which is originally weakly unstable, has been improved through the optimization. The weak instability is eliminated completely if an additional second-order filter is applied on selected flux points. One- and two-dimensional linear wave propagation analyses are carried out for the optimized scheme. It is found that in the resolvable wavenumber range the new SD scheme is less dispersive and less dissipative than the original scheme, and the new scheme is less anisotropic for 2D wave propagation. The optimized SD solver is validated with four computational aeroacoustics (CAA) workshop benchmark problems. The numerical results with the optimized schemes agree much better with the analytical data than those with the original schemes.
Optimal Symmetric Ternary Quantum Encryption Schemes
NASA Astrophysics Data System (ADS)
Wang, Yu-qi; She, Kun; Huang, Ru-fen; Ouyang, Zhong
2016-07-01
In this paper, we define the orthogonality and the orthogonal rate of an encryption operator, and we provide a verification process for the former. Then, four improved ternary quantum encryption schemes are constructed. Compared with Scheme 1 (see Section 2.3), these four schemes demonstrate significant improvements in terms of calculation and execution efficiency. In particular, with the orthogonal rate ɛ as the security parameter, Scheme 3 (see Section 4.1) shows the highest level of security among them. Through custom interpolation functions, the ternary secret key source, composed of the digits 0, 1, and 2, is constructed. Finally, we discuss the security of both the ternary encryption operator and the secret key source; both show a high level of security and high execution efficiency.
An optimized finite-difference scheme for wave propagation problems
NASA Technical Reports Server (NTRS)
Zingg, D. W.; Lomax, H.; Jurgens, H.
1993-01-01
Two fully discrete finite-difference schemes for wave propagation problems are presented: a maximum-order scheme and an optimized (or spectral-like) scheme. Both combine a seven-point spatial operator with an explicit six-stage time-march method. The maximum-order operator is fifth-order in space and sixth-order in time for a linear problem with periodic boundary conditions. The phase and amplitude errors of the schemes, obtained using Fourier analysis, are given and compared with those of a second-order and a fourth-order method. Numerical experiments are presented which demonstrate the usefulness of the schemes for a range of problems. For some problems, the optimized scheme leads to a reduction in global error compared to the maximum-order scheme with no additional computational expense.
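The Fourier analysis mentioned above is easy to reproduce for simple explicit stencils. The sketch below uses the classical second- and sixth-order central-difference coefficients, not the paper's optimized seven-point operator: for a test mode u = exp(ikx), an antisymmetric stencil D u_j = (1/h) * sum_m a_m u_{j+m} acts as multiplication by i k*, and the modified wavenumber k*h measures how well the stencil resolves each wavenumber:

```python
import cmath

def modified_wavenumber(coeffs, kh):
    """Modified wavenumber k*h of a stencil given as (offset, coefficient)
    pairs: k*h = -i * sum_m a_m exp(i m kh), real for antisymmetric stencils."""
    return (-1j * sum(a * cmath.exp(1j * m * kh) for m, a in coeffs)).real

# classical central-difference stencils (exact coefficients)
second = [(-1, -0.5), (1, 0.5)]
sixth = [(-3, -1 / 60), (-2, 3 / 20), (-1, -3 / 4),
         (1, 3 / 4), (2, -3 / 20), (3, 1 / 60)]
```

A perfect operator would give k*h = kh; the second-order stencil returns sin(kh), and the sixth-order stencil tracks the ideal line much further into the wavenumber range, which is exactly the resolution property the optimized scheme tunes.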
Selecting optimal partitioning schemes for phylogenomic datasets
2014-01-01
Background: Partitioning involves estimating independent models of molecular evolution for different subsets of sites in a sequence alignment, and has been shown to improve phylogenetic inference. Current methods for estimating best-fit partitioning schemes, however, are only computationally feasible with datasets of fewer than 100 loci. This is a problem because datasets with thousands of loci are increasingly common in phylogenetics. Methods: We develop two novel methods for estimating best-fit partitioning schemes on large phylogenomic datasets: strict and relaxed hierarchical clustering. These methods use information from the underlying data to cluster together similar subsets of sites in an alignment, and build on clustering approaches that have been proposed elsewhere. Results: We compare the performance of our methods to each other, and to existing methods for selecting partitioning schemes. We demonstrate that while strict hierarchical clustering has the best computational efficiency on very large datasets, relaxed hierarchical clustering provides scalable efficiency and returns dramatically better partitioning schemes as assessed by common criteria such as AICc and BIC scores. Conclusions: These two methods provide the best current approaches to inferring partitioning schemes for very large datasets. We provide free open-source implementations of the methods in the PartitionFinder software. We hope that the use of these methods will help to improve the inferences made from large phylogenomic datasets. PMID:24742000
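The AICc and BIC criteria used to rank partitioning schemes are simple to state and apply (an illustrative sketch, not PartitionFinder code; the candidate scheme names, log-likelihoods, and parameter counts below are invented). For log-likelihood lnL, k estimated parameters, and n sites: AICc = -2 lnL + 2k + 2k(k+1)/(n-k-1) and BIC = -2 lnL + k ln n:

```python
import math

def aicc(lnL, k, n):
    """Small-sample corrected Akaike information criterion."""
    return -2 * lnL + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

def bic(lnL, k, n):
    """Bayesian information criterion."""
    return -2 * lnL + k * math.log(n)

def best_scheme(schemes, n_sites):
    """Pick the scheme minimizing AICc; each scheme is (name, lnL, n_params)."""
    return min(schemes, key=lambda s: aicc(s[1], s[2], n_sites))
```

The correction term penalizes over-partitioned schemes: a saturated scheme with many subset-specific models can have the highest likelihood yet lose to a moderately partitioned one.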
Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2015-11-01
The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling of wavefield characteristics in tilted transversely isotropic (TTI) media. However, it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes, based on the sampling approximation (SA) method and the least-squares (LS) method respectively, to overcome this problem. We first briefly introduce RSFD theory, from which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Different forms of analysis are then used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes and indicates that they can effectively widen the accurately handled wavenumber range compared with the TE-based RSFD scheme. Further comparisons between the two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
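The least-squares idea can be sketched for a 1-D staggered-grid first derivative (an illustration only, not the paper's rotated-grid TTI operator; the band limit and term counts are assumptions). The operator's spectral response is 2 * sum_m c_m sin((2m-1) kh / 2), and the LS method chooses the coefficients c_m so this matches the ideal response kh over a chosen wavenumber band, rather than matching Taylor terms at kh = 0:

```python
import numpy as np

def ls_staggered_coeffs(m_terms, kh_max, n_pts=200):
    """Least-squares staggered-grid FD coefficients: fit
    2 * sum_m c_m sin((2m-1) kh / 2) ~= kh over (0, kh_max]."""
    kh = np.linspace(kh_max / n_pts, kh_max, n_pts)
    A = np.stack([2 * np.sin((2 * m - 1) * kh / 2)
                  for m in range(1, m_terms + 1)], axis=1)
    c, *_ = np.linalg.lstsq(A, kh, rcond=None)
    return c

def response(c, kh):
    """Spectral response of the staggered operator with coefficients c."""
    return 2 * sum(cm * np.sin((2 * (m + 1) - 1) * kh / 2)
                   for m, cm in enumerate(c))
```

Adding terms shrinks the dispersion error over the whole band, mirroring the paper's finding that LS-optimized operators stay accurate to larger wavenumbers than TE-based ones of the same length.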
Optimized perturbation theory applied to factorization scheme dependence
NASA Astrophysics Data System (ADS)
Stevenson, P. M.; Politzer, H. David
We reconsider the application of the "optimization" procedure to the problem of factorization scheme dependence in finite-order QCD calculations. The main difficulty encountered in a previous analysis disappears once an algebraic error is corrected.
Optimized Multilocus Sequence Typing (MLST) Scheme for Trypanosoma cruzi
Diosque, Patricio; Tomasini, Nicolás; Lauthier, Juan José; Messenger, Louisa Alexandra; Monje Rumi, María Mercedes; Ragone, Paula Gabriela; Alberti-D'Amato, Anahí Maitén; Pérez Brandán, Cecilia; Barnabé, Christian; Tibayrenc, Michel; Lewis, Michael David; Llewellyn, Martin Stephen; Miles, Michael Alexander; Yeo, Matthew
2014-01-01
Trypanosoma cruzi, the aetiological agent of Chagas disease, possesses extensive genetic diversity. This has led to the development of a plethora of molecular typing methods, both for the identification of the known major genetic lineages and for finer-scale characterization of different multilocus genotypes within these major lineages. Whole genome sequencing applied to large sample sizes is not currently viable, and multilocus enzyme electrophoresis, the previous gold standard for T. cruzi typing, is laborious and time consuming. In the present work, we present an optimized Multilocus Sequence Typing (MLST) scheme based on the combined analysis of two recently proposed MLST approaches. Thirteen concatenated gene fragments were applied to a panel of T. cruzi reference strains encompassing all known genetic lineages. Concatenation of the 13 fragments allowed assignment of all strains to the predicted Discrete Typing Units (DTUs), or near-clades, with the exception of one strain that was an outlier for TcV due to apparent loss of heterozygosity in one fragment. Monophyly for all DTUs, along with robust bootstrap support, was restored when this fragment was excluded from the analysis. All possible combinations of loci were assessed against predefined criteria with the objective of selecting the most appropriate combination of between two and twelve fragments for an optimized MLST scheme. The optimum combination consisted of 7 loci and discriminated between all reference strains in the panel, with the majority supported by robust bootstrap values. Additionally, a reduced panel of just 4 gene fragments displayed high bootstrap values for DTU assignment and discriminated 21 out of 25 genotypes. We propose that the seven-fragment MLST scheme could be used as a gold standard for T. cruzi typing, against which other typing approaches, particularly single-locus approaches or systematic PCR assays based on amplicon size, could be compared. PMID:25167160
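The core MLST operation, concatenating the chosen locus fragments per isolate and grouping identical concatenations into sequence types, can be sketched as follows (illustrative only; the isolate names, locus names, and toy sequences in the test are invented, and real pipelines work from aligned trimmed sequences):

```python
def sequence_types(isolates, loci):
    """Concatenate the selected locus fragments for each isolate and assign
    arbitrary sequence-type numbers to distinct concatenated sequences."""
    st_of = {}       # isolate name -> sequence-type number
    assigned = {}    # concatenated sequence -> sequence-type number
    for name, frags in isolates.items():
        concat = "".join(frags[locus] for locus in loci)
        if concat not in assigned:
            assigned[concat] = len(assigned) + 1
        st_of[name] = assigned[concat]
    return st_of
```

Adding loci can only split sequence types, never merge them, which is why the paper searches locus combinations for the smallest panel that still discriminates all reference genotypes.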
Optimized Handover Schemes over WiMAX
NASA Astrophysics Data System (ADS)
Jerjees, Zina; Al-Raweshidy, H. S.; Al-Banna, Zaineb
Voice over Internet Protocol (VoIP) applications have attracted significant interest in Mobile WiMAX, which can deliver multimedia services by providing high bandwidth over long-range transmission. However, one of the main problems of IEEE 802.16 is that it covers multiple BSs with too many profiled layers, which can lead to potential interoperability problems. The multi-BS mode requires multiple BSs to be scanned synchronously before initiating the transmission of broadcast data. In this paper, we first identify the key issues for VoIP over WiMAX. We then present a MAC-layer solution to guarantee the demanded bandwidth and support the highest possible throughput between two WiMAX end points during handover. Moreover, we propose a combined PHY- and MAC-layer scheme to maintain the communication channel quality required for VoIP during handover. Results show that the proposed schemes can improve network throughput by up to 55% and reduce dropped data by 70%, while satisfying VoIP quality requirements.
Optimizations on Designing High-Resolution Finite-Difference Schemes
NASA Technical Reports Server (NTRS)
Liu, Yen; Koomullil, George; Kwak, Dochan (Technical Monitor)
1994-01-01
We describe a general optimization procedure both for maximizing the resolution characteristics of existing finite-difference schemes and for designing finite-difference schemes that meet the error tolerance requirements of numerical solutions. This generalizes the compact scheme introduced by Lele, in which the resolution is improved for a single one-dimensional spatial derivative; in the present approach the complete scheme, after spatial and temporal discretization, is optimized over a range of parameters of the scheme and the governing equations. The approach is to linearize and Fourier-analyze the discretized equations to check the resolving power of the scheme for the various wavenumber ranges in the solution, and to optimize the resolution to satisfy the requirements of the problem. This represents a constrained nonlinear optimization problem which can be solved to obtain the nodal weights of the discretization. An objective function is defined in the parametric space of wavenumber, Courant number, Mach number, and other quantities of interest. Typical criteria for defining the objective function include maximizing the resolution of high wavenumbers for acoustic and electromagnetic wave propagation and turbulence calculations. The procedure is being tested on off-design conditions of non-uniform mesh, non-periodic boundary conditions, and non-constant wave speeds for scalar and systems of equations. This includes the solution of wave equations and Euler equations using a conventional scheme with and without optimization, and the design of an optimum scheme for a specified error tolerance.
XFEM schemes for level set based structural optimization
NASA Astrophysics Data System (ADS)
Li, Li; Wang, Michael Yu; Wei, Peng
2012-12-01
In this paper, several extended finite element method (XFEM) schemes for level-set-based structural optimization are proposed. First, two-dimensional (2D) and three-dimensional (3D) XFEM schemes with a partition integral method are developed, and numerical examples are employed to evaluate their accuracy, indicating that accurate analysis results can be obtained on the structural boundary. Furthermore, methods for improving the computational accuracy and efficiency of XFEM are studied, including an XFEM integral scheme without quadrature sub-cells and a higher-order element XFEM scheme. Numerical examples show that the XFEM scheme without quadrature sub-cells yields similar structural analysis accuracy while prominently reducing the time cost, and that higher-order XFEM elements improve the computational accuracy of structural analysis in the boundary elements at increased time cost; the trade-off between the finite element system scale and the element order therefore needs to be weighed. Finally, the reliability and advantages of the proposed XFEM schemes are illustrated with several 2D and 3D mean compliance minimization examples that are widely used in the recent literature on structural topology optimization. All numerical results demonstrate that the proposed XFEM is a promising structural analysis approach for structural optimization with the level set method.
An efficient scheme of IDCT optimization in H.264 decoding
NASA Astrophysics Data System (ADS)
Bao, Guoxing
2011-02-01
This paper proposes an efficient scheme for the IDCT in an H.264 decoder. First, the motion compensation residual blocks of a macroblock obtained from the bit-stream are classified into four cases: only the DC coefficient is non-zero, only first-row coefficients are non-zero, only first-column coefficients are non-zero, and the general case. The inverse transform in the first three cases can obviously be simplified, so, second, different IDCT processing is used for each case to reduce complexity. Compared with the traditional IDCT scheme, the proposed scheme achieves an average 51.8% reduction in computational complexity without degradation in visual quality.
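The four-case classification can be sketched as follows (illustrative only; H.264's actual integer inverse transform and scaling steps are omitted, and the case names are assumptions). The payoff comes from separability: a DC-only block inverse-transforms to a constant block, and row-only or column-only blocks need just one 1-D pass instead of two:

```python
def classify_block(coeffs):
    """Classify a 4x4 coefficient block (list of 4 rows) into the four cases
    that determine which shortcut IDCT path applies."""
    nz = [(r, c) for r in range(4) for c in range(4) if coeffs[r][c] != 0]
    if not nz or nz == [(0, 0)]:
        return "dc_only"        # IDCT is a constant block
    if all(r == 0 for r, _ in nz):
        return "first_row_only"  # one 1-D pass over columns suffices
    if all(c == 0 for _, c in nz):
        return "first_col_only"  # one 1-D pass over rows suffices
    return "general"             # full 2-D inverse transform
```

Since most inter-predicted residual blocks are sparse, the cheap cases dominate in practice, which is where the reported complexity reduction comes from.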
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more multifaceted, integrated, and complex, traditional single-objective approaches to optimal design are becoming less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application to engineering system design. We present a stochastic function to manage the low-level meta-heuristics and increase the likelihood of reaching a global optimum. A Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as low-level meta-heuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple conflicting objectives. Randomized decision making makes the implementation of this scheme attractive and easy, and injecting feasible solutions significantly alters the search direction and adds population diversity, helping to accomplish the pre-defined goals of the proposed scheme.
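The Pareto front consists of the non-dominated solutions: those for which no other solution is at least as good in every objective and strictly better in one. A minimal filter (an illustration, not the paper's hyper-heuristic; it assumes minimization of every objective and distinct objective vectors) is:

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors
    (minimization in every objective; vectors assumed distinct)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

For example, with two objectives, (2, 2) dominates (3, 3) and (4, 4), but none of (1, 5), (2, 2), (5, 1) dominates another, so those three form the front.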
Effects of sparse sampling schemes on image quality in low-dose CT
Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena
2013-11-15
Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT), which can potentially reduce the health risk related to radiation dose. In particular, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on image quality. Methods: The data properties of several sampling schemes are analyzed with respect to CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes, and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT (CBCT) data set was used as a reference, and sparse sampling was realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases showed promising results. These sampling schemes produced images of similar quality to the reference image, and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstruction both sampling density and data incoherence affect image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic
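The two families of schemes contrasted in the abstract can be sketched as binary sampling masks. The function names and the view-shifting MVUS pattern below are our illustrative reading, not the authors' code:

```python
# Sketch of two dose-reduction sampling masks: uniform sparse-view
# sampling vs. a many-view undersampling (MVUS) pattern that keeps all
# views but shifts the kept detector bins from view to view.
def sparse_view(n_views, keep_every):
    """Keep every k-th projection angle (uniform sparse-view sampling)."""
    return [i % keep_every == 0 for i in range(n_views)]

def mvus(n_views, n_dets, frac):
    """Keep a fraction of detector bins per view, shifting the kept
    bins with the view index to improve data incoherence."""
    step = round(1 / frac)
    return [[(d + v) % step == 0 for d in range(n_dets)]
            for v in range(n_views)]

sv = sparse_view(360, 4)            # ~75% dose reduction
print(sum(sv))                      # 90 views kept
mv = mvus(360, 512, 0.25)
print(sum(mv[0]))                   # 128 bins kept in the first view
```

Both masks discard the same fraction of rays, yet they distribute the kept samples very differently, which is exactly what the sampling density and incoherence measures are meant to quantify.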
Global search acceleration in the nested optimization scheme
NASA Astrophysics Data System (ADS)
Grishagin, Vladimir A.; Israfilov, Ruslan A.
2016-06-01
A multidimensional unconstrained global optimization problem with an objective function satisfying a Lipschitz condition is considered. To solve this problem, a dimensionality reduction approach based on the nested optimization scheme is used. This scheme reduces the initial multidimensional problem to a family of one-dimensional subproblems that are also Lipschitzian, and thus allows univariate methods to be applied to the multidimensional optimization. For two well-known one-dimensional Lipschitz optimization methods, modifications that accelerate the search when the objective function is continuously differentiable in a vicinity of the global minimum are considered and compared. Results of computational experiments on a conventional test class of multiextremal functions confirm the efficiency of the modified methods.
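The nested reduction can be shown on a toy 2D problem: the outer method minimizes g(x) = min_y f(x, y), where each evaluation of g solves an inner 1D subproblem. A plain grid search stands in here for a true univariate Lipschitz method:

```python
# Toy sketch of the nested optimization scheme: reduce min f(x, y) to a
# 1D problem over x whose objective itself solves a 1D problem over y.
# (Grid search is a stand-in for a Piyavskii-type Lipschitz method.)
def f(x, y):
    return (x - 0.3) ** 2 + (y + 0.7) ** 2

def minimize_1d(func, lo, hi, n=201):
    pts = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(pts, key=func)

def g(x):                            # inner 1D subproblem in y
    y_star = minimize_1d(lambda y: f(x, y), -2.0, 2.0)
    return f(x, y_star)

x_star = minimize_1d(g, -2.0, 2.0)   # outer 1D problem in x
print(round(x_star, 2))              # 0.3
```

The nesting makes the cost multiplicative across dimensions, which is why the paper's accelerated univariate methods matter: each inner solve is invoked many times by the outer one.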
Attributes mode sampling schemes for international material accountancy verification
Sanborn, J.B.
1982-12-01
This paper addresses the question of detecting falsifications in material balance accountancy reporting by comparing independently measured values to the declared values of a randomly selected sample of items in the material balance. A two-level strategy is considered, consisting of a relatively large number of measurements made at low accuracy, and a smaller number of measurements made at high accuracy. Sampling schemes for both types of measurements are derived, and rigorous proofs supplied that guarantee desired detection probabilities. Sample sizes derived using these methods are sometimes considerably smaller than those calculated previously.
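The core sample-size question can be illustrated with the standard single-level attributes calculation: the smallest n such that a random sample from N items, r of which are falsified, contains at least one falsification with probability at least dp. This generic hypergeometric argument is only a sketch; the paper's two-level derivation is more refined:

```python
# Smallest sample size n guaranteeing detection probability >= dp when
# r of N items are falsified (detect = sample contains >= 1 falsified item).
from math import comb

def min_sample_size(N, r, dp):
    for n in range(1, N + 1):
        # P(no falsified item in sample) follows the hypergeometric law
        p_detect = 1 - comb(N - r, n) / comb(N, n)
        if p_detect >= dp:
            return n
    return N

n = min_sample_size(N=500, r=20, dp=0.95)
print(n)
```

Because the sample is drawn without replacement, this n is somewhat smaller than the binomial approximation would suggest, echoing the abstract's remark that rigorously derived sizes can undercut earlier calculations.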
Rotation Matrix Sampling Scheme for Multidimensional Probability Distribution Transfer
NASA Astrophysics Data System (ADS)
Srestasathiern, P.; Lawawirojwong, S.; Suwantong, R.; Phuthong, P.
2016-06-01
This paper addresses the problem of rotation matrix sampling for multidimensional probability distribution transfer. Distribution transfer has many applications in remote sensing and image processing, such as color adjustment for image mosaicing, image classification, and change detection. The sampling begins with generating a set of random orthogonal matrix samples by the Householder transformation technique. The advantage of using the Householder transformation for generating the set of orthogonal matrices is the uniform distribution of the orthogonal matrix samples. The obtained orthogonal matrices are then converted to proper rotation matrices. The performance of the proposed rotation matrix sampling scheme was tested against uniform rotation angle sampling. The proposed method was also demonstrated in two applications: image-to-image probability distribution transfer and data Gaussianization.
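A common realization of this sampling step draws a Haar-uniform orthogonal matrix via a Householder-based QR factorization and then flips one column if needed to obtain a proper rotation (det = +1). This is a generic sketch of the technique, not the authors' exact procedure:

```python
# Random proper rotation matrix via Householder-based QR of a Gaussian
# matrix; the sign fix on R's diagonal makes Q Haar-distributed on O(d).
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(d):
    A = rng.standard_normal((d, d))
    Q, R = np.linalg.qr(A)            # numpy's qr uses Householder reflections
    Q = Q * np.sign(np.diag(R))       # fix column signs for uniformity
    if np.linalg.det(Q) < 0:          # reflection -> rotation
        Q[:, 0] = -Q[:, 0]
    return Q

R3 = random_rotation(3)
print(np.allclose(R3 @ R3.T, np.eye(3)))      # True: orthogonal
print(np.isclose(np.linalg.det(R3), 1.0))     # True: proper rotation
```

Without the diagonal sign correction, the QR output is biased and the orthogonal samples are not uniformly distributed, which is precisely the pitfall the Householder-based construction avoids.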
A geometric representation scheme suitable for shape optimization
NASA Technical Reports Server (NTRS)
Tortorelli, Daniel A.
1990-01-01
A geometric representation scheme is outlined which utilizes the natural design variable concept. A base configuration with distinct topological features is created. This configuration is then deformed to define components with similar topology but different geometry. The values of the deforming loads are the geometric entities used in the shape representation. The representation can be used for all geometric design studies; it is demonstrated here for structural optimization. This technique can be used in parametric design studies, where the system response is defined as functions of geometric entities. It can also be used in shape optimization, where the geometric entities of an original design are modified to maximize performance and satisfy constraints. Two example problems are provided. A cantilever beam is elongated to meet new design specifications and then optimized to reduce volume and satisfy stress constraints. A similar optimization problem is presented for an automobile crankshaft section. The finite element method is used to perform the analyses.
Optimization algorithm based characterization scheme for tunable semiconductor lasers.
Chen, Quanan; Liu, Gonghai; Lu, Qiaoyin; Guo, Weihua
2016-09-01
In this paper, an optimization algorithm based characterization scheme for tunable semiconductor lasers is proposed and demonstrated. In the process of optimization, the ratio between the power at the desired frequency and the power outside the desired frequency is used as the figure of merit, which approximately represents the side-mode suppression ratio. In practice, we use tunable optical band-pass and band-stop filters to obtain the power at the desired frequency and the power outside the desired frequency separately. With the assistance of optimization algorithms, such as the particle swarm optimization (PSO) algorithm, we can obtain stable operating conditions for tunable lasers at designated frequencies directly and efficiently. PMID:27607701
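The optimization step can be sketched with a minimal PSO searching a 2D tuning-parameter space for the operating point that maximizes a figure of merit. The merit function below is a made-up smooth surrogate for the measured power ratio, and all parameters are illustrative:

```python
# Minimal particle swarm optimization over a 2D tuning space, maximizing
# a surrogate figure of merit (invented; stands in for the SMSR-like ratio).
import random

random.seed(7)

def merit(p):
    x, y = p
    return -(x - 1.2) ** 2 - (y - 0.8) ** 2   # peak at (1.2, 0.8)

n, iters, w, c1, c2 = 10, 60, 0.6, 1.5, 1.5
pos = [[random.uniform(0, 3), random.uniform(0, 3)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = max(pbest, key=merit)

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if merit(pos[i]) > merit(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = max(pbest, key=merit)

print([round(v, 2) for v in gbest])   # near [1.2, 0.8]
```

In the real characterization loop, each merit evaluation would be a physical measurement through the band-pass/band-stop filter pair rather than a function call.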
Optimizing passive acoustic sampling of bats in forests.
Froidevaux, Jérémy S P; Zellweger, Florian; Bollmann, Kurt; Obrist, Martin K
2014-12-01
Passive acoustic methods are increasingly used in biodiversity research and monitoring programs because they are cost-effective and permit the collection of large datasets. However, the accuracy of the results depends on the bioacoustic characteristics of the focal taxa and their habitat use. In particular, this applies to bats, which exhibit distinct activity patterns in three-dimensionally structured habitats such as forests. We assessed the performance of 21 acoustic sampling schemes with three temporal sampling patterns and seven sampling designs. Acoustic sampling was performed in 32 forest plots, each containing three microhabitats: forest ground, canopy, and forest gap. We compared bat activity, species richness, and sampling effort using species accumulation curves fitted with the Clench equation. In addition, we estimated the sampling costs of undertaking the best sampling schemes. We recorded a total of 145,433 echolocation call sequences of 16 bat species. Our results indicated that to generate the best outcome, it was necessary to sample all three microhabitats of a given forest location simultaneously throughout the entire night. Sampling only the forest gaps and the forest ground simultaneously was the second best choice and proved to be a viable alternative when the number of available detectors is limited. When assessing bat species richness at the 1-km² scale, the implementation of these sampling schemes at three to four forest locations yielded the highest labor cost-benefit ratios but increased equipment costs. Our study illustrates that multiple passive acoustic sampling schemes require testing based on the target taxa and habitat complexity and should be performed with reference to cost-benefit ratios. Choosing a standardized and replicated sampling scheme is particularly important to optimize the level of precision in inventories, especially when rare or elusive species are expected. PMID:25558363
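The Clench species accumulation model mentioned above is simple enough to state directly: S(t) = a·t / (1 + b·t), whose asymptote a/b estimates total species richness. The parameter values below are invented for illustration:

```python
# The Clench accumulation curve and two quantities derived from it:
# the richness asymptote a/b and the effort needed to reach 90% of it.
def clench(t, a, b):
    return a * t / (1.0 + b * t)

a, b = 4.0, 0.25          # hypothetical fitted parameters
asymptote = a / b         # predicted total species richness
t90 = 9.0 / b             # effort t with S(t) = 0.9 * a/b (solve 0.1*b*t = 0.9)
print(asymptote)          # 16.0
print(clench(t90, a, b))  # 14.4  (= 0.9 * 16)
```

The steep cost of the last few species is visible here: reaching 90% of the asymptote takes t = 9/b units of effort, and each further percentage point takes disproportionately more, which is why the cost-benefit framing in the study matters.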
Towards optimal sampling schedules for integral pumping tests
NASA Astrophysics Data System (ADS)
Leschik, Sebastian; Bayer-Raich, Marti; Musolff, Andreas; Schirmer, Mario
2011-06-01
Conventional point sampling may miss plumes in groundwater due to an insufficient density of sampling locations. The integral pumping test (IPT) method overcomes this problem by increasing the sampled volume. One or more wells are pumped for a long duration (several days) and samples are taken during pumping. The obtained concentration-time series are used for the estimation of average aquifer concentrations Cav and mass flow rates MCP. Although the IPT method is a well accepted approach for the characterization of contaminated sites, no substantiated guideline for the design of IPT sampling schedules (optimal number of samples and optimal sampling times) is available. This study provides a first step towards optimal IPT sampling schedules by a detailed investigation of 30 high-frequency concentration-time series. Different sampling schedules were tested by modifying the original concentration-time series. The results reveal that the relative error in the Cav estimation increases with a reduced number of samples and higher variability of the investigated concentration-time series. Maximum errors of up to 22% were observed for sampling schedules with the lowest number of samples of three. The sampling scheme that relies on constant time intervals ∆t between different samples yielded the lowest errors.
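The schedule-testing idea, subsampling a concentration-time series and comparing the resulting average-concentration estimate against the full-series value, can be reproduced on synthetic data. The series and the error magnitudes below are ours, not the study's field data:

```python
# Test constant-interval sampling schedules against a synthetic
# concentration-time series: estimate Cav from n samples and measure
# the relative error versus the full-series average.
import math

full = [10 + 3 * math.sin(0.05 * t) for t in range(1000)]   # synthetic series
c_av_true = sum(full) / len(full)

def c_av_from_schedule(series, n_samples):
    step = len(series) // n_samples          # constant Delta-t between samples
    sub = series[::step][:n_samples]
    return sum(sub) / len(sub)

errs = []
for n in (3, 10, 50):
    est = c_av_from_schedule(full, n)
    errs.append(abs(est - c_av_true) / c_av_true)
print([round(100 * e, 2) for e in errs])     # relative errors in %, shrinking with n
```

As in the study, the error grows as the number of samples drops, and series with higher variability would amplify this effect.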
A new configurational bias scheme for sampling supramolecular structures
NASA Astrophysics Data System (ADS)
De Gernier, Robin; Curk, Tine; Dubacheva, Galina V.; Richter, Ralf P.; Mognetti, Bortolo M.
2014-12-01
We present a new simulation scheme which allows an efficient sampling of reconfigurable supramolecular structures made of polymeric constructs functionalized by reactive binding sites. The algorithm is based on the configurational bias scheme of Siepmann and Frenkel and is powered by the possibility of changing the topology of the supramolecular network by a non-local Monte Carlo algorithm. Such a plan is accomplished by multi-scale modelling that merges coarse-grained simulations, describing the typical polymer conformations, with experimental results accounting for free energy terms involved in the reactions of the active sites. We test the new algorithm for a system of DNA coated colloids for which we compute the hybridisation free energy cost associated with the binding of tethered single stranded DNAs terminated by short sequences of complementary nucleotides. In order to demonstrate the versatility of our method, we also consider polymers functionalized by receptors that bind a surface decorated by ligands. In particular, we compute the density of states of adsorbed polymers as a function of the number of ligand-receptor complexes formed. Such a quantity can be used to study the conformational properties of adsorbed polymers, which is useful when engineering adsorption with tailored properties. We successfully compare the results with the predictions of a mean field theory. We believe that the proposed method will be a useful tool to investigate supramolecular structures resulting from direct interactions between functionalized polymers for which efficient numerical methodologies of investigation are still lacking.
NOTE: Sampling and reconstruction schemes for biomagnetic sensor arrays
NASA Astrophysics Data System (ADS)
Naddeo, Adele; Della Penna, Stefania; Nappi, Ciro; Vardaci, Emanuele; Pizzella, Vittorio
2002-09-01
In this paper we generalize the approach of Ahonen et al (1993 IEEE Trans. Biomed. Eng. 40 859-69) to two-dimensional non-uniform sampling. The focus is on two main topics: (1) searching for the optimal sensor configuration on a planar measurement surface; and (2) reconstructing the magnetic field (a continuous function) from a discrete set of data points recorded with a finite number of sensors. A reconstruction formula for Bz is derived in the framework of the multidimensional Papoulis generalized sampling expansion (Papoulis A 1977 IEEE Trans. Circuits Syst. 24 652-4, Cheung K F 1993 Advanced Topics in Shannon Sampling and Interpolation Theory (New York: Springer) pp 85-119) in a particular case. Application of these considerations to the design of biomagnetic sensor arrays is also discussed.
An optimal performance control scheme for a 3D crane
NASA Astrophysics Data System (ADS)
Maghsoudi, Mohammad Javad; Mohamed, Z.; Husain, A. R.; Tokhi, M. O.
2016-01-01
This paper presents an optimal performance control scheme for a three-dimensional (3D) crane system, including a Zero Vibration shaper, that considers two control objectives concurrently. The control objectives are fast and accurate positioning of the trolley and minimum sway of the payload. A complete mathematical model of a lab-scale 3D crane is simulated in Simulink. With a specific cost function, the proposed controller is designed to address both control objectives, much as a skilled operator would. Simulation and experimental studies on a 3D crane show that the proposed controller performs better than a sequentially tuned PID-PID anti-swing controller. The controller provides a better position response with satisfactory payload sway in both the rail and trolley directions. Experiments with different payloads and cable lengths show that the proposed controller is robust to changes in payload, with satisfactory responses.
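The Zero Vibration (ZV) shaper referenced in the abstract has a closed form: two impulses whose amplitudes and spacing follow from the sway mode's natural frequency and damping ratio. The numeric values below are illustrative, not the paper's crane parameters:

```python
# Zero Vibration (ZV) input shaper: two impulses computed from the sway
# mode's natural frequency omega_n and damping ratio zeta.
import math

def zv_shaper(omega_n, zeta):
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    omega_d = omega_n * math.sqrt(1 - zeta ** 2)   # damped frequency
    A1, A2 = 1 / (1 + K), K / (1 + K)              # impulse amplitudes
    t1, t2 = 0.0, math.pi / omega_d                # impulse times
    return (A1, t1), (A2, t2)

i1, i2 = zv_shaper(omega_n=2.0, zeta=0.05)
print(round(i1[0] + i2[0], 6))   # 1.0  (amplitudes sum to one)
print(i2[1] > i1[1])             # True (second impulse half a damped period later)
```

Convolving the trolley position command with these two impulses cancels the residual oscillation of the modeled sway mode, which is what lets the controller pursue fast positioning without exciting the payload.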
Sampling Schemes and the Selection of Log-Linear Models for Longitudinal Data.
ERIC Educational Resources Information Center
von Eye, Alexander; Schuster, Christof; Kreppner, Kurt
2001-01-01
Discusses the effects of sampling scheme selection on the admissibility of log-linear models for multinomial and product multinomial sampling schemes for prospective and retrospective sampling. Notes that in multinomial sampling, marginal frequencies are not fixed, whereas for product multinomial sampling, uni- or multidimensional frequencies are…
NASA Astrophysics Data System (ADS)
Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.
2014-12-01
Carbon monoxide (CO) is an urban pollutant generated by internal combustion engines which contributes to the formation of ground level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40% from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high frequency temporal sampling and increases the cost and complexity of the measurement system.
NASA Astrophysics Data System (ADS)
Li, Y.; Han, B.; Métivier, L.; Brossier, R.
2016-09-01
We investigate an optimal fourth-order staggered-grid finite-difference scheme for 3D frequency-domain viscoelastic wave modeling. An anti-lumped-mass strategy is incorporated to minimize numerical dispersion. The optimal finite-difference coefficients and the mass weighting coefficients are obtained by minimizing the misfit between the normalized phase velocities and unity. An iterative damped least-squares method, the Levenberg-Marquardt algorithm, is utilized for the optimization. Dispersion analysis shows that the optimal fourth-order scheme exhibits less grid dispersion and anisotropy than the conventional fourth-order scheme across different Poisson's ratios. Moreover, only 3.7 grid points per minimum shear wavelength are required to keep the error of the group velocities below 1%. The memory cost is thus greatly reduced due to the coarser sampling. A parallel iterative method named CARP-CG is used to solve the large, ill-conditioned linear system arising in frequency-domain modeling. Validations are conducted against both analytic viscoacoustic and viscoelastic solutions. Compared with the conventional fourth-order scheme, the optimal scheme generates wavefields with smaller errors under the same discretization setups. Profiles of the wavefields are presented to confirm the better agreement between the optimal results and the analytic solutions.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
Optimal design of a hybridization scheme with a fuel cell using genetic optimization
NASA Astrophysics Data System (ADS)
Rodriguez, Marco A.
The fuel cell is one of the most dependable "green power" technologies, readily available for immediate application. It enables direct conversion of hydrogen and other gases into electric energy without any pollution of the environment. However, efficient power generation is a strictly stationary process that cannot operate in a dynamic environment. Consequently, the fuel cell becomes practical only within a specially designed hybridization scheme, capable of power storage and power management functions. The resultant technology can be utilized to its full potential only when both the fuel cell element and the entire hybridization scheme are optimally designed. Design optimization in engineering is among the most complex computational tasks due to the multidimensionality, nonlinearity, discontinuity, and presence of constraints in the underlying optimization problem. This research aims at the optimal utilization of fuel cell technology through the use of genetic optimization and advanced computing. This study implements genetic optimization in the definition of optimum hybridization rules for a PEM fuel cell/supercapacitor power system. PEM fuel cells exhibit high energy density but they are not intended for pulsating power draw applications. They work better in steady state operation and thus are often hybridized. In a hybrid system, the fuel cell provides power during steady state operation while capacitors or batteries augment the power of the fuel cell during power surges. Capacitors and batteries can also be recharged when the motor is acting as a generator. Making analogies to driving cycles, three hybrid system operating modes are investigated: 'Flat' mode, 'Uphill' mode, and 'Downhill' mode. In the process of discovering the switching rules for these three modes, we also generate a model of a 30W PEM fuel cell. This study also proposes the optimum design of a 30W PEM fuel cell. The PEM fuel cell model and hybridization switching rules are postulated
Stevenson's optimized perturbation theory applied to factorization and mass scheme dependence
NASA Astrophysics Data System (ADS)
David Politzer, H.
1982-01-01
The principles of the optimized perturbation theory proposed by Stevenson to deal with coupling constant scheme dependence are applied to the problem of factorization scheme dependence in inclusive hadron reactions. Similar considerations allow the optimization of problems with mass dependence. A serious shortcoming of the procedure, common to all applications, is discussed.
Optimization schemes for the inversion of Bouguer gravity anomalies
NASA Astrophysics Data System (ADS)
Zamora, Azucena
associated with structural changes [16]; therefore, it complements those geophysical methods with the same depth resolution that sample a different physical property (e.g. electromagnetic surveys sampling electric conductivity) or even those with different depth resolution sampling an alternative physical property (e.g. large-scale seismic reflection surveys imaging the crust and top upper mantle using seismic velocity fields). In order to improve the resolution of Bouguer gravity anomalies, and reduce their ambiguity and uncertainty for the modeling of the shallow crust, we propose the implementation of primal-dual interior point methods for the optimization of density structure models through the introduction of physical constraints for transitional areas obtained from previously acquired geophysical data sets. This dissertation presents in Chapter 2 an initial forward model implementation for the calculation of Bouguer gravity anomalies in the Porphyry Copper-Molybdenum (Cu-Mo) Copper Flat Mine region located in Sierra County, New Mexico. In Chapter 3, we present a constrained optimization framework (using interior-point methods) for the inversion of 2-D models of Earth structures delineating density contrasts of anomalous bodies in uniform regions and/or boundaries between layers in layered environments. We implement the proposed algorithm using three different synthetic gravitational data sets with varying complexity. Specifically, we improve the 2-dimensional density structure models by eliminating unacceptable solutions (geologically unfeasible models or those not satisfying the required constraints) through the reduction of the solution space. Chapter 4 shows the results from the implementation of our algorithm for the inversion of gravitational data obtained from the area surrounding the Porphyry Cu-Mo Copper Flat Mine in Sierra County, NM. 
Information obtained from previous induced polarization surveys and core samples served as physical constraints for the
Nonlinear Comparison of High-Order and Optimized Finite-Difference Schemes
NASA Technical Reports Server (NTRS)
Hixon, R.
1998-01-01
The effect of reducing the formal order of accuracy of a finite-difference scheme in order to optimize its high-frequency performance is investigated using the 1-D nonlinear unsteady inviscid Burgers' equation. It is found that the benefits of optimization do carry over into nonlinear applications. Both explicit and compact schemes are compared to Tam and Webb's explicit 7-point Dispersion Relation Preserving scheme, as well as a spectral-like compact scheme derived following Lele's work. Results are given for the absolute and L2 errors as a function of time.
Discretization of the Gabor-type scheme by sampling of the Zak transform
NASA Astrophysics Data System (ADS)
Zibulski, Meir; Zeevi, Yehoshua Y.
1994-09-01
The matrix algebra approach was previously applied in the analysis of the continuous Gabor representation in the Zak transform domain. In this study we analyze the discrete and finite (periodic) scheme by the same approach. A direct relation that exists between the two schemes, based on the sampling of the Zak transform, is established. Specifically, we show that sampling of the Gabor expansion in the Zak transform domain yields a discrete scheme of representation. Such a derivation yields a simple relation between the schemes by means of the periodic extension of the signal. We show that in the discrete Zak domain the frame operator can be expressed by means of a matrix-valued function which is simply the sampled version of the matrix-valued function of the continuous scheme. This result establishes a direct relation between the frame properties of the two schemes.
NASA Astrophysics Data System (ADS)
Cunha, G.; Redonnet, S.
2014-04-01
The present article aims at highlighting the strengths and weaknesses of the so-called spectral-like optimized (explicit central) finite-difference schemes when they are used for numerically approximating spatial derivatives in aeroacoustics evolution problems. With that view, we first recall how differential operators can be approximated using explicit central finite-difference schemes. The possible spectral-like optimization of the latter is then discussed, the advantages and drawbacks of such an optimization being theoretically studied before they are numerically quantified. For doing so, two popular spectral-like optimized schemes are assessed via a direct comparison against their standard counterparts, such a comparative exercise being conducted for several academic test cases. At the end, general conclusions are drawn, which allows us to discuss the way spectral-like optimized schemes should be preferred (or not) to standard ones when it comes to simulating real-life aeroacoustics problems.
Clever particle filters, sequential importance sampling and the optimal proposal
NASA Astrophysics Data System (ADS)
Snyder, Chris
2014-05-01
Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
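The degeneracy argument can be demonstrated numerically in the i.i.d. toy setting the abstract mentions: as the number of independent degrees of freedom grows, the largest normalized importance weight drifts toward 1 for a fixed ensemble size. This is an illustrative sketch, not the paper's experiment:

```python
# Toy degeneracy demo: i.i.d. Gaussian log-weight contributions per
# dimension; in higher dimension the log-weight spread grows and one
# particle dominates after normalization.
import math, random

random.seed(3)

def max_normalized_weight(n_particles, dim):
    logw = [sum(-0.5 * random.gauss(0, 1) ** 2 for _ in range(dim))
            for _ in range(n_particles)]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]   # subtract max for stability
    return max(w) / sum(w)

low = max_normalized_weight(100, 1)
high = max_normalized_weight(100, 100)
print(low < high)   # True: higher dimension -> closer to degeneracy
```

Avoiding this collapse requires the ensemble size to grow exponentially in the effective dimension, which is the bound the abstract extends to the implicit and equivalent-weights filters.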
Effect of control sampling rates on model-based manipulator control schemes
NASA Technical Reports Server (NTRS)
Khosla, P. K.
1987-01-01
The effect of changing the control sampling period on the performance of the computed-torque and independent joint control schemes is discussed. While the former utilizes the complete dynamics model of the manipulator, the latter assumes a decoupled and linear model of the manipulator dynamics. Researchers discuss the design of controller gains for both the computed-torque and the independent joint control schemes and establish a framework for comparing their trajectory tracking performance. Experiments show that within each scheme the trajectory tracking accuracy varies slightly with the change of the sampling rate. However, at low sampling rates the computed-torque scheme outperforms the independent joint control scheme. Based on experimental results, researchers also conclusively establish the importance of high sampling rates as they result in an increased stiffness of the system.
Rate-distortion optimization for compressive video sampling
NASA Astrophysics Data System (ADS)
Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee
2014-05-01
The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
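The block-level step described above reduces, for a concave quadratic model, to a closed-form vertex computation. A minimal sketch with hypothetical coefficients (the paper's fitted model parameters are not given here):

```python
def optimal_bit_depth(alpha, beta):
    """Vertex of a concave quadratic model PSNR(b) = alpha*b**2 + beta*b + gamma:
    setting d(PSNR)/db = 2*alpha*b + beta = 0 gives b* = -beta / (2*alpha)."""
    assert alpha < 0, "the model must be concave for a maximum to exist"
    return -beta / (2.0 * alpha)

# Hypothetical fitted coefficients (not the paper's): PSNR peaks at 8 bits.
best_b = optimal_bit_depth(alpha=-0.5, beta=8.0)
```

In practice the continuous optimum would be rounded to an admissible integer bit-depth within the per-block bit budget.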
Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression
NASA Astrophysics Data System (ADS)
Horng, Ming-Huwi
Vector quantization is a powerful technique in digital image compression. The traditional and widely used Linde-Buzo-Gray (LBG) algorithm tends to converge to a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we apply a new swarm algorithm, honey bee mating optimization, to construct the codebook. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of the two other methods, the LBG and PSO-LBG algorithms. Experimental results showed that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images have higher quality than those generated by the other two methods.
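For reference, the baseline LBG (generalized Lloyd) iteration that the PSO-LBG and HBMO-LBG variants aim to improve can be sketched as follows, on synthetic two-cluster data with a deterministic initialization:

```python
import numpy as np

def lbg(vectors, k, iters=10):
    """Plain LBG / generalized Lloyd codebook training: the baseline that
    the swarm-based variants try to improve on."""
    # Deterministic initialization: k vectors spread across the training set.
    codebook = vectors[:: len(vectors) // k][:k].copy()
    for _ in range(iters):
        # Assign each training vector to its nearest codeword ...
        d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # ... then move each codeword to the centroid of its cell.
        for j in range(k):
            cell = vectors[labels == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook

# Synthetic training data: two well-separated clusters.
data = np.vstack([np.zeros((50, 2)), np.full((50, 2), 10.0)])
codebook = lbg(data, k=2)
```

Because each update only moves codewords toward local centroids, a poor initialization can trap LBG in a local optimum, which is the weakness the swarm methods address.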
An efficient scheme for sampling fast dynamics at a low average data acquisition rate
NASA Astrophysics Data System (ADS)
Philippe, A.; Aime, S.; Roger, V.; Jelinek, R.; Prévot, G.; Berthier, L.; Cipelletti, L.
2016-02-01
We introduce a temporal scheme for data sampling, based on a variable delay between two successive data acquisitions. The scheme is designed so as to reduce the average data flow rate, while still retaining the information on the data evolution on fast time scales. The practical implementation of the scheme is discussed and demonstrated in light scattering and microscopy experiments that probe the dynamics of colloidal suspensions using CMOS or CCD cameras as detectors.
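A generic variable-delay schedule of this kind (an illustration, not the authors' exact timing scheme) interleaves short bursts of fast acquisitions with long delays, so fast lags remain sampled while the mean data rate stays low:

```python
def acquisition_times(n_cycles=10, burst=5, dt_fast=0.01, dt_slow=1.0):
    """Variable-delay schedule: each cycle starts with a fast burst
    (captures fast dynamics) followed by one long delay (keeps the
    average data rate low)."""
    times, t = [], 0.0
    for _ in range(n_cycles):
        for _ in range(burst):
            times.append(t)
            t += dt_fast
        t += dt_slow
    return times

ts = acquisition_times()
avg_rate = len(ts) / ts[-1]     # well below the 1/dt_fast burst rate
```

Correlations at short lags are computed within bursts and at long lags across bursts, so the fast time scales remain accessible despite the reduced average flow rate.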
Resolution optimization with irregularly sampled Fourier data
NASA Astrophysics Data System (ADS)
Ferrara, Matthew; Parker, Jason T.; Cheney, Margaret
2013-05-01
Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer-Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications.
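The PSF referred to above can be computed directly from the sample locations and backprojection weights; a sketch for uniform weights at hypothetical irregular frequencies:

```python
import numpy as np

def psf(freqs, weights, x):
    """Point spread function of weighted irregular Fourier sampling:
    psf(x) = sum_k w_k * exp(2j*pi*f_k*x), normalized so psf(0) = 1."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return np.exp(2j * np.pi * np.outer(x, freqs)) @ w

rng = np.random.default_rng(1)
freqs = rng.uniform(-5.0, 5.0, 64)          # irregular 1D sample locations
x = np.linspace(-1.0, 1.0, 201)
h = psf(freqs, np.ones_like(freqs), x)      # uniform backprojection weights
```

The weighting strategies compared in the paper amount to different choices of `weights` here, trading mainlobe width against sidelobe level in `|h|`.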
Resource optimization scheme for multimedia-enabled wireless mesh networks.
Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md Jalil; Suh, Doug Young
2014-01-01
Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to the network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs that carry multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered the resource underutilization caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct the optimal connected mesh backbone topology with a minimum number of links and relay nodes, one which satisfies the given end-to-end QoS demands for multimedia traffic and identifies extra resources while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies show that our heuristic algorithm provides near-optimal results and saves about 20% of the resources that would otherwise be wasted by QoS-provisioning routing and random topology deployment. PMID:25111241
Sampling scheme for pyrethroids on multiple surfaces on commercial aircrafts.
Mohan, Krishnan R; Weisel, Clifford P
2010-06-01
A wipe sampler for the collection of permethrin from soft and hard surfaces has been developed for use in aircraft. "Disinsection," or application of pesticides, predominantly pyrethroids, inside commercial aircraft is routinely required by some countries and is done on an as-needed basis by airlines, resulting in potential dermal and inhalation pesticide exposures for the crew and passengers. A wipe method using filter paper and water was evaluated for both soft and hard aircraft surfaces. Permethrin was analyzed by GC/MS after ultrasonication extraction from the sampling medium into hexane and volume reduction. Recoveries, based on spraying known levels of permethrin, were 80-100% from table trays, seat handles, and rugs, and 40-50% from seat cushions. The wipe sampler is easy to use, requires minimal training, is compatible with regulations on what can be brought through security for use on commercial aircraft, and is readily adaptable for use in residential and other settings. PMID:19756041
An Optimized Handover Scheme with Movement Trend Awareness for Body Sensor Networks
Sun, Wen; Zhang, Zhiqiang; Ji, Lianying; Wong, Wai-Choong
2013-01-01
When a body sensor network (BSN) that is linked to the backbone via a wireless network interface moves from one coverage zone to another, a handover is required to maintain network connectivity. This paper presents an optimized handover scheme with movement trend awareness for BSNs. The proposed scheme predicts the future position of a BSN user using the movement trend extracted from historical positions, and adjusts the handover decision accordingly. Handover initiation time is optimized when the unnecessary handover rate is estimated to meet the requirement and the outage probability is minimized. The proposed handover scheme is simulated in a BSN deployment area in a hospital environment in the UK. Simulation results show that the proposed scheme reduces the outage probability by 22% as compared with the existing hysteresis-based handover scheme under the constraint of an acceptable handover rate. PMID:23736852
Efficient multiobjective optimization scheme for large scale structures
NASA Astrophysics Data System (ADS)
Grandhi, Ramana V.; Bharatram, Geetha; Venkayya, V. B.
1992-09-01
This paper presents a multiobjective optimization algorithm for the efficient design of large scale structures. The algorithm is based on generalized compound scaling techniques to reach the intersection of multiple functions. Multiple objective functions are treated similarly to behavior constraints; thus, any number of objectives can be handled in the formulation. Pseudo targets on the objectives are generated at each iteration in computing the scale factors. The algorithm develops a partial Pareto set. This method is computationally efficient because it does not solve many single objective optimization problems in reaching the Pareto set. The computational efficiency is compared with other multiobjective optimization methods, such as the weighting method and the global criterion method. Truss, plate, and wing structure design cases with stress and frequency considerations are presented to demonstrate the effectiveness of the method.
Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks
Robinson, Y. Harold; Rajaram, M.
2015-01-01
Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous-time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds optimal loop-free, link-disjoint paths in a MANET. It is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, particle swarm optimization (PSO) is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek nodes with better link quality in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
An optimally blended finite-spectral element scheme with minimal dispersion for Maxwell equations
NASA Astrophysics Data System (ADS)
Wajid, Hafiz Abdul; Ayub, Sobia
2012-10-01
We study the dispersive properties of the time harmonic Maxwell equations for optimally blended finite-spectral element scheme using tensor product elements defined on rectangular grid in d-dimensions. We prove and give analytical expressions for the discrete dispersion relations for this scheme. We find that for a rectangular grid (a) the analytical expressions for the discrete dispersion error in higher dimensions can be obtained using one dimensional discrete dispersion error expressions; (b) the optimum value of the blending parameter is p/(p+1) for all p∈N and for any number of spatial dimensions; (c) analytical expressions for the discrete dispersion relations for finite element and spectral element schemes can be obtained when the value of blending parameter is chosen to be 0 and 1 respectively; (d) the optimally blended scheme guarantees two additional orders of accuracy compared with standard finite element and spectral element schemes; and (e) the absolute accuracy of the optimally blended scheme is O(p-2) and O(p-1) times better than that of the pure finite element and spectral element schemes respectively.
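The role of the blending parameter can be checked numerically in the simplest case p = 1, where the optimal value p/(p+1) = 1/2 cancels the leading dispersion-error term of the pure finite element (blending 0) and spectral element (blending 1) schemes. A sketch for the 1D model problem on a uniform grid:

```python
import numpy as np

def dispersion_error(theta, tau):
    """Relative error of the discrete dispersion relation for linear (p = 1)
    elements on a uniform grid (theta = k*h), with the mass matrix blended as
    M = (1 - tau) * M_finite_element + tau * M_lumped(spectral)."""
    num = 2 - 2 * np.cos(theta)                          # stiffness symbol * h
    den = (1 - tau) * (4 + 2 * np.cos(theta)) / 6 + tau  # blended mass symbol / h
    omega2 = num / den                                   # discrete (omega*h)^2
    return abs(omega2 - theta ** 2) / theta ** 2

theta = 0.1  # a well-resolved wavenumber
errs = {tau: dispersion_error(theta, tau) for tau in (0.0, 0.5, 1.0)}
```

At this wavenumber the blended scheme's error is orders of magnitude below that of either pure scheme, consistent with the two additional orders of accuracy claimed above.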
Design of optimally smoothing multi-stage schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Van Leer, Bram; Tai, Chang-Hsien; Powell, Kenneth G.
1989-01-01
In this paper, a method is developed for designing multi-stage schemes that give optimal damping of high-frequencies for a given spatial-differencing operator. The objective of the method is to design schemes that combine well with multi-grid acceleration. The schemes are tested on a nonlinear scalar equation, and compared to Runge-Kutta schemes with the maximum stable time-step. The optimally smoothing schemes perform better than the Runge-Kutta schemes, even on a single grid. The analysis is extended to the Euler equations in one space-dimension by use of 'characteristic time-stepping', which preconditions the equations, removing stiffness due to variations among characteristic speeds. Convergence rates independent of the number of cells in the finest grid are achieved for transonic flow with and without a shock. Characteristic time-stepping is shown to be preferable to local time-stepping, although use of the optimally damping schemes appears to enhance the performance of local time-stepping. The extension of the analysis to the two-dimensional Euler equations is hampered by the lack of a model for characteristic time-stepping in two dimensions. Some results for local time-stepping are presented.
POWER-COST EFFICIENCY OF EIGHT MACROBENTHIC SAMPLING SCHEMES IN PUGET SOUND, WASHINGTON, USA
Power-cost efficiency (PCE_i = (n × c)_min / (n_i × c_i)), where i = sampling scheme, n = minimum number of replicate samples needed to detect a difference between locations with acceptable probabilities of Type I (α) and Type II (β) error (e.g., α = β = 0.05), c = mean "cost," in time or...
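The PCE formula can be evaluated directly once n and c are known for each candidate scheme; a sketch with hypothetical numbers:

```python
def power_cost_efficiency(schemes):
    """PCE_i = (n*c)_min / (n_i*c_i): the cheapest adequate scheme scores 1.0
    and every other scheme scores proportionally lower.

    schemes maps a scheme name to (n_replicates, cost_per_replicate)."""
    products = {name: n * c for name, (n, c) in schemes.items()}
    best = min(products.values())
    return {name: best / p for name, p in products.items()}

# Hypothetical schemes: (replicates needed for alpha = beta = 0.05, minutes each).
pce = power_cost_efficiency({"A": (4, 30.0), "B": (10, 8.0), "C": (6, 25.0)})
```

Here scheme B needs more replicates but each is cheap, so its n×c product is lowest and it scores 1.0.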
Zou Xubo; Mathis, W.
2005-08-15
We propose a scheme to realize the optimal universal quantum cloning of the polarization state of photons in the context of microwave cavity quantum electrodynamics. The scheme is based on the resonant interaction of three-level {lambda}-type atoms with two cavity modes. The operation requires atoms to fly one by one through the cavity. The interaction time between each of the atoms and the cavity is appropriately controlled by using a velocity selector. The scheme is deterministic and is feasible with current experimental technology.
Optimizing the monitoring scheme for groundwater quality in the Lusatian mining region
NASA Astrophysics Data System (ADS)
Zimmermann, Beate; Hildmann, Christian; Haubold-Rosar, Michael
2014-05-01
Opencast lignite mining always requires the lowering of the groundwater table. In Lusatia, strong mining activities during the GDR era were associated with low groundwater levels in large parts of the region. Pyrite (iron sulfide) oxidation in the aerated sediments is the cause of continuous regional groundwater pollution with sulfates, acids, iron, and other metals. The contaminated groundwater poses a danger to surface water bodies and may also affect soil quality. Due to the decline of mining activities after the German reunification, groundwater levels have begun to recover towards the pre-mining stage, which aggravates the environmental risks. Given the relevance of the problem and the need for effective remediation measures, it is mandatory to know the temporal and spatial distribution of potential pollutants. The reliability of these space-time models, in turn, relies on a well-designed groundwater monitoring scheme. So far, the groundwater monitoring network in the Lusatian mining region represents a purposive sample in space and time with great variations in the density of monitoring wells. Moreover, groundwater quality in some of the areas that face pronounced increases in groundwater levels is currently not monitored at all. We therefore aim to optimize the monitoring network based on the existing information, taking into account practical aspects such as the land-use dependent need for remedial action. This contribution will discuss the usefulness of approaches for optimizing spatio-temporal mapping with regard to groundwater pollution by iron and aluminum in the Lusatian mining region.
Incongruity of the unified scheme with a 3CRR-like equatorial strong-source sample
NASA Astrophysics Data System (ADS)
Singal, Ashok K.; Singh, Raj Laxmi
2013-08-01
We examine the consistency of the unified scheme of the powerful extragalactic radio sources with the 408 MHz BRL sample (Best, Röttgering & Lehnert) from the equatorial sky region, selected at the same flux-density level as the 3CRR sample. We find that, unlike in the 3CRR sample, a foreshortening in the observed sizes of quasars, expected from the orientation-based unified scheme model, is not seen in the BRL sample, at least in different redshift bins up to z ˜ 1. Even the quasar fraction in individual redshift bins up to z ˜ 1 does not match with that expected from the unified scheme, where radio galaxies and quasars are supposed to belong to a common parent population at all redshifts. This not only casts strong doubts on the unified scheme, but also throws up an intriguing result that in a sample selected from the equatorial sky region, using almost the same criteria as in the 3CRR sample from the Northern hemisphere, the relative distribution of radio galaxies and quasars differs qualitatively from the 3CRR sample.
A numerical scheme for optimal transition paths of stochastic chemical kinetic systems
NASA Astrophysics Data System (ADS)
Liu, Di
2008-10-01
We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified to be the most probable paths according to the Large Deviation Theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve the optimal transition paths. Applications to Gene Regulatory Networks such as the toggle switch model and the Lactose Operon Model in Escherichia coli are presented as numerical examples.
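A crude one-dimensional sketch of the Minimum Action Method idea (gradient descent on a discretized Freidlin-Wentzell action, with a generic double-well drift standing in for a chemical kinetic system; this is an illustration, not the paper's modified MAM):

```python
import numpy as np

def drift(x):
    return x - x ** 3          # gradient flow of the double well V(x) = x**4/4 - x**2/2

def action(path, dt):
    """Discretized Freidlin-Wentzell action 0.5 * sum |x' - b(x)|^2 * dt."""
    v = np.diff(path) / dt
    mid = 0.5 * (path[:-1] + path[1:])
    return 0.5 * dt * np.sum((v - drift(mid)) ** 2)

def minimize_action(x0, x1, n=21, dt=0.5, steps=1500, lr=0.05, eps=1e-6):
    """Crude MAM: finite-difference gradient descent on the interior nodes
    of a transition path whose endpoints are pinned at the metastable states."""
    path = np.linspace(x0, x1, n)
    for _ in range(steps):
        base = action(path, dt)
        grad = np.zeros(n)
        for i in range(1, n - 1):      # endpoints x0, x1 stay fixed
            p = path.copy()
            p[i] += eps
            grad[i] = (action(p, dt) - base) / eps
        path -= lr * grad
    return path, action(path, dt)

path, S = minimize_action(-1.0, 1.0)   # transition between the two wells
```

The descent lowers the action below that of the straight-line initial path; production MAM implementations use analytic gradients and better optimizers, but the variational structure is the same.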
NASA Astrophysics Data System (ADS)
Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann
2013-04-01
In recent years, considerable attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness of uncertainty estimates depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on its ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimates of hydrological models, and (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution Metropolis algorithm (SCEMUA), differential evolution adaptive Metropolis (DREAM), particle swarm optimization (PSO), and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in western Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and the uncertainty results on model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of the outputs of a hydrological model H. Inputs to these models are specially identified representative variables (past precipitation events and flows). The trained machine learning models are then employed to predict the model output uncertainty specific to the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best, since there is no basis for comparison). A solution could be to form a committee of all models U and
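Of the sampling strategies listed, GLUE is the simplest to sketch: draw parameters at random, keep the "behavioral" sets whose likelihood measure exceeds a threshold, and summarize the retained simulations with quantiles. A toy version with a hypothetical one-parameter model (unweighted quantiles for simplicity):

```python
import numpy as np

rng = np.random.default_rng(42)

def model(theta, t):
    # Stand-in "hydrological model": a simple exponential recession.
    return np.exp(-theta * t)

t = np.linspace(0.0, 5.0, 50)
observed = model(0.7, t) + rng.normal(0.0, 0.02, t.size)

# GLUE: Monte Carlo parameter sampling, behavioral threshold, quantile bands.
thetas = rng.uniform(0.1, 2.0, 2000)
sims = np.array([model(th, t) for th in thetas])
nse = 1 - ((sims - observed) ** 2).sum(1) / ((observed - observed.mean()) ** 2).sum()
behavioral = nse > 0.9                       # keep only "behavioral" parameter sets
weights = nse[behavioral] / nse[behavioral].sum()
mean_pred = weights @ sims[behavioral]       # likelihood-weighted prediction
lo, hi = (np.quantile(sims[behavioral], q, axis=0) for q in (0.05, 0.95))
```

The `[lo, hi]` band is the kind of output-uncertainty summary that the MLUE regression model U is trained to reproduce from representative input variables.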
Simultaneous optimization of dose distributions and fractionation schemes in particle radiotherapy
Unkelbach, Jan; Zeng, Chuan; Engelsman, Martijn
2013-09-15
Purpose: The paper considers the fractionation problem in intensity modulated proton therapy (IMPT). Conventionally, IMPT fields are optimized independently of the fractionation scheme. In this work, we discuss the simultaneous optimization of fractionation scheme and pencil beam intensities. Methods: This is performed by allowing for distinct pencil beam intensities in each fraction, which are optimized using objective and constraint functions based on biologically equivalent dose (BED). The paper presents a model that mimics an IMPT treatment with a single incident beam direction for which the optimal fractionation scheme can be determined despite the nonconvexity of the BED-based treatment planning problem. Results: For this model, it is shown that a small α/β ratio in the tumor gives rise to a hypofractionated treatment, whereas a large α/β ratio gives rise to hyperfractionation. It is further demonstrated that, for intermediate α/β ratios in the tumor, a nonuniform fractionation scheme emerges, in which it is optimal to deliver different dose distributions in subsequent fractions. The intuitive explanation for this phenomenon is as follows: By varying the dose distribution in the tumor between fractions, the same total BED can be achieved with a lower physical dose. If it is possible to achieve this dose variation in the tumor without varying the dose in the normal tissue (which would have an adverse effect), the reduction in physical dose may lead to a net reduction of the normal tissue BED. For proton therapy, this is indeed possible to some degree because the entrance dose is mostly independent of the range of the proton pencil beam. Conclusions: The paper provides conceptual insight into the interdependence of optimal fractionation schemes and the spatial optimization of dose distributions. It demonstrates the emergence of nonuniform fractionation schemes that arise from the standard BED model when IMPT fields and fractionation scheme are optimized
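The BED model underlying this optimization is the standard linear-quadratic expression BED = n·d·(1 + d/(α/β)); a quick computation shows why the α/β ratio steers the fractionation choice:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically equivalent dose: BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Same 60 Gy physical dose, two fractionation schemes, alpha/beta = 10 Gy:
conventional = bed(30, 2.0, alpha_beta=10.0)   # 30 x 2 Gy
hypo = bed(15, 4.0, alpha_beta=10.0)           # 15 x 4 Gy
```

At the same 60 Gy physical dose, the hypofractionated scheme delivers the larger BED, and the gap widens as α/β decreases (with α/β = 3 Gy the two values become 100 and 140 Gy), which is why a small tumor α/β favors hypofractionation.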
Choosing a cost functional and a difference scheme in the optimal control of metal solidification
NASA Astrophysics Data System (ADS)
Albu, A. V.; Zubov, V. I.
2011-01-01
The optimal control of solidification in metal casting is considered. The underlying mathematical model is based on a three-dimensional two-phase initial-boundary value problem of the Stefan type. The study is focused on choosing a cost functional in the optimal control of solidification and choosing a difference scheme for solving the direct problem. The results of the study are described and analyzed.
Bilateral teleoperation control with varying time delay using optimal passive scheme
NASA Astrophysics Data System (ADS)
Zhang, Changlei; Yoo, Sung Goo; Chong, Kil To
2007-12-01
This paper presents a passive control scheme for a force-reflecting bilateral teleoperation system operated via the Internet. To improve the stability and performance of the system, the host and client must be coupled dynamically over the network, and Internet technology provides a convenient way to develop an integrated teleoperation system. However, as Internet use increases, network congestion grows, and transmission delay and packet loss increase accordingly, which can destabilize remote control of the system. In this paper, we present an optimal passive control scheme for a force-reflecting bilateral teleoperation system via the Internet and investigate how a varying time delay affects the stability of the teleoperation system. A new approach based on an optimal passive control scheme was designed for the system. The simulation results and the tracking performance of the implemented system are presented in this paper.
NASA Astrophysics Data System (ADS)
Okayama, Hideaki; Onawa, Yosuke; Shimura, Daisuke; Yaegashi, Hiroki; Sasaki, Hironori
2016-08-01
We describe a Bragg grating with a phase shift section and a sampled grating scheme that converts input polarization to orthogonal polarization. A very narrow polarization-independent wavelength peak can be generated by phase shift structures and polarization-independent multiple diffraction peaks by sampled gratings. The characteristics of the device were examined by transfer matrix and finite-difference time-domain methods.
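The transfer-matrix analysis mentioned above multiplies 2×2 characteristic matrices, one per layer; a minimal sketch for a uniform quarter-wave stack at normal incidence (illustrative refractive indices, not the device in the paper):

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one lossless dielectric layer (normal incidence)."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(layers, lam, n_in=1.0, n_out=1.5):
    """Stack reflectance from the ordered product of layer matrices."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    (m11, m12), (m21, m22) = M
    r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / \
        (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22)
    return abs(r) ** 2

lam0 = 1.55  # design (Bragg) wavelength, microns
stack = [(2.1, lam0 / (4 * 2.1)), (1.45, lam0 / (4 * 1.45))] * 10
R_bragg = reflectance(stack, lam0)   # strong reflection at the Bragg wavelength
R_off = reflectance(stack, 1.30)     # outside the stopband
```

Phase-shift sections and sampled-grating superperiods enter the same formalism as extra matrices in the product, which is how the narrow transmission peak and the multiple diffraction peaks described above are modeled.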
NASA Astrophysics Data System (ADS)
Ren, Bin; Zhang, Shuyou; Tan, Jianrong
2014-07-01
The current development of precision plastic injection molding machines mainly focuses on saving material and improving precision, two aims that contradict each other. For a clamp unit, improving clamping precision depends on the design quality of the stationary platen. Compared with parametric design of the stationary platen, structural scheme design yields an optimization model with two objectives and multiple constraints. In this paper, a SE-160 precision plastic injection molding machine with 1600 kN clamping force is selected as the subject of the case study. During mold closing and opening, the stationary platen of the SE-160 is subjected to cyclic loading, which can cause fatigue rupture of the tie bars in long-term periodic operation. In order to reduce the deflection of the stationary platen, the FEA method is introduced to optimize its structure. Firstly, an optimal topology model is established by the variable density method. Then, structural topology optimizations of the stationary platen are performed with material removal ratios of 50%, 60%, and 70%. Secondly, the other two recommended optimization schemes are given and compared with the original structure. The performance comparison shows that scheme II of the platen is the best one. By choosing the best alternative, the volume and the local maximal stress of the platen can be decreased, saving material and yielding better mechanical properties. This paper proposes a structural optimization design scheme that saves material as well as improving the clamping precision of the precision plastic injection molding machine.
An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
Mayhew, T M; Sharma, A K
1984-01-01
Using the tibial nerves of diabetic rats, alternative sampling schemes have been compared for estimating the sizes of fibres in nerve trunks of mixed fascicularity. The merits of each scheme were evaluated by comparing their reliability, precision, cost in time, and efficiency with 'absolute' values obtained by first measuring every fibre. The external diameter of all myelinated fibres was measured in each of six nerves (c. 2900 fibres/nerve). Total measurement time was about 29 hours. All sampling schemes produced group means within ±4% of the absolute value of 5.52 micron. The most efficient schemes were those in which only 6% of all fibres were selected for measurement. For these the measurement time was 2 hours or less. Results are discussed in the general context of measurement of the sizes of nerve fibres. It is concluded that future studies should place more emphasis on sampling fewer fibres from more animals rather than on measuring all fibres very precisely. These considerations are likely to be of special concern to those wanting to analyse specimens with large fibre complements and those screening large numbers of specimens. PMID:6381443
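The trade-off the authors quantify (measuring only 6% of fibres in roughly 2 hours instead of 29, while staying within a few percent of the 'absolute' mean) can be illustrated with a quick simulation. The gamma-distributed diameters below are invented for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated nerve: ~2900 myelinated fibre diameters (micron), right-skewed
diameters = rng.gamma(shape=9.0, scale=0.61, size=2900)  # mean ~5.5 micron (assumed)

full_mean = diameters.mean()               # the 'absolute' value (29 h of work)

# Measure only 6% of fibres, chosen at random (~2 h of work)
n_sub = int(0.06 * diameters.size)         # 174 fibres
sub_mean = rng.choice(diameters, size=n_sub, replace=False).mean()

rel_error = abs(sub_mean - full_mean) / full_mean
print(f"full mean {full_mean:.2f}, 6% sample mean {sub_mean:.2f}, "
      f"relative error {100 * rel_error:.1f}%")
```

The standard error of a 174-fibre subsample is small relative to the between-animal variation the authors recommend spending effort on instead.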
Procassini, R.J.; Birdsall, C.K.; Morse, E.C.; Cohen, B.I.
1988-01-01
Implicit time integration schemes allow for the use of larger time steps than conventional explicit methods, thereby extending the applicability of kinetic particle simulation methods. This paper will describe a study of the performance and optimization of two such direct implicit schemes, which are used to follow the trajectories of charged particles in an electrostatic, particle-in-cell plasma simulation code. The direct implicit method that was used for this study is an alternative to the moment-equation implicit method. 10 refs., 7 figs., 4 tabs.
K-Optimal Gradient Encoding Scheme for Fourth-Order Tensor-Based Diffusion Profile Imaging
Alipoor, Mohammad; Gu, Irene Yu-Hua; Mehnert, Andrew; Maier, Stephan E.; Starck, Göran
2015-01-01
The design of an optimal gradient encoding scheme (GES) is a fundamental problem in diffusion MRI. It is well studied for the case of second-order tensor imaging (Gaussian diffusion). However, it has not been investigated for the wide range of non-Gaussian diffusion models. The optimal GES is the one that minimizes the variance of the estimated parameters. Such a GES can be realized by minimizing the condition number of the design matrix (K-optimal design). In this paper, we propose a new approach to solve the K-optimal GES design problem for fourth-order tensor-based diffusion profile imaging. The problem is a nonconvex experiment design problem. Using convex relaxation, we reformulate it as a tractable semidefinite programming problem. Solving this problem leads to several theoretical properties of K-optimal design: (i) the odd moments of the K-optimal design must be zero; (ii) the even moments of the K-optimal design are proportional to the total number of measurements; (iii) the K-optimal design is not unique, in general; and (iv) the proposed method can be used to compute the K-optimal design for an arbitrary number of measurements. Our Monte Carlo simulations support the theoretical results and show that, in comparison with existing designs, the K-optimal design leads to the minimum signal deviation. PMID:26451376
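The condition-number criterion itself is easy to demonstrate on the well-studied second-order (Gaussian) case the abstract mentions. The sketch below builds the 6-parameter tensor design matrix for two hypothetical gradient schemes and compares their condition numbers; the paper's fourth-order SDP formulation is not reproduced here:

```python
import numpy as np

def design_matrix(dirs):
    """Second-order tensor design matrix: one row per unit gradient direction."""
    g = np.asarray(dirs, float)
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    x, y, z = g[:, 0], g[:, 1], g[:, 2]
    return np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z])

# Well-spread classic 6-direction scheme
spread = [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]

# Poorly spread scheme: all directions clustered near the z axis
clustered = [(0.05, 0.0, 1), (0.0, 0.05, 1), (-0.05, 0.0, 1),
             (0.0, -0.05, 1), (0.04, 0.04, 1), (-0.04, -0.04, 1)]

cond_spread = np.linalg.cond(design_matrix(spread))
cond_clustered = np.linalg.cond(design_matrix(clustered))
print(cond_spread, cond_clustered)  # the spread scheme is far better conditioned
```

A lower condition number means measurement noise is amplified less when the tensor parameters are estimated, which is exactly the variance-minimization rationale behind K-optimal design.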
NASA Astrophysics Data System (ADS)
Zou, Rui; Liu, Yong; Riverson, John; Parker, Andrew; Carter, Stephen
2010-08-01
Applications using simulation-optimization approaches are often limited in practice because of the high computational cost associated with executing the simulation-optimization analysis. This research proposes a nonlinearity interval mapping scheme (NIMS) to overcome the computational barrier of applying the simulation-optimization approach for a waste load allocation analysis. Unlike the traditional response surface methods that use response surface functions to approximate the functional form of the original simulation model, the NIMS approach involves mapping the nonlinear input-output response relationship of a simulation model into an interval matrix, thereby converting the original simulation-optimization model into an interval linear programming model. By using the risk explicit interval linear programming algorithm and an inverse mapping scheme to implicitly resolve nonlinearity in the interval linear programming model, the NIMS approach efficiently obtained near-optimal solutions of the original simulation-optimization problem. The NIMS approach was applied to a case study on Wissahickon Creek in Pennsylvania, with the objective of finding optimal carbonaceous biological oxygen demand and ammonia (NH4) point source waste load allocations, subject to daily average and minimum dissolved oxygen compliance constraints at multiple points along the stream. First, a simulation-optimization model was formulated for this case study. Next, a genetic algorithm was used to solve the problem to produce reference optimal solutions. Finally, the simulation-optimization model was solved using the proposed NIMS, and the obtained solutions were compared with the reference solutions to demonstrate the superior computational efficiency and solution quality of the NIMS.
Agrawal, Gaurav; Kawajiri, Yoshiaki
2012-05-18
Over the past decade, many modifications have been proposed to simulated moving bed (SMB) chromatography in order to effectively separate a binary mixture. However, the separation of multi-component mixtures using SMB is still one of the major challenges. In addition, the performance of an SMB system depends strongly on its operating conditions. Our study addresses this issue by formulating a multi-objective optimization problem that simultaneously maximizes the productivity and the purity of the intermediate eluting component. A number of optimized isocratic ternary SMB operating schemes are compared in terms of both productivity and desorbent-to-feed ratio. Furthermore, we propose a generalized full cycle (GFC) formulation based on a superstructure encompassing numerous operating schemes proposed in the literature. We also demonstrate that this approach has the potential to find the best ternary separation strategy among various alternatives. PMID:22498352
High-order sampling schemes for path integrals and Gaussian chain simulations of polymers
Müser, Martin H.; Müller, Marcus
2015-05-07
In this work, we demonstrate that path-integral schemes, derived in the context of many-body quantum systems, benefit the simulation of Gaussian chains representing polymers. Specifically, we show how to decrease discretization corrections with little extra computation from the usual O(1/P²) to O(1/P⁴), where P is the number of beads representing the chains. As a consequence, high-order integrators necessitate much smaller P than those commonly used. Particular emphasis is placed on the questions of how to maintain this rate of convergence for open polymers and for polymers confined by a hard wall, as well as how to ensure efficient sampling. The advantages of the high-order sampling schemes are illustrated by studying the surface tension of a polymer melt and the interface tension in a binary homopolymer blend.
Tank waste remediation system optimized processing strategy with an altered treatment scheme
Slaathaug, E.J.
1996-03-01
This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy with an altered treatment scheme performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility.
Mishra, S.; Kappiyoor, R.
2015-01-01
X-ray luminescent computed tomography (XLCT) is a promising new functional imaging modality based on computed tomography (CT). This imaging technique uses X-ray excitable nanophosphors to illuminate objects of interest in the visible spectrum. Though there are several validations of the underlying technology, none of them have addressed the issues of performance optimality for a given design of the imaging system. This study addresses the issue of obtaining the best image quality by optimizing collimator width to balance the signal to noise ratio (SNR) and resolution. The results can be generalized to any XLCT system employing a selective excitation scheme. PMID:25642356
Jin, Biao; Laskov, Christine; Rolle, Massimo; Haderlein, Stefan B
2011-06-15
Compound-specific online chlorine isotope analysis of chlorinated hydrocarbons was evaluated and validated using gas chromatography coupled to a regular quadrupole mass spectrometer (GC-qMS). This technique avoids tedious off-line sample pretreatments, but requires mathematical data analysis to derive chlorine isotope ratios from mass spectra. We compared existing evaluation schemes for calculating chlorine isotope ratios with those that we modified or newly proposed. We also systematically tested important experimental procedures such as external vs. internal referencing schemes, and instrumental settings including split ratio, ionization energy, and dwell times. To this end, headspace samples of tetrachloroethene (PCE), trichloroethene (TCE), and cis-dichloroethene (cDCE) at aqueous concentrations in the range of 20-500 μg/L (amount on-column range: 3.2-115 pmol) were analyzed using GC-qMS. The results (³⁷Cl/³⁵Cl ratios) showed satisfactory to good precision, with relative standard deviations (n = 5) between 0.4‰ and 2.1‰. However, we found that the achievable precision varies considerably depending on the applied data evaluation scheme, the instrumental settings, and the analyte. A systematic evaluation of these factors allowed us to optimize the GC-qMS technique to determine chlorine isotope ratios of chlorinated organic contaminants. PMID:21612209
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty in the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999385
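The sensitivity of sample size to the assumed effect size, which motivates both design families above, can be seen with a textbook two-sample normal approximation. The effect sizes and 5%/80% operating characteristics below are illustrative, not the paper's optimality criterion:

```python
from statistics import NormalDist
from math import ceil

def n_per_group(delta, sigma=1.0, alpha=0.05, power=0.80):
    """Two-sample z-approximation sample size per group."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Sample size is highly sensitive to the assumed effect size:
for delta in (0.3, 0.5, 0.8):
    print(delta, n_per_group(delta))
# A conservative (small) delta inflates n; an optimistic (large) delta risks
# underpowering -- the two directions that group sequential and sample size
# re-estimation designs hedge against, respectively.
```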
Comparison of rainfall sampling schemes using a calibrated stochastic rainfall generator
Welles, E.
1994-12-31
Accurate rainfall measurements are critical to river flow predictions. Areal and gauge rainfall measurements create different descriptions of the same storms. The purpose of this study is to characterize those differences. A stochastic rainfall generator was calibrated using an automatic search algorithm. Statistics describing several rainfall characteristics of interest were used in the error function. The calibrated model was then used to generate storms which were exhaustively sampled, sparsely sampled, and sampled areally with 4 x 4 km grids. The sparsely sampled rainfall was also kriged to 4 x 4 km blocks. The differences between the four schemes were characterized by comparing statistics computed from each of the sampling methods. The possibility of predicting areal statistics from gauge statistics was explored. It was found that areally measured storms appeared to move more slowly, appeared larger, appeared less intense, and had shallower intensity gradients.
Xing, Changhu; Jensen, Colby; Folsom, Charles; Ban, Heng; Marshall, Douglas W.
2014-01-01
In the guarded cut-bar technique, a guard surrounding the measured sample and reference (meter) bars is temperature controlled to carefully regulate heat losses from the sample and reference bars. Guarding is typically carried out by matching the temperature profiles between the guard and the test stack of sample and meter bars. Problems arise in matching the profiles, especially when the thermal conductivities of the meter bars and of the sample differ, as is usually the case. In a previous numerical study, the applied guarding condition (guard temperature profile) was found to be an important factor in measurement accuracy. Different from the linear-matched or isothermal schemes recommended in the literature, the optimal guarding condition depends on the system geometry and the thermal conductivity ratio of sample to meter bar. To validate the numerical results, an experimental study was performed to investigate the resulting error under different guarding conditions using stainless steel 304 as both the sample and meter bars. The optimal guarding condition was further verified on a certified reference material, Pyroceram 9606, and on 99.95% pure iron, whose thermal conductivities are much smaller and much larger, respectively, than that of the stainless steel meter bars. Additionally, measurements were performed using three different inert gases to show the effect of the insulation's effective thermal conductivity on measurement error, revealing that low-conductivity argon gas gives the lowest error sensitivity when deviating from the optimal condition. The result of this study provides a general guideline for this specific measurement method and for methods requiring optimal guarding or insulation.
NASA Astrophysics Data System (ADS)
Hong, S.; Yu, X.; Park, S. K.; Choi, Y.-S.; Myoung, B.
2014-10-01
Optimization of land surface models has been challenging due to the model complexity and uncertainty. In this study, we performed scheme-based model optimizations by designing a framework for coupling "the micro-genetic algorithm" (micro-GA) and "the Noah land surface model with multiple physics options" (Noah-MP). Micro-GA controls the scheme selections among eight different land surface parameterization categories, each containing 2-4 schemes, in Noah-MP in order to extract the optimal scheme combination that achieves the best skill score. This coupling framework was successfully applied to the optimizations of evapotranspiration and runoff simulations in terms of surface water balance over the Han River basin in Korea, showing outstanding speeds in searching for the optimal scheme combination. Taking advantage of the natural selection mechanism in micro-GA, we explored the model sensitivity to scheme selections and the scheme interrelationship during the micro-GA evolution process. This information is helpful for better understanding physical parameterizations and hence it is expected to be effectively used for further optimizations with uncertain parameters in a specific set of schemes.
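A minimal sketch of GA-driven scheme selection of the kind described above, assuming a toy separable skill score and invented category sizes (not Noah-MP's actual eight categories, and without the micro-GA restart mechanism):

```python
import random

random.seed(7)

# Four hypothetical parameterization categories with 4/3/3/2 scheme options each
N_SCHEMES = [4, 3, 3, 2]

# Toy, separable skill table: fitness of a combination = sum of per-category scores
SCORES = [[0.2, 0.9, 0.4, 0.1], [0.3, 0.8, 0.5], [0.7, 0.2, 0.4], [0.6, 0.9]]

def fitness(genome):
    return sum(SCORES[cat][g] for cat, g in enumerate(genome))

def evolve(pop_size=8, generations=100, p_mut=0.3):
    pop = [[random.randrange(n) for n in N_SCHEMES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]                           # elitism: keep the best combination
        children = [elite]
        while len(children) < pop_size:
            a, b = random.sample(pop[:4], 2)     # mate among the fitter half
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            for cat, n in enumerate(N_SCHEMES):  # per-gene mutation
                if random.random() < p_mut:
                    child[cat] = random.randrange(n)
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because each gene is a discrete scheme choice rather than a continuous parameter, the GA searches scheme combinations directly, which is the appeal of the coupling framework described in the abstract.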
Vonderheide, Anne P; Kauffman, Peter E; Hieber, Thomas E; Brisbin, Judith A; Melnyk, Lisa Jo; Morgan, Jeffrey N
2009-03-25
Analysis of an individual's total daily food intake may be used to determine aggregate dietary ingestion of given compounds. However, the resulting composite sample represents a complex mixture, and measurement of such can often prove to be difficult. In this work, an analytical scheme was developed for the determination of 12 select pyrethroid pesticides in dietary samples. In the first phase of the study, several cleanup steps were investigated for their effectiveness in removing interferences in samples with a range of fat content (1-10%). Food samples were homogenized in the laboratory, and preparatory techniques were evaluated through recoveries from fortified samples. The selected final procedure consisted of a lyophilization step prior to sample extraction. A sequential 2-fold cleanup procedure of the extract included diatomaceous earth for removal of lipid components followed with a combination of deactivated alumina and C(18) for the simultaneous removal of polar and nonpolar interferences. Recoveries from fortified composite diet samples (10 microg kg(-1)) ranged from 50.2 to 147%. In the second phase of this work, three instrumental techniques [gas chromatography-microelectron capture detection (GC-microECD), GC-quadrupole mass spectrometry (GC-quadrupole-MS), and GC-ion trap-MS/MS] were compared for greatest sensitivity. GC-quadrupole-MS operated in selective ion monitoring (SIM) mode proved to be most sensitive, yielding method detection limits of approximately 1 microg kg(-1). The developed extraction/instrumental scheme was applied to samples collected in an exposure measurement field study. The samples were fortified and analyte recoveries were acceptable (75.9-125%); however, compounds coextracted from the food matrix prevented quantitation of four of the pyrethroid analytes in two of the samples considered. PMID:19292459
McGregor, D.A.
1993-07-01
The purpose of the Human Genome Project is outlined followed by a discussion of electrophoresis in slab gels and capillaries and its application to deoxyribonucleic acid (DNA). Techniques used to modify electroosmotic flow in capillaries are addressed. Several separation and detection schemes for DNA via gel and capillary electrophoresis are described. Emphasis is placed on the elucidation of DNA fragment size in real time and shortening separation times to approximate real time monitoring. The migration of DNA fragment bands through a slab gel can be monitored by UV absorption at 254 nm and imaged by a charge coupled device (CCD) camera. Background correction and immediate viewing of band positions to interactively change the field program in pulsed-field gel electrophoresis are possible throughout the separation. The use of absorption removes the need for staining or radioisotope labeling thereby simplifying sample preparation and reducing hazardous waste generation. This leaves the DNA in its native state and further analysis can be performed without de-staining. The optimization of several parameters considerably reduces total analysis time. DNA from 2 kb to 850 kb can be separated in 3 hours on a 7 cm gel with interactive control of the pulse time, which is 10 times faster than the use of a constant field program. The separation of ΦX174 RF DNA-HaeIII fragments is studied in a 0.5% methyl cellulose polymer solution as a function of temperature and applied voltage. The migration times decreased with both increasing temperature and increasing field strength, as expected. The relative migration rates of the fragments do not change with temperature but are affected by the applied field. Conditions were established for the separation of the 271/281 bp fragments, even without the addition of intercalating agents. At 700 V/cm and 20°C, all fragments are separated in less than 4 minutes with an average plate number of 2.5 million per meter.
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
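A minimal non-overlapping Allan deviation, the quantity discussed above, can be computed in a few lines. For white measurement noise it falls as 1/sqrt(tau), which is the signal-averaging regime exploited when optimizing calibration schemes. This is a generic sketch, not Picarro's implementation:

```python
import numpy as np

def allan_deviation(y, m_list, dt=1.0):
    """Non-overlapping Allan deviation of series y for bin sizes in m_list."""
    y = np.asarray(y, float)
    taus, adevs = [], []
    for m in m_list:
        k = y.size // m                         # number of tau-long bins
        if k < 2:
            continue
        bins = y[:k * m].reshape(k, m).mean(axis=1)   # tau-averages
        avar = 0.5 * np.mean(np.diff(bins) ** 2)      # Allan variance
        taus.append(m * dt)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# White measurement noise: Allan deviation should fall as 1/sqrt(tau)
rng = np.random.default_rng(42)
noise = rng.normal(0.0, 1.0, size=200_000)
taus, adevs = allan_deviation(noise, [1, 10, 100])
print(dict(zip(taus, adevs.round(3))))
```

On a log-log plot, white noise gives a -1/2 slope while uncorrected linear drift makes the curve turn upward at long tau, which is how the plot separates the two error contributions.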
Li, Jianzhong
2014-04-21
In this paper, a novel secure optimal image watermarking scheme using an encrypted gyrator transform computer generated hologram (CGH) in the contourlet domain is presented. A new encrypted CGH technique, based on the gyrator transform, the random phase mask, the three-step phase-shifting interferometry, and the Fibonacci transform, is first proposed to produce a hologram of a watermark. With the huge key space of the encrypted CGH, the security strength of the watermarking system is enhanced. To achieve better imperceptibility, an improved quantization embedding algorithm is proposed to embed the encrypted CGH into the low frequency sub-band of the contourlet-transformed host image. In order to obtain the highest possible robustness without losing imperceptibility, a particle swarm optimization algorithm is employed to search for the optimal embedding parameter of the watermarking system. In comparison with other methods, the proposed watermarking scheme offers better performance in terms of both imperceptibility and robustness. Experimental results demonstrate that the proposed image watermarking is not only secure and invisible, but also robust against a variety of attacks. PMID:24787882
A genetic algorithm based multi-objective shape optimization scheme for cementless femoral implant.
Chanda, Souptick; Gupta, Sanjay; Kumar Pratihar, Dilip
2015-03-01
The shape and geometry of femoral implant influence implant-induced periprosthetic bone resorption and implant-bone interface stresses, which are potential causes of aseptic loosening in cementless total hip arthroplasty (THA). Development of a shape optimization scheme is necessary to achieve a trade-off between these two conflicting objectives. The objective of this study was to develop a novel multi-objective custom-based shape optimization scheme for cementless femoral implant by integrating finite element (FE) analysis and a multi-objective genetic algorithm (GA). The FE model of a proximal femur was based on a subject-specific CT-scan dataset. Eighteen parameters describing the nature of four key sections of the implant were identified as design variables. Two objective functions, one based on implant-bone interface failure criterion, and the other based on resorbed proximal bone mass fraction (BMF), were formulated. The results predicted by the two objective functions were found to be contradictory; a reduction in the proximal bone resorption was accompanied by a greater chance of interface failure. The resorbed proximal BMF was found to be between 23% and 27% for the trade-off geometries as compared to ∼39% for a generic implant. Moreover, the overall chances of interface failure have been minimized for the optimal designs, compared to the generic implant. The adaptive bone remodeling was also found to be minimal for the optimally designed implants and, further with remodeling, the chances of interface debonding increased only marginally. PMID:25392855
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-01-01
Purpose: Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme of mammographic masses. Methods: An image dataset including 1600 regions of interest (ROIs), in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Results: Among these four methods, SFFS has the highest efficacy; it takes 3%-5% of the computational time of the GA approach and yields the highest performance level, with the area under a receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC
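The SFFS idea, forward selection with a "floating" backward step, can be sketched on synthetic regression data. The adjusted-R² criterion below stands in for the study's ANN-based scoring (its penalty on extra features is what allows the floating removal to fire); the data and feature indices are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                        # 6 candidate features
y = X[:, 0] + X[:, 2] + 0.1 * rng.normal(size=200)   # only features 0 and 2 matter

def j_score(subset):
    """Criterion J(S): adjusted R^2 of a least-squares fit on the chosen columns."""
    A = X[:, list(subset)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    n, p = y.size, len(subset)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)   # penalizes useless features

def sffs(n_features, k):
    selected = []
    while len(selected) < k:
        # forward step: add the single best remaining feature
        cand = [f for f in range(n_features) if f not in selected]
        selected.append(max(cand, key=lambda f: j_score(selected + [f])))
        # floating step: drop a feature again whenever that improves J
        while len(selected) > 2:
            worst = max(selected, key=lambda f: j_score([g for g in selected if g != f]))
            if j_score([g for g in selected if g != worst]) > j_score(selected):
                selected.remove(worst)
            else:
                break
    return selected

print(sorted(sffs(6, 2)))
```

The floating step is what distinguishes SFFS from plain SFS: a feature that looked good early can be discarded once a better-complementing feature arrives.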
In-depth analysis of sampling optimization methods
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Kim, Myoungsoo; Habets, Boris; Buhl, Stefan; Guhlemann, Steffen; Rößiger, Martin; Bellmann, Enrico; Kim, Seop
2016-03-01
High order overlay and alignment models require good coverage of overlay or alignment marks on the wafer, but dense sampling plans are not possible for throughput reasons. Therefore, sampling plan optimization has become a key issue. We analyze the different methods for sampling optimization and discuss the different knobs for fine-tuning the methods to the constraints of high-volume manufacturing. We propose a method to judge sampling plan quality with respect to overlay performance, run-to-run stability, and dispositioning criteria, using a number of use cases from the most advanced lithography processes.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. According to extensive simulations, our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
Optimal control of chaotic systems with input saturation using an input-state linearization scheme
NASA Astrophysics Data System (ADS)
Fuh, Chyun-Chau
2009-08-01
Chaos is undesirable in many engineering applications since it causes a serious degradation of the system performance and restricts the system's operating range. Therefore, the problem of controlling chaos has attracted intense interest in recent years. This paper proposes an approach for optimizing the control of chaotic systems with input saturation using an input-state linearization scheme. In the proposed approach, the optimal system gains are identified using the Nelder-Mead simplex algorithm. This algorithm does not require the derivatives of the cost function (or the performance index) to be optimized, and is therefore particularly applicable to problems with undifferentiable elements or discontinuities. Two numerical simulations are performed to demonstrate the feasibility and effectiveness of the proposed method.
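A compact, simplified Nelder-Mead simplex search of the kind used above for gain tuning (standard reflection/expansion coefficients, inside contraction only). The quadratic cost below is a stand-in for the paper's performance index, not the chaotic system itself:

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=1000):
    """Simplified Nelder-Mead simplex search: reflection (1), expansion (2),
    inside contraction (1/2), shrink (1/2). Derivative-free by construction."""
    x0 = np.asarray(x0, float)
    simplex = [x0] + [x0 + step * e for e in np.eye(x0.size)]
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, second_worst, worst = simplex[0], simplex[-2], simplex[-1]
        if f(worst) - f(best) < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - worst)            # reflection
        if f(best) <= f(xr) < f(second_worst):
            simplex[-1] = xr
        elif f(xr) < f(best):
            xe = centroid + 2.0 * (centroid - worst)  # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:
            xc = centroid + 0.5 * (worst - centroid)  # inside contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                     # shrink toward the best point
                simplex = [best + 0.5 * (p - best) for p in simplex]
    return min(simplex, key=f)

# Toy cost surface standing in for the control performance index
cost = lambda k: (k[0] - 1.0) ** 2 + 10.0 * (k[1] + 2.0) ** 2
k_opt = nelder_mead(cost, [0.0, 0.0])
print(k_opt)
```

Because no gradients are evaluated, the method tolerates the undifferentiable or discontinuous cost terms (e.g. from input saturation) mentioned in the abstract.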
Optimization of remedial pumping schemes for a ground-water site with multiple contaminants
Xiang, Y.; Sykes, J.F.; Thomson, N.R.
1996-01-01
This paper presents an optimization analysis of the remedial pumping design for a contaminated aquifer located in Elmira, Ontario, Canada. The remediation task presented in the paper is to remove two ground-water contaminant species, NDMA (N-nitrosodimethylamine) and chlorobenzene, to such an extent that the specified ground-water quality standards are met. The contaminants, NDMA and chlorobenzene, have different initial plume configurations and retardation characteristics. The required quality standard for NDMA is five orders of magnitude smaller than the initial peak concentration. The objective is to minimize total pumping, and the constraints incorporate ground-water quality requirements on the maximum and the spatially averaged residual concentrations, with contaminant source control being considered. Regarding the combination of simulation and optimization, the results of this study indicate that the performance of an optimization algorithm based on gradient search is controlled by the specified cleanup levels, and that contaminant concentrations can be nonconvex and nonsmooth for some pumping schemes.
An optimal interpolation scheme for the assimilation of spectral wave data
NASA Astrophysics Data System (ADS)
Hasselmann, S.; Lionello, P.; Hasselmann, K.
1997-07-01
An optimal interpolation scheme for assimilating two-dimensional wave spectra is presented which is based on a decomposition of the spectrum into principal wave systems. Each wave system is represented by three characteristic parameters: significant wave height, mean propagation direction, and mean frequency. The spectrum is thereby reduced to a manageable number of parameters. From the correction of the wind-sea system a correction of the local wind is derived. A 2-month test of the system using wave spectra retrieved from ERS 1 synthetic aperture radar wave mode data in the Atlantic yielded consistent corrections of winds and waves. However, the corrected wind data alone, although valuable in identifying wind errors in critical high wind speed regions, are too sparsely distributed in space and time to be used in isolation and need to be combined with other data in an atmospheric data assimilation scheme. This emphasizes the need for the development of combined wind and wave data assimilation schemes for the optimal use of satellite wind and wave data.
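The per-parameter correction step can be sketched as a scalar optimal-interpolation (BLUE) update applied to one wave-system parameter at a time; the wave heights and error variances below are hypothetical, not values from the study.

```python
def oi_update(background, obs, var_b, var_o):
    """Scalar optimal-interpolation (BLUE) update: weight model background and
    observation by their inverse error variances."""
    gain = var_b / (var_b + var_o)          # Kalman-style gain
    analysis = background + gain * (obs - background)
    var_a = (1.0 - gain) * var_b            # reduced analysis-error variance
    return analysis, var_a

# Hypothetical numbers: model significant wave height 3.0 m (error var 0.4),
# SAR-retrieved observation 2.4 m (error var 0.1).
hs, var = oi_update(3.0, 2.4, 0.4, 0.1)
print(round(hs, 2), round(var, 2))
```

The analysis (2.52 m) is pulled most of the way toward the more accurate observation, and the analysis-error variance is smaller than either input variance.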
Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P
2015-07-01
Model Based Control (MBC) is one of the energy-optimal controllers used in vector-controlled Induction Motor (IM) drives for controlling the excitation of the motor in accordance with torque and speed. MBC offers energy conservation, especially at part-load operation, but it creates ripples in torque and speed during load transitions, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low-pass filter, (ii) torque-producing current (iqs) injection in the output of the speed controller, and (iii) a Variable Structure Speed Controller (VSSC). The pre- and post-transition operation of MBC during load changes is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC-link power, in addition to energy saving during load transitions. PMID:25820090
Optimal sampling schedule for chemical exchange saturation transfer.
Tee, Y K; Khrapitchev, A A; Sibson, N R; Payne, S J; Chappell, M A
2013-11-01
The sampling schedule for chemical exchange saturation transfer (CEST) imaging is normally uniformly distributed across the saturation frequency offsets. When this kind of evenly distributed sampling schedule is used to quantify the CEST effect using model-based analysis, some of the collected data are minimally informative to the parameters of interest. For example, changes in labile proton exchange rate and concentration mainly affect the magnetization near the resonance frequency of the labile pool. In this study, an optimal sampling schedule was designed for more accurate quantification of amine proton exchange rate and concentration, and water center frequency shift, based on an algorithm previously applied to magnetization transfer and arterial spin labeling. The resulting optimal sampling schedule samples repeatedly around the resonance frequency of the amine pool and also near the water resonance to maximize the information present within the data for quantitative model-based analysis. Simulation and experimental results on tissue-like phantoms showed that greater accuracy and precision (>30% and >46%, respectively, for some cases) were achieved in the parameters of interest when using the optimal sampling schedule compared with the evenly distributed schedule. Hence, the proposed optimal sampling schedule could replace the evenly distributed schedule in CEST imaging to improve quantification of the CEST effect and parameter estimation. PMID:23315799
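Why clustering samples near the labile-pool resonance is more informative can be sketched with a D-optimality criterion on a toy Lorentzian saturation dip. The two-parameter model, parameter values, and candidate offsets below are illustrative stand-ins, not the paper's algorithm: a schedule is scored by det(JᵀJ), the determinant of the Fisher-information-like matrix built from parameter sensitivities.

```python
def sensitivities(offset, amp=0.3, center=3.5, gamma=1.0):
    """Analytic sensitivities of a Lorentzian dip M(w) = 1 - amp*g^2/(g^2 + (w-c)^2)
    with respect to (amp, center). All parameter values are illustrative."""
    d = offset - center
    denom = gamma ** 2 + d * d
    dM_damp = -gamma ** 2 / denom
    dM_dcenter = -2.0 * amp * gamma ** 2 * d / denom ** 2
    return dM_damp, dM_dcenter

def d_criterion(offsets):
    """det(J^T J) for the 2-parameter model; larger means a more informative schedule."""
    a = b = c = 0.0
    for w in offsets:
        s1, s2 = sensitivities(w)
        a += s1 * s1
        b += s1 * s2
        c += s2 * s2
    return a * c - b * b

uniform = [-5.0, -3.0, -1.0, 1.0, 3.0, 5.0]       # evenly spread offsets (ppm)
clustered = [2.5, 3.0, 3.25, 3.5, 3.75, 4.5]      # concentrated near the labile pool
print(d_criterion(clustered) > d_criterion(uniform))
```

The clustered schedule yields a much larger determinant, mirroring the paper's finding that repeated sampling around the amine resonance improves accuracy and precision of the estimates.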
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-08-01
This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin Province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County so as to supply water to Daan County. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region of the input variables. A surrogate of the numerical groundwater flow model was then developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, indicating high approximation accuracy. A comparison of the surrogate-based simulation optimization model with the conventional simulation optimization model on the same optimization problem shows that the former needs only 5.5 hours whereas the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy, and it thus provides an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
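The surrogate idea can be sketched in one dimension. The quadratic "simulator" below is a hypothetical stand-in for the expensive groundwater flow model, and a simple polynomial trend surrogate replaces the paper's regression-kriging model: a few simulator runs fit the surrogate, which is then optimized cheaply in closed form.

```python
def expensive_drawdown(q):
    """Stand-in for the groundwater flow simulation: drawdown vs. pumping rate.
    This response surface is hypothetical, chosen only for illustration."""
    return 0.002 * q * q - 0.1 * q + 5.0

def quadratic_surrogate(f, x0, x1, x2):
    """Fit p(x) = c0 + c1*x + c2*x^2 through three simulator runs (Newton form)."""
    y0, y1, y2 = f(x0), f(x1), f(x2)
    d1 = (y1 - y0) / (x1 - x0)
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    c2 = d2
    c1 = d1 - d2 * (x0 + x1)
    c0 = y0 - d1 * x0 + d2 * x0 * x1
    return c0, c1, c2

# Three "expensive" runs at designed pumping rates, then a cheap analytic search.
c0, c1, c2 = quadratic_surrogate(expensive_drawdown, 0.0, 20.0, 40.0)
best_q = -c1 / (2 * c2)          # minimizer of the cheap surrogate
print(best_q)
```

Only three simulator calls were needed to locate the minimizing pumping rate, which is the essence of the reported 25-days-to-5.5-hours speedup: the optimizer queries the surrogate, not the simulator.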
Cao, Tong; Chen, Liao; Yu, Yu; Zhang, Xinliang
2014-12-29
We propose and experimentally demonstrate a novel scheme which can simultaneously realize wavelength-preserving and phase-preserving amplitude noise compression of a 40 Gb/s distorted non-return-to-zero differential-phase-shift keying (NRZ-DPSK) signal. In the scheme, two semiconductor optical amplifiers (SOAs) are exploited: the first one (SOA1) is used to generate the inverted signal based on the SOA's transient cross-phase modulation (T-XPM) effect, and the second one (SOA2) to regenerate the distorted NRZ-DPSK signal using the SOA's cross-gain compression (XGC) effect. In the experiment, bit error ratio (BER) measurements show that the power penalties of constructive and destructive demodulation at a BER of 10^{-9} are -1.75 and -1.01 dB, respectively. As the nonlinear effects and the requirements of the two SOAs are completely different, their quantum-well (QW) structures have been separately optimized. A detailed theoretical model combining QW band structure calculation with the SOA's dynamic model is exploited to optimize the SOAs, in which both the interband effect (carrier density variation) and the intraband effect (carrier temperature variation) are taken into account. For SOA1, we choose a tensile-strained QW structure and a large optical confinement factor to enhance the T-XPM effect. For SOA2, a compressively strained QW structure is selected to reduce the impact of excess phase noise induced by amplitude fluctuations. Exploiting the optimized QW SOAs, better amplitude regeneration performance is demonstrated through numerical simulation. The proposed scheme is intrinsically stable compared with interferometer structures and can be integrated on a chip, making it a practical candidate for all-optical amplitude regeneration of high-speed NRZ-DPSK signals. PMID:25607178
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.
2009-01-01
Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference, temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme is assessed by comparing the computational results with other numerical schemes and experimental data.
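For a linear test equation, the standard second-order backward-difference (BDF2) scheme, the baseline that the BDF2OPT variant modifies, can be written out directly; the step size, decay rate, and bootstrap choice below are arbitrary illustrative values, not the flow solver's settings.

```python
import math

def bdf2_decay(lam=1.0, h=0.01, steps=100, y0=1.0):
    """Standard BDF2 applied to y' = -lam*y:
    (3*y[n+1] - 4*y[n] + y[n-1]) / (2h) = -lam*y[n+1].
    For this linear problem the implicit stage has a closed-form solve."""
    y_prev = y0
    y_curr = y0 / (1.0 + lam * h)            # bootstrap with one backward-Euler step
    for _ in range(steps - 1):
        y_next = (4.0 * y_curr - y_prev) / (3.0 + 2.0 * lam * h)
        y_prev, y_curr = y_curr, y_next
    return y_curr

approx = bdf2_decay()                        # integrate to t = 1
exact = math.exp(-1.0)
print(abs(approx - exact))
```

The scheme is implicit and second-order accurate; with h = 0.01 the error at t = 1 is well below 10^-3, consistent with O(h^2) convergence.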
Optimal control, investment and utilization schemes for energy storage under uncertainty
NASA Astrophysics Data System (ADS)
Mirhosseini, Niloufar Sadat
Energy storage has the potential to offer new means for added flexibility in electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources, and energy bill savings for end users. However, uncertainty about system states and volatility in system dynamics can complicate the question of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resource output, storage technology cost, and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model Predictive Control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to interact optimally with the energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here forms the basis for a framework which extends from long-term investments in storage capacity to short-term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment on storage capacity over time to maximize savings during normal and emergency
GENERAL: Optimal Schemes of Teleportation One-Particle State by a Three-Particle General W State
NASA Astrophysics Data System (ADS)
Zha, Xin-Wei; Song, Hai-Yang
2010-05-01
Recently, Xiu et al. [Commun. Theor. Phys. 49 (2008) 905] proposed two schemes for teleporting an arbitrary and unknown N-particle state when N groups of three-particle general W states are utilized as quantum channels, and gave the maximal probability of successful teleportation. Here we find that their operation is not optimal and that the success probability of the teleportation is not maximal. Moreover, we give the optimal scheme of operations and obtain the maximal success probability for teleportation.
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of
NASA Astrophysics Data System (ADS)
Zhang, S.; Yin, J.; Zhang, H. W.; Chen, B. S.
2016-03-01
Phoxonic crystal (PXC) is a promising artificial periodic material for optomechanical systems and acousto-optic devices. The multi-objective topology optimization of dual phononic and photonic maximal relative bandgaps in a kind of two-dimensional (2D) PXC is investigated to find regular patterns among the topological configurations. To improve efficiency, a multi-level substructure scheme is proposed to analyze the phononic and photonic band structures, which is stable, efficient and less memory-consuming. This efficient and reliable numerical algorithm provides a powerful tool for optimizing and designing crystal devices. The results show that with the reduction of the relative phononic bandgap (PTBG), the central dielectric scatterer becomes smaller and the dielectric veins of the cross-connections between different dielectric scatterers gradually turn horizontal and vertical. These characteristics can be of great value to the design and synthesis of new materials with different topological configurations for PXC applications.
Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter
2016-01-01
Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and export to downstream ecosystems is not well understood. In particular, a better knowledge of processes governing their environmental behavior is needed. Although a lot of data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented, which is based on the Lagrangian sampling scheme, but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutants' reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate, that removal processes are highly variable in time and space and this has to be considered for future studies. The high precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties. PMID:26283620
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
Qarri, Flora; Lazo, Pranvera; Bekteshi, Lirim; Stafilov, Trajce; Frontasyeva, Marina; Harmens, Harry
2015-02-01
The atmospheric deposition of heavy metals in Albania was investigated by using a carpet-forming moss species (Hypnum cupressiforme) as bioindicator. Sampling was done in the dry seasons of autumn 2010 and summer 2011. Two different sampling schemes are discussed in this paper: a random sampling scheme with 62 sampling sites distributed over the whole territory of Albania, and a systematic sampling scheme with 44 sampling sites over the same territory. Unwashed, dried samples were totally digested using microwave digestion, and the concentrations of metal elements were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and AAS (Cd and As). Twelve elements, comprising conservative and trace elements (Al, Fe, As, Cd, Cr, Cu, Ni, Mn, Pb, V, Zn, and Li), were measured in the moss samples; Li, a typical lithogenic element, is included. The results reflect local emission points. The median concentrations and statistical parameters of the elements were discussed by comparing the two sampling schemes, and the results of both schemes are compared with those of other European countries. Different contamination levels, evaluated by the respective contamination factor (CF) of each element, are obtained for the two sampling schemes, while the local emitters identified, such as the iron-chromium metallurgy and cement industry, oil refinery, mining industry, and transport, are the same for both. In addition, natural sources, i.e., the accumulation of these metals in mosses from metal-enriched soil associated with wind-blown soil, were identified as another local emitting factor. PMID:25178859
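The contamination factor used above is a simple ratio of measured to background concentration. The background and measured values below are placeholders, and the four-level scale is a commonly used CF classification, not necessarily the exact one applied in this study:

```python
# Contamination factor: CF = measured concentration / background concentration.
# All concentrations below (mg/kg) are hypothetical placeholders.
background = {"Cr": 2.0, "Ni": 3.0, "Pb": 5.0}
measured   = {"Cr": 9.0, "Ni": 4.5, "Pb": 4.0}

def contamination_factors(measured, background):
    return {el: measured[el] / background[el] for el in measured}

def classify(cf):
    """A common four-level CF scale: <1 low, 1-3 moderate, 3-6 considerable, >6 very high."""
    if cf < 1:
        return "low"
    if cf < 3:
        return "moderate"
    if cf < 6:
        return "considerable"
    return "very high"

cfs = contamination_factors(measured, background)
print({el: classify(cf) for el, cf in sorted(cfs.items())})
```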
Urine sampling and collection system optimization and testing
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Geating, J. A.; Koesterer, M. G.
1975-01-01
A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available; a few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geosciences, mainly because of its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when the model of spatial variation is unknown. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance decreases linearly with the number of iterations, and the acceptance probability decreases exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
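A minimal version of spatial simulated annealing with the MSSD criterion can be sketched in pure Python (spsann itself is an R package; the grid size, cooling rate, and iteration count below are arbitrary): perturb one sampling location at a time, always accept improvements, accept worse patterns with probability exp(-dE/T), and cool T geometrically.

```python
import math
import random

def mssd(sample, grid):
    """Mean squared shortest distance from every grid node to its nearest sample point."""
    return sum(min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in sample)
               for gx, gy in grid) / len(grid)

def optimize_mssd(grid, n_pts=5, iters=2000, t0=1.0, seed=42):
    """Minimal spatial simulated annealing for spatial-coverage sampling."""
    rng = random.Random(seed)
    sample = [rng.choice(grid) for _ in range(n_pts)]
    energy = mssd(sample, grid)
    t = t0
    for _ in range(iters):
        cand = list(sample)
        cand[rng.randrange(n_pts)] = rng.choice(grid)   # perturb one location
        e = mssd(cand, grid)
        if e < energy or rng.random() < math.exp((energy - e) / t):
            sample, energy = cand, e                    # Metropolis acceptance
        t *= 0.999                                      # geometric cooling law
    return sample, energy

grid = [(x, y) for x in range(10) for y in range(10)]
start = mssd([grid[0]] * 5, grid)        # degenerate scheme: all points stacked
_, final = optimize_mssd(grid)
print(final < start)
```

Starting from a deliberately bad (stacked) pattern, the annealer spreads the five points over the field and drives the MSSD energy down, which is the spatial-coverage behavior the package optimizes for.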
Optimization of protein samples for NMR using thermal shift assays.
Kozak, Sandra; Lercher, Lukas; Karanth, Megha N; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane
2016-04-01
Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor(®) provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies. PMID:26984476
The imaging method and sampling scheme of rotation scanning interferometric radiometer
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wu, Ji; Sun, Weiying
2008-11-01
Rotation scanning interferometric radiometry is a newly proposed time-shared imaging concept aimed at further decreasing hardware complexity and increasing spatial resolution. The main problem to be solved is image reconstruction from the rotation-sampled visibilities. In this study we develop a Pseudo-Polar FFT algorithm suitable for the polar sampling-grid data of a rotation scanning system. It takes the pseudo-polar grid, instead of the traditional Cartesian rectangular grid, as the conversion destination before the Fourier inversion. The effective 1D interpolations and 1D-FFT routines involved in this imaging algorithm guarantee high accuracy and computational efficiency. Moreover, we analyzed the associated rotation sampling scheme, such as the antenna array arrangement and the rotation sampling interval, which have great effects on the reconstruction results. Numerical simulations are presented to validate the superiority of this new imaging algorithm. The simulation results also indicate that a non-redundant planar antenna array with good linearity is the preferred array arrangement for a rotation scanning system.
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases, during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer; these LAI estimates can then be used as a proxy for biomass. The biomass estimates so calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-m transects, with clip harvest plots spaced every 50 m and LAI sub-transects every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses
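Converting LAI readings into biomass estimates via regression can be sketched with an ordinary least-squares fit; the paired LAI and clip-harvest values below are invented for illustration and are not NEON data.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit of y = a + b*x (closed-form slope and intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical co-located measurements: LAI vs. clip-harvest biomass (g/m^2).
lai = [0.2, 0.5, 0.8, 1.1, 1.4, 1.7]
biomass = [35.0, 80.0, 120.0, 170.0, 210.0, 260.0]

a, b = linear_fit(lai, biomass)
predict = lambda l: a + b * l            # biomass proxy from an LAI reading
print(round(predict(1.0)))
```

Once calibrated against a modest number of clip harvests, such a regression lets the cheaper LAI measurements stand in for destructive sampling across the remaining plots.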
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for successful construction of urban rivers; therefore, studies on optimizing schemes for the natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable, and accurate evaluation of schemes for the natural ecology planning of urban rivers can be made with the ANP method, which can provide a reference for the sustainable development and construction of urban rivers. The ANP method is also suitable for optimizing schemes for urban green space planning and design. PMID:26387349
Bellanti, Francesco; Di Iorio, Vincenzo Luca; Danhof, Meindert; Della Pasqua, Oscar
2016-09-01
Despite wide clinical experience with deferiprone, the optimum dosage in children younger than 6 years remains to be established. This analysis aimed to optimize the design of a prospective clinical study for the evaluation of deferiprone pharmacokinetics in children. A 1-compartment model with first-order oral absorption was used for the purposes of the analysis. Different sampling schemes were evaluated under the assumption of a constrained population size. A sampling scheme with 5 samples per subject was found to be sufficient to ensure accurate characterization of the pharmacokinetics of deferiprone. Whereas the accuracy of the parameter estimates was high, precision was slightly reduced because of the small sample size (CV% >30% for Vd/F and KA). Mean AUC ± SD was found to be 33.4 ± 19.2 and 35.6 ± 20.2 mg · h/mL, and mean Cmax ± SD was found to be 10.2 ± 6.1 and 10.9 ± 6.7 mg/L based on sparse and frequent sampling, respectively. The results showed that typical frequent sampling schemes and sample sizes do not guarantee accurate model and parameter identifiability. Expectation of the determinant (ED) optimality and simulation-based optimization concepts can be used to support pharmacokinetic bridging studies. Of importance is the accurate estimation of the magnitude of the covariate effects, as they partly determine the dose recommendation for the population of interest. PMID:26785826
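The sparse-versus-frequent comparison of exposure metrics can be sketched with a 1-compartment, first-order-absorption model; all parameter values and sampling times below are hypothetical, not the study's estimates. A 5-sample scheme and a dense reference schedule both approximate the analytic AUC via the trapezoidal rule.

```python
import math

def conc(t, dose=25.0, vd_f=15.0, ka=1.6, ke=0.35):
    """1-compartment model with first-order oral absorption (hypothetical parameters):
    C(t) = D*ka / (Vd/F * (ka - ke)) * (exp(-ke*t) - exp(-ka*t))."""
    return dose * ka / (vd_f * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def auc_trapezoid(times):
    """Trapezoidal AUC over the given sampling times."""
    return sum((t1 - t0) * (conc(t0) + conc(t1)) / 2.0
               for t0, t1 in zip(times, times[1:]))

sparse = [0.0, 0.5, 1.5, 4.0, 8.0, 12.0]     # time zero plus five post-dose samples (h)
dense = [i * 0.1 for i in range(121)]        # dense reference schedule over 0-12 h
auc_exact = 25.0 / (15.0 * 0.35)             # analytic AUC(0-inf) = D / (Vd/F * ke)
print(auc_trapezoid(sparse), auc_trapezoid(dense), auc_exact)
```

With well-placed points, the 5-sample trapezoidal AUC lands within a few percent of both the dense-schedule estimate and the analytic value, illustrating why a carefully designed sparse scheme can suffice for characterizing exposure.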
Optimized Sample Handling Strategy for Metabolic Profiling of Human Feces.
Gratton, Jasmine; Phetcharaburanin, Jutarop; Mullish, Benjamin H; Williams, Horace R T; Thursz, Mark; Nicholson, Jeremy K; Holmes, Elaine; Marchesi, Julian R; Li, Jia V
2016-05-01
Fecal metabolites are being increasingly studied to unravel the host-gut microbial metabolic interactions. However, there are currently no guidelines for fecal sample collection and storage based on a systematic evaluation of the effect of time, storage temperature, storage duration, and sampling strategy. Here we derive an optimized protocol for fecal sample handling with the aim of maximizing metabolic stability and minimizing sample degradation. Samples obtained from five healthy individuals were analyzed to assess topographical homogeneity of feces and to evaluate storage duration-, temperature-, and freeze-thaw cycle-induced metabolic changes in crude stool and fecal water using a (1)H NMR spectroscopy-based metabolic profiling approach. Interindividual variation was much greater than that attributable to storage conditions. Individual stool samples were found to be heterogeneous and spot sampling resulted in a high degree of metabolic variation. Crude fecal samples were remarkably unstable over time and exhibited distinct metabolic profiles at different storage temperatures. Microbial fermentation was the dominant driver in time-related changes observed in fecal samples stored at room temperature and this fermentative process was reduced when stored at 4 °C. Crude fecal samples frozen at -20 °C manifested elevated amino acids and nicotinate and depleted short chain fatty acids compared to crude fecal control samples. The relative concentrations of branched-chain and aromatic amino acids significantly increased in the freeze-thawed crude fecal samples, suggesting a release of microbial intracellular contents. The metabolic profiles of fecal water samples were more stable compared to crude samples. Our recommendation is that intact fecal samples should be collected, kept at 4 °C or on ice during transportation, and extracted ideally within 1 h of collection, or a maximum of 24 h. Fecal water samples should be extracted from a representative amount (∼15 g
Michaelis-Menten reaction scheme as a unified approach towards the optimal restart problem
NASA Astrophysics Data System (ADS)
Rotbart, Tal; Reuveni, Shlomi; Urbakh, Michael
2015-12-01
We study the effect of restart, and retry, on the mean completion time of a generic process. The need to do so arises in various branches of the sciences and we show that it can naturally be addressed by taking advantage of the classical reaction scheme of Michaelis and Menten. Stopping a process in its midst—only to start it all over again—may prolong, leave unchanged, or even shorten the time taken for its completion. Here we are interested in the optimal restart problem, i.e., in finding a restart rate which brings the mean completion time of a process to a minimum. We derive the governing equation for this problem and show that it is exactly solvable in cases of particular interest. We then continue to discover regimes at which solutions to the problem take on universal, detail-independent forms which further give rise to optimal scaling laws. The formalism we develop, and the results obtained, can be utilized when optimizing stochastic search processes and randomized computer algorithms. An immediate connection with kinetic proofreading is also noted and discussed.
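The restart effect described above can be illustrated with a minimal Monte Carlo sketch (not the authors' Michaelis-Menten formalism): completion times are drawn from a heavy-tailed distribution, restarts arrive at Poisson rate r, and the mean completion time is scanned over r. The log-normal completion-time distribution is an arbitrary illustrative choice for which restart should shorten the mean:

```python
import random

def mean_completion_with_restart(draw_time, r, n=20000, rng=None):
    """Monte Carlo estimate of the mean completion time of a process whose
    bare completion time is sampled by draw_time(rng), under Poissonian
    restart with rate r (r == 0 means never restart)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        t = 0.0
        while True:
            T = draw_time(rng)
            if r == 0.0:
                t += T
                break
            R = rng.expovariate(r)
            if T <= R:          # process finishes before the next restart
                t += T
                break
            t += R              # restarted: pay the waiting time and retry
        total += t
    return total / n

# High-variance (log-normal) completion times: restart shortens the mean
draw = lambda rng: rng.lognormvariate(0.0, 2.0)
no_restart = mean_completion_with_restart(draw, 0.0)
means = {r: mean_completion_with_restart(draw, r) for r in (0.2, 0.5, 1.0, 2.0, 5.0)}
best = min(means, key=means.get)
print(f"no restart: {no_restart:.2f}; best rate {best}: {means[best]:.2f}")
```

Scanning r and taking the minimum is exactly the "optimal restart rate" question posed in the abstract, solved here by brute force rather than analytically.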
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MIC hardware requires the use of some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
SamACO: variable sampling ant colony optimization algorithm for continuous optimization.
Hu, Xiao-Min; Zhang, Jun; Chung, Henry Shu-Hung; Li, Yun; Liu, Ou
2010-12-01
An ant colony optimization (ACO) algorithm offers algorithmic techniques for optimization by simulating the foraging behavior of a group of ants to perform incremental solution constructions and to realize a pheromone laying-and-following mechanism. Although ACO was first designed for solving discrete (combinatorial) optimization problems, the ACO procedure is also applicable to continuous optimization. This paper presents a new way of extending ACO to solving continuous optimization problems by focusing on continuous variable sampling as a key to transforming ACO from discrete optimization to continuous optimization. The proposed SamACO algorithm consists of three major steps, i.e., the generation of candidate variable values for selection, the ants' solution construction, and the pheromone update process. The distinct characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient incremental solution construction method based on the sampled values. The performance of SamACO is tested using continuous numerical functions with unimodal and multimodal features. Compared with some state-of-the-art algorithms, including traditional ant-based algorithms and representative computational intelligence algorithms for continuous optimization, the performance of SamACO is competitive and promising. PMID:20371409
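The variable-sampling idea can be caricatured in a few lines. The sketch below is a generic archive-based continuous sampler with a shrinking Gaussian spread standing in for pheromone update; it is not the three-step SamACO algorithm itself, and the sphere function is a stand-in test problem:

```python
import random

def sphere(x):
    """Classic unimodal benchmark: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def continuous_aco(f, dim, n_ants=20, archive=10, iters=200, seed=1):
    """Toy continuous ant-style optimizer: keep an archive of good solutions
    and let each ant sample new variable values around archive members, with
    a linearly shrinking Gaussian spread mimicking pheromone intensification."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(archive)]
    pop.sort(key=f)
    for it in range(iters):
        sigma = 5.0 * (1.0 - it / iters) + 1e-3      # shrink the sampling spread
        for _ in range(n_ants):
            guide = pop[rng.randrange(len(pop) // 2 + 1)]  # bias toward better half
            cand = [g + rng.gauss(0.0, sigma) for g in guide]
            if f(cand) < f(pop[-1]):                 # replace the worst member
                pop[-1] = cand
                pop.sort(key=f)
    return pop[0], f(pop[0])

best, val = continuous_aco(sphere, dim=5)
print(round(val, 4))
```

The archive plays the role of the candidate variable values, and replacing the worst member on improvement is a crude stand-in for the pheromone update step.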
Scheme of optical fiber temperature sensor employing deep-grooved process optimization
NASA Astrophysics Data System (ADS)
Liu, Yu; Liu, Cong; Xiang, Gaolin; Wang, Ruijie; Wang, Yibing; Xiang, Lei; Wu, Linzhi; Liu, Song
2015-03-01
To optimize the optical fiber temperature sensor employing the deep-grooved process, a novel scheme was proposed. Fabricated by a promising CO2 laser irradiation system based on a high-precision two-dimensional scanning motorized stage, the novel deep-grooved optical fiber temperature sensor achieved a transmission-attenuation temperature sensitivity of -0.107 dB/°C, 18.086 times higher than that of an optical fiber sensor with normal groove depth, with other parameters unchanged. Principle analysis and experimental testing showed that the designed temperature sensor measurement unit combined high sensitivity in transmission attenuation with insensitivity to wavelength, which offers possible applications in engineering.
Layered HEVC/H.265 video transmission scheme based on hierarchical QAM optimization
NASA Astrophysics Data System (ADS)
Feng, Weidong; Zhou, Cheng; Xiong, Chengyi; Chen, Shaobo; Wang, Junxi
2015-12-01
High Efficiency Video Coding (HEVC) is the state-of-the-art video compression standard; it fully supports scalability features and is able to generate layered video streams of unequal importance. Unfortunately, when the base layer (BL), which is of greater importance to the stream, is lost during transmission, the enhancement layer (EL) that depends on it must be discarded by the receiver. Obviously, using the same transmission strategy for the BL and the EL is unreasonable. This paper proposes an unequal error protection (UEP) scheme based on hierarchical quadrature amplitude modulation (HQAM). The high-priority BL data are mapped into the most reliable HQAM mode, while the low-priority EL data are mapped into an HQAM mode with higher transmission efficiency. Simulations on a scalable HEVC codec show that the proposed optimized video transmission scheme is more attractive than the traditional equal error protection (EEP) scheme because it effectively balances transmission efficiency against reconstructed video quality.
163 years of refinement: the British Geological Survey sample registration scheme
NASA Astrophysics Data System (ADS)
Howe, M. P.
2011-12-01
The British Geological Survey manages the largest UK geoscience samples collection, including: - 15,000 onshore boreholes, including over 250 km of drillcore - Vibrocores, gravity cores and grab samples from over 32,000 UK marine sample stations, plus 640 boreholes - Over 3 million UK fossils, including a "type and stratigraphic" reference collection of 250,000 fossils, 30,000 of which are "type, figured or cited" - A comprehensive microfossil collection, including many borehole samples - 290 km of drillcore and 4.5 million cuttings samples from over 8,000 UK continental shelf hydrocarbon wells - Over one million mineralogical and petrological samples, including 200,000 thin sections. The current registration scheme was introduced in 1848 and is similar to that used by Charles Darwin on the Beagle. Every Survey collector or geologist has been issued with a unique prefix code of one or more letters, and these were handwritten on preprinted numbers, arranged in books of 1-5,000 and 5,001-10,000. Similar labels are now computer printed. Other prefix codes are used for corporate collections, such as borehole samples, thin sections, microfossils, macrofossil sections, museum reference fossils, display-quality rock samples and fossil casts. Such numbers convey significant immediate information to the curator, without the need to consult detailed registers. The registration numbers have been recorded in a series of over 1,000 registers, complete with metadata including sample ID, locality, horizon, collector and date. Citations are added as appropriate. Parent-child relationships are noted when re-registering subsamples. For example, a borehole sample BDA1001 could have been subsampled for a petrological thin section and off-cut (E14159), a fossil thin section (PF365), micropalynological slides (MPA273), one of which included a new holotype (MPK111), and a figured macrofossil (GSE1314). All main corporate collections now have publicly available online databases, such as Palaeo
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to
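The bias-variance trade-off discussed here can be reproduced on synthetic data: fitting polynomial "overlay models" of increasing order to the same noisy wafer signature shows low-order models carrying bias and high-order models carrying wafer-to-wafer variance. The signature, noise level, and model orders below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 15)           # sample positions across a wafer
true = 0.8 * x - 0.5 * x ** 3            # invented underlying overlay signature

def wafer_models(order, n_wafers=200, noise=0.15):
    """Fit a polynomial 'overlay model' per simulated wafer (same signature
    plus fresh metrology noise each time) and return the mean squared bias
    and the mean variance of the fitted curves across wafers."""
    fits = []
    for _ in range(n_wafers):
        y = true + rng.normal(0.0, noise, x.size)
        fits.append(np.polyval(np.polyfit(x, y, order), x))
    fits = np.array(fits)
    bias2 = np.mean((fits.mean(axis=0) - true) ** 2)   # systematic model error
    var = np.mean(fits.var(axis=0))                    # wafer-to-wafer spread
    return bias2, var

for order in (1, 3, 7):
    b2, v = wafer_models(order)
    print(f"order {order}: bias^2={b2:.4f} variance={v:.4f}")
```

The order-1 model misses the cubic term (bias) while the order-7 model chases the noise (variance), which is the trade-off the abstract describes for high-order overlay models.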
Novel multi-sample scheme for inferring phylogenetic markers from whole genome tumor profiles
Subramanian, Ayshwarya; Shackney, Stanley; Schwartz, Russell
2013-01-01
Computational cancer phylogenetics seeks to enumerate the temporal sequences of aberrations in tumor evolution, thereby delineating the evolution of possible tumor progression pathways, molecular subtypes and mechanisms of action. We previously developed a pipeline for constructing phylogenies describing evolution between major recurring cell types computationally inferred from whole-genome tumor profiles. The accuracy and detail of the phylogenies, however, depends on the identification of accurate, high-resolution molecular markers of progression, i.e., reproducible regions of aberration that robustly differentiate different subtypes and stages of progression. Here we present a novel hidden Markov model (HMM) scheme for the problem of inferring such phylogenetically significant markers through joint segmentation and calling of multi-sample tumor data. Our method classifies sets of genome-wide DNA copy number measurements into a partitioning of samples into normal (diploid) or amplified at each probe. It differs from other similar HMM methods in its design specifically for the needs of tumor phylogenetics, by seeking to identify robust markers of progression conserved across a set of copy number profiles. We show an analysis of our method in comparison to other methods on both synthetic and real tumor data, which confirms its effectiveness for tumor phylogeny inference and suggests avenues for future advances. PMID:24407301
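A single-sample caricature of the HMM idea follows: a two-state (normal/amplified) chain with Gaussian emissions decoded by Viterbi. This is not the authors' multi-sample joint segmentation method; the state means, noise level, and sticky transition probability are illustrative assumptions:

```python
import math

def viterbi_two_state(obs, mu=(0.0, 1.0), sigma=0.3, p_stay=0.95):
    """Viterbi decoding of a two-state (0 = normal, 1 = amplified) HMM with
    Gaussian emissions; a toy, single-sample stand-in for joint copy-number
    segmentation and calling."""
    def logemit(x, s):
        d = (x - mu[s]) / sigma
        return -0.5 * d * d - math.log(sigma * math.sqrt(2.0 * math.pi))

    log_stay, log_switch = math.log(p_stay), math.log(1.0 - p_stay)
    score = [logemit(obs[0], s) - math.log(2.0) for s in (0, 1)]  # uniform prior
    back = []
    for x in obs[1:]:
        new, ptr = [], []
        for s in (0, 1):
            stay = score[s] + log_stay
            switch = score[1 - s] + log_switch
            ptr.append(s if stay >= switch else 1 - s)
            new.append(max(stay, switch) + logemit(x, s))
        score = new
        back.append(ptr)
    state = 0 if score[0] >= score[1] else 1
    path = [state]
    for ptr in reversed(back):        # backtrack the most likely state sequence
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Noisy log-ratio track: normal, amplified segment, normal
obs = [0.1, -0.05, 0.0, 0.9, 1.1, 0.95, 1.05, 0.02, -0.1]
print(viterbi_two_state(obs))
```

The sticky `p_stay` is what makes the decoder favor long conserved segments over probe-by-probe calls, which is the property a phylogeny-oriented segmentation needs.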
Singal, Ashok K.
2014-07-01
We examine the consistency of the unified scheme of Fanaroff-Riley type II radio galaxies and quasars with their observed number and size distributions in the 3CRR sample. We separate the low-excitation galaxies from the high-excitation ones, as the former might not harbor a quasar within and thus may not be partaking in the unified scheme models. In the updated 3CRR sample, at low redshifts (z < 0.5), the relative number and luminosity distributions of high-excitation galaxies and quasars roughly match the expectations from the orientation-based unified scheme model. However, a foreshortening in the observed sizes of quasars, which is a must in the orientation-based model, is not seen with respect to radio galaxies even when the low-excitation galaxies are excluded. This dashes the hope that the unified scheme might still work if one includes only the high-excitation galaxies.
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed, microprocessor-based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics, and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process, where the control effectors know only the past information update intervals and the Markov transition mechanism, is almost identical to that obtained with a known and uniform information update interval.
NASA Astrophysics Data System (ADS)
Qian, Y.; Yang, B.; Lin, G.; Leung, L.; Zhang, Y.
2011-12-01
Uncertainty Quantification (UQ) of a model's tunable parameters is often treated as an optimization procedure that minimizes the difference between model results and observations at different time and spatial scales. In the current tuning process for global climate models, however, we might be generating a set of tunable parameters that approximates the observed climate but via an unrealistic balance of physical processes and/or compensating errors over different regions of the globe. In this study, we run the Weather Research and Forecasting (WRF) regional model constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data are available for calibration of the input parameters and validation of the model results. Our goal is to reduce the uncertainty ranges and identify the optimal values of five key input parameters in a new Kain-Fritsch (KF) convective parameterization scheme used in the WRF model. A stochastic sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), is employed to efficiently select parameter values based on the skill score, so that the algorithm progressively moves toward regions of the parameter space that minimize model errors. The results based on WRF simulations with 25-km grid spacing over the SGP show that the model bias for precipitation can be significantly reduced by using the five optimal parameters identified by the MVFSA algorithm. The model performance is sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreases as the ratio of downdraft to updraft flux increases. A larger CAPE consumption time results in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generates a positive impact on the other output variables, such as temperature and wind. The simulated precipitation over the same region
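The calibration loop can be sketched with plain simulated annealing and a fast cooling law; this is a generic sketch loosely in the spirit of (M)VFSA, not the MVFSA algorithm itself. The "skill error" below is a toy quadratic surrogate (a real study would run the WRF model per evaluation), and the parameter bounds and target values are invented:

```python
import math
import random

def skill_error(params, target):
    """Toy surrogate for a model-vs-observation skill score: squared mismatch
    of the tunable parameters (a real UQ study would run the WRF model)."""
    return sum((p - t) ** 2 for p, t in zip(params, target))

def anneal(f, bounds, iters=3000, t0=1.0, seed=0):
    """Generic simulated annealing over bounded parameters with a fast (1/k)
    cooling law: accept worse candidates with Boltzmann probability."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = f(x)
    best, fbest = list(x), fx
    for k in range(1, iters + 1):
        temp = t0 / k                            # fast cooling schedule
        cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# Five bounded "convective scheme" parameters (purely illustrative)
bounds = [(0.0, 1.0)] * 5
target = [0.3, 0.7, 0.5, 0.2, 0.9]
best, err = anneal(lambda p: skill_error(p, target), bounds)
print([round(b, 2) for b in best], round(err, 4))
```

As the temperature drops, the walk stops accepting uphill moves and concentrates in the low-error region of parameter space, mirroring how MVFSA progressively narrows the uncertainty ranges.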
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.
1987-01-01
A concept for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes was developed. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include derivation of necessary conditions for optimality for the general problem formulation and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories arbitrarily close to the unconstrained gain solution for each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.
An optimal scaling scheme for DCO-OFDM based visible light communications
NASA Astrophysics Data System (ADS)
Jiang, Rui; Wang, Qi; Wang, Fang; Dai, Linglong; Wang, Zhaocheng
2015-12-01
DC-biased optical orthogonal frequency-division multiplexing (DCO-OFDM) is widely used in visible light communication (VLC) systems to provide high data rate transmission. As intensity modulation with direct detection (IM/DD) is employed to modulate the OFDM signal, scaling up the amplitude of the signal can increase the effective transmitted electrical power, but more of the signal is likely to be clipped due to the limited dynamic range of LEDs, resulting in severe clipping distortion. Thus, it is crucial to scale the signal to find a tradeoff between the effective electrical power and the clipping distortion. In this paper, an optimal scaling scheme is proposed to maximize the received signal-to-noise-plus-distortion ratio (SNDR) under the constraint of the radiated optical power in a practical scenario where the DC bias is fixed for a desired dimming level. Simulation results show that the system with the optimal scaling factor outperforms that with a fixed scaling factor under different equivalent noise powers in terms of bit error ratio (BER) performance.
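The scaling trade-off can be demonstrated numerically: scale a unit-power Gaussian signal, add a fixed DC bias, clip to the LED range, and scan the scaling factor for the best SNDR. The bias, dynamic range, and noise power below are arbitrary illustrative values, not the paper's operating point, and the exhaustive scan stands in for the paper's analytical optimization:

```python
import random

def sndr_for_scale(alpha, dc=0.5, vmax=1.0, noise_var=1e-3, n=20000, seed=3):
    """Monte Carlo SNDR of a unit-power Gaussian (OFDM-like) signal scaled by
    alpha, shifted by a fixed DC bias, and clipped to the LED range [0, vmax]."""
    rng = random.Random(seed)
    sig_pow = dist_pow = 0.0
    for _ in range(n):
        s = rng.gauss(0.0, 1.0)                   # one OFDM time-domain sample
        tx = min(vmax, max(0.0, alpha * s + dc))  # LED dynamic-range clipping
        d = tx - (alpha * s + dc)                 # clipping distortion
        sig_pow += (alpha * s) ** 2
        dist_pow += d * d
    return (sig_pow / n) / (dist_pow / n + noise_var)

scales = [0.05, 0.1, 0.15, 0.2, 0.3, 0.5]
sndrs = {a: sndr_for_scale(a) for a in scales}
best = max(sndrs, key=sndrs.get)
print("best scaling factor:", best)
```

Small scaling factors waste electrical power against the receiver noise floor while large ones drown in clipping distortion, so the SNDR peaks at an interior scaling factor, which is the trade-off the scheme optimizes.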
Accelerated Simplified Swarm Optimization with Exploitation Search Scheme for Data Clustering
Yeh, Wei-Chang; Lai, Chyh-Ming
2015-01-01
Data clustering is commonly employed in many disciplines. The aim of clustering is to partition a set of data into clusters, in which objects within the same cluster are similar and dissimilar to objects that belong to different clusters. Over the past decade, evolutionary algorithms have been commonly used to solve clustering problems. This study presents a novel algorithm based on simplified swarm optimization, an emerging population-based stochastic optimization approach with the advantages of simplicity, efficiency, and flexibility. The approach combines variable vibrating search (VVS) and rapid centralized strategy (RCS) in dealing with the clustering problem. VVS is an exploitation search scheme that refines the quality of solutions by searching the extreme points near the global best position. RCS is developed to accelerate the convergence rate of the algorithm by using the arithmetic average. To empirically evaluate the performance of the proposed algorithm, experiments are conducted on 12 benchmark datasets, and the corresponding results are compared with recent works. Results of statistical analysis indicate that the proposed algorithm is competitive in terms of the quality of solutions. PMID:26348483
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
Sorella, S.; Devaux, N.; Dagrada, M.; Mazzola, G.; Casula, M.
2015-12-28
We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.
NASA Astrophysics Data System (ADS)
Kojima, Sadaoki; Zhe, Zhang; Sawada, Hiroshi; Firex Team
2015-11-01
In Fast Ignition Inertial Confinement Fusion, optimization of relativistic electron beam (REB) accelerated by a high-intensity laser pulse is critical for the efficient core heating. The high-energy tail of the electron spectrum is generated by the laser interaction with a long-scale-length plasma and does not efficiently couple to a fuel core. In the cone-in-shell scheme, long-scale-length plasmas can be produced inside the cone by the pedestal of a high-intensity laser, radiation heating of the inner cone wall and shock wave from an implosion core. We have investigated a relation between the presence of pre-plasma inside the cone and the REB energy distribution using the Gekko XII and 2kJ-PW LFEX laser at the Institute of Laser Engineering. The condition of an inner cone wall was monitored using VISAR and SOP systems on a cone-in-shell implosion. The generation of the REB was measured with an electron energy analyzer and a hard x-ray spectrometer on a separate shot by injecting the LFEX laser in an imploded target. The result shows the strong correlation between the preheat and high-energy tail generation. Optimization of cone-wall thickness for the fast-ignition will be discussed. This work is supported by NIFS, MEXT/JSPS KAKENHI Grant and JSPS Fellows (Grant Number 14J06592).
Classifier-Guided Sampling for Complex Energy System Optimization
Backlund, Peter B.; Eddy, John P.
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
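The idea of filtering candidates with a classifier before expensive evaluation can be sketched as follows. A tiny Bernoulli naive Bayes stands in for the Bayesian network classifier of CGS, and the 8-bit "design" with a Hamming-distance objective is an invented toy:

```python
import math
import random

def expensive_objective(x):
    """Stand-in for a costly discrete design evaluation: Hamming distance
    to a hidden 8-bit target design (lower is better)."""
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    return sum(xi != ti for xi, ti in zip(x, target))

def nb_fit(good, bad, dim, a=1.0):
    """Bernoulli naive Bayes with Laplace smoothing: per-bit 'on' rates for
    the good and bad design classes."""
    def rates(rows):
        return [(sum(r[i] for r in rows) + a) / (len(rows) + 2 * a)
                for i in range(dim)]
    return rates(good), rates(bad)

def nb_score(x, pg, pb):
    """Log-odds that design x belongs to the good class."""
    return sum(math.log(g if xi else 1.0 - g) - math.log(b if xi else 1.0 - b)
               for xi, g, b in zip(x, pg, pb))

rng = random.Random(7)
dim = 8
evaluated = []
for _ in range(40):                              # initial random evaluations
    x = [rng.randint(0, 1) for _ in range(dim)]
    evaluated.append((x, expensive_objective(x)))

for gen in range(5):
    evaluated.sort(key=lambda e: e[1])
    half = len(evaluated) // 2
    pg, pb = nb_fit([e[0] for e in evaluated[:half]],
                    [e[0] for e in evaluated[half:]], dim)
    candidates = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(100)]
    candidates.sort(key=lambda c: -nb_score(c, pg, pb))
    for c in candidates[:10]:                    # evaluate only the promising 10%
        evaluated.append((c, expensive_objective(c)))

best = min(evaluated, key=lambda e: e[1])
print("best design:", best[0], "mismatches:", best[1])
```

Only 10 of every 100 generated candidates reach the expensive objective, which is the evaluation-count saving the CGS approach targets.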
NASA Astrophysics Data System (ADS)
Schwientek, Marc; Guillet, Gaelle; Kuch, Bertram; Rügner, Hermann; Grathwohl, Peter
2014-05-01
Xenobiotic contaminants such as pharmaceuticals or personal care products are typically introduced continuously into receiving water bodies via wastewater treatment plant (WWTP) outfalls and, episodically, via combined sewer overflows during precipitation events. Little is known about how these chemicals behave in the environment and how they affect ecosystems and human health. Examples of traditional persistent organic pollutants reveal that they may still be present in the environment even decades after their release. In this study a sampling strategy was developed which gives valuable insights into the environmental behaviour of xenobiotic chemicals. The method is based on the Lagrangian sampling scheme, by which a parcel of water is sampled repeatedly as it moves downstream, so that the chemical, physical, and hydrologic processes altering the characteristics of the water mass can be investigated. The Steinlach is a tributary of the River Neckar in Southwest Germany with a catchment area of 140 km². It receives the effluents of a WWTP with 99,000 inhabitant equivalents 4 km upstream of its mouth. The varying flow rate of effluents induces temporal patterns of electrical conductivity in the river water which make it possible to track parcels of water along the subsequent urban river section. These parcels of water were sampled a) close to the outlet of the WWTP and b) 4 km downstream at the confluence with the Neckar. Sampling was repeated at a 15 min interval over a complete diurnal cycle and 2 h composite samples were prepared. A model-based analysis demonstrated, on the one hand, that substances behaved reactively to a varying extent along the studied river section. On the other hand, it revealed that the observed degradation rates are likely dependent on the time of day. Some chemicals were degraded mainly during daytime (e.g. the disinfectant Triclosan or the phosphorous flame retardant TDCP), others as well during nighttime (e.g. the musk fragrance
Efficient infill sampling for unconstrained robust optimization problems
NASA Astrophysics Data System (ADS)
Rehman, Samee Ur; Langelaar, Matthijs
2016-08-01
A novel infill sampling criterion is proposed for efficient estimation of the global robust optimum of expensive computer simulation based problems. The algorithm is especially geared towards addressing problems that are affected by uncertainties in design variables and problem parameters. The method is based on constructing metamodels using Kriging and adaptively sampling the response surface via a principle of expected improvement adapted for robust optimization. Several numerical examples and an engineering case study are used to demonstrate the ability of the algorithm to estimate the global robust optimum using a limited number of expensive function evaluations.
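The expected-improvement principle that the abstract adapts for robust optimization can be illustrated in its standard form. The sketch below is a generic EI acquisition over a Kriging-style posterior, not the authors' robust variant; the input means, standard deviations, and incumbent value are assumed for illustration.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Standard expected-improvement acquisition for minimization.

    mu, sigma: surrogate (e.g. Kriging) posterior mean and standard
    deviation at candidate points; best: incumbent best observed value.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    ei = np.zeros_like(mu)
    mask = sigma > 0  # EI is zero where the surrogate is certain
    z = (best - mu[mask]) / sigma[mask]
    ei[mask] = (best - mu[mask]) * norm.cdf(z) + sigma[mask] * norm.pdf(z)
    return ei

# Pick the next infill sample where EI is largest: a point with a
# mediocre mean but large uncertainty can beat a confidently good one.
ei = expected_improvement([0.2, 0.0, -0.1], [0.05, 0.3, 0.0], best=0.1)
print(ei, "-> next sample at index", int(np.argmax(ei)))
```

In an infill loop, the chosen point is evaluated with the expensive simulator, the Kriging model is refit, and the criterion is recomputed until the improvement saturates.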
The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes
NASA Technical Reports Server (NTRS)
Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark
2000-01-01
Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log(base 10) (L(sub [OII])/W) approximately > 35 [or radio luminosity log(base 10) (L(sub 151)/ W/Hz.sr) approximately > 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle theta(sub trans) approximately equal to 53 deg. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in theta(sub trans) and/or a gradual increase in the fraction of lightly-reddened (0 approximately < A(sub V) approximately < 5) lines-of-sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.
Learning approach to sampling optimization: Applications in astrodynamics
NASA Astrophysics Data System (ADS)
Henderson, Troy Allen
A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA) is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than is currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm, with performance compared against other methods on two models---of varying complexity---of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.
Simultaneous beam sampling and aperture shape optimization for SPORT
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu
2015-02-15
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds the new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework seamlessly integrating three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and
Optimized robust plasma sampling for glomerular filtration rate studies.
Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L
2012-09-01
In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how a few samples will allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement. PMID:22825040
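The biexponential clearance model underlying the audit can be sketched as follows. This is a minimal illustration assuming synthetic, noise-free concentrations at the five-sample schedule named in the abstract (5, 10, 15, 90 and 180 min); the tracer dose and curve parameters are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, alpha, b, beta):
    """Biexponential plasma clearance model C(t) = A e^(-alpha t) + B e^(-beta t)."""
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

# Hypothetical concentrations at the five judiciously timed samples.
t = np.array([5.0, 10.0, 15.0, 90.0, 180.0])
c = biexp(t, 80.0, 0.15, 20.0, 0.01)  # illustrative "true" curve

popt, _ = curve_fit(biexp, t, c, p0=(50, 0.1, 10, 0.005), maxfev=10000)
a, alpha, b, beta = popt

# Area under the clearance curve extrapolated to infinity; clearance
# (GFR) is then injected dose divided by AUC.  Dose is arbitrary here.
auc = a / alpha + b / beta
dose = 40_000.0
gfr = dose / auc
print(f"AUC = {auc:.1f}, clearance = {gfr:.2f} (volume/min)")
```

The same machinery, driven by Monte Carlo draws of clinically realistic parameter sets, is what allows any candidate sampling schedule to be scored for accuracy and precision.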
Optimization of the combined proton acceleration regime with a target composition scheme
NASA Astrophysics Data System (ADS)
Yao, W. P.; Li, B. W.; Zheng, C. Y.; Liu, Z. J.; Yan, X. Q.; Qiao, B.
2016-01-01
A target composition scheme to optimize the combined proton acceleration regime is presented and verified by two-dimensional particle-in-cell simulations using an ultra-intense circularly polarized (CP) laser pulse irradiating an overdense hydrocarbon (CH) target, instead of a pure hydrogen (H) one. The combined acceleration regime is a two-stage proton acceleration scheme combining the radiation pressure dominated acceleration (RPDA) stage and the laser wakefield acceleration (LWFA) stage sequentially together. Protons are pre-accelerated in the first stage, when the ultra-intense CP laser pulse irradiates the overdense CH target. The wakefield is driven by the laser pulse after penetrating through the overdense CH target and propagating in the underdense tritium plasma gas. With this pre-acceleration stage, protons can now get trapped in the wakefield and accelerated to much higher energy by LWFA. Finally, protons with higher energies (from about 20 GeV up to about 30 GeV) and lower energy spreads (from about 18% down to about 5% in full-width at half-maximum, or FWHM) are generated, as compared to the use of a pure H target. This is because protons can be more stably pre-accelerated in the first RPDA stage when using CH targets. With the increase of the carbon-to-hydrogen density ratio, the energy spread is lower and the maximum proton energy is higher. It also shows that for the same laser intensity around 10^22 W cm-2, using the CH target will lead to a higher proton energy, as compared to the use of a pure H target. Additionally, proton energy can be further increased by employing a longitudinally negative gradient of a background plasma density.
Test samples for optimizing STORM super-resolution microscopy.
Metcalf, Daniel J; Edwards, Rebecca; Kumarswami, Neelam; Knight, Alex E
2013-01-01
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of between 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition and density related problems resulting in the 'mislocalization' phenomenon. PMID:24056752
Determining the Bayesian optimal sampling strategy in a hierarchical system.
Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre
2010-09-01
Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
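A minimal analogue of ranking sampling strategies by their expected effect on the reliability estimate can be sketched with a conjugate Beta prior and Monte Carlo preposterior analysis. The priors, test budget, and cost-free setting below are illustrative stand-ins, not the report's hierarchical model or its Bayes-risk formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_posterior_variance(a, b, n, n_sim=20_000):
    """Preposterior expected variance of a component reliability p under a
    Beta(a, b) prior after n pass/fail tests, estimated by Monte Carlo."""
    p = rng.beta(a, b, size=n_sim)        # draw candidate "true" reliabilities
    x = rng.binomial(n, p)                # simulate the test outcomes
    a_post, b_post = a + x, b + n - x     # conjugate Beta update
    var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    return var.mean()

# Two hypothetical components with the same prior and a budget of 10 tests:
# compare allocating all tests to one component against splitting them.
prior = (2.0, 1.0)
for name, (na, nb) in {"all on A": (10, 0), "split 5/5": (5, 5)}.items():
    total = (expected_posterior_variance(*prior, na)
             + expected_posterior_variance(*prior, nb))
    print(f"{name}: expected total posterior variance = {total:.4f}")
```

Because variance reduction is concave in the number of tests, the split allocation wins here; with per-test costs and constraints added, the same comparison becomes the Bayes-risk calculation the abstract describes.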
NASA Astrophysics Data System (ADS)
Izzuan Jaafar, Hazriq; Mohd Ali, Nursabillilah; Mohamed, Z.; Asmiza Selamat, Nur; Faiz Zainal Abidin, Amar; Jamian, J. J.; Kassim, Anuar Mohamed
2013-12-01
This paper presents development of optimal PID and PD controllers for controlling the nonlinear gantry crane system. The proposed Binary Particle Swarm Optimization (BPSO) algorithm that uses Priority-based Fitness Scheme is adopted in obtaining five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses including trolley displacement and payload oscillation. The dynamic model of gantry crane system is derived using Lagrange equation. Simulation is conducted within Matlab environment to verify the performance of the system in terms of settling time (Ts), steady state error (SSE) and overshoot (OS). This proposed technique demonstrates that implementation of Priority-based Fitness Scheme in BPSO is effective and able to move the trolley as fast as possible to the various desired positions.
A General Investigation of Optimized Atmospheric Sample Duration
Eslinger, Paul W.; Miley, Harry S.
2012-11-28
The International Monitoring System (IMS) consists of up to 80 aerosol and xenon monitoring systems spaced around the world that have collection systems sensitive enough to detect nuclear releases from underground nuclear tests at great distances (CTBT 1996; CTBTO 2011). Although a few of the IMS radionuclide stations are closer together than 1,000 km (such as the stations in Kuwait and Iran), many of them are 2,000 km or more apart. In the absence of a scientific basis for optimizing the duration of atmospheric sampling, historically scientists used integration times from 24 hours to 14 days for radionuclides (Thomas et al. 1977). This was entirely adequate in the past because the sources of signals were far away and large, meaning that they were smeared over many days by the time they had travelled 10,000 km. The Fukushima event pointed out the unacceptable delay time (72 hours) between the start of sample acquisition and final data being shipped. A scientific basis for selecting a sample duration time is needed. This report considers plume migration of a nondecaying tracer using archived atmospheric data for 2011 in the HYSPLIT (Draxler and Hess 1998; HYSPLIT 2011) transport model. We present two related results: the temporal duration of the majority of the plume as a function of distance and the behavior of the maximum plume concentration as a function of sample collection duration and distance. The modeled plume behavior can then be combined with external information about sampler design to optimize sample durations in a sampling network.
Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Thompson, David R.; Hsiang, Kian
2013-01-01
This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function from each potential sense. These candidate sensings were then crosschecked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
Fixed-sample optimization using a probability density function
Barnett, R.N.; Sun, Zhiwei; Lester, W.A. Jr.
1997-12-31
We consider the problem of optimizing parameters in a trial function that is to be used in fixed-node diffusion Monte Carlo calculations. We employ a trial function with a Boys-Handy correlation function and a one-particle basis set of high quality. By employing sample points picked from a positive definite distribution, parameters that determine the nodes of the trial function can be varied without introducing singularities into the optimization. For CH as a test system, we find that a trial function of high quality is obtained and that this trial function yields an improved fixed-node energy. This result sheds light on the important question of how to improve the nodal structure and, thereby, the accuracy of diffusion Monte Carlo.
Gossner, Martin M; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W; Zytynska, Sharon E
2016-01-01
There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when genetic analysis
Mielke, Steven L; Truhlar, Donald G
2016-01-21
Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function. PMID:26801023
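The rejection step at the heart of the scheme can be illustrated with a one-dimensional toy: propose from a broad "free-particle" Gaussian and accept each draw with the ratio dictated by a narrower "harmonic" importance function. The Gaussian widths below are illustrative, not taken from the paper, and a single coordinate stands in for a whole discretized path.

```python
import numpy as np

rng = np.random.default_rng(1)

# Broad proposal (free-particle analogue) vs narrow target (harmonic guide).
sigma_free, sigma_harm = 3.0, 0.5
n = 100_000
x = rng.normal(0.0, sigma_free, size=n)

# Rejection sampling: accept with probability target(x) / (M * proposal(x)).
# For two zero-centred Gaussians this reduces to exp of the log-ratio below,
# with the envelope constant M = sigma_free / sigma_harm absorbed.
log_accept = x**2 / (2 * sigma_free**2) - x**2 / (2 * sigma_harm**2)
keep = rng.random(n) < np.exp(log_accept)

# Overall acceptance rate is 1/M; kept samples follow the narrow target.
print(f"accepted {keep.mean():.1%}; kept-sample std = {x[keep].std():.3f}")
```

The paper's 98-99.9% rejection rates correspond to the regime where the proposal is far broader than the guide, which is exactly when importance sampling pays off despite the discarded draws.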
Inhibition of viscous fluid fingering: A variational scheme for optimal flow rates
NASA Astrophysics Data System (ADS)
Miranda, Jose; Dias, Eduardo; Alvarez-Lacalle, Enrique; Carvalho, Marcio
2012-11-01
Conventional viscous fingering flow in radial Hele-Shaw cells employs a constant injection rate, resulting in the emergence of branched interfacial shapes. The search for mechanisms to prevent the development of these bifurcated morphologies is relevant to a number of areas in science and technology. A challenging problem is how best to choose the pumping rate in order to restrain growth of interfacial amplitudes. We use an analytical variational scheme to look for the precise functional form of such an optimal flow rate. We find it increases linearly with time in a specific manner so that interface disturbances are minimized. Experiments and nonlinear numerical simulations support the effectiveness of this particularly simple, but not at all obvious, pattern controlling process. J.A.M., E.O.D. and M.S.C. thank CNPq/Brazil for financial support. E.A.L. acknowledges support from Secretaria de Estado de IDI Spain under project FIS2011-28820-C02-01.
Khanlou, Khosro Mehdi; Vandepitte, Katrien; Asl, Leila Kheibarshekan; Van Bockstaele, Erik
2011-04-01
Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using less than 15 per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-cultivar sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation. PMID:21734826
NASA Astrophysics Data System (ADS)
Tan, Sirui; Huang, Lianjie
2014-11-01
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil for similar modeling accuracy on a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent at similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
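A one-dimensional sketch of dispersion-minimizing coefficient optimization: fit the off-centre coefficients of a symmetric second-derivative stencil so that the numerical squared wavenumber tracks the exact one over a chosen band. This uses a least-squares objective on the standard stencil, not the paper's extended stencil or its combined space-time objective; the band limit and stencil width are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def rel_error(c, kh):
    """Relative error of the numerical squared wavenumber for a symmetric
    second-derivative stencil with off-centre coefficients c = (c1, ..., cM):
    (kh)^2 is approximated by 2 * sum_m c_m * (1 - cos(m*kh))."""
    c = np.asarray(c, dtype=float)
    m = np.arange(1, len(c) + 1)
    k2_num = 2.0 * np.sum(c[:, None] * (1.0 - np.cos(np.outer(m, kh))), axis=0)
    return k2_num / kh**2 - 1.0

kh = np.linspace(0.01, 2.0, 200)             # wavenumber band to control
taylor = np.array([4.0 / 3.0, -1.0 / 12.0])  # standard 4th-order coefficients

# Minimize the summed squared relative phase error over the band,
# starting from the Taylor coefficients.
res = minimize(lambda c: np.sum(rel_error(c, kh) ** 2), taylor,
               method="Nelder-Mead")
opt = res.x

print("max |error|, Taylor   :", np.abs(rel_error(taylor, kh)).max())
print("max |error|, optimized:", np.abs(rel_error(opt, kh)).max())
```

Taylor coefficients are extremely accurate at small kh but degrade sharply near the band edge; the optimized coefficients trade a little small-kh accuracy for a much flatter error across the whole band, which is the basic mechanism the abstract exploits.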
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
A life test sampling plan is a technique consisting of sampling, inspection, and decision making, used to determine the acceptance or rejection of a batch of products from experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to follow either a one-parameter exponential distribution or a two-parameter Weibull distribution with a known shape parameter. Such oversimplified assumptions facilitate the follow-up analyses but overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem: a good plan not only helps producers save testing time and reduce testing cost, but also positively affects the image of the product and thus attracts more consumers. This paper develops frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans, with the aim of cost minimization, by identifying the appropriate number of product failures in a sample to be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions, both with two unknown parameters, are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application demonstrates the proposed approach.
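The cost trade-off behind such plans, picking the failure threshold that minimizes expected total cost, can be sketched with a deliberately simplified model: a binomial count of failures in the sample stands in for the paper's exponential/Weibull lifetime models, and all cost figures are hypothetical placeholders.

```python
import math

def expected_cost(n, p_fail, c, cost_test, cost_reject, cost_per_failure):
    """Expected cost of a single-sampling life test: test n units and reject
    the batch when more than c failures are observed. p_fail is the
    probability a unit fails during the test (hypothetical input)."""
    pmf = [math.comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
           for k in range(n + 1)]
    p_reject = sum(pmf[c + 1:])
    # warranty exposure: failures observed in an accepted batch signal field risk
    warranty = sum(pmf[k] * k * cost_per_failure for k in range(c + 1))
    return n * cost_test + p_reject * cost_reject + warranty

def optimal_threshold(n, p_fail, cost_test, cost_reject, cost_per_failure):
    """Failure threshold c that minimizes the expected total cost."""
    return min(range(n + 1),
               key=lambda c: expected_cost(n, p_fail, c, cost_test,
                                           cost_reject, cost_per_failure))

c_star = optimal_threshold(20, 0.1, 1.0, 100.0, 30.0)
```

Raising the threshold by one trades the rejection cost saved against the extra warranty exposure of accepting one more observed failure, so an interior optimum exists whenever the per-failure warranty cost and the rejection cost are of comparable magnitude.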
Optimization of Evans blue quantitation in limited rat tissue samples
NASA Astrophysics Data System (ADS)
Wang, Hwai-Lee; Lai, Ted Weita
2014-10-01
Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when the dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration with a standard 96-well plate reader. First, ethanol ensured a consistent optical path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements caused by biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful for detecting EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting.
Optimal CCD readout by digital correlated double sampling
NASA Astrophysics Data System (ADS)
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not yet been modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of the continuous-time approximations used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is obtained. This expression can easily be optimized to meet a set of specifications for a given CCD, providing a systematic design methodology for an optimal readout system. Simulated results, obtained with both time- and frequency-domain noise-generation models for completeness, are presented to validate the theory.
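The basic mechanism is easy to illustrate numerically: digitize many samples of the reference and signal levels of each pixel, average each group (a boxcar digital filter), and difference the means. The toy model below uses white readout noise only; the paper's analysis additionally covers 1/f noise, ADC quantization, and the conditioning stage, and the levels and noise figures here are made up.

```python
import numpy as np

def dcds(ref_samples, sig_samples):
    """Digital correlated double sampling with a boxcar filter:
    difference of the averaged signal and reference levels."""
    return np.mean(sig_samples) - np.mean(ref_samples)

rng = np.random.default_rng(0)

def read_pixel(n_avg, level=100.0, noise=5.0):
    """Simulate one pixel read: white readout noise on both levels."""
    ref = noise * rng.standard_normal(n_avg)
    sig = level + noise * rng.standard_normal(n_avg)
    return dcds(ref, sig)

single = np.std([read_pixel(1) for _ in range(2000)])     # classic CDS pair
averaged = np.std([read_pixel(16) for _ in range(2000)])  # 16x oversampled DCDS
```

For white noise the estimator's standard deviation shrinks by the square root of the number of averaged samples per level, which is the oversampling gain the closed-form SNR expression in the paper quantifies more generally.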
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are widely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices with huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and the DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes a sample's sensitivity is usually empirical and very time-consuming. This paper addresses that problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. PMID:26775132
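The search loop of such a neuro-genetic system can be sketched with a minimal real-coded genetic algorithm. Here a simple analytic function with a known optimum stands in for the trained MLP surrogate, and the population size, generation count, and mutation scale are illustrative choices, not the paper's settings.

```python
import random

def ga_maximize(fitness, lo, hi, pop_size=40, generations=60,
                mut_sigma=0.05, seed=1):
    """Minimal real-coded GA: truncation selection, arithmetic crossover,
    Gaussian mutation. 'fitness' plays the role of the MLP surrogate mapping
    an excitation parameter to impedance-phase sensitivity."""
    rng = random.Random(seed)
    population = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # keep the fitter half as parents
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        population = []
        while len(population) < pop_size:
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, mut_sigma * (hi - lo))
            population.append(min(hi, max(lo, child)))   # clamp to bounds
    return max(population, key=fitness)

# stand-in surrogate with a known optimum at x = 3
best = ga_maximize(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```

The point of pairing the GA with a neural surrogate is that each fitness evaluation becomes a cheap model call rather than a lengthy laboratory measurement of a GMI sample.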
Sampling plan optimization for detection of lithography and etch CD process excursions
NASA Astrophysics Data System (ADS)
Elliott, Richard C.; Nurani, Raman K.; Lee, Sung Jin; Ortiz, Luis G.; Preil, Moshe E.; Shanthikumar, J. G.; Riley, Trina; Goodwin, Greg A.
2000-06-01
Effective sample planning requires a careful combination of statistical analysis and lithography engineering. In this paper, we present a complete sample planning methodology including baseline process characterization, determination of the dominant excursion mechanisms, and selection of sampling plans and control procedures to effectively detect the yield-limiting excursions with a minimum of added cost. We discuss the results of our novel method in identifying critical dimension (CD) process excursions and present several examples of poly gate Photo and Etch CD excursion signatures. Using these results in a Sample Planning model, we determine the optimal sample plan and statistical process control (SPC) chart metrics and limits for detecting these excursions. The key observations are that there are many different yield-limiting excursion signatures in photo and etch, and that a given photo excursion signature turns into a different excursion signature at etch with different yield and performance impact. In particular, field-to-field variance excursions are shown to have a significant impact on yield. We show how current sampling plan and monitoring schemes miss these excursions and suggest an improved procedure for effective detection of CD process excursions.
NASA Astrophysics Data System (ADS)
Sah, B. P.; Hämäläinen, J. M.; Sah, A. K.; Honji, K.; Foli, E. G.; Awudi, C.
2012-07-01
Accurate and reliable estimation of biomass in tropical forest has been a challenging task because a large proportion of forests are difficult to access or inaccessible. So, for effective implementation of REDD+ and fair benefit sharing, the proper designing of field plot sampling schemes plays a significant role in achieving robust biomass estimation. The existing forest inventory protocols using various field plot sampling schemes, including FAO's regular grid concept of sampling for land cover inventory at national level, are time and human resource intensive. Wall to wall LiDAR scanning is, however, a better approach to assess biomass with high precision and spatial resolution, even though this approach suffers from high costs. Considering the above, in this study a sampling design based on a LiDAR strips sampling scheme has been devised for Ghanaian forests to support field plot sampling. Using Top-of-Atmosphere (TOA) reflectance values of satellite data, Land Use classification was carried out in accordance with IPCC definitions, and the resulting classes were further stratified, incorporating existing GIS data of ecological zones in the study area. Employing this result, LiDAR sampling strips were allocated using systematic sampling techniques. The resulting LiDAR strips represented all forest categories, as well as other Land Use classes, with their distribution adequately representing the areal share of each category. In this way, out of a total area of 15,153 km2 in the study area, LiDAR scanning was required for only 770 km2 (a sampling intensity of 5.1%). We conclude that this systematic LiDAR sampling design is likely to adequately cover variation in above-ground biomass densities and serve as sufficient a priori data, together with the Land Use classification produced, for designing efficient field plot sampling over the seven ecological zones.
NASA Astrophysics Data System (ADS)
Li, Jun; Liu, Bo; Zhao, Yan; Yang, Xiaodong; Lu, Xiaofeng; Wang, Lei
2015-04-01
This paper focuses on a new design method that optimizes the aspirated compressor airfoil and the aspiration scheme simultaneously. The optimization method is based on the artificial bee colony algorithm and the CST method, while the flow field is computed by a 2D computational program. The optimization of the rotor-tip and stator-tip airfoils from an aspirated fan stage is demonstrated to verify the effectiveness of the new coupling method. The results show that the total pressure losses of the optimized stator-tip and rotor-tip airfoils are reduced by 54% and 20%, respectively. The artificial bee colony algorithm and the CST method show satisfying applicability in aspirated-airfoil optimization design. Finally, the features of the aspirated-airfoil design process are summarized.
Optimal sampling and sample preparation for NIR-based prediction of field scale soil properties
NASA Astrophysics Data System (ADS)
Knadel, Maria; Peng, Yi; Schelde, Kirsten; Thomsen, Anton; Deng, Fan; Humlekrog Greve, Mogens
2013-04-01
The representation of local soil variability with acceptable accuracy and precision is dependent on the spatial sampling strategy and can vary with a soil property. Therefore, soil mapping can be expensive when conventional soil analyses are involved. Visible near infrared spectroscopy (vis-NIR) is considered a cost-effective method due to labour savings and relative accuracy. However, savings may be offset by the costs associated with the number of samples and sample preparation. The objective of this study was to find the most optimal way to predict field scale total organic carbon (TOC) and texture. To optimize the vis-NIR calibrations, the effects of sample preparation and number of samples on the predictive ability of models with regard to the spatial distribution of TOC and texture were investigated. The conditioned Latin hypercube sampling (cLHs) method was used to select 125 sampling locations from an agricultural field in Denmark, using electromagnetic induction (EMI) and digital elevation model (DEM) data. The soil samples were scanned in three states (field moist, air dried and sieved to 2 mm) with a vis-NIR spectrophotometer (LabSpec 5100, ASD Inc., USA). The Kennard-Stone algorithm was applied to select 50 representative soil spectra for the laboratory analysis of TOC and texture. In order to investigate how to minimize the costs of reference analysis, additional smaller subsets (15, 30 and 40) of samples were selected for calibration. The performance of field calibrations using spectra of soils at the three states as well as using different numbers of calibration samples was compared. Final models were then used to predict the remaining 75 samples. Maps of predicted soil properties were generated with Empirical Bayesian Kriging. The results demonstrated that, regardless of the state of the scanned soil, the regression models and the final prediction maps were similar for most of the soil properties. Nevertheless, as expected, models based on spectra from field
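The Kennard-Stone step mentioned above, picking calibration samples that span the spectral space, is compact enough to show directly. Below is a generic Euclidean-distance version with random vectors standing in for the vis-NIR spectra; the dimensions and counts mirror the study's 125 locations and 50 calibration samples but the data are mock.

```python
import numpy as np

def kennard_stone(X, k):
    """Select k rows of X: start from the two most distant samples, then
    repeatedly add the sample whose minimum distance to the already-selected
    set is largest (maximin coverage of the spectral space)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]
    while len(selected) < k:
        remaining = [m for m in range(len(X)) if m not in selected]
        nearest = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(nearest))])
    return selected

rng = np.random.default_rng(0)
spectra = rng.random((125, 30))     # 125 mock 'spectra' of 30 bands
calib = kennard_stone(spectra, 50)  # indices of 50 calibration samples
```

Because the greedy rule always picks the point farthest from everything already chosen, the calibration set covers the extremes of the spectral cloud rather than oversampling its dense center.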
Optimization of the development process for air sampling filter standards
NASA Astrophysics Data System (ADS)
Mena, RaJah Marie
Air monitoring is an important analysis technique in health physics. However, creating standards which can be used to calibrate detectors used in the analysis of the filters deployed for air monitoring can be challenging. The activity of a standard should be well understood; this includes understanding how the location within the filter affects the final surface emission rate. The purpose of this research is to determine the parameters which most affect uncertainty in an air filter standard and to optimize these parameters such that calibrations made with the standards most accurately reflect the true activity contained inside. A deposition pattern was chosen from the literature to provide the best approximation of uniform deposition of material across the filter. Sample sets were created varying the type of radionuclide, the amount of activity (high activity at 6.4 -- 306 Bq/filter and low activity at 0.05 -- 6.2 Bq/filter), and the filter type. For samples analyzed for gamma or beta contaminants, the standards created with this procedure were deemed sufficient. Additional work is needed to reduce errors and ensure this is a viable procedure, especially for alpha contaminants.
NASA Astrophysics Data System (ADS)
Huang, Bormin; Sriraja, Y.; Ahuja, Alok; Goldberg, Mitchell D.
2006-08-01
Most source coding techniques generate bitstreams in which different regions have unequal influence on data reconstruction. An uncorrected error in a more influential region can cause more error propagation in the reconstructed data. Given a limited bandwidth, unequal error protection (UEP) via channel coding with different code rates for different regions of the bitstream may yield much less error contamination than equal error protection (EEP). We propose an optimal UEP scheme that minimizes error contamination after channel and source decoding. We use JPEG2000 for source coding and turbo product code (TPC) for channel coding as an example to demonstrate this technique with ultraspectral sounder data. Wavelet compression yields unequal significance across wavelet resolutions. In the proposed UEP scheme, the statistics of erroneous pixels after TPC and JPEG2000 decoding are used to determine the optimal channel code rate for each wavelet resolution. The proposed UEP scheme significantly reduces the number of pixel errors compared to its EEP counterpart. In practice, with a predefined set of implementation parameters (available channel codes, desired code rate, noise level, etc.), the optimal code rate allocation for UEP needs to be determined only once and can be done offline.
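Since the rate allocation is a small offline search, it can be sketched as brute force over per-resolution code rates. The rate-to-residual-error table, the sensitivity weights, and the sizes below are illustrative inventions; in the paper these quantities come from measured TPC and JPEG2000 decoding statistics.

```python
from itertools import product

def best_uep(sizes, weights, rate_err, budget):
    """Exhaustive UEP search: assign one channel-code rate per wavelet
    resolution, minimizing the weighted residual error subject to a total
    transmitted-bits budget (lower code rate = more redundancy = more bits)."""
    best_combo, best_damage = None, float("inf")
    for combo in product(rate_err, repeat=len(sizes)):
        bits = sum(s / r for s, r in zip(sizes, combo))
        if bits > budget:
            continue
        damage = sum(w * rate_err[r] for w, r in zip(weights, combo))
        if damage < best_damage:
            best_combo, best_damage = combo, damage
    return best_combo

sizes = [10_000, 20_000, 40_000]   # source bits per resolution (coarse -> fine)
weights = [100.0, 10.0, 1.0]       # error-propagation impact, coarse worst
rate_err = {0.5: 1e-6, 0.75: 1e-4, 0.9: 1e-2}  # residual error per code rate
plan = best_uep(sizes, weights, rate_err, budget=100_500)
```

With these numbers the search protects the coarse resolution with the strongest code and leaves the fine resolution at a weaker one, which is the qualitative behavior UEP is designed to produce.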
Optimization for Peptide Sample Preparation for Urine Peptidomics
Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan; Dai, Hong; Qian, Weijun; Camp, David G.; Sarwal, Minnie M.
2014-02-25
when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis due to the optimized sample clean up that provided improved experimental inference from the confidently identified peptides.
Li, Hui-zhen; Tao, Dong-liang; Qi, Jian; Wu, Jin-guang; Xu, Yi-zhuang; Noda, Isao
2014-04-24
Two-dimensional (2D) synchronous spectroscopy, together with a new approach called the "Orthogonal Sample Design Scheme", was used to study the dipole-dipole interactions in two representative ternary chemical systems (N,N-dimethylformamide (DMF)/CH3COOC2H5/CCl4 and C60/CH3COOC2H5/CCl4). For the first system, dipole-dipole interactions among carbonyl groups from DMF and CH3COOC2H5 are characterized by using the cross peak in 2D Fourier Transform Infrared (FT-IR) spectroscopy. For the second system, the intermolecular interaction between the π-π transition of C60 and the vibrational transition of the carbonyl band of ethyl acetate is probed by using 2D spectra. The experimental results demonstrate that the "Orthogonal Sample Design Scheme" can effectively remove the interfering part that is not relevant to intermolecular interaction. Additional procedures are carried out to preclude the possibility of interfering cross peaks produced by other causes, such as experimental errors. Dipole-dipole interactions that manifest as deviations from the Beer-Lambert law generate distinct cross peaks visualized in the resultant 2D synchronous spectra of the two chemical systems. This work demonstrates that 2D synchronous spectra coupled with the orthogonal sample design scheme provide an applicable experimental approach to probing and characterizing dipole-dipole interactions in complex molecular systems. PMID:24582337
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2011 CFR
2011-07-01
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve cap emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
Zhang, Zhen; Zhang, Qianwu; Chen, Jian; Li, Yingchun; Song, Yingxiong
2016-06-13
A low-complexity joint symbol synchronization and SFO estimation scheme for asynchronous optical IMDD OFDM systems, based on only one training symbol, is proposed. Numerical simulations and experimental demonstrations are also undertaken to evaluate the performance of the proposed scheme. The experimental results show that robust and precise symbol synchronization and SFO estimation can be achieved simultaneously at received optical power as low as -20 dBm in asynchronous OOFDM systems. The SFO estimation accuracy in MSE can be lower than 1 × 10^{-11} for SFOs ranging from -60 ppm to 60 ppm after 25 km SSMF transmission. Optimal system performance is maintained as long as the cumulative number of frames employed in the calculation is less than 50 under the above-mentioned conditions. Meanwhile, the proposed joint scheme has a low level of operational complexity compared with existing methods when symbol synchronization and SFO estimation are considered together. These results provide an important reference for practical system design. PMID:27410279
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing a natural, real scene as we see it in the everyday world is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the existing redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step has been replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
Motion estimation optimization in a MPEG-1-like video coding scheme for low-bit-rate applications
NASA Astrophysics Data System (ADS)
Roser, Miguel; Villegas, Paulo
1994-05-01
In this paper we present work based on a coding algorithm for visual information that follows the International Standard ISO-IEC IS 11172, `Coding of Moving Pictures and Associated Audio for Digital Storage Media up to about 1.5 Mbit/s', widely known as MPEG-1. The main intention in the definition of the MPEG-1 standard was to provide a large degree of flexibility for use in many different applications. The interest of this paper is to adapt the MPEG-1 scheme for low-bit-rate operation and to optimize it for special situations, such as a talking head with low movement, which is a usual situation in videotelephony applications. An adapted and compatible MPEG-1 scheme, previously developed, able to operate at p×8 kbit/s, is used in this work. Looking for a low-complexity scheme, and taking into account that the most expensive step in the scheme (from the point of view of consumed computer time) is the motion estimation process (almost 80% of the total computer time is spent on the ME), an improvement of the motion estimation module based on the use of a new search pattern is presented in this paper.
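Search-pattern improvements of this kind are variations on classic reduced-complexity block matching. The paper's own pattern is not reproduced in the abstract, but the well-known three-step search conveys the idea: evaluate a coarse 9-point pattern, move to the best point, halve the step, and repeat, visiting far fewer candidates than a full search.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def three_step_search(cur, ref, bx, by, bs=8, step=4):
    """Classic three-step search for the motion vector of the bs x bs block
    of 'cur' at (bx, by) within 'ref' (illustrative only; the paper's
    optimized pattern differs)."""
    block = cur[by:by + bs, bx:bx + bs]
    mx = my = 0
    while step >= 1:
        best = sad(block, ref[by + my:by + my + bs, bx + mx:bx + mx + bs])
        cand = (mx, my)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                x, y = bx + mx + dx, by + my + dy
                if 0 <= x <= ref.shape[1] - bs and 0 <= y <= ref.shape[0] - bs:
                    cost = sad(block, ref[y:y + bs, x:x + bs])
                    if cost < best:
                        best, cand = cost, (mx + dx, my + dy)
        mx, my = cand
        step //= 2
    return mx, my

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, 4, axis=1)            # pure 4-pixel horizontal shift
mv = three_step_search(cur, ref, 12, 12)
```

For this pure translation the search finds the exact motion vector (-4, 0) after the first step and keeps it, having evaluated roughly 27 candidates instead of the hundreds a full search window would require.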
NASA Astrophysics Data System (ADS)
Rasmussen, Troels Hels; Wang, Yang Min; Kjærgaard, Thomas; Kristensen, Kasper
2016-05-01
We augment the recently introduced same number of optimized parameters (SNOOP) scheme [K. Kristensen et al., J. Chem. Phys. 142, 114116 (2015)] for calculating interaction energies of molecular dimers with an F12 correction and generalize the method to enable the determination of interaction energies of general molecular clusters. The SNOOP, uncorrected (UC), and counterpoise (CP) schemes with/without an F12 correction are compared for the S22 test set of Jurečka et al. [Phys. Chem. Chem. Phys. 8, 1985 (2006)]—which consists of 22 molecular dimers of biological importance—and for water and methane molecular clusters. The calculations have been performed using the Resolution of the Identity second-order Møller-Plesset perturbation theory method. We conclude from the results that the SNOOP scheme generally yields interaction energies closer to the complete basis set limit value than the UC and CP approaches, regardless of whether the F12 correction is applied or not. Specifically, using the SNOOP scheme with an F12 correction yields the computationally most efficient way of achieving accurate results at low basis set levels. These conclusions hold both for molecular dimers and more general molecular clusters.
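For reference, the counterpoise (CP) scheme against which SNOOP is compared evaluates every monomer energy in the full dimer basis. This is the standard Boys-Bernardi correction, stated here from general knowledge rather than from the abstract:

```latex
E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}(\mathcal{B}_{AB}) - E_{A}(\mathcal{B}_{AB}) - E_{B}(\mathcal{B}_{AB})
```

where $\mathcal{B}_{AB}$ denotes the union of the basis sets of monomers $A$ and $B$. The uncorrected (UC) scheme instead uses each monomer's own basis for $E_A$ and $E_B$, while SNOOP interpolates between the two by keeping the number of optimized parameters consistent across the dimer and monomer calculations.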
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
NASA Astrophysics Data System (ADS)
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
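The marginalization idea, scoring a design by its expected posterior variance over data sets that have not been collected yet, can be sketched with a bootstrap-filter ensemble. This is a bare-bones illustration of the PreDIA idea, not the authors' implementation: synthetic "truths" are drawn from the prior ensemble itself, hypothetical data are simulated with Gaussian noise, and the posterior variance under likelihood weights is averaged over those data realizations.

```python
import numpy as np

def expected_posterior_variance(prior_z, sim_data, noise_std, n_hypo=200, seed=0):
    """Preposterior data impact: average, over hypothetical data sets drawn
    from the prior ensemble, the bootstrap-filter posterior variance of the
    prediction z. Smaller value = more informative measurement design."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_hypo):
        k = rng.integers(len(prior_z))            # pick a synthetic 'truth'
        d = sim_data[k] + noise_std * rng.standard_normal(sim_data.shape[1])
        misfit = ((sim_data - d) ** 2).sum(axis=1) / (2.0 * noise_std**2)
        w = np.exp(misfit.min() - misfit)         # stabilized Gaussian weights
        w /= w.sum()
        mean = w @ prior_z
        out.append(w @ (prior_z - mean) ** 2)
    return float(np.mean(out))

rng = np.random.default_rng(1)
theta = rng.standard_normal(500)               # prior ensemble of predictions
informative = np.tile(theta[:, None], (1, 3))  # data that directly 'see' theta
uninformative = np.zeros((500, 3))             # identical data: no information
epv_good = expected_posterior_variance(theta, informative, noise_std=0.1)
epv_bad = expected_posterior_variance(theta, uninformative, noise_std=0.1)
```

An informative design drives the expected posterior variance well below the prior variance, while a design whose simulated data are the same for every ensemble member leaves it unchanged, which is the ranking an optimal design search exploits.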
A Self-Optimizing Scheme for Energy Balanced Routing in Wireless Sensor Networks Using SensorAnt
Shamsan Saleh, Ahmed M.; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Ismail, Alyani
2012-01-01
Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN leading to reduced energy usage, extended network life time and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency. PMID:23112658
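The multi-criteria route quality described above can be sketched as a simple weighted score. The weights and the exact functional form below are illustrative, not taken from the paper, which embeds such a function in the ACO pheromone update.

```python
def route_quality(residual_energies, w_min=0.5, w_avg=0.3, w_hop=0.2):
    """Score a candidate route (one residual-battery value per hop, in [0,1]):
    favor routes whose weakest node still has charge, with high average
    energy and few hops. Weights are illustrative assumptions."""
    hops = len(residual_energies)
    e_min = min(residual_energies)
    e_avg = sum(residual_energies) / hops
    return w_min * e_min + w_avg * e_avg + w_hop / hops

route_a = [0.9, 0.8, 0.85]        # healthy 3-hop route
route_b = [0.9, 0.1, 0.95, 0.9]   # contains a nearly drained node
q_a = route_quality(route_a)
q_b = route_quality(route_b)
```

Weighting the minimum residual energy heavily is what steers ants away from routes that would exhaust a single bottleneck node, which is the balancing behavior the abstract emphasizes.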
NASA Astrophysics Data System (ADS)
O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.
2013-04-01
Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost: huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes the implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that analysis on the compressed data remains accurate.
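The CS pipeline the abstract describes, random sub-Nyquist measurement followed by sparse reconstruction, can be illustrated with a generic noiseless example. Orthogonal Matching Pursuit is used here as the recovery routine purely for illustration; the paper concerns bridge acceleration data and does not prescribe a particular solver.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    y = Phi @ x by picking the column most correlated with the residual
    and re-solving least squares on the growing support."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
n, m, k = 128, 64, 4                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x_true                        # m << n 'compressed' measurements
x_hat = omp(Phi, y, k)
```

The sensor-side burden is only the random projection producing `y`, half the samples of the full signal here; the heavier reconstruction runs off the node, which is where the energy savings reported in the study come from.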
Poirier, Bill; Salam, A
2004-07-22
In this paper, we extend and elaborate upon a wavelet method first presented in a previous publication [B. Poirier, J. Theor. Comput. Chem. 2, 65 (2003)]. In particular, we focus on construction and optimization of the wavelet functions, from theoretical and numerical viewpoints, and also examine their localization properties. The wavelets used are modified Wilson-Daubechies wavelets, which in conjunction with a simple phase space truncation scheme, enable one to solve the multidimensional Schrödinger equation. This approach is ideally suited to rovibrational spectroscopy applications, but can be used in any context where differential equations are involved. PMID:15260720
An image fusion of quincunx sampling lifting scheme and small real-time DSP-based system
NASA Astrophysics Data System (ADS)
Wang, Qiang; Ni, Guoqiang; Chen, Bo
2008-03-01
An image fusion method is presented that combines the quincunx sampling lifting wavelet transform with a fusion strategy based on area edge change. The lifting wavelet transform enables fast computation with no auxiliary memory and can realize an integer wavelet transform. Quincunx sampling adopts a scheme well suited to the human visual system and yields a non-rectangular partition of the spectrum; the quincunx sampling lifting scheme combines the advantages of both. Furthermore, a fusion strategy based on horizontal, vertical, and diagonal edge change in the low-frequency image preserves the object integrity of the source images. The algorithm complexity and system input/output requirements are calculated, after which a small integrated dual-spectral image fusion system with a TMS320DM642 DSP as its kernel processor is presented. The hardware design (function, principle, structure, and high-speed circuit PCB design) is described, and the software design methods and their implementation on this fusion platform are introduced. The resulting real-time dual-spectral image fusion system achieves high performance with small board dimensions, laying a solid foundation for future applications.
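The in-place predict/update structure that makes lifting fast and memory-free can be shown in one dimension with the Haar wavelet; the paper's quincunx (2-D, non-rectangular) lattice uses the same two-step pattern. The example below is a generic sketch, not the authors' transform.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting: split, predict, update.
    Each step overwrites one polyphase component in place, which is why
    lifting needs no auxiliary memory."""
    s, d = x[0::2].astype(float).copy(), x[1::2].astype(float).copy()
    d -= s            # predict: detail = odd - even
    s += d / 2        # update: coarse = even + detail/2 (preserves the mean)
    return s, d

def haar_lift_inverse(s, d):
    """Invert by running the same steps backwards with opposite signs."""
    s = s - d / 2     # undo update
    d = d + s         # undo predict
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d
    return x

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
s, d = haar_lift_forward(x)
print(np.allclose(haar_lift_inverse(s, d), x))   # True: perfect reconstruction
```

Because every lifting step is trivially invertible regardless of the predict/update operators chosen, the same pattern extends to the quincunx lattice and to integer-to-integer transforms.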
Intentional sampling by goal optimization with decoupling by stochastic perturbation
NASA Astrophysics Data System (ADS)
Lauretto, Marcelo De Souza; Nakano, Fábio; Pereira, Carlos Alberto de Bragança; Stern, Julio Michael
2012-10-01
Intentional sampling methods are non-probabilistic procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. They are intended for exploratory research or pilot studies where tight budget constraints preclude the use of traditional randomized representative sampling. The possibility of subsequently generalizing statistically from such deterministic samples to the general population has been the subject of long-standing arguments and debates. Nevertheless, the intentional sampling techniques developed in this paper explore pragmatic strategies for overcoming some of the real or perceived shortcomings and limitations of intentional sampling in practical applications.
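A minimal sketch of goal-oriented selection, assuming the prescribed criteria are target covariate means; the greedy selection rule and the data are illustrative stand-ins, not the authors' goal-optimization-with-stochastic-perturbation procedure.

```python
import numpy as np

def intentional_sample(pop, targets, n):
    """Greedily pick n individuals whose covariate means track prescribed
    targets: a toy example of deterministic, criteria-driven selection."""
    chosen, remaining = [], list(range(len(pop)))
    for _ in range(n):
        best, best_err = None, np.inf
        for i in remaining:
            trial = pop[chosen + [i]]
            err = np.linalg.norm(trial.mean(axis=0) - targets)
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(1)
pop = rng.normal(size=(200, 3))           # 200 individuals, 3 covariates
targets = np.array([0.0, 0.0, 0.0])       # prescribed criterion: zero-mean sample
idx = intentional_sample(pop, targets, 20)
print(np.linalg.norm(pop[idx].mean(axis=0) - targets))  # close to 0
```

The sample matches the criteria far better than a random draw of the same size would on average, which is exactly the appeal, and the statistical hazard, of intentional sampling.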
Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional- to global-scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
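Diagnostic models of the kind evaluated here typically estimate GPP with a light-use-efficiency form: a biome maximum efficiency is down-regulated by climate scalars and multiplied by FPAR times incoming PAR. The sketch below uses MODIS-GPP-style linear ramps; the function name, parameter values, and scalar shapes are illustrative assumptions, not CFLUX's actual parameterization.

```python
def gpp_lue(par, fpar, tmin, vpd, eps_max=1.8,
            tmin_lo=-8.0, tmin_hi=8.0, vpd_lo=650.0, vpd_hi=3500.0):
    """Light-use-efficiency GPP (gC m-2 d-1): a maximum efficiency eps_max
    is scaled down by minimum-temperature and VPD ramps, then multiplied
    by absorbed light (FPAR x PAR, with PAR in MJ m-2 d-1)."""
    def ramp(x, lo, hi, rising=True):
        f = min(max((x - lo) / (hi - lo), 0.0), 1.0)
        return f if rising else 1.0 - f

    f_t = ramp(tmin, tmin_lo, tmin_hi, rising=True)    # cold limitation
    f_v = ramp(vpd, vpd_lo, vpd_hi, rising=False)      # dry-air limitation
    return eps_max * f_t * f_v * fpar * par

# A mild early-summer day at a conifer site (illustrative inputs).
print(gpp_lue(par=10.0, fpar=0.8, tmin=5.0, vpd=1000.0))
```

In this structure the sensitivity to the FPAR product is direct: GPP scales linearly with FPAR, so a sensor whose FPAR has a compressed dynamic range compresses the modeled flux range in the same proportion.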
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Interpreting PCB concentration... § 761.79(b)(3) § 761.316 Interpreting PCB concentration measurements resulting from this sampling... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Interpreting PCB concentration... § 761.79(b)(3) § 761.316 Interpreting PCB concentration measurements resulting from this sampling... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Interpreting PCB concentration... § 761.79(b)(3) § 761.316 Interpreting PCB concentration measurements resulting from this sampling... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2010 CFR
2010-07-01
... composite is the measurement for the entire area. For example, when there is a composite of 10 standard wipe test samples representing 9.5 square meters of surface area and the result of the analysis of the composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20...
Improved pKa calculations through flexibility based sampling of a water-dominated interaction scheme
Warwicker, Jim
2004-01-01
Ionizable groups play critical roles in biological processes. Computation of pKas is complicated by model approximations and multiple conformations. Calculated and experimental pKas are compared for relatively inflexible active-site side chains, to develop an empirical model for hydration entropy changes upon charge burial. The modification is found to be generally small, but large for cysteine, consistent with small molecule ionization data and with partial charge distributions in ionized and neutral forms. The hydration model predicts significant entropic contributions for ionizable residue burial, demonstrated for components in the pyruvate dehydrogenase complex. Conformational relaxation in a pH-titration is estimated with a mean-field assessment of maximal side chain solvent accessibility. All ionizable residues interact within a low protein dielectric finite difference (FD) scheme, and more flexible groups also access water-mediated Debye-Hückel (DH) interactions. The DH method tends to match overall pH-dependent stability, while FD can be more accurate for active-site groups. Tolerance for side chain rotamer packing is varied, defining access to DH interactions, and the best fit with experimental pKas obtained. The new (FD/DH) method provides a fast computational framework for making the distinction between buried and solvent-accessible groups that has been qualitatively apparent from previous work, and pKa calculations are significantly improved for a mixed set of ionizable residues. Its effectiveness is also demonstrated with computation of the pH-dependence of electrostatic energy, recovering favorable contributions to folded state stability and, in relation to structural genomics, with substantial improvement (reduction of false positives) in active-site identification by electrostatic strain. PMID:15388865
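Underlying any pKa calculation is the Henderson-Hasselbalch relation between a group's pKa and its ionization state; a pKa shifted by burial translates directly into a pH-dependent energetic contribution. The snippet shows only this elementary relation, not the FD/DH machinery above.

```python
def ionized_fraction(pH, pKa, acid=True):
    """Henderson-Hasselbalch relation: fraction of the deprotonated form.
    For an acidic group (Asp, Glu, Cys) this is the charged fraction; for
    a base (Lys, Arg, His) the charged, protonated fraction is 1 - f."""
    f = 1.0 / (1.0 + 10.0 ** (pKa - pH))
    return f if acid else 1.0 - f

# A buried Asp with a pKa shifted up to 6.0 is only half ionized at pH 6,
# whereas a solvent-exposed Asp (pKa ~ 4) would be essentially fully charged.
print(ionized_fraction(6.0, 6.0))    # 0.5
```

Methods like FD/DH differ precisely in how they compute the interaction energies that shift each pKa away from its model-compound value; once shifted pKas are known, titration behavior follows from this relation.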
NASA Astrophysics Data System (ADS)
Brunner, Fabian; Radu, Florin A.; Bause, Markus; Knabner, Peter
2012-01-01
We present a mass conservative finite element approach of second order accuracy for the numerical approximation of reactive solute transport in porous media modeled by a coupled system of advection-diffusion-reaction equations. The lowest order Brezzi-Douglas-Marini ( BDM1) mixed finite element method is used. A modification based on the hybrid form of the approach is suggested for the discretization of the advective term. It is demonstrated numerically that this leads to optimal second order convergence of the flux variable. The modification improves the convergence behavior of the classical BDM1 scheme, which is known to be suboptimal of first order accuracy only for advection-diffusion problems; cf. [8]. Moreover, the new scheme shows more robustness for high Péclet numbers than the classical approach. A comparison with the Raviart-Thomas element ( RT1) of second order accuracy for the approximation of the flux variable is also presented. For the case of strongly advection-dominated problems we propose a full upwind scheme. Various numerical studies, including also a nonlinear test problem, are presented to illustrate the numerical performance properties of the considered numerical methods.
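Why upwinding stabilizes advection-dominated transport can be seen even in a one-dimensional finite-difference caricature. This is only an analogue of the "full upwind" idea: the paper works with BDM1 mixed finite elements, and all grid and coefficient choices below are illustrative.

```python
import numpy as np

def step_upwind(u, a, D, dx, dt):
    """One explicit Euler step of du/dt + a du/dx = D d2u/dx2 with the
    advective term fully upwinded (periodic boundaries via np.roll)."""
    upwind = (u - np.roll(u, 1)) / dx if a > 0 else (np.roll(u, -1) - u) / dx
    diff = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    return u - dt * a * upwind + dt * D * diff

n = 200
dx = 1.0 / n
x = np.arange(n) * dx
u = np.exp(-((x - 0.3) / 0.05) ** 2)   # initial pulse
a, D = 1.0, 1e-5                        # strongly advection-dominated (high Peclet)
dt = 0.4 * dx / a                       # CFL-limited time step
for _ in range(100):
    u = step_upwind(u, a, D, dx, dt)
print(float(u.max()), float(u.min()))   # pulse advects, no spurious oscillation
```

With these parameters every update coefficient is nonnegative, so the scheme is monotone: the solution stays within its initial bounds, at the price of some numerical diffusion, which is the classical upwinding trade-off.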
NASA Astrophysics Data System (ADS)
Castro Franco, Mauricio; Costa, Jose Luis; Aparicio, Virginia
2015-04-01
Digital soil mapping techniques can be used to improve soil information at field scale. The aim of this study was to develop a random forest (RF) model for soil organic matter (SOM) and clay content in topsoil at farm scale, combining predictor reduction and model-based soil-sampling techniques. We combined predictor reduction by factor analysis with model-based soil-sampling schemes based on conditioned Latin hypercube sampling (cLHS) and fuzzy c-means sampling (FCMS). In general, 11 of 18 predictors were selected. Factor analysis provided an efficient quantitative method to determine the number of predictors. The combination of cLHS and predictor reduction with factor analysis was effective in predicting SOM and clay content. Factors related to vegetation cover and the yield map were the most important predictors of SOM and clay content, whereas factors related to topography were the least important. A minimum dataset of 50 soil samples was necessary to demonstrate the efficacy of the combined factor analysis-cLHS-RF model. The accuracy of the RF models for predicting SOM and clay content can be maximized by increasing the number of samples. In this study, we demonstrated that combining factor analysis with cLHS can reduce the time and financial resources needed to improve the predictive capacity of RF models for soil properties.
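cLHS picks sampling sites whose covariate values reproduce the marginal distribution of each predictor across the field. A greedy-swap version of that objective (the published algorithm uses simulated annealing) can be sketched as follows; the grid size, predictors, and cost function are illustrative.

```python
import numpy as np

def clhs_greedy(X, n, rng, sweeps=500):
    """Greedy sketch of conditioned Latin hypercube sampling: choose n rows
    of the predictor matrix X so each predictor ends up with roughly one
    sample per quantile stratum."""
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n + 1))
             for j in range(X.shape[1])]

    def cost(idx):
        c = 0
        for j, e in enumerate(edges):
            counts, _ = np.histogram(X[idx, j], bins=e)
            c += np.abs(counts - 1).sum()      # target: 1 sample per stratum
        return c

    idx = list(rng.choice(len(X), n, replace=False))
    for _ in range(sweeps):                    # random swap descent
        i, k = rng.integers(n), rng.integers(len(X))
        if k in idx:
            continue
        trial = idx.copy()
        trial[i] = k
        if cost(trial) <= cost(idx):
            idx = trial
    return idx

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                  # e.g. terrain and yield-map covariates
idx = clhs_greedy(X, 20, rng)
print(len(set(map(int, idx))))                 # 20 distinct sampling locations
```

Each selected row corresponds to a field location, so the resulting sample covers the covariate space rather than merely the geographic space, which is what lets a small dataset train a usable RF model.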
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible, since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor, since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computation-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient-based or non-gradient-based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
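The core multifidelity idea, letting the cheap model narrow the search so the expensive model is evaluated sparingly, can be caricatured in one dimension. The two analytic functions and grid searches below are stand-ins: the actual MFO algorithm uses pattern search plus a space-mapping oracle between design spaces.

```python
import numpy as np

def f_hi(x):
    """Expensive high-fidelity model (an analytic stand-in here)."""
    return (x - 1.2) ** 2 + 0.3 * np.sin(5 * x)

def f_lo(x):
    """Cheap low-fidelity model: captures the trend but is biased."""
    return (x - 1.0) ** 2

# Stage 1: broad, cheap search using only the low-fidelity model.
grid = np.linspace(-2.0, 4.0, 601)
x_lo = grid[np.argmin(f_lo(grid))]

# Stage 2: a small number of expensive evaluations near the cheap optimum.
local = np.linspace(x_lo - 0.5, x_lo + 0.5, 201)
x_star = local[np.argmin(f_hi(local))]
print(float(x_lo), float(x_star))
```

The expensive model is evaluated 201 times instead of 601, and only where the cheap model says the optimum plausibly lies; MFO's space mapping generalizes this handoff to the case where the two models live in different design spaces.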
Density and level set-XFEM schemes for topology optimization of 3-D structures
NASA Astrophysics Data System (ADS)
Villanueva, Carlos H.; Maute, Kurt
2014-07-01
As the capabilities of additive manufacturing techniques increase, topology optimization provides a promising approach to design geometrically sophisticated structures. Traditional topology optimization methods aim at finding conceptual designs, but they often do not resolve sufficiently the geometry and the structural response such that the optimized designs can be directly used for manufacturing. To overcome these limitations, this paper studies the viability of the extended finite element method (XFEM) in combination with the level-set method (LSM) for topology optimization of three dimensional structures. The LSM describes the geometry by defining the nodal level set values via explicit functions of the optimization variables. The structural response is predicted by a generalized version of the XFEM. The LSM-XFEM approach is compared against results from a traditional Solid Isotropic Material with Penalization method for two-phase "solid-void" and "solid-solid" problems. The numerical results demonstrate that the LSM-XFEM approach describes crisply the geometry and predicts the structural response with acceptable accuracy even on coarse meshes.
Estimating optimal sampling unit sizes for satellite surveys
NASA Technical Reports Server (NTRS)
Hallum, C. R.; Perry, C. R., Jr.
1984-01-01
This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that effect minimal costs. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis while conservatively guarding against expected component error variances.
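The structure of the optimization, a sampling error variance that falls with unit size traded against a nonsampling error variance that grows with it, can be seen in a toy closed form. The component models and coefficients below are invented for illustration; the paper's models are more detailed.

```python
import math

# Hypothetical component models (illustrative coefficients, not the paper's):
A, B = 4.0, 0.25

def sampling_var(s):
    return A / s      # larger units average out within-unit variability

def nonsampling_var(s):
    return B * s      # larger units accumulate measurement/classification error

def total_var(s):
    return sampling_var(s) + nonsampling_var(s)

# For this model the optimum is available in closed form:
#   d/ds (A/s + B*s) = -A/s**2 + B = 0   =>   s* = sqrt(A/B)
s_star = math.sqrt(A / B)
print(s_star, total_var(s_star))   # 4.0 2.0
```

The paper's closed-form variance models play the role of `sampling_var` and `nonsampling_var` here, and the conservative nonsampling model corresponds to choosing a pessimistic `B`.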
NASA Astrophysics Data System (ADS)
Kala, J.; De Kauwe, M. G.; Pitman, A. J.; Lorenz, R.; Medlyn, B. E.; Wang, Y.-P.; Lin, Y.-S.; Abramowitz, G.
2015-12-01
We implement a new stomatal conductance scheme, based on the optimality approach, within the Community Atmosphere Biosphere Land Exchange (CABLEv2.0.1) land surface model. Coupled land-atmosphere simulations are then performed using CABLEv2.0.1 within the Australian Community Climate and Earth Systems Simulator (ACCESSv1.3b) with prescribed sea surface temperatures. As in most land surface models, the default stomatal conductance scheme only accounts for differences in model parameters in relation to the photosynthetic pathway but not in relation to plant functional types. The new scheme allows model parameters to vary by plant functional type, based on a global synthesis of observations of stomatal conductance under different climate regimes over a wide range of species. We show that the new scheme reduces the latent heat flux from the land surface over the boreal forests during the Northern Hemisphere summer by 0.5-1.0 mm day-1. This leads to warmer daily maximum and minimum temperatures by up to 1.0 °C and warmer extreme maximum temperatures by up to 1.5 °C. These changes generally improve the climate model's climatology of warm extremes and improve existing biases by 10-20 %. The bias in minimum temperatures is however degraded but, overall, this is outweighed by the improvement in maximum temperatures as there is a net improvement in the diurnal temperature range in this region. In other regions such as parts of South and North America where ACCESSv1.3b has known large positive biases in both maximum and minimum temperatures (~ 5 to 10 °C), the new scheme degrades this bias by up to 1 °C. We conclude that, although several large biases remain in ACCESSv1.3b for temperature extremes, the improvements in the global climate model over large parts of the boreal forests during the Northern Hemisphere summer which result from the new stomatal scheme, constrained by a global synthesis of experimental data, provide a valuable advance in the long-term development
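The optimality-based class of scheme referred to here is the Medlyn et al. (2011) form, in which a single slope parameter g1 captures the water cost of carbon gain and can vary by plant functional type. The snippet is a sketch: the g1 values are only indicative of a conifer/broadleaf contrast, not CABLE's fitted parameters.

```python
import math

def medlyn_gs(A, Ca, D, g1, g0=0.0):
    """Optimality-based stomatal conductance, Medlyn et al. (2011) form:
    gs = g0 + 1.6 * (1 + g1/sqrt(D)) * A/Ca, with net photosynthesis A in
    umol m-2 s-1, CO2 mole fraction Ca in umol mol-1, vapor pressure
    deficit D in kPa, and PFT-dependent slope g1 in kPa^0.5."""
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca

# Indicative g1 contrast only: a low-g1 needleleaf PFT closes stomata
# harder than a high-g1 broadleaf PFT under identical conditions, which
# shifts the latent/sensible heat partitioning as described above.
g_conifer = medlyn_gs(A=10.0, Ca=400.0, D=1.5, g1=2.3)
g_broadleaf = medlyn_gs(A=10.0, Ca=400.0, D=1.5, g1=4.5)
print(g_conifer, g_broadleaf)
```

Allowing g1 to differ by PFT is precisely the change that reduces boreal-forest latent heat flux in the simulations described, since the default scheme applied one parameter set per photosynthetic pathway regardless of vegetation type.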
A global earthquake discrimination scheme to optimize ground-motion prediction equation selection
Garcia, Daniel; Wald, David J.; Hearne, Michael
2012-01-01
We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
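The decision logic can be pictured as a coarse decision tree over tectonic setting and depth. The categories and thresholds below are simplified placeholders: the real scheme also consults Flinn-Engdahl regions, focal mechanisms, and slab interface models.

```python
def gmpe_class(depth_km, tectonic, near_interface=False):
    """Toy decision tree in the spirit of the discrimination scheme: map
    basic source parameters to a ground-motion prediction equation class.
    All category names and the 50 km threshold are illustrative."""
    if tectonic == "stable":
        return "stable-continental"
    if tectonic == "active":
        return "active-shallow-crustal"
    if tectonic == "subduction":
        if depth_km >= 50:
            return "subduction-intraslab"
        return "subduction-interface" if near_interface else "crustal-upper-plate"
    return "unknown"

print(gmpe_class(70, "subduction"))                        # deep slab event
print(gmpe_class(15, "subduction", near_interface=True))   # megathrust-type event
print(gmpe_class(10, "active"))
```

In an operational system such as ShakeMap the point of this classification is speed: a defensible GMPE choice is available within minutes, before detailed source studies exist.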
Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R.; Afshar, Baharak; Underwood, Anthony; Harrison, Timothy G.
2016-01-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current “gold standard” typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard “typing panel,” previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. PMID:27280420
David, Sophia; Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R; Afshar, Baharak; Underwood, Anthony; Fry, Norman K; Parkhill, Julian; Harrison, Timothy G
2016-08-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current "gold standard" typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard "typing panel," previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. PMID:27280420
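The index of discrimination quoted above is the Hunter-Gaston formulation of Simpson's index: the probability that two isolates drawn at random from the panel receive different types. The panel below is a toy example, not the study's data.

```python
from collections import Counter

def discrimination_index(types):
    """Hunter-Gaston index of discriminatory power:
    D = 1 - sum_j n_j (n_j - 1) / (N (N - 1)), where n_j is the number of
    isolates assigned to type j and N the panel size."""
    n = len(types)
    counts = Counter(types).values()
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Toy panel: 6 isolates, one common sequence type and two singletons.
print(discrimination_index(["ST1", "ST1", "ST1", "ST1", "ST2", "ST3"]))
```

This makes the trade-off in the abstract concrete: a scheme that splits the common ST into several subtypes raises D toward 1, but may also split epidemiologically linked isolates, which is why concordance tends to fall as D rises.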
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.
2015-05-01
Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture; it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.
NASA Astrophysics Data System (ADS)
Akmaev, R. A.
1999-04-01
In Part 1 of this work ([Akmaev, 1999]), an overview is presented of the theory of optimal interpolation (OI) ([Gandin, 1963]) and related techniques of data assimilation based on linear optimal estimation ([Liebelt, 1967]; [Catlin, 1989]; [Mendel, 1995]). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares, or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information, such as conventional least squares (e.g., [Liebelt, 1967]; [Weisberg, 1985]; [Press et al., 1992]; [Mendel, 1995]).
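The OI analysis step itself is compact: the background is corrected by the innovations, weighted with a gain built from the background and observation error covariances, x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b). A minimal 1-D sketch follows; all covariance parameters and observation values are illustrative.

```python
import numpy as np

# Estimate a field on a 9-point grid from two point observations, using an
# assumed Gaussian background error covariance with correlation length L.
n = 9
grid = np.arange(n, dtype=float)
x_b = np.zeros(n)                                   # background (first guess)

L, sigma_b = 2.0, 1.0                               # correlation length, background std
B = sigma_b ** 2 * np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / L) ** 2)

H = np.zeros((2, n))
H[0, 2] = H[1, 6] = 1.0                             # observe grid points 2 and 6
R = 0.1 * np.eye(2)                                 # observation error covariance
y = np.array([1.0, -0.5])

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # optimal gain
x_a = x_b + K @ (y - H @ x_b)                       # analysis: fills data voids

print(x_a.round(3))   # near the observations at their locations, smooth between
```

The a priori covariance B is what lets the analysis spread information from the two observed points into the unobserved gaps, which is exactly the "filling data voids" role of OI described above.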
Time-optimal path planning in dynamic flows using level set equations: theory and schemes
NASA Astrophysics Data System (ADS)
Lolla, Tapovan; Lermusiaux, Pierre F. J.; Ueckermann, Mattheus P.; Haley, Patrick J.
2014-09-01
We develop an accurate partial differential equation-based methodology that predicts the time-optimal paths of autonomous vehicles navigating in any continuous, strong, and dynamic ocean currents, obviating the need for heuristics. The goal is to predict a sequence of steering directions so that vehicles can best utilize or avoid currents to minimize their travel time. Inspired by the level set method, we derive and demonstrate that a modified level set equation governs the time-optimal path in any continuous flow. We show that our algorithm is computationally efficient and apply it to a number of experiments. First, we validate our approach through a simple benchmark application in a Rankine vortex flow for which an analytical solution is available. Next, we apply our methodology to more complex, simulated flow fields such as unsteady double-gyre flows driven by wind stress and flows behind a circular island. These examples show that time-optimal paths for multiple vehicles can be planned even in the presence of complex flows in domains with obstacles. Finally, we present and support through illustrations several remarks that describe specific features of our methodology.
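The effect the level-set methodology exploits, currents along the heading shortening travel time and opposing currents lengthening it, can be illustrated with a coarse graph approximation: Dijkstra on a grid with flow-dependent edge costs. This is only a discrete stand-in for the continuous PDE scheme of the paper, and all grid and flow parameters are illustrative.

```python
import heapq
import math

def travel_time_field(nx, ny, speed, flow, start, h=1.0):
    """Dijkstra travel times on a grid where moving one cell along unit
    heading e costs h / (V + u . e): the current projected on the heading
    speeds the vehicle up or slows it down."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        t, (i, j) = heapq.heappop(pq)
        if t > dist.get((i, j), math.inf):
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < nx and 0 <= nj < ny):
                continue
            u, v = flow(i, j)
            ground_speed = speed + u * di + v * dj   # flow projected on heading
            if ground_speed <= 1e-9:
                continue                              # cannot make headway this way
            nt = t + h / ground_speed
            if nt < dist.get((ni, nj), math.inf):
                dist[(ni, nj)] = nt
                heapq.heappush(pq, (nt, (ni, nj)))
    return dist

# Uniform eastward current of half the vehicle speed: the downstream target
# is reached in a third of the time needed for the upstream one.
dist = travel_time_field(11, 11, speed=1.0,
                         flow=lambda i, j: (0.5, 0.0), start=(5, 5))
print(dist[(10, 5)], dist[(0, 5)])
```

The PDE approach refines this in two ways the graph cannot: headings are continuous rather than restricted to grid directions, and the evolving level set directly encodes the reachability front from which optimal paths are backtracked.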
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Finding an optimal DAG scheduling solution is NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG), and the super molecule for accelerating convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977
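As a baseline for what metaheuristic schedulers such as TMSCRO compete against, a greedy earliest-finish-time list scheduler fits in a few lines. The sketch assumes identical nodes and ignores communication delays; the heterogeneous case adds per-node task costs, and the DAG here is a toy.

```python
def list_schedule(tasks, deps, cost, n_nodes):
    """Greedy earliest-finish-time list scheduling of a DAG onto identical
    nodes. tasks must be in topological order; deps maps a task to its
    predecessors. Returns task -> (node, finish_time)."""
    finish = {}
    node_free = [0.0] * n_nodes
    for t in tasks:
        # Earliest a task may start: all predecessors finished.
        ready = max((finish[p][1] for p in deps.get(t, [])), default=0.0)
        best = min(range(n_nodes), key=lambda m: max(node_free[m], ready))
        start = max(node_free[best], ready)
        finish[t] = (best, start + cost[t])
        node_free[best] = start + cost[t]
    return finish

tasks = ["a", "b", "c", "d"]
deps = {"c": ["a"], "d": ["b", "c"]}
cost = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
sched = list_schedule(tasks, deps, cost, n_nodes=2)
makespan = max(f for _, f in sched.values())
print(sched, makespan)
```

Metaheuristics like CRO explore the space of task-to-node assignments and orderings that such a greedy rule never revisits, which is where their makespan improvements come from.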
Optimized design and analysis of sparse-sampling FMRI experiments.
Perrachione, Tyler K; Ghosh, Satrajit S
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
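The simulation side of such an investigation can be sketched directly: build an event train at a chosen stimulation rate, convolve it with a canonical hemodynamic response, and read off the signal only at the sparse acquisition times. The HRF shape parameters, the 8 s stimulation rate, and the 10 s TR below are all illustrative choices, not the study's settings.

```python
import numpy as np
from math import gamma

def hrf(t, p1=6.0, p2=16.0, ratio=6.0):
    """Canonical double-gamma hemodynamic response (conventional default
    shape parameters, used here purely for illustration)."""
    return (t ** (p1 - 1) * np.exp(-t) / gamma(p1)
            - t ** (p2 - 1) * np.exp(-t) / gamma(p2) / ratio)

dt, T = 0.1, 120.0
t = np.arange(0.0, T, dt)
events = np.zeros_like(t)
events[(t % 8.0) < dt] = 1.0                     # one stimulus every 8 s
bold = np.convolve(events, hrf(t), mode="full")[: t.size] * dt

TR = 10.0                                        # sparse: one volume every 10 s
sample_idx = np.rint(np.arange(0.0, T, TR) / dt).astype(int)
sparse_bold = bold[sample_idx]
print(sparse_bold.size, float(sparse_bold.max()))
```

Varying `TR` and the event spacing in such a simulation, and fitting the sparse samples with and without HRF convolution in the model, reproduces the kind of parameter-space exploration the report describes.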
Ant colony optimization as a method for strategic genotype sampling.
Spangler, M L; Robbins, K R; Bertrand, J K; Macneil, M; Rekaya, R
2009-06-01
A simulation study was carried out to develop an alternative method of selecting animals to be genotyped. Simulated pedigrees included 5000 animals, each assigned genotypes for a bi-allelic single nucleotide polymorphism (SNP) based on assumed allelic frequencies of 0.7/0.3 and 0.5/0.5. In addition to simulated pedigrees, two beef cattle pedigrees, one from field data and the other from a research population, were used to test selected methods using simulated genotypes. The proposed method of ant colony optimization (ACO) was evaluated based on the number of alleles correctly assigned to ungenotyped animals (AK(P)), the probability of assigning true alleles (AK(G)) and the probability of correctly assigning genotypes (APTG). The proposed animal selection method of ant colony optimization was compared to selection using the diagonal elements of the inverse of the relationship matrix (A(-1)). Comparisons of these two methods showed that ACO yielded an increase in AK(P) ranging from 4.98% to 5.16% and an increase in APTG from 1.6% to 1.8% using simulated pedigrees. Gains in field data and research pedigrees were slightly lower. These results suggest that ACO can provide a better genotyping strategy, when compared to A(-1), with different pedigree sizes and structures. PMID:19220227
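The abstract does not give the authors' ACO formulation, but the general pattern of ant colony optimization for subset selection can be sketched as follows; the toy objective (favoring animals with many ungenotyped relatives) and all parameter values are illustrative assumptions.

```python
import random

def aco_select(items, score, k, n_ants=20, n_iter=30, rho=0.1, seed=1):
    """Pick k items maximizing score(subset) with a basic ant colony:
    pheromone on each item biases sampling; rho is the evaporation rate."""
    rng = random.Random(seed)
    tau = {i: 1.0 for i in items}          # pheromone per item
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            # each ant samples k distinct items, probability ~ pheromone
            pool = list(items)
            subset = []
            for _ in range(k):
                w = [tau[i] for i in pool]
                pick = rng.choices(range(len(pool)), weights=w)[0]
                subset.append(pool.pop(pick))
            s = score(subset)
            if s > best_score:
                best, best_score = subset, s
        # evaporate, then reinforce the best subset found so far
        tau = {i: (1.0 - rho) * tau[i] for i in items}
        for i in best:
            tau[i] += 1.0
    return best, best_score

# hypothetical objective: genotype the animals with most ungenotyped relatives
relatives = {1: 5, 2: 2, 3: 9, 4: 1, 5: 7, 6: 3}
subset, s = aco_select(list(relatives),
                       lambda ss: sum(relatives[i] for i in ss), k=3)
```

The paper's actual objective involves allele assignment probabilities over a pedigree; the pheromone-sampling loop above is only the generic ACO skeleton.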
A Sample Time Optimization Problem in a Digital Control System
NASA Astrophysics Data System (ADS)
Mitkowski, Wojciech; Oprzędkiewicz, Krzysztof
This paper describes the phenomenon whereby a particular value of the sample time minimizes the settling time in a digital control system. An experimental heat plant was used as the controlled process, and the control system was built with a soft PLC system (SIEMENS SIMATIC). A finite-dimensional dynamic compensator was applied as the control algorithm. During tests of the control system it was observed that there exists a value of the sample time which minimizes the settling time of the system; an explanation of this phenomenon is attempted.
Optimization of strawberry volatile sampling by odor representativeness
Technology Transfer Automated Retrieval System (TEKTRAN)
The aim of this work was to choose a suitable sampling headspace technique to study 'Festival' aroma, the main strawberry cultivar grown in Florida. For that, the aromatic quality of extracts from different headspace techniques was evaluated using direct gas chromatography-olfactometry (D-GC-O), a s...
Optimization of strawberry volatile sampling by direct gas chromatography olfactometry
Technology Transfer Automated Retrieval System (TEKTRAN)
The aim of this work was to choose a suitable sampling headspace technique to study ‘Festival’ aroma, the main strawberry cultivar grown in Florida. For that, the aromatic quality of extracts from different headspace techniques was evaluated using direct gas chromatography-olfactometry (D-GC-O), a s...
Pareto-optimal clustering scheme using data aggregation for wireless sensor networks
NASA Astrophysics Data System (ADS)
Azad, Puneet; Sharma, Vidushi
2015-07-01
The presence of cluster heads (CHs) in a clustered wireless sensor network (WSN) leads to improved data aggregation and enhanced network lifetime, so the selection of appropriate CHs in WSNs is a challenging task that needs to be addressed. A multicriterion decision-making approach for the selection of CHs is presented using Pareto-optimal theory and the technique for order preference by similarity to ideal solution (TOPSIS). CHs are selected using three criteria: energy, cluster density and distance from the sink. In simulations, the overall network lifetime of this method with 50% data aggregation is 81% higher than that of distributed hierarchical agglomerative clustering in a similar environment with the same set of parameters. The optimum number of clusters, estimated using the TOPSIS technique, is found to be 9-11 for effective energy usage in WSNs.
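The TOPSIS ranking step used for cluster-head selection can be sketched generically. The node values, weights, and benefit/cost labels below are hypothetical; the method itself (vector normalization, ideal/anti-ideal points, relative closeness) is the textbook TOPSIS procedure.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS: vector-normalize each criterion,
    weight it, then score each row by its relative closeness to the
    ideal solution. benefit[j] is True for 'larger is better' criteria."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness in [0, 1]
    return scores

# hypothetical nodes: [residual energy, cluster density, distance to sink]
nodes = [[0.9, 12, 40.0], [0.5, 20, 15.0], [0.7, 8, 60.0]]
scores = topsis(nodes, weights=[0.4, 0.3, 0.3], benefit=[True, True, False])
best = max(range(len(nodes)), key=lambda i: scores[i])
```

The node with the highest closeness score would be elected cluster head; distance to the sink is flagged as a cost criterion, so smaller distances pull a node toward the ideal point.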
A neighboring optimal feedback control scheme for systems using discontinuous control.
NASA Technical Reports Server (NTRS)
Foerster, R. E.; Flugge-Lotz, I.
1971-01-01
The calculation and implementation of the neighboring optimal feedback control law for multiinput, nonlinear dynamical systems, using discontinuous control, is discussed. An initialization procedure is described which removes the requirement that the neighboring initial state be in the neighborhood of the nominal initial state. This procedure is a bootstrap technique for determining the most appropriate control-law gain for the neighboring initial state. The mechanization of the neighboring control law described is closed loop in that the concept of time-to-go is utilized in the determination of the control-law gains appropriate for each neighboring state. The gains are chosen such that the time-to-go until the next predicted switch time or predicted final time is the same for both the neighboring and nominal trajectories. The procedure described is utilized to solve the minimum-time satellite attitude-acquisition problem.
Sample of CFD optimization of a centrifugal compressor stage
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
An industrial centrifugal compressor stage is a complicated object for gas-dynamic design when the goal is to achieve maximum efficiency. The authors analyzed results of CFD performance modeling (NUMECA Fine Turbo calculations). Performance prediction as a whole was modest or poor in all known cases; maximum efficiency prediction, on the contrary, was quite satisfactory. Flow structure in the stator elements was in good agreement with known data. The intermediate-type stage "3D impeller + vaneless diffuser + return channel" was designed using principles well proven for stages with 2D impellers. CFD calculations of vaneless diffuser candidates demonstrated flow separation in a VLD of constant width; the candidate with a symmetrically tapered inlet part (b3/b2 = 0.73) proved to be the best. Flow separation takes place in the crossover with the standard configuration, so an alternative variant was developed and numerically tested. The experience obtained was formulated as corrected design recommendations. Several impeller candidates were compared by maximum stage efficiency; the variant designed by standard gas-dynamic principles of blade cascade design proved to be the best. Quasi-3D inviscid calculations were applied to optimize blade velocity diagrams: non-incidence inlet, control of the diffusion factor and of the average blade load. A "geometric" principle of blade formation, with a linear change of blade angles along the blade length, proved less effective. Candidates with different geometry parameters were designed with the 6th version of the mathematical model and compared. The candidate with optimal parameters (number of blades, inlet diameter, and leading-edge meridian position) is 1% more efficient than the initial design.
Determination and optimization of spatial samples for distributed measurements.
Huo, Xiaoming; Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong
2010-10-01
There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.
Optimizing analog-to-digital converters for sampling extracellular potentials.
Artan, N Sertac; Xu, Xiaoxiang; Shi, Wei; Chao, H Jonathan
2012-01-01
In neural implants, an analog-to-digital converter (ADC) provides the delicate interface between the analog signals generated by neurological processes and the digital signal processor tasked to interpret these signals, for instance for epileptic seizure detection or limb control. In this paper, we propose a low-power ADC architecture for neural implants that process extracellular potentials. The proposed architecture uses the spike detector that is readily available on most of these implants in a closed loop with an ADC. The spike detector determines whether the current input signal is part of a spike or part of noise, so as to adaptively set the instantaneous sampling rate of the ADC. The proposed architecture can reduce the power consumption of a traditional ADC by 62% when sampling extracellular potentials, without any significant impact on spike detection accuracy. PMID:23366227
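The closed-loop idea, sample densely while the detector flags a spike and sparsely otherwise, can be illustrated with a minimal sketch. The threshold detector, the step sizes, and the toy trace are all assumptions for illustration, not the paper's circuit.

```python
def adaptive_sample(signal, threshold, high_step=1, low_step=8):
    """Keep every sample while a simple threshold spike detector fires,
    otherwise subsample aggressively. Step sizes are illustrative."""
    kept = []
    i = 0
    while i < len(signal):
        kept.append((i, signal[i]))
        # detector output drives the instantaneous sampling rate
        step = high_step if abs(signal[i]) > threshold else low_step
        i += step
    return kept

# toy trace: flat baseline with one "spike" burst at indices 40-43
trace = [0.0] * 40 + [5.0, 8.0, 6.0, 4.0] + [0.0] * 40
samples = adaptive_sample(trace, threshold=2.0)
```

Here the burst is captured at full rate while the baseline is decimated 8:1, which is the mechanism by which the adaptive rate saves conversion power.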
Statistically optimal analysis of samples from multiple equilibrium states
Shirts, Michael R.; Chodera, John D.
2008-01-01
We present a new estimator for computing free energy differences and thermodynamic expectations as well as their uncertainties from samples obtained from multiple equilibrium states via either simulation or experiment. The estimator, which we call the multistate Bennett acceptance ratio estimator (MBAR) because it reduces to the Bennett acceptance ratio estimator (BAR) when only two states are considered, has significant advantages over multiple histogram reweighting methods for combining data from multiple states. It does not require the sampled energy range to be discretized to produce histograms, eliminating bias due to energy binning and significantly reducing the time complexity of computing a solution to the estimating equations in many cases. Additionally, an estimate of the statistical uncertainty is provided for all estimated quantities. In the large sample limit, MBAR is unbiased and has the lowest variance of any known estimator for making use of equilibrium data collected from multiple states. We illustrate this method by producing a highly precise estimate of the potential of mean force for a DNA hairpin system, combining data from multiple optical tweezer measurements under constant force bias. PMID:19045004
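The MBAR estimator's free energies solve a set of self-consistent equations, f_i = -ln Σ_n exp(-u_i(x_n)) / Σ_k N_k exp(f_k - u_k(x_n)), which can be iterated directly. The sketch below uses a tiny fabricated reduced-potential matrix purely to show the iteration; real applications use the pymbar implementation and proper samples.

```python
import math

def mbar(u, counts, tol=1e-10, max_iter=10000):
    """Self-consistent MBAR iteration. u[k][n] is the reduced potential of
    pooled sample n evaluated in state k; counts[k] is the number of
    samples drawn from state k. Returns free energies with f[0] = 0."""
    K, N = len(u), len(u[0])
    f = [0.0] * K
    for _ in range(max_iter):
        # mixture denominator for each sample: sum_k N_k exp(f_k - u_k(x_n))
        denom = [sum(counts[k] * math.exp(f[k] - u[k][n]) for k in range(K))
                 for n in range(N)]
        new_f = [-math.log(sum(math.exp(-u[i][n]) / denom[n] for n in range(N)))
                 for i in range(K)]
        new_f = [fi - new_f[0] for fi in new_f]  # fix the gauge: f[0] = 0
        if max(abs(a - b) for a, b in zip(new_f, f)) < tol:
            return new_f
        f = new_f
    return f

u = [[0.0, 0.5, 1.0, 1.5],   # toy reduced potentials, state 0
     [1.0, 0.8, 0.2, 0.1]]   # state 1
f = mbar(u, counts=[2, 2])
```

With only two states this reduces to the Bennett acceptance ratio, as the abstract notes; the same loop handles any number of states.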
[Study on the optimization methods of common-batch identification of amphetamine samples].
Zhang, Jianxin; Zhang, Daming
2008-07-01
This paper introduces the technology of amphetamine identification and its optimization. The impurity profile of amphetamine was analyzed by GC-MS. Common-batch identification of amphetamine samples could be successfully accomplished by transformation and pre-treatment of the peak areas. The analytical method was improved by optimizing the techniques of sample extraction, gas chromatography, sample separation and detection. PMID:18839544
Optimal Short-Time Acquisition Schemes in High Angular Resolution Diffusion-Weighted Imaging
Prčkovska, V.; Achterberg, H. C.; Bastiani, M.; Pullens, P.; Balmashnova, E.; ter Haar Romeny, B. M.; Vilanova, A.; Roebroeck, A.
2013-01-01
This work investigates the possibilities of applying high-angular-resolution-diffusion-imaging- (HARDI-) based methods in a clinical setting by investigating the performance of non-Gaussian diffusion probability density function (PDF) estimation for a range of b-values and diffusion gradient direction tables. It does so at realistic SNR levels achievable in limited time on a high-performance 3T system for the whole human brain in vivo. We use both computational simulations and in vivo brain scans to quantify the angular resolution of two selected reconstruction methods: Q-ball imaging and the diffusion orientation transform. We propose a new analytical solution to the ODF derived from the DOT. Both techniques are analytical decomposition approaches that require identical acquisition and modest postprocessing times and, given the proposed modifications of the DOT, can be analyzed in a similar fashion. We find that an optimal HARDI protocol given a stringent time constraint (<10 min) combines a moderate b-value (around 2000 s/mm2) with a relatively low number of acquired directions (>48). Our findings generalize to other methods and additional improvements in MR acquisition techniques. PMID:23554808
Alleviating Linear Ecological Bias and Optimal Design with Sub-sample Data
Glynn, Adam; Wakefield, Jon; Handcock, Mark S.; Richardson, Thomas S.
2009-01-01
In this paper, we illustrate that combining ecological data with subsample data in situations in which a linear model is appropriate provides three main benefits. First, by including the individual level subsample data, the biases associated with linear ecological inference can be eliminated. Second, by supplementing the subsample data with ecological data, the information about parameters will be increased. Third, we can use readily available ecological data to design optimal subsampling schemes, so as to further increase the information about parameters. We present an application of this methodology to the classic problem of estimating the effect of a college degree on wages. We show that combining ecological data with subsample data provides precise estimates of this value, and that optimal subsampling schemes (conditional on the ecological data) can provide good precision with only a fraction of the observations. PMID:20052294
Optimized Sampling Strategies For Non-Proliferation Monitoring: Report
Kurzeja, R.; Buckley, R.; Werth, D.; Chiswell, S.
2015-10-20
Concentration data collected from the 2013 H-Canyon effluent reprocessing experiment were reanalyzed to improve the source term estimate. When errors in the model-predicted wind speed and direction were removed, the source term uncertainty was reduced to 30% of the mean. This explained the factor of 30 difference between the source term size derived from data at 5 km and 10 km downwind in terms of the time history of dissolution. The results show a path forward to develop a sampling strategy for quantitative source term calculation.
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2014-10-01
The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. WRF is one of the most widely used weather prediction systems in the world, and its development is a collaborative effort spanning the globe. The Goddard microphysics scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements; we have therefore optimized the code of this important part of WRF. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as GPUs do, and the coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, obtaining maximum performance from the MIC requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on a Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved performance on a dual-socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.
Optimal sampling of visual information for lightness judgments
Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R.
2013-01-01
The variable resolution and limited processing capacity of the human visual system requires us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object’s luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. PMID:23776251
Optimizing fish sampling for fish - mercury bioaccumulation factors
Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste A.; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.
2015-01-01
Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
Optimal sampling of visual information for lightness judgments.
Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R
2013-07-01
The variable resolution and limited processing capacity of the human visual system requires us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. PMID:23776251
Optimizing fish sampling for fish-mercury bioaccumulation factors.
Scudder Eikenberry, Barbara C; Riva-Murray, Karen; Knightes, Christopher D; Journey, Celeste A; Chasar, Lia C; Brigham, Mark E; Bradley, Paul M
2015-09-01
Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop total maximum daily load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish-sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements. PMID:25592462
Gutiérrez-Cacciabue, Dolores; Teich, Ingrid; Poma, Hugo Ramiro; Cruz, Mercedes Cecilia; Balzarini, Mónica; Rajal, Verónica Beatriz
2014-01-01
Several recreational surface waters in Salta, Argentina, were selected to assess their quality. Seventy percent of the measurements exceeded at least one of the limits established by international legislation, making the waters unsuitable for use. To interpret results of complex data, multivariate techniques were applied. Arenales River, due to the variability observed in the data, was divided in two: upstream and downstream, representing low and high pollution sites, respectively; Cluster Analysis supported that differentiation. Arenales River downstream and Campo Alegre Reservoir were the most different environments and Vaqueros and La Caldera Rivers were the most similar. Canonical Correlation Analysis allowed exploration of correlations between physicochemical and microbiological variables, except in both parts of Arenales River, and Principal Component Analysis allowed finding relationships among the 9 measured variables in all aquatic environments. Variable loadings showed that Arenales River downstream was impacted by industrial and domestic activities, Arenales River upstream was affected by agricultural activities, Campo Alegre Reservoir was disturbed by anthropogenic and ecological effects, and La Caldera and Vaqueros Rivers were influenced by recreational activities. Discriminant Analysis allowed identification of the subgroup of variables responsible for seasonal and spatial variations. Enterococcus, dissolved oxygen, conductivity, E. coli, pH, and fecal coliforms are sufficient to spatially describe the quality of the aquatic environments. Regarding seasonal variations, dissolved oxygen, conductivity, fecal coliforms, and pH can be used to describe water quality during the dry season, while dissolved oxygen, conductivity, total coliforms, E. coli, and Enterococcus during the wet season. Thus, the use of multivariate techniques allowed optimizing monitoring tasks and minimizing the costs involved. PMID:25190636
Gutiérrez-Cacciabue, Dolores; Teich, Ingrid; Poma, Hugo Ramiro; Cruz, Mercedes Cecilia; Balzarini, Mónica; Rajal, Verónica Beatriz
2014-12-01
Several recreational surface waters in Salta, Argentina, were selected to assess their quality. Seventy percent of the measurements exceeded at least one of the limits established by international legislation, making the waters unsuitable for use. To interpret results of complex data, multivariate techniques were applied. Arenales River, due to the variability observed in the data, was divided in two: upstream and downstream, representing low and high pollution sites, respectively, and cluster analysis supported that differentiation. Arenales River downstream and Campo Alegre Reservoir were the most different environments, and Vaqueros and La Caldera rivers were the most similar. Canonical correlation analysis allowed exploration of correlations between physicochemical and microbiological variables, except in both parts of Arenales River, and principal component analysis allowed finding relationships among the nine measured variables in all aquatic environments. Variable loadings showed that Arenales River downstream was impacted by industrial and domestic activities, Arenales River upstream was affected by agricultural activities, Campo Alegre Reservoir was disturbed by anthropogenic and ecological effects, and La Caldera and Vaqueros rivers were influenced by recreational activities. Discriminant analysis allowed identification of the subgroup of variables responsible for seasonal and spatial variations. Enterococcus, dissolved oxygen, conductivity, E. coli, pH, and fecal coliforms are sufficient to spatially describe the quality of the aquatic environments. Regarding seasonal variations, dissolved oxygen, conductivity, fecal coliforms, and pH can be used to describe water quality during the dry season, while dissolved oxygen, conductivity, total coliforms, E. coli, and Enterococcus during the wet season. Thus, the use of multivariate techniques allowed optimizing monitoring tasks and minimizing the costs involved. PMID:25190636
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Liu, Jin-Nan; Chen, Yen-Lin; Chang, Wen-Yen; Hsieh, Tung-Ju; Huang, Bormin
2014-01-01
In recent years, satellite imaging technologies have resulted in an increased number of bands acquired by hyperspectral sensors, greatly advancing the field of remote sensing. Accordingly, owing to the increasing number of bands, band selection in hyperspectral imagery for dimension reduction is important. This paper presents a framework for band selection in hyperspectral imagery that uses two techniques, referred to as particle swarm optimization (PSO) band selection and the impurity function band prioritization (IFBP) method. With the PSO band selection algorithm, highly correlated bands of hyperspectral imagery can first be grouped into modules to coarsely reduce high-dimensional datasets. Then, these highly correlated band modules are analyzed with the IFBP method to finely select the most important feature bands from the hyperspectral imagery dataset. However, PSO band selection is a time-consuming procedure when the number of hyperspectral bands is very large. Hence, this paper proposes a parallel computing version of PSO, namely parallel PSO (PPSO), using a modern graphics processing unit (GPU) architecture with NVIDIA's compute unified device architecture technology to improve the computational speed of PSO processes. The natural parallelism of the proposed PPSO lies in the fact that each particle can be regarded as an independent agent. Parallel computation benefits the algorithm by providing each agent with a parallel processor. The intrinsic parallel characteristics embedded in PPSO are, therefore, suitable for parallel computation. The effectiveness of the proposed PPSO is evaluated through the use of airborne visible/infrared imaging spectrometer hyperspectral images. The performance of PPSO is validated using the supervised K-nearest neighbor classifier. The experimental results demonstrate that the proposed PPSO/IFBP band selection method can not only improve computational speed, but also offer a satisfactory classification performance.
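The PPSO in the paper is a discrete, GPU-parallel variant for band grouping; for reference, the underlying particle swarm update it parallelizes looks like the textbook continuous scheme below. The sphere objective and all coefficients are stand-ins, not the paper's settings.

```python
import random

def pso(f, bounds, n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5, seed=7):
    """Plain continuous PSO minimizing f over a box. Each particle tracks
    its personal best; all particles are attracted to the global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):       # each particle is an independent
            for d in range(dim):           # agent -- the source of parallelism
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# sphere function as a stand-in objective
best, val = pso(lambda x: sum(xi * xi for xi in x), [(-5.0, 5.0)] * 3)
```

The inner per-particle loop has no cross-particle dependencies within an iteration apart from reading `gbest`, which is exactly the structure the paper maps onto one CUDA thread per particle.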
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. With this method, metamodels are constructed repeatedly through the addition of sampling points, namely, the extrema of the current metamodel and the minima of a density function; progressively more accurate metamodels are thereby obtained. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
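The refit-and-add-points loop can be sketched in one dimension with a multiquadric RBF metamodel. The infill rule below (evaluate the expensive function at the current metamodel's grid minimum) is a simplified stand-in for the paper's extrema/density-function criterion, and the test function is hypothetical.

```python
import math

def rbf_fit(xs, ys, c=1.0):
    """Fit a multiquadric RBF interpolant through 1-D samples (xs, ys)."""
    xs = list(xs)
    n = len(xs)
    phi = lambda r: math.sqrt(r * r + c * c)
    A = [[phi(abs(xs[i] - xs[j])) for j in range(n)] for i in range(n)]
    # solve A w = ys by Gaussian elimination with partial pivoting
    M = [row[:] + [y] for row, y in zip(A, ys)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            fct = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= fct * M[col][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

def sequential_sample(f, xs, n_new, grid):
    """Sequential sampling sketch: repeatedly evaluate f at the current
    metamodel's grid minimum and refit (a simplification of the paper's
    extrema-plus-density criterion)."""
    xs, ys = list(xs), [f(x) for x in xs]
    for _ in range(n_new):
        model = rbf_fit(xs, ys)
        x_new = min((x for x in grid if x not in xs), key=model)
        xs.append(x_new)
        ys.append(f(x_new))
    return xs, rbf_fit(xs, ys)

f = lambda x: (x - 2.0) ** 2          # hypothetical expensive simulation
grid = [i * 0.25 for i in range(41)]  # candidate sites on [0, 10]
xs, model = sequential_sample(f, [0.0, 5.0, 10.0], n_new=4, grid=grid)
```

Each refit interpolates all evaluated points exactly, so accuracy improves where the infill rule concentrates samples.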
Fast and Statistically Optimal Period Search in Uneven Sampled Observations
NASA Astrophysics Data System (ADS)
Schwarzenberg-Czerny, A.
1996-04-01
The classical methods for searching for a periodicity in uneven sampled observations suffer from a poor match of the model and true signals and/or use of a statistic with poor properties. We present a new method employing periodic orthogonal polynomials to fit the observations and the analysis of variance (ANOVA) statistic to evaluate the quality of the fit. The orthogonal polynomials constitute a flexible and numerically efficient model of the observations. Among all popular statistics, ANOVA has optimum detection properties as the uniformly most powerful test. Our recurrence algorithm for expansion of the observations into the orthogonal polynomials is fast and numerically stable. The expansion is equivalent to an expansion into Fourier series. Aside from its use of an inefficient statistic, the Lomb-Scargle power spectrum can be considered a special case of our method. Tests of our new method on simulated and real light curves of nonsinusoidal pulsators demonstrate its excellent performance. In particular, dramatic improvements are gained in detection sensitivity and in the damping of alias periods.
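The ANOVA statistic for period search can be illustrated with the classical phase-binned AoV variant; the paper's method replaces the bins with periodic orthogonal polynomials, so the sketch below (with a fabricated unevenly sampled sinusoid) shows only the underlying idea.

```python
import math

def aov_statistic(times, values, period, n_bins=8):
    """Phase-fold at a trial period and compute the ANOVA F statistic
    across phase bins: between-bin variance over within-bin variance."""
    bins = [[] for _ in range(n_bins)]
    for t, v in zip(times, values):
        phase = (t / period) % 1.0
        bins[min(n_bins - 1, int(phase * n_bins))].append(v)
    bins = [b for b in bins if b]
    n = len(values)
    mean = sum(values) / n
    means = [sum(b) / len(b) for b in bins]
    ss_between = sum(len(b) * (m - mean) ** 2 for b, m in zip(bins, means))
    ss_within = sum(sum((v - m) ** 2 for v in b) for b, m in zip(bins, means))
    df1, df2 = len(bins) - 1, n - len(bins)
    return (ss_between / df1) / (ss_within / df2 + 1e-30)

# unevenly sampled noiseless sinusoid with true period 2.5 (toy data)
times = [0.13 * i ** 1.1 for i in range(120)]
vals = [math.sin(2 * math.pi * t / 2.5) for t in times]
trial = [1.0 + 0.01 * k for k in range(300)]  # trial periods 1.00 .. 3.99
best = max(trial, key=lambda p: aov_statistic(times, vals, p))
```

At the true period the folded curve is coherent, so the within-bin variance collapses and the F statistic peaks; at wrong periods the bins mix phases and F stays near one, which is the detection principle the orthogonal-polynomial version sharpens.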
Optimal Sampling of a Reaction Coordinate in Molecular Dynamics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2005-01-01
Estimating how free energy changes with the state of a system is a central goal in applications of statistical mechanics to problems of chemical or biological interest. From these free energy changes it is possible, for example, to establish which states of the system are stable, what their probabilities are and how the equilibria between these states are influenced by external conditions. Free energies are also of great utility in determining kinetics of transitions between different states. A variety of methods have been developed to compute free energies of condensed phase systems. Here, I will focus on one class of methods: those that allow for calculating free energy changes along one or several generalized coordinates in the system, often called reaction coordinates or order parameters. Considering that in almost all cases of practical interest a significant computational effort is required to determine free energy changes along such coordinates, it is hardly surprising that the efficiencies of different methods are of great concern. In most cases, the main difficulty is associated with the shape of the free energy profile along the reaction coordinate. If the free energy changes markedly along this coordinate, Boltzmann sampling of its different values becomes highly non-uniform. This, in turn, may have a considerable, detrimental effect on the performance of many methods for calculating free energies.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention. PMID:25019136
Teoh, Wei Lin; Khoo, Michael B C; Teh, Sin Yin
2013-01-01
Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shift, ranging from highly skewed when the process is in control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and fair representation of the central tendency, especially for the right-skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on average while maintaining the same detection speed as the two former charts. An example involving potassium sorbate added in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed. PMID:23935873
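The skewness argument can be made concrete for the simplest special case, a chart whose run length is geometric with per-sample signal probability p. This is a generic Shewhart-style illustration, not the DS chart computation of the paper:

```python
import math

def arl(p):
    """Average run length of a geometric run-length distribution."""
    return 1.0 / p

def mrl(p):
    """Median run length: smallest n with P(RL <= n) >= 0.5."""
    return math.ceil(math.log(0.5) / math.log(1.0 - p))

# In-control false-alarm probability of a 3-sigma Shewhart chart.
p0 = 0.0027
a, m = arl(p0), mrl(p0)
# Right-skew pulls the median well below the mean: ARL ~ 370, MRL = 257.
```

The gap between the two summaries (roughly 370 versus 257 samples) is exactly why the authors argue the MRL is the fairer measure for the in-control, highly skewed case.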
Bai, Fang; Liao, Sha; Gu, Junfeng; Jiang, Hualiang; Wang, Xicheng; Li, Honglin
2015-04-27
Metalloproteins, particularly zinc metalloproteins, are promising therapeutic targets, and recent efforts have focused on the identification of potent and selective inhibitors of these proteins. However, the ability of current drug discovery and design technologies, such as molecular docking and molecular dynamics simulations, to probe metal-ligand interactions remains limited because of their complicated coordination geometries and rough treatment in current force fields. Herein we introduce a robust, multiobjective optimization algorithm-driven metalloprotein-specific docking program named MpSDock, which runs on a scheme similar to consensus scoring consisting of a force-field-based scoring function and a knowledge-based scoring function. For this purpose, in this study, an effective knowledge-based zinc metalloprotein-specific scoring function based on the inverse Boltzmann law was designed and optimized using a dynamic sampling and iteration optimization strategy. This optimization strategy can dynamically sample and regenerate decoy poses used in each iteration step of refining the scoring function, thus dramatically improving both the effectiveness of the exploration of the binding conformational space and the sensitivity of the ranking of the native binding poses. To validate the zinc metalloprotein-specific scoring function and its special built-in docking program, denoted MpSDockZn, an extensive comparison was performed against six universal, popular docking programs: Glide XP mode, Glide SP mode, Gold, AutoDock, AutoDock4Zn, and EADock DSS. The zinc metalloprotein-specific knowledge-based scoring function exhibited prominent performance in accurately describing the geometries and interactions of the coordination bonds between the zinc ions and chelating agents of the ligands. In addition, MpSDockZn had a competitive ability to sample and identify native binding poses with a higher success rate than the other six docking programs. PMID:25746437
Piao, Xinglin; Hu, Yongli; Sun, Yanfeng; Yin, Baocai; Gao, Junbin
2014-01-01
The emerging low-rank matrix approximation (LRMA) method provides an energy-efficient scheme for data collection in wireless sensor networks (WSNs) by randomly sampling a subset of sensor nodes for data sensing. However, existing LRMA-based methods generally underutilize the spatial or temporal correlation of the sensing data, resulting in uneven energy consumption and thus shortening the network lifetime. In this paper, we propose a correlated spatio-temporal data collection method for WSNs based on LRMA. In the proposed method, both the temporal consistence and the spatial correlation of the sensing data are simultaneously integrated under a new LRMA model. Moreover, the network energy consumption issue is considered in the node sampling procedure. We use the Gini index to measure both the spatial distribution of the selected nodes and the evenness of the network energy status, then formulate and resolve an optimization problem to achieve optimized node sampling. The proposed method is evaluated on both simulated and real wireless networks and compared with state-of-the-art methods. The experimental results show that the proposed method efficiently reduces the energy consumption of the network and prolongs the network lifetime with high data recovery accuracy and good stability. PMID:25490583
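The Gini index used above to score evenness can be computed directly from pairwise absolute differences; a minimal pure-Python sketch with hypothetical residual-energy values (the paper's combined spatial/energy formulation is not reproduced here):

```python
def gini(values):
    """Gini index via the mean-absolute-difference form: 0 for
    perfectly even values, approaching 1 for highly concentrated ones."""
    n = len(values)
    mean = sum(values) / n
    if mean == 0:
        return 0.0
    diff_sum = sum(abs(x - y) for x in values for y in values)
    return diff_sum / (2.0 * n * n * mean)

# Hypothetical residual energy (%) of four sensor nodes.
even_energy = [50, 50, 50, 50]      # evenly drained network
uneven_energy = [95, 90, 10, 5]     # a few nodes nearly exhausted
```

A sampling plan that keeps the Gini index of residual energy low spreads the sensing load, which is the evenness objective the optimization problem targets.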
NASA Astrophysics Data System (ADS)
Han, Mancheon; Lee, Choong-Ki; Choi, Hyoung Joon
Hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB) is a popular approach in real-material research because it can deal with non-density-density-type interactions. In the conventional CT-HYB, one measures the Green's function and finds the self-energy from the Dyson equation. Because this approach requires inverting statistical data, the obtained self-energy is very sensitive to statistical noise. For that reason, the measurement is not reliable except at low frequencies. Such errors can be suppressed by measuring a special type of higher-order correlation function, which has been implemented for density-density-type interactions. With the help of the recently reported worm-sampling measurement, we developed an improved self-energy measurement scheme that can be applied to any type of interaction. As an illustration, we calculated the self-energy for the 3-orbital Hubbard-Kanamori-type Hamiltonian with our newly developed method. This work was supported by NRF of Korea (Grant No. 2011-0018306) and KISTI supercomputing center (Project No. KSC-2015-C3-039)
Lonsinger, Robert C; Gese, Eric M; Dempsey, Steven J; Kluever, Bryan M; Johnson, Timothy R; Waits, Lisette P
2015-07-01
Noninvasive genetic sampling, or noninvasive DNA sampling (NDS), can be an effective monitoring approach for elusive, wide-ranging species at low densities. However, few studies have attempted to maximize sampling efficiency. We present a model for combining sample accumulation and DNA degradation to identify the most efficient (i.e. minimal cost per successful sample) NDS temporal design for capture-recapture analyses. We use scat accumulation and faecal DNA degradation rates for two sympatric carnivores, kit fox (Vulpes macrotis) and coyote (Canis latrans), across two seasons (summer and winter) in Utah, USA, to demonstrate implementation of this approach. We estimated scat accumulation rates by clearing and surveying transects for scats. We evaluated mitochondrial (mtDNA) and nuclear (nDNA) DNA amplification success for faecal DNA samples under natural field conditions for 20 fresh scats per species per season aged from <1 to 112 days. Mean accumulation rates were nearly three times greater for coyotes (0.076 scats/km/day) than foxes (0.029 scats/km/day) across seasons. Across species and seasons, mtDNA amplification success was ≥95% through day 21. Fox nDNA amplification success was ≥70% through day 21 across seasons. Coyote nDNA success was ≥70% through day 21 in winter, but declined to <50% by day 7 in summer. We identified a common temporal sampling frame of approximately 14 days that allowed both species to be monitored simultaneously, further reducing time, survey effort and costs. Our results suggest that when conducting repeated surveys for capture-recapture analyses, overall cost-efficiency for NDS may be improved with a temporal design that balances field and laboratory costs along with deposition and degradation rates. PMID:25454561
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration-time profile, a population plasma pharmacokinetic model, and the limit of quantification (LOQ) of the BAL method, and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid are ≥ the LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs. PMID:26316105
Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S
2014-06-01
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm then continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the plan quality.
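Of the three integrated techniques, pattern search is the simplest to sketch in isolation: poll a fixed stencil around the incumbent solution, keep any improvement, and shrink the step when no poll point improves. A generic compass search on a toy objective, not the SPORT treatment-planning objective:

```python
def compass_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10000):
    """Derivative-free compass search: try +/- step along each
    coordinate; accept any improving trial, otherwise halve the step."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

# Toy quadratic with minimum at (3, -1), standing in for a plan-quality
# objective over two station parameters.
best, val = compass_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                           [0.0, 0.0])
```

Because the poll stencil needs no gradients, a step like this can escape regions where the gradient method stalls, which is the role pattern search plays in the hybrid algorithm above.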
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
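The two initial designs differ only in where each point sits inside its Latin hypercube interval; a minimal 2-D construction of both variants (the subsequent space-filling optimization step is omitted):

```python
import random

def lhs(n, dims, midpoint=True, rng=None):
    """Latin hypercube sample in [0,1)^dims: each of the n intervals
    per axis is used exactly once; 'midpoint' places the point at the
    interval centre, otherwise uniformly at random within it."""
    rng = rng or random.Random(0)
    columns = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)          # which interval each point occupies
        offs = [0.5 if midpoint else rng.random() for _ in range(n)]
        columns.append([(perm[i] + offs[i]) / n for i in range(n)])
    return list(zip(*columns))

pts = lhs(5, 2, midpoint=True)
# Latin property: every interval index 0..4 appears once on each axis.
```

Feeding the midpoint variant into a space-filling optimizer removes the within-interval jitter as a source of variability, which is the robustness effect the study reports.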
Optimal sampling efficiency in Monte Carlo sampling with an approximate potential
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Building on the work of Iftimie et al., Boltzmann sampling of an approximate potential (the 'reference' system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is evaluated at a higher level of approximation (the 'full' system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory (DFT) potentials are discussed.
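The composite-move test at the endpoints of the reference chain can be sketched as a Metropolis criterion on the difference between the full and reference energy changes, since the reference-system Boltzmann factors cancel along the intervening steps. Shown here in the canonical (NVT) ensemble for simplicity, whereas the paper works in the isothermal-isobaric ensemble:

```python
import math
import random

def accept_composite(e_full_old, e_full_new, e_ref_old, e_ref_new,
                     beta, rng=random.Random(0)):
    """Metropolis test for a composite move generated by sampling the
    reference potential: only the endpoint energy differences of the
    full and reference systems enter the acceptance probability."""
    d_full = e_full_new - e_full_old
    d_ref = e_ref_new - e_ref_old
    p = min(1.0, math.exp(-beta * (d_full - d_ref)))
    return rng.random() < p, p

# If the reference potential tracks the full one perfectly, the energy
# differences cancel and every composite move is accepted.
ok, p = accept_composite(10.0, 12.5, 4.0, 6.5, beta=1.0)
```

The closer the reference distribution is to the full one, the nearer p stays to 1, which is why the paper tunes the reference pressure and temperature to maximize the average acceptance probability.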
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
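Conclusion (3) refers to optimal (Neyman) allocation, in which each stratum's share of the total sample is proportional to its size times its standard deviation. A sketch with hypothetical depth strata whose variability decreases with depth, consistent with conclusion (1):

```python
def neyman_allocation(n_total, sizes, sds):
    """Neyman allocation: n_h proportional to N_h * S_h, with
    largest-remainder rounding so the shares sum to n_total."""
    weights = [N * s for N, s in zip(sizes, sds)]
    total = sum(weights)
    raw = [n_total * w / total for w in weights]
    alloc = [int(r) for r in raw]
    # Hand out the remaining samples to the largest fractional parts.
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i],
                   reverse=True)
    for i in order[: n_total - sum(alloc)]:
        alloc[i] += 1
    return alloc

# Three hypothetical depth strata of equal size; soil-moisture SD
# (illustrative values) shrinks with depth.
alloc = neyman_allocation(30, sizes=[100, 100, 100], sds=[6.0, 3.0, 1.0])
```

The shallow, most variable stratum absorbs most of the budget, which is exactly why stratified optimal allocation outperforms simple random sampling when the full profile is of interest.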
NASA Astrophysics Data System (ADS)
Shiau, Jenq-Tzong; Wu, Fu-Chun
2007-06-01
The temporal variations of natural flows are essential for preserving the ecological health of a river; they are addressed in this paper through environmental flow schemes that incorporate the intra-annual and interannual variability of the natural flow regime. We present an optimization framework to find the Pareto-optimal solutions for various flow schemes. The proposed framework integrates (1) the range of variability approach for evaluating the hydrologic alterations; (2) the standardized precipitation index approach for establishing the variation criteria for the wet, normal, and dry years; (3) a weir operation model for simulating the system of flows; and (4) a multiobjective optimization genetic algorithm for search of the Pareto-optimal solutions. The proposed framework is applied to the Kaoping diversion weir in Taiwan. The results reveal that the time-varying schemes incorporating the intra-annual variability in the environmental flow prescriptions promote the fitness of both ecosystem and human needs. Incorporation of the interannual flow variability, using different criteria established for the three types of water year, further promotes both fitnesses. The merit of incorporating the interannual variability may be superimposed on that of incorporating only the intra-annual flow variability. The Pareto-optimal solutions searched with a limited range of flows replicate satisfactorily those obtained with a full search range. The limited-range Pareto front may be used as a surrogate of the full-range one if feasible prescriptions are to be found among the regular flows.
A normative inference approach for optimal sample sizes in decisions from experience.
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
"Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
Protocol for optimal quality and quantity pollen DNA isolation from honey samples.
Lalhmangaihi, Ralte; Ghatak, Souvik; Laha, Ramachandra; Gurusubramanian, Guruswami; Kumar, Nachimuthu Senthil
2014-12-01
The present study illustrates an optimized sample preparation method for efficient DNA isolation from small quantities of honey. A conventional PCR-based method was validated, which potentially enables characterization of plant species from as little as 3 ml of bee honey. In the present study, an anionic detergent was used to lyse the hard outer pollen shell, and DTT was used for isolation of thiolated DNA, as it might facilitate protein digestion and assist in releasing the DNA into solution, as well as reduce cross-links between DNA and other biomolecules. Both the quantity of honey sample and the time duration for DNA isolation were optimized during development of this method. With the use of this method, chloroplast DNA was successfully PCR amplified and sequenced from honey DNA samples. PMID:25365793
Optimal number of samples to test for institutional respiratory infection outbreaks in Ontario.
Peci, A; Marchand-Austin, A; Winter, A-L; Winter, A-J; Gubbay, J B
2013-08-01
The objective of this study was to determine the optimal number of respiratory samples per outbreak to be tested for institutional respiratory outbreaks in Ontario. We reviewed respiratory samples tested for respiratory viruses by multiplex PCR as part of outbreak investigations. We documented outbreaks that were positive for any respiratory virus and for influenza alone. At least one virus was detected in 1454 (85.2%) outbreaks. The ability to detect influenza or any respiratory virus increased as the number of samples tested increased. When analysed by chronological order of when samples were received at the laboratory, the percentage of outbreaks testing positive for any respiratory virus, including influenza, increased with the number of samples tested up to the ninth sample, with minimal benefit beyond the fourth sample tested. Testing up to four respiratory samples per outbreak was sufficient to detect viral organisms and resulted in significant savings for outbreak investigations. PMID:23146341
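The diminishing return with additional samples follows the familiar 1 - (1 - p)^n form for the probability that at least one of n independent samples tests positive; a sketch with a hypothetical per-sample positivity (the study's empirical rates are not reproduced here):

```python
def p_detect(p_single, n):
    """Probability that at least one of n independent samples tests
    positive, given per-sample positivity p_single."""
    return 1.0 - (1.0 - p_single) ** n

# With a hypothetical per-sample positivity of 0.5, four samples
# already detect about 94% of positive outbreaks; the marginal gain
# per extra sample beyond the fourth is under 4 percentage points.
gains = [p_detect(0.5, n) - p_detect(0.5, n - 1) for n in range(1, 7)]
```

Under this simple independence model the marginal gain halves with every additional sample, mirroring the study's observation of minimal benefit beyond the fourth sample.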
Vandermeulen, Eva; De Sadeleer, Carlos; Piepsz, Amy; Ham, Hamphrey R; Dobbeleir, André A; Vermeire, Simon T; Van Hoek, Ingrid M; Daminet, Sylvie; Slegers, Guido; Peremans, Kathelijne Y
2010-08-01
Estimation of the glomerular filtration rate (GFR) is a useful tool in the evaluation of kidney function in feline medicine. GFR can be determined by measuring the rate of tracer disappearance from the blood, and although these measurements are generally performed by multi-sampling techniques, simplified methods are more convenient in clinical practice. The optimal times for a simplified sampling strategy with two blood samples (2BS) for GFR measurement in cats using plasma (51)chromium ethylene diamine tetra-acetic acid ((51)Cr-EDTA) clearance were investigated. After intravenous administration of (51)Cr-EDTA, seven blood samples were obtained in 46 cats (19 euthyroid and 27 hyperthyroid cats, none with previously diagnosed chronic kidney disease (CKD)). The plasma clearance was then calculated from the seven point blood kinetics (7BS) and used for comparison to define the optimal sampling strategy by correlating different pairs of time points to the reference method. Mean GFR estimation for the reference method was 3.7+/-2.5 ml/min/kg (mean+/-standard deviation (SD)). Several pairs of sampling times were highly correlated with this reference method (r(2) > or = 0.980), with the best results when the first sample was taken 30 min after tracer injection and the second sample between 198 and 222 min after injection; or with the first sample at 36 min and the second at 234 or 240 min (r(2) for both combinations=0.984). Because of the similarity of GFR values obtained with the 2BS method in comparison to the values obtained with the 7BS reference method, the simplified method may offer an alternative for GFR estimation. Although a wide range of GFR values was found in the included group of cats, the applicability should be confirmed in cats suspected of renal disease and with confirmed CKD. Furthermore, although no indications of age-related effect were found in this study, a possible influence of age should be included in future studies. PMID:20452793
XAFSmass: a program for calculating the optimal mass of XAFS samples
NASA Astrophysics Data System (ADS)
Klementiev, K.; Chernikov, R.
2016-05-01
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements compared to the old Windows-based program XAFSmass: 1) it is truly platform-independent, as provided by the Python language, and 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample, and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
Hatjimihail, Aristides T.
2009-01-01
Background An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system, and the estimation of the risk caused by the measurement error. Methodology/Principal Findings Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure, and the state mean time matrices. The optimal sampling time intervals can then be defined as those minimizing a QC-related cost measure while keeping the rr acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. Conclusions/Significance It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC-related cost. For the optimization, the reliability analysis of the analytical system and the risk analysis of the measurement error are needed. PMID:19513124
NASA Astrophysics Data System (ADS)
Bisceglia, E.; Cubizolles, M.; Mallard, F.; Pineda, F.; Francais, O.; Le Pioufle, B.
2013-05-01
Sample preparation is a key issue of modern analytical methods for in vitro diagnostics of diseases with microbiological origins: methods to separate bacteria from other elements of the complex biological samples are of great importance. In the present study, we investigated the DEP force as a way to perform such a de-complexification of the sample by extracting micro-organisms from a complex biological sample under a highly non-uniform electric field in a micro-system based on an interdigitated electrodes array. Different parameters were investigated to optimize the capture efficiency, such as the size of the gap between the electrodes and the height of the capture channel. These parameters are decisive for the distribution of the electric field inside the separation chamber. To optimize these relevant parameters, we performed numerical simulations using COMSOL Multiphysics and correlated them with experimental results. The optimization of the capture efficiency of the device has first been tested on micro-organisms solution but was also investigated on human blood samples spiked with micro-organisms, thereby mimicking real biological samples.
Multi-resolution imaging with an optimized number and distribution of sampling points.
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo
2014-05-01
We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis. PMID:24921717
Shen, Xiong; Zong, Chao; Zhang, Guoqiang
2012-01-01
Finding the optimal sampling positions for measurement of ventilation rates in a naturally ventilated building using tracer gas is a challenge. Affected by the wind and the opening status, the representative positions inside the building may change dynamically at any time. An optimization procedure using the Response Surface Methodology (RSM) was conducted. In this method, the concentration field inside the building was estimated by a third-order RSM polynomial model. The experimental sampling positions used to develop the model were chosen from the cross-sectional area of a pitched-roof building. The Optimal Design method, which can decrease the bias of the model, was adopted to select these sampling positions. Experiments with a scale model building were conducted in a wind tunnel to obtain observed values at those positions. Finally, models for different cases of opening states and wind conditions were established, and the optimum sampling position was obtained with a desirability level of up to 92% inside the model building. The optimization was further confirmed by another round of experiments.
Ejnik, J W; Hamilton, M M; Adams, P R; Carmichael, A J
2000-12-15
Kinetic phosphorescence analysis (KPA) is a proven technique for rapid, precise, and accurate determination of uranium in aqueous solutions. Uranium analysis of biological samples requires dry-ashing in a muffle furnace between 400 and 600 degrees C, followed by wet-ashing with concentrated nitric acid and hydrogen peroxide to digest the organic component of the sample that interferes with uranium determination by KPA. The optimal dry-ashing temperature was determined to be 450 degrees C. At dry-ashing temperatures greater than 450 degrees C, uranium loss was attributed to vaporization; high temperatures also increased background values, which was attributed to uranium leaching from the glass vials. Dry-ashing temperatures less than 450 degrees C left the samples needing additional wet-ashing steps. The recovery of uranium in urine samples was 99.2+/-4.02% for spiked amounts of 1.98-1980 ng (0.198-198 microg l(-1)) uranium, whereas the recovery in whole blood was 89.9+/-7.33% over the same range. The limit of quantification at which uranium in urine and blood could be accurately measured above background was determined to be 0.05 and 0.6 microg l(-1), respectively. PMID:11130202
Optimization of low-background alpha spectrometers for analysis of thick samples.
Misiaszek, M; Pelczar, K; Wójcik, M; Zuzel, G; Laubenstein, M
2013-11-01
Results of alpha spectrometric measurements performed deep underground and above ground, with and without active veto, show that underground measurement of thick samples is the most sensitive method, owing to the significant reduction of the muon-induced background. In addition, polonium diffusion requires, for some samples, an appropriate selection of the energy region in the registered spectrum. On the basis of computer simulations, the best counting conditions are selected for a thick lead sample in order to optimize the detection limit. PMID:23628514
Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case
Schmerling, Edward; Janson, Lucas; Pavone, Marco
2015-01-01
Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041
Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples
NASA Astrophysics Data System (ADS)
Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav
2014-05-01
Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in water on the ultra-low-background liquid scintillation spectrometer Quantulus 1220, the sample/scintillant ratio was optimized, appropriate scintillation cocktails were chosen and compared in terms of efficiency, background and minimal detectable activity (MDA), and the effects of chemi- and photoluminescence and of the scintillant/vial combination were assessed. The ASTM D4107-08 (2006) method had been successfully applied in our laboratory for two years. During our last sample preparation, a serious quench effect in the sample count rates was noticed, possibly a consequence of contamination by DMSO. The goal of this paper is to present the development in our laboratory of the direct method proposed by Pujol and Sanchez-Cabeza (1999), which proved faster and simpler than the ASTM method, while we address the problem of DMSO neutralization in the apparatus. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. To test the optimization of the system for this method, the tritium level was determined in Danube river samples and in several samples within an intercomparison with the Ruđer Bošković Institute (IRB).
NASA Astrophysics Data System (ADS)
Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal
Searching for low-energy β contamination in industrial environments requires liquid scintillation counting. This indirect measurement method demands fine control of every step from sampling to the measurement itself. In this paper, we therefore focus on defining a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. This includes the choice of consumables, sampling methods, optimization of counting parameters, and definition of energy windows by maximizing a figure of merit. Detection limits are then calculated for these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters; nevertheless, apart from results tied to parameters specific to PerkinElmer, most of the results presented here can be extended to other counters.
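Energy-window optimization by figure-of-merit maximization can be sketched as a brute-force window scan. The spectra below are synthetic placeholders, and FOM = E²/B with E the counting efficiency and B the background rate, a common LSC convention:

```python
import numpy as np

# Illustrative source and background spectra over 100 energy channels.
ch = np.arange(100)
source = np.exp(-0.5 * ((ch - 40) / 12.0) ** 2)   # beta-like peak
background = 0.05 + 0.001 * ch                    # slowly rising background

# Scan candidate energy windows [lo, hi) and keep the one that
# maximizes the figure of merit FOM = E^2 / B.
best = (0.0, 0, 0)
for lo in range(0, 90, 5):
    for hi in range(lo + 5, 100, 5):
        E = source[lo:hi].sum() / source.sum() * 100.0  # efficiency, %
        B = background[lo:hi].sum()                     # background rate
        if E**2 / B > best[0]:
            best = (E**2 / B, lo, hi)
fom, lo, hi = best
```

The optimal window brackets the peak: widening it gains efficiency until the added background outweighs the gain.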
Sampling design optimization for multivariate soil mapping, case study from Hungary
NASA Astrophysics Data System (ADS)
Szatmári, Gábor; Pásztor, László; Barta, Károly
2014-05-01
Direct observations of the soil are important for two main reasons in Digital Soil Mapping (DSM). First, they are used to characterize the relationship between the soil property of interest and the auxiliary information. Second, they are used to improve the predictions based on the auxiliary information. Hence there is a strong need to elaborate a well-established soil sampling strategy, based on geostatistical tools, prior knowledge and available resources, before the samples are actually collected from the area of interest. Fieldwork and laboratory analyses are the most expensive and labor-intensive parts of DSM, and the collected samples and measured data have a remarkable influence on the spatial predictions and their uncertainty. Numerous sampling strategy optimization techniques have been developed in the past decades. One of them is Spatial Simulated Annealing (SSA), which has frequently been used in soil surveys to minimize the average universal kriging variance. The benefit of the technique is that the surveyor can optimize the sampling design for a fixed number of observations, taking auxiliary information, previously collected samples and inaccessible areas into account. The requirements are a known form of the regression model and the spatial structure of the residuals of the model. Another restriction is that the technique can optimize the sampling design for only one target soil variable, whereas in practice a soil survey usually aims to describe the spatial distribution of not just one but several pedological variables. In the present paper we describe a procedure, developed in R, to simultaneously optimize the sampling design by SSA for two soil variables, using the spatially averaged universal kriging variance as the optimization criterion. Soil Organic Matter (SOM) content and rooting depth were chosen for this purpose. The methodology is illustrated with a legacy data set from a study area in Central Hungary. Legacy soil
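The SSA loop itself is compact. A minimal sketch for a single variable with the MMSD criterion mentioned in the head of this listing (unit-square study area and all tuning constants are illustrative; the paper optimizes an averaged universal kriging variance instead):

```python
import numpy as np

rng = np.random.default_rng(42)

# Evaluation nodes covering the study area (unit square stands in for
# the real field with its boundaries and inaccessible zones).
grid = np.stack(np.meshgrid(np.linspace(0, 1, 25),
                            np.linspace(0, 1, 25)), axis=-1).reshape(-1, 2)

def mmsd(pts):
    """Mean of shortest distances from each node to its nearest sample."""
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Spatial simulated annealing: jitter one sample location at a time and
# accept worse designs with a probability that shrinks as T cools.
pts = rng.random((10, 2))
crit0 = crit = mmsd(pts)
best_pts, best_crit = pts, crit
T = 0.05
for _ in range(2000):
    cand = pts.copy()
    i = rng.integers(len(pts))
    cand[i] = np.clip(cand[i] + rng.normal(0.0, 0.1, size=2), 0, 1)
    c = mmsd(cand)
    if c < crit or rng.random() < np.exp((crit - c) / T):
        pts, crit = cand, c
        if c < best_crit:
            best_pts, best_crit = cand, c
    T *= 0.999
```

Swapping `mmsd` for a kriging-variance criterion (or a weighted sum over several variables) changes only the objective, not the annealing loop.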
Ivanov, A.; Sanchez, V.; Imke, U.; Ivanov, K.
2012-07-01
In order to increase the accuracy and the degree of spatial resolution of core design studies, coupled three-dimensional (3D) neutronics (deterministic and Monte Carlo) and 3D thermal-hydraulics (CFD and sub-channel) codes are being developed worldwide. In this paper the optimization of a coupling between the MCNP5 code and the in-house thermal-hydraulics code SUBCHANFLOW is presented, along with various improvements of the coupling methodology. With the help of a novel interpolation tool, a consistent methodology for the preparation of thermal scattering data libraries has been developed, ensuring that inelastic scattering from bound nuclei is treated at the correct moderator temperature. Through a hybrid coupling with the discrete-energy Monte Carlo code KENO, a methodology for accelerating the coupled calculation is demonstrated: an additional coupling between KENO and SUBCHANFLOW was developed, whose converged results are used as initial conditions for the MCNP-SUBCHANFLOW coupling. Acceleration of fission-source convergence by sampling the fission source from the power distribution obtained by KENO is also demonstrated. (authors)
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
A method to optimize sampling locations for measuring indoor air distributions
NASA Astrophysics Data System (ADS)
Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan
2015-02-01
Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected and interpolated to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, labor costs, and accuracy of the field interpolation. This investigation compared two methods of determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air-parameter data in an office room and in a section of an economy-class aircraft cabin, and the point data were then interpolated to form field distributions by the ordinary kriging method. Our error analysis shows that the gradient-based sampling method has a 32.6% smaller interpolation error than the grid sampling method. We also derived the relationship between interpolation error and sampling size (the number of sampling points): the sampling size has an optimal value, and the maximum useful sampling size is determined by the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.
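A minimal sketch of the gradient-based idea: rank candidate sensor locations by the local gradient magnitude of a coarse pre-scan, so sampling concentrates where the field changes fastest. The temperature field below is a synthetic plume, not measured data:

```python
import numpy as np

# Coarse pre-scan of an indoor temperature field on a 20x20 grid
# (synthetic Gaussian plume; real data would come from a sparse survey).
y, x = np.mgrid[0:1:20j, 0:1:20j]
temp = 22.0 + 4.0 * np.exp(-((x - 0.3) ** 2 + (y - 0.6) ** 2) / 0.02)

# Gradient-based method: compute the local gradient magnitude and pick
# the locations where the field varies most strongly.
gy, gx = np.gradient(temp)
gmag = np.hypot(gx, gy)
n_samples = 30
idx = np.argsort(gmag.ravel())[::-1][:n_samples]
sample_points = np.column_stack((x.ravel()[idx], y.ravel()[idx]))
```

The selected points cluster on the steep ring around the plume, exactly the region where a uniform grid under-samples.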
Optimizing Spatio-Temporal Sampling Designs of Synchronous, Static, or Clustered Measurements
NASA Astrophysics Data System (ADS)
Helle, Kristina; Pebesma, Edzer
2010-05-01
When sampling spatio-temporal random variables, the cost of a measurement may differ according to the setup of the whole sampling design: static measurements, i.e. repeated measurements at the same location, synchronous measurements, or clustered measurements may be cheaper per measurement than completely individual sampling. Such "grouped" measurements may, however, not be as good as individually chosen ones because of redundancy. Often the overall cost, rather than the total number of measurements, is fixed. A sampling design with grouped measurements may then allow a larger number of measurements, outweighing the drawback of redundancy. The focus of this paper is to include the tradeoff between the number of measurements and the freedom of their location in sampling design optimisation. For simple cases, optimal sampling designs may be fully determined: to predict e.g. the mean over a spatio-temporal field with known covariance, the optimal sampling design is often a grid with density determined by the sampling costs [1, Ch. 15]. For arbitrary objective functions, sampling designs can be optimised by relocating single measurements, e.g. by Spatial Simulated Annealing [2]. However, this does not allow one to take advantage of the lower costs of grouped measurements. We introduce a heuristic that optimises an arbitrary objective function of sampling designs, including static, synchronous, or clustered measurements, to obtain better results at a given sampling budget. Given the cost of a measurement, either within a group or individually, the algorithm first computes affordable sampling design configurations: the number of individual measurements as well as the kind and number of grouped measurements are determined, and random locations and dates are assigned to the measurements. Spatial Simulated Annealing is then used on each of these initial sampling designs (in parallel) to improve them. In grouped measurements either the whole group is moved or single measurements within the
Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States
NASA Astrophysics Data System (ADS)
Sousan, Sinan Dhia Jameel
This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 × 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
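The OI analysis step combines a model background with observations through the standard update equation xa = xb + K(y − Hxb), with gain K = BHᵀ(HBHᵀ + R)⁻¹. A minimal sketch on a hypothetical 1-D grid with two surface monitors (all numbers illustrative):

```python
import numpy as np

# Toy 1-D "model grid" of PM2.5 concentrations with two surface monitors.
n = 10
xb = np.full(n, 12.0)                           # background field, ug/m3
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 4.0 * np.exp(-dist / 3.0)                   # background error covariance
H = np.zeros((2, n)); H[0, 2] = H[1, 7] = 1.0   # observation operator
R = np.eye(2) * 1.0                             # observation error covariance
y = np.array([15.0, 9.0])                       # monitor measurements

# OI analysis update: xa = xb + K (y - H xb)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
```

The analysis is pulled toward each observation near its monitor, with the correction decaying over the correlation length encoded in B; the balance between model-error and observation-error variances is exactly the tuning evaluated in the thesis.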
ERIC Educational Resources Information Center
Foster, Geraldine R. K.; Tickle, Martin
2013-01-01
Background and objective: Some districts in the United Kingdom (UK), where the level of child dental caries is high and water fluoridation has not been possible, implement school-based fluoridated milk (FM) schemes. However, process variables, such as consent to drink FM and loss of children as they mature, impede the effectiveness of these…
Optimizing Diagnostic Yield for EUS-Guided Sampling of Solid Pancreatic Lesions: A Technical Review
Weston, Brian R.
2013-01-01
Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) has a higher diagnostic accuracy for pancreatic cancer than other techniques. This article will review the current advances and considerations for optimizing diagnostic yield for EUS-guided sampling of solid pancreatic lesions. Preprocedural considerations include patient history, confirmation of appropriate indication, review of imaging, method of sedation, experience required by the endoscopist, and access to rapid on-site cytologic evaluation. New EUS imaging techniques that may assist with differential diagnoses include contrast-enhanced harmonic EUS, EUS elastography, and EUS spectrum analysis. FNA techniques vary, and multiple FNA needles are now commercially available; however, neither techniques nor available FNA needles have been definitively compared. The need for suction depends on the lesion, and the need for a stylet is equivocal. No definitive endosonographic finding can predict the optimal number of passes for diagnostic yield. Preparation of good smears and communication with the cytopathologist are essential to optimize yield. PMID:23935542
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
BILISOLY, ROGER L.; MCKENNA, SEAN A.
2003-01-01
Previous work on sample design has focused on constructing designs for samples taken at point locations; significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented: the first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no
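The two eigenvalue criteria can be sketched for point designs on a 1-D reference grid (the exponential covariance model and all ranges are illustrative; a transect would enter simply as a sequence of such points):

```python
import numpy as np

# Reference grid on [0,1] and an exponential covariance model.
grid = np.linspace(0, 1, 21)

def cov(a, b):
    return np.exp(-np.abs(np.subtract.outer(a, b)) / 0.2)

def prediction_error_cov(design):
    """Covariance of the prediction error on the grid given sample sites."""
    Soo = cov(design, design) + 1e-9 * np.eye(len(design))
    Sgo = cov(grid, design)
    return cov(grid, grid) - Sgo @ np.linalg.solve(Soo, Sgo.T)

def criteria(design):
    w = np.clip(np.linalg.eigvalsh(prediction_error_cov(design)), 1e-12, None)
    return w.sum(), np.log(w).sum()   # sum (A-type) and log-product (D-type)

# A spread-out set of sample sites beats a clustered one on the
# sum-of-eigenvalues criterion (total prediction variance).
spread = np.linspace(0.05, 0.95, 5)
clustered = np.linspace(0.45, 0.55, 5)
a_spread, d_spread = criteria(spread)
a_clust, d_clust = criteria(clustered)
```

The sum of eigenvalues equals the trace of the prediction-error covariance, i.e. the MPEV up to a constant factor, which is why its reduction can be computed before any new samples are taken.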
NASA Astrophysics Data System (ADS)
Tavakoli, Rouhollah
2016-01-01
An unconditionally energy-stable time-stepping scheme is introduced to solve Cahn-Morral-like equations in the present study. It is constructed by combining David Eyre's time-stepping scheme with a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free-energy density term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results.
An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.
Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A
2016-01-01
Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed a strong linearity in a range of concentrations from 10(6) to 10(0) leptospires/mL and lower limits of detection ranging from <1 cell/mL for river water to 36 cells/mL for ultrapure water with E. coli as a carrier. In conclusion, we optimized a method to quantify pathogenic Leptospira in environmental waters (river, pond and sewage) which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden. PMID:27487084
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-01
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space in each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure: first, the variable space shrinks in each step; second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The MATLAB codes for implementing VISSA are freely available at https://sourceforge.net/projects/multivariateanalysis/files/VISSA/. PMID:25083512
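The WBMS idea can be sketched as follows. The "performance" score here is a synthetic surrogate for the cross-validated model error used in the paper, the set of informative variables is hypothetical, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Weighted binary matrix sampling (WBMS) sketch: each of 500 sub-models
# includes variable j with probability w[j]; weights then shift toward
# variables appearing in the better-performing half of the sub-models.
p, n_models = 20, 500
w = np.full(p, 0.5)               # start with an unbiased variable space
informative = np.arange(5)        # hypothetical "true" variables

for _ in range(5):
    M = rng.random((n_models, p)) < w   # binary inclusion matrix
    # Surrogate performance: reward informative variables, penalize size
    # (stands in for a cross-validated RMSEP in the real algorithm).
    score = M[:, informative].sum(1) - 0.1 * M.sum(1)
    top = M[np.argsort(score)[::-1][: n_models // 2]]
    w = top.mean(axis=0)                # the variable space shrinks

selected = np.where(w > 0.5)[0]
```

After a few iterations the inclusion weights of informative variables drift toward 1 and the rest toward 0, which is the "shrinking variable space" the abstract describes.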
Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2014-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
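The sampling-probability computation can be illustrated with a Monte Carlo sketch under simplifying assumptions (spherical tumor, needle aimed at its centre, isotropic Gaussian delivery error; the study used real segmented 3D tumor surfaces and registration errors):

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_probability(tumor_radius_mm, rms_error_mm, n=200_000):
    """Monte Carlo P(tumor hit) for one centre-aimed biopsy core."""
    sigma = rms_error_mm / np.sqrt(3.0)   # per-axis sigma from 3D RMS error
    err = rng.normal(0.0, sigma, size=(n, 3))
    return np.mean(np.linalg.norm(err, axis=1) <= tumor_radius_mm)

# Illustrative tumor radii with the 3.5 mm RMS error cited in the abstract.
p_small = hit_probability(tumor_radius_mm=4.0, rms_error_mm=3.5)
p_large = hit_probability(tumor_radius_mm=8.0, rms_error_mm=3.5)
```

Small tumors fall well short of P ≥ 95% with a single core under this error level, consistent with the abstract's finding that most tumors need more than one core.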
Sampling optimization, at site scale, in contamination monitoring with moss, pine and oak.
Aboal, J R; Fernández, J A; Carballeira, A
2001-01-01
With the aim of optimizing protocols for sampling moss, pine and oak for biomonitoring of atmospheric contamination, and also for inclusion in an Environmental Specimen Bank, 50 sampling units of each species were collected from the study area for individual analysis. Levels of Ca, Cu, Fe, Hg, Ni, and Zn in the plants were determined and the distributions of the concentrations studied. In moss samples, the concentrations of Cu, Ni and Zn, considered to be trace pollutants in this species, showed highly variable log-normal distributions; in pine and oak samples only Ni concentrations were log-normally distributed. In addition to analytical error, the two main sources of error associated with making a collective sample were found to be: (1) not carrying out measurements on individual sampling units; and (2) the number of sampling units collected and the corresponding sources of variation (microspatial, age and interindividual). We recommend that a minimum of 30 sampling units be collected when contamination is suspected. PMID:11706804
Tiwari, P; Xie, Y; Chen, Y; Deasy, J
2014-06-01
Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually degrades the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly degrading the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. The fraction of interior voxels was selected using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures; a few voxels are selected from each cluster based on the preset sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be constructed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
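The clustering-based voxel sampling can be sketched as follows. The influence matrix is synthetic (three latent signature groups), and the lightweight k-means is a minimal stand-in for whatever clustering the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy influence matrix: rows are organ voxels, columns are beamlets.
# Three latent "signature" groups mimic voxels with similar dose response.
centers = rng.random((3, 8))
voxels = np.repeat(centers, 60, axis=0) + 0.05 * rng.standard_normal((180, 8))

# Lightweight k-means on the influence signatures, then keep a fixed
# fraction of voxels from every cluster (10%, the rate the abstract
# found adequate; boundary voxels would be kept unconditionally).
k, rate = 3, 0.10
C = voxels[rng.choice(len(voxels), size=k, replace=False)]
for _ in range(20):
    lab = np.argmin(((voxels[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
    C = np.array([voxels[lab == j].mean(axis=0) if np.any(lab == j) else C[j]
                  for j in range(k)])

keep = np.concatenate([
    rng.choice(np.where(lab == j)[0],
               size=max(1, int(rate * (lab == j).sum())), replace=False)
    for j in range(k) if np.any(lab == j)
])
```

Because each cluster contributes representatives, the sampled constraint set still "covers" every distinct dose-response behavior, which is why accuracy degrades so little.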
Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L.; Date, Hiroyuki
2015-11-15
Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionation. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen; the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for the tumor and for normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where tumor-cell repopulation and the linearity of the dose-response curve of the surviving fraction in the high-dose range were considered. The minimization problem for the damage effect on the OAR, estimated from the dose–volume histogram, was solved by a graphical method under the constraint that the radiation effect on the tumor is fixed. Results: It was found that optimization of the fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell-survival models. The graphical method, considering the repopulation of tumor cells and a rectilinear response in the high dose range, enables the derivation of the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., the dose–volume histogram) by the graphical method considering the effects on the tumor and OARs around the tumor. This method may provide a new guideline for optimizing the fractionation regimen in physics-guided fractionation.
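For intuition, a sketch with the standard (unmodified) LQ model shows the tradeoff the graphical method navigates: fix the tumor biologically effective dose (BED), then see which fraction number spares the OAR most. Without the paper's repopulation term the optimum degenerates to the largest n considered, which is exactly why the modified model matters. All parameter values below are illustrative:

```python
import numpy as np

# Standard LQ illustration (no repopulation, no high-dose linear term).
ab_tumor, ab_oar = 10.0, 3.0    # alpha/beta ratios, Gy
target_bed_tumor = 80.0         # prescribed tumor BED, Gy
oar_dose_fraction = 0.6         # OAR receives 60% of the tumor dose

def dose_per_fraction(n):
    """Solve n*d*(1 + d/ab_tumor) = target_bed_tumor for d (positive root)."""
    a, b, c = n / ab_tumor, n, -target_bed_tumor
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

ns = np.arange(1, 41)
oar_bed = []
for n in ns:
    d_oar = oar_dose_fraction * dose_per_fraction(n)
    oar_bed.append(n * d_oar * (1 + d_oar / ab_oar))  # OAR BED at fixed tumor effect
best_n = int(ns[np.argmin(oar_bed)])
```

Here OAR sparing improves monotonically with more, smaller fractions; adding a repopulation penalty that grows with treatment time is what produces an interior optimum like the 8–32 fraction range reported for prostate cancer.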
Straver, Roy; Sistermans, Erik A.; Holstege, Henne; Visser, Allerdien; Oudejans, Cees B. M.; Reinders, Marcel J. T.
2014-01-01
Genetic disorders can be detected by prenatal diagnosis using chorionic villus sampling, but the roughly 1:100 risk of miscarriage restricts its use to fetuses suspected of having an aberration. Noninvasive detection of trisomy 21 is now possible owing to the rise of next-generation sequencing (NGS), because a small percentage of fetal DNA is present in maternal plasma. However, detecting other trisomies and smaller aberrations can only be realized using high-coverage NGS, making it too expensive for routine practice. We present a method, WISECONDOR (WIthin-SamplE COpy Number aberration DetectOR), which detects small aberrations using low-coverage NGS. The increased detection resolution was achieved by comparing read counts within the tested sample of each genomic region with regions on other chromosomes that behave similarly in control samples. This within-sample comparison avoids the need to re-sequence control samples. WISECONDOR correctly identified all T13, T18 and T21 cases at coverages as low as 0.15–1.66. No false positives were identified. Moreover, WISECONDOR also identified smaller aberrations, down to 20 Mb, such as del(13)(q12.3q14.3), +i(12)(p10) and i(18)(q10). This shows that prevalent fetal copy number aberrations can be detected accurately and affordably by shallow sequencing of maternal plasma. WISECONDOR is available at bioinformatics.tudelft.nl/wisecondor. PMID:24170809
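The within-sample idea can be illustrated with a toy simulation (this is an assumption-laden sketch, not the published implementation): each bin's read count is compared against reference bins drawn from "another chromosome" (here, simply the second half of the array), and an aberration surfaces as an outlying z-score of the count ratio.

```python
import numpy as np

# Toy within-sample comparison: reference bins stand in for bins on other
# chromosomes that behaved similarly in control samples. Bin 42 carries a
# simulated duplication and should dominate the z-scores.
rng = np.random.default_rng(0)
counts = rng.poisson(100, size=500).astype(float)   # binned read counts
counts[42] *= 2.0                                   # simulated duplication

z = np.empty(250)
for i in range(250):                                # test bins: first half
    refs = rng.choice(np.arange(250, 500), 20, replace=False)
    ratio = counts[i] / counts[refs].mean()
    spread = counts[refs].std() / counts[refs].mean()   # reference variability
    z[i] = (ratio - 1.0) / spread
print("most aberrant bin:", int(np.argmax(np.abs(z))))
```

Because the comparison is internal to one sample, GC or mappability biases shared by a bin and its reference bins cancel, which is what lets the real method work at shallow coverage.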
Ding, Jieli; Zhou, Haibo; Liu, Yanyan; Cai, Jianwen; Longnecker, Matthew P
2014-10-01
Motivated by the needs of our ongoing environmental study in the Norwegian Mother and Child Cohort (MoBa) study, we consider an outcome-dependent sampling (ODS) scheme for failure-time data with censoring. Like the case-cohort design, the ODS design enriches the observed sample by selectively including certain failure subjects. We present an estimated maximum semiparametric empirical likelihood estimation (EMSELE) approach under the proportional hazards model framework. The asymptotic properties of the proposed estimator are derived, and simulation studies were conducted to evaluate its small-sample performance. Our analyses show that the proposed estimator and design are more efficient than the current default approach and other competing approaches. Applying the proposed approach to the data set from the MoBa study, we found a significant effect of an environmental contaminant on fecundability. PMID:24812419
JR Bontha; GR Golcar; N Hannigan
2000-08-29
The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries, and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and RFD sampling devices are very simple and efficient means of mixing and sampling slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems has yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of testing BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft-ID, 15-ft-tall dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: demonstrate the effectiveness of the Pulsed Jet mixing system in thoroughly homogenizing Hanford-type slurries over a range of solids loadings; minimize/optimize air usage by changing the sequencing of the Pulsed Jet mixers or by altering cycle times; and demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. Impact analysis of the key factors shows that an increase in either the external or the internal coefficient has a negative influence on the sampling level; the changing rate of the potential market has no significant influence, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, a global sensitivity analysis examines the interaction of all parameters, yielding a two-stage method to estimate the impact of the relevant parameters when they are known only inaccurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
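The role of free samples in such diffusion models can be sketched with a plain Bass-type recursion (an illustrative stand-in: the paper's model additionally features a dynamic potential market and repeat purchases, and all coefficients below are assumed values):

```python
# Hedged sketch of Bass-type diffusion seeded by free samples: samples act
# as initial adopters; p is the external (innovation) coefficient, q the
# internal (imitation) coefficient, M the potential market. Values assumed.
p, q, M = 0.03, 0.4, 10_000

def adopters(samples, periods=12):
    N = float(samples)               # free samples act as initial adopters
    for _ in range(periods):
        N += (p + q * N / M) * (M - N)   # discrete-time Bass update
    return N

# a higher sampling level accelerates diffusion, with diminishing returns
print(round(adopters(0)), round(adopters(500)), round(adopters(1000)))
```

The concavity of the per-period update in N is what produces the diminishing marginal benefit of extra samples that sampling-level optimization trades off against their cost.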
Ayad, G.; Barriere, T.; Gelin, J. C.; Liu, B.
2007-05-17
The paper is concerned with the optimization and parametric identification of the Powder Injection Molding process, which consists first of injecting a powder mixture with a polymer binder and then sintering the resulting powder parts by solid-state diffusion. The first part describes an original methodology to optimize the injection stage based on the combination of Design of Experiments and adaptive Response Surface Modeling. The second part describes the proposed identification strategy for the sintering stage: sintering parameters are identified from dilatometer curves, followed by optimization of the sintering process. The proposed approaches are applied to optimizing the manufacture of a ceramic femoral implant and are shown to give satisfactory results.
NASA Astrophysics Data System (ADS)
Agarwal, R. K.; Zhang, Z.; Zhu, C.
2013-12-01
For optimization of CO2 storage and reduced CO2 plume migration in saline aquifers, a genetic algorithm (GA) based optimizer has been developed which is combined with the DOE multi-phase flow and heat transfer numerical simulation code TOUGH2. Designated as GA-TOUGH2, this combined solver/optimizer has been verified by performing optimization studies on a number of model problems and comparing the results with brute-force optimization which requires a large number of simulations. Using GA-TOUGH2, an innovative reservoir engineering technique known as water-alternating-gas (WAG) injection has been investigated to determine the optimal WAG operation for enhanced CO2 storage capacity. The topmost layer (layer # 9) of Utsira formation at Sleipner Project, Norway is considered as a case study. A cylindrical domain, which possesses identical characteristics of the detailed 3D Utsira Layer #9 model except for the absence of 3D topography, was used. Topographical details are known to be important in determining the CO2 migration at Sleipner, and are considered in our companion model for history match of the CO2 plume migration at Sleipner. However, simplification on topography here, without compromising accuracy, is necessary to analyze the effectiveness of WAG operation on CO2 migration without incurring excessive computational cost. Selected WAG operation then can be simulated with full topography details later. We consider a cylindrical domain with thickness of 35 m with horizontal flat caprock. All hydrogeological properties are retained from the detailed 3D Utsira Layer #9 model, the most important being the horizontal-to-vertical permeability ratio of 10. Constant Gas Injection (CGI) operation with nine-year average CO2 injection rate of 2.7 kg/s is considered as the baseline case for comparison. The 30-day, 15-day, and 5-day WAG cycle durations are considered for the WAG optimization design. Our computations show that for the simplified Utsira Layer #9 model, the
Morley, Shannon M.; Seiner, Brienne N.; Finn, Erin C.; Greenwood, Lawrence R.; Smith, Steven C.; Gregory, Stephanie J.; Haney, Morgan M.; Lucas, Dawn D.; Arrigo, Leah M.; Beacham, Tere A.; Swearingen, Kevin J.; Friese, Judah I.; Douglas, Matthew; Metz, Lori A.
2015-05-01
Mixed fission and activation materials resulting from various nuclear processes and events contain a wide range of isotopes for analysis, spanning almost the entire periodic table. In some applications, such as environmental monitoring, nuclear waste management, and national security, only a very limited amount of material is available for analysis and characterization, so an integrated analysis scheme is needed to measure multiple radionuclides from one sample. This work describes the production of a complex synthetic sample containing fission products, activation products, and irradiated soil, and determines the percent recovery of select isotopes through the integrated chemical separation scheme. Results were determined using gamma energy analysis of separated fractions and demonstrate high yields of Ag (76 ± 6%), Au (94 ± 7%), Cd (59 ± 2%), Co (93 ± 5%), Cs (88 ± 3%), Fe (62 ± 1%), Mn (70 ± 7%), Np (65 ± 5%), Sr (73 ± 2%) and Zn (72 ± 3%). Lower yields (< 25%) were measured for Ga, Ir, Sc, and W. Based on the results of this experiment, a complex synthetic sample can be prepared with low atom/fission ratios, and isotopes of interest can be accurately and precisely measured following an integrated chemical separation method.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood-forecasting uncertainty in a case study of flood-forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII-based sampling approach over LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is about 9 times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII-based sampling, and their Pareto-optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in appropriate ranges rather than being uniform, in accordance with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), the average relative band-width (RB) and the average deviation amplitude (D). The flood-forecasting uncertainty is also reduced considerably with ɛ-NSGAII-based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
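The LHS baseline the study compares against can be sketched in a few lines (bounds below are illustrative placeholders, not XAJ model parameters): stratify each dimension into n equal slices, draw one point per slice, then shuffle the slices independently per dimension.

```python
import numpy as np

# Minimal Latin hypercube sampler: one point per stratum in every dimension.
def lhs(n_samples, bounds, rng):
    d = len(bounds)
    # row i starts in stratum i, then strata are shuffled per dimension
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(1)
samples = lhs(100, [(0.0, 1.0), (10.0, 50.0)], rng)
# every one of the 100 equal-width strata in dimension 0 holds one sample
strata = np.floor(samples[:, 0] * 100).astype(int)
print(len(set(strata)))   # 100 distinct strata
```

The stratification guarantees uniform marginal coverage, but, unlike the ɛ-NSGAII search, it spends no effort concentrating points in behavioral regions of the parameter space.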
Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir
2008-01-01
In a practical classifier design problem the sample size is limited, and the available finite sample needs to be used both to design a classifier and to predict the classifier's performance for the true population. Since a larger sample is more representative of the population, it is advantageous to design the classifier with all the available cases, and to use a resampling technique for performance prediction. We conducted a Monte Carlo simulation study to compare the ability of different resampling techniques in predicting the performance of a neural network (NN) classifier designed with the available sample. We used the area under the receiver operating characteristic curve as the performance index for the NN classifier. We investigated resampling techniques based on the cross-validation, the leave-one-out method, and three different types of bootstrapping, namely, the ordinary, .632, and .632+ bootstrap. Our results indicated that, under the study conditions, there can be a large difference in the accuracy of the prediction obtained from different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited data set. PMID:18234468
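The resampling estimators compared above are easy to sketch. The toy below (a nearest-mean classifier and error rate, standing in for the study's neural network and AUC; data and parameters are assumptions) computes the apparent error, the ordinary-bootstrap out-of-bag error, and the .632 combination:

```python
import numpy as np

# Sketch of the .632 bootstrap error estimate with a toy nearest-mean
# classifier on two Gaussian classes (illustrative, not the study's NN/AUC).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(1, 1, (40, 5))])
y = np.array([0] * 40 + [1] * 40)

def nearest_mean_error(Xtr, ytr, Xte, yte):
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - m1, axis=1) <
            np.linalg.norm(Xte - m0, axis=1)).astype(int)
    return np.mean(pred != yte)

err_app = nearest_mean_error(X, y, X, y)        # apparent (resubstitution)
oob_errs = []
for _ in range(200):                            # ordinary bootstrap, OOB error
    idx = rng.integers(0, len(y), len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)
    if oob.size:
        oob_errs.append(nearest_mean_error(X[idx], y[idx], X[oob], y[oob]))
err_oob = np.mean(oob_errs)
err_632 = 0.368 * err_app + 0.632 * err_oob     # the .632 estimator
print(f"apparent={err_app:.3f}  oob={err_oob:.3f}  .632={err_632:.3f}")
```

The 0.632 weight reflects the expected fraction of distinct cases in a bootstrap sample; the .632+ variant additionally adapts the weight to the degree of overfitting.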
Quality Control Methods for Optimal BCR-ABL1 Clinical Testing in Human Whole Blood Samples
Stanoszek, Lauren M.; Crawford, Erin L.; Blomquist, Thomas M.; Warns, Jessica A.; Willey, Paige F.S.; Willey, James C.
2014-01-01
Reliable breakpoint cluster region (BCR)–Abelson (ABL) 1 measurement is essential for optimal management of chronic myelogenous leukemia. There is a need to optimize quality control, sensitivity, and reliability of methods used to measure a major molecular response and/or treatment failure. The effects of room temperature storage time, different primers, and RNA input in the reverse transcription (RT) reaction on BCR-ABL1 and β-glucuronidase (GUSB) cDNA yield were assessed in whole blood samples mixed with K562 cells. BCR-ABL1 was measured relative to GUSB to control for sample loading, and each gene was measured relative to known numbers of respective internal standard molecules to control for variation in quality and quantity of reagents, thermal cycler conditions, and presence of PCR inhibitors. Clinical sample and reference material measurements with this test were concordant with results reported by other laboratories. BCR-ABL1 per 103 GUSB values were significantly reduced (P = 0.004) after 48-hour storage. Gene-specific primers yielded more BCR-ABL1 cDNA than random hexamers at each RNA input. In addition, increasing RNA inhibited the RT reaction with random hexamers but not with gene-specific primers. Consequently, the yield of BCR-ABL1 was higher with gene-specific RT primers at all RNA inputs tested, increasing to as much as 158-fold. We conclude that optimal measurement of BCR-ABL1 per 103 GUSB in whole blood is obtained when gene-specific primers are used in RT and samples are analyzed within 24 hours after blood collection. PMID:23541592
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
Dynamic reconstruction of sub-sampled data using Optimal Mode Decomposition
NASA Astrophysics Data System (ADS)
Krol, Jakub; Wynn, Andrew
2015-11-01
The Nyquist-Shannon criterion indicates the sample rate necessary to identify information with particular frequency content from a dynamical system. However, in experimental applications such as the interrogation of a flow field using Particle Image Velocimetry (PIV), it may be expensive to obtain data at the desired temporal resolution. To address this problem, we propose a new approach to identify temporal information from undersampled data, using ideas from modal decomposition algorithms such as Dynamic Mode Decomposition (DMD) and Optimal Mode Decomposition (OMD). The method takes a vector-valued signal sampled at random time instances (at a sub-Nyquist rate) and projects it onto a low-order subspace. Dynamical characteristics are then approximated by iteratively fitting a low-order model to the flow evolution and solving a convex optimization problem. Furthermore, it is shown that constraints may be added to the optimization problem to improve the spatial resolution of missing data points. The methodology is demonstrated on two dynamical systems, a cylinder flow at Re = 60 and the Kuramoto-Sivashinsky equation. In both cases the algorithm correctly identifies the characteristic frequencies and oscillatory structures present in the flow.
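The uniformly sampled building block that such methods extend can be sketched directly: plain DMD fits a linear map between successive snapshots, and the angles of its eigenvalues recover the oscillation frequency (synthetic data; the sub-sampled, randomized variant is the paper's contribution and is not reproduced here).

```python
import numpy as np

# Plain DMD on uniformly sampled snapshots of a synthetic traveling wave.
dt = 0.05
t = np.arange(0, 10, dt)
f_true = 1.3                                        # Hz, to be recovered
space = np.linspace(0, 1, 10)[:, None]              # 10 spatial points
data = (np.cos(2 * np.pi * f_true * t) * np.sin(np.pi * space)
        + np.sin(2 * np.pi * f_true * t) * np.cos(np.pi * space))

X1, X2 = data[:, :-1], data[:, 1:]
A = X2 @ np.linalg.pinv(X1)                         # best-fit linear map
eig = np.linalg.eigvals(A)
eig = eig[np.abs(eig) > 0.5]                        # keep dynamic modes only
freqs = np.abs(np.angle(eig)) / (2 * np.pi * dt)    # continuous-time freq, Hz
print("recovered frequency:", round(float(freqs.max()), 3))
```

With random (non-uniform) sampling the matrix pencil above is no longer available, which is why the proposed approach replaces the least-squares fit with an iterative low-order model fit plus a convex optimization step.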
Model reduction algorithms for optimal control and importance sampling of diffusions
NASA Astrophysics Data System (ADS)
Hartmann, Carsten; Schütte, Christof; Zhang, Wei
2016-08-01
We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.
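The link between control and importance sampling can be illustrated on the simplest possible example (a toy stand-in, not the paper's multiscale diffusion setting): exponentially tilting the sampling density toward a rare event and reweighting by the likelihood ratio.

```python
import math
import numpy as np

# Importance sampling by exponential tilting for the rare event P(X > a),
# X ~ N(0,1). The tilted density N(a,1) plays the role of the biased
# (controlled) dynamics; the weight w undoes the bias. Values assumed.
rng = np.random.default_rng(5)
a, n = 4.0, 20_000
x = rng.normal(a, 1.0, n)            # sample the tilted density N(a, 1)
w = np.exp(-a * x + a * a / 2)       # likelihood ratio dN(0,1)/dN(a,1)
est = ((x > a) * w).mean()
truth = 0.5 * math.erfc(a / math.sqrt(2))
print(f"IS estimate {est:.3e}  vs  exact {truth:.3e}")
```

Naive Monte Carlo would need on the order of millions of samples to see this event at all; the tilted estimator resolves it with modest variance, which is the payoff the optimally controlled (reduced-model) bias aims for in high dimensions.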
NASA Astrophysics Data System (ADS)
Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.
2016-06-01
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
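The "maximum information content, minimum redundancy" criterion can be sketched with histogram-based entropies on synthetic stage series (an illustrative greedy scheme under assumed data, not the paper's exact estimator): repeatedly add the candidate cross section whose marginal entropy most exceeds its redundancy with the sections already chosen.

```python
import numpy as np

def entropy(x, bins=8):
    p = np.histogram(x, bins=bins)[0].astype(float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def mutual_info(x, y, bins=8):
    pxy = np.histogram2d(x, y, bins=bins)[0]
    pxy /= pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
base = np.cumsum(rng.normal(0, 1, 200))             # common hydrograph
series = [base * (1 + 0.05 * k) + rng.normal(0, 2 + k % 3, 200)
          for k in range(12)]                       # 12 candidate sections

selected = [max(range(12), key=lambda i: entropy(series[i]))]
while len(selected) < 4:
    def score(i):
        return entropy(series[i]) - max(mutual_info(series[i], series[j])
                                        for j in selected)
    selected.append(max((i for i in range(12) if i not in selected), key=score))
print("selected cross sections:", selected)
```

The entropy term rewards informative sections while the mutual-information penalty discourages picking near-duplicates, mirroring the information-redundancy tradeoff described above; handling bridges then amounts to constraining which candidates are admissible.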
An S/H circuit with parasitics optimized for IF-sampling
NASA Astrophysics Data System (ADS)
Xuqiang, Zheng; Fule, Li; Zhijun, Wang; Weitao, Li; Wen, Jia; Zhihua, Wang; Shigang, Yue
2016-06-01
An IF-sampling S/H is presented, which adopts a flip-around structure, a bottom-plate sampling technique and improved input bootstrapped switches. To achieve high sampling linearity over a wide input frequency range, the floating-well technique is utilized to optimize the input switches. In addition, transistor load linearization and layout improvements are proposed to further reduce and linearize the parasitic capacitance. The S/H circuit has been fabricated in a 0.18-μm CMOS process as the front-end of a 14-bit, 250-MS/s pipeline ADC. For a 30 MHz input, the measured SFDR/SNDR of the ADC is 94.7 dB/68.5 dB, and remains above 84.3 dB/65.4 dB for input frequencies up to 400 MHz. The ADC presents excellent dynamic performance at high input frequency, which is mainly attributed to the parasitics-optimized S/H circuit. Project supported by the Shenzhen Project (No. JSGG20150512162029307).
Stemkens, Bjorn; Tijssen, Rob H.N.; Senneville, Baudouin D. de
2015-03-01
Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.
NASA Astrophysics Data System (ADS)
Taylor, Ted L.; Makimura, Eri
2007-03-01
Micron Technology, Inc., explores the challenges of defining specific wafer sampling scenarios for users of multiple integrated metrology modules within a Tokyo Electron Limited (TEL) CLEAN TRACK TM LITHIUS TM. With the introduction of integrated metrology (IM) into the photolithography coater/developer, users are faced with the challenge of determining what type of data must be collected to adequately monitor the photolithography tools and the manufacturing process. Photolithography coaters/developers have a metrology block that is capable of integrating three metrology modules into the standard wafer flow. Taking into account the complexity of multiple metrology modules and varying across-wafer sampling plans per metrology module, users must optimize the module wafer sampling to obtain their desired goals. Users must also understand the complexity of the coater/developer handling systems that deliver wafers to each module. Coater/developer systems typically process wafers sequentially through each module to ensure consistent processing. In these systems, the first wafer must process through a module before the next wafer can process through it, and the first wafer must return to the cassette before the second wafer can return to the cassette. IM modules within this type of system can reduce throughput and limit flexible wafer selection. Finally, users must have the ability to select specific wafer samplings for each IM module. This case study explores how to optimize wafer sampling plans and how to identify limitations arising from the complexity of multiple integrated modules, to ensure maximum metrology throughput without impact to the productivity of processing wafers through the photolithography cell (litho cell).
Optimization of Sample Site Selection Imaging for OSIRIS-REx Using Asteroid Surface Analog Images
NASA Astrophysics Data System (ADS)
Tanquary, Hannah E.; Sahr, Eric; Habib, Namrah; Hawley, Christopher; Weber, Nathan; Boynton, William V.; Kinney-Spano, Ellyne; Lauretta, Dante
2014-11-01
OSIRIS-REx will return a sample of regolith from the surface of asteroid 101955 Bennu. The mission will obtain high resolution images of the asteroid in order to create detailed maps which will satisfy multiple mission objectives. To select a site, we must (i) identify hazards to the spacecraft and (ii) characterize a number of candidate sites to determine the optimal location for sampling. To further characterize the site, a long-term science campaign will be undertaken to constrain the geologic properties. To satisfy these objectives, the distribution and size of blocks at the sample site and backup sample site must be determined. This will be accomplished through the creation of rock size frequency distribution maps. The primary goal of this study is to optimize the creation of these map products by assessing techniques for counting blocks on small bodies, and assessing the methods of analysis of the resulting data. We have produced a series of simulated surfaces of Bennu which have been imaged, and the images processed to simulate Polycam images during the Reconnaissance phase. These surface analog images allow us to explore a wide range of imaging conditions, both ideal and non-ideal. The images have been “degraded”, and are displayed as thumbnails representing the limits of Polycam resolution from an altitude of 225 meters. Specifically, this study addresses the mission requirement that the rock size frequency distribution of regolith grains < 2cm in longest dimension must be determined for the sample sites during Reconnaissance. To address this requirement, we focus on the range of available lighting angles. Varying illumination and phase angles in the simulated images, we can compare the size-frequency distributions calculated from the degraded images with the known size frequency distributions of the Bennu simulant material, and thus determine the optimum lighting conditions for satisfying the 2 cm requirement.
Quality assessment and optimization of purified protein samples: why and how?
Raynal, Bertrand; Lenormand, Pascal; Baron, Bruno; Hoos, Sylviane; England, Patrick
2014-01-01
Purified protein quality control is the final and critical check-point of any protein production process. Unfortunately, it is too often overlooked and performed hastily, resulting in irreproducible and misleading observations in downstream applications. In this review, we aim at proposing a simple-to-follow workflow based on an ensemble of widely available physico-chemical technologies, to assess sequentially the essential properties of any protein sample: purity and integrity, homogeneity and activity. Approaches are then suggested to optimize the homogeneity, time-stability and storage conditions of purified protein preparations, as well as methods to rapidly evaluate their reproducibility and lot-to-lot consistency. PMID:25547134
NASA Astrophysics Data System (ADS)
Mayer, Rulon R.; Waterman, James; Schuler, Jonathon; Scribner, Dean
2003-12-01
To achieve enhanced target discrimination, prototype three-band long-wave infrared (LWIR) focal plane arrays (FPA) for missile defense applications have recently been constructed. The cutoff wavelengths, widths, and spectral overlap of the bands are critical parameters for the multicolor sensor design. Previous calculations for sensor design did not account for target and clutter spectral features in determining the optimal band characteristics. The considerable spectral overlap and correlation between the bands, and the attendant reduction in color contrast, is another unexamined issue. To optimize and simulate the projected behavior of three-band sensors, this report examined a hyperspectral LWIR image cube. Our study starts with 30 bands of the LWIR spectra of three man-made targets and natural backgrounds that were binned to 3 bands using weighted band binning. This work achieves optimal binning by using a genetic algorithm approach and the target-to-clutter ratio (TCR) as the optimization criterion. Another approach applies a genetic algorithm to maximize discrimination among the spectral reflectivities in the Non-conventional Exploitation Factors Data System (NEFDS) library. Each candidate band was weighted using a Fermi function to represent four interacting band edges for three bands. It is found that the choice of target can significantly influence the optimal choice of bands, as expressed through the TCR and the Receiver Operator Characteristic curve. This study shows that whitening the image data prominently displays targets relative to backgrounds by increasing color contrast, and also maintains color constancy. Three-color images are displayed by assigning red, green, and blue colors directly to the whitened data set. Achieving constant colors of targets and backgrounds over time can greatly aid human viewers in interpreting the images and discriminating targets.
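The Fermi-edge band weighting described above is straightforward to sketch (edge positions and widths below are illustrative assumptions; in the study a genetic algorithm searches over them): each broad band is the product of a rising and a falling Fermi function, giving soft, tunable band edges.

```python
import numpy as np

# Soft band weights from paired Fermi edges; binning 30 narrow LWIR bands
# into 3 broad bands by weighted averaging. Edge values are illustrative.
def fermi_band(wav, lo, hi, width=0.1):
    rise = 1.0 / (1.0 + np.exp(-(wav - lo) / width))
    fall = 1.0 / (1.0 + np.exp((wav - hi) / width))
    return rise * fall

wavelengths = np.linspace(8.0, 12.0, 30)          # LWIR, microns
edges = [(8.0, 9.3), (9.3, 10.6), (10.6, 12.0)]   # candidate 3-band split
weights = np.array([fermi_band(wavelengths, lo, hi) for lo, hi in edges])

spectra = np.random.default_rng(4).random((5, 30))  # 5 pixels, 30 bands
binned = spectra @ weights.T / weights.sum(1)       # weighted binning, 3 bands
print(binned.shape)
```

Because adjacent bands share soft edges, the three resulting channels overlap smoothly, which is exactly the overlap/contrast tradeoff the TCR-driven optimization adjudicates.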
Optimizing the Operating Temperature for an array of MOX Sensors on an Open Sampling System
NASA Astrophysics Data System (ADS)
Trincavelli, M.; Vergara, A.; Rulkov, N.; Murguia, J. S.; Lilienthal, A.; Huerta, R.
2011-09-01
Chemo-resistive transduction is essential for capturing the spatio-temporal structure of chemical compounds dispersed in different environments. Due to the gas dispersion mechanisms of diffusion, turbulence and advection, the sensors in an open sampling system, i.e. directly exposed to the environment to be monitored, see low gas concentrations with many fluctuations, making the identification and monitoring of the gases considerably more complicated and challenging than in a controlled laboratory setting. Tuning the operating temperature therefore becomes crucial for successfully identifying and monitoring pollutant gases, particularly in applications such as the exploration of hazardous areas, air pollution monitoring, and search and rescue. In this study we demonstrate the benefit of optimizing the sensors' operating temperature when the sensors are deployed in an open sampling system.
Analysis and optimization of bulk DNA sampling with binary scoring for germplasm characterization.
Reyes-Valdés, M Humberto; Santacruz-Varela, Amalio; Martínez, Octavio; Simpson, June; Hayano-Kanashiro, Corina; Cortés-Romero, Celso
2013-01-01
The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus. PMID:24260321
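The diversity-minus-noise decomposition of marker informativeness can be computed directly from allele frequencies. The sketch below is a toy illustration under stated assumptions (diploid plants, a uniform prior over germplasm sources, one allele scored as present/absent), not the authors' code:

```python
import math

def h(p):
    # binary entropy in bits
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def marker_info(freqs, bulk_size):
    """Mutual information (bits) between bulk genotype (allele present/absent)
    and population identity: diversity term minus noise term."""
    # probability the allele shows up in a bulk of `bulk_size` diploid plants
    present = [1.0 - (1.0 - f) ** (2 * bulk_size) for f in freqs]
    diversity = h(sum(present) / len(present))          # entropy of bulk genotypes
    noise = sum(h(q) for q in present) / len(present)   # conditional entropy given source
    return diversity - noise

# allele common in one source and absent in the other: ~1 bit of information
print(marker_info([0.9, 0.0], bulk_size=30))
# allele common in both sources: nearly uninformative
print(marker_info([0.5, 0.5], bulk_size=30))
```

Optimizing bulk size then amounts to choosing `bulk_size` to maximize the sum of this quantity over loci.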
Tuomas, V.; Jaakko, L.
2013-07-01
This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In an HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors to as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
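The rejection-sampling idea behind TMS can be sketched with a toy one-dimensional model: propose target velocities from a Maxwellian, and accept with probability proportional to the 0 K cross section at the relative speed, bounded by a constant majorant. The cross-section form, numbers, and 1-D kinematics here are purely illustrative, not Serpent's physics:

```python
import random

def sigma_0K(v_rel):
    # toy 0 K cross section, bounded so that a constant majorant exists
    return 1.0 / (v_rel + 0.5)

SIGMA_MAJ = 2.0  # majorant: sigma_0K(v) <= 1/0.5 = 2 for all v >= 0

def sample_target_velocity(v_neutron, v_thermal, rng):
    """Rejection-sample a target velocity so accepted samples are distributed
    proportionally to sigma_0K(relative speed); returns (v_rel, attempts)."""
    attempts = 0
    while True:
        attempts += 1
        v_t = rng.gauss(0.0, v_thermal)      # 1-D Maxwellian velocity component
        v_rel = abs(v_neutron - v_t)
        if rng.random() < sigma_0K(v_rel) / SIGMA_MAJ:
            return v_rel, attempts

rng = random.Random(0)
samples = [sample_target_velocity(1.0, 0.3, rng) for _ in range(2000)]
mean_v_rel = sum(v for v, _ in samples) / len(samples)
overhead = sum(a for _, a in samples) / len(samples)  # rejection overhead factor
print(mean_v_rel, overhead)
```

Tightening the majorant (lowering `SIGMA_MAJ` toward the true maximum) reduces the overhead factor, which is exactly the fine-tuning knob the abstract describes.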
NASA Astrophysics Data System (ADS)
Fındık, Oğuz; Babaoğlu, İsmail; Ülker, Erkan
2010-12-01
In this paper, a novel robust watermarking technique using particle swarm optimization and k-nearest neighbor algorithm is introduced to protect the intellectual property rights of color images in the spatial domain. In the embedding process, the color image is separated into non-overlapping blocks and each bit of the binary watermark is embedded into the individual blocks. Then, in order to extract the embedded watermark, features are obtained from watermark embedded blocks using the symmetric cross-shape kernel. These features are used to generate two centroids belonging to each binary (1 and 0) value of the watermark implementing particle swarm optimization. Subsequently, the embedded watermark is extracted by evaluating these centroids utilizing k-nearest neighbor algorithm. According to the test results, embedded watermark is extracted successfully even if the watermarked image is exposed to various image processing attacks.
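The extraction step, assigning each block's feature vector to the nearer of the two learned centroids, is a nearest-centroid decision rule. A minimal sketch with toy 2-D features follows; the centroids are given here rather than learned by particle swarm optimization as in the paper:

```python
def extract_bits(features, centroid0, centroid1):
    # nearest-centroid decision for each watermarked block's feature vector
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [0 if d2(f, centroid0) <= d2(f, centroid1) else 1 for f in features]

c0, c1 = (0.2, 0.1), (0.8, 0.9)   # toy centroids for bit 0 and bit 1
blocks = [(0.25, 0.05), (0.75, 0.95), (0.9, 0.8), (0.1, 0.2)]
print(extract_bits(blocks, c0, c1))
```

With k = 1, the k-nearest-neighbor rule over two centroids reduces to exactly this comparison.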
Ma, Li; Wang, Lin; Tang, Jie; Yang, Zhaoguang
2016-08-01
Statistical experimental designs were employed to optimize the extraction conditions for arsenic species (As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA)) in paddy rice by a simple solvent extraction using water as the extraction reagent. The effects of the variables were estimated by a two-level Plackett-Burman factorial design. A five-level central composite design was subsequently employed to optimize the significant factors. The optimal settings of the significant factors were confirmed as 60 min of shaking time and an extraction temperature of 85°C, balancing the duration of the experiment against extraction efficiency. The analytical performance, in terms of linearity, method detection limits, relative standard deviation and recovery, was examined; the data exhibited a broad linear range, high sensitivity and good precision. The proposed method was applied to real rice samples. The species As(III), As(V) and DMA were detected in all the rice samples, mostly in the order As(III)>As(V)>DMA. PMID:26988503
Optimization of multi-channel neutron focusing guides for extreme sample environments
NASA Astrophysics Data System (ADS)
Di Julio, D. D.; Lelièvre-Berna, E.; Courtois, P.; Andersen, K. H.; Bentley, P. M.
2014-07-01
In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
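A minimal differential-evolution loop, one of the population-based optimizers compared above, can be sketched as follows. The objective here is a toy two-parameter function standing in for the Monte-Carlo ray-tracing figure of merit; variable names, bounds, and the objective are illustrative only:

```python
import random

def de_optimize(f, bounds, pop_size=20, gens=60, F=0.7, CR=0.9, seed=1):
    # minimal differential evolution (DE/rand/1/bin) with box constraints
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutate + crossover
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))                # clamp to bounds
            f_trial = f(trial)
            if f_trial <= fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# toy stand-in for the (negated) guide gain: peak performance at (1.5, 0.4)
def neg_gain(x):
    return (x[0] - 1.5) ** 2 + (x[1] - 0.4) ** 2

best_x, best_f = de_optimize(neg_gain, [(0.0, 3.0), (0.0, 1.0)])
print(best_x, best_f)
```

In the actual study each evaluation of `f` would be a full Monte-Carlo ray-tracing run, which is why the choice of optimizer and its evaluation budget matter.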
Optimization of a pre-MEKC separation SPE procedure for steroid molecules in human urine samples.
Olędzka, Ilona; Kowalski, Piotr; Dziomba, Szymon; Szmudanowski, Piotr; Bączek, Tomasz
2013-01-01
Many steroid hormones can be considered potential biomarkers, and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop new, relatively low-cost and rapid methodologies for their determination in biological samples. Because there is little literature data on the concentrations of steroid hormones in urine samples, we have attempted the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes, a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes, a running buffer (pH 9.2) composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers, both men and women (students and amateur bodybuilders, with and without steroid doping). The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples. PMID:24232737
Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.
Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène
2016-06-01
Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air-to-liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved with scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L(-1) and 10% for 10 mBq L(-1). While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L(-1), a conservative experimental estimate is rather 5 mBq L(-1), corresponding to 0.14 fg g(-1). The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. PMID:26998570
Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate
Brunelli, Davide; Caione, Carlo
2015-01-01
Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of CS when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction, exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring. PMID:26184203
Pianciola, L; Mazzeo, M; Flores, D; Hozbor, D
2010-01-01
Pertussis or whooping cough is an acute, highly contagious respiratory infection, which is particularly severe in infants under one year old. In classic disease, clinical diagnosis may present no difficulties. In other cases, it requires laboratory confirmation. Generally used methods are: culture, serology and PCR. For the latter, the sample of choice is a nasopharyngeal aspirate, and the simplest method for processing these samples uses proteinase K. Although results are generally satisfactory, difficulties often arise regarding the mucosal nature of the specimens. Moreover, uncertainties exist regarding the optimal conditions for sample storage. This study evaluated various technologies for processing and storing samples. Results enabled us to select a method for optimizing sample processing, with performance comparable to commercial methods and far lower costs. The experiments designed to assess the conservation of samples enabled us to obtain valuable information to guide the referral of samples from patient care centres to laboratories where such samples are processed by molecular methods. PMID:20589331
Severtson, Dustin; Flower, Ken; Nansen, Christian
2016-08-01
The cabbage aphid is a significant pest worldwide in brassica crops, including canola. This pest has shown considerable ability to develop resistance to insecticides, so these should only be applied on a "when and where needed" basis. Thus, optimized sampling plans to accurately assess cabbage aphid densities are critically important to determine the potential need for pesticide applications. In this study, we developed a spatially optimized binomial sequential sampling plan for cabbage aphids in canola fields. Based on five sampled canola fields, sampling plans were developed using 0.1, 0.2, and 0.3 proportions of plants infested as action thresholds. Average sample numbers required to make a decision ranged from 10 to 25 plants. Decreasing acceptable error from 10 to 5% was not considered practically feasible, as it substantially increased the number of samples required to reach a decision. We determined the relationship between the proportions of canola plants infested and cabbage aphid densities per plant, and proposed a spatially optimized sequential sampling plan for cabbage aphids in canola fields, in which spatial features (i.e., edge effects) and optimization of sampling effort (i.e., sequential sampling) are combined. Two forms of stratification were performed to reduce spatial variability caused by edge effects and large field sizes. Spatially optimized sampling, starting at the edge of fields, reduced spatial variability and therefore increased the accuracy of infested plant density estimates. The proposed spatially optimized sampling plan may be used to spatially target insecticide applications, resulting in cost savings, insecticide resistance mitigation, conservation of natural enemies, and reduced environmental impact. PMID:27371709
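A binomial sequential sampling plan of this kind is conventionally built on Wald's sequential probability ratio test: walk the field plant by plant and stop as soon as the cumulative infested count crosses a decision line. The thresholds and error rates below are illustrative, and the paper's spatial stratification is omitted:

```python
import math

def sprt_boundaries(p0, p1, alpha=0.05, beta=0.05):
    # Wald SPRT decision lines for a binomial proportion: d = slope * n + intercept
    s1 = math.log((1 - p0) / (1 - p1))
    s2 = math.log(p1 / p0)
    slope = s1 / (s1 + s2)
    upper = math.log((1 - beta) / alpha) / (s1 + s2)   # crossing above -> treat
    lower = math.log(beta / (1 - alpha)) / (s1 + s2)   # crossing below -> no treatment
    return slope, upper, lower

def classify(plants, p0=0.1, p1=0.3):
    """Sample plants sequentially (1 = infested, 0 = clean) until the
    cumulative count crosses a decision boundary."""
    slope, upper, lower = sprt_boundaries(p0, p1)
    infested = 0
    for n, plant in enumerate(plants, start=1):
        infested += plant
        if infested >= slope * n + upper:
            return "treat", n
        if infested <= slope * n + lower:
            return "no_treatment", n
    return "keep_sampling", len(plants)

print(classify([1] * 20))   # heavily infested field: decision after few plants
print(classify([0] * 20))   # clean field: decision takes a few more plants
```

The average sample numbers of 10 to 25 plants reported above are exactly this stopping index, averaged over fields near the action threshold.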
Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models
NASA Astrophysics Data System (ADS)
Thon, Ingo
One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximative methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution which is normally prohibitively slow.
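For contrast with the optimal proposal the paper derives, a bootstrap particle filter, the baseline where the proposal is simply the transition prior, can be sketched on a toy 1-D state-space model. The model, noise levels, and particle count are illustrative:

```python
import math
import random

def particle_filter(observations, n_particles=500, trans_sd=1.0, obs_sd=0.5, seed=42):
    """Bootstrap particle filter for a 1-D random-walk hidden state with
    Gaussian observations; the proposal is the transition prior."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    estimates = []
    for z in observations:
        # propose: propagate each particle through the transition model
        particles = [x + rng.gauss(0.0, trans_sd) for x in particles]
        # weight: Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((z - x) / obs_sd) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # resample to avoid weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

obs = [float(t) for t in range(10)]   # hidden state drifting upward
est = particle_filter(obs)
print(est[-1])
```

When the proposal matches the true posterior, as in the paper's BDD-based construction, the weights become uniform and far fewer particles are wasted.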
Clague, D; Weisgraber, T; Rockway, J; McBride, K
2006-02-12
The focus of the research effort described here is to develop novel simulation tools to address design and optimization needs in the general class of problems that involve species and fluid (liquid and gas phase) transport through sieving media. This was primarily motivated by the heightened attention on Chem/Bio early detection systems, which, among other needs, require high-efficiency filtration, collection and sample preparation systems. Hence, the goal was to develop the computational analysis tools necessary to optimize these critical operations. This new capability is designed to characterize system efficiencies based on the details of the microstructure and environmental effects. To accomplish this, new lattice Boltzmann simulation capabilities were developed to include detailed microstructure descriptions, the relevant surface forces that mediate species capture and release, and temperature effects for both liquid and gas phase systems. While developing the capability, actual demonstration and model systems (and subsystems) of national and programmatic interest were targeted to demonstrate the capability. As a result, where possible, experimental verification of the computational capability was performed, either directly using Digital Particle Image Velocimetry or against published results.
Fünfstück, Tillmann; Arandjelovic, Mimi; Morgan, David B.; Sanz, Crickette; Reed, Patricia; Olson, Sarah H.; Cameron, Ken; Ondzie, Alain; Peeters, Martine; Vigilant, Linda
2015-01-01
Populations of an organism living in marked geographical or evolutionary isolation from other populations of the same species are often termed subspecies and expected to show some degree of genetic distinctiveness. The common chimpanzee (Pan troglodytes) is currently described as four geographically delimited subspecies: the western (P. t. verus), the nigerian-cameroonian (P. t. ellioti), the central (P. t. troglodytes) and the eastern (P. t. schweinfurthii) chimpanzees. Although these taxa would be expected to be reciprocally monophyletic, studies have not always consistently resolved the central and eastern chimpanzee taxa. Most studies, however, used data from individuals of unknown or approximate geographic provenance. Thus, genetic data from samples of known origin may shed light on the evolutionary relationship of these subspecies. We generated microsatellite genotypes from noninvasively collected fecal samples of 185 central chimpanzees that were sampled across large parts of their range and analyzed them together with 283 published eastern chimpanzee genotypes from known localities. We observed a clear signal of isolation by distance across both subspecies. Further, we found that a large proportion of comparisons between groups taken from the same subspecies showed higher genetic differentiation than the least differentiated between-subspecies comparison. This proportion decreased substantially when we simulated a more clumped sampling scheme by including fewer groups. Our results support the general concept that the distribution of the sampled individuals can dramatically affect the inference of genetic population structure. With regard to chimpanzees, our results emphasize the close relationship of equatorial chimpanzees from central and eastern equatorial Africa and the difficult nature of subspecies definitions. PMID:25330245
NASA Technical Reports Server (NTRS)
Emmitt, G. D.; Seze, G.
1991-01-01
Simulated cloud/hole fields as well as Landsat imagery are used in a computer model to evaluate several proposed sampling patterns and shot management schemes for pulsed space-based Doppler lidars. Emphasis is placed on two proposed sampling strategies - one obtained from a conically scanned single telescope and the other from four fixed telescopes that are sequentially used by one laser. The question of whether there are any sampling patterns that maximize the number of resolution areas with vertical soundings to the PBL is addressed.
A sampling optimization analysis of soil-bugs diversity (Crustacea, Isopoda, Oniscidea).
Messina, Giuseppina; Cazzolla Gatti, Roberto; Droutsa, Angeliki; Barchitta, Martina; Pezzino, Elisa; Agodi, Antonella; Lombardo, Bianca Maria
2016-01-01
Biological diversity analysis is among the most informative approaches for describing communities and regional species compositions. Soil ecosystems include large numbers of invertebrates, among which soil bugs (Crustacea, Isopoda, Oniscidea) play significant ecological roles. The aim of this study was to provide advice for optimizing the sampling effort, to efficiently monitor the diversity of this taxon, to analyze its seasonal patterns of species composition, and ultimately to better understand the coexistence of so many species over a relatively small area. Terrestrial isopods were collected at the Natural Reserve "Saline di Trapani e Paceco" (Italy), using pitfall traps monitored monthly over 2 years. We analyzed parameters of α- and β-diversity and calculated a number of indexes and measures to disentangle diversity patterns. We also used various approaches to analyze changes in biodiversity over time, such as distributions of species abundances and accumulation and rarefaction curves. In terms of species richness and total abundance of individuals, spring proved the best season to monitor Isopoda, to reduce sampling effort, and to save resources without losing information, while in both years abundances peaked between summer and autumn. This suggests that evaluations of β-diversity are maximized if samples are first collected during spring and then between summer and autumn. Sampling during these coupled seasons allows collection of a number of species close to the γ-diversity (24 species) of the area. Finally, our results show that seasonal shifts in community composition (i.e., dynamic fluctuations in species abundances during the four seasons) may minimize competitive interactions, contribute to stabilizing total abundances, and allow the coexistence of phylogenetically close species within the ecosystem. PMID:26811784
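The rarefaction curves used in such analyses can be computed analytically from per-species abundance counts via the hypergeometric formula. A minimal sketch with made-up isopod counts (not the study's data):

```python
from math import comb

def rarefaction(counts, n):
    """Expected number of species in a random subsample of n individuals,
    given per-species abundance counts (hypergeometric rarefaction)."""
    total = sum(counts)
    # for each species: 1 - P(none of its individuals are drawn)
    return sum(1.0 - comb(total - c, n) / comb(total, n) for c in counts)

counts = [50, 20, 10, 5, 3, 1, 1]   # hypothetical per-species abundances
curve = [rarefaction(counts, n) for n in (1, 10, 45, sum(counts))]
print(curve)
```

Comparing such curves between seasons, at a common subsample size, is what allows the sampling effort to be reduced without losing information.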
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
2001-01-01
A closed-loop optimal neural-network controller technique was developed to optimize rotorcraft aeromechanical behaviour. This technique utilizes a neural-network scheme to provide a general non-linear model of the rotorcraft. A modern constrained optimization method is used to determine and update the constants in the neural-network plant model, as well as to determine the optimal control vector. Current data is read, weighted, and added to a sliding data window. When the specified maximum number of data sets allowed in the data window is exceeded, the oldest data set is discarded and the remaining data sets are re-weighted. This procedure provides at least four additional degrees-of-freedom, in addition to the size and geometry of the neural network itself, with which to optimize the overall operation of the controller. These additional degrees-of-freedom are: 1. the maximum length of the sliding data window; 2. the frequency of neural-network updates; 3. the weighting of the individual data sets within the sliding window; and 4. the maximum number of optimization iterations used for the neural-network updates.
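The sliding data window with discard-oldest and re-weighting behavior can be sketched generically. The exponential weighting scheme below is one plausible choice, not necessarily the one used in the report:

```python
from collections import deque

class SlidingWindow:
    """Fixed-capacity data window: when full, the oldest data set is
    discarded automatically and the remaining sets are re-weighted."""

    def __init__(self, max_len, decay=0.8):
        self.data = deque(maxlen=max_len)   # deque drops the oldest item when full
        self.decay = decay                  # assumed exponential down-weighting

    def add(self, sample):
        self.data.append(sample)

    def weights(self):
        # newest data set gets weight 1.0; older sets decay geometrically
        n = len(self.data)
        w = [self.decay ** (n - 1 - i) for i in range(n)]
        s = sum(w)
        return [wi / s for wi in w]

win = SlidingWindow(max_len=3)
for x in [1, 2, 3, 4, 5]:
    win.add(x)
print(list(win.data), win.weights())
```

The window length and the decay rate correspond to two of the four extra degrees-of-freedom listed above.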
A simple optimized microwave digestion method for multielement monitoring in mussel samples
NASA Astrophysics Data System (ADS)
Saavedra, Y.; González, A.; Fernández, P.; Blanco, J.
2004-04-01
With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards, with matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, with good results.
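Screening factors in such a two-level factorial experiment reduces to contrasting high-level and low-level response means. A generic sketch in coded units, with synthetic recoveries rather than the paper's data:

```python
from itertools import product

def full_factorial(n_factors):
    # all 2**n runs in coded units (-1 = low level, +1 = high level)
    return list(product((-1, 1), repeat=n_factors))

def main_effect(design, response, factor):
    # difference between mean response at the high and low levels of one factor
    hi = [r for run, r in zip(design, response) if run[factor] == 1]
    lo = [r for run, r in zip(design, response) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# factors (coded): 0 = sample weight, 1 = digestion time, 2 = acid volume
design = full_factorial(3)
# synthetic recoveries in which only digestion time matters
response = [90.0 + 4.0 * run[1] for run in design]
effects = [main_effect(design, response, j) for j in range(3)]
print(effects)
```

Factors with effects near zero can then be fixed at convenient levels, which is how a single common digestion condition for all nine metals is justified.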
NASA Astrophysics Data System (ADS)
Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.
2012-01-01
There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques provide an avenue to select robust settings capable of operating consistently across a large variety of image scenes. Many researchers in this area are faced with a paucity of data. Unfortunately, there are no data splitting methods for model validation of datasets with small sample sizes, and training and test sets of hyperspectral images are typically chosen randomly. Previous research has developed a framework for optimizing anomaly detection in hyperspectral imagery by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher's score, the ratio of target pixels, and the number of clusters. We have developed a method for selecting hyperspectral image training and test subsets that yields consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume of, and average distance between, image noise characteristics. The small-sample training and test selection method is contrasted with randomly selected training sets, as well as training sets chosen by the CADEX and DUPLEX algorithms, for the well-known Reed-Xiaoli anomaly detector.
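A CADEX/Kennard-Stone-style greedy maximin selection over the image noise-characteristic vectors can be sketched as follows; the 1-D features here are synthetic placeholders for the (Fisher's score, target-pixel ratio, cluster count) vectors:

```python
def kennard_stone(points, k):
    """Greedy maximin subset selection (CADEX/Kennard-Stone style):
    seed with the two farthest points, then repeatedly add the point
    whose distance to the already-selected set is largest."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    pairs = [(d2(p, q), i, j)
             for i, p in enumerate(points) for j, q in enumerate(points) if i < j]
    _, i, j = max(pairs)
    chosen = [i, j]
    while len(chosen) < k:
        rest = [m for m in range(len(points)) if m not in chosen]
        nxt = max(rest, key=lambda m: min(d2(points[m], points[c]) for c in chosen))
        chosen.append(nxt)
    return [points[c] for c in chosen]

features = [(float(v),) for v in range(11)]   # stand-in noise characteristics
print(kennard_stone(features, 3))
```

Selecting training images that spread out in this feature space is what makes the resulting RPD settings robust across scene types.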
Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei
2012-07-15
Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
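A toy one-dimensional analogue of the TV-regularized fluence-map objective can be illustrated with plain gradient descent on a Huber-smoothed TV term. TFOCS itself uses conic first-order methods on the exact TV form; this sketch, with made-up weights and a made-up "fluence" profile, only illustrates the objective being minimized:

```python
import math

LAM, EPS = 0.2, 0.01   # TV weight and Huber smoothing (illustrative values)

def tv(x):
    # exact (non-smoothed) total variation of a 1-D profile
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def objective(x, b):
    data = 0.5 * sum((xi - bi) ** 2 for xi, bi in zip(x, b))
    smooth_tv = sum(math.sqrt((x[i + 1] - x[i]) ** 2 + EPS)
                    for i in range(len(x) - 1))
    return data + LAM * smooth_tv

def gradient(x, b):
    g = [xi - bi for xi, bi in zip(x, b)]
    for i in range(len(x) - 1):
        d = x[i + 1] - x[i]
        gd = LAM * d / math.sqrt(d * d + EPS)
        g[i] -= gd
        g[i + 1] += gd
    return g

b = [0.0, 0.05, 0.0, 0.1, 1.0, 0.95, 1.0, 0.9]   # noisy two-level "fluence"
x = list(b)
for _ in range(2000):                 # plain gradient descent
    g = gradient(x, b)
    x = [xi - 0.05 * gi for xi, gi in zip(x, g)]
print(objective(x, b), tv(x), tv(b))
```

Penalizing TV in this way smooths away dispensable fluctuations in the fluence map, which is what lets DASSIM-RT eliminate dispensable segments while keeping conformity.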
Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors.
Riva-Murray, Karen; Bradley, Paul M; Scudder Eikenberry, Barbara C; Knightes, Christopher D; Journey, Celeste A; Brigham, Mark E; Button, Daniel T
2013-06-01
Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hg(fish)) divided by the water Hg concentration (Hg(water)) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hg(water) sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hg(water) estimates. Models were evaluated for parsimony, using Akaike's Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg-UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics. PMID:23668662
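The two quantities at the heart of this analysis, the bioaccumulation factor and the information criterion used to rank candidate models, are straightforward to compute. A minimal sketch (synthetic data, plain OLS standing in for the authors' full models with trophic-position covariates):

```python
import numpy as np

def baf(hg_fish, hg_water):
    """Bioaccumulation factor: fish Hg concentration over water Hg concentration."""
    return hg_fish / hg_water

def aic_ols(y, X):
    """AIC for an ordinary least-squares fit (Gaussian errors), used to
    rank candidate Hg(water) predictor sets; lower is more parsimonious."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)     # +1 for the error variance
```

Comparing AIC across models built from different Hg(water) metrics (mean annual FMeHg, shorter-window means, high-percentile metrics) is exactly the model-selection step the abstract describes.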
Fauvelle, Vincent; Mazzella, Nicolas; Belles, Angel; Moreira, Aurélie; Allan, Ian J; Budzinski, Hélène
2014-05-01
This paper presents an optimization of the pharmaceutical Polar Organic Chemical Integrative Sampler (POCIS-200) under controlled laboratory conditions for the sampling of acidic (2,4-dichlorophenoxyacetic acid (2,4-D), acetochlor ethanesulfonic acid (ESA), acetochlor oxanilic acid, bentazon, dicamba, mesotrione, and metsulfuron) and polar (atrazine, diuron, and desisopropylatrazine) herbicides in water. Indeed, the conventional configuration of the POCIS-200 (46 cm² exposure window, 200 mg of Oasis® hydrophilic lipophilic balance (HLB) receiving phase) is not appropriate for the sampling of very polar and acidic compounds because they rapidly reach a thermodynamic equilibrium with the Oasis HLB receiving phase. Thus, we investigated several ways to extend the initial linear accumulation. On the one hand, increasing the mass of sorbent to 600 mg resulted in sampling rates (Rs) twice as high as those observed with 200 mg (e.g., 287 vs. 157 mL day−1 for acetochlor ESA). Although detection limits could thereby be reduced, most acidic analytes followed a biphasic uptake, precluding the use of the conventional first-order model and preventing us from estimating time-weighted average concentrations. On the other hand, reducing the exposure window (3.1 vs. 46 cm²) allowed linear accumulation of all analytes over 35 days, but Rs values were dramatically reduced (e.g., 157 vs. 11 mL day−1 for acetochlor ESA). Otherwise, the observation of biphasic releases of performance reference compounds (PRC), though mirroring the biphasic uptake of the acidic herbicides, might complicate the implementation of the PRC approach to correct for environmental exposure conditions. PMID:24691721
Huang, Bao-Tian; Lu, Jia-Yang; Lin, Pei-Xian; Chen, Jian-Zhou; Li, De-Rui; Chen, Chuang-Zhen
2015-01-01
This study aimed to determine the optimal fraction scheme (FS) in patients with small peripheral non-small cell lung cancer (NSCLC) undergoing stereotactic body radiotherapy (SBRT) with the 4 × 12 Gy scheme as the reference. CT simulation data for sixteen patients diagnosed with primary NSCLC or metastatic tumor with a single peripheral lesion ≤3 cm were used in this study. Volumetric modulated arc therapy (VMAT) plans were designed based on ten different FS of 1 × 25 Gy, 1 × 30 Gy, 1 × 34 Gy, 3 × 15 Gy, 3 × 18 Gy, 3 × 20 Gy, 4 × 12 Gy, 5 × 12 Gy, 6 × 10 Gy and 10 × 7 Gy. Five different radiobiological models were employed to predict the tumor control probability (TCP) value. Three other models were utilized to estimate the normal tissue complication probability (NTCP) value to the lung and the modified equivalent uniform dose (mEUD) value to the chest wall (CW). The 1 × 30 Gy regimen is recommended to achieve 4.2% higher TCP and slightly higher NTCP and mEUD values to the lung and CW compared with the 4 × 12 Gy schedule, respectively. This regimen also greatly shortens the treatment duration. However, the 3 × 15 Gy schedule is suggested in patients where the lung-to-tumor volume ratio is small or where the tumor is adjacent to the CW. PMID:26657569
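For intuition about how such fraction schemes compare, the linear-quadratic (LQ) model with a Poisson tumor control probability is a common starting point. The radiobiological parameters below (alpha, alpha/beta, clonogen number N0) are generic placeholder values, not those used in the study's five TCP models:

```python
import numpy as np

def bed(n_frac, dose_per_frac, alpha_beta=10.0):
    """Biologically effective dose for an n x d fractionation scheme (LQ model)."""
    return n_frac * dose_per_frac * (1 + dose_per_frac / alpha_beta)

def tcp_poisson(n_frac, dose_per_frac, n0=1e7, alpha=0.3, alpha_beta=10.0):
    """Poisson TCP with linear-quadratic cell survival:
    TCP = exp(-N0 * SF), SF = exp(-n (alpha d + beta d^2))."""
    beta = alpha / alpha_beta
    sf = np.exp(-n_frac * (alpha * dose_per_frac + beta * dose_per_frac**2))
    return np.exp(-n0 * sf)
```

Under these placeholder parameters the 1 x 30 Gy scheme has a higher BED, and hence higher modeled TCP, than the 4 x 12 Gy reference, consistent in direction with the comparison reported above.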
Nie Xiaobo; Liang Jian; Yan Di
2012-12-15
Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient-specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient-specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days, depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the functional form of the coefficients. Eleven head-and-neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during H&N cancer radiotherapy can be represented using the first 3 to 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for H&N
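The PCA step of such an organ sample generator can be illustrated with a plain SVD on a matrix of flattened daily organ surfaces. The data here are synthetic and the time-varying regression part is omitted; this is a sketch of the decomposition only, not the authors' implementation:

```python
import numpy as np

def organ_pca(shapes, n_modes=4):
    """shapes: (n_days, n_points*3) matrix of daily organ surface coordinates.
    Returns the mean shape, the leading eigenvectors of the variation,
    per-day coefficients, and the fraction of variance each mode explains."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]                 # eigenvectors of the covariance
    coeffs = X @ modes.T                 # per-day coefficients
    explained = S[:n_modes]**2 / np.sum(S**2)
    return mean, modes, coeffs, explained
```

Sampling new organ instances then amounts to drawing coefficients from their fitted distribution and forming `mean + coeffs @ modes`, mirroring the abstract's finding that 3 to 4 eigenvectors suffice.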
D'Hondt, Matthias; Van Dorpe, Sylvia; Mehuys, Els; Deforce, Dieter; DeSpiegeleer, Bart
2010-12-01
A sensitive and selective HPLC method for the assay and degradation of salmon calcitonin, a 32-amino acid peptide drug, formulated at low concentrations (400 ppm m/m) in a bioadhesive nasal powder containing polymers, was developed and validated. The sample preparation step was optimized using Plackett-Burman and Onion experimental designs. The response functions evaluated were calcitonin recovery and analytical stability. The best results were obtained by treating the sample with 0.45% (v/v) trifluoroacetic acid at 60 °C for 40 min. These extraction conditions did not yield any observable degradation, while a maximum recovery for salmon calcitonin of 99.6% was obtained. The HPLC-UV/MS methods used a reversed-phase C18 Vydac Everest column, with a gradient system based on aqueous acid and acetonitrile. UV detection, using trifluoroacetic acid in the mobile phase, was used for the assay of calcitonin and related degradants. Electrospray ionization (ESI) ion trap mass spectrometry, using formic acid in the mobile phase, was implemented for the confirmatory identification of degradation products. Validation results showed that the methodology was fit for the intended use, with an accuracy of 97.4 ± 4.3% for the assay and detection limits for degradants ranging between 0.5 and 2.4%. Pilot stability tests of the bioadhesive powder under different storage conditions showed a temperature-dependent decrease in the salmon calcitonin assay value, with no equivalent increase in degradation products, explained by the chemical interaction between salmon calcitonin and the carbomer polymer. PMID:20655159
Barau, Caroline; Furlan, Valérie; Debray, Dominique; Taburet, Anne-Marie; Barrail-Tran, Aurélie
2012-01-01
AIMS The aims were to estimate the mycophenolic acid (MPA) population pharmacokinetic parameters in paediatric liver transplant recipients, to identify the factors affecting MPA pharmacokinetics and to develop a limited sampling strategy to estimate individual MPA AUC(0,12 h). METHODS Twenty-eight children, 1.1 to 18.0 years old, received oral mycophenolate mofetil (MMF) therapy combined with either tacrolimus (n= 23) or ciclosporin (n= 5). The population parameters were estimated from a model-building set of 16 intensive pharmacokinetic datasets obtained from 16 children. The data were analyzed by nonlinear mixed effect modelling, using a one compartment model with first order absorption and first order elimination and random effects on the absorption rate (ka), the apparent volume of distribution (V/F) and apparent clearance (CL/F). RESULTS Two covariates, time since transplantation (≤ and >6 months) and age, affected MPA pharmacokinetics. ka, estimated at 1.7 h−1 at age 8.7 years, exhibited large interindividual variability (308%). V/F, estimated at 64.7 l, increased about 2.3 times in children during the immediate post transplantation period. This increase was due to the increase in the unbound MPA fraction caused by the low albumin concentration. CL/F was estimated at 12.7 l h−1. To estimate individual AUC(0,12 h), the pharmacokinetic parameters obtained with the final model, including covariates, were coded in Adapt II® software, using the Bayesian approach. The AUC(0,12 h) estimated from concentrations measured 0, 1 and 4 h after administration of MMF did not differ from reference values. CONCLUSIONS This study allowed the estimation of the population pharmacokinetic MPA parameters. A simple sampling procedure is suggested to help optimize paediatric patient care. PMID:22329639
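The structural model here is the standard one-compartment model with first-order absorption and elimination. A minimal sketch using the population point estimates quoted above (ka = 1.7 h⁻¹, V/F = 64.7 L, CL/F = 12.7 L h⁻¹); the 600 mg dose is a hypothetical value for illustration, and covariate effects are omitted:

```python
import numpy as np

def mpa_conc(t, dose, ka=1.7, v_f=64.7, cl_f=12.7):
    """One-compartment model, first-order absorption and elimination:
    C(t) = D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), ke = CL/V.
    Defaults are the population estimates from the abstract
    (ka in 1/h, V/F in L, CL/F in L/h); dose in mg, t in h."""
    ke = cl_f / v_f
    return dose * ka / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
```

A useful identity for limited-sampling work: AUC(0,∞) = dose / (CL/F), which is what the Bayesian strategy with samples at 0, 1 and 4 h aims to recover from only three concentrations.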
Acoustic investigations of lakes as justification for the optimal location of core sampling
NASA Astrophysics Data System (ADS)
Krylov, P.; Nourgaliev, D.; Yasonov, P.; Kuzin, D.
2014-12-01
Lacustrine sediments contain a long, high-resolution record of sedimentation processes associated with changes in the environment. Paleomagnetic study of the properties of these sediments provides a detailed trace of changes in the paleoenvironment. However, factors such as landslides, earthquakes, and the presence of gas in the sediments can disturb sediment stratification. Seismic profiling allows detailed investigation of the bottom relief and provides information about the thickness and structure of the deposits, which makes this method ideally suited for determining the configuration of the lake basin and the overlying lake sediment stratigraphy. Most seismic studies have concentrated on large and deep lakes containing a thick sedimentary sequence, but small and shallow lakes containing a thinner sedimentary column, located in key geographic locations and geological settings, can also provide a valuable record of Holocene history. Seismic data are crucial when choosing the optimal location of core sampling. Thus, continuous seismic profiling should be used routinely before coring lake sediments for the reconstruction of paleoclimate. We have carried out seismic profiling on lakes Balkhash (Kazakhstan), Yarovoye, Beloe, Aslykul and Chebarkul (Russia). The results of the field work will be presented in the report. The work is performed according to the Russian Government Program of Competitive Growth of Kazan Federal University and also by RFBR research projects No. 14-05-31376-a and 14-05-00785-a.
NASA Astrophysics Data System (ADS)
Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.
2007-07-01
The present paper describes the optimization of sample dimensions of a 241Am-Be neutron-source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ analysis of environmental water rejects. The optimal dimensions have been achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process has been performed for the proposed preliminary setup with measurements of thermal neutron flux by the activation technique, using indium foils both bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.
Pietrzyńska, Monika; Voelkel, Adam
2014-11-01
In-needle extraction was applied for the preparation of aqueous samples. This technique was used for direct isolation of analytes from liquid samples, achieved by forcing the flow of the sample through a sorbent layer: silica or polymer (styrene/divinylbenzene). A specially designed needle was packed with three different sorbents on which the analytes (phenol, p-benzoquinone, 4-chlorophenol, thymol and caffeine) were retained. Acceptable sampling conditions for direct analysis of liquid samples were selected. Experimental data collected from a series of liquid-sample analyses made with the in-needle device showed that the effectiveness of the system depends on various parameters, such as the breakthrough volume and sorption capacity, the sampling flow rate, the solvent effect on the elution step, and the volume of solvent required for elution. The optimal sampling flow rate was in the range of 0.5-2 mL/min, and the minimum volume of solvent was at the 400 µL level. PMID:25127610
NASA Astrophysics Data System (ADS)
Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang
2010-05-01
Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for the data acquisition campaign, information needs towards the prediction goal should be satisfied in an efficient and task-specific manner. For finding the best one among a set of design candidates, an objective function is commonly evaluated, which measures the expected impact of data on prediction confidence, prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility associated with the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of possible subjective prior assumptions, which would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) expected conditional variance and (ii) expected relative entropy of a given prediction goal. The
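The expected-conditional-variance criterion can be sketched as a toy Monte-Carlo preposterior analysis: for hypothetical "true" realizations drawn from the prior ensemble, weight the ensemble with a GLUE-style Gaussian likelihood of the simulated noisy data and average the resulting posterior variance of the prediction. This is a schematic illustration in the spirit of CLUE, not the authors' implementation; all data below are synthetic.

```python
import numpy as np

def expected_conditional_variance(pred, data_sim, noise_std, n_outer=200, rng=None):
    """pred: (n,) prediction for each prior realization.
    data_sim: (n, m) simulated measurements under a candidate design.
    Returns the Monte-Carlo estimate of the expected posterior variance
    of the prediction, marginalized over the yet-unknown data values."""
    rng = np.random.default_rng(rng)
    n, m = data_sim.shape
    ecv = 0.0
    for _ in range(n_outer):
        i = rng.integers(n)                              # hypothetical truth
        d_obs = data_sim[i] + rng.normal(0, noise_std, m)
        ll = -0.5 * np.sum((data_sim - d_obs) ** 2, axis=1) / noise_std**2
        w = np.exp(ll - ll.max()); w /= w.sum()          # likelihood weights
        mu = np.sum(w * pred)
        ecv += np.sum(w * (pred - mu) ** 2)              # posterior variance
    return ecv / n_outer
```

Designs are then ranked by this number: the candidate with the smallest expected conditional variance promises the largest reduction in prediction uncertainty before any data are collected.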
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-01-01
In the field of computer-aided mammographic mass detection, many different features and classifiers have been tested. Frequently, the relevant features and optimal topology for the artificial neural network (ANN)-based approaches at the classification stage are unknown, and thus determined by trial-and-error experiments. In this study, we analyzed a classifier that evolves ANNs using genetic algorithms (GAs), which combines feature selection with the learning task. The classifier named “Phased Searching with NEAT in a Time-Scaled Framework” was analyzed using a dataset with 800 malignant and 800 normal tissue regions in a 10-fold cross-validation framework. The classification performance measured by the area under a receiver operating characteristic (ROC) curve was 0.856 ± 0.029. The result was also compared with four other well-established classifiers that include fixed-topology ANNs, support vector machines (SVMs), linear discriminant analysis (LDA), and bagged decision trees. The results show that Phased Searching outperformed the LDA and bagged decision tree classifiers, and was only significantly outperformed by SVM. Furthermore, the Phased Searching method required fewer features and discarded superfluous structure or topology, thus incurring a lower feature computational and training and validation time requirement. Analyses performed on the network complexities evolved by Phased Searching indicate that it can evolve optimal network topologies based on its complexification and simplification parameter selection process. From the results, the study also concluded that the three classifiers – SVM, fixed-topology ANN, and Phased Searching with NeuroEvolution of Augmenting Topologies (NEAT) in a Time-Scaled Framework – are performing comparably well in our mammographic mass detection scheme. PMID:25392680
Lundin, Jessica I; Dills, Russell L; Ylitalo, Gina M; Hanson, M Bradley; Emmons, Candice K; Schorr, Gregory S; Ahmad, Jacqui; Hempelmann, Jennifer A; Parsons, Kim M; Wasser, Samuel K
2016-01-01
Biologic sample collection in wild cetacean populations is challenging. Most information on toxicant levels is obtained from blubber biopsy samples; however, sample collection is invasive and strictly regulated under permit, thus limiting sample numbers. Methods are needed to monitor toxicant levels that increase temporal and repeat sampling of individuals for population health and recovery models. The objective of this study was to optimize measuring trace levels (parts per billion) of persistent organic pollutants (POPs), namely polychlorinated-biphenyls (PCBs), polybrominated-diphenyl-ethers (PBDEs), dichlorodiphenyltrichloroethanes (DDTs), and hexachlorocyclobenzene, in killer whale scat (fecal) samples. Archival scat samples, initially collected, lyophilized, and extracted with 70 % ethanol for hormone analyses, were used to analyze POP concentrations. The residual pellet was extracted and analyzed using gas chromatography coupled with mass spectrometry. Method detection limits ranged from 11 to 125 ng/g dry weight. The described method is suitable for p,p'-DDE, PCBs-138, 153, 180, and 187, and PBDEs-47 and 100; other POPs were below the limit of detection. We applied this method to 126 scat samples collected from Southern Resident killer whales. Scat samples from 22 adult whales also had known POP concentrations in blubber and demonstrated significant correlations (p < 0.01) between matrices across target analytes. Overall, the scat toxicant measures matched previously reported patterns from blubber samples of decreased levels in reproductive-age females and a decreased p,p'-DDE/∑PCB ratio in J-pod. Measuring toxicants in scat samples provides an unprecedented opportunity to noninvasively evaluate contaminant levels in wild cetacean populations; these data have the prospect to provide meaningful information for vital management decisions. PMID:26298464
Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensors/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
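Of the propagators listed, Cowell's method is the simplest: direct numerical integration of the equations of motion. A minimal two-body sketch with a fixed-step RK4 integrator (Earth gravitational parameter; this is a generic illustration, not IPOST's actual implementation):

```python
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2

def cowell_rk4(r0, v0, dt, n_steps, mu=MU_EARTH):
    """Cowell's method: integrate r'' = -mu r / |r|^3 directly,
    here with a fixed-step classical RK4 scheme."""
    def deriv(y):
        r = y[:3]
        return np.concatenate([y[3:], -mu * r / np.linalg.norm(r)**3])
    y = np.concatenate([r0, v0]).astype(float)
    for _ in range(n_steps):
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * dt * k1)
        k3 = deriv(y + 0.5 * dt * k2)
        k4 = deriv(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[:3], y[3:]
```

Perturbing accelerations (thrust, third bodies, drag) are added to `deriv` in the same way, which is what makes Cowell's formulation convenient for the finite-burn and entry events IPOST models.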
Giuliano, Anna R.; Nielson, Carrie M.; Flores, Roberto; Dunne, Eileen F.; Abrahamsen, Martha; Papenfuss, Mary R.; Markowitz, Lauri E.; Smith, Danelle; Harris, Robin B.
2014-01-01
Background Human papillomavirus (HPV) infection in men contributes to infection and cervical disease in women as well as to disease in men. This study aimed to determine the optimal anatomic site(s) for HPV detection in heterosexual men. Methods A cross-sectional study of HPV infection was conducted in 463 men from 2003 to 2006. Urethral, glans penis/coronal sulcus, penile shaft/prepuce, scrotal, perianal, anal canal, semen, and urine samples were obtained. Samples were analyzed for sample adequacy and HPV DNA by polymerase chain reaction and genotyping. To determine the optimal sites for estimating HPV prevalence, site-specific prevalences were calculated and compared with the overall prevalence. Sites and combinations of sites were excluded until a recalculated prevalence was reduced by <5% from the overall prevalence. Results The overall prevalence of HPV was 65.4%. HPV detection was highest at the penile shaft (49.9% for the full cohort and 47.9% for the subcohort of men with complete sampling), followed by the glans penis/coronal sulcus (35.8% and 32.8%) and scrotum (34.2% and 32.8%). Detection was lowest in urethra (10.1% and 10.2%) and semen (5.3% and 4.8%) samples. Exclusion of urethra, semen, and either perianal, scrotal, or anal samples resulted in a <5% reduction in prevalence. Conclusions At a minimum, the penile shaft and the glans penis/coronal sulcus should be sampled in heterosexual men. A scrotal, perianal, or anal sample should also be included for optimal HPV detection. PMID:17955432
Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek
2016-03-01
Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) in providing a more uniform coverage for a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device to validate the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus to improve them. PMID
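The LHS-then-PCA pipeline can be sketched with plain NumPy. The stratified sampler below is a generic Latin hypercube (not the authors' modified variant), and the per-flux bounds are placeholders for ranges one would obtain from, e.g., flux variability analysis at the fixed optimal objective:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin-hypercube samples within per-dimension [low, high] bounds:
    each dimension is split into n_samples strata with one point per stratum."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = lo.size
    strata = np.tile(np.arange(n_samples), (d, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n_samples, d))) / n_samples
    return lo + u * (hi - lo)

def main_flux_patterns(samples, n_pc=2):
    """PCA of flux samples: the leading principal axes capture the dominant
    alternative-pathway variability. Returns the axes and the full
    explained-variance ratio vector."""
    X = samples - samples.mean(axis=0)
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_pc], S**2 / np.sum(S**2)
```

Stratification is what gives LHS its more uniform coverage than plain Monte Carlo at equal sample count, the property the abstract highlights for large networks.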
Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...
Spectral and time domain OCT: a tool for optimal imaging of biological samples
NASA Astrophysics Data System (ADS)
Szkulmowski, Maciej; Gorczynska, Iwona; Szlag, Daniel; Wojtkowski, Maciej
2012-01-01
Spectral and Time domain OCT (STdOCT) is a data analysis scheme proposed for sensitive Doppler imaging. In this work we show that it has an additional feature: when compared with tomograms created using complex or amplitude averaging, tomograms prepared using STdOCT have the highest contrast-to-noise ratio and preserve a high signal-to-noise ratio and image dynamic range. Images of a uniformly scattering phantom, as well as images of the human retina in vivo, prepared with the three different techniques are shown.
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2015-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
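The single-core sampling probability under Gaussian guidance error can be estimated with a quick Monte-Carlo sketch. The spherical-tumor assumption and the isotropic error split below are simplifications of the paper's registered 3D tumor surfaces and anisotropic (axial vs. lateral/elevational) error analysis:

```python
import numpy as np

def sampling_probability(radius_mm, rms_err_mm, n=200_000, rng=0):
    """Monte-Carlo estimate of the probability that a single biopsy core,
    aimed at the center of a spherical tumor of the given radius, lands
    inside it under isotropic Gaussian guidance error (per-axis sigma
    chosen so the 3-D RMS error matches rms_err_mm)."""
    sigma = rms_err_mm / np.sqrt(3)          # per-axis standard deviation
    g = np.random.default_rng(rng)
    offsets_sq = np.sum(g.normal(0, sigma, (n, 3)) ** 2, axis=1)
    return np.sum(offsets_sq <= radius_mm**2) / n
```

Extending this to anisotropic errors (larger lateral/elevational sigma, smaller axial sigma) and to the 50%-core-involvement criterion reproduces the kind of per-tumor core-count planning the abstract describes.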
Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula
2015-01-01
Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within a z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567
Delgado, J; Moure, J C; Vives-Gilabert, Y; Delfino, M; Espinosa, A; Gómez-Ansón, B
2014-07-01
A scheme to significantly speed up the processing of MRI with FreeSurfer (FS) is presented. The scheme is aimed at maximizing the productivity (number of subjects processed per unit time) for the use case of research projects with datasets involving many acquisitions. The scheme combines the already existing GPU-accelerated version of the FS workflow with a task-level parallel scheme supervised by a resource scheduler. This allows for optimum utilization of the computational power of a given hardware platform while avoiding problems with shortages of platform resources. The scheme can be executed on a wide variety of platforms, as its implementation only involves the script that orchestrates the execution of the workflow components; the FS code itself requires no modifications. The scheme has been implemented and tested on a commodity platform within the reach of most research groups (a personal computer with four cores and an NVIDIA GeForce GTX 480 graphics card). Using the scheduled task-level parallel scheme, a productivity above 0.6 subjects per hour is achieved on the test platform, corresponding to a speedup of over six times compared to the default CPU-only serial FS workflow. PMID:24430512
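The task-level parallel idea, keeping a fixed pool of workers busy with per-subject jobs so the platform stays saturated without oversubscription, can be sketched in a few lines. This is a minimal illustration, not the paper's scheduler: `recon_subject` is a placeholder for launching one subject's FS workflow, not FreeSurfer's actual invocation.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def recon_subject(subject_id):
    """Placeholder for one subject's FS workflow (in practice this would
    launch the recon as an external process)."""
    time.sleep(0.01)  # stand-in for hours of real processing
    return subject_id, "done"

def process_cohort(subject_ids, max_workers=4):
    """Resource scheduler: never more than max_workers subjects in flight,
    so a 4-core platform is saturated but not oversubscribed."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(recon_subject, s): s for s in subject_ids}
        for fut in as_completed(futures):
            subject_id, status = fut.result()
            results[subject_id] = status
    return results

statuses = process_cohort([f"subj{i:03d}" for i in range(8)])
```

Threads suffice here because real recon jobs would run as external processes; the orchestration script only waits on them, matching the paper's point that the FS code needs no modification.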
Tan, A A; Azman, S N; Abdul Rani, N R; Kua, B C; Sasidharan, S; Kiew, L V; Othman, N; Noordin, R; Chen, Y
2011-12-01
There is great diversity in protein sample types and origins; therefore, the optimal procedure for each sample type must be determined empirically. To obtain reproducible and complete sample presentation that displays as many proteins as possible on the desired 2DE gel, it is critical to perform additional sample preparation steps that improve the quality of the final results without selectively losing proteins. To address this, we developed a general method suitable for diverse sample types based on the phenol-chloroform extraction method (represented by TRI reagent). This method yielded good results when used to analyze a human breast cancer cell line (MCF-7), Vibrio cholerae, Cryptocaryon irritans cysts, and liver abscess fat tissue, representing a cell line, bacteria, a parasite cyst, and pus, respectively. For each sample type, several attempts were made to methodically compare protein isolation using the TRI-reagent Kit, the EasyBlue Kit, PRO-PREP™ Protein Extraction Solution, and lysis buffer. The most useful protocol allows the extraction and separation of a wide diversity of protein samples and is reproducible among repeated experiments. Our results demonstrated that the modified TRI-reagent Kit gave the highest protein yield as well as the greatest total protein spot count for all sample types. Distinctive differences in spot patterns were also observed in the 2DE gels from the different extraction methods used for each sample type. PMID:22433892
Technology Transfer Automated Retrieval System (TEKTRAN)
The primary advantage of the Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability to search for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
NASA Astrophysics Data System (ADS)
Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.
2013-12-01
Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used for an OSI scenario, on-site sampling conditions, required sampling volumes, and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF6, 127Xe, and 37Ar was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. Effectiveness of various sampling approaches and the results of tracer gas measurements will be presented.
The optimal process of self-sampling in fisheries: lessons learned in the Netherlands.
Kraan, M; Uhlmann, S; Steenbergen, J; Van Helmond, A T M; Van Hoof, L
2013-10-01
At-sea sampling of commercial fishery catches by observers is a relatively expensive exercise. The fact that an observer has to stay on board for the duration of the trip results in clustered samples and effectively small sample sizes, whereas the aim is to make inferences regarding several trips from an entire fleet. From this perspective, sampling by fishermen themselves (self-sampling) is an attractive alternative, because a larger number of trips can be sampled at lower cost. Self-sampling should not be used too casually, however, as there are often issues of data acceptance related to it. This article shows that these issues are not easily dealt with in a statistical manner. Improvements might be made if self-sampling is understood as a form of cooperative research. Cooperative research has a number of dilemmas and benefits associated with it. This article suggests that if the guidelines for cooperative research are taken into account, the benefits are more likely to materialize. Secondly, acknowledging the dilemmas and consciously dealing with them might lay the basis for trust-building, which is an essential element in the acceptance of data derived from self-sampling programmes. PMID:24090557
Reynolds, Kaycee N; Loecke, Terrance D; Burgin, Amy J; Davis, Caroline A; Riveros-Iregui, Diego; Thomas, Steven A; St Clair, Martin A; Ward, Adam S
2016-06-21
Understanding linked hydrologic and biogeochemical processes such as nitrate loading to agricultural streams requires that the sampling bias and precision of monitoring strategies be known. An existing spatially distributed, high-frequency nitrate monitoring network covering ∼40% of Iowa provided direct observations of in situ nitrate concentrations at a temporal resolution of 15 min. Systematic subsampling of nitrate records allowed for quantification of uncertainties (bias and precision) associated with estimates of various nitrate parameters, including: mean nitrate concentration, proportion of samples exceeding the nitrate drinking water standard (DWS), peak (>90th quantile) nitrate concentration, and nitrate flux. We subsampled continuous records for 47 site-year combinations mimicking common, but labor-intensive, water-sampling regimes (e.g., time-interval, stage-triggered, and dynamic-discharge storm sampling). Our results suggest that time-interval sampling most efficiently characterized all nitrate parameters, except at coarse frequencies for nitrate flux. Stage-triggered storm sampling most precisely captured nitrate flux when less than 0.19% of possible 15 min observations for a site-year were used. The time-interval strategy had the greatest return on sampling investment by most precisely and accurately quantifying nitrate parameters per sampling effort. These uncertainty estimates can aid in designing sampling strategies focused on nitrate monitoring in the tile-drained Midwest or similar agricultural regions. PMID:27192208
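The systematic-subsampling idea can be sketched generically: thin a dense record down to a coarser time-interval scheme and compare the subsample estimate of a parameter (here the mean) against the full-record value. The synthetic diel-cycle series below is illustrative only; it is not the Iowa nitrate data, and the 15 min resolution and interval choices are assumptions for the sketch.

```python
import math

def subsample_bias(record, step, offset=0):
    """Relative bias of the mean when keeping every `step`-th observation
    of a dense record, starting at index `offset` (step=1 keeps all)."""
    sub = record[offset::step]
    full_mean = sum(record) / len(record)
    sub_mean = sum(sub) / len(sub)
    return (sub_mean - full_mean) / full_mean

# Synthetic series: one value per 15 min for 10 days, with a daily cycle
# (96 observations per day) around a baseline concentration of 10.
record = [10 + 3 * math.sin(2 * math.pi * i / 96) for i in range(960)]

# Sampling once a day, always at the daily peak, overestimates the mean
# by 30%; hourly sampling averages over the cycle and is nearly unbiased.
daily_peak_bias = subsample_bias(record, step=96, offset=24)
hourly_bias = subsample_bias(record, step=4)
```

Repeating this over many records and offsets, as the study does over 47 site-years, turns these single numbers into bias and precision distributions per sampling strategy.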
NASA Astrophysics Data System (ADS)
Shaw, M. Sam; Coe, Joshua D.; Sewell, Thomas D.
2009-06-01
An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The "full" system of interest is calculated using density functional theory (DFT) with a 6-31G* basis set for the configurational energies. The "reference" system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
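The nested-chain construction, many cheap reference-potential steps assembled into one trial move that is then accepted with a correction involving the full potential, can be sketched on a toy 1D system. The potentials and parameters below are illustrative stand-ins (a quartic "full" energy and its harmonic "reference" part), not the DFT and fitted pair-potential energies of the paper, and the sketch samples a canonical rather than NPT ensemble.

```python
import math
import random

def nested_mcmc(e_full, e_ref, x0, beta=1.0, n_outer=2000, n_inner=10,
                step=0.5, seed=0):
    """Outer chain targets exp(-beta * e_full); each outer trial move is
    built from n_inner Metropolis steps in the cheap reference potential."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_outer):
        # Inner chain: ordinary Metropolis sampling of the reference system.
        y = x
        for _ in range(n_inner):
            y_new = y + rng.uniform(-step, step)
            de = e_ref(y_new) - e_ref(y)
            if de <= 0.0 or rng.random() < math.exp(-beta * de):
                y = y_new
        # Outer acceptance corrects the reference-system move so the chain
        # samples the full-system distribution.
        log_acc = -beta * ((e_full(y) - e_full(x)) - (e_ref(y) - e_ref(x)))
        if log_acc >= 0.0 or math.log(rng.random()) < log_acc:
            x = y
        samples.append(x)
    return samples

# Toy stand-ins: "full" = quartic well, "reference" = its harmonic part.
e_full = lambda x: 0.5 * x ** 2 + 0.1 * x ** 4
e_ref = lambda x: 0.5 * x ** 2
chain = nested_mcmc(e_full, e_ref, x0=0.0)
```

The efficiency gain comes from evaluating the expensive `e_full` only once per outer move, while the inner chain decorrelates the walker using only the cheap `e_ref`.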
NASA Astrophysics Data System (ADS)
Tang, Gao; Jiang, FanHuag; Li, JunFeng
2015-11-01
Near-Earth asteroids have attracted considerable interest, and developments in low-thrust propulsion technology make complex deep-space exploration missions possible. A mission that departs from low Earth orbit, uses a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid, and brings a sample back is investigated. By dividing the mission into five segments, each part of the complex mission is solved separately, and different methods are used to find optimal trajectories for every segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumption needed to escape from the Earth. To avoid possible numerical difficulties of indirect methods, a direct method that parameterizes the switching moment and direction of the thrust vector is proposed. To maximize the mass of the sample, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding a proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques including both direct and indirect methods are investigated to optimize trajectories for different segments, and they can be easily extended to other missions and more precise dynamic models.
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction in many existing studies, the results indicated that antibiotics with low pK a values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pK a values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than those on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils. PMID:26449847
Ramyachitra, D.; Sofia, M.; Manikandan, P.
2015-01-01
Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; thus the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN), and Interval Valued Classification (IVC), together with the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222
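Of the baseline classifiers mentioned, k-nearest neighbor is simple enough to sketch from scratch. The tiny "expression profiles" below are synthetic illustrations of the high-dimensionality/small-sample setting, not the study's dataset, and this is the plain KNN baseline, not the IVPSO algorithm.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples
    (Euclidean distance), as in the KNN baseline."""
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy profiles: few labeled samples, each a short feature vector.
train_X = [[1.0, 0.9, 1.1, 0.0], [0.9, 1.1, 1.0, 0.1],
           [0.0, 0.1, 0.0, 1.0], [0.1, 0.0, 0.2, 0.9]]
train_y = ["tumor", "tumor", "normal", "normal"]
pred = knn_predict(train_X, train_y, [0.95, 1.0, 1.05, 0.05])
```

With thousands of genes and few samples, distances concentrate and feature selection becomes essential, which is exactly the regime the interval-valued and swarm-optimized variants target.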
NASA Astrophysics Data System (ADS)
Amat-Roldan, Ivan; Cormack, Iain G.; Artigas, David; Loza-Alvarez, Pablo
2004-09-01
In this paper we report the use of starch as a non-linear medium for characterising ultrashort pulses. The starch suspension in water is sandwiched between a slide holder and a cover-slip and placed within the sample plane of the nonlinear microscope. This simple arrangement enables direct measurement of the pulses where they interact with the sample.
Yang, Yuanzhong; Boysen, Reinhard I; Hearn, Milton T W
2006-07-15
A versatile experimental approach is described to achieve very high sensitivity analysis of peptides by capillary electrophoresis-mass spectrometry with sheath flow configuration based on optimization of field-amplified sample injection. Compared to traditional hydrodynamic injection methods, signal enhancement in terms of detection sensitivity of the bioanalytes by more than 3000-fold can be achieved. The effects of injection conditions, composition of the acid and organic solvent in the sample solution, length of the water plug, sample injection time, and voltage on the efficiency of the sample stacking have been systematically investigated, with peptides in the low-nanomolar (10^-9 M) range readily detected under the optimized conditions. Linearity of the established stacking method was found to be excellent over 2 orders of magnitude of concentration. The method was further evaluated for the analysis of low concentration bioactive peptide mixtures and tryptic digests of proteins. A distinguishing feature of the described approach is that it can be employed directly for the analysis of low-abundance protein fragments generated by enzymatic digestion and a reversed-phase-based sample-desalting procedure. Thus, rapid identification of protein fragments as low-abundance analytes can be achieved with this new approach by comparison of the actual tandem mass spectra of selected peptides with the predicted fragmentation patterns using online database searching algorithms. PMID:16841892
Luczak, Magdalena; Marczak, Lukasz; Stobiecki, Maciej
2014-01-01
Shotgun proteomic methods involving iTRAQ (isobaric tags for relative and absolute quantitation) peptide labeling facilitate quantitative analyses of proteomes and searches for useful biomarkers. However, the plasma proteome's complexity and the highly dynamic plasma protein concentration range limit the ability of conventional approaches to analyze and identify a large number of proteins, including useful biomarkers. The goal of this paper is to elucidate the best approach for plasma sample pretreatment for MS- and iTRAQ-based analyses. Here, we systematically compared four approaches, which include centrifugal ultrafiltration, SCX chromatography with fractionation, affinity depletion, and plasma without fractionation, to reduce plasma sample complexity. We generated an optimized protocol for quantitative protein analysis using iTRAQ reagents and an UltrafleXtreme (Bruker Daltonics) MALDI TOF/TOF mass spectrometer. Moreover, we used a simple, rapid, efficient, but inexpensive sample pretreatment technique that generated an optimal opportunity for biomarker discovery. We discuss the results from the four sample pretreatment approaches and conclude that SCX chromatography without affinity depletion is the best plasma sample preparation pretreatment method for proteome analysis. Using this technique, we identified 1,780 unique proteins, including 1,427 that were quantified by iTRAQ with high reproducibility and accuracy. PMID:24988083
Improved estimates of forest vegetation structure and biomass with a LiDAR-optimized sampling design
NASA Astrophysics Data System (ADS)
Hawbaker, Todd J.; Keuler, Nicholas S.; Lesak, Adrian A.; Gobakken, Terje; Contrucci, Kirk; Radeloff, Volker C.
2009-06-01
LiDAR data are increasingly available from both airborne and spaceborne missions to map elevation and vegetation structure. Additionally, global coverage may soon become available with NASA's planned DESDynI sensor. However, substantial challenges remain to using the growing body of LiDAR data. First, the large volumes of data generated by LiDAR sensors require efficient processing methods. Second, efficient sampling methods are needed to collect the field data used to relate LiDAR data with vegetation structure. In this paper, we used low-density LiDAR data, summarized within pixels of a regular grid, to estimate forest structure and biomass across a 53,600 ha study area in northeastern Wisconsin. Additionally, we compared the predictive ability of models constructed from a random sample to a sample stratified using the mean and standard deviation of LiDAR heights. Our models explained between 65 and 88% of the variability in DBH, basal area, tree height, and biomass. Prediction errors from models constructed using a random sample were up to 68% larger than those from the models built with a stratified sample. The stratified sample included a greater range of variability than the random sample. Thus, applying the random sample model to the entire population violated a tenet of regression analysis; namely, that models should not be used to extrapolate beyond the range of data from which they were constructed. Our results highlight that LiDAR data integrated with field data sampling designs can provide broad-scale assessments of vegetation structure and biomass, i.e., information crucial for carbon and biodiversity science.
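The stratification idea, drawing field plots from strata defined by a LiDAR height statistic so the sample spans the covariate's full range, can be sketched as follows. The quantile-based strata, plot counts, and synthetic pixel values are illustrative assumptions, not the paper's design.

```python
import random

def stratified_sample(pixels, key, n_strata=4, per_stratum=2, seed=0):
    """Split pixels into equal-size strata by sorted `key` value (e.g.
    mean LiDAR height) and draw `per_stratum` pixels from each stratum."""
    rng = random.Random(seed)
    ranked = sorted(pixels, key=key)
    size = len(ranked) // n_strata
    sample = []
    for i in range(n_strata):
        # Last stratum absorbs any remainder pixels.
        hi = (i + 1) * size if i < n_strata - 1 else len(ranked)
        sample.extend(rng.sample(ranked[i * size:hi], per_stratum))
    return sample

# Synthetic pixels described by (mean_height, sd_height); stratify on mean.
pixels = [(h / 10.0, (h % 7) / 7.0) for h in range(200)]
plots = stratified_sample(pixels, key=lambda p: p[0])
heights = [p[0] for p in plots]
# By construction the sample includes plots from the lowest and highest
# strata, unlike a small simple random draw, which may miss the extremes.
```

Ensuring that field plots cover the covariate's extremes is what keeps the fitted regression from being asked to extrapolate, the failure mode the paper identifies for the random design.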
O'Connell, Steven G; McCartney, Melissa A; Paulik, L Blair; Allan, Sarah E; Tidwell, Lane G; Wilson, Glenn; Anderson, Kim A
2014-10-01
Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be more efficiently absorbed using silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the most optimal silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated-PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties to compare polymers (log Kow 0.2-5.3), and transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring. PMID:25009960
Siqueira, Glécio Machado; Dafonte, Jorge Dafonte; Bueno Lema, Javier; Valcárcel Armesto, Montserrat; Silva, Ênio Farias França e
2014-01-01
This study presents a combined application of an EM38DD for assessing soil apparent electrical conductivity (ECa) and a dual-sensor vertical penetrometer Veris-3000 for measuring soil electrical conductivity (ECveris) and soil resistance to penetration (PR). The measurements were made at a 6 ha field cropped with forage maize under no-tillage after sowing and located in Northwestern Spain. The objective was to use the ECa data to improve the estimation of soil PR. First, the ECa data were used to determine an optimized sampling scheme of 40 points for soil PR. Then, correlation analysis showed a significant negative relationship between soil PR and ECa, ranging from −0.36 to −0.70 for the studied soil layers. The spatial dependence of soil PR was best described by spherical models in most soil layers. However, below 0.50 m the spatial pattern of soil PR showed a pure nugget effect, which could be due to the limited number of PR data used in these layers, as the values of this parameter often were above the range measured by our equipment (5.5 MPa). The use of ECa as a secondary variable slightly improved the estimation of PR by universal cokriging, when compared with kriging. PMID:25610899
OPTIMIZING MINIRHIZOTRON SAMPLE FREQUENCY FOR ESTIMATING FINE ROOT PRODUCTION AND TURNOVER
The most frequent reason for using minirhizotrons in natural ecosystems is the determination of fine root production and turnover. Our objective is to determine the optimum sampling frequency for estimating fine root production and turnover using data from evergreen (Pseudotsuga ...
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
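The iterative selection of the "most environmentally dissimilar site" can be approximated with a greedy farthest-point rule in environmental feature space. This is a simplification of the maximum-entropy procedure described above, and the three-factor site vectors below are synthetic illustrations, not NEON domain data.

```python
import math

def greedy_dissimilar_sites(sites, n_select):
    """Pick sites one at a time, always taking the candidate farthest (in
    environmental feature space) from the set already selected."""
    selected = [sites[0]]  # seed with an arbitrary first site
    while len(selected) < n_select:
        best = max(
            (s for s in sites if s not in selected),
            key=lambda s: min(math.dist(s, t) for t in selected),
        )
        selected.append(best)
    return selected

# Sites as (temperature, precipitation, elevation) feature vectors,
# assumed already standardized to comparable scales.
sites = [(t, p, e) for t in (0, 1, 2) for p in (0, 1, 2) for e in (0, 1)]
chosen = greedy_dissimilar_sites(sites, n_select=4)
```

Like the modeling approach in the paper, each iteration adds the site that extends coverage of the environmental envelope the most, which is why a handful of sites can capture a large share of the domain's variability.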
Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system
NASA Astrophysics Data System (ADS)
He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang
2015-10-01
In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.
NASA Astrophysics Data System (ADS)
Lin, Rongsheng; Burke, David T.; Burns, Mark A.
2004-03-01
In recent years, there has been tremendous interest in developing highly integrated DNA analysis systems using microfabrication techniques. With sample injection, reaction, separation, and detection successfully incorporated onto a monolithic silicon device, the integration of otherwise time-consuming macroscale steps, such as sample preparation, is gaining more and more attention. In this paper, we designed and fabricated a miniaturized device capable of separating a size-fractionated DNA sample and extracting the band of interest. To obtain a pure target band, a novel technique utilizing a shaped electric field is demonstrated. Theoretical analysis and experimental data show close agreement, guiding the design of electrode structures that achieve the desired electric field distribution. The technique requires a very simple fabrication procedure and can readily be combined with other existing components to realize a highly integrated "lab-on-a-chip" system for DNA analysis.
Optimal sampling strategy for estimation of spatial genetic structure in tree populations.
Cavers, S; Degen, B; Caron, H; Lemes, M R; Margis, R; Salgueiro, F; Lowe, A J
2005-10-01
Fine-scale spatial genetic structure (SGS) in natural tree populations is largely a result of restricted pollen and seed dispersal. Understanding the link between limitations to dispersal in gene vectors and SGS is of key interest to biologists and the availability of highly variable molecular markers has facilitated fine-scale analysis of populations. However, estimation of SGS may depend strongly on the type of genetic marker and sampling strategy (of both loci and individuals). To explore sampling limits, we created a model population with simulated distributions of dominant and codominant alleles, resulting from natural regeneration with restricted gene flow. SGS estimates from subsamples (simulating collection and analysis with amplified fragment length polymorphism (AFLP) and microsatellite markers) were correlated with the 'real' estimate (from the full model population). For both marker types, sampling ranges were evident, with lower limits below which estimation was poorly correlated and upper limits above which sampling became inefficient. Lower limits (correlation of 0.9) were 100 individuals, 10 loci for microsatellites and 150 individuals, 100 loci for AFLPs. Upper limits were 200 individuals, five loci for microsatellites and 200 individuals, 100 loci for AFLPs. The limits indicated by simulation were compared with data sets from real species. Instances where sampling effort had been either insufficient or inefficient were identified. The model results should form practical boundaries for studies aiming to detect SGS. However, greater sample sizes will be required in cases where SGS is weaker than for our simulated population, for example, in species with effective pollen/seed dispersal mechanisms. PMID:16030529
Optimizing Sampling Design to Deal with Mist-Net Avoidance in Amazonian Birds and Bats
Marques, João Tiago; Ramos Pereira, Maria J.; Marques, Tiago A.; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M.
2013-01-01
Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance, or net shyness, can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may reduce netting time and captures. When moving the nets cost one netting day, it was no longer advantageous to move them frequently; in bird surveys, it could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce the effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas. PMID:24058579
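The survey-design trade-off described above can be sketched with a simple geometric-decay model of net shyness (the retention factor and capture counts below are illustrative assumptions, loosely anchored to the reported 47% day-1-to-day-2 decline for bats; the study's own analysis used re-sampling of field data):

```python
def expected_captures(first_day, retention, days, move_daily):
    # With fixed nets, captures decay geometrically as animals learn
    # net locations; moving nets daily resets each day to first-day rates.
    total = 0.0
    for d in range(days):
        total += first_day * (1.0 if move_daily else retention ** d)
    return total

# Bats: ~47% decline between days 1 and 2 -> retention ~ 0.53 (assumed constant).
fixed = expected_captures(100, 0.53, days=4, move_daily=False)
moved = expected_captures(100, 0.53, days=4, move_daily=True)
# If relocation costs one netting day, part of the advantage is lost:
moved_short = expected_captures(100, 0.53, days=3, move_daily=True)
```

In this toy model the advantage of moving shrinks but does not vanish when a day is lost; the authors' richer re-sampling analysis, which also tracks species detection, found it could disappear entirely.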
Fillers, W Steven
2004-12-01
Modular approaches to sample management allow staged implementation and progressive expansion of libraries within existing laboratory space. A completely integrated, inert-atmosphere system for the storage and processing of a variety of microplate and microtube formats is currently available as an integrated series of individual modules. Liquid handling for reformatting and replication into microplates, plus high-capacity cherry picking, can be performed within the inert environmental envelope to maximize compound integrity. Complete process automation provides on-demand access to samples and improved process control. Expansion of such a system provides a low-risk tactic for implementing a large-scale storage and processing system. PMID:15674027
NASA Astrophysics Data System (ADS)
Zawadowicz, M. A.; Del Negro, L. A.
2010-12-01
Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv levels, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated-canister methods require an investment of space, money, and time that is often prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of the ambient gaseous matrix, a cost-effective technique for selective VOC extraction that is accessible to an undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature, and laminar air flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.
Kilambi, Himabindu V.; Manda, Kalyani; Sanivarapu, Hemalatha; Maurya, Vineet K.; Sharma, Rameshwar; Sreelakshmi, Yellamaraju
2016-01-01
An optimized protocol was developed for shotgun proteomics of tomato fruit, which is a recalcitrant tissue due to a high percentage of sugars and secondary metabolites. A number of protein extraction and fractionation techniques were examined for optimal protein extraction from tomato fruits, followed by peptide separation on nanoLCMS. Of all evaluated extraction agents, buffer-saturated phenol was the most efficient. In-gel digestion [SDS-PAGE followed by separation on LCMS (GeLCMS)] of the phenol-extracted sample yielded the maximal number of proteins. For in-solution digested samples, fractionation by strong anion exchange chromatography (SAX) gave similarly high proteome coverage. For shotgun proteomic profiling, optimization of mass spectrometry parameters such as automatic gain control targets (5E+05 for MS, 1E+04 for MS/MS); ion injection times (500 ms for MS, 100 ms for MS/MS); resolution of 30,000; signal threshold of 500; top-N value of 20; and fragmentation by collision-induced dissociation yielded the highest number of proteins. Validation of the above protocol in two tomato cultivars demonstrated its reproducibility, consistency, and robustness, with a CV of < 10%. The protocol facilitated the detection of a five-fold higher number of proteins compared to published reports on tomato fruits. The protocol outlined would be useful for high-throughput proteome analysis of tomato fruits and can be applied to other recalcitrant tissues. PMID:27446192
An evaluation of optimal methods for avian influenza virus sample collection
Technology Transfer Automated Retrieval System (TEKTRAN)
Sample collection and transport are critical components of any diagnostic testing program and due to the amount of avian influenza virus (AIV) testing in the U.S. and worldwide, small improvements in sensitivity and specificity can translate into substantial cost savings from better test accuracy. ...
Siebenmann, K.
1993-10-01
The primary focus of the initial stages of a remedial investigation is to collect useful data for source identification and determination of the extent of soil contamination. To achieve this goal, soil samples should be collected at locations where the maximum concentration of contaminants exists. This study was conducted to determine the optimum strategy for selecting soil sample locations within a boring. Analytical results from soil samples collected during the remedial investigation of a Department of Defense Superfund site were used for the analysis. Trichloroethene (TCE) and tetrachloroethene (PCE) results were compared with organic vapor monitor (OVM) readings, lithologies, and organic carbon content to determine whether these parameters can be used to choose soil sample locations in the field that contain the maximum concentration of these analytes within a soil boring or interval. The OVM was a handheld photoionization detector (PID) used to screen the soil core for areas of VOC contamination. The TCE and PCE concentrations were compared across lithologic contacts and within each lithologic interval. The organic content used for this analysis was visually estimated by the geologist during soil logging.
Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2011-01-01
Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…
Superposition Enhanced Nested Sampling
NASA Astrophysics Data System (ADS)
Martiniani, Stefano; Stevenson, Jacob D.; Wales, David J.; Frenkel, Daan
2014-07-01
The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
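A minimal sketch of the underlying textbook nested sampling step (not the superposition-enhanced variant reported here) makes clear where broken ergodicity bites: the constrained replacement draw, done below by naive rejection sampling from the prior, is exactly the step that fails on multimodal landscapes. The toy problem and all parameters are assumptions for illustration:

```python
import math
import random

def nested_sampling(loglike, n_live=100, n_iter=600, seed=1):
    # Textbook nested sampling on a Uniform(0, 1) prior. Each iteration
    # discards the worst live point, credits it with a prior-volume
    # shell that shrinks geometrically by exp(-1/n_live), and replaces
    # it with a fresh prior draw under the hard likelihood constraint.
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]
    Z, X = 0.0, 1.0
    for _ in range(n_iter):
        i = min(range(n_live), key=lambda j: loglike(live[j]))
        L_worst = loglike(live[i])
        X_new = X * math.exp(-1.0 / n_live)
        Z += math.exp(L_worst) * (X - X_new)
        X = X_new
        while True:  # naive rejection step above the constraint
            x = rng.random()
            if loglike(x) > L_worst:
                live[i] = x
                break
    # credit the remaining prior volume to the surviving live points
    Z += X * sum(math.exp(loglike(x)) for x in live) / n_live
    return Z

# Toy evidence integral: Gaussian bump of width 0.1 centred at 0.5;
# analytic value ~ 0.1 * sqrt(2*pi) ~ 0.2507
Z = nested_sampling(lambda x: -((x - 0.5) ** 2) / (2 * 0.1 ** 2))
```

The statistical error of the estimate scales roughly as Z·sqrt(H/n_live), so with 100 live points the toy run lands within about ten percent of the analytic evidence.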
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a "lazy" dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT
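The cost-to-arrive recursion at the heart of such planners can be sketched in a few lines. This is a plain Dijkstra recursion over a random geometric graph in an assumed obstacle-free unit square (so collision checks vanish), not FMT*'s lazy variant; the radius constant and sample count are illustrative:

```python
import heapq
import math
import random

def sampling_based_plan(n=300, seed=2):
    # Draw n uniform samples, connect pairs within the theory-guided
    # radius r_n ~ gamma * (log n / n)^(1/d), then propagate
    # cost-to-arrive values outward from the start with Dijkstra.
    # The returned cost approaches the optimum (here sqrt(2)) as n grows.
    rng = random.Random(seed)
    d = 2
    pts = [(0.0, 0.0), (1.0, 1.0)]  # start (index 0), goal (index 1)
    pts += [(rng.random(), rng.random()) for _ in range(n)]
    r = 2.0 * (math.log(n) / n) ** (1.0 / d)
    cost = {0: 0.0}
    pq = [(0.0, 0)]
    while pq:
        c, i = heapq.heappop(pq)
        if c > cost.get(i, math.inf):
            continue  # stale queue entry
        for j in range(len(pts)):
            e = math.dist(pts[i], pts[j])
            if 0.0 < e <= r and c + e < cost.get(j, math.inf):
                cost[j] = c + e
                heapq.heappush(pq, (c + e, j))
    return cost.get(1)

path_cost = sampling_based_plan()
```

The returned path cost exceeds the straight-line optimum sqrt(2) by only a few percent at this sample count, and the gap shrinks at the kind of rate the convergence analysis quantifies.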
Optimized methods for extracting circulating small RNAs from long-term stored equine samples.
Unger, Lucia; Fouché, Nathalie; Leeb, Tosso; Gerber, Vincent; Pacholewska, Alicja
2016-01-01
Circulating miRNAs in body fluids, particularly serum, are promising candidates for future routine biomarker profiling in various pathologic conditions in human and veterinary medicine. However, reliable standardized methods for miRNA extraction from equine serum and fresh or archived whole blood are sorely lacking. We systematically compared various miRNA extraction methods from serum and whole blood after short and long-term storage without addition of RNA stabilizing additives prior to freezing. Time of storage at room temperature prior to freezing did not affect miRNA quality in serum. Furthermore, we showed that miRNA of NGS-sufficient quality can be recovered from blood samples after >10 years of storage at -80 °C. This allows retrospective analyses of miRNAs from archived samples. PMID:27356979
Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.
2008-12-19
The elucidation of critical functional pathways employed by pathogens and hosts during an infectious cycle is both challenging and central to our understanding of infectious diseases. In recent years, mass spectrometry-based proteomics has been used as a powerful tool to identify key pathogenesis-related proteins and pathways. Despite the analytical power of mass spectrometry-based technologies, samples must be appropriately prepared to characterize the functions of interest (e.g. host-response to a pathogen or a pathogen-response to a host). The preparation of these protein samples requires multiple decisions about what aspect of infection is being studied, and it may require the isolation of either host and/or pathogen cellular material.
NASA Technical Reports Server (NTRS)
Hague, D. S.; Merz, A. W.
1976-01-01
Atmospheric sampling has been carried out by flights using an available high-performance supersonic aircraft. Altitude potential of an off-the-shelf F-15 aircraft is examined. It is shown that the standard F-15 has a maximum altitude capability in excess of 100,000 feet for routine flight operation by NASA personnel. This altitude is well in excess of the minimum altitudes which must be achieved for monitoring the possible growth of suspected aerosol contaminants.
NASA Astrophysics Data System (ADS)
Pawcenis, Dominika; Koperska, Monika A.; Milczarek, Jakub M.; Łojewski, Tomasz; Łojewska, Joanna
2014-02-01
A direct goal of this paper was to improve the methods of sample preparation and separation for analyses of the fibroin polypeptide with the use of size exclusion chromatography (SEC). The motivation for the study arises from our interest in natural polymers included in historic textile and paper artifacts, and is a logical response to the urgent need for developing rationale-based methods for materials conservation. The first step is to develop a reliable analytical tool that gives insight into fibroin structure and its changes caused by both natural and artificial ageing. To investigate the influence of preparation conditions, two sets of artificially aged samples were prepared (with and without NaCl in the sample solution) and measured by means of SEC with a multi-angle laser light scattering detector. It was shown that dialysis of fibroin dissolved in LiBr solution allows removal of the salt, which would otherwise destroy the packing of the chromatographic columns and prevent reproducible analyses. Salt-rich (NaCl) aqueous solutions of fibroin improved the quality of the chromatograms.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Yoshida, N.; Takahashi, M.; Tomita, M.
2008-12-01
In order to determine an appropriate incident angle of low-energy (350-eV) oxygen ion beam for achieving the highest sputtering rate without degradation of depth resolution in SIMS analysis, a delta-doped sample was analyzed with incident angles from 0° to 60° without oxygen bleeding. As a result, 45° incidence was found to be the best analytical condition, and it was confirmed that surface roughness did not occur on the sputtered surface at 100-nm depth by using AFM. By applying the optimized incident angle, sputtering rate becomes more than twice as high as that of the normal incident condition.
Hyberts, Sven G.; Frueh, Dominique P.; Arthanari, Haribabu; Wagner, Gerhard
2010-01-01
Non-uniform sampling (NUS) enables recording of multidimensional NMR data at resolutions matching the resolving power of modern instruments without using excessive measuring time. However, in order to obtain satisfying results, efficient reconstruction methods are needed. Here we describe an optimized version of the Forward Maximum entropy (FM) reconstruction method, which can reconstruct up to three indirect dimensions. For complex datasets, such as NOESY spectra, the performance of the procedure is enhanced by a distillation procedure that reduces artifacts stemming from intense peaks. PMID:19705283
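For illustration, the sampling side of NUS can be shown with a toy schedule generator. This is a flat random schedule; real schedules for high-dynamic-range data are typically weighted toward early increments, and nothing below is specific to the FM reconstruction method:

```python
import random

def nus_schedule(n_total, fraction, seed=0):
    # Keep `fraction` of the indirect-dimension increments, chosen at
    # random without replacement; always retain increment 0, which
    # carries the bulk of the signal envelope.
    rng = random.Random(seed)
    keep = max(1, round(n_total * fraction))
    pts = sorted(rng.sample(range(n_total), keep))
    if pts[0] != 0:
        pts[0] = 0
    return pts

# 25% sampling of 256 indirect increments -> 64 measured points
schedule = nus_schedule(256, 0.25)
```

The reconstruction method then has to recover the full spectrum from these 64 measured increments, which is where the optimized FM procedure described above does its work.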
Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland
NASA Astrophysics Data System (ADS)
Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan
2014-05-01
Soil moisture (SM) exhibits high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil, and vegetation characteristics. This large variability does not permit reliable estimation of SM in the surface layer from ground point measurements, especially at large spatial scales. Remote sensing measurements allow the spatial distribution of SM in the Earth's surface layer to be estimated better than point measurements do, but they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, abundances, and distributions of measuring points at the scales of an arable field, a wetland, and a commune (areas of 0.01, 1, and 140 km2, respectively), under different SM states. Mean values of SM were only slightly sensitive to changes in the number and arrangement of sampling points, but the parameters describing dispersion responded more significantly. Spatial analysis showed autocorrelations of SM whose lengths depended on the number and distribution of points within the adopted grids. Directional analysis revealed differentiated anisotropy of SM for different grids and numbers of measuring points. It can therefore be concluded that both the number of samples and their layout over the experimental area were reflected in the parameters characterizing the SM distribution. This suggests the need for at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points in each. This is due to the standard error and the range of spatial variability, which change little as the number of samples increases above this figure. Gravimetric method
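The diminishing return on extra sampling points follows directly from the standard error of the mean; a sketch with a hypothetical field standard deviation of 3 %vol (an assumed value, not the study's data) shows why a floor of roughly 20 points is reasonable:

```python
import math

def standard_error(std, n):
    # SE of the mean for n independent point measurements
    return std / math.sqrt(n)

# SE (%vol) for increasing numbers of sampling points:
se = {n: round(standard_error(3.0, n), 3) for n in (5, 10, 20, 40, 80)}
# Doubling the effort from 20 to 40 points trims the SE by only ~29%.
```

Note that spatial autocorrelation between nearby points reduces the effective sample size, which is why the study pairs the point count with an analysis of autocorrelation lengths rather than relying on the independence assumption alone.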
Optimal sample preparation to characterize corrosion in historical photographs with analytical TEM.
Grieten, Eva; Caen, Joost; Schryvers, Dominique
2014-10-01
An alternative focused ion beam preparation method is used for sampling historical photographs containing metallic nanoparticles in a polymer matrix. We use the preparation steps of classical ultra-microtomy with an alternative final sectioning with a focused ion beam. Transmission electron microscopy techniques show that the lamella has a uniform thickness, which is an important factor for analytical transmission electron microscopy. Furthermore, the method maintains the spatial distribution of nanoparticles in the soft matrix. The results are compared with traditional preparation techniques such as ultra-microtomy and classical focused ion beam milling. PMID:25256650
Optimal Media for Use in Air Sampling To Detect Cultivable Bacteria and Fungi in the Pharmacy
Weissfeld, Alice S.; Joseph, Riya Augustin; Le, Theresa V.; Trevino, Ernest A.; Schaeffer, M. Frances; Vance, Paula H.
2013-01-01
Current guidelines for air sampling for bacteria and fungi in compounding pharmacies require the use of a medium for each type of organism. U.S. Pharmacopeia (USP) chapter <797> (http://www.pbm.va.gov/linksotherresources/docs/USP797PharmaceuticalCompoundingSterileCompounding.pdf) calls for tryptic soy agar with polysorbate and lecithin (TSApl) for bacteria and malt extract agar (MEA) for fungi. In contrast, the Controlled Environment Testing Association (CETA), the professional organization for individuals who certify hoods and clean rooms, states in its 2012 certification application guide (http://www.cetainternational.org/reference/CAG-009v3.pdf?sid=1267) that a single-plate method is acceptable, implying that it is not always necessary to use an additional medium specifically for fungi. In this study, we reviewed 5.5 years of data from our laboratory to determine the utility of TSApl versus yeast malt extract agar (YMEA) for the isolation of fungi. Our findings, from 2,073 air samples obtained from compounding pharmacies, demonstrated that the YMEA yielded >2.5 times more fungal isolates than TSApl. PMID:23903551
Optimal bandpass sampling strategies for enhancing the performance of a phase noise meter
NASA Astrophysics Data System (ADS)
Angrisani, Leopoldo; Schiano Lo Moriello, Rosario; D'Arco, Mauro; Greenhall, Charles
2008-10-01
Measurement of phase noise affecting oscillators or clocks is a fundamental practice whenever the need of a reliable time base is of primary concern. In spite of the number of methods or techniques either available in the literature or implemented as personalities in general-purpose equipment, very accurate measurement results can be gained only through expensive, dedicated instruments. To offer a cost-effective alternative, the authors have already realized a DSP-based phase noise meter, capable of assuring good performance and real-time operation. The meter, however, suffers from a reduced frequency range (about 250 kHz), and needs an external time base for input signal digitization. To overcome these drawbacks, the authors propose the use of bandpass sampling strategies to enlarge the frequency range, and of an internal time base to make standalone operation much more feasible. After some remarks on the previous version of the meter, key features of the adopted time base and proposed sampling strategies are described in detail. Results of experimental tests, carried out on sinusoidal signals provided both by function and arbitrary waveform generators, are presented and discussed; evidence of the meter's reliability and efficacy is finally given.
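The enlargement of the frequency range rests on the uniform bandpass-sampling condition (the textbook constraint, not necessarily the meter's exact strategy): a band confined to [f_L, f_H] can be sampled without aliasing at any rate f_s satisfying 2·f_H/m ≤ f_s ≤ 2·f_L/(m−1) for an integer zone index m. A sketch with an assumed 100 kHz band around a 1 MHz carrier:

```python
def valid_bandpass_rates(f_low, f_high):
    # Enumerate the alias-free uniform sampling-rate windows for a
    # signal confined to [f_low, f_high]; m indexes the spectral zone,
    # running up to floor(f_high / bandwidth).
    bw = f_high - f_low
    zones = []
    for m in range(1, int(f_high // bw) + 1):
        lo = 2.0 * f_high / m
        hi = 2.0 * f_low / (m - 1) if m > 1 else float("inf")
        if lo <= hi:
            zones.append((m, lo, hi))
    return zones

# A 100 kHz-wide band centred on 1 MHz:
zones = valid_bandpass_rates(0.95e6, 1.05e6)
# The deepest zone admits fs as low as 210 kHz instead of the
# 2.1 MHz lowpass Nyquist rate.
```

In practice the chosen rate must also sit comfortably inside its window, since clock instability or signal bandwidth growth near a window edge reintroduces aliasing.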
Baden, Tom; Schubert, Timm; Chang, Le; Wei, Tao; Zaichuk, Mariana; Wissinger, Bernd; Euler, Thomas
2013-12-01
For efficient coding, sensory systems need to adapt to the distribution of signals to which they are exposed. In vision, natural scenes above and below the horizon differ in the distribution of chromatic and achromatic features. Consequently, many species differentially sample light in the sky and on the ground using an asymmetric retinal arrangement of short- (S, "blue") and medium- (M, "green") wavelength-sensitive photoreceptor types. Here, we show that in mice this photoreceptor arrangement provides for near-optimal sampling of natural achromatic contrasts. Two-photon population imaging of light-driven calcium signals in the synaptic terminals of cone-photoreceptors expressing a calcium biosensor revealed that S, but not M cones, preferred dark over bright stimuli, in agreement with the predominance of dark contrasts in the sky but not on the ground. Therefore, the different cone types do not only form the basis of "color vision," but in addition represent distinct (achromatic) contrast-selective channels. PMID:24314730
Design Of A Sorbent/desorbent Unit For Sample Pre-treatment Optimized For QMB Gas Sensors
Pennazza, G.; Cristina, S.; Santonico, M.; Martinelli, E.; Di Natale, C.; D'Amico, A.; Paolesse, R.
2009-05-23
Sample pre-treatment is a typical procedure in analytical chemistry aimed at improving the performance of analytical systems. In the case of gas sensors, sample pre-treatment systems are devised to overcome the sensors' limitations in selectivity and sensitivity. For this purpose, systems based on adsorption and desorption processes driven by temperature conditioning have been described. The involvement of large temperature ranges may pose problems when QMB gas sensors are used. In this work, a study of such influences on the overall sensing properties of QMB sensors is presented. The results allowed the design of a pre-treatment unit, coupled with a QMB gas sensor array, optimized to operate in a suitable temperature range. The performance of the system is illustrated by the partial separation of water vapor from a gas mixture and by a substantial improvement of the signal-to-noise ratio.
AlMasoud, Najla; Correa, Elon; Trivedi, Drupad K; Goodacre, Royston
2016-06-21
Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) has successfully been used for the analysis of high molecular weight compounds, such as proteins and nucleic acids. By contrast, analysis of low molecular weight compounds with this technique has been less successful due to interference from matrix peaks which have a similar mass to the target analyte(s). Recently, a variety of modified matrices and matrix additives have been used to overcome these limitations. An increased interest in lipid analysis arose from the feasibility of correlating these components with many diseases, e.g. atherosclerosis and metabolic dysfunctions. Lipids have a wide range of chemical properties, making their analysis difficult with traditional methods. MALDI-TOF-MS shows excellent potential for sensitive and rapid analysis of lipids, and therefore this study focuses on computational-analytical optimization of the analysis of five lipids (4 phospholipids and 1 acylglycerol) in complex mixtures using MALDI-TOF-MS with fractional factorial design (FFD) and Pareto optimality. Five experimental factors were investigated: matrices, matrix preparations, matrix additives, additive concentrations, and deposition methods. The FFD reduced the number of experiments performed by identifying 720 key experiments from a total of 8064 possible analyses, leading to a significant reduction in the time and cost of sample analysis under near-optimal conditions. We discovered that the key factors for producing high-quality spectra were the matrix and the use of appropriate matrix additives. PMID:27228355
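The Pareto-optimality step can be sketched as a simple dominance filter (the two scoring axes and the numbers below are hypothetical illustrations, not the study's data):

```python
def pareto_front(points):
    # Keep a point unless some other point is at least as good on every
    # objective and strictly better on one (all objectives maximized).
    return [
        p for p in points
        if not any(
            q != p and all(qi >= pi for qi, pi in zip(q, p))
            for q in points
        )
    ]

# Hypothetical (signal intensity, signal-to-noise) scores for four
# candidate matrix/additive combinations:
front = pareto_front([(80, 5), (90, 3), (70, 9), (60, 4)])
```

The last candidate is dominated (another condition beats it on both axes) and is dropped; the remaining three are incomparable trade-offs, which is what makes Pareto filtering a natural companion to a fractional factorial screen.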
Zhang, Zulin; Troldborg, Mads; Yates, Kyari; Osprey, Mark; Kerr, Christine; Hallett, Paul D; Baggaley, Nikki; Rhind, Stewart M; Dawson, Julian J C; Hough, Rupert L
2016-11-01
In many agricultural catchments of Europe and North America, pesticides occur at generally low concentrations with significant temporal variation. This poses several challenges for both monitoring and understanding ecological risks/impacts of these chemicals. This study aimed to compare the performance of passive and spot sampling strategies given the constraints of typical regulatory monitoring. Nine pesticides were investigated in a river currently undergoing regulatory monitoring (River Ugie, Scotland). Within this regulatory framework, spot and passive sampling were undertaken to understand spatiotemporal occurrence, mass loads and ecological risks. All the target pesticides were detected in water by both sampling strategies. Chlorotoluron was observed to be the dominant pesticide by both spot (maximum: 111.8 ng/l, mean: 9.35 ng/l) and passive sampling (maximum: 39.24 ng/l, mean: 4.76 ng/l). The annual pesticide loads were estimated to be 2735 g and 1837 g based on the spot and passive sampling data, respectively. The spatiotemporal trend suggested that agricultural activities were the primary source of the compounds, with variability in loads explained in large part by the timing of pesticide applications and rainfall. The risk assessment showed that chlorotoluron and chlorpyrifos posed the highest ecological risks, with 23% of the chlorotoluron spot samples and 36% of the chlorpyrifos passive samples resulting in a Risk Quotient greater than 0.1. This suggests that mitigation measures might need to be taken to reduce the input of pesticides into the river. The overall comparison of the two sampling strategies supported the hypothesis that passive sampling tends to integrate contaminants over the period of exposure and allows quantification of contamination at low concentrations. The results suggested that, within a regulatory monitoring context, passive sampling was more suitable for flux estimation and risk assessment of trace contaminants which cannot be diagnosed by spot sampling.
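Annual load figures of this kind come from combining measured concentrations with river discharge. A minimal flux-estimate sketch is shown below; the interval-weighted summation, the discharge values and the sampling interval are illustrative assumptions, not the study's actual method or data.

```python
def annual_load_g(conc_ng_per_l, flow_m3_per_s, interval_s):
    """Estimate a pesticide mass load (g) from paired concentration (ng/L)
    and river discharge (m3/s) observations, each assumed representative of
    one sampling interval (s).  1 m3 = 1000 L; 1 ng = 1e-9 g."""
    load_ng = sum(c * q * 1000.0 * interval_s
                  for c, q in zip(conc_ng_per_l, flow_m3_per_s))
    return load_ng * 1e-9

# Hypothetical monthly spot samples over a year (interval ~ 30.4 days)
conc = [9.35] * 12   # ng/L, e.g. the mean chlorotoluron concentration
flow = [2.0] * 12    # m3/s, assumed discharge
print(round(annual_load_g(conc, flow, 30.4 * 86400), 1))  # -> 589.4
```

A passive sampler effectively replaces the per-interval spot concentrations with one time-weighted average concentration per deployment, which is why the two strategies can produce different load estimates from the same river.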
Smiley Evans, Tierra; Barry, Peter A.; Gilardi, Kirsten V.; Goldstein, Tracey; Deere, Jesse D.; Fike, Joseph; Yee, JoAnn; Ssebide, Benard J; Karmacharya, Dibesh; Cranfield, Michael R.; Wolking, David; Smith, Brett; Mazet, Jonna A. K.; Johnson, Christine K.
2015-01-01
Free-ranging nonhuman primates are frequent sources of zoonotic pathogens due to their physiologic similarity to humans and, in many tropical regions, their close contact with humans. Many high-risk disease transmission interfaces have not been monitored for zoonotic pathogens due to difficulties inherent to invasive sampling of free-ranging wildlife. Non-invasive surveillance of nonhuman primates for pathogens with high potential for spillover into humans is therefore critical for understanding the disease ecology of existing zoonotic pathogen burdens and identifying communities where zoonotic diseases are likely to emerge in the future. We developed a non-invasive oral sampling technique using ropes distributed to nonhuman primates to target viruses shed in the oral cavity, which, through bite wounds and discarded food, could be transmitted to people. Optimization was performed by testing paired rope and oral swabs from laboratory colony rhesus macaques for rhesus cytomegalovirus (RhCMV) and simian foamy virus (SFV) and implementing the technique with free-ranging terrestrial and arboreal nonhuman primate species in Uganda and Nepal. Both ubiquitous DNA and RNA viruses, RhCMV and SFV, were detected in oral samples collected from ropes distributed to laboratory colony macaques, and SFV was detected in free-ranging macaques and olive baboons. Our study describes a technique that can be used for disease surveillance in free-ranging nonhuman primates and, potentially, other wildlife species when invasive sampling techniques may not be feasible. PMID:26046911
Chen, DI-WEN
2001-11-21
Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results of the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information
NASA Astrophysics Data System (ADS)
Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.
2014-01-01
Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment, employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e., equivalent to ~8 ng of amino sugar carbon. Our results obtained from δ13C analysis of amino sugars in selected marine sediment samples showed that muramic acid had isotopic imprints from indigenous bacterial activities, whereas glucosamine and galactosamine were mainly derived from organic detritus. The analysis of stable carbon isotopic compositions of amino sugars opens a promising window for the investigation of microbial metabolisms in marine sediments and the deep marine biosphere.
Wang, Man-Juing; Tsai, Chih-Hsin; Hsu, Wei-Ya; Liu, Ju-Tsung; Lin, Cheng-Huang
2009-02-01
The optimal separation conditions and online sample concentration for N,N-dimethyltryptamine (DMT) and related compounds, including alpha-methyltryptamine (AMT), 5-methoxy-AMT (5-MeO-AMT), N,N-diethyltryptamine (DET), N,N-dipropyltryptamine (DPT), N,N-dibutyltryptamine (DBT), N,N-diisopropyltryptamine (DiPT), 5-methoxy-DMT (5-MeO-DMT), and 5-methoxy-N,N-DiPT (5-MeO-DiPT), using micellar EKC (MEKC) with UV-absorbance detection are described. The LODs (S/N = 3) for MEKC ranged from 1.0 to 1.8 microg/mL. Use of online sample concentration methods, including sweeping-MEKC and cation-selective exhaustive injection-sweep-MEKC (CSEI-sweep-MEKC), improved the LODs to 2.2-8.0 ng/mL and 1.3-2.7 ng/mL, respectively. In addition, the order of migration of the nine tryptamines was investigated. A urine sample, obtained by spiking urine collected from a human volunteer with DMT, was also successfully examined. PMID:19137528
Llompart, Maria; Lourido, Mercedes; Landin, Pedro; García-Jares, Carmen; Cela, Rafael
2002-07-19
Solid-phase microextraction (SPME) coupled to gas chromatography-mass spectrometry has been applied to the extraction of 30 phenol derivatives from water samples. Analytes were acetylated in situ and headspace solid-phase microextraction was performed. Different parameters affecting extraction efficiency were studied. Optimization of temperature, type of microextraction fiber and volume of sample was carried out by means of a mixed-level categorical experimental design, which allows the study of main effects and second-order interactions. Five different fiber coatings were employed in this study, and extraction temperature was studied at three levels. Both factors, fiber coating and extraction temperature, were important for achieving high sensitivity. Moreover, these parameters showed a significant interaction, which indicates the different kinetic behavior of the SPME process when different coatings are used. It was found that 75 microm carboxen-polydimethylsiloxane and 100 microm polydimethylsiloxane yielded the highest responses. The first is especially appropriate for phenol, methylphenols and low-chlorinated chlorophenols and the second for highly chlorinated phenols. The two methods proposed in this study showed good linearity and precision. Practical applicability was demonstrated through the analysis of a real sewage water sample contaminated with phenols. PMID:12187964
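A mixed-level categorical design of this kind can be enumerated as a full factorial over the factor levels; a fractional design would then subsample these runs. The factor levels below are illustrative assumptions (the study names the fiber coatings only partially and does not list the exact temperature or volume levels):

```python
from itertools import product

# Factor levels loosely based on the study: five fiber coatings, three
# extraction temperatures, two sample volumes (values are assumptions).
fibers = ["CAR-PDMS 75um", "PDMS 100um", "PDMS-DVB", "PA", "CW-DVB"]
temperatures_c = [40, 60, 80]
sample_volumes_ml = [10, 22]

# Full factorial: every combination of levels becomes one experimental run.
design = list(product(fibers, temperatures_c, sample_volumes_ml))
print(len(design))  # -> 30 runs (5 * 3 * 2)
```

Main effects and second-order interactions are then estimated by comparing mean responses across the runs sharing each level (or pair of levels).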
Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie
2016-01-01
The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges. PMID:27458364
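The sequential hypothesis testing principle behind ASAP can be illustrated with a toy online model comparison: accumulate evidence observation by observation and stop acquiring data as soon as one model is decisively favored. This sketch uses a simple Bayes-factor threshold rule and hypothetical likelihoods; it is not the ASAP implementation.

```python
import math

def sequential_model_comparison(data, lik_a, lik_b, threshold=20.0):
    """Accumulate the log Bayes factor log p(y|A) - log p(y|B) one
    observation at a time; stop as soon as it favors either model by
    `threshold` (expressed as an odds ratio).  lik_a/lik_b map an
    observation to its likelihood under each model."""
    log_bf = 0.0
    for n, y in enumerate(data, start=1):
        log_bf += math.log(lik_a(y)) - math.log(lik_b(y))
        if abs(log_bf) >= math.log(threshold):
            winner = "A" if log_bf > 0 else "B"
            return winner, n
    return "undecided", len(data)

# Toy example: model A says "response" with p=0.8, model B says p=0.2
lik_a = lambda y: 0.8 if y == 1 else 0.2
lik_b = lambda y: 0.2 if y == 1 else 0.8
print(sequential_model_comparison([1, 1, 1, 1, 0, 1], lik_a, lik_b))  # -> ('A', 3)
```

ASAP goes one step further than this passive stopping rule: between observations it also chooses the next stimulus to maximize the expected separation between the candidate models.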
Jauzein, Cécile; Fricke, Anna; Mangialajo, Luisa; Lemée, Rodolphe
2016-06-15
In the framework of monitoring of benthic harmful algal blooms (BHABs), the most commonly reported sampling strategy is based on the collection of macrophytes. However, this methodology has some inherent problems. A potential alternative method uses artificial substrates that collect resuspended benthic cells. The current study defines the main improvements in this technique, through the use of fiberglass screens during a bloom of Ostreopsis cf. ovata. A novel set-up for the deployment of artificial substrates in the field was tested, using an easy clip-in system that helped restrain substrates perpendicular to the water flow. An experiment was run in order to compare the cell collection efficiency of different mesh sizes of fiberglass screens, and the results suggested an optimal porosity of 1-3 mm. The present study further establishes artificial substrates, such as fiberglass screens, as efficient tools for the monitoring and mitigation of BHABs. PMID:27048690
Yu, Yuqi; Wang, Jinan; Shao, Qiang; Zhu, Weiliang; Shi, Jiye
2015-03-28
The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD can greatly increase computational efficiency and thus expand the application of REMD simulation to larger protein systems.
Lee, Jae Hwan; Jia, Chunrong; Kim, Yong Doo; Kim, Hong Hyun; Pham, Tien Thang; Choi, Young Seok; Seo, Young Un; Lee, Ike Woo
2012-01-01
Trimethylsilanol (TMSOH) can cause damage to the surfaces of scanner lenses in the semiconductor industry, and there is a critical need to measure and control airborne TMSOH concentrations. This study develops a thermal desorption (TD)-gas chromatography (GC)-mass spectrometry (MS) method for measuring trace-level TMSOH in occupational indoor air. Laboratory method optimization obtained best performance when using a dual-bed tube configuration (100 mg of Tenax TA followed by 100 mg of Carboxen 569), n-decane as a solvent, and a TD temperature of 300°C. The optimized method demonstrated high recovery (87%), satisfactory precision (<15% for spiked amounts exceeding 1 ng), good linearity (R2 = 0.9999), a wide dynamic mass range (up to 500 ng), a low method detection limit (2.8 ng m−3 for a 20-L sample), and negligible losses for 3-4-day storage. The field study showed performance comparable to that in the laboratory and yielded the first measurements of TMSOH, ranging from 1.02 to 27.30 μg/m3, in the semiconductor industry. We suggest future development of real-time monitoring techniques for TMSOH and other siloxanes for better maintenance and control of scanner lenses in semiconductor wafer manufacturing. PMID:22966229
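Method detection limits of this kind are conventionally computed from replicate low-level spikes, e.g. the EPA-style rule: the one-tailed 99% Student-t value (n−1 degrees of freedom) times the replicate standard deviation. The replicate values and spike level below are hypothetical; the abstract does not state how its MDL was derived.

```python
import statistics

def method_detection_limit(replicates, t_value):
    """EPA-style MDL: Student-t (one-tailed, 99%, n-1 df) times the
    standard deviation of low-level spiked replicate measurements."""
    return t_value * statistics.stdev(replicates)

# Seven hypothetical replicate TMSOH measurements (ng); t(6 df, 99%) = 3.143
reps = [1.02, 0.95, 1.10, 0.98, 1.05, 0.99, 1.03]
print(round(method_detection_limit(reps, 3.143), 3))
```

Dividing the resulting mass-based MDL by the sampled air volume (here 20 L) converts it to the concentration units quoted in the abstract (ng m−3).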
NASA Astrophysics Data System (ADS)
Oroza, C.; Zheng, Z.; Zhang, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.
2015-12-01
Recent advancements in wireless sensing technologies are enabling real-time application of spatially representative point-scale measurements to model hydrologic processes at the basin scale. A major impediment to the large-scale deployment of these networks is the difficulty of finding representative sensor locations and resilient wireless network topologies in complex terrain. Currently, observatories are structured manually in the field, which provides no metric for the number of sensors required for extrapolation, does not guarantee that point measurements are representative of the basin as a whole, and often produces unreliable wireless networks. We present a methodology that combines LiDAR data, pattern recognition, and stochastic optimization to simultaneously identify representative sampling locations, optimal sensor number, and resilient network topologies prior to field deployment. We compare the results of the algorithm to an existing 55-node wireless snow and soil network at the Southern Sierra Critical Zone Observatory. Existing data show that the algorithm is able to capture a broader range of key attributes affecting snow and soil moisture, defined by a combination of terrain, vegetation and soil attributes, and thus is better suited to basin-wide monitoring. We believe that adopting this structured, analytical approach could improve data quality, increase reliability, and decrease the cost of deployment for future networks.
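The idea of choosing representative sampling locations from LiDAR-derived attributes can be sketched with a simple clustering step: cluster candidate locations in feature space, then place one sensor at the candidate nearest each cluster centroid. This is a toy stand-in (plain k-means over hypothetical normalized features), not the observatory's actual algorithm, which also optimizes network topology.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over feature vectors (e.g. [elevation, canopy cover,
    southness] per candidate location).  Returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j])) for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(x) / len(members) for x in zip(*members)]
    return centroids, labels

def representative_sites(points, k):
    """One sensor per cluster: the candidate closest to each centroid."""
    centroids, _ = kmeans(points, k)
    return [min(points, key=lambda p: dist2(p, c)) for c in centroids]

# Hypothetical normalized [elevation, canopy, southness] features
candidates = [[0.1, 0.2, 0.1], [0.15, 0.25, 0.1], [0.8, 0.7, 0.9],
              [0.85, 0.75, 0.95], [0.5, 0.1, 0.4]]
print(representative_sites(candidates, 2))
```

Choosing k then amounts to the "optimal sensor number" question: increase k until adding a cluster no longer reduces within-cluster feature variance appreciably.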
Design and Sampling Plan Optimization for RT-qPCR Experiments in Plants: A Case Study in Blueberry
Die, Jose V.; Roman, Belen; Flores, Fernando; Rowland, Lisa J.
2016-01-01
The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges, which center around minimizing the variability in results and transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain optimal results. The use of more qPCR replicates provides the least benefit, as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner and produce more consistent and reproducible data. PMID:27014296
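A script of the kind described, finding the optimal sampling plan under a budget, can be sketched as a brute-force search over replicate numbers for a nested design (sampling > RT > qPCR), minimizing the variance of the experiment mean. The variance components, per-step costs and the nested-variance formula below are illustrative assumptions, not the paper's actual values or script.

```python
from itertools import product

def best_plan(var, cost, budget, max_reps=6):
    """Exhaustively search replicate numbers (sampling, RT, qPCR) that
    minimize the variance of the experiment mean under a budget.  For a
    nested design, var_mean = v_s/ns + v_rt/(ns*nrt) + v_q/(ns*nrt*nq)."""
    best = None
    for ns, nrt, nq in product(range(1, max_reps + 1), repeat=3):
        total = ns * (cost["sample"] + nrt * (cost["rt"] + nq * cost["qpcr"]))
        if total > budget:
            continue
        v = var["sample"] / ns + var["rt"] / (ns * nrt) + var["qpcr"] / (ns * nrt * nq)
        if best is None or v < best[0]:
            best = (v, (ns, nrt, nq), total)
    return best

# Illustrative variance components (fruit tissue: sampling step dominates)
variances = {"sample": 0.40, "rt": 0.10, "qpcr": 0.02}
costs = {"sample": 10.0, "rt": 4.0, "qpcr": 1.0}
print(best_plan(variances, costs, budget=120.0))
```

With these assumed numbers the search spends the budget on sampling replicates rather than qPCR replicates, mirroring the paper's finding that qPCR is the most reproducible (least variance-reducing) step to replicate.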
Peuchen, Elizabeth H; Sun, Liangliang; Dovichi, Norman J
2016-07-01
Xenopus laevis is an important model organism in developmental biology. While there is a large literature on changes in the organism's transcriptome during development, the study of its proteome is in an embryonic state. Several papers have been published recently that characterize the proteome of X. laevis eggs and early-stage embryos; however, proteomic sample preparation optimizations have not been reported. Sample preparation is challenging because a large fraction (~90% by weight) of the egg or early-stage embryo is yolk. We compared three common protein extraction buffer systems, mammalian Cell-PE LB(TM) lysing buffer (NP40), sodium dodecyl sulfate (SDS), and 8 M urea, in terms of protein extraction efficiency and protein identifications. SDS extracts contained the highest concentration of proteins, but this extract was dominated by a high concentration of yolk proteins. In contrast, NP40 extracts contained ~30% of the protein concentration of SDS extracts, but excelled in discriminating against yolk proteins, which resulted in more protein and peptide identifications. We then compared digestion methods using both SDS and NP40 extraction methods with one-dimensional reverse-phase liquid chromatography-tandem mass spectrometry (RPLC-MS/MS). NP40 coupled to a filter-aided sample preparation (FASP) procedure produced nearly twice the number of protein and peptide identifications compared to alternatives. When NP40-FASP samples were subjected to two-dimensional RPLC-ESI-MS/MS, a total of 5171 proteins and 38,885 peptides were identified from a single stage of embryos (stage 2), increasing the number of protein identifications by 23% in comparison to other traditional protein extraction methods. PMID:27137514
NASA Astrophysics Data System (ADS)
Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.
2014-09-01
Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment, employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e., equivalent to ~8 ng of amino sugar carbon. Compound-specific stable carbon isotopic analysis of amino sugars obtained from marine sediment extracts indicated that glucosamine and galactosamine were mainly derived from organic detritus, whereas muramic acid showed isotopic imprints from indigenous bacterial activities. The δ13C analysis of amino sugars provides a valuable addition to the biomarker-based characterization of microbial metabolism in the deep marine biosphere, which so far has been lipid oriented and biased towards the detection of archaeal signals.
Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon
2016-10-01
Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania. The disease is prevalent mainly in the Indian sub-continent, East Africa and Brazil. VL can be diagnosed by PCR amplifying the ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of those the primers were designed based on regions of the ITS1 gene shared among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real-time LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real-time LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR. PMID:27288706
Fakanya, Wellington M.; Tothill, Ibtisam E.
2014-01-01
The development of an electrochemical immunosensor for the biomarker C-reactive protein (CRP) is reported in this work. CRP has been used to assess inflammation and is also used in a multi-biomarker system as a predictive biomarker for cardiovascular disease risk. A gold-based working electrode sensor was developed, and the types of electrode printing inks and ink curing techniques were then optimized. The electrodes with the best performance parameters were then employed for the construction of an immunosensor for CRP by immobilizing anti-human CRP antibody on the working electrode surface. A sandwich enzyme-linked immunosorbent assay (ELISA) was then constructed after sample addition by using anti-human CRP antibody labelled with horseradish peroxidase (HRP). The signal was generated by the addition of a mediator/substrate system comprising 3,3′,5,5′-tetramethylbenzidine dihydrochloride (TMB) and hydrogen peroxide (H2O2). Measurements were conducted using chronoamperometry at −200 mV against an integrated Ag/AgCl reference electrode. A CRP limit of detection (LOD) of 2.2 ng·mL−1 was achieved in spiked serum samples, and performance agreement was obtained with reference to a commercial ELISA kit. The developed CRP immunosensor was able to detect a diagnostically relevant range of the biomarker in serum without the need for signal amplification using nanoparticles, paving the way for future development of a cardiac panel electrochemical point-of-care diagnostic device. PMID:25587427
Sharma, M; Todor, D; Fields, E
2014-06-01
Purpose: To present a novel method allowing fast, true volumetric optimization of tandem-and-ovoid (T and O) HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4-5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering the high-risk (HR)-CTV with D90 > 90% and minimizing D2cc to rectum, bladder and sigmoid with manual optimization, then approved and delivered. For the second step, each case was re-planned by adding a new structure, created from the 100% prescription isodose line of the manually optimized plan, to the existing physician-delineated HR-CTV, rectum, bladder and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc values for all three OARs while preserving good D90 values for the HR-CTV. On plans with “excellent” CTV coverage, average D90 of 96% (range 91-102), sigmoid D2cc was reduced on average by 37% (range 16-73), bladder by 28% (range 20-47) and rectum by 27% (range 15-45). Similar reductions were obtained on plans with “good” coverage, with an average D90 of 93% (range 90-99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18% and 11% for sigmoid, bladder and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal increase in planning time, but with the potential for dramatic and systematic reductions of D2cc for OARs and, in some cases, concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4-5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.
Geophysical Inversion Through Hierarchical Scheme
NASA Astrophysics Data System (ADS)
Furman, A.; Huisman, J. A.
2010-12-01
Geophysical investigation is a powerful tool that allows non-invasive and non-destructive mapping of subsurface states and properties. However, the non-uniqueness associated with the inversion process prevents the quantitative use of these methods. One major research direction is to constrain the inverse problem with hydrological observations and models. An alternative to the commonly used direct inversion methods is global optimization schemes (such as genetic algorithms and Markov chain Monte Carlo methods). However, the major limitation here is the desired high resolution of the tomographic image, which leads to a large number of parameters and an unreasonably high computational effort when using global optimization schemes. Two innovative schemes are presented here. First, a hierarchical approach is used to reduce the computational effort of the global optimization: a solution is achieved at coarse spatial resolution, and this solution is used as the starting point for a finer scheme. We show that the computational effort is reduced dramatically in this way. Second, we use a direct ERT inversion as the starting point for global optimization. In this case, preliminary results show that the outcome is not necessarily beneficial, probably because of spatial mismatch between the results of the direct inversion and the true resistivity field.
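The hierarchical idea, solving cheaply on a coarse grid and using the interpolated result to warm-start the fine-grid search, can be sketched with a toy 1D "inversion" where the forward problem is trivial and the global optimizer is a plain random search. Everything here (grids, profile values, optimizer) is an illustrative assumption, not the abstract's actual scheme.

```python
import random

def misfit(model, target):
    """Sum-of-squares data misfit (a stand-in for a real forward model)."""
    return sum((m - t) ** 2 for m, t in zip(model, target))

def refine(model):
    """Double the resolution by linear interpolation between coarse cells."""
    fine = []
    for a, b in zip(model, model[1:]):
        fine += [a, (a + b) / 2.0]
    fine.append(model[-1])
    return fine

def random_search(start, target, iters=2000, step=0.2, seed=1):
    """Crude global optimizer: keep random perturbations that reduce misfit."""
    rng = random.Random(seed)
    best = list(start)
    for _ in range(iters):
        cand = [x + rng.uniform(-step, step) for x in best]
        if misfit(cand, target) < misfit(best, target):
            best = cand
    return best

# Hypothetical "true" fine-grid resistivity profile (9 cells)
true_fine = [1.0, 1.5, 2.0, 2.5, 3.0, 2.5, 2.0, 1.5, 1.0]
true_coarse = true_fine[::2]                       # 5 cells

coarse = random_search([0.0] * 5, true_coarse)     # cheap coarse solve
fine = random_search(refine(coarse), true_fine, iters=500)  # warm-started fine solve
print(round(misfit(fine, true_fine), 3))
```

The saving comes from the coarse stage exploring a 5-parameter space instead of a 9-parameter one; the fine stage then only needs a short local refinement rather than a full global search.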
2011-01-01
Background: There has been increased interest in the study of molecular survival mechanisms expressed by foodborne pathogens present on food surfaces. Determining the genomic responses of these pathogens to antimicrobials is of particular interest, since this helps to understand antimicrobial effects at the molecular level. Assessment of bacterial gene expression by transcriptomic analysis in response to these antimicrobials would aid prediction of the phenotypic behavior of the bacteria in the presence of antimicrobials. However, before transcriptional profiling approaches can be implemented routinely, it is important to develop an optimal method to consistently recover pathogens from the food surface and ensure optimal quality RNA so that the corresponding gene expression analysis represents the current response of the organism. Another consideration is to confirm that there is no interference from the "background" food or meat matrix that could mask the bacterial response. Findings: Our study involved developing a food model system using chicken breast meat inoculated with mid-log Salmonella cells. First, we tested the optimum number of Salmonella cells required on the poultry meat in order to extract high-quality RNA. This was analyzed by inoculating 10-fold dilutions of Salmonella on the chicken samples followed by RNA extraction. Secondly, we tested the effect of two different bacterial cell recovery solutions, namely 0.1% peptone water and RNAprotect (Qiagen Inc.), on the RNA yield and purity. In addition, we compared the efficiency of sonication and bead-beater methods to break the cells for RNA extraction. To check chicken nucleic acid interference in downstream Salmonella microarray experiments, both chicken and Salmonella cDNA labeled with different fluorescent dyes were mixed together and hybridized on a single Salmonella array. Results of this experiment did not show any cross-hybridization signal from the chicken nucleic acids. In addition, we demonstrated the
Sun, Phillip Zhe; Wang, Enfeng; Cheung, Jerry S; Zhang, Xiaoan; Benner, Thomas; Sorensen, A Gregory
2011-10-01
Chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) is capable of measuring dilute labile protons and microenvironmental properties. However, the CEST contrast is dependent upon experimental conditions, particularly the radiofrequency (RF) irradiation scheme. Although continuous-wave RF irradiation has been used conventionally, the limited RF pulse duration or duty cycle of most clinical systems requires the use of pulsed RF irradiation. Here, the conventional numerical simulation is extended to describe pulsed-CEST MRI contrast as a function of RF pulse parameters (i.e., RF pulse duration and flip angle) and labile proton properties (i.e., exchange rate and chemical shift). For diamagnetic CEST agents undergoing slow or intermediate chemical exchange, simulation shows a linear regression relationship between the optimal mean RF power of pulsed-CEST MRI and continuous-wave-CEST MRI. The optimized pulsed-CEST contrast is approximately equal to that of continuous-wave-CEST MRI for exchange rates less than 50 s(-1), as confirmed experimentally using a multicompartment pH phantom. In acute stroke animals, we showed that pulsed- and continuous-wave-amide proton CEST MRI demonstrated similar contrast. In summary, our study elucidated the RF irradiation dependence of pulsed-CEST MRI contrast, providing useful insights to guide its experimental optimization and quantification. PMID:21437977
Sun, Phillip Zhe; Wang, Enfeng; Cheung, Jerry S.; Zhang, Xiaoan; Benner, Thomas; Sorensen, A Gregory
2011-01-01
Chemical exchange saturation transfer (CEST) MRI is capable of measuring dilute labile protons and microenvironment properties; however, the CEST contrast is also dependent upon experimental conditions, particularly the RF irradiation scheme. Although continuous-wave (CW) RF irradiation has been conventionally utilized, the RF pulse duration or duty cycle is limited on most clinical systems, for which pulsed RF irradiation must be chosen. Here, conventional numerical simulation was extended to describe pulsed-CEST MRI contrast as a function of RF pulse parameters (i.e., RF pulse duration and flip angle) and labile proton properties (i.e., exchange rate and chemical shift). For diamagnetic CEST agents undergoing slow/intermediate chemical exchange, our simulation showed a linear regression relationship between the optimal mean RF power for pulsed-CEST MRI and that of CW-CEST MRI. Notably, the optimized pulsed-CEST contrast was approximately equal to that of CW-CEST MRI for exchange rates below 50 s−1, as confirmed experimentally using a multi-compartment pH phantom. Moreover, acute stroke animals were imaged with both pulsed- and CW-amide proton CEST MRI, which showed similar contrast. In summary, our study elucidated the RF irradiation dependence of pulsed-CEST MRI contrast, providing useful insights to guide its experimental optimization and quantification. PMID:21437977
NASA Astrophysics Data System (ADS)
Guarieiro, Lílian Lefol Nani; Pereira, Pedro Afonso de Paula; Torres, Ednildo Andrade; da Rocha, Gisele Olimpio; de Andrade, Jailson B.
Biodiesel is emerging as a renewable fuel, hence becoming a promising alternative to fossil fuels. Biodiesel can form blends with diesel in any ratio, and thus could replace diesel fuel in diesel engines partially or even totally, which would bring a number of environmental, economical and social advantages. Although a number of studies are available on regulated substances, there is a gap of studies on unregulated substances, such as carbonyl compounds (CC), emitted during the combustion of biodiesel, biodiesel-diesel and/or ethanol-biodiesel-diesel blends. CC are a class of hazardous pollutants known to participate in photochemical smog formation. In this work a comparison was carried out between the two most widely used CC collection methods: C18 cartridges coated with an acid solution of 2,4-dinitrophenylhydrazine (2,4-DNPH) and impinger bottles filled with 2,4-DNPH solution. Sampling optimization was performed using a 2² factorial design tool. Samples were collected from the exhaust emissions of a diesel engine fueled with biodiesel and operated on a steady-state dynamometer. In the central body of the factorial design, the average of the sum of CC concentrations collected using impingers was 33.2 ppmV but it was only 6.5 ppmV for C18 cartridges. In addition, the relative standard deviation (RSD) was 4% for impingers and 37% for C18 cartridges. Clearly, the impinger system is able to collect CC more efficiently, with lower error than the C18 cartridge system. Furthermore, propionaldehyde was barely sampled by the C18 system at all. For these reasons, the impinger system was chosen in our study. The optimized sampling conditions applied throughout this study were: two serially connected impingers, each containing 10 mL of 2,4-DNPH solution, at a flow rate of 0.2 L min-1 during 5 min. A profile study of the C1-C4 vapor-phase carbonyl compound emissions was obtained from the exhaust of pure diesel (B0), pure biodiesel (B100) and biodiesel-diesel mixtures (B2, B5, B10, B20, B50, B
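The 2² factorial analysis used for the sampling optimization above can be sketched numerically: with each factor coded at levels -1/+1, the main and interaction effects are contrasts of the response means. The design, factor meanings and response values below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical 2^2 factorial design: two coded factors (e.g., DNPH volume
# and sampling flow rate) at levels -1/+1; the response is the summed
# carbonyl-compound concentration (illustrative numbers only).
design = np.array([
    [-1, -1],
    [+1, -1],
    [-1, +1],
    [+1, +1],
])
response = np.array([20.0, 25.0, 30.0, 41.0])

def factorial_effects(X, y):
    """Main effects and two-factor interaction of a 2^2 design."""
    a, b = X[:, 0], X[:, 1]
    effect_a = y[a == 1].mean() - y[a == -1].mean()
    effect_b = y[b == 1].mean() - y[b == -1].mean()
    effect_ab = y[a * b == 1].mean() - y[a * b == -1].mean()
    return effect_a, effect_b, effect_ab

ea, eb, eab = factorial_effects(design, response)
print(ea, eb, eab)  # 8.0 13.0 3.0
```

A center point, as in the abstract, would be added to estimate curvature and pure error without changing the effect contrasts.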
Golubeva, Yelena G; Smith, Roberta M; Sternberg, Lawrence R
2013-01-01
Laser microdissection is an invaluable tool in medical research that facilitates collecting specific cell populations for molecular analysis. Diversity of research targets (e.g., cancerous and precancerous lesions in clinical and animal research, cell pellets, rodent embryos, etc.) and varied scientific objectives, however, present challenges toward establishing standard laser microdissection protocols. Sample preparation is crucial for quality RNA, DNA and protein retrieval, where it often determines the feasibility of a laser microdissection project. The majority of microdissection studies in clinical and animal model research are conducted on frozen tissues containing native nucleic acids, unmodified by fixation. However, the variable morphological quality of frozen sections from tissues containing fat, collagen or delicate cell structures can limit or prevent successful harvest of the desired cell population via laser dissection. The CryoJane Tape-Transfer System®, a commercial device that improves cryosectioning outcomes on glass slides has been reported superior for slide preparation and isolation of high quality osteocyte RNA (frozen bone) during laser dissection. Considering the reported advantages of CryoJane for laser dissection on glass slides, we asked whether the system could also work with the plastic membrane slides used by UV laser based microdissection instruments, as these are better suited for collection of larger target areas. In an attempt to optimize laser microdissection slide preparation for tissues of different RNA stability and cryosectioning difficulty, we evaluated the CryoJane system for use with both glass (laser capture microdissection) and membrane (laser cutting microdissection) slides. We have established a sample preparation protocol for glass and membrane slides including manual coating of membrane slides with CryoJane solutions, cryosectioning, slide staining and dissection procedure, lysis and RNA extraction that facilitated
Alves, Claudete; Fernandes, Christian; Dos Santos Neto, Alvaro José; Rodrigues, José Carlos; Costa Queiroz, Maria Eugênia; Lanças, Fernando Mauro
2006-07-01
Solid-phase microextraction (SPME)-liquid chromatography (LC) is used to analyze the tricyclic antidepressant drugs desipramine, imipramine, nortriptyline, amitriptyline, and clomipramine (internal standard) in plasma samples. Extraction conditions are optimized using a 2³ factorial design plus a central point to evaluate the influence of time, temperature, and matrix pH. A polydimethylsiloxane-divinylbenzene (60-μm film thickness) fiber is selected after the assessment of different types of coating. The chromatographic separation is performed using a C18 column (150 × 4.6 mm, 5-μm particles), ammonium acetate buffer (0.05 mol/L, pH 5.50)-acetonitrile (55:45 v/v) with 0.1% of triethylamine as mobile phase and UV-vis detection at 214 nm. Among the factorial design conditions evaluated, the best results are obtained at pH 11.0, a temperature of 30 °C, and an extraction time of 45 min. The proposed method, using a lab-made SPME-LC interface, allowed the determination of tricyclic antidepressants in plasma at therapeutic concentration levels. PMID:16884589
Masson, Perrine; Alves, Alexessander Couto; Ebbels, Timothy M D; Nicholson, Jeremy K; Want, Elizabeth J
2010-09-15
A series of six protocols were evaluated for UPLC-MS based untargeted metabolic profiling of liver extracts in terms of reproducibility and number of metabolite features obtained. These protocols, designed to extract both polar and nonpolar metabolites, were based on (i) a two stage extraction approach or (ii) a simultaneous extraction in a biphasic mixture, employing different volumes and combinations of extraction and resuspension solvents. A multivariate statistical strategy was developed to allow comparison of the multidimensional variation between the methods. The optimal protocol for profiling both polar and nonpolar metabolites was found to be an aqueous extraction with methanol/water followed by an organic extraction with dichloromethane/methanol, with resuspension of the dried extracts in methanol/water before UPLC-MS analysis. This protocol resulted in a median CV of feature intensities among experimental replicates of <20% for aqueous extracts and <30% for organic extracts. These data demonstrate the robustness of the proposed protocol for extracting metabolites from liver samples and make it well suited for untargeted liver profiling in studies exploring xenobiotic hepatotoxicity and clinical investigations of liver disease. The generic nature of this protocol facilitates its application to other tissues, for example, brain or lung, enhancing its utility in clinical and toxicological studies. PMID:20715759
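The reproducibility criterion reported above (median CV of feature intensities among experimental replicates) is straightforward to compute. The intensity matrix below is synthetic, standing in for a UPLC-MS feature table; only the metric itself is taken from the abstract.

```python
import numpy as np

# Synthetic feature table: rows = metabolite features, columns = replicate
# injections (log-normal noise of ~10% stands in for real measurements).
rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=10.0, sigma=0.1, size=(500, 6))

def median_cv(x):
    """Median coefficient of variation (%) across features."""
    cv = x.std(axis=1, ddof=1) / x.mean(axis=1) * 100.0
    return float(np.median(cv))

m = median_cv(intensities)
print(m)
```

Applying the abstract's thresholds, one would require the metric to stay below 20% for aqueous extracts and 30% for organic extracts.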
Rogeberg, Magnus; Vehus, Tore; Grutle, Lene; Greibrokk, Tyge; Wilson, Steven Ray; Lundanes, Elsa
2013-09-01
The single-run resolving power of current 10 μm id porous-layer open-tubular (PLOT) columns has been optimized. The columns studied had a poly(styrene-co-divinylbenzene) porous layer (~0.75 μm thickness). In contrast to many previous studies that have employed complex plumbing or compromising set-ups, SPE-PLOT-LC-MS was assembled without the use of additional hardware/noncommercial parts, additional valves or sample splitting. A comprehensive study of various flow rates, gradient times, and column length combinations was undertaken. Maximum resolution for <400 bar was achieved using a 40 nL/min flow rate, a 400 min gradient and an 8 m long column. We obtained a 2.3-fold increase in peak capacity compared to previous PLOT studies (950 versus previously obtained 400, when using peak width = 2σ definition). Our system also meets or surpasses peak capacities obtained in recent reports using nano-ultra-performance LC conditions or long silica monolith nanocolumns. Nearly 500 proteins (1958 peptides) could be identified in just one single injection of an extract corresponding to 1000 BxPC3 beta catenin (-/-) cells, and ~1200 and 2500 proteins in extracts of 10,000 and 100,000 cells, respectively, allowing detection of central members and regulators of the Wnt signaling pathway. PMID:23813982
Multidimensional explicit difference schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Vanleer, B.
1983-01-01
First and second order explicit difference schemes are derived for a three dimensional hyperbolic system of conservation laws, without recourse to dimensional factorization. All schemes are upwind (backward) biased and optimally stable.
Multidimensional explicit difference schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Van Leer, B.
1984-01-01
First- and second-order explicit difference schemes are derived for a three-dimensional hyperbolic system of conservation laws, without recourse to dimensional factorization. All schemes are upwind biased and optimally stable.
NASA Astrophysics Data System (ADS)
Khajeh, Mostafa; Golzary, Ali Reza
2014-10-01
In this work, zinc nanoparticles-chitosan based solid phase extraction has been developed for separation and preconcentration of trace amounts of methyl orange from water samples. An artificial neural network-cuckoo optimization algorithm has been employed to develop the model for simulation and optimization of this method. The pH, volume of elution solvent, mass of zinc oxide nanoparticles-chitosan, and flow rates of sample and elution solvent were the input variables, while recovery of methyl orange was the output. The optimum conditions were obtained by the cuckoo optimization algorithm. At the optimum conditions, a limit of detection of 0.7 μg L-1 was obtained for methyl orange. The developed procedure was then applied to the separation and preconcentration of methyl orange from water samples.
Khajeh, Mostafa; Golzary, Ali Reza
2014-10-15
In this work, zinc nanoparticles-chitosan based solid phase extraction has been developed for separation and preconcentration of trace amounts of methyl orange from water samples. An artificial neural network-cuckoo optimization algorithm has been employed to develop the model for simulation and optimization of this method. The pH, volume of elution solvent, mass of zinc oxide nanoparticles-chitosan, and flow rates of sample and elution solvent were the input variables, while recovery of methyl orange was the output. The optimum conditions were obtained by the cuckoo optimization algorithm. At the optimum conditions, a limit of detection of 0.7 μg L(-1) was obtained for methyl orange. The developed procedure was then applied to the separation and preconcentration of methyl orange from water samples. PMID:24835725
NASA Astrophysics Data System (ADS)
Peng, J.; Liu, Q.; Wen, J.; Fan, W.; Dou, B.
2015-12-01
Coarse-resolution satellite albedo products are increasingly applied in geographical research because of their capability to characterize the spatio-temporal patterns of land surface parameters. In the long-term validation of coarse-resolution satellite products with ground measurements, the scale effect, i.e., the mismatch between point measurement and pixel observation, becomes the main challenge, particularly over heterogeneous land surfaces. Recent advances in Wireless Sensor Network (WSN) technologies offer an opportunity for validation using multi-point observations instead of a single-point observation. The difficulty is to ensure the representativeness of the WSN in heterogeneous areas with limited nodes. In this study, the objective is to develop a ground-based spatial sampling strategy that takes historical prior knowledge into account and avoids information redundancy between different sensor nodes. Taking albedo as an example, we first derive monthly local maps of albedo from 30-m HJ CCD images over a 3-year period. Second, we pick out candidate points from the areas with higher temporal stability, which helps to avoid transition or boundary areas. Then, the representativeness (r) of each candidate point is evaluated through correlation analysis between the point-specific and area-average time-sequence albedo vectors. The point with the highest r is noted as the new sensor point. Before selecting a new point, the vector components of the already selected points are removed from the vectors used in the subsequent correlation analysis. The selection procedure ceases once the integral representativeness (R) meets the accuracy requirement. The sampling method is adapted to both single-parameter and multi-parameter situations. Finally, it is shown that this sampling method has worked effectively in the optimized layout of the Huailai remote sensing station in China. The coarse resolution pixel covering this station could be
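The greedy selection loop described above can be sketched as follows. This is a simplified interpretation with synthetic time series and a fixed node budget standing in for the stopping rule on the integral representativeness R; the deflation step (removing the chosen point's component from the target) mirrors the abstract's removal of selected components before the next correlation round.

```python
import numpy as np

def select_nodes(series, n_nodes):
    """series: (n_points, n_times) array of per-point time series.
    Greedily pick points best correlated with the area-average series."""
    residual = series - series.mean(axis=1, keepdims=True)
    target = residual.mean(axis=0)           # area-average time series
    chosen = []
    for _ in range(n_nodes):
        if np.allclose(target, 0.0):
            break                            # nothing left to explain
        r = np.full(len(residual), -np.inf)
        for i, p in enumerate(residual):
            if i not in chosen and p.std() > 0:
                r[i] = np.corrcoef(p, target)[0, 1]
        best = int(np.argmax(r))
        chosen.append(best)
        # project out the chosen point's component before the next round
        v = residual[best]
        target = target - (target @ v) / (v @ v) * v
    return chosen

rng = np.random.default_rng(0)
series = rng.normal(size=(20, 12))           # 20 candidates, 12 months
nodes = select_nodes(series, 3)
print(nodes)
```

In the paper's setting one would stop when the cumulative explained correlation reaches a target accuracy instead of after a fixed number of nodes.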
NASA Astrophysics Data System (ADS)
Fridjine, S.; Amlouk, M.
In this study, we define a synthetic parameter, the optothermal expansivity, as a quantitative guide to evaluating and optimizing both the thermal and the optical performance of PV-T functional materials. The definition of this parameter, ψAB (the Amlouk-Boubaker parameter), takes into account the thermal diffusivity and the optical effective absorptivity of the material. The values of this parameter, which seems to be a characteristic one, correspond to the total volume that contains a fixed amount of heat per unit time (m3 s-1) and can be considered as a 3D velocity of the transmitted heat inside the material. As PV-T combined devices need to have simultaneous optical and thermal efficiency, we investigate some recently proposed materials (β-SnS2, In2S3, ZnS1-xSex with 0 ≤ x < 0.5, and Zn-doped thioindate compounds) using the newly established ψAB/Eg abacus.
NASA Technical Reports Server (NTRS)
Drusano, George L.
1991-01-01
The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988), and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with the NONMEM approach (Sheiner et al., 1977), in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
Zhang, Yan; Liu, Jun W; Zheng, Wen J; Wang, Lei; Zhang, Hong Y; Fang, Guo Z; Wang, Shuo
2008-02-01
In this study, an enzyme-linked immunosorbent assay (ELISA) was optimized and applied to the determination of endosulfan residues in 20 different kinds of food commodities including vegetables, dry fruits, tea and meat. The limit of detection (IC(15)) was 0.8 microg kg(-1) and the sensitivity (IC(50)) was 5.3 microg kg(-1). Three simple extraction methods were developed, including shaking on the rotary shaker at 250 r min(-1) overnight, shaking on the rotary shaker for 1 h and thoroughly mixing for 2 min. Methanol was used as the extraction solvent in this study. The extracts were diluted in 0.5% fish skin gelatin (FG) in phosphate-buffered saline (PBS) at various dilutions in order to remove the matrix interference. For cabbage (purple and green), asparagus, Japanese green, Chinese cabbage, scallion, garland chrysanthemum, spinach and garlic, the extracts were diluted 10-fold; for carrots and tea, the extracts were diluted 15-fold and 900-fold, respectively. The extracts of celery, adzuki beans and chestnuts, were diluted 20-fold to avoid the matrix interference; ginger, vegetable soybean and peanut extracts were diluted 100-fold; mutton and chicken extracts were diluted 10-fold and for eel, the dilution was 40-fold. Average recoveries were 63.13-125.61%. Validation was conducted by gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS). The results of this study will be useful to the wide application of an ELISA for the rapid determination of pesticides in food samples. PMID:18246504
NASA Astrophysics Data System (ADS)
Ávila, Akie K.; Araujo, Thiago O.; Couto, Paulo R. G.; Borges, Renata M. H.
2005-10-01
In general, research experimentation is often used mainly when new methodologies are being developed or existing ones are being improved. The characteristics of any method depend on its factors or components. The planning techniques and analysis of experiments are basically used to improve the analytical conditions of methods, to reduce experimental labour with the minimum of tests and to optimize the use of resources (reagents, time of analysis, availability of the equipment, operator time, etc.). These techniques are applied by identifying the variables (control factors) of a process that have the most influence on the response of the parameters of interest, by attributing values to the influential variables of the process so that the variability of the response is minimized or the obtained value (quality parameter) is very close to the nominal value, and by attributing values to the influential variables of the process so that the effects of uncontrollable variables can be reduced. In this central composite design (CCD), four permanent modifiers (Pd, Ir, W and Rh) and one combined permanent modifier W+Ir were studied. The study selected two factors, pyrolysis and atomization temperatures, at five different levels for all the possible combinations. The pyrolysis temperatures with different permanent modifiers varied from 600 °C to 1600 °C with hold times of 25 s, while atomization temperatures ranged between 1900 °C and 2280 °C. The characteristic masses for As were in the range of 31 pg to 81 pg. Assuming the best conditions obtained in the CCD, it was possible to estimate the measurement uncertainty of As determination in water samples. The results showed that, considering the main uncertainty sources such as the repeatability of measurement inherent in the equipment, the calibration curve, which evaluates the adjustment of the mathematical model to the results, and the calibration standards concentrations, the values obtained were similar to international
Results from the NIST-EPA Interagency Agreement on Measurements and Standards in Aerosol Carbon: Sampling Regional PM2.5 for the Chemometric Optimization of Thermal-Optical Analysis Study will be presented at the American Association for Aerosol Research (AAAR) 24th Annual Confer...
ERIC Educational Resources Information Center
Geldhof, G. John; Gestsdottir, Steinunn; Stefansson, Kristjan; Johnson, Sara K.; Bowers, Edmond P.; Lerner, Richard M.
2015-01-01
Intentional self-regulation (ISR) undergoes significant development across the life span. However, our understanding of ISR's development and function remains incomplete, in part because the field's conceptualization and measurement of ISR vary greatly. A key sample case involves how Baltes and colleagues' Selection, Optimization,…
Liu Yu; Guo Qiuquan; Nie Hengyong; Lau, W. M.; Yang Jun
2009-12-15
The mechanism of dynamic force modes has been successfully applied to many atomic force microscopy (AFM) applications, such as tapping mode and phase imaging. The high-order flexural vibration modes are a recent advancement of AFM dynamic force modes. AFM optical lever detection sensitivity plays a major role in dynamic force modes because it determines the accuracy in mapping surface morphology, distinguishing various tip-surface interactions, and measuring the strength of the tip-surface interactions. In this work, we have analyzed optimization and calibration of the optical lever detection sensitivity for an AFM cantilever-tip ensemble vibrating in high-order flexural modes and simultaneously experiencing a wide range and variety of tip-sample interactions. It is found that the optimal detection sensitivity depends on the vibration mode, the ratio of the force constant of tip-sample interactions to the cantilever stiffness, as well as the incident laser spot size and its location on the cantilever. It is also found that the optimal detection sensitivity is less dependent on the strength of tip-sample interactions for high-order flexural modes relative to the fundamental mode, i.e., tapping mode. When the force constant of tip-sample interactions significantly exceeds the cantilever stiffness, the optimal detection sensitivity occurs only when the laser spot is located at a certain distance from the cantilever-tip end. Thus, in addition to the 'globally optimized detection sensitivity', the 'tip optimized detection sensitivity' is also determined. Finally, we have proposed a calibration method to determine the actual AFM detection sensitivity in high-order flexural vibration modes against the static end-load sensitivity that is obtained traditionally by measuring a force-distance curve on a hard substrate in contact mode.
NASA Astrophysics Data System (ADS)
Dyck, Tobias; Haas, Stefan
2016-04-01
We present a novel approach to obtaining a quick prediction of a sample's topography after treatment with direct laser interference patterning (DLIP). The underlying model uses the parameters of the experimental setup as input, calculates the laser intensity distribution in the interference volume and determines the corresponding heat intake into the material as well as the subsequent heat diffusion within the material. The resulting heat distribution is used to determine the topography of the sample after the DLIP treatment. This output topography is in good agreement with corresponding experiments. The model can be applied in optimization algorithms in which a sample topography needs to be engineered to suit the needs of a given device. A prominent example of such an application is the optimization of the light-scattering properties of the textured interfaces in a solar cell.
Pay scheme preferences and health policy objectives.
Abelsen, Birgit
2011-04-01
This paper studies the preferences among healthcare workers towards pay schemes involving different levels of risk. It identifies which pay scheme individuals would prefer for themselves, and which they think is best in furthering health policy objectives. The paper adds, methodologically, a way of defining pay schemes that include different levels of risk. A questionnaire was mailed to a random sample of 1111 dentists. Respondents provided information about their current and preferred pay schemes, and indicated which pay scheme, in their opinion, would best further overall health policy objectives. A total of 504 dentists (45%) returned the questionnaire, and there was no indication of systematic non-response bias. All public dentists had a current pay scheme based on a fixed salary and the majority of individuals preferred a pay scheme with more income risk. Their preferred pay schemes coincided with the ones believed to further stabilise healthcare personnel. The predominant current pay scheme among private dentists was based solely on individual output, and the majority of respondents preferred this pay scheme. In addition, their preferred pay schemes coincided with the ones believed to further efficiency objectives. Both public and private dentists believed that pay schemes, furthering efficiency objectives, had to include more performance-related pay than the ones believed to further stability and quality objectives. PMID:20565995
Lamiable, A; Thevenet, P; Tufféry, P
2016-08-01
Hidden Markov Model derived structural alphabets are a probabilistic framework in which the complete conformational space of a peptidic chain is described in terms of probability distributions that can be sampled to identify the conformations of largest probability. Here, we assess how three strategies for sampling sub-optimal conformations (Viterbi k-best, forward backtrack, and a taboo sampling approach) can lead to the efficient generation of peptide conformations. We show that diversity of sampling is essential to compensate for biases introduced in the estimates of the probabilities, and we find that only the forward backtrack and taboo sampling strategies can efficiently generate native or near-native models. Finally, we find that such approaches are as efficient as former protocols while being one order of magnitude faster, opening the door to the large-scale de novo modeling of peptides and mini-proteins. © 2016 Wiley Periodicals, Inc. PMID:27317417
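The forward backtrack strategy assessed above can be illustrated on a toy discrete HMM: run the forward recursion, then sample the hidden path backwards in proportion to alpha[t, i] * A[i, next]. The two-state model below is an assumption for illustration only; a structural-alphabet HMM has many more states and continuous emissions.

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.8, 0.2], [0.3, 0.7]])   # transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission matrix
pi = np.array([0.5, 0.5])                # initial state distribution
obs = [0, 0, 1, 1, 0]                    # observed symbols

def forward(obs):
    """Forward probabilities alpha[t, i] = P(o_1..o_t, state_t = i)."""
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def sample_path(obs):
    """Draw one state path from P(path | obs) by stochastic backtracking."""
    alpha = forward(obs)
    path = [rng.choice(len(pi), p=alpha[-1] / alpha[-1].sum())]
    for t in range(len(obs) - 2, -1, -1):
        w = alpha[t] * A[:, path[-1]]    # weight by alpha and transition
        path.append(rng.choice(len(pi), p=w / w.sum()))
    return path[::-1]

print(sample_path(obs))
```

Repeated calls yield distinct sub-optimal paths with the correct posterior frequencies, which is the diversity property the abstract argues is essential.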
NASA Astrophysics Data System (ADS)
Bayer, Peter; de Paly, Michael; Bürger, Claudius M.
2010-05-01
This study demonstrates the high efficiency of the so-called stack-ordering technique for optimizing a groundwater management problem under uncertain conditions. The uncertainty is expressed by multiple equally probable model representations, such as realizations of hydraulic conductivity. During optimization of a well-layout problem for contaminant control, a ranking mechanism is applied that extracts those realizations that appear most critical for the optimization problem. It is shown that this procedure works well for evolutionary optimization algorithms, which are to some extent robust against noisy objective functions. More precisely, differential evolution (DE) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are applied. Stack ordering is comprehensively investigated for a plume management problem at a hypothetical template site, based on parameter values measured at, and a geostatistical model developed for, the Lauswiesen study site near Tübingen, Germany. The straightforward procedure yields computational savings above 90% in comparison to always evaluating the full set of realizations. This is confirmed by cross testing with four additional validation cases. The results show that both evolutionary algorithms obtain highly reliable near-optimal solutions. DE appears to be the better choice for cases with significant noise caused by small stack sizes. On the other hand, there seems to be a problem-specific threshold for the evaluation stack size above which the CMA-ES achieves solutions with both better fitness and higher reliability.
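A minimal sketch of coupling stack ordering to differential evolution, under the assumption of a toy one-dimensional worst-case objective with synthetic scalar "realizations" in place of the groundwater model: each generation, candidates are scored only on the few realizations ranked most critical at the incumbent solution, not on the full set.

```python
import numpy as np

rng = np.random.default_rng(2)
realizations = rng.normal(0.0, 0.5, size=50)   # stand-in for K-field realizations

def objective(x, stack):
    """Worst-case cost over the evaluated subset of realizations."""
    return max((x - 1.0) ** 2 + r * x for r in realizations[stack])

def de_with_stack(stack_size=5, pop=12, gens=60, F=0.7):
    X = rng.uniform(-5.0, 5.0, size=pop)
    best = X[0]
    for _ in range(gens):
        # re-rank realizations by how critical they are at the incumbent
        scores = np.array([(best - 1.0) ** 2 + r * best for r in realizations])
        stack = np.argsort(scores)[-stack_size:]   # most critical subset
        f = np.array([objective(x, stack) for x in X])
        best = X[int(np.argmin(f))]
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = a + F * (b - c)                # DE/rand/1 mutation
            if objective(trial, stack) < f[i]:     # greedy selection
                X[i] = trial
    return best

best_x = de_with_stack()
print(best_x)
```

With a stack of 5 out of 50 realizations, each generation costs one tenth of a full evaluation, which is the kind of saving (here over 90% per candidate) the study reports for the real problem.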
Lucchinetti, E; Stüssi, E
2004-01-01
Measuring the elasticity constants of biological materials often faces important constraints, such as the limited size or the irregular geometry of the samples. In this paper, the identification approach as applied to the specific problem of accurately retrieving the material properties of small bone samples from a measured displacement field is discussed. The identification procedure can be formulated as an optimization problem with the goal of minimizing the difference between computed and measured displacements by searching for an appropriate set of material parameters using dedicated algorithms. Alternatively, the backcalculation of the material properties from displacement maps can be implemented using artificial neural networks. In a practical situation, however, measurement errors strongly affect the identification results, calling for robust optimization approaches in order to accurately retrieve the material properties from error-polluted sample deformation maps. Using a simple model problem, the performances of both classical and neural-network-driven optimization are compared. When performed before the collection of experimental data, this evaluation can be very helpful in pinpointing potential problems with the envisaged experiments, such as the need for a sufficient signal-to-noise ratio, which is particularly important when working with small tissue samples such as specimens cut from rodent bones or single bone trabeculae. PMID:15648663
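The identification idea, minimizing the mismatch between computed and measured displacements, can be sketched on a toy one-dimensional bar, where the least-squares fit happens to be closed-form. All parameter values and the noise level below are made up for illustration; real bone identification involves full-field maps and iterative solvers.

```python
import numpy as np

# Toy forward model: axial displacement of a uniform bar under load F,
# u(x) = F * x / (E * A). We "measure" u with noise and back-calculate E.
rng = np.random.default_rng(3)
F_load, A, E_true = 100.0, 1e-4, 2.0e9        # hypothetical values
x = np.linspace(0.01, 0.1, 20)                # measurement positions (m)
u_meas = F_load * x / (E_true * A) + rng.normal(0.0, 1e-7, x.size)

def identify_E(x, u):
    """Least-squares slope fit: u = (F/(E*A)) * x, so slope = F/(E*A)."""
    slope = (x @ u) / (x @ x)
    return F_load / (slope * A)

E_hat = identify_E(x, u_meas)
print(E_hat)
```

Raising the noise standard deviation in this sketch shows directly how the signal-to-noise ratio limits the accuracy of the recovered modulus, which is the pre-experiment check the paper advocates.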
Lee, Seunggeun; Emond, Mary J; Bamshad, Michael J; Barnes, Kathleen C; Rieder, Mark J; Nickerson, Deborah A; Christiani, David C; Wurfel, Mark M; Lin, Xihong
2012-08-10
We propose in this paper a unified approach for testing the association between rare variants and phenotypes in sequencing association studies. This approach maximizes power by adaptively using the data to optimally combine the burden test and the nonburden sequence kernel association test (SKAT). Burden tests are more powerful when most variants in a region are causal and the effects are in the same direction, whereas SKAT is more powerful when a large fraction of the variants in a region are noncausal or the effects of causal variants are in different directions. The proposed unified test maintains the power in both scenarios. We show that the unified test corresponds to the optimal test in an extended family of SKAT tests, which we refer to as SKAT-O. The second goal of this paper is to develop a small-sample adjustment procedure for the proposed methods for the correction of conservative type I error rates of SKAT family tests when the trait of interest is dichotomous and the sample size is small. Both small-sample-adjusted SKAT and the optimal unified test (SKAT-O) are computationally efficient and can easily be applied to genome-wide sequencing association studies. We evaluate the finite sample performance of the proposed methods using extensive simulation studies and illustrate their application using the acute-lung-injury exome-sequencing data of the National Heart, Lung, and Blood Institute Exome Sequencing Project. PMID:22863193
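The unified statistic can be sketched as the rho-weighted combination Q_rho = (1 - rho) * Q_SKAT + rho * Q_burden, minimized over a grid of rho. The sketch below uses synthetic genotypes and permutation p-values so it is self-contained; actual SKAT-O uses analytic mixture-of-chi-square p-values (Davies' method), per-variant weights, and covariate adjustment.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 200, 10
G = rng.binomial(2, 0.05, size=(n, m)).astype(float)  # rare-variant genotypes
beta = np.zeros(m); beta[:3] = 0.8                    # 3 causal variants
y = G @ beta + rng.normal(size=n)
y = y - y.mean()

def q_rho(y, G, rho):
    s = G.T @ y                        # per-variant score statistics
    return (1 - rho) * (s @ s) + rho * s.sum() ** 2   # SKAT vs burden mix

def skat_o_pvalue(y, G, rhos=(0.0, 0.25, 0.5, 0.75, 1.0), n_perm=500):
    obs = np.array([q_rho(y, G, r) for r in rhos])
    null = np.zeros((n_perm, len(rhos)))
    for b in range(n_perm):
        yp = rng.permutation(y)
        null[b] = [q_rho(yp, G, r) for r in rhos]
    # per-rho permutation p-values, then a min-p over the rho grid,
    # calibrated against the same permutations
    p_obs = (null >= obs).mean(axis=0)
    p_null = np.array([(null >= null[b]).mean(axis=0) for b in range(n_perm)])
    return float((p_null.min(axis=1) <= p_obs.min()).mean())

p = skat_o_pvalue(y, G)
print(p)
```

rho = 0 recovers the SKAT-like statistic and rho = 1 the burden statistic, so the min-p over the grid adapts to whichever scenario the data favor, which is the power argument made in the abstract.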
An adaptive additive inflation scheme for Ensemble Kalman Filters
NASA Astrophysics Data System (ADS)
Sommer, Matthias; Janjic, Tijana
2016-04-01
Data assimilation for atmospheric dynamics requires an accurate estimate of the uncertainty of the forecast in order to obtain an optimal combination with available observations. This uncertainty has two components: firstly, the uncertainty which originates in the initial condition of the forecast itself, and secondly, the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad-hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent. The additive schemes rely on samples for the model error, e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However, since these methods work in ensemble space (i.e. act directly on the ensemble perturbations), the sampling error is fixed and can be expected to affect the skill substantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.
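As a hedged sketch of the basic additive-inflation idea this abstract builds on (not the generalized scheme it proposes), one can draw perturbations from a library of stored model-error samples, recenter them so the ensemble mean is preserved, and add them to the members:

```python
import numpy as np

def additive_inflation(ensemble, error_samples, scale=1.0, rng=None):
    """Additive inflation: add random draws from a model-error sample library.

    ensemble      : (n_state, n_members) forecast ensemble
    error_samples : (n_state, n_samples) library, e.g. short-term forecast tendencies
    """
    rng = np.random.default_rng() if rng is None else rng
    n_members = ensemble.shape[1]
    idx = rng.integers(0, error_samples.shape[1], size=n_members)
    perturb = error_samples[:, idx]
    # Recenter so the ensemble mean is unchanged by inflation
    perturb = perturb - perturb.mean(axis=1, keepdims=True)
    return ensemble + scale * perturb
```

Because the perturbations act directly on the ensemble members, the sampling error noted in the abstract is inherent to this construction.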
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
Hampton, Jerrad; Doostan, Alireza
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
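The sparse-recovery setting can be illustrated with a toy example: evaluate a Legendre basis at Monte Carlo points drawn from its natural (uniform) distribution and recover a sparse coefficient vector. Here greedy orthogonal matching pursuit stands in for the ℓ1-minimization solver of the abstract; all names and values are illustrative:

```python
import numpy as np
from numpy.polynomial import legendre

def omp(A, b, sparsity):
    """Greedy orthogonal matching pursuit (a simple stand-in for an l1 solver)."""
    residual, support = b.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Sparse Legendre "PC" expansion sampled from the natural uniform distribution
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=200)       # Monte Carlo sample points
A = legendre.legvander(pts, 15)              # measurement matrix of basis evaluations
norms = np.linalg.norm(A, axis=0)
true_coef = np.zeros(16)
true_coef[[2, 7]] = [2.0, -1.0]              # two active modes
b = A @ true_coef
est = omp(A / norms, b, sparsity=2) / norms  # solve in the column-normalized basis
```

With enough samples relative to the coherence of the basis, the sparse coefficients are recovered exactly, which is the quantitative relationship the abstract's bounds formalize.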
NASA Astrophysics Data System (ADS)
Claes, Martine; de Bokx, Pieter; Willard, Nico; Veny, Paul; Van Grieken, René
1997-07-01
Grazing emission X-ray fluorescence (GEXRF) is a new development in X-ray fluorescence analysis related to total-reflection XRF. An optical flat carrying the sample is irradiated at an angle of approximately 90° with an uncollimated polychromatic X-ray beam. The emitted fluorescent radiation of the sample elements is measured at very small angles using wavelength dispersive detection. For the application of GEXRF in micro- and trace analysis, a sample preparation procedure for analysis of liquid samples has been developed. Polycarbonate was investigated as a possible material for the sample carrier. Homogeneous distribution of the sample on the support was achieved by special pre-treatment of the carrier. This pre-treatment includes siliconizing the polycarbonate disks with Serva silicone solution, after which the siliconized carriers are placed in an oxygen plasma asher. Finally, to obtain a spot of the same size as the X-ray beam (≈30 mm diameter), a thin silicone layer is placed as a ring on the carriers with an ear pick. Electron microprobe analyses were performed to check the distribution of the liquid sample deposit, and GEXRF measurements were used to check the reproducibility of sample preparation.
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E; Kaminski, Clemens F; Szabó, Gábor; Erdélyi, Miklós
2014-03-01
Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813
Sancho-Parramon, Jordi; Ferré-Borrull, Josep; Bosch, Salvador; Ferrara, Maria Christina
2003-03-01
We present a procedure for the optical characterization of thin-film stacks from spectrophotometric data. The procedure overcomes the intrinsic limitations arising in the numerical determination of many parameters from reflectance or transmittance spectra measurements. The key point is to use all the information available from the manufacturing process in a single global optimization process. The method is illustrated by a case study of solgel applications. PMID:12638889
Stenholm, Ake; Holmström, Sara; Hjärthag, Sandra; Lind, Ola
2012-01-01
Trace-level analysis of alkylphenol polyethoxylates (APEOs) in wastewater containing sludge requires the prior removal of contaminants and preconcentration. In this study, the effects on optimal work-up procedures of the types of alkylphenols present, their degree of ethoxylation, the biofilm wastewater treatment and the sample matrix were investigated for these purposes. The sampling spot for APEO-containing specimens from an industrial wastewater treatment plant was optimized, including a box that surrounded the tubing outlet carrying the wastewater, to prevent sedimented sludge contaminating the collected samples. Following these changes, the sampling precision (in terms of dry matter content) at a point just under the tubing leading from the biofilm reactors was 0.7% RSD. The findings were applied to develop a work-up procedure for use prior to a high-performance liquid chromatography-fluorescence detection analysis method capable of quantifying nonylphenol polyethoxylates (NPEOs) and poorly investigated dinonylphenol polyethoxylates (DNPEOs) at low μg L(-1) concentrations in effluents from non-activated sludge biofilm reactors. The selected multi-step work-up procedure includes lyophilization and pressurized fluid extraction (PFE) followed by strong ion exchange solid phase extraction (SPE). The yields of the combined procedure, according to tests with NP10EO-spiked effluent from a wastewater treatment plant, were in the 62-78% range. PMID:22519096
NASA Astrophysics Data System (ADS)
Popa, Mihnea; Roth, Mike
2003-06-01
In this paper we study the relationship between two different compactifications of the space of vector bundle quotients of an arbitrary vector bundle on a curve. One is Grothendieck's Quot scheme, while the other is a moduli space of stable maps to the relative Grassmannian. We establish an essentially optimal upper bound on the dimension of the two compactifications. Based on that, we prove that for an arbitrary vector bundle, the Quot schemes of quotients of large degree are irreducible and generically smooth. We precisely describe all the vector bundles for which the same thing holds in the case of the moduli spaces of stable maps. We show that there are in general no natural morphisms between the two compactifications. Finally, as an application, we obtain new cases of a conjecture on effective base point freeness for pluritheta linear series on moduli spaces of vector bundles.
Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron
2015-01-01
Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding on the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343
PICOBIT: A Compact Scheme System for Microcontrollers
NASA Astrophysics Data System (ADS)
St-Amour, Vincent; Feeley, Marc
Due to their tight memory constraints, small, microcontroller-based embedded systems have traditionally been implemented using low-level languages. This paper shows that the Scheme programming language can also be used for such applications, with less than 7 kB of total memory. We present PICOBIT, a very compact implementation of Scheme suitable for memory-constrained embedded systems. To achieve a compact system we have tackled the space issue in three ways: the design of a Scheme compiler generating compact bytecode, a small virtual machine, and an optimizing C compiler suited to the compilation of the virtual machine.
Błażewicz, Anna; Klatka, Maria; Dolliver, Wojciech; Kocjan, Ryszard
2014-07-01
A fast, accurate and precise ion chromatography method with pulsed amperometric detection was applied to evaluate a variety of parameters affecting the determination of total iodine in serum and urine of 81 subjects, including 56 obese and 25 healthy Polish children. The sample pretreatment methods were carried out in a closed system and with the assistance of microwaves. Both alkaline and acidic digestion procedures were developed and optimized to find the simplest combination of reagents and the appropriate parameters for digestion that would allow for the fastest, least time consuming and most cost-effective way of analysis. A good correlation between the certified and the measured concentrations was achieved. The best recoveries (96.8% for urine and 98.8% for serum samples) were achieved using 1 ml of 25% tetramethylammonium hydroxide solution within 6 min for 0.1 ml of serum/urine samples. Using 0.5 ml of 65% nitric acid solution the best recovery (95.3%) was obtained when 7 min of effective digestion time was used. Freeze-thaw stability and long-term stability were checked. After 24 weeks 14.7% loss of iodine in urine, and 10.9% in serum samples occurred. For urine samples, better correlation (R(2)=0.9891) of various sample preparation procedures (alkaline digestion and application of OnGuard RP cartridges) was obtained. Significantly lower iodide content was found in samples taken from obese children. Serum iodine content in obese children was markedly variable in comparison with the healthy group, whereas the difference was less evident when urine samples were analyzed. The mean content in serum was 59.12±8.86μg/L, and in urine 98.26±25.93 for obese children when samples were prepared by the use of optimized alkaline digestion reinforced by microwaves. In healthy children the mean content in serum was 82.58±6.01μg/L, and in urine 145.76±31.44μg/L. PMID:24911549
Verant, Michelle L; Bohuski, Elizabeth A; Lorch, Jeffery M; Blehert, David S
2016-03-01
The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine quantification capabilities of this assay. PMID:26965231
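Quantification with a qPCR assay such as the one refined above typically runs through a standard curve of Ct versus log10 quantity; a generic sketch (values and names are illustrative, not from the study):

```python
import numpy as np

def standard_curve(log10_quantity, ct):
    """Fit Ct = slope*log10(quantity) + intercept; report amplification efficiency.

    Efficiency of 1.0 corresponds to perfect doubling of template per cycle.
    """
    slope, intercept = np.polyfit(log10_quantity, ct, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def quantify(ct, slope, intercept):
    """Invert the standard curve for an unknown sample's Ct value."""
    return 10.0 ** ((ct - intercept) / slope)
```

A perfectly efficient assay has a slope of about -3.32 cycles per decade of template, which is the benchmark used when validating extraction and assay modifications like those described above.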
De Rosa, Stephen C.; Martinson, Jeffrey A.; Plants, Jill; Brady, Kirsten E.; Gumbi, Pamela P.; Adams, Devin J.; Vojtech, Lucia; Galloway, Christine G.; Fialkow, Michael; Lentz, Gretchen; Gao, Dayong; Shu, Zhiquan; Nyanga, Billy; Izulla, Preston; Kimani, Joshua; Kimwaki, Steve; Bere, Alfred; Moodie, Zoe; Landay, Alan L.; Passmore, Jo-Ann S.; Kaul, Rupert; Novak, Richard M.; McElrath, M. Juliana; Hladik, Florian
2014-01-01
Background Functional analysis of mononuclear leukocytes in the female genital mucosa is essential for understanding the immunologic effects of HIV vaccines and microbicides at the site of HIV exposure. However, the best female genital tract sampling technique is unclear. Methods and Findings We enrolled women from four sites in Africa and the US to compare three genital leukocyte sampling methods: cervicovaginal lavages (CVL), endocervical cytobrushes, and ectocervical biopsies. Absolute yields of mononuclear leukocyte subpopulations were determined by flow cytometric bead-based cell counting. Of the non-invasive sampling types, two combined sequential cytobrushes yielded significantly more viable mononuclear leukocytes than a CVL (p<0.0001). In a subsequent comparison, two cytobrushes yielded as many leukocytes (∼10,000) as one biopsy, with macrophages/monocytes being more prominent in cytobrushes and T lymphocytes in biopsies. Sample yields were consistent between sites. In a subgroup analysis, we observed significant reproducibility between replicate same-day biopsies (r = 0.89, p = 0.0123). Visible red blood cells in cytobrushes increased leukocyte yields more than three-fold (p = 0.0078), but did not change their subpopulation profile, indicating that these leukocytes were still largely derived from the mucosa and not peripheral blood. We also confirmed that many CD4+ T cells in the female genital tract express the α4β7 integrin, an HIV envelope-binding mucosal homing receptor. Conclusions CVL sampling recovered the lowest number of viable mononuclear leukocytes. Two cervical cytobrushes yielded comparable total numbers of viable leukocytes to one biopsy, but cytobrushes and biopsies were biased toward macrophages and T lymphocytes, respectively. Our study also established the feasibility of obtaining consistent flow cytometric analyses of isolated genital cells from four study sites in the US and Africa. These data represent an important step
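Bead-based absolute cell counting, as used above for leukocyte yields, scales the acquired cell events by the recovered fraction of a known bead spike; a generic sketch (parameter names and numbers are illustrative):

```python
def absolute_count(cell_events, bead_events, beads_added, sample_volume_ul):
    """Cells per microliter from bead-based flow cytometric counting.

    The fraction of spiked beads actually acquired estimates the fraction
    of the sample analyzed, so cell events are scaled by its inverse.
    """
    cells_total = cell_events * (beads_added / bead_events)
    return cells_total / sample_volume_ul
```

For example, if half of a 10,000-bead spike is acquired alongside 5,000 cell events in a 100 µL sample, the estimate is 200 cells/µL.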
Duhaime, Melissa B; Deng, Li; Poulos, Bonnie T; Sullivan, Matthew B
2012-01-01
Metagenomics generates and tests hypotheses about dynamics and mechanistic drivers in wild populations, yet commonly suffers from insufficient (< 1 ng) starting genomic material for sequencing. Current solutions for amplifying sufficient DNA for metagenomics analyses include linear amplification for deep sequencing (LADS), which requires more DNA than is normally available, linker-amplified shotgun libraries (LASLs), which is prohibitively low throughput, and whole-genome amplification, which is significantly biased and thus non-quantitative. Here, we adapt the LASL approach to next generation sequencing by offering an alternate polymerase for challenging samples, developing a more efficient sizing step, integrating a ‘reconditioning PCR’ step to increase yield and minimize late-cycle PCR artefacts, and empirically documenting the quantitative capability of the optimized method with both laboratory isolate and wild community viral DNA. Our optimized linker amplification method requires as little as 1 pg of DNA and is the most precise and accurate available, with G + C content amplification biases less than 1.5-fold, even for complex samples as diverse as a wild virus community. While optimized here for 454 sequencing, this linker amplification method can be used to prepare metagenomics libraries for sequencing with next-generation platforms, including Illumina and Ion Torrent, the first of which we tested and present data for here. PMID:22713159
NASA Astrophysics Data System (ADS)
Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert
2015-06-01
Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match their current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
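The core sizing argument above, sample rate as a function of vehicle speed, can be illustrated with a simple dwell-time calculation (the footprint length and samples-per-pulse defaults below are our illustrative assumptions, not values from the study):

```python
def min_sample_rate(speed_mps, footprint_m=0.25, samples_per_pulse=20):
    """Minimum sample rate (Hz) to acquire `samples_per_pulse` readings
    while a tire footprint of length `footprint_m` crosses the sensor."""
    dwell_s = footprint_m / speed_mps    # time the tire loads the sensor
    return samples_per_pulse / dwell_s   # required rate scales linearly with speed
```

At 25 m/s (90 km/h) this gives 2 kHz; doubling the speed doubles the required rate, which is why a fixed very-high-frequency setting wastes samples at lower speeds.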
NASA Astrophysics Data System (ADS)
Mao, Zhiyi; Shan, Ruifeng; Wang, Jiajun; Cai, Wensheng; Shao, Xueguang
2014-07-01
Polyphenols in plant samples have been extensively studied because phenolic compounds are ubiquitous in plants and can be used as antioxidants in promoting human health. A method for rapid determination of three phenolic compounds (chlorogenic acid, scopoletin and rutin) in plant samples using near-infrared diffuse reflectance spectroscopy (NIRDRS) is studied in this work. Partial least squares (PLS) regression was used for building the calibration models, and the effects of spectral preprocessing and variable selection on the models are investigated for optimization of the models. The results show that, individually, spectral preprocessing and variable selection have little or no influence on the models, but the combination of the techniques can significantly improve the models. The combination of continuous wavelet transform (CWT) for removing the variant background, multiplicative scatter correction (MSC) for correcting the scattering effect and randomization test (RT) for selecting the informative variables was found to be the best way for building the optimal models. For validation of the models, the polyphenol contents in an independent sample set were predicted. The correlation coefficients between the predicted values and the contents determined by high performance liquid chromatography (HPLC) analysis are as high as 0.964, 0.948 and 0.934 for chlorogenic acid, scopoletin and rutin, respectively.
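Of the preprocessing steps combined above, multiplicative scatter correction is the most self-contained; a minimal sketch (the CWT background removal and the randomization test for variable selection are omitted):

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on a
    reference (by default, the mean spectrum) and remove offset and slope."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)   # s ~ slope*ref + offset
        corrected[i] = (s - offset) / slope
    return corrected
```

After correction, spectra differing only by additive and multiplicative scatter effects collapse onto the reference, so the PLS model sees chemical rather than scattering variation.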
Al-Ansari, Ahmed M; Saleem, Ammar; Kimpe, Linda E; Trudeau, Vance L; Blais, Jules M
2011-11-15
The purpose of this study was to develop an optimized method for the extraction and determination of 17α-ethinylestradiol (EE2) and estrone (E1) in whole fish tissues at ng/g levels. The optimized procedure for sample preparation includes extraction of tissue by accelerated solvent extraction (ASE-200), lipid removal by gel permeation chromatography (GPC), and a cleanup step by acetonitrile precipitation followed by a hexane wash. Analysis was performed by gas chromatography/mass spectrometry (GC/MS) in negative chemical ionization (NCI) mode after samples were derivatized with pentafluorobenzoyl chloride (PFBCl). The method was developed using high lipid content wild fish that were exposed to the tested analytes. The whole procedure recoveries ranged from 74.5 to 93.7% with relative standard deviation (RSD) of 2.3-6.2% for EE2 and 64.8 to 91.6% with RSD of 0.18-9.46% for E1. The method detection limits were 0.67 ng/g for EE2 and 0.68 ng/g for E1 dry weight. The method was applied to determine EE2 levels in male goldfish (Carassius auratus) after a 72 h dietary exposure. All samples contained EE2 averaging 1.7 ng/g (±0.29 standard deviation, n=5). This is the first optimized protocol for EE2 extraction from whole fish tissue at environmentally relevant concentrations. Due to high sensitivity and recovery, the developed method will improve our knowledge about the environmental fate and uptake of synthetic steroidal estrogens in fish populations. PMID:21982913
NASA Astrophysics Data System (ADS)
Metzger, Stefan; Burba, George; Burns, Sean P.; Blanken, Peter D.; Li, Jiahong; Luo, Hongyan; Zulueta, Rommel C.
2016-03-01
Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) are set to provide the ability of unbiased ecological inference across ecoclimatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analyzers are widely employed for eddy covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties and gas sampling systems, and requires correction. Here, we show that components of the gas sampling system can substantially contribute to such high-frequency attenuation, but their effects can be significantly reduced by careful system design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. For different models of rain caps, this frequency falls into ranges of 2.5-16.5 Hz for CO2 and 2.4-14.3 Hz for H2O; for different particulate filters, into ranges of 8.3-21.8 Hz for CO2 and 1.4-19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyzer cell. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor-capacitor theory, and NEON's final gas sampling system was developed on this basis.
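The resistor-capacitor analogy used to reconcile these tests implies, for a first-order low-pass response, a fixed relation between the -3 dB cutoff frequency and the 50 %-attenuation frequency reported above; a small sketch:

```python
import math

def attenuation(f, f_cutoff):
    """Gain of a first-order RC low-pass filter at frequency f
    (f_cutoff is the -3 dB point, gain = 1/sqrt(2))."""
    return 1.0 / math.sqrt(1.0 + (f / f_cutoff) ** 2)

def f_half(f_cutoff):
    """Frequency where the gain of a first-order filter drops to 0.5:
    solving 1/sqrt(1 + (f/fc)^2) = 0.5 gives f = sqrt(3)*fc."""
    return math.sqrt(3.0) * f_cutoff
```

So a reported 50 %-attenuation frequency of, say, 10 Hz corresponds to a -3 dB cutoff near 5.8 Hz under this first-order model.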
Angerer, Tina B; Mohammadi, Amir Saeid; Fletcher, John S
2016-06-01
Lipidomics has been an expanding field since researchers began to recognize the signaling functions of lipids and their involvement in disease. Time-of-flight secondary ion mass spectrometry is a valuable tool for studying the distribution of a wide range of lipids in multiple brain regions, but in order to make valuable scientific contributions, one has to be aware of the influence that sample treatment can have on the results. In this article, the authors discuss different sample treatment protocols for rodent brain sections focusing on signal from the hippocampus and surrounding areas. The authors compare frozen hydrated analysis to freeze drying, which is the standard in most research facilities, and reactive vapor exposure (trifluoroacetic acid and NH3). The results show that in order to preserve brain chemistry close to a native state, frozen hydrated analysis is the most suitable, but execution can be difficult. Freeze drying is prone to produce artifacts as cholesterol migrates to the surface, masking other signals. This effect can be partially reversed by exposing freeze dried sections to reactive vapor. When analyzing brain sections in negative ion mode, exposing those sections to NH3 vapor can re-establish the diversity in lipid signal found in frozen hydrated analyzed sections. This is accomplished by removing cholesterol and uncovering sulfatide signals, allowing more anatomical regions to be visualized. PMID:26856332
Vaz, Sharmila; Cordier, Reinie; Boyes, Mark; Parsons, Richard; Joosten, Annette; Ciccarelli, Marina; Falkmer, Marita; Falkmer, Torbjorn
2016-01-01
An important characteristic of a screening tool is its discriminant ability or the measure’s accuracy to distinguish between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of SDQ at scale, subscale and item-levels, with a view to identifying the items that have the most informant discrepancies; and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) Intraclass correlations between parent and teacher ratings of children’s mental health using the SDQ were fair at the individual child level; b) The SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) Three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio or likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted. PMID:26771673
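The positive and negative likelihood ratios referred to above follow directly from a 2x2 screening table; a generic sketch (the counts in the usage note are illustrative, not from the study):

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Positive and negative likelihood ratios from a 2x2 screening table.

    LR+ = sensitivity / (1 - specificity): how much a positive screen
          raises the odds of the condition.
    LR- = (1 - sensitivity) / specificity: how much a negative screen
          lowers them.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg
```

For example, 80 true positives, 10 false positives, 20 false negatives and 90 true negatives give LR+ = 8.0 and LR- ≈ 0.22; LR+ above about 10 and LR- below about 0.1 are conventional thresholds for strong clinical utility.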
Hu, Yuming; Chen, Shuo; Chen, Jitao; Liu, Guozhu; Chen, Bo; Yao, Shouzhuo
2012-10-01
A high-performance liquid chromatographic method coupled with electrospray mass spectrometry was developed for the simultaneous determination of dolasetron and its major metabolite, hydrodolasetron, in human plasma. A new sample pretreatment method, i.e., salt induced phase separation extraction (SIPSE), was proposed and compared with four other methods, i.e., albumin precipitation, liquid-liquid extraction, hydrophobic solvent-induced phase separation extraction and subzero-temperature induced phase separation extraction. Among these methods, SIPSE showed the highest extraction efficiency and the lowest matrix interferences. The extraction recoveries obtained from the SIPSE method were all more than 96% for dolasetron, hydrodolasetron and ondansetron (internal standard). The SIPSE method is also very fast and easy because protein precipitation, analyte extraction and sample cleanup are combined into one simple process by mixing acetonitrile with plasma and partitioning with 2 mol/L sodium carbonate aqueous solution. The correlation coefficients of the calibration curves were all more than 0.997, in the range of 7.9-4750.0 ng/mL and 4.8-2855.1 ng/mL for dolasetron and hydrodolasetron, respectively. The limits of quantification were 7.9 and 4.8 ng/mL for dolasetron and hydrodolasetron, respectively. The intra-day and inter-day repeatability were all less than 10%. The method was successfully applied to the pharmacokinetic study of dolasetron. PMID:22645289
Wen, Xin-Xin; Zong, Chun-Lin; Xu, Chao; Ma, Xiang-Yu; Wang, Fa-Qi; Feng, Ya-Fei; Yan, Ya-Bo; Lei, Wei
2015-01-01
Trabecular bone at different skeletal sites has different morphology. How to select an appropriate volume for the region of interest (ROI) so that it reflects the microarchitecture of trabecular bone at a given skeletal site is an open question. In this study, therefore, we examined the optimal ROI volumes within the vertebral body and femoral head, and whether the relationships between ROI volume and microarchitectural parameters are affected by trabecular bone morphology. Within the vertebral body and femoral head, cubic ROIs of different volumes (from (1 mm)3 to (20 mm)3) were compared with control groups (the whole volume of trabecular bone). Five microarchitectural parameters (BV/TV, Tb.N, Tb.Th, Tb.Sp, and BS/BV) were obtained. Nonlinear curve-fitting functions were used to explore the relationships between the microarchitectural parameters and ROI volume. ROI volume affected the microarchitectural parameters when it was smaller than (8 mm)3 within the vertebral body and smaller than (13 mm)3 within the femoral head. As the volume increased, the trends of BV/TV, Tb.N, and Tb.Sp differed between the two skeletal sites, as did the fitted curve functions. The relationships between ROI volume and microarchitectural parameters were thus affected by the different trabecular bone morphologies of the lumbar vertebral body and femoral head. When depicting the microarchitecture of human trabecular bone within the lumbar vertebral body and femoral head, the ROI volume should be larger than (8 mm)3 and (13 mm)3, respectively. PMID:26770381
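The plateau idea behind the (8 mm)3 and (13 mm)3 thresholds can be illustrated with an invented saturating model; the function form, plateau, and length scale below are assumptions for illustration, not the paper's fitted curves:

```python
import math

PLATEAU = 0.25   # assumed asymptotic BV/TV; invented for illustration
SCALE = 3.0      # assumed saturation length scale in mm; also invented

def bvtv(edge_mm):
    """Toy saturating model of BV/TV versus cubic ROI edge length (mm)."""
    return PLATEAU * (1.0 - math.exp(-edge_mm / SCALE))

def stable_edge(tol=0.02):
    """Smallest integer edge (mm) at which BV/TV is within tol of its plateau."""
    for edge in range(1, 21):            # ROIs from (1 mm)^3 to (20 mm)^3
        if abs(bvtv(edge) - PLATEAU) / PLATEAU < tol:
            return edge
    return None

print(stable_edge())  # 12 with these invented constants
```

Fitting such a curve per site and reading off where it flattens is one way to justify a minimum ROI volume.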
Optimization of a gas sampling system for measuring eddy-covariance fluxes of H2O and CO2
NASA Astrophysics Data System (ADS)
Metzger, S.; Burba, G.; Burns, S. P.; Blanken, P. D.; Li, J.; Luo, H.; Zulueta, R. C.
2015-10-01
Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere, standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) will enable unbiased ecological inference across eco-climatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analysers are widely employed for eddy-covariance applications. While these sensors represent a substantial improvement over their open- and closed-path predecessors, the remaining high-frequency attenuation varies with site properties and requires correction. Here, we show that the gas sampling system contributes substantially to high-frequency attenuation, which can be minimized by careful design. From laboratory tests we determined the frequency at which signal attenuation reaches 50% for individual parts of the gas sampling system. For different models of rain cap, this frequency falls in the ranges 2.5-16.5 Hz for CO2 and 2.4-14.3 Hz for H2O; for different particulate filters, 8.3-21.8 Hz for CO2 and 1.4-19.9 Hz for H2O. A short and thin stainless steel intake tube was found not to limit frequency response, with 50% attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective and reduced the occurrence of problematic relative humidity levels (RH > 60%) in the infrared gas analyser cell by 50%. No further improvement of the H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor-capacitor theory, and NEON's final gas sampling system was developed on this basis. The design consists of the stainless steel intake tube, a pleated mesh
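The resistor-capacitor view of the sampling line can be sketched with a first-order low-pass model, where 50% power attenuation occurs at the cutoff f_c = 1/(2*pi*tau). The 10 ms time constant below is a made-up value, not a NEON measurement:

```python
import math

def half_power_frequency(tau_s):
    """Cutoff of a first-order (RC) low-pass filter: power attenuation
    reaches 50% at f_c = 1 / (2*pi*tau)."""
    return 1.0 / (2.0 * math.pi * tau_s)

def power_transfer(f_hz, tau_s):
    """Fraction of spectral power passed at frequency f by the RC filter."""
    return 1.0 / (1.0 + (2.0 * math.pi * f_hz * tau_s) ** 2)

tau = 0.01  # hypothetical 10 ms time constant for one tube/filter section
fc = half_power_frequency(tau)
print(round(fc, 1))                       # ~15.9 Hz, inside the measured ranges
print(round(power_transfer(fc, tau), 2))  # 0.5 by construction
```

Measuring the 50%-attenuation frequency of each component and mapping it back to an effective time constant is the kind of reconciliation the abstract describes.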
Puscasu, Silvia; Aubin, Simon; Cloutier, Yves; Sarazin, Philippe; Tra, Huu V; Gagné, Sébastien
2015-04-01
4,4-methylene diphenyl diisocyanate (MDI) aerosol exposure evaluation in spray foam insulation application is known to be challenging because spray foam application involves a fast-curing process. Available techniques are either not user-friendly, inaccurate, or not validated for this application. To address these issues, a new approach using a CIP10M was developed to appropriately collect MDI aerosol in spray foam insulation while being suitable for personal sampling. The CIP10M is a commercially available personal aerosol sampler that has been validated for the collection of microbial spores into a liquid medium. Tributylphosphate with 1-(2-methoxyphenyl)piperazine (MOPIP) was introduced into the CIP10M to collect and stabilize the MDI aerosols. The limit of detection and limit of quantification of the method were 0.007 and 0.024 μg ml(-1), respectively. The dynamic range was from 0.024 to 0.787 μg ml(-1) (with R(2) ≥ 0.990), which corresponds to concentrations in the air from 0.04 to 1.3 µg m(-3), assuming 60 min of sampling at 10 l min(-1). The intraday and interday analytical precisions were <2% for all of the concentration levels tested, and the accuracy was within an appropriate range of 98 ± 1%. No matrix effect was observed, and a total recovery of 99% was obtained. Parallel sampling was performed in a real MDI foam spraying environment with a CIP10M and impingers containing toluene/MOPIP (reference method). The results obtained show that the CIP10M provides levels of MDI monomer in the same range as the impingers, and higher levels of MDI oligomers. The negative bias observed for MDI monomer was between 2 and 26%, whereas the positive bias observed for MDI oligomers was between 76 and 113%, with both biases calculated at a confidence level of 95%. The CIP10M seems to be a promising approach for MDI aerosol exposure evaluation in spray foam applications. PMID:25452291
Lopes Dos Santos, Walter Nei; Macedo, Samuel Marques; Teixeira da Rocha, Sofia Negreiros; Souza de Jesus, Caio Niela; Cavalcante, Dannuza Dias; Hatje, Vanessa
2014-08-01
This work proposes a procedure for the determination of total selenium in shellfish after digestion of the samples in a digestion block with a cold-finger system, and detection by atomic fluorescence spectrometry coupled with hydride generation (HG AFS). The conditions for HG, namely the volume of the 10% (m/v) KBr prereductant (1.0 and 2.0 mL) and the concentration of hydrochloric acid (3.0 and 6.0 mol L(-1)), were evaluated. The best results were obtained using 3 mL of HCl (6 mol L(-1)) and 1 mL of KBr 10% (m/v), followed by 30 min of prereduction, for a 1 mL volume of the digested sample. Precision and accuracy were assessed by analysis of the Certified Reference Material NIST 1566b. Under the optimized conditions, the detection and quantification limits were 6.06 and 21.21 μg kg(-1), respectively. The developed method was applied to samples of shellfish (oysters, clams, and mussels) collected at Todos os Santos Bay, Bahia, Brazil. Selenium concentrations ranged from 0.23 ± 0.02 to 3.70 ± 0.27 mg kg(-1), for Mytella guyanensis and Anomalocardia brasiliana, respectively. The developed method proved to be accurate, precise, cheap, and fast, and could be used for monitoring Se in shellfish samples. PMID:24771464
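Detection and quantification limits of this kind are commonly derived from the blank noise and the calibration slope via the 3.3σ/S and 10σ/S convention; a minimal sketch, noting that the paper's exact formula may differ and the blank SD and slope below are invented:

```python
def detection_limits(blank_sd, slope):
    """LOD and LOQ from the 3.3*sigma/S and 10*sigma/S convention
    (IUPAC-style; the abstract does not state which formula was used)."""
    lod = 3.3 * blank_sd / slope
    loq = 10.0 * blank_sd / slope
    return lod, loq

# Hypothetical blank standard deviation and calibration slope (signal per ug/kg)
lod, loq = detection_limits(blank_sd=0.02, slope=0.01)
print(round(lod, 2), round(loq, 2))  # 6.6 20.0
```

Under this convention the LOQ is always about three times the LOD, which is roughly the ratio reported here (21.21/6.06).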
A classification scheme for risk assessment methods.
Stamp, Jason Edwin; Campbell, Philip LaRoche
2004-08-01
This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects--level of detail and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses; those arrangements shift gradually as one moves through the table, with each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation given. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method', though at times we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In Section 2 we provide context for this report
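A two-dimensional classification like this maps naturally onto a lookup keyed by (level, approach). The level and approach labels below are placeholders for illustration, not the report's actual row, column, or type names (those are in its Table 2):

```python
# Placeholder labels; the report's actual axis values and cell names differ.
LEVELS = ("abstract", "mid-level", "detailed")
APPROACHES = ("temporal", "functional", "comparative")

matrix = {(lvl, app): f"{lvl}/{app} method"
          for lvl in LEVELS for app in APPROACHES}

def choose_method(level, approach):
    """Look up the method type for a (detail level, approach) pair."""
    return matrix[(level, approach)]

print(len(matrix))  # 9 cells, matching the 3x3 matrix
print(choose_method("detailed", "functional"))
```

The point of the structure is exactly this kind of lookup: given a situation's required detail level and preferred approach, the cell names the method type best suited to it.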
NASA Astrophysics Data System (ADS)
Maragou, Niki C.; Thomaidis, Nikolaos S.; Koupparis, Michael A.
2011-10-01
A systematic and detailed optimization strategy for the development of atmospheric pressure ionization (API) LC-MS/MS methods for the determination of Irgarol 1051, Diuron, and their degradation products (M1, DCPMU, DCPU, and DCA) in water, sediment, and mussel is described. Experimental design was applied for the optimization of the ion source parameters. Comparison of ESI and APCI was performed in positive- and negative-ion mode, and the effect of the mobile phase on ionization was studied for both techniques. Special attention was paid to the ionization of DCA, which presents particular difficulty in API techniques. Satisfactory ionization of this small molecule is achieved only with ESI in positive-ion mode using acetonitrile in the mobile phase; the instrumental detection limit is 0.11 ng/mL. Signal suppression was qualitatively estimated by using purified and non-purified samples. The sample preparation for sediments and mussels is direct and simple, comprising only solvent extraction. Mean recoveries ranged from 71% to 110%, and the corresponding RSDs ranged between 4.1% and 14%. The method limits of detection ranged between 0.6 and 3.5 ng/g for sediment and mussel and from 1.3 to 1.8 ng/L for sea water. The method was applied to sea water, marine sediment, and mussels, which were obtained from marinas in Attiki, Greece. Ion ratio confirmation was used for the identification of the compounds.
Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter
2016-09-01
A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to develop standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins and served as quality criteria. A five-way analysis of variance (ANOVA) was applied on the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract MALDI imaging experiments were planned according to fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as an input for the ANOVA. The parameters with the highest impact on the quality were deduced and SOPs recommended. PMID:27485623
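A fractional factorial layout like the one described (eight runs per DoE set, several factors each) can be generated mechanically. This sketch shows a generic 2^(4-1) design with the fourth factor aliased onto a three-factor interaction; it is not the paper's exact factor assignment:

```python
from itertools import product

def fractional_factorial():
    """2^(4-1) fractional factorial design: full factorial in factors A, B, C
    (8 runs), with the fourth factor aliased as D = A*B*C (I = ABCD)."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        runs.append((a, b, c, a * b * c))
    return runs

design = fractional_factorial()
print(len(design))  # 8 runs cover 4 factors instead of the 16 a full design needs
# Every factor column is balanced: equal numbers of -1 and +1 settings
print(all(sum(run[i] for run in design) == 0 for i in range(4)))  # True
```

Balanced columns are what let an ANOVA over the quality scores attribute variance to individual parameters, as done in the workflow above.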
SU-C-207-03: Optimization of a Collimator-Based Sparse Sampling Technique for Low-Dose Cone-Beam CT
Lee, T; Cho, S; Kim, I; Han, B
2015-06-15
Purpose: In computed tomography (CT) imaging, the radiation dose delivered to the patient is a major concern. Sparse-view CT takes projections at sparser view angles and provides a viable option for reducing dose. However, the fast power switching of an X-ray tube needed for sparse-view sampling can be challenging in many CT systems. We earlier proposed a many-view under-sampling (MVUS) technique as an alternative to sparse-view CT. In this study, we investigated the effects of collimator parameters on image quality and aimed to optimize the collimator design. Methods: We used a bench-top circular cone-beam CT system together with a CatPhan600 phantom and took 1440 projections in a single rotation. A multi-slit collimator made of tungsten was mounted on the X-ray source for beam blocking. For image reconstruction, we used a total-variation minimization (TV) algorithm and modified the backprojection step so that only the data measured through the collimator slits are used in the computation. The number of slits and the reciprocation frequency were varied, and their effects on image quality were investigated. We also analyzed the sampling efficiency, i.e., the sampling density and data incoherence, in each case. We tested three slit sets of 6, 12 and 18 slits, each at reciprocation frequencies of 10, 30, 50 and 70 Hz/ro. Results: The image-quality results were consistent with the sampling-efficiency analysis, and the optimum condition was found to be 12 slits at 30 Hz/ro. As image-quality indices, we used the CNR and the detectability. Conclusion: We conducted an experiment with a moving multi-slit collimator to realize sparse-sampled cone-beam CT. The effects of collimator parameters on image quality were systematically investigated, and the optimum condition was identified.
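The total-variation objective that such reconstructions minimize can be illustrated on a toy image. This is a generic anisotropic TV computation, not the authors' reconstruction algorithm:

```python
def total_variation(img):
    """Anisotropic total variation of a 2D image (list of lists): sum of
    absolute horizontal and vertical finite differences."""
    rows, cols = len(img), len(img[0])
    tv = 0.0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                tv += abs(img[r][c + 1] - img[r][c])
            if r + 1 < rows:
                tv += abs(img[r + 1][c] - img[r][c])
    return tv

flat = [[5, 5], [5, 5]]   # piecewise-constant image: TV = 0
edge = [[0, 1], [0, 1]]   # one vertical edge: TV = 2
print(total_variation(flat), total_variation(edge))  # 0.0 2.0
```

TV minimization favors piecewise-constant images, which is why it tolerates the incomplete data left after collimator blocking: among all images consistent with the measured rays, it picks one with few spurious edges.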
Noubary, Farzad; Coonahan, Erin; Schoeplein, Ryan; Baden, Rachel; Curry, Michael; Afdhal, Nezam; Kumar, Shailendra; Pollock, Nira R.
2015-01-01
Background A paper-based, multiplexed, microfluidic assay has been developed to visually measure alanine aminotransferase (ALT) in a fingerstick sample, generating rapid, semi-quantitative results. Prior studies indicated a need for improved accuracy; the device was subsequently optimized using an FDA-approved automated platform (Abaxis Piccolo Xpress) as a comparator. Here, we evaluated the performance of the optimized paper test for measurement of ALT in fingerstick blood and serum, as compared to Abaxis and Roche/Hitachi platforms. To evaluate feasibility of remote results interpretation, we also compared reading cell phone camera images of completed tests to reading the device in real time. Methods 96 ambulatory patients with varied baseline ALT concentration underwent fingerstick testing using the paper device; cell phone images of completed devices were taken and texted to a blinded off-site reader. Venipuncture serum was obtained from 93/96 participants for routine clinical testing (Roche/Hitachi); subsequently, 88/93 serum samples were captured and applied to paper and Abaxis platforms. Paper test and reference standard results were compared by Bland-Altman analysis. Findings For serum, there was excellent agreement between paper test and Abaxis results, with negligible bias (+4.5 U/L). Abaxis results were systematically 8.6% lower than Roche/Hitachi results. ALT values in fingerstick samples tested on paper were systematically lower than values in paired serum tested on paper (bias -23.6 U/L) or Abaxis (bias -18.4 U/L); a correction factor was developed for the paper device to match fingerstick blood to serum. Visual reads of cell phone images closely matched reads made in real time (bias +5.5 U/L). Conclusions The paper ALT test is highly accurate for serum testing, matching the reference method against which it was optimized better than the reference methods matched each other. A systematic difference exists between ALT values in fingerstick and paired
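Bland-Altman comparison as used above reduces to a bias (mean of paired differences) plus 95% limits of agreement. A minimal sketch; the ALT readings below are invented for illustration, not study data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement between two
    paired measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ALT readings (U/L): paper device vs. reference analyzer
paper = [32, 55, 120, 78, 41]
ref = [30, 50, 118, 70, 38]
bias, (lo, hi) = bland_altman(paper, ref)
print(round(bias, 1))  # 4.0
```

A systematic bias estimated this way is also what justifies the correction factor the authors applied to match fingerstick blood to serum.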
Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Muzic, Raymond F. Jr. (E-mail: raymond.muzic@case.edu)
2014-10-15
Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2* = 1/T2*, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2* of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2* of human skull was measured as 0.2-0.3 ms(-1) depending on the specific region, which is more than ten times greater than the R2* of soft tissue. The water fraction in human skull was measured to be 60%-80%, which is significantly less than the >90% water fraction in
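Estimating R2* from multi-echo UTE images amounts to a log-linear fit of signal versus echo time under a mono-exponential decay model. The echo times and signal values below are synthetic, chosen to mimic the reported skull-like R2*:

```python
import math

def fit_r2star(echo_times_ms, signals):
    """Estimate R2* (1/ms) from mono-exponential decay S(TE) = S0*exp(-R2*·TE)
    via a least-squares line fit to log(S) versus TE."""
    n = len(echo_times_ms)
    xs, ys = echo_times_ms, [math.log(s) for s in signals]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # R2* is the negative slope of log-signal vs. echo time

# Synthetic skull-like decay with R2* = 0.25 /ms (within the reported range)
tes = [0.05, 1.0, 2.0, 4.0]
sig = [100.0 * math.exp(-0.25 * te) for te in tes]
print(round(fit_r2star(tes, sig), 3))  # 0.25
```

With R2* an order of magnitude above soft tissue, the skull signal vanishes within a few milliseconds, which is why only ultrashort first echoes capture cortical bone at all.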
Willcock, J J; Lumsdaine, A; Quinlan, D J
2008-08-19
Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both improving the efficiency of programs and making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
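The core idea, a result table plus a set of currently active calls, can be sketched in a few lines. This is a simplification, not the paper's continuation-passing implementation: real tabled execution iterates answers to a fixpoint, whereas this sketch cuts re-entrant calls with a default answer:

```python
def tabled(fn):
    """Minimal tabling sketch: memoize results and track active calls so a
    re-entrant call with the same arguments fails finitely instead of looping."""
    table, active = {}, set()
    def wrapper(*args):
        if args in table:
            return table[args]
        if args in active:        # same call already in progress: cut the cycle
            return False
        active.add(args)
        try:
            result = fn(*args)
        finally:
            active.discard(args)
        table[args] = result
        return result
    return wrapper

# Reachability in a cyclic graph: the untabled recursion would loop forever.
EDGES = {"a": ["b"], "b": ["a", "c"], "c": []}

@tabled
def reaches(x, y):
    return any(n == y or reaches(n, y) for n in EDGES[x])

print(reaches("a", "c"), reaches("c", "a"))  # True False
```

Transitive closure over a cyclic relation is exactly the classic case where tabling turns a seemingly infinite recursion into a terminating computation.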
Indirect visual cryptography scheme
NASA Astrophysics Data System (ADS)
Yang, Xiubo; Li, Tuo; Shi, Yishi
2015-10-01
Visual cryptography (VC) is a cryptographic scheme for images. In encryption, an image carrying a message is encoded into N sub-images, and any K of them can decode the message under special rules (N >= 2, 2 <= K <= N). When any K of the N sub-images are printed on transparencies and stacked exactly, the message of the original image is decrypted by the human visual system, but any K-1 of them yield no information about it. This cryptographic scheme can decode concealed images without any cryptographic computation, and it has high security. However, the scheme lacks concealment, because the sub-images have obvious visual features. In this paper, we introduce the indirect visual cryptography scheme (IVCS), which encodes the sub-images into pure phase images without visible structure, based on the encoding of visual cryptography. The pure phase images are the final ciphertexts. The indirect visual cryptography scheme not only inherits the merits of visual cryptography, but also adds indirection, concealment, and security. Meanwhile, accurate alignment is no longer required, which gives the scheme strong anti-interference capacity and robustness. The decryption system can be highly integrated and conveniently operated, and its decryption process is dynamic and fast, all of which leads to good potential in practice.
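The underlying (K, N) idea can be demonstrated with the classic (2,2) construction on a one-dimensional binary image, using two-subpixel expansion; physically stacking transparencies acts as a per-subpixel OR. This is the standard textbook scheme, not the IVCS phase encoding proposed in the paper:

```python
import random

def make_shares(secret_bits, rng=random.Random(42)):
    """(2,2) visual cryptography with 2-subpixel expansion. Each share alone
    is uniformly random; stacking (OR) turns black secret pixels fully black."""
    share1, share2 = [], []
    for bit in secret_bits:
        pattern = rng.choice([(0, 1), (1, 0)])
        share1.append(pattern)
        # white pixel: identical patterns; black pixel: complementary patterns
        share2.append(pattern if bit == 0 else (1 - pattern[0], 1 - pattern[1]))
    return share1, share2

def stack(s1, s2):
    """Stacking transparencies acts as a per-subpixel OR."""
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(s1, s2)]

secret = [1, 0, 1, 1, 0]          # 1 = black, 0 = white
s1, s2 = make_shares(secret)
stacked = stack(s1, s2)
# Black secret pixels stack to fully black (1,1); white ones keep a white subpixel
print([sub == (1, 1) for sub in stacked] == [b == 1 for b in secret])  # True
```

Because each share's patterns are chosen uniformly at random, a single share reveals nothing; only the stacked pair shows the contrast that the eye reads as the message.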
Barreto, R P; Albuquerque, F C; Netto, Annibal D Pereira
2007-09-01
A method is described for the determination of nitrated polycyclic aromatic hydrocarbons (NPAHs) in diesel soot by high-performance liquid chromatography-mass spectrometry with atmospheric pressure chemical ionization (APCI) and ion-trap detection, following ultrasonic extraction. Emphasis was placed on 1-nitropyrene, the predominant NPAH in diesel soot. The vaporization and drying temperatures of the APCI interface, the electronic parameters of the MS detector, and the analytical conditions in reversed-phase HPLC were optimized. The fragmentation patterns of representative NPAHs were evaluated by single and multiple fragmentation steps, and negative ionization gave the largest signals. The transition (247-->217) was employed for quantitative analysis of 1-nitropyrene. Calibration curves were linear between 1 and 15 microgL(-1), with correlation coefficients better than 0.999. A typical detection limit (DL) of 0.2 microgL(-1) was obtained. Samples of diesel soot and of the reference material (SRM-2975, NIST, USA) were extracted with methylene chloride. Recoveries were estimated by analysis of SRM 2975 and were between 82 and 105%. The DL for 1-nitropyrene was better than 1.5 mg kg(-1), but the inclusion of an evaporation step in the sample processing procedure lowered the DL. The application of the method to diesel soot samples from bench motors showed levels
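The linearity criterion (correlation coefficients better than 0.999) comes from an ordinary least-squares fit of signal versus concentration. A minimal sketch; the standard concentrations and signals below are invented:

```python
import math

def calibration_fit(conc, signal):
    """Least-squares calibration line and Pearson correlation coefficient."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in signal)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)
    return slope, intercept, r

# Hypothetical 1-15 ug/L standards with near-linear responses
conc = [1, 5, 10, 15]
signal = [10.2, 50.1, 99.8, 150.3]
slope, intercept, r = calibration_fit(conc, signal)
print(r > 0.999)  # True: meets the acceptance criterion used in the abstract
```

The fitted slope also feeds directly into the detection-limit estimate (noise divided by sensitivity).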
Verant, Michelle; Bohuski, Elizabeth A.; Lorch, Jeffrey M.; Blehert, David
2016-01-01
The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America have prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples, and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from v