An Optimization-Based Sampling Scheme for Phylogenetic Trees
NASA Astrophysics Data System (ADS)
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for some important special cases. We demonstrate the efficiency and versatility of the method in an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model.
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, and combinations of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which can define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The
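The MMSD criterion described above lends itself to a compact sketch. Below is a minimal, illustrative implementation of spatial simulated annealing that minimizes the mean shortest distance on a unit square; the move size, cooling law, and step counts are assumptions for illustration, not the MSANOS settings.

```python
import math
import random

def mmsd(design, eval_points):
    """Mean of shortest distances from evaluation points to the sampling design."""
    return sum(min(math.dist(p, s) for s in design) for p in eval_points) / len(eval_points)

def anneal_mmsd(n_samples, eval_points, steps=2000, t0=1.0, cooling=0.995, seed=0):
    """Spatial simulated annealing: perturb one sampling point at a time,
    accept worse designs with Metropolis probability, and cool geometrically."""
    rng = random.Random(seed)
    design = [(rng.random(), rng.random()) for _ in range(n_samples)]
    best, cost = list(design), mmsd(design, eval_points)
    best_cost, t = cost, t0
    for _ in range(steps):
        i = rng.randrange(n_samples)
        old = design[i]
        # jitter the point, clipped to the field boundary [0, 1] x [0, 1]
        design[i] = (min(1.0, max(0.0, old[0] + rng.gauss(0, 0.05))),
                     min(1.0, max(0.0, old[1] + rng.gauss(0, 0.05))))
        new_cost = mmsd(design, eval_points)
        if new_cost < cost or rng.random() < math.exp(-(new_cost - cost) / t):
            cost = new_cost
            if cost < best_cost:
                best_cost, best = cost, list(design)
        else:
            design[i] = old
        t *= cooling
    return best, best_cost
```

A spread-out five-point design found this way should beat a fully clustered design on the same evaluation grid.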
Sampling scheme optimization for diffuse optical tomography based on data and image space rankings
NASA Astrophysics Data System (ADS)
Sabir, Sohail; Kim, Changhwan; Cho, Sanghoon; Heo, Duchang; Kim, Kee Hyun; Ye, Jong Chul; Cho, Seungryong
2016-10-01
We present a methodology for the optimization of sampling schemes in diffuse optical tomography (DOT). The proposed method exploits the singular value decomposition (SVD) of the sensitivity matrix, or weight matrix, in DOT. Two mathematical metrics are introduced to assess and determine the optimum source-detector measurement configuration in terms of data correlation and image space resolution. The key idea of the work is to weight each data measurement (row of the sensitivity matrix) and each unknown image basis (column of the sensitivity matrix) according to its contribution to the rank of the sensitivity matrix. The proposed metrics offer a perspective on the data sampling and provide an efficient way of optimizing sampling schemes in DOT. We evaluated various acquisition geometries often used in DOT by use of the proposed metrics. By iteratively selecting an optimal sparse set of data measurements, we showed that one can design a DOT scanning protocol that provides essentially the same image quality with much-reduced sampling.
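As a sketch of the rank-based weighting idea, one can score each row of the sensitivity matrix by its leverage on the top singular subspace and keep the highest-scoring measurements. This is a generic SVD leverage-score construction offered as an illustration, not the paper's exact metrics; the names `J`, the subspace size `k`, and the rank tolerance are assumptions.

```python
import numpy as np

def row_leverage_scores(J, k=None):
    """Score each measurement (row of the sensitivity matrix J) by its leverage
    on the top-k singular subspace; high-leverage rows contribute most to the
    numerical rank of J."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    if k is None:
        k = int(np.sum(s > s[0] * 1e-10))  # numerical rank of J
    return np.sum(U[:, :k] ** 2, axis=1)

def select_measurements(J, m):
    """Sparse sampling sketch: keep the m highest-leverage rows of J."""
    scores = row_leverage_scores(J)
    return np.argsort(scores)[::-1][:m]
```

On a toy matrix with three informative measurements and one nearly null one, the selection keeps the informative rows and discards the uninformative one.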
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index, and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. The two models were then compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape relationships accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
NASA Astrophysics Data System (ADS)
Yan, Hongyong; Yang, Lei; Li, Xiang-Yang
2016-12-01
High-order staggered-grid finite-difference (SFD) schemes have been universally used to improve the accuracy of wave equation modeling. However, the high-order SFD coefficients on spatial derivatives are usually determined by the Taylor-series expansion (TE) method, which yields high accuracy only at small wavenumbers for wave equation modeling. Some conventional optimization methods can achieve high accuracy at large wavenumbers, but they can hardly guarantee small numerical dispersion error at small wavenumbers. In this paper, we develop new optimal explicit SFD (ESFD) and implicit SFD (ISFD) schemes for wave equation modeling. We first derive the optimal ESFD and ISFD coefficients for the first-order spatial derivatives by applying a combination of the TE and a sampling approximation to the dispersion relation, and then analyze their numerical accuracy. Finally, we perform elastic wave modeling with the ESFD and ISFD schemes based on the TE method and the optimal method, respectively. When an appropriate number of and interval between the sampling points are chosen, these optimal schemes have extremely high accuracy at small wavenumbers and can also guarantee small numerical dispersion error at large wavenumbers. Numerical accuracy analyses and modeling results demonstrate that the optimal ESFD and ISFD schemes can efficiently suppress numerical dispersion and significantly improve the modeling accuracy compared to the TE-based ESFD and ISFD schemes.
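The sampling-approximation idea, fitting the FD coefficients to the exact dispersion relation at sampled wavenumbers rather than matching Taylor terms at kh → 0, can be sketched as a least-squares fit. The wavenumber band, sample count, and pure least-squares formulation below are assumptions for illustration, not the paper's exact TE-plus-sampling construction.

```python
import numpy as np

def sfd_coeffs_lstsq(M, kh_max=2.5, n_samples=200):
    """Staggered-grid FD coefficients for the first derivative, fitted by least
    squares over sampled wavenumbers. The explicit scheme approximates
    kh ≈ sum_{m=1..M} 2*c_m*sin((m - 1/2)*kh)."""
    kh = np.linspace(1e-3, kh_max, n_samples)
    A = np.stack([2.0 * np.sin((m - 0.5) * kh) for m in range(1, M + 1)], axis=1)
    c, *_ = np.linalg.lstsq(A, kh, rcond=None)
    return c

def dispersion_error(c, kh):
    """Difference between the scheme's numerical wavenumber and the exact kh."""
    approx = sum(2.0 * c[m] * np.sin((m + 0.5) * kh) for m in range(len(c)))
    return approx - kh
```

Fitting over a band keeps the dispersion error small even toward large kh, which is exactly the trade-off the abstract contrasts with pure Taylor matching.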
Optimal probabilistic dense coding schemes
NASA Astrophysics Data System (ADS)
Kögler, Roger A.; Neves, Leonardo
2017-04-01
Dense coding with non-maximally entangled states has been investigated in many different scenarios. We revisit this problem for protocols adopting the standard encoding scheme. In this case, the set of possible classical messages cannot be perfectly distinguished due to the non-orthogonality of the quantum states carrying them. So far, the decoding process has been approached in two ways: (i) the message is always inferred, but with an associated (minimum) error; (ii) the message is inferred without error, but only sometimes; in case of failure, nothing else is done. Here, we generalize these approaches and propose novel optimal probabilistic decoding schemes. The first uses quantum-state separation to increase the distinguishability of the messages with an optimal success probability. This scheme is shown to include (i) and (ii) as special cases and to interpolate continuously between them, which enables the decoder to trade off between the level of confidence desired to identify the received messages and the success probability of doing so. The second scheme, called multistage decoding, applies only to qudits (d-level quantum systems with d > 2) and consists of further attempts at state identification in case of failure in the first one. We show that this scheme is advantageous over (ii), as it increases the mutual information between the sender and receiver.
Interpolation-Free Scanning And Sampling Scheme For Tomographic Reconstructions
NASA Astrophysics Data System (ADS)
Donohue, K. D.; Saniie, J.
1987-01-01
In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation.
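The angle-selection idea can be sketched directly: projection angles whose tangents are ratios of small coprime integers align ray paths with Cartesian grid points, so no interpolation is needed. The enumeration below and the per-angle sampling-interval factor 1/sqrt(p^2 + q^2) are illustrative assumptions, not the paper's derivation.

```python
import math

def interpolation_free_angles(max_den=4):
    """Projection angles with rational tangents tan(theta) = p/q (p, q coprime),
    so rays pass exactly through Cartesian grid points. Each angle is paired
    with its detector sampling interval relative to the grid spacing, taken
    here (as a sketch) to shrink by 1/sqrt(p^2 + q^2)."""
    angles = []
    for q in range(1, max_den + 1):
        for p in range(0, max_den + 1):
            if math.gcd(p, q) == 1:
                theta = math.atan2(p, q)
                rate = 1.0 / math.hypot(p, q)  # sampling interval / grid spacing
                angles.append((theta, rate))
    angles.append((math.pi / 2, 1.0))  # the vertical projection
    return sorted(set(angles))
```

Small denominators keep the per-angle sampling rates close to the grid spacing, which is the computational-load consideration the abstract raises.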
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
NASA Technical Reports Server (NTRS)
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping and a 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
An optimized spectral difference scheme for CAA problems
NASA Astrophysics Data System (ADS)
Gao, Junhui; Yang, Zhigang; Li, Xiaodong
2012-05-01
In the implementation of the spectral difference (SD) method, the conserved variables at the flux points are calculated from the solution points using extrapolation or interpolation schemes. The errors incurred in extrapolation and interpolation can result in instability. On the other hand, the difference between the left and right conserved variables at the edge interface introduces dissipation into the SD method when a Riemann solver is applied to compute the flux at the element interface. In this paper, an optimization of the extrapolation and interpolation schemes for the fourth-order SD method on quadrilateral elements is carried out in wavenumber space by minimizing their dispersion error over a selected band of wavenumbers. The optimized coefficients of the extrapolation and interpolation are presented, and the dispersion errors of the original and optimized schemes are plotted and compared. An improvement of the dispersion error over the resolvable wavenumber range of the SD method is obtained. The stability of the optimized fourth-order SD scheme is analyzed. It is found that the stability of the 4th-order scheme with Chebyshev-Gauss-Lobatto flux points, which is originally weakly unstable, has been improved through the optimization. The weak instability is eliminated completely if an additional second-order filter is applied on selected flux points. One- and two-dimensional linear wave propagation analyses are carried out for the optimized scheme. It is found that in the resolvable wavenumber range the new SD scheme is less dispersive and less dissipative than the original scheme, and the new scheme is less anisotropic for 2D wave propagation. The optimized SD solver is validated with four computational aeroacoustics (CAA) workshop benchmark problems. The numerical results with the optimized schemes agree much better with the analytical data than those with the original schemes.
Optimal Symmetric Ternary Quantum Encryption Schemes
NASA Astrophysics Data System (ADS)
Wang, Yu-qi; She, Kun; Huang, Ru-fen; Ouyang, Zhong
2016-11-01
In this paper, we present two definitions: the orthogonality and the orthogonal rate of an encryption operator, and we provide a verification process for the former. Then, four improved ternary quantum encryption schemes are constructed. Compared with Scheme 1 (see Section 2.3), these four schemes demonstrate significant improvements in terms of calculation and execution efficiency. In particular, with the orthogonal rate ε as a security parameter, Scheme 3 (see Section 4.1) shows the highest level of security among them. Through custom interpolation functions, the ternary secret key source, composed of the digits 0, 1, and 2, is constructed. Finally, we discuss the security of both the ternary encryption operator and the secret key source; both show a high level of security and high execution efficiency.
Effects of sparse sampling schemes on image quality in low-dose CT
Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena
2013-11-15
Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT), which can potentially reduce the health risk related to radiation dose. In particular, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on image quality. Methods: Data properties of several sampling schemes are analyzed with respect to CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes and simulated these schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling was realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases showed promising results. These sampling schemes produced images with image quality similar to that of the reference image, and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and they suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic
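One of the two data-property measures, incoherence, can be illustrated with the standard mutual-coherence metric from compressive sensing; this generic definition is a stand-in for the authors' exact indicator, and the toy matrices and the sparse-view row selector below are assumptions for illustration.

```python
import numpy as np

def mutual_coherence(A):
    """Largest normalized inner product between distinct columns of the system
    matrix A; lower values indicate more incoherent (CS-friendlier) sampling."""
    G = A / np.linalg.norm(A, axis=0, keepdims=True)
    M = np.abs(G.T @ G)
    np.fill_diagonal(M, 0.0)   # ignore self-correlations
    return float(M.max())

def sparse_view_rows(n_views, keep_every):
    """Row (view) indices for a sparse-view scheme that keeps every
    keep_every-th projection view of a fully sampled scan."""
    return np.arange(0, n_views, keep_every)
```

Comparing this metric across candidate schemes (sparse-view, MVUS-style, etc.) is the kind of quantitative ranking the abstract advocates before committing to a scan protocol.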
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more and more multifaceted, integrated, and complex, the traditional single-objective optimization approach to optimal design is becoming less and less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set independent of the problem instance through a multiobjective scheme. A further objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics to increase the likelihood of reaching a globally optimal solution. Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as the low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, often conflicting, objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
A continuous sampling scheme for edge illumination x-ray phase contrast imaging
NASA Astrophysics Data System (ADS)
Hagen, C. K.; Coan, P.; Bravin, A.; Olivo, A.; Diemoz, P. C.
2015-08-01
We discuss an alternative acquisition scheme for edge illumination (EI) x-ray phase contrast imaging based on a continuous scan of the object and compare its performance to that of a previously used scheme, which involved scanning the object in discrete steps rather than continuously. By simulating signals for both continuous and discrete methods under realistic experimental conditions, the effect of the spatial sampling rate is analysed with respect to metrics such as image contrast and accuracy of the retrieved phase shift. Experimental results confirm the theoretical predictions. Despite being limited to a specific example, the results indicate that continuous schemes present advantageous features compared to discrete ones. Not only can they be used to speed up the acquisition but they also prove superior in terms of accurate phase retrieval. The theory and experimental results provided in this study will guide the design of future EI experiments through the implementation of optimized acquisition schemes and sampling rates.
Global search acceleration in the nested optimization scheme
NASA Astrophysics Data System (ADS)
Grishagin, Vladimir A.; Israfilov, Ruslan A.
2016-06-01
A multidimensional unconstrained global optimization problem with an objective function under a Lipschitz condition is considered. To solve this problem, the dimensionality reduction approach based on the nested optimization scheme is used. This scheme reduces the initial multidimensional problem to a family of one-dimensional subproblems that are also Lipschitzian, and thus allows applying univariate methods to carry out the multidimensional optimization. For two well-known one-dimensional Lipschitz optimization methods, modifications that accelerate the search process when the objective function is continuously differentiable in a vicinity of the global minimum are considered and compared. Results of computational experiments on a conventional test class of multiextremal functions confirm the efficiency of the modified methods.
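The nested scheme can be sketched with a univariate Piyavskii-Shubert method as the one-dimensional solver: the outer variable is optimized over a function whose every evaluation is itself an inner one-dimensional minimization. The iteration counts, the shared Lipschitz constant, and the test function are assumptions for illustration, not the paper's accelerated variants.

```python
import math

def piyavskii_min(f, a, b, L, iters=60):
    """1-D Lipschitz global minimization (Piyavskii-Shubert): repeatedly
    evaluate f where the sawtooth lower bound built from L is smallest."""
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(iters):
        best_x, best_lb = None, math.inf
        pts = sorted(zip(xs, ys))
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            # intersection of the two Lipschitz cones on [x1, x2]
            x = 0.5 * (x1 + x2) + (y1 - y2) / (2.0 * L)
            lb = 0.5 * (y1 + y2) - 0.5 * L * (x2 - x1)
            if lb < best_lb:
                best_lb, best_x = lb, x
        xs.append(best_x)
        ys.append(f(best_x))
    i = min(range(len(ys)), key=ys.__getitem__)
    return xs[i], ys[i]

def nested_min(f2, a, b, L, inner_iters=40, outer_iters=40):
    """Nested scheme: min over (x, y) of f2 is reduced to min_x F(x), where
    each evaluation F(x) = min_y f2(x, y) is itself solved by the 1-D method."""
    def F(x):
        return piyavskii_min(lambda y: f2(x, y), a, b, L, inner_iters)[1]
    return piyavskii_min(F, a, b, L, outer_iters)
```

On a smooth two-dimensional test function the nested scheme homes in on the global minimum using only univariate machinery.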
Attributes mode sampling schemes for international material accountancy verification
Sanborn, J.B.
1982-12-01
This paper addresses the question of detecting falsifications in material balance accountancy reporting by comparing independently measured values to the declared values of a randomly selected sample of items in the material balance. A two-level strategy is considered, consisting of a relatively large number of measurements made at low accuracy, and a smaller number of measurements made at high accuracy. Sampling schemes for both types of measurements are derived, and rigorous proofs supplied that guarantee desired detection probabilities. Sample sizes derived using these methods are sometimes considerably smaller than those calculated previously.
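The hypergeometric detection-probability calculation underlying such sample-size derivations can be sketched directly. This is a textbook attribute-sampling computation offered as an illustration, not the paper's two-level derivation; the population sizes and target probability below are assumptions.

```python
from math import comb

def detection_probability(N, r, n):
    """Probability that a random sample of n items (drawn without replacement)
    from a population of N containing r falsified items includes at least one
    falsified item."""
    if n > N - r:
        return 1.0  # sample must contain a falsified item
    return 1.0 - comb(N - r, n) / comb(N, n)

def min_sample_size(N, r, target=0.95):
    """Smallest attribute-mode sample size achieving the desired detection
    probability against r falsifications among N items."""
    for n in range(1, N + 1):
        if detection_probability(N, r, n) >= target:
            return n
    return N
```

As expected, the required sample size falls sharply as the assumed number of falsified items grows.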
Powered-descent trajectory optimization scheme for Mars landing
NASA Astrophysics Data System (ADS)
Liu, Rongjie; Li, Shihua; Chen, Xisong; Guo, Lei
2013-12-01
This paper presents a trajectory optimization scheme for the powered-descent phase of Mars landing that takes disturbance into account. First, the θ-D method is applied to design a suboptimal control law for the descent model in the absence of disturbance. Second, the disturbance is estimated by a disturbance observer, and the estimate is used as feedforward compensation. Then, a semi-global stability analysis of the composite controller, consisting of the nonlinear suboptimal controller and the disturbance feedforward compensation, is presented. Finally, to verify the effectiveness of the proposed control scheme, an application with relevant simulations of a Mars landing mission is demonstrated.
A piecewise linear approximation scheme for hereditary optimal control problems
NASA Technical Reports Server (NTRS)
Cliff, E. M.; Burns, J. A.
1977-01-01
An approximation scheme based on 'piecewise linear' approximations of L2 spaces is employed to formulate a numerical method for solving quadratic optimal control problems governed by linear retarded functional differential equations. This piecewise linear method is an extension of the so-called averaging technique. It is shown that the Riccati equation for the linear approximation is solved by a simple transformation of the averaging solution. Thus, the computational requirements are essentially the same. Numerical results are given.
Spatial location weighted optimization scheme for DC optical tomography.
Zhou, Jun; Bai, Jing; He, Ping
2003-01-27
In this paper, a spatial location weighted gradient-based optimization scheme is presented for reducing the computational burden and increasing the reconstruction precision. The method applies to DC diffusion-based optical tomography, where the reconstruction otherwise suffers from slow convergence. The inverse approach employs a weighted steepest descent method combined with a conjugate gradient method. A reverse differentiation method is used to derive the gradient efficiently. The reconstruction results confirm that the spatial location weighted optimization method offers a more efficient approach to the DC optical imaging problem than the unweighted method does.
Optimizing passive acoustic sampling of bats in forests
Froidevaux, Jérémy S P; Zellweger, Florian; Bollmann, Kurt; Obrist, Martin K
2014-01-01
Passive acoustic methods are increasingly used in biodiversity research and monitoring programs because they are cost-effective and permit the collection of large datasets. However, the accuracy of the results depends on the bioacoustic characteristics of the focal taxa and their habitat use. In particular, this applies to bats, which exhibit distinct activity patterns in three-dimensionally structured habitats such as forests. We assessed the performance of 21 acoustic sampling schemes with three temporal sampling patterns and seven sampling designs. Acoustic sampling was performed in 32 forest plots, each containing three microhabitats: forest ground, canopy, and forest gap. We compared bat activity, species richness, and sampling effort using species accumulation curves fitted with the Clench equation. In addition, we estimated the sampling costs to undertake the best sampling schemes. We recorded a total of 145,433 echolocation call sequences of 16 bat species. Our results indicated that to generate the best outcome, it was necessary to sample all three microhabitats of a given forest location simultaneously throughout the entire night. Sampling only the forest gaps and the forest ground simultaneously was the second best choice and proved to be a viable alternative when the number of available detectors is limited. When assessing bat species richness at the 1-km2 scale, the implementation of these sampling schemes at three to four forest locations yielded the highest labor cost-benefit ratios but increased equipment costs. Our study illustrates that multiple passive acoustic sampling schemes require testing based on the target taxa and habitat complexity and should be performed with reference to cost-benefit ratios. Choosing a standardized and replicated sampling scheme is particularly important to optimize the level of precision in inventories, especially when rare or elusive species are expected. PMID:25558363
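The Clench-equation fit used for the species accumulation curves can be sketched via its standard linearization, 1/S = (1/a)(1/n) + b/a, which turns the fit into an ordinary linear regression; the effort units and synthetic data below are illustrative assumptions.

```python
import numpy as np

def fit_clench(effort, richness):
    """Fit the Clench accumulation curve S(n) = a*n / (1 + b*n) by linear
    regression on the transformed variables 1/S vs 1/n, and report the
    asymptotic richness a/b."""
    x = 1.0 / np.asarray(effort, float)
    y = 1.0 / np.asarray(richness, float)
    slope, intercept = np.polyfit(x, y, 1)  # y = slope*x + intercept
    a = 1.0 / slope                         # slope = 1/a
    b = intercept * a                       # intercept = b/a
    return a, b, a / b
```

On noiseless synthetic data generated from known parameters, the fit recovers them and hence the asymptotic species richness.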
Towards optimal sampling schedules for integral pumping tests
NASA Astrophysics Data System (ADS)
Leschik, Sebastian; Bayer-Raich, Marti; Musolff, Andreas; Schirmer, Mario
2011-06-01
Conventional point sampling may miss plumes in groundwater due to an insufficient density of sampling locations. The integral pumping test (IPT) method overcomes this problem by increasing the sampled volume. One or more wells are pumped for a long duration (several days) and samples are taken during pumping. The obtained concentration-time series are used for the estimation of average aquifer concentrations Cav and mass flow rates MCP. Although the IPT method is a well-accepted approach for the characterization of contaminated sites, no substantiated guideline for the design of IPT sampling schedules (optimal number of samples and optimal sampling times) is available. This study provides a first step towards optimal IPT sampling schedules by a detailed investigation of 30 high-frequency concentration-time series. Different sampling schedules were tested by modifying the original concentration-time series. The results reveal that the relative error in the Cav estimation increases with a reduced number of samples and higher variability of the investigated concentration-time series. Maximum errors of up to 22% were observed for sampling schedules with the lowest number of samples, three. The sampling scheme that relies on constant time intervals ∆t between different samples yielded the lowest errors.
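The schedule comparison, estimating Cav from n samples and measuring the relative error against the full series, can be sketched as follows. Equal-interval sampling of a synthetic series and a plain arithmetic mean are assumptions standing in for the study's 30 field time series and its estimator.

```python
import numpy as np

def cav_relative_error(series, n_samples):
    """Relative error (%) in the average concentration Cav when estimated from
    n equally spaced samples of a concentration-time series, versus the mean
    of the full high-frequency series."""
    series = np.asarray(series, float)
    # constant-interval (delta-t) schedule over the full duration
    idx = np.linspace(0, len(series) - 1, n_samples).round().astype(int)
    c_av_full = series.mean()
    c_av_sub = series[idx].mean()
    return 100.0 * abs(c_av_sub - c_av_full) / c_av_full
```

Sweeping `n_samples` over a set of real series reproduces the kind of error-versus-schedule curves the study reports.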
A new configurational bias scheme for sampling supramolecular structures
De Gernier, Robin; Mognetti, Bortolo M.; Curk, Tine; Dubacheva, Galina V.; Richter, Ralf P.
2014-12-28
We present a new simulation scheme that allows an efficient sampling of reconfigurable supramolecular structures made of polymeric constructs functionalized by reactive binding sites. The algorithm is based on the configurational bias scheme of Siepmann and Frenkel and is powered by the possibility of changing the topology of the supramolecular network by a non-local Monte Carlo algorithm. This is accomplished by a multi-scale modelling that merges coarse-grained simulations, describing the typical polymer conformations, with experimental results accounting for the free energy terms involved in the reactions of the active sites. We test the new algorithm on a system of DNA-coated colloids, for which we compute the hybridisation free energy cost associated with the binding of tethered single-stranded DNAs terminated by short sequences of complementary nucleotides. To demonstrate the versatility of our method, we also consider polymers functionalized by receptors that bind a surface decorated by ligands. In particular, we compute the density of states of adsorbed polymers as a function of the number of ligand–receptor complexes formed; this quantity can be used to study the conformational properties of adsorbed polymers, which is useful when engineering adsorption with tailored properties. We successfully compare the results with the predictions of a mean field theory. We believe that the proposed method will be a useful tool to investigate supramolecular structures resulting from direct interactions between functionalized polymers, for which efficient numerical methodologies of investigation are still lacking.
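The configurational-bias machinery rests on Rosenbluth chain growth. A minimal lattice version, a generic textbook sketch rather than the authors' multi-scale model, is:

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def rosenbluth_grow(n, rng):
    """Grow one self-avoiding walk of n steps by Rosenbluth chain growth, the
    ingredient underlying configurational-bias Monte Carlo: at each step choose
    among the open trial directions and accumulate the weight W = prod(k_i),
    where k_i counts the open choices at step i."""
    chain = [(0, 0)]
    occupied = {(0, 0)}
    weight = 1.0
    for _ in range(n):
        x, y = chain[-1]
        trials = [(x + dx, y + dy) for dx, dy in MOVES
                  if (x + dx, y + dy) not in occupied]
        if not trials:
            return None, 0.0  # dead end: zero-weight configuration
        weight *= len(trials)
        nxt = rng.choice(trials)
        chain.append(nxt)
        occupied.add(nxt)
    return chain, weight
```

Averaging the weights of many grown chains gives an unbiased estimate of the number of self-avoiding walks of that length; the configurational-bias scheme applies the same reweighting trick to its biased binding moves.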
Sampling and reconstruction schemes for biomagnetic sensor arrays.
Naddeo, Adele; Della Penna, Stefania; Nappi, Ciro; Vardaci, Emanuele; Pizzella, Vittorio
2002-09-21
In this paper we generalize the approach of Ahonen et al (1993 IEEE Trans. Biomed. Eng. 40 859-69) to two-dimensional non-uniform sampling. The focus is on two main topics: (1) searching for the optimal sensor configuration on a planar measurement surface; and (2) reconstructing the magnetic field (a continuous function) from a discrete set of data points recorded with a finite number of sensors. A reconstruction formula for Bz is derived in the framework of the multidimensional Papoulis generalized sampling expansion (Papoulis A 1977 IEEE Trans. Circuits Syst. 24 652-4, Cheung K F 1993 Advanced Topics in Shannon Sampling and Interpolation Theory (New York: Springer) pp 85-119) in a particular case. Application of these considerations to the design of biomagnetic sensor arrays is also discussed.
Noise and Nonlinear Estimation with Optimal Schemes in DTI
Özcan, Alpay
2010-01-01
In general, the estimation of diffusion properties in diffusion tensor imaging (DTI) experiments is accomplished via least squares estimation (LSE). The technique requires applying the logarithm to the measurements, which leads to poor propagation of errors. Moreover, the way noise enters the equations invalidates the least squares estimate as the best linear unbiased estimate. Nonlinear estimation (NE), despite its longer computation time, suffers from none of these problems. However, all of the conditions and optimization methods developed in the past are based on the coefficient matrix obtained in an LSE setup. In this manuscript, nonlinear estimation for DTI is analyzed to demonstrate that any result about the coefficient matrix obtained relatively easily in a linear algebra setup can be applied to the more complicated NE framework. The data, obtained earlier using non-optimal and optimized diffusion gradient schemes, are processed with NE. In comparison with LSE, the results show significant improvements, especially for the optimization criterion. However, NE does not resolve the existing conflicts and ambiguities displayed with LSE methods. PMID:20655681
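The log-transform problem can be reproduced with a one-parameter toy model (a mono-exponential signal rather than the full tensor fit; noise level, b-values and grid are illustrative assumptions): LSE fits a line to log(S), while NE minimizes the residual on S directly, here by a simple grid search.

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.linspace(0.0, 3.0, 8)          # diffusion weightings (illustrative units)
D_true, S0, sigma = 1.0, 1.0, 0.05    # true diffusivity, signal scale, noise level

trials_lse, trials_ne = [], []
D_grid = np.linspace(0.1, 2.0, 2000)  # search grid for the nonlinear fit
for _ in range(500):
    S = S0 * np.exp(-b * D_true) + sigma * rng.standard_normal(b.size)
    S = np.clip(S, 1e-6, None)        # the log transform forces positivity
    # LSE: linear fit to log(S); the log distorts the (additive) noise.
    slope = np.polyfit(b, np.log(S), 1)[0]
    trials_lse.append(-slope)
    # NE: least squares directly on S via a 1-D grid search over D.
    resid = ((S[None, :] - S0 * np.exp(-b[None, :] * D_grid[:, None])) ** 2).sum(axis=1)
    trials_ne.append(D_grid[np.argmin(resid)])

bias_lse = abs(np.mean(trials_lse) - D_true)
bias_ne = abs(np.mean(trials_ne) - D_true)
```

At low signal-to-noise the log-domain fit is strongly biased, while the direct nonlinear fit stays close to the true value, which is the qualitative point of the abstract.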
Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil
2015-03-17
An axially perpendicular offset (APO) scheme that can directly acquire reproducible Raman spectra of samples contained in an oval container, irrespective of container orientation, has been demonstrated. The scheme uses an axially perpendicular geometry between laser illumination and Raman photon detection: irradiation through a sidewall of the container and collection of Raman photons just beneath the container. In either backscattering or transmission measurements, the Raman sampling volume for an internal sample varies when the orientation of an oval container changes, so the intensities of the acquired spectra are inconsistent. In the APO scheme the generated Raman photons always traverse the same container bottom, so the Raman sampling volumes remain relatively consistent under the same conditions. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations, and the accuracies of the determined alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the observed optimal offset distance. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.
An optimal performance control scheme for a 3D crane
NASA Astrophysics Data System (ADS)
Maghsoudi, Mohammad Javad; Mohamed, Z.; Husain, A. R.; Tokhi, M. O.
2016-01-01
This paper presents an optimal performance control scheme for a three-dimensional (3D) crane system, including a Zero Vibration shaper, which considers two control objectives concurrently: fast and accurate positioning of the trolley and minimum sway of the payload. A complete mathematical model of a lab-scaled 3D crane is simulated in Simulink. With a specific cost function, the proposed controller is designed to cater for both control objectives, similar to a skilled operator. Simulation and experimental studies on a 3D crane show that the proposed controller performs better than a sequentially tuned PID-PID anti-swing controller, providing better position response with satisfactory payload sway in both rail and trolley responses. Experiments with different payloads and cable lengths show that the proposed controller is robust to changes in payload while maintaining satisfactory responses.
Diffusion spectrum MRI using body-centered-cubic and half-sphere sampling schemes.
Kuo, Li-Wei; Chiang, Wen-Yang; Yeh, Fang-Cheng; Wedeen, Van Jay; Tseng, Wen-Yih Isaac
2013-01-15
The optimum sequence parameters of diffusion spectrum MRI (DSI) on clinical scanners were investigated previously. However, the scan time of approximately 30 min is still too long for patient studies. Additionally, a relatively large sampling interval in the diffusion-encoding space may cause aliasing artifacts in the probability density function when the Fourier transform is applied, leading to estimation errors in fiber orientations. This study therefore proposed a non-Cartesian sampling scheme, body-centered-cubic (BCC), to avoid the aliasing artifact seen with the conventional Cartesian grid sampling scheme (GRID). Furthermore, the accuracy of DSI with half-sphere sampling schemes, i.e. GRID102 and BCC91, was investigated by comparison with the corresponding full-sphere sampling schemes, GRID203 and BCC181. The BCC sampling scheme yielded a smaller deviation angle and lower angular dispersion, and the half-sphere sampling schemes yielded angular precision and accuracy comparable to the full-sphere schemes. The optimum b(max) was approximately 4750 s/mm(2) for GRID and 4500 s/mm(2) for BCC. In conclusion, the BCC sampling scheme could be implemented as a useful alternative to the GRID sampling scheme. Combining BCC and half-sphere sampling, that is BCC91, may potentially reduce the scan time of DSI from 30 min to approximately 14 min while maintaining precision and accuracy.
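The two lattices can be sketched as point generators (radius and scaling are illustrative assumptions, not the paper's GRID203/BCC181 protocols): the BCC lattice is the cubic lattice plus a second copy offset by half a cell, giving a denser, more isotropic packing inside the sampling sphere.

```python
import numpy as np

def grid_points(radius):
    """Cartesian (GRID-style) lattice points inside a sphere of given radius."""
    r = int(radius)
    axis = np.arange(-r, r + 1)
    pts = np.array(np.meshgrid(axis, axis, axis)).reshape(3, -1).T
    return pts[np.linalg.norm(pts, axis=1) <= radius]

def bcc_points(radius):
    """BCC-style lattice: the cubic lattice plus a copy offset by
    (0.5, 0.5, 0.5), restricted to the same sphere."""
    corner = grid_points(radius)
    r = int(radius)
    axis = np.arange(-r, r + 1) + 0.5
    center = np.array(np.meshgrid(axis, axis, axis)).reshape(3, -1).T
    center = center[np.linalg.norm(center, axis=1) <= radius]
    return np.vstack([corner, center])

grid = grid_points(3.0)   # GRID scheme: integer lattice inside the sphere
bcc = bcc_points(3.0)     # BCC scheme: extra body-centered points
```

For radius 3 the cubic lattice contributes the classical 123 points, and the body-centered copy adds further samples, illustrating why BCC packs q-space more efficiently at the same extent.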
Initial data sampling in design optimization
NASA Astrophysics Data System (ADS)
Southall, Hugh L.; O'Donnell, Terry H.
2011-06-01
Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a Latin square, a statistical sampling technique which samples a square grid such that there is a single sample in any given row and column. The Latin hypercube is a generalization to any number of dimensions. However, a standard random Latin hypercube can result in initial data sets which may be highly correlated and may not have good space-filling properties. There are techniques which address these issues. We describe and use one technique in this paper.
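A minimal random Latin hypercube generator can be sketched as follows (a plain random construction; as the abstract notes, it does not by itself guarantee low correlation or good space-filling, which dedicated techniques address):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Random Latin hypercube on [0, 1)^d: each of the n_samples rows
    lands in a distinct 1/n_samples-wide bin along every dimension."""
    rng = np.random.default_rng(rng)
    # One random point inside each stratum, then shuffle strata per dimension.
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    return u

pts = latin_hypercube(10, 2, rng=0)  # 10 initial design points in 2-D
```

For n_dims = 2 this reduces to the Latin square of the abstract: exactly one sample in every row bin and every column bin.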
A global optimization algorithm for simulation-based problems via the extended DIRECT scheme
NASA Astrophysics Data System (ADS)
Liu, Haitao; Xu, Shengli; Wang, Xiaofang; Wu, Junnan; Song, Yang
2015-11-01
This article presents a global optimization algorithm via the extension of the DIviding RECTangles (DIRECT) scheme to handle problems with computationally expensive simulations efficiently. The new optimization strategy improves the regular partition scheme of DIRECT to a flexible irregular partition scheme in order to utilize information from irregular points. The metamodelling technique is introduced to work with the flexible partition scheme to speed up the convergence, which is meaningful for simulation-based problems. Comparative results on eight representative benchmark problems and an engineering application with some existing global optimization algorithms indicate that the proposed global optimization strategy is promising for simulation-based problems in terms of efficiency and accuracy.
NASA Astrophysics Data System (ADS)
Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.
2014-12-01
Carbon monoxide (CO) is an urban pollutant generated by internal combustion engines which contributes to the formation of ground level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40% from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high frequency temporal sampling and increases the cost and complexity of the measurement system.
Optimization of filtering schemes for broadband astro-combs.
Chang, Guoqing; Li, Chih-Hao; Phillips, David F; Szentgyorgyi, Andrew; Walsworth, Ronald L; Kärtner, Franz X
2012-10-22
To realize a broadband, large-line-spacing astro-comb, suitable for wavelength calibration of astrophysical spectrographs, from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three additional components: (1) one or more filter cavities to multiply the source-comb's repetition rate and thus line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb. In this paper we analyze the interplay of Fabry-Perot (FP) filter cavities with power amplifiers and nonlinear broadening fiber in the design of astro-combs optimized for radial-velocity (RV) calibration accuracy. We present analytic and numeric models and use them to evaluate a variety of FP filtering schemes (labeled as identical, co-prime, fraction-prime, and conjugate cavities), coupled to chirped-pulse amplification (CPA). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase RV error for spectrograph calibration. In general, filtering with two cavities prior to the CPA fiber amplifier outperforms an amplifier placed between the two cavities. In particular, filtering with conjugate cavities is able to provide <1 cm/s RV calibration error with >300 nm wavelength coverage. Such superior performance will facilitate the search for and characterization of Earth-like exoplanets, which requires <10 cm/s RV calibration error.
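The basic line-spacing multiplication by a Fabry-Perot filter cavity can be sketched with the lossless Airy transmission function (repetition rate, multiplication factor and finesse below are illustrative assumptions, not the paper's design values):

```python
import numpy as np

def fp_transmission(f, fsr, finesse):
    """Airy transmission of a lossless Fabry-Perot cavity with free
    spectral range fsr and the given finesse."""
    F = (2.0 * finesse / np.pi) ** 2          # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(np.pi * f / fsr) ** 2)

# Source comb with 1 GHz line spacing, filter cavity with FSR = 16 GHz:
# every 16th comb line is resonant, intermediate lines are suppressed.
rep_rate, m = 1.0, 16                         # GHz, line-spacing multiplier
lines = np.arange(0, 160) * rep_rate          # comb line frequencies (offset ignored)
T = fp_transmission(lines, fsr=m * rep_rate, finesse=200)

passed = T[::m]       # resonant lines: transmission ~ 1
suppressed = T[1::m]  # nearest side modes: strongly attenuated
```

The finite suppression of the side modes computed here is exactly the quantity that, per the abstract, nonlinear phase in the amplifier can degrade, motivating two-cavity (e.g. conjugate) filtering before amplification.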
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. ?? 1984 Plenum Publishing Corporation.
NASA Astrophysics Data System (ADS)
Li, Y.; Han, B.; Métivier, L.; Brossier, R.
2016-09-01
We investigate an optimal fourth-order staggered-grid finite-difference scheme for 3D frequency-domain viscoelastic wave modeling. An anti-lumped mass strategy is incorporated to minimize the numerical dispersion. The optimal finite-difference coefficients and the mass weighting coefficients are obtained by minimizing the misfit between the normalized phase velocities and unity. An iterative damped least-squares method, the Levenberg-Marquardt algorithm, is utilized for the optimization. Dispersion analysis shows that the optimal fourth-order scheme presents less grid dispersion and anisotropy than the conventional fourth-order scheme for different Poisson's ratios. Moreover, only 3.7 grid points per minimum shear wavelength are required to keep the error of the group velocities below 1%; the memory cost is thus greatly reduced by the coarser sampling. A parallel iterative method named CARP-CG is used to solve the large ill-conditioned linear system for the frequency-domain modeling. Validations are conducted against both the analytic viscoacoustic and viscoelastic solutions. Compared with the conventional fourth-order scheme, the optimal scheme generates wavefields with smaller errors under the same discretization setups. Profiles of the wavefields are presented to confirm the better agreement between the optimal results and the analytic solutions.
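The coefficient-optimization idea can be illustrated in 1-D (a sketch, not the paper's 3D viscoelastic scheme: in 1-D the numerical wavenumber is linear in the stencil coefficients, so ordinary least squares suffices where the paper needs Levenberg-Marquardt; the wavenumber range is an assumption):

```python
import numpy as np

dx = 1.0
k = np.linspace(0.01, 0.85 * np.pi, 200)   # wavenumbers up to ~0.85 * Nyquist

# Numerical wavenumber of a 4th-order staggered first-derivative stencil:
#   k_num(k) = (2/dx) * (c1*sin(k*dx/2) + c2*sin(3*k*dx/2))
# k_num is linear in (c1, c2), so minimizing ||k_num - k|| over the band
# is an ordinary least-squares problem.
A = (2.0 / dx) * np.column_stack([np.sin(k * dx / 2), np.sin(3 * k * dx / 2)])
c_opt, *_ = np.linalg.lstsq(A, k, rcond=None)

c_taylor = np.array([9 / 8, -1 / 24])      # conventional Taylor coefficients
err_opt = np.abs(A @ c_opt - k).max()
err_taylor = np.abs(A @ c_taylor - k).max()
```

The band-optimized coefficients trade a little accuracy at long wavelengths for a much smaller worst-case dispersion error across the band, which is what permits the coarser grid sampling reported in the abstract.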
Quantum Optimal Multiple Assignment Scheme for Realizing General Access Structure of Secret Sharing
NASA Astrophysics Data System (ADS)
Matsumoto, Ryutaroh
The multiple assignment scheme assigns one or more shares to a single participant so that any access structure can be realized by classical secret sharing schemes. We propose its quantum version, including ramp secret sharing schemes, and then propose an integer optimization approach to minimize the average share size.
Optimization of light collection scheme for forward hadronic calorimeter for STAR experiment at RHIC
NASA Astrophysics Data System (ADS)
Sergeeva, Maria
2013-10-01
We present the results of the optimization of a light collection scheme for a prototype of a sampling compensated hadronic calorimeter for the upgrade of the STAR detector at RHIC (BNL). The absolute light yield and uniformity of light collection were measured with a full-scale calorimeter tower for different types of reflecting materials, realistic mechanical tolerances for tower assembly, and types of coupling between WLS bars and photodetectors. Measurements were performed with conventional PMTs and silicon photomultipliers. The results of these measurements were used to evaluate the influence of the light collection scheme on the response of the calorimeter using GEANT4 MC. A large prototype of this calorimeter is presently under construction, with a beam test scheduled early next year at FNAL.
Optimal design of a hybridization scheme with a fuel cell using genetic optimization
NASA Astrophysics Data System (ADS)
Rodriguez, Marco A.
The fuel cell is one of the most dependable "green power" technologies, readily available for immediate application. It enables direct conversion of hydrogen and other gases into electric energy without any pollution of the environment. However, efficient power generation is a strictly stationary process that cannot operate in a dynamic environment. Consequently, the fuel cell becomes practical only within a specially designed hybridization scheme capable of power storage and power management functions. The resultant technology can be utilized to its full potential only when both the fuel cell element and the entire hybridization scheme are optimally designed. Design optimization in engineering is among the most complex computational tasks due to the multidimensionality, nonlinearity, discontinuity, and constraints of the underlying optimization problem. This research aims at the optimal utilization of fuel cell technology through the use of genetic optimization and advanced computing. This study implements genetic optimization in the definition of optimum hybridization rules for a PEM fuel cell/supercapacitor power system. PEM fuel cells exhibit high energy density, but they are not intended for pulsating power draw applications; they work better in steady-state operation and thus are often hybridized. In a hybrid system, the fuel cell provides power during steady-state operation while capacitors or batteries augment the power of the fuel cell during power surges. Capacitors and batteries can also be recharged when the motor is acting as a generator. Making analogies to driving cycles, three hybrid system operating modes are investigated: 'Flat' mode, 'Uphill' mode, and 'Downhill' mode. In the process of discovering the switching rules for these three modes, we also generate a model of a 30W PEM fuel cell. This study also proposes the optimum design of a 30W PEM fuel cell. The PEM fuel cell model and hybridization's switching rules are postulated
An optimized quantum information splitting scheme with multiple controllers
NASA Astrophysics Data System (ADS)
Jiang, Min
2016-12-01
We propose an efficient scheme for splitting multi-qudit information with cooperative control of multiple agents. Each controller is assigned one controlling qudit, and he can monitor the state sharing of all multi-qudit information. Compared with the existing schemes, our scheme requires less resource consumption and approaches higher communication efficiency. In addition, our proposal involves only generalized Bell-state measurement, single-qudit measurement, one-qudit gates and a unitary-reduction operation, which makes it flexible and achievable for physical implementation.
Design of optimally smoothing multistage schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Van Leer, Bram; Lee, Wen-Tzong; Roe, Philip L.; Powell, Kenneth G.; Tai, Chang-Hsien
1992-01-01
A recently derived local preconditioning of the Euler equations is shown to be useful in developing multistage schemes suited for multigrid use. The effect of the preconditioning matrix on the spatial Euler operator is to equalize the characteristic speeds. When applied to the discretized Euler equations, the preconditioning has the effect of strongly clustering the operator's eigenvalues in the complex plane. This makes possible the development of explicit marching schemes that effectively damp most high-frequency Fourier modes, as desired in multigrid applications. The technique is the same as developed earlier for scalar convection schemes: placement of the zeros of the amplification factor of the multistage scheme in locations where eigenvalues corresponding to high-frequency modes abound.
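The zero-placement idea can be sketched by writing the amplification factor of an m-stage scheme directly through its zeros, g(z) = prod_k (1 - z/z_k), so g(0) = 1 and first-order consistency requires sum_k 1/z_k = -1 (the zero locations below are illustrative, not the paper's optimized placement):

```python
import numpy as np

def amplification(z, zeros):
    """Amplification factor of a multistage scheme expressed through its
    zeros: g(z) = prod_k (1 - z/z_k); g(0) = 1 by construction."""
    g = np.ones_like(z, dtype=complex)
    for zk in zeros:
        g = g * (1.0 - z / zk)
    return g

# Illustrative zeros in the left half-plane, where eigenvalues of the
# high-frequency modes are assumed to cluster after preconditioning.
zeros = np.array([-1.0 + 0.5j, -1.0 - 0.5j, -2.5])
# Enforce first-order consistency, sum_k 1/z_k = -1, by rescaling.
zeros = zeros * (-np.sum(1.0 / zeros).real)

z = np.linspace(-6.0, 0.0, 121) + 0j       # sample the negative real axis
g = np.abs(amplification(z, zeros))
```

Placing the zeros where the clustered eigenvalues of the preconditioned operator sit drives |g| toward zero there, i.e. the high-frequency Fourier modes are damped, which is the multigrid smoothing property the abstract describes.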
Optimization schemes for the inversion of Bouguer gravity anomalies
NASA Astrophysics Data System (ADS)
Zamora, Azucena
associated with structural changes [16]; therefore, it complements those geophysical methods with the same depth resolution that sample a different physical property (e.g. electromagnetic surveys sampling electric conductivity) or even those with different depth resolution sampling an alternative physical property (e.g. large-scale seismic reflection surveys imaging the crust and top upper mantle using seismic velocity fields). In order to improve the resolution of Bouguer gravity anomalies, and reduce their ambiguity and uncertainty for the modeling of the shallow crust, we propose the implementation of primal-dual interior point methods for the optimization of density structure models through the introduction of physical constraints for transitional areas obtained from previously acquired geophysical data sets. Chapter 2 of this dissertation presents an initial forward model implementation for the calculation of Bouguer gravity anomalies in the Porphyry Copper-Molybdenum (Cu-Mo) Copper Flat Mine region located in Sierra County, New Mexico. In Chapter 3, we present a constrained optimization framework (using interior-point methods) for the inversion of 2-D models of Earth structures delineating density contrasts of anomalous bodies in uniform regions and/or boundaries between layers in layered environments. We implement the proposed algorithm using three synthetic gravitational data sets of varying complexity. Specifically, we improve the 2-D density structure models by eliminating unacceptable solutions (geologically unfeasible models or those not satisfying the required constraints), given the reduction of the solution space. Chapter 4 shows the results of applying our algorithm to the inversion of gravitational data from the area surrounding the Porphyry Cu-Mo Copper Flat Mine in Sierra County, NM.
Information obtained from previous induced polarization surveys and core samples served as physical constraints for the
Design of Multishell Sampling Schemes with Uniform Coverage in Diffusion MRI
Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid
2017-01-01
Purpose: In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. A careful design is central to the success of multishell acquisition and reconstruction techniques. Methods: The design of multishell acquisition schemes is still an open and active field of research, however. In this work, we provide a general method to design multishell acquisitions with uniform angular coverage, based on a generalization of electrostatic repulsion to multiple shells. Results: We evaluate the impact of our method using simulations on the angular resolution in one- and two-bundle fiber configurations. Compared to the more commonly used radial sampling, we show that our method improves the angular resolution as well as fiber crossing discrimination. Discussion: We propose a novel method to design sampling schemes with optimal angular coverage and show its positive impact on angular resolution in diffusion MRI. PMID:23625329
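The single-shell version of electrostatic repulsion (the building block the paper generalizes to multiple shells) can be sketched as a simple iteration that pushes points apart along the net Coulomb force and renormalizes them back to the sphere; step size, iteration count and the omission of antipodal symmetry are simplifying assumptions:

```python
import numpy as np

def repulsion_points(n, iters=2000, step=0.01, rng=0):
    """Spread n unit vectors over the sphere by electrostatic repulsion:
    move each point along the net 1/d^2 force from all others, then
    renormalize to unit length."""
    rng = np.random.default_rng(rng)
    p = rng.standard_normal((n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(iters):
        diff = p[:, None, :] - p[None, :, :]             # pairwise differences
        dist = np.linalg.norm(diff, axis=2) + np.eye(n)  # eye avoids div by zero
        force = (diff / dist[:, :, None] ** 3).sum(axis=1)
        p += step * force
        p /= np.linalg.norm(p, axis=1, keepdims=True)
    return p

pts = repulsion_points(12)
dmin = min(np.linalg.norm(pts[i] - pts[j]) for i in range(12) for j in range(i))
```

For 12 points the iteration approaches an icosahedral arrangement; the multishell design of the paper couples such intra-shell terms with inter-shell repulsion so the union of all shells is also uniformly spread.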
Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression
NASA Astrophysics Data System (ADS)
Horng, Ming-Huwi
Vector quantization is a powerful technique in digital image compression. Traditional methods such as the Linde-Buzo-Gray (LBG) algorithm tend to generate only a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we apply a new swarm algorithm, honey bee mating optimization, to construct the codebook; the proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of the LBG and PSO-LBG algorithms. Experimental results showed that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images have higher quality than those generated by the other two methods.
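The baseline LBG iteration that HBMO and PSO variants aim to improve can be sketched as a plain generalized Lloyd loop (the toy 2-D training set and seeds are illustrative assumptions, not image data):

```python
import numpy as np

def lbg(data, n_codes, iters=50, rng=0):
    """Plain LBG / generalized Lloyd iteration: assign each training
    vector to its nearest codeword, then move each codeword to the
    centroid of its cell.  Converges only to a *local* optimum, which is
    the weakness swarm-based variants try to address."""
    rng = np.random.default_rng(rng)
    codebook = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    for _ in range(iters):
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        for k in range(n_codes):
            members = data[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, nearest

# Toy training set: two well-separated clusters of 2-D "image blocks".
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
codebook, assign = lbg(data, 2)
```

On this easy two-cluster set LBG finds the right codewords; on real image-block distributions with many codewords it often stalls in poor local optima, which motivates the swarm-based search of the abstract.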
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times, owing to the combination of high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally can yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, compared to prior techniques for sampling optimization given knowledge of the image geometry.
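The "bias toward high signal energy" prescription that the paper starts from can be sketched as a weighted random k-space mask (geometry, grid size and sub-sampling budget are illustrative assumptions; the paper's contribution, choosing the weighting from the marginal signal change, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
geometry = np.zeros((n, n))
geometry[28:36, 8:56] = 1.0          # binarized prior: a horizontal channel

# k-space energy of the prior geometry, used as the sampling weight.
energy = np.abs(np.fft.fftshift(np.fft.fft2(geometry))) ** 2
weights = (energy / energy.sum()).ravel()

budget = int(0.25 * n * n)           # 25% sub-sampling budget
chosen = rng.choice(n * n, size=budget, replace=False, p=weights)
mask = np.zeros(n * n, dtype=bool)
mask[chosen] = True
mask = mask.reshape(n, n)            # True = k-space location is acquired
```

Because the channel's spectral energy concentrates near DC, the mask samples the k-space center far more densely than the periphery, exactly the behavior that the abstract notes can become inefficient at low sub-sampling.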
Sampling scheme for pyrethroids on multiple surfaces on commercial aircrafts
MOHAN, KRISHNAN R.; WEISEL, CLIFFORD P.
2015-01-01
A wipe sampler for the collection of permethrin from soft and hard surfaces has been developed for use in aircraft. "Disinsection", or application of pesticides, predominantly pyrethroids, inside commercial aircraft is routinely required by some countries and is done on an as-needed basis by airlines, resulting in potential dermal and inhalation pesticide exposures to the crew and passengers. A wipe method using filter paper and water was evaluated for both soft and hard aircraft surfaces. Permethrin was analyzed by GC/MS after ultrasonication extraction from the sampling medium into hexane and volume reduction. Recoveries, based on spraying known levels of permethrin, were 80–100% from table trays, seat handles and rugs, and 40–50% from seat cushions. The wipe sampler is easy to use, requires minimal training, is compatible with regulations on what can be brought through security for use on commercial aircraft, and is readily adaptable for use in residential and other settings. PMID:19756041
Effect of different sampling schemes on the spatial placement of conservation reserves in Utah, USA
Bassett, S.D.; Edwards, T.C.
2003-01-01
We evaluated the effect that three different sampling schemes used to organize spatially explicit biological information had on the spatial placement of conservation reserves in Utah, USA. The three sampling schemes consisted of a hexagon representation developed by the EPA/EMAP program (statistical basis), watershed boundaries (ecological), and the current county boundaries of Utah (socio-political). Four decision criteria were used to estimate effects: amount of area, length of edge, lowest number of contiguous reserves, and greatest number of terrestrial vertebrate species covered. A fifth evaluation criterion was the effect each sampling scheme had on the ability of the modeled conservation reserves to cover the six major ecoregions found in Utah. Of the three sampling schemes, county boundaries covered the greatest number of species, but also created the longest length of edge and the greatest number of reserves. Watersheds maximized species coverage using the least amount of area. Hexagons and watersheds provided the least amount of edge and the fewest reserves. Although there were differences in area, edge, and number of reserves among the sampling schemes, all three schemes covered all the major ecoregions in Utah and their inclusive biodiversity. © 2003 Elsevier Science Ltd. All rights reserved.
Resource optimization scheme for multimedia-enabled wireless mesh networks.
Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md Jalil; Suh, Doug Young
2014-08-08
Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to the network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs enabled with multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered the resource underutilization caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct the optimal connected mesh backbone topology with a minimum number of links and relay nodes that satisfies the given end-to-end QoS demands for multimedia traffic and identifies extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies show that our heuristic algorithm provides near-optimal results and saves about 20% of the resources otherwise wasted by QoS-provisioning routing and random topology deployment.
Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks
Robinson, Y. Harold; Rajaram, M.
2015-01-01
Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds optimal loop-free, link-disjoint paths in a MANET and is used as a path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, particle swarm optimization (PSO) is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
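The PSO idea itself is compact. A textbook sketch (not the paper's EMPSO/CTRNN formulation; the cost function and parameter values are illustrative) minimizes a toy cost:

```python
import random

def pso(f, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization: each particle remembers its personal
    best, and the swarm's global best biases every velocity update."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a route-cost function (the sphere function).
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

In a routing context, `f` would score a candidate path set by energy, transmission cost, and traffic ratio rather than this toy quadratic.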
Constrained optimization schemes for geophysical inversion of seismic data
NASA Astrophysics Data System (ADS)
Sosa Aguirre, Uram Anibal
Many experimental techniques in geophysics advance the understanding of Earth processes by estimating and interpreting Earth structure (e.g., velocity and/or density structure). These techniques use different types of geophysical data, which can be collected and analyzed separately, sometimes resulting in inconsistent models of the Earth depending on data quality, methods, and assumptions made. This dissertation presents two approaches for geophysical inversion of seismic data based on constrained optimization. In one approach we expand a one-dimensional (1-D) joint inversion least-squares (LSQ) algorithm by introducing a constrained optimization methodology, and then use the 1-D inversion results to produce 3-D Earth velocity structure models. In the second approach, we provide a unified constrained optimization framework for solving a 1-D inverse wave propagation problem. In Chapter 2 we present a constrained optimization framework for joint inversion that characterizes 1-D Earth structure using seismic shear wave velocities as the model parameter. We create two geophysical synthetic data sets sensitive to shear velocities, namely receiver functions and surface wave dispersion. We validate our approach by comparing our numerical results with a traditional unconstrained method, and we also test the robustness of our approach in the presence of noise. Chapter 3 extends this framework to include an interpolation technique for creating 3-D Earth velocity structure models of the Rio Grande Rift region. Chapter 5 introduces the joint inversion of multiple data sets by adding delay travel-time information in a synthetic setup, and leaves open the possibility of including more data sets. Finally, in Chapter 4 we pose a 1-D inverse full-waveform propagation problem as a PDE-constrained optimization program, where we invert for the material properties in terms of shear wave velocities throughout the physical domain. We facilitate the implementation and comparison of different
Design of optimally smoothing multi-stage schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Van Leer, Bram; Tai, Chang-Hsien; Powell, Kenneth G.
1989-01-01
In this paper, a method is developed for designing multi-stage schemes that give optimal damping of high frequencies for a given spatial-differencing operator. The objective of the method is to design schemes that combine well with multi-grid acceleration. The schemes are tested on a nonlinear scalar equation, and compared to Runge-Kutta schemes with the maximum stable time-step. The optimally smoothing schemes perform better than the Runge-Kutta schemes, even on a single grid. The analysis is extended to the Euler equations in one space-dimension by use of 'characteristic time-stepping', which preconditions the equations, removing stiffness due to variations among characteristic speeds. Convergence rates independent of the number of cells in the finest grid are achieved for transonic flow with and without a shock. Characteristic time-stepping is shown to be preferable to local time-stepping, although use of the optimally damping schemes appears to enhance the performance of local time-stepping. The extension of the analysis to the two-dimensional Euler equations is hampered by the lack of a model for characteristic time-stepping in two dimensions. Some results for local time-stepping are presented.
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Manufacturing, Processing, Distribution in Commerce, and Use Prohibitions; Sampling Non-Porous Surfaces. § 761.316 (cross-referenced at § 761.79(b)(3)): Interpreting PCB concentration measurements resulting from this sampling scheme.
NASA Astrophysics Data System (ADS)
Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang
2017-01-01
This paper investigates revenue-neutral tradable credit charge-and-reward schemes without initial credit allocations that can reassign network traffic flow patterns to optimize congestion and emissions. First, we prove the existence of the proposed schemes and decentralize the minimum-emission flow pattern to user equilibrium; we also design the solution method of the proposed credit scheme for the minimum-emission problem. Second, we investigate the revenue-neutral tradable credit charge-and-reward scheme without initial credit allocations for the bi-objective problem of obtaining the Pareto system-optimum flow patterns of congestion and emissions, and show that the corresponding solutions are located in the polyhedron constituted by a system of inequalities and equalities. Last, a numerical example based on a simple traffic network is used to derive the proposed credit schemes and verify that they are revenue-neutral.
NASA Astrophysics Data System (ADS)
Toulorge, T.; Desmet, W.
2012-02-01
We study the performance of methods of lines combining discontinuous Galerkin spatial discretizations and explicit Runge-Kutta time integrators, with the aim of deriving optimal Runge-Kutta schemes for wave propagation applications. We review relevant Runge-Kutta methods from the literature, and consider schemes of order q from 3 to 4, with up to q + 4 stages, for optimization. From a user point of view, the problem of computational efficiency involves the choice of the best combination of mesh and numerical method; two scenarios are defined. In the first, the element size is totally free, and an 8-stage, fourth-order Runge-Kutta scheme is found to minimize a cost measure depending on both accuracy and stability. In the second, the elements are assumed to be constrained to such a small size by geometrical features of the computational domain that accuracy is disregarded. We then derive one 7-stage, third-order scheme and one 8-stage, fourth-order scheme that maximize the stability limit. The performance of the three new schemes is thoroughly analyzed, and the benefits are illustrated with two examples. For each of these Runge-Kutta methods, we provide the coefficients for a 2N-storage implementation, along with the information needed by the user to employ them optimally.
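How a stability limit is measured can be sketched for the classical RK4 scheme; this is a generic illustration only, since the paper's optimized schemes use different coefficients and the spectrum of a DG operator rather than the bare imaginary axis assumed here:

```python
def stability_limit_imag(coeffs, dy=1e-3, tol=1e-12):
    """Largest y such that |R(iy)| <= 1, where R(z) = sum(coeffs[k] * z**k)
    is the stability polynomial, scanned along the imaginary axis."""
    y = 0.0
    while True:
        z = 1j * (y + dy)
        r = sum(c * z**k for k, c in enumerate(coeffs))
        if abs(r) > 1.0 + tol:
            return y
        y += dy

# Classical 4-stage RK4: R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24.
rk4 = [1.0, 1.0, 0.5, 1.0 / 6.0, 1.0 / 24.0]
limit = stability_limit_imag(rk4)
```

The scan recovers the known RK4 imaginary-axis limit of 2*sqrt(2) ≈ 2.83; an optimizer would adjust the free high-order coefficients of R(z) to push this limit outward.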
NASA Astrophysics Data System (ADS)
Su, Yonggang; Tang, Chen; Chen, Xia; Li, Biyuan; Xu, Wenjun; Lei, Zhenkun
2017-01-01
We propose an image encryption scheme using chaotic phase masks and cascaded Fresnel transform holography based on a constrained optimization algorithm. In the proposed encryption scheme, the chaotic phase masks are generated by the Henon map, and the initial conditions and parameters of the Henon map serve as the main secret keys during the encryption and decryption process. With the help of multiple chaotic phase masks, the original image can be encrypted into the form of a hologram. The constrained optimization algorithm makes it possible to retrieve the original image from only a single hologram frame. The use of chaotic phase masks makes key management and transmission very convenient. In addition, the geometric parameters of the optical system serve as additional keys, which improves the security level of the proposed scheme. Comprehensive security analysis performed on the proposed encryption scheme demonstrates that the scheme has high resistance against various potential attacks. Moreover, the proposed encryption scheme can be used to encrypt video information; simulations performed on a video in AVI format have verified the feasibility of the scheme for video encryption.
NASA Astrophysics Data System (ADS)
Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann
2013-04-01
In recent years, much attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness of the uncertainty estimates depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimates of hydrological models, and (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution Metropolis algorithm (SCEMUA), differential evolution adaptive Metropolis (DREAM), particle swarm optimization (PSO), and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT, with the Nzoia catchment in West Kenya as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and the uncertainty results on model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of a hydrological model H's outputs. Inputs to these models are specially identified representative variables (past events precipitation and flows). The trained machine learning models are then employed to predict the model output uncertainty specific to new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best, since there is no basis for comparison). A solution could be to form a committee of all models U and
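A minimal GLUE-style sketch shows how Monte Carlo sampling of parameters yields uncertainty quantiles of a model output; the model, the informal likelihood shape, and all numbers here are toy stand-ins, not HBV or SWAT:

```python
import random
import math

def glue_quantiles(model, obs, prior_sampler, n=5000, qs=(0.05, 0.95), seed=1):
    """GLUE-style uncertainty bounds: Monte Carlo sample parameters from the
    prior, weight each simulated output by an informal likelihood of fit to
    the observation, then take weighted quantiles of the outputs."""
    rng = random.Random(seed)
    sims, weights = [], []
    for _ in range(n):
        theta = prior_sampler(rng)
        y = model(theta)
        w = math.exp(-0.5 * ((y - obs) / 0.5) ** 2)  # Gaussian-shaped weight
        sims.append(y)
        weights.append(w)
    order = sorted(range(n), key=lambda i: sims[i])
    total = sum(weights)
    out, acc, k = [], 0.0, 0
    for q in qs:                      # qs must be ascending
        while acc < q * total and k < n:
            acc += weights[order[k]]
            k += 1
        out.append(sims[order[max(k - 1, 0)]])
    return out

# Toy "hydrological model": output linear in a single parameter.
lo, hi = glue_quantiles(lambda th: 2.0 * th, obs=1.0,
                        prior_sampler=lambda r: r.uniform(-2, 2))
```

Swapping the plain Monte Carlo loop for MCMC, DREAM, or PSO-based sampling changes which parameter sets feed the quantile computation, which is exactly the sensitivity the abstract investigates.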
Optimizing the monitoring scheme for groundwater quality in the Lusatian mining region
NASA Astrophysics Data System (ADS)
Zimmermann, Beate; Hildmann, Christian; Haubold-Rosar, Michael
2014-05-01
Opencast lignite mining always requires the lowering of the groundwater table. In Lusatia, strong mining activities during the GDR era were associated with low groundwater levels in huge parts of the region. Pyrite (iron sulfide) oxidation in the aerated sediments is the cause for a continuous regional groundwater pollution with sulfates, acids, iron and other metals. The contaminated groundwater poses danger to surface water bodies and may also affect soil quality. Due to the decline of mining activities after the German reunification, groundwater levels have begun to recover towards the pre-mining stage, which aggravates the environmental risks. Given the relevance of the problem and the need for effective remediation measures, it is mandatory to know the temporal and spatial distribution of potential pollutants. The reliability of these space-time models, in turn, relies on a well-designed groundwater monitoring scheme. So far, the groundwater monitoring network in the Lusatian mining region represents a purposive sample in space and time with great variations in the density of monitoring wells. Moreover, groundwater quality in some of the areas that face pronounced increases in groundwater levels is currently not monitored at all. We therefore aim to optimize the monitoring network based on the existing information, taking into account practical aspects such as the land-use dependent need for remedial action. This contribution will discuss the usefulness of approaches for optimizing spatio-temporal mapping with regard to groundwater pollution by iron and aluminum in the Lusatian mining region.
A numerical scheme for optimal transition paths of stochastic chemical kinetic systems
Liu Di
2008-10-01
We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified as the most probable paths according to the large deviation theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve for the optimal transition paths. Applications to gene regulatory networks such as the toggle switch model and the lactose operon model in Escherichia coli are presented as numerical examples.
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms.
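The finding that discordant results should be over-sampled echoes classical stratified-design theory. A Neyman-allocation sketch (with invented stratum sizes and standard deviations, not the paper's analytic variance formulas) shows the effect:

```python
def neyman_allocation(strata, n_total):
    """Classical Neyman allocation: sample size per stratum proportional to
    N_h * S_h, which minimizes the variance of the stratified mean estimator."""
    weights = {h: N * S for h, (N, S) in strata.items()}
    total = sum(weights.values())
    return {h: round(n_total * w / total) for h, w in weights.items()}

# Hypothetical strata: (population size, within-stratum standard deviation).
# Discordant classifications are rarer but far more variable, so the optimal
# design over-samples them relative to their share of the population.
strata = {"concordant": (9000, 0.1), "discordant": (1000, 0.5)}
alloc = neyman_allocation(strata, n_total=400)
```

Here the discordant stratum is 10% of the population but receives well over 10% of the 400-unit sample, mirroring the paper's over-sample of subjects with discordant results.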
Sample size and optimal sample design in tuberculosis surveys
Sánchez-Crespo, J. L.
1967-01-01
Tuberculosis surveys sponsored by the World Health Organization have been carried out in different communities during the last few years. Apart from the main epidemiological findings, these surveys have provided basic statistical data for use in the planning of future investigations. In this paper an attempt is made to determine the sample size desirable in future surveys that include one of the following examinations: tuberculin test, direct microscopy, and X-ray examination. The optimum cluster sizes are found to be 100-150 children under 5 years of age in the tuberculin test, at least 200 eligible persons in the examination for excretors of tubercle bacilli (direct microscopy) and at least 500 eligible persons in the examination for persons with radiological evidence of pulmonary tuberculosis (X-ray). Modifications of the optimum sample size in combined surveys are discussed. PMID:5300008
Optimal Sampling Strategies for Oceanic Applications
2009-01-01
The Bluelink ocean data assimilation system (BODAS; Oke et al. 2005, 2008) that underpins BRAN is based on Ensemble Optimal Interpolation (EnOI). [Reference: Oke, P. R., G. B. Brassington, D. A. Griffin, and A. Schiller, 2008: The Bluelink Ocean Data Assimilation System (BODAS). Ocean Modelling, 20, 46-70.]
An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
Optimized finite-difference (DRP) schemes perform poorly for decaying or growing oscillations
NASA Astrophysics Data System (ADS)
Brambley, E. J.
2016-11-01
Computational aeroacoustics often uses finite-difference schemes optimized to require relatively few points per wavelength; such optimized schemes are often called dispersion relation preserving (DRP). Similar techniques are also used outside aeroacoustics. Here the question is posed: what is the equivalent of points per wavelength for growing or decaying waves, and how well are such waves resolved numerically? Such non-constant-amplitude waves are common in aeroacoustics, such as the exponential decay caused by acoustic linings, the O(1/r) decay of an expanding spherical wave, and the decay of high-azimuthal-order modes in the radial direction towards the centre of a cylindrical duct. It is shown that optimized spatial derivatives perform poorly for waves that are not of constant amplitude, underperforming maximal-order schemes. An equivalent criterion to points per wavelength is proposed for non-constant-amplitude oscillations, reducing to the standard definition for constant-amplitude oscillations and valid even for pure growth or decay with no oscillation. Using this definition, coherent statements about the points per wavelength necessary for a given accuracy can be made for maximal-order schemes applied to non-constant-amplitude oscillations. These features are illustrated through a numerical example of a one-dimensional wave propagating through a damping region.
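The underlying effect is easy to reproduce with a standard maximal-order stencil: at a fixed nominal points-per-wavelength, adding decay to the wave degrades the derivative approximation. The wavenumber, decay rate, and resolution below are illustrative, and no DRP coefficients are used here:

```python
import cmath

def fd4_error(k, a, h):
    """Relative error of the standard 4th-order central difference applied to
    f(x) = exp((ik - a) x), whose exact derivative at x = 0 is (ik - a)."""
    lam = 1j * k - a
    f = lambda x: cmath.exp(lam * x)
    deriv = (f(-2 * h) - 8 * f(-h) + 8 * f(h) - f(2 * h)) / (12 * h)
    return abs(deriv - lam) / abs(lam)

h = 2 * cmath.pi / (8 * 1.0)               # 8 points per wavelength at k = 1
err_const = fd4_error(k=1.0, a=0.0, h=h)   # constant-amplitude wave
err_decay = fd4_error(k=1.0, a=1.0, h=h)   # decaying wave, same grid spacing
```

The leading truncation term scales with |ik - a|, so the decaying wave is resolved noticeably worse than the constant-amplitude one on the same grid, which is the qualitative point of the abstract.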
Tandem polymer solar cells: simulation and optimization through a multiscale scheme
Wei, Fanan; Yao, Ligang; Lan, Fei
2017-01-01
In this paper, polymer solar cells with a tandem structure were investigated and optimized using a multiscale simulation scheme. In the proposed multiscale simulation, multiple aspects – optical calculation, mesoscale simulation, device scale simulation and optimal power conversion efficiency searching modules – were studied together to give an optimal result. Through the simulation work, dependencies of device performance on the tandem structures were clarified by tuning the thickness, donor/acceptor weight ratio as well as the donor–acceptor distribution in both active layers of the two sub-cells. Finally, employing searching algorithms, we optimized the power conversion efficiency of the tandem polymer solar cells and located the optimal device structure parameters. With the proposed multiscale simulation strategy, poly(3-hexylthiophene)/phenyl-C61-butyric acid methyl ester and (poly[2,6-(4,4-bis-(2-ethylhexyl)-4H-cyclopenta[2,1-b;3,4-b]dithiophene)-alt-4,7-(2,1,3-benzothiadiazole)])/phenyl-C61-butyric acid methyl ester based tandem solar cells were simulated and optimized as an example. Two configurations with different sub-cell sequences in the tandem photovoltaic device were tested and compared. The comparison of the simulation results between the two configurations demonstrated that the balance between the two sub-cells is of critical importance for tandem organic photovoltaics to achieve high performance. Consistency between the optimization results and the reported experimental results proved the effectiveness of the proposed simulation scheme. PMID:28144571
High-order sampling schemes for path integrals and Gaussian chain simulations of polymers
Müser, Martin H.; Müller, Marcus
2015-05-07
In this work, we demonstrate that path-integral schemes, derived in the context of many-body quantum systems, benefit the simulation of Gaussian chains representing polymers. Specifically, we show how to decrease discretization corrections with little extra computation from the usual O(1/P²) to O(1/P⁴), where P is the number of beads representing the chains. As a consequence, high-order integrators necessitate much smaller P than those commonly used. Particular emphasis is placed on the questions of how to maintain this rate of convergence for open polymers and for polymers confined by a hard wall, as well as how to ensure efficient sampling. The advantages of the high-order sampling schemes are illustrated by studying the surface tension of a polymer melt and the interface tension in a binary homopolymer blend.
K-Optimal Gradient Encoding Scheme for Fourth-Order Tensor-Based Diffusion Profile Imaging.
Alipoor, Mohammad; Gu, Irene Yu-Hua; Mehnert, Andrew; Maier, Stephan E; Starck, Göran
2015-01-01
The design of an optimal gradient encoding scheme (GES) is a fundamental problem in diffusion MRI. It is well studied for the case of second-order tensor imaging (Gaussian diffusion). However, it has not been investigated for the wide range of non-Gaussian diffusion models. The optimal GES is the one that minimizes the variance of the estimated parameters. Such a GES can be realized by minimizing the condition number of the design matrix (K-optimal design). In this paper, we propose a new approach to solve the K-optimal GES design problem for fourth-order tensor-based diffusion profile imaging. The problem is a nonconvex experiment design problem. Using convex relaxation, we reformulate it as a tractable semidefinite programming problem. Solving this problem leads to several theoretical properties of K-optimal design: (i) the odd moments of the K-optimal design must be zero; (ii) the even moments of the K-optimal design are proportional to the total number of measurements; (iii) the K-optimal design is not unique, in general; and (iv) the proposed method can be used to compute the K-optimal design for an arbitrary number of measurements. Our Monte Carlo simulations support the theoretical results and show that, in comparison with existing designs, the K-optimal design leads to the minimum signal deviation.
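The condition-number criterion can be illustrated in the simpler second-order tensor setting; the icosahedral and clustered direction sets below are illustrative choices, not the paper's fourth-order K-optimal designs:

```python
import numpy as np

def design_matrix(dirs):
    """Design matrix for second-order tensor diffusion imaging: one row per
    gradient direction g = (x, y, z), unknowns (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz)."""
    g = np.asarray(dirs, dtype=float)
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    x, y, z = g.T
    return np.column_stack([x * x, y * y, z * z, 2 * x * y, 2 * x * z, 2 * y * z])

phi = (1 + 5 ** 0.5) / 2
# Well-spread directions (icosahedron vertices) versus a clustered scheme.
icosa = [(1, phi, 0), (-1, phi, 0), (0, 1, phi),
         (0, -1, phi), (phi, 0, 1), (phi, 0, -1)]
clustered = [(1, 0, 0), (0.9, 0.1, 0.05), (0.8, 0.2, 0.1),
             (0.7, 0.3, 0.15), (0.6, 0.4, 0.2), (0.5, 0.5, 0.25)]

cond_good = np.linalg.cond(design_matrix(icosa))
cond_bad = np.linalg.cond(design_matrix(clustered))
```

Minimizing this condition number over the direction set is the K-optimality criterion; the paper's contribution is doing so for the larger fourth-order tensor design matrix via convex relaxation.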
Three-dimensional acoustic wave equation modeling based on the optimal finite-difference scheme
NASA Astrophysics Data System (ADS)
Cai, Xiao-Hui; Liu, Yang; Ren, Zhi-Ming; Wang, Jian-Min; Chen, Zhi-De; Chen, Ke-Yang; Wang, Cheng
2015-09-01
Generally, FD coefficients can be obtained by using Taylor series expansion (TE) or by optimization methods that minimize the dispersion error. However, the TE-based FD method only achieves high modeling precision over a limited range of wavenumbers and produces large numerical dispersion beyond this range. The optimal FD scheme based on least squares (LS) can guarantee high precision over a larger range of wavenumbers and obtain the best optimization solution at small computational cost. We extend the LS-based optimal FD scheme from two-dimensional (2D) forward modeling to three dimensions (3D) and develop a 3D acoustic optimal FD method with high efficiency, high accuracy over a wide wavenumber range, and adaptability to parallel computing. Dispersion analysis and forward modeling demonstrate that the developed FD method suppresses numerical dispersion. Finally, we apply the developed FD method to source and receiver wavefield extrapolation in 3D RTM. To decrease the computation time and storage requirements, the 3D RTM is implemented by combining efficient boundary storage with checkpointing strategies on GPU. 3D RTM imaging results suggest that the 3D optimal FD method has higher precision than conventional methods.
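The LS-versus-Taylor trade-off can be sketched in one dimension for a first-derivative stencil; the fitting band and stencil size are illustrative, and the paper's scheme targets 3D acoustic modeling rather than this toy setting:

```python
import numpy as np

def ls_fd_coeffs(M, band=2.5, n=200):
    """Least-squares FD coefficients for the first derivative on a (2M+1)-point
    central stencil: fit the modified wavenumber 2*sum(a_m * sin(m*t)) to t
    over t in (0, band], instead of matching Taylor terms at t = 0."""
    t = np.linspace(band / n, band, n)
    A = np.stack([2 * np.sin(m * t) for m in range(1, M + 1)], axis=1)
    a, *_ = np.linalg.lstsq(A, t, rcond=None)
    return a

def dispersion_error(a, t):
    """Absolute error of the modified wavenumber at normalized wavenumber t."""
    kt = 2 * sum(am * np.sin((m + 1) * t) for m, am in enumerate(a))
    return abs(kt - t)

taylor6 = np.array([3 / 4, -3 / 20, 1 / 60])   # maximal (6th) order, M = 3
ls = ls_fd_coeffs(M=3)
err_t = dispersion_error(taylor6, 2.0)   # Taylor stencil at a high wavenumber
err_l = dispersion_error(ls, 2.0)        # LS stencil, same stencil width
```

At low wavenumbers the Taylor stencil wins (it is exact to 6th order at t = 0), but near the top of the band the LS coefficients give a markedly smaller dispersion error for the same stencil cost, which is the essence of the LS-based optimal scheme.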
Efficient low-storage Runge-Kutta schemes with optimized stability regions
NASA Astrophysics Data System (ADS)
Niegemann, Jens; Diehl, Richard; Busch, Kurt
2012-01-01
A variety of numerical calculations, especially when considering wave propagation, are based on the method-of-lines, where time-dependent partial differential equations (PDEs) are first discretized in space. For the remaining time-integration, low-storage Runge-Kutta schemes are particularly popular due to their efficiency and their reduced memory requirements. In this work, we present a numerical approach to generate new low-storage Runge-Kutta (LSRK) schemes with optimized stability regions for advection-dominated problems. Adapted to the spectral shape of a given physical problem, those methods are found to yield significant performance improvements over previously known LSRK schemes. As a concrete example, we present time-domain calculations of Maxwell's equations in fully three-dimensional systems, discretized by a discontinuous Galerkin approach.
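The 2N-storage idea keeps only the solution vector and one accumulation register. A sketch using Williamson's classic 3-stage, third-order coefficients (a standard low-storage scheme, not one of the optimized LSRK schemes derived in the paper):

```python
import math

# Williamson 2N-storage RK3 coefficients (A[0] = 0 resets the register).
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]

def lsrk3_step(f, u, t, dt):
    """One low-storage step: only u and the single register du are kept,
    so memory is 2N for an N-dimensional state. (Stage times are not
    advanced here, which is fine for the autonomous f used below.)"""
    du = 0.0
    for a, b in zip(A, B):
        du = a * du + dt * f(t, u)
        u = u + b * du
    return u

# Integrate y' = -y, y(0) = 1, up to t = 1 with 100 steps.
u, dt = 1.0, 0.01
for n in range(100):
    u = lsrk3_step(lambda t, y: -y, u, n * dt, dt)
```

For a vector state the same loop applies elementwise; optimized LSRK schemes keep exactly this loop structure and change only the A and B coefficient tables.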
Comparison of rainfall sampling schemes using a calibrated stochastic rainfall generator
Welles, E.
1994-12-31
Accurate rainfall measurements are critical to river flow predictions. Areal and gauge rainfall measurements create different descriptions of the same storms, and the purpose of this study is to characterize those differences. A stochastic rainfall generator was calibrated using an automatic search algorithm, with statistics describing several rainfall characteristics of interest used in the error function. The calibrated model was then used to generate storms that were exhaustively sampled, sparsely sampled, and sampled areally on 4 x 4 km grids. The sparsely sampled rainfall was also kriged to 4 x 4 km blocks. The differences between the four schemes were characterized by comparing statistics computed from each of the sampling methods, and the possibility of predicting areal statistics from gauge statistics was explored. It was found that areally measured storms appeared to move more slowly, appeared larger, appeared less intense, and had shallower intensity gradients.
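The "areal storms appear less intense" result can be reproduced with a toy Gaussian storm: a point gauge at the peak reads the true maximum, while a 4 x 4 block average smears it. All numbers here are illustrative:

```python
import math

def storm(x, y, x0=0.0, y0=0.0, peak=50.0, scale=5.0):
    """Synthetic storm: Gaussian rain-rate surface (mm/h) centred at (x0, y0)."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return peak * math.exp(-r2 / (2 * scale ** 2))

def block_average(f, cx, cy, size=4.0, n=20):
    """Areal measurement: mean of f over a size-by-size block, like a grid cell."""
    h = size / n
    vals = [f(cx - size / 2 + (i + 0.5) * h, cy - size / 2 + (j + 0.5) * h)
            for i in range(n) for j in range(n)]
    return sum(vals) / len(vals)

gauge_peak = storm(0.0, 0.0)                   # point gauge at the storm centre
areal_peak = block_average(storm, 0.0, 0.0)    # 4 x 4 block over the centre
```

The block average is strictly below the gauge reading at the peak, and the gap widens as the block size grows relative to the storm scale, mirroring the smoothing the study quantifies.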
Vonderheide, Anne P; Kauffman, Peter E; Hieber, Thomas E; Brisbin, Judith A; Melnyk, Lisa Jo; Morgan, Jeffrey N
2009-03-25
Analysis of an individual's total daily food intake may be used to determine aggregate dietary ingestion of given compounds. However, the resulting composite sample represents a complex mixture, and measurement of such can often prove to be difficult. In this work, an analytical scheme was developed for the determination of 12 select pyrethroid pesticides in dietary samples. In the first phase of the study, several cleanup steps were investigated for their effectiveness in removing interferences in samples with a range of fat content (1-10%). Food samples were homogenized in the laboratory, and preparatory techniques were evaluated through recoveries from fortified samples. The selected final procedure consisted of a lyophilization step prior to sample extraction. A sequential 2-fold cleanup procedure of the extract included diatomaceous earth for removal of lipid components followed with a combination of deactivated alumina and C(18) for the simultaneous removal of polar and nonpolar interferences. Recoveries from fortified composite diet samples (10 microg kg(-1)) ranged from 50.2 to 147%. In the second phase of this work, three instrumental techniques [gas chromatography-microelectron capture detection (GC-microECD), GC-quadrupole mass spectrometry (GC-quadrupole-MS), and GC-ion trap-MS/MS] were compared for greatest sensitivity. GC-quadrupole-MS operated in selective ion monitoring (SIM) mode proved to be most sensitive, yielding method detection limits of approximately 1 microg kg(-1). The developed extraction/instrumental scheme was applied to samples collected in an exposure measurement field study. The samples were fortified and analyte recoveries were acceptable (75.9-125%); however, compounds coextracted from the food matrix prevented quantitation of four of the pyrethroid analytes in two of the samples considered.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is called sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters that create the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
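The paper's WDN-oriented and sampling-oriented indices adapt the classic graph modularity to hydraulic networks. As a baseline, the standard Newman-Girvan modularity of a partition can be computed directly from an edge list; the toy graph below (two triangles joined by a bridge) is purely illustrative, not a WDN from the paper:

```python
def modularity(edges, community):
    """Newman-Girvan modularity Q of a partition.
    edges: list of (u, v) pairs; community: dict node -> community id."""
    m = len(edges)
    internal = {}    # count of edges fully inside each community
    degree_sum = {}  # total degree accumulated per community
    for u, v in edges:
        cu, cv = community[u], community[v]
        degree_sum[cu] = degree_sum.get(cu, 0) + 1
        degree_sum[cv] = degree_sum.get(cv, 0) + 1
        if cu == cv:
            internal[cu] = internal.get(cu, 0) + 1
    return sum(internal.get(c, 0) / m - (d / (2.0 * m)) ** 2
               for c, d in degree_sum.items())

# Two triangles joined by a single bridge edge: a natural 2-module split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(round(modularity(edges, part), 4))  # 0.3571 for this split
```

Pipe weights, as used in the paper's task-specific indices, would enter by replacing the edge counts with weighted sums.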
Tank waste remediation system optimized processing strategy with an altered treatment scheme
Slaathaug, E.J.
1996-03-01
This report provides an alternative strategy, evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline, for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy with an altered treatment scheme performs the major elements of the TWRS Program but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility.
The optimization on flow scheme of helium liquefier with genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, H. R.; Xiong, L. Y.; Peng, N.; Liu, L. Q.
2017-01-01
There are several ways to organize the flow scheme of a helium liquefier, such as arranging the expanders in parallel (reverse Brayton stage) or in series (modified Brayton stages). In this paper, the inlet mass flows and temperatures of the expanders in a Collins cycle are optimized using a genetic algorithm (GA). Results show that the maximum liquefaction rate can be obtained when the system operates at the optimal parameters. However, the reliability of the system is poor because of the high wheel speed of the first turbine. The study shows that the scheme in which expanders are arranged in series with heat exchangers between them has higher operational reliability but lower plant efficiency under the same conditions. Considering both liquefaction rate and system stability, another flow scheme is put forward in the hope of resolving this dilemma. The three configurations are compared from different aspects: economic cost, heat exchanger size, system reliability, and exergy efficiency. In addition, the effect of the heat capacity ratio on heat transfer efficiency is discussed. A guideline for choosing a liquefier configuration is given at the end, which is meaningful for the optimal design of helium liquefiers.
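A GA over continuous plant parameters follows the same skeleton regardless of the objective. The sketch below substitutes a hypothetical two-variable toy objective for the paper's Collins-cycle simulation (the peak at mass_flow = 0.6, temperature = 40 is invented for illustration):

```python
import random

random.seed(0)

# Toy stand-in for the liquefier model: "liquefaction rate" peaks at
# mass_flow = 0.6 and temperature = 40 (illustrative values only).
def liquefaction_rate(mass_flow, temp):
    return -(mass_flow - 0.6) ** 2 - ((temp - 40.0) / 20.0) ** 2

def crossover_mutate(p1, p2):
    # uniform crossover followed by Gaussian mutation per variable
    mf = random.choice((p1[0], p2[0])) + random.gauss(0.0, 0.02)
    t = random.choice((p1[1], p2[1])) + random.gauss(0.0, 0.5)
    return (mf, t)

def ga(pop_size=40, generations=60):
    pop = [(random.uniform(0.0, 1.0), random.uniform(10.0, 80.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: liquefaction_rate(*ind), reverse=True)
        elite = pop[:pop_size // 2]  # truncation selection
        pop = elite + [crossover_mutate(*random.sample(elite, 2))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda ind: liquefaction_rate(*ind))

best = ga()
print(best)  # should land near (0.6, 40)
```

In the paper's setting the objective evaluation would call the cycle simulation, and constraints such as the first turbine's wheel speed could be handled by penalizing infeasible individuals.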
NASA Astrophysics Data System (ADS)
Khayyer, Abbas; Gotoh, Hitoshi; Shimizu, Yuma
2017-03-01
The paper provides a comparative investigation of the accuracy and conservation properties of two particle regularization schemes, namely the Dynamic Stabilization (DS) [1] and generalized Particle Shifting (PS) [2] schemes, in simulations of both internal and free-surface flows in the ISPH (Incompressible SPH) context. The paper also presents an Optimized PS (OPS) scheme for accurate and consistent implementation of particle shifting for free-surface flows. In contrast to PS, the OPS does not contain any tuning parameters for the free surface, consistently eliminating shifting normal to an interface and resolving the unphysical discontinuity beneath the interface seen in PS results.
Optimizing sparse sampling for 2D electronic spectroscopy
NASA Astrophysics Data System (ADS)
Roeding, Sebastian; Klimovich, Nikita; Brixner, Tobias
2017-02-01
We present a new data acquisition concept using optimized non-uniform sampling and compressed sensing reconstruction in order to substantially decrease the acquisition times in action-based multidimensional electronic spectroscopy. For this we acquire a regularly sampled reference data set at a fixed population time and use a genetic algorithm to optimize a reduced non-uniform sampling pattern. We then apply the optimal sampling for data acquisition at all other population times. Furthermore, we show how to transform two-dimensional (2D) spectra into a joint 4D time-frequency von Neumann representation. This leads to increased sparsity compared to the Fourier domain and to improved reconstruction. We demonstrate this approach by recovering transient dynamics in the 2D spectrum of a cresyl violet sample using just 25% of the originally sampled data points.
Xing, Changhu; Jensen, Colby; Folsom, Charles; Ban, Heng; Marshall, Douglas W.
2014-01-01
In the guarded cut-bar technique, a guard surrounding the measured sample and reference (meter) bars is temperature controlled to carefully regulate heat losses from the sample and reference bars. Guarding is typically carried out by matching the temperature profiles between the guard and the test stack of sample and meter bars. Problems arise in matching the profiles, especially when the thermal conductivities of the meter bars and of the sample differ, as is usually the case. In a previous numerical study, the applied guarding condition (guard temperature profile) was found to be an important factor in measurement accuracy. Different from the linear-matched or isothermal schemes recommended in the literature, the optimal guarding condition depends on the system geometry and the thermal conductivity ratio of sample to meter bar. To validate the numerical results, an experimental study was performed to investigate the resulting error under different guarding conditions using stainless steel 304 as both the sample and the meter bars. The optimal guarding condition was further verified on a certified reference material, Pyroceram 9606, and on 99.95% pure iron, whose thermal conductivities are much smaller and much larger, respectively, than that of the stainless steel meter bars. Additionally, measurements were performed using three different inert gases to show the effect of the insulation's effective thermal conductivity on measurement error, revealing that the low-conductivity gas, argon, gives the lowest error sensitivity when deviating from the optimal condition. The results of this study provide a general guideline for this specific measurement method and for methods requiring optimal guarding or insulation.
Optimal sample size allocation for Welch's test in one-way heteroscedastic ANOVA.
Shieh, Gwowen; Jan, Show-Li
2015-06-01
The determination of an adequate sample size is a vital aspect in the planning stage of research studies. A prudent strategy should incorporate all of the critical factors and cost considerations into sample size calculations. This study concerns the allocation schemes of group sizes for Welch's test in a one-way heteroscedastic ANOVA. Optimal allocation approaches are presented for minimizing the total cost while maintaining adequate power and for maximizing power performance for a fixed cost. The commonly recommended ratio of sample sizes is proportional to the ratio of the population standard deviations or the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Detailed numerical investigations have shown that these usual allocation methods generally do not give the optimal solution. The suggested procedures are illustrated using an example of the cost-efficiency evaluation in multidisciplinary pain centers.
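The conventional allocation heuristic that the study evaluates (and finds generally suboptimal for Welch's test) is easy to state: allocate group sizes in proportion to the population standard deviations, deflated by the square root of the unit-cost ratio. A minimal sketch of that heuristic, for comparison against exact optimization:

```python
import math

def heuristic_ratio(sd1, sd2, cost1=1.0, cost2=1.0):
    """Conventional n1/n2 allocation: proportional to the SD ratio and
    inversely proportional to the square root of the unit-cost ratio.
    The abstract reports this is generally NOT the optimal solution."""
    return (sd1 / sd2) * math.sqrt(cost2 / cost1)

# Equal costs: allocate proportionally to standard deviations.
print(heuristic_ratio(4.0, 2.0))                        # 2.0
# Group 1 twice as costly per subject: shift allocation toward group 2.
print(heuristic_ratio(4.0, 2.0, cost1=2.0, cost2=1.0))  # ~1.414
```

The paper's contribution is precisely that exact numerical optimization of power under the Welch statistic departs from this simple ratio.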
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4, and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
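As a concrete sketch of the technique, the non-overlapping Allan deviation of a time series is computed from differences of consecutive bin averages; for pure white measurement noise it falls as one over the square root of the averaging time, while drift makes it rise again at long averaging times:

```python
import math
import random

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at averaging factor m
    (averaging time tau = m * sampling interval)."""
    bins = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    return math.sqrt(sum((b2 - b1) ** 2 for b1, b2 in zip(bins, bins[1:]))
                     / (2.0 * (len(bins) - 1)))

# White measurement noise: expect adev(m) ~ sigma / sqrt(m).
random.seed(1)
y = [random.gauss(0.0, 1.0) for _ in range(100000)]
for m in (1, 4, 16, 64):
    print(m, allan_deviation(m=m, y=y))
```

Plotting these values against tau on log-log axes gives the characteristic -1/2 slope for white noise; adding a linear drift term to y would make the curve turn upward at large tau, which is the signature used to choose calibration intervals.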
McGregor, David A.
1993-07-01
The purpose of the Human Genome Project is outlined, followed by a discussion of electrophoresis in slab gels and capillaries and its application to deoxyribonucleic acid (DNA). Techniques used to modify electroosmotic flow in capillaries are addressed. Several separation and detection schemes for DNA via gel and capillary electrophoresis are described. Emphasis is placed on elucidating DNA fragment size in real time and on shortening separation times to approach real-time monitoring. The migration of DNA fragment bands through a slab gel can be monitored by UV absorption at 254 nm and imaged by a charge coupled device (CCD) camera. Background correction and immediate viewing of band positions to interactively change the field program in pulsed-field gel electrophoresis are possible throughout the separation. The use of absorption removes the need for staining or radioisotope labeling, thereby simplifying sample preparation and reducing hazardous waste generation. This leaves the DNA in its native state, and further analysis can be performed without de-staining. The optimization of several parameters considerably reduces total analysis time. DNA from 2 kb to 850 kb can be separated in 3 hours on a 7 cm gel with interactive control of the pulse time, which is 10 times faster than the use of a constant field program. The separation of ΦX174RF DNA-HaeIII fragments is studied in a 0.5% methyl cellulose polymer solution as a function of temperature and applied voltage. The migration times decreased with both increasing temperature and increasing field strength, as expected. The relative migration rates of the fragments do not change with temperature but are affected by the applied field. Conditions were established for the separation of the 271/281 bp fragments, even without the addition of intercalating agents. At 700 V/cm and 20°C, all fragments are separated in less than 4 minutes with an average plate number of 2.5 million per meter.
A genetic algorithm based multi-objective shape optimization scheme for cementless femoral implant.
Chanda, Souptick; Gupta, Sanjay; Kumar Pratihar, Dilip
2015-03-01
The shape and geometry of femoral implant influence implant-induced periprosthetic bone resorption and implant-bone interface stresses, which are potential causes of aseptic loosening in cementless total hip arthroplasty (THA). Development of a shape optimization scheme is necessary to achieve a trade-off between these two conflicting objectives. The objective of this study was to develop a novel multi-objective custom-based shape optimization scheme for cementless femoral implant by integrating finite element (FE) analysis and a multi-objective genetic algorithm (GA). The FE model of a proximal femur was based on a subject-specific CT-scan dataset. Eighteen parameters describing the nature of four key sections of the implant were identified as design variables. Two objective functions, one based on implant-bone interface failure criterion, and the other based on resorbed proximal bone mass fraction (BMF), were formulated. The results predicted by the two objective functions were found to be contradictory; a reduction in the proximal bone resorption was accompanied by a greater chance of interface failure. The resorbed proximal BMF was found to be between 23% and 27% for the trade-off geometries as compared to ∼39% for a generic implant. Moreover, the overall chances of interface failure have been minimized for the optimal designs, compared to the generic implant. The adaptive bone remodeling was also found to be minimal for the optimally designed implants and, further with remodeling, the chances of interface debonding increased only marginally.
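The trade-off the multi-objective GA navigates can be made concrete with a Pareto-dominance check over candidate designs scored on the two conflicting objectives (interface-failure index and resorbed bone mass fraction, both to be minimized). The numbers below are hypothetical, loosely inspired by the ranges quoted in the abstract:

```python
def pareto_front(designs):
    """Return names of non-dominated designs for two minimization
    objectives. designs: list of (name, f1, f2)."""
    front = []
    for name, f1, f2 in designs:
        dominated = any(g1 <= f1 and g2 <= f2 and (g1 < f1 or g2 < f2)
                        for _, g1, g2 in designs)
        if not dominated:
            front.append(name)
    return front

# Hypothetical (interface-failure index, resorbed bone mass fraction)
# pairs for four candidate implant geometries.
designs = [("A", 0.30, 0.27), ("B", 0.45, 0.23), ("C", 0.50, 0.26),
           ("D", 0.25, 0.39)]
print(pareto_front(designs))  # ['A', 'B', 'D'] -- C is dominated by B
```

A multi-objective GA such as the one in the paper repeatedly applies this kind of dominance ranking to evolve the whole trade-off front rather than a single optimum.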
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-01-01
Purpose: Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes for medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. Methods: An image dataset including 1600 regions of interest (ROIs), of which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, and local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross-validation method. Results: Among these four methods, SFFS had the highest efficacy, taking 3%-5% of the computational time of the GA approach, and yielded the highest performance level, with an area under the receiver operating characteristic curve (AUC) of 0.864 ± 0.034. The results also demonstrated that, except when using the GA, including the new texture features computed from the dilated mass segments improved the AUC
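Of the four compared wrappers, plain sequential forward selection is the simplest to sketch: greedily add whichever feature most improves the wrapper score (SFFS additionally "floats" by retrying backward removals after each addition). The toy score below, counting covered cases, is a hypothetical stand-in for the cross-validated ANN AUC used in the study:

```python
# Hypothetical mapping from feature name to the set of cases it "catches";
# a crude, deterministic proxy for a classifier's validation score.
feature_hits = {
    "shape": {1, 2, 3, 4},
    "texture": {3, 4, 5},
    "contrast": {5, 6},
    "spiculation": {1, 6, 7},
}

def score(subset):
    covered = set()
    for f in subset:
        covered |= feature_hits[f]
    return len(covered)

def sfs(features, score, k):
    """Greedy sequential forward selection: at each step, add the
    feature that most improves the wrapper score."""
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

print(sfs(list(feature_hits), score, 3))  # ['shape', 'contrast', 'spiculation']
```

Replacing `score` with a cross-validated classifier evaluation turns this skeleton into the wrapper methods compared in the paper; the SFFS variant would follow each addition with a check for removable features.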
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, the sensor data is transmitted only when the absolute value of difference between the current sensor value and the previously transmitted one is greater than the given threshold value. Based on this send-on-delta scheme which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
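The send-on-delta trigger itself is a one-line decision per sample: transmit only when the reading has moved more than a threshold away from the last transmitted value. A minimal sketch (the temperature values are illustrative):

```python
def send_on_delta(samples, delta):
    """Transmit a sample only when it deviates from the last transmitted
    value by more than delta; returns (index, value) pairs sent."""
    sent = [(0, samples[0])]  # always transmit the first sample
    last = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - last) > delta:
            sent.append((i, x))
            last = x
    return sent

readings = [20.0, 20.2, 20.4, 21.1, 21.2, 19.8, 19.9]
print(send_on_delta(readings, 0.5))  # transmits 3 of the 7 samples
```

The fault isolation filter then has to account for the fact that, between transmissions, the controller only knows the value is within delta of the last one sent.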
Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P
2015-07-01
Model Based Control (MBC) is one of the energy-optimal controllers used in vector-controlled Induction Motors (IM) for controlling the excitation of the motor in accordance with torque and speed. MBC offers energy conservation, especially at part-load operation, but it creates ripples in torque and speed during load transitions, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low-pass filter, (ii) injection of the torque-producing current (iqs) into the output of the speed controller, and (iii) a Variable Structure Speed Controller (VSSC). The operation of MBC before and after load transitions is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and for MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC link power, in addition to energy saving during load transitions.
NASA Astrophysics Data System (ADS)
Poirier, Vincent
Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method; however, this comes at the cost of introducing errors into the parameterization, since the exact displacement of all surface points is no longer recovered. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining the lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing, and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
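At its core, RBF mesh movement interpolates known boundary displacements to interior nodes: solve a small dense system for kernel weights at the (possibly reduced) surface point set, then evaluate the interpolant at each volume node. A minimal one-dimensional sketch with a cubic kernel (the thesis's actual kernel choice, 3D setup, and secondary corrective movement are not reproduced here):

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c]
                              for c in range(k + 1, n))) / M[k][k]
    return x

def phi(r):
    return r ** 3  # cubic RBF kernel, one common choice

def rbf_displacement(surface_x, surface_d, query_x):
    """1D sketch: interpolate surface displacements to a volume point."""
    A = [[phi(abs(xi - xj)) for xj in surface_x] for xi in surface_x]
    w = solve(A, surface_d)
    return sum(wi * phi(abs(query_x - xi)) for wi, xi in zip(w, surface_x))

# Surface nodes at x = 0, 1, 2 displaced by 0.0, 0.5, 0.0.
xs, ds = [0.0, 1.0, 2.0], [0.0, 0.5, 0.0]
print(rbf_displacement(xs, ds, 1.0))  # reproduces the prescribed 0.5
```

Using only a reduced subset of surface points keeps the dense solve small, which is exactly where the parameterization error discussed above comes from: the interpolant is then only approximate at the omitted surface points.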
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
Optimal integral sliding mode control scheme based on pseudospectral method for robotic manipulators
NASA Astrophysics Data System (ADS)
Liu, Rongjie; Li, Shihua
2014-06-01
For a multi-input multi-output nonlinear system, an optimal integral sliding mode control scheme based on the pseudospectral method is proposed in this paper, and the controller is applied to rigid robotic manipulators with constraints. First, a general form of the integral sliding mode is designed with the aim of restraining disturbances. Then, the pseudospectral method is adopted to deal with the constrained optimal control problem. Combining the benefits of both methods, an optimal integral sliding mode controller is obtained. The stability analysis shows that the controller guarantees the stability of the robotic manipulator system. Simulations show the effectiveness of the proposed method.
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-07-30
This paper introduces a surrogate model to identify an optimal exploitation scheme; the western Jilin Province was selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County to supply water to Daan County. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region of the input variables. A surrogate of the numerical groundwater flow model was then developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, indicating high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for the same optimization problem shows that the former needs only 5.5 hours, whereas the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy, providing an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately.
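The LHS step that generates training runs for the surrogate is straightforward to sketch: split each input variable's range into n equal strata and draw exactly one point per stratum, shuffling the strata independently per dimension. The bounds below are hypothetical, not the actual decision ranges of the Jilin model:

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """n samples over len(bounds) dimensions; each dimension's range is
    cut into n strata and each stratum is sampled exactly once."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = list(range(n))
        rng.shuffle(strata)  # independent stratum order per dimension
        cols.append([lo + (hi - lo) * (s + rng.random()) / n
                     for s in strata])
    return list(zip(*cols))

# Hypothetical bounds for two pumping-rate decision variables (m^3/day).
samples = latin_hypercube(10, [(0.0, 500.0), (0.0, 300.0)])
for s in samples:
    print(s)
```

Each of the 10 training points would then be run through the groundwater simulator, and regression kriging fit to the resulting drawdowns to obtain the cheap surrogate used inside the optimizer.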
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.
2009-01-01
Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference, temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme is assessed by comparing the computational results with other numerical schemes and experimental data.
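For reference, the standard (non-optimized) BDF2 scheme on the linear test equation shows the two-level implicit structure that BDF2OPT modifies; the implicit-Euler bootstrap for the missing starting level is a common choice, not necessarily the one used in the paper:

```python
import math

# Test equation u' = lam * u; BDF2 is A-stable, so lam < 0 is benign.
lam, dt, T = -1.0, 0.01, 1.0
n_steps = int(round(T / dt))

# Bootstrap the two-level history with one implicit Euler step.
u_prev = 1.0
u_curr = u_prev / (1.0 - dt * lam)

for _ in range(n_steps - 1):
    # BDF2: (3 u_new - 4 u_curr + u_prev) / (2 dt) = lam * u_new,
    # solved in closed form here because the problem is linear.
    u_new = (4.0 * u_curr - u_prev) / (3.0 - 2.0 * dt * lam)
    u_prev, u_curr = u_curr, u_new

err = abs(u_curr - math.exp(lam * T))
print(err)  # second-order accurate: error shrinks ~4x when dt halves
```

For the nonlinear Navier-Stokes equations, the implicit solve at each physical step is what the dual-time-stepping subiterations perform; the BDF2OPT variant adjusts the backward-difference coefficients to reduce the leading truncation error.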
Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.
Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong
2014-01-01
Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL), using alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult nonconvex quadratic problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or via closed-form solutions (for the lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality on the test set, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
Optimal sample sizes for Welch's test under various allocation and cost considerations.
Jan, Show-Li; Shieh, Gwowen
2011-12-01
The issue of the sample size necessary to ensure adequate statistical power has been the focus of considerable attention in scientific research. Conventional presentations of sample size determination do not consider budgetary and participant allocation scheme constraints, although there is some discussion in the literature. The introduction of additional allocation and cost concerns complicates study design, although the resulting procedure permits a practical treatment of sample size planning. This article presents exact techniques for optimizing sample size determinations in the context of Welch's (Biometrika, 29, 350-362, 1938) test of the difference between two means under various design and cost considerations. The allocation schemes include cases in which (1) the ratio of group sizes is given and (2) one sample size is specified. The cost implications suggest optimally assigning subjects (1) to attain maximum power performance for a fixed cost and (2) to meet a designated power level for the least cost. The proposed methods provide useful alternatives to the conventional procedures and can be readily implemented with the developed R and SAS programs that are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
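The cost-constrained allocation idea behind this article can be illustrated with the textbook square-root rule: for a fixed budget, the variance of the mean difference, sd1²/n1 + sd2²/n2, is minimized when n1/n2 = (sd1/sd2)·√(c2/c1). This is a hedged sketch of that general principle, not the authors' exact procedure (which works with Welch's power function directly); all numeric inputs are hypothetical.

```python
import math

def optimal_allocation(sd1, sd2, c1, c2, budget):
    """Minimize var = sd1^2/n1 + sd2^2/n2 subject to c1*n1 + c2*n2 = budget.

    Classical square-root allocation rule; returns possibly fractional
    group sizes, which would be rounded in practice.
    """
    ratio = (sd1 / sd2) * math.sqrt(c2 / c1)   # optimal n1/n2
    n2 = budget / (c1 * ratio + c2)
    return ratio * n2, n2

# Hypothetical planning inputs: group 1 is noisier and costlier to sample.
n1, n2 = optimal_allocation(sd1=3.0, sd2=1.0, c1=4.0, c2=1.0, budget=400.0)
var_opt = 3.0**2 / n1 + 1.0**2 / n2
```

Note the rule allocates more subjects to the group with higher variance and lower per-subject cost, which matches the intuition behind case (1) above.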
Optimal control, investment and utilization schemes for energy storage under uncertainty
NASA Astrophysics Data System (ADS)
Mirhosseini, Niloufar Sadat
Energy storage has the potential to offer new means for added flexibility in electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources, and energy bill savings for end users. However, uncertainty about system states and volatility in system dynamics can complicate the questions of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resource output, storage technology cost, and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model Predictive Control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to optimally interact with the energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here is the basis for a framework which extends from long-term investments in storage capacity to short-term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment on storage capacity over time to maximize savings during normal and emergency
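One building block mentioned above, discretized dynamic programming for storage charge/discharge control, can be sketched as follows. This is a deliberately simplified illustration, not the dissertation's model: perfect round-trip efficiency, a linear price, energy sold back at the purchase price, and invented prices, load, and battery parameters.

```python
def storage_dp(prices, load, levels=11, e_max=10.0, p_max=2.0):
    """Minimize energy cost over a horizon via backward DP on state of charge.

    States are `levels` evenly spaced SoC values in [0, e_max]; per step the
    SoC may move by at most p_max. Grid purchase = load + charge (may be
    negative, i.e. energy is sold at the same price -- a simplification).
    """
    T = len(prices)
    de = e_max / (levels - 1)                # energy per SoC step
    max_jump = int(p_max / de)               # reachable neighbor states
    INF = float("inf")
    V = [0.0] * levels                       # terminal value: no salvage
    for t in reversed(range(T)):
        Vt = [INF] * levels
        for s in range(levels):
            for s2 in range(max(0, s - max_jump), min(levels, s + max_jump + 1)):
                cost = prices[t] * (load[t] + de * (s2 - s)) + V[s2]
                if cost < Vt[s]:
                    Vt[s] = cost
        V = Vt
    return V[0]                              # battery starts empty

prices = [3.0, 1.0, 1.0, 4.0, 5.0, 2.0]     # hypothetical day-ahead prices
load = [1.0, 1.0, 2.0, 2.0, 1.0, 1.0]
cost_with_storage = storage_dp(prices, load)
cost_without = sum(p * l for p, l in zip(prices, load))
```

The DP charges in the cheap middle hours and discharges at the price peak, so the optimal cost is never worse than serving the load directly from the grid.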
NASA Astrophysics Data System (ADS)
Subramanian, Nithya
the laminate stiffness matrix implements a square fiber model with a fiber volume fraction sample. The calculations to establish the expected values of constraints and fitness values use the Classical Laminate Theory. The non-deterministic constraints enforced include the probability of satisfying the Tsai-Hill failure criterion and the maximum strain limit. The results from a deterministic optimization, optimization under uncertainty using Monte Carlo sampling and Population-Based Sampling are studied. Also, the work investigates the effectiveness of running the fitness analyses in parallel and the sampling scheme in parallel. Overall, the work conducted for this thesis demonstrated the efficacy of the GA with Population-Based Sampling for the focus problem and established improvements over previous implementations of the GA with PBS.
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3 DOE), in context of aircraft wing optimization. M3 DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) combination of a series of structural and aerodynamic analyses. The modularity of M3 DOE allows it to be a part of other inclusive optimization frameworks. The M3 DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization, and cruise range maximization are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3 DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3 DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of
Optimization technology of 9/7 wavelet lifting scheme on DSP*
NASA Astrophysics Data System (ADS)
Chen, Zhengzhang; Yang, Xiaoyuan; Yang, Rui
2007-12-01
Wavelet transform has become one of the most effective transforms in image processing, and the biorthogonal 9/7 wavelet filters proposed by Daubechies in particular show good performance in image compression. This paper studies in depth the implementation and optimization technologies of the 9/7 wavelet lifting scheme on the DSP platform, including carrying out the wavelet lifting steps in fixed point instead of time-consuming floating-point operations, adopting a pipelining technique to improve the iteration procedure, reducing the number of multiplications by simplifying the normalization operation of the two-dimensional wavelet transform, and improving the storage format and ordering of wavelet coefficients to reduce memory consumption. Experimental results show that these implementation and optimization technologies improve the efficiency of the wavelet lifting algorithm by more than 30 times, which establishes a technical foundation for developing a real-time remote sensing image compression system in the future.
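The lifting factorization of the 9/7 filters that this paper accelerates can be sketched in floating point (the paper's contribution is the fixed-point DSP realization; the lifting coefficients below are the standard published values for the CDF 9/7 factorization, and the boundary handling here is one common symmetric-extension choice):

```python
import numpy as np

def cdf97_forward_1d(x):
    """One level of the CDF 9/7 forward transform via lifting.

    Four lifting steps (two predict/update pairs) plus scaling; symmetric
    boundary handling. Input length must be even.
    """
    a1, a2 = -1.586134342, -0.05298011854   # predict 1 / update 1
    a3, a4 = 0.8829110762, 0.4435068522     # predict 2 / update 2
    k = 1.149604398                          # scaling constant
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    for (c, is_update) in ((a1, False), (a2, True), (a3, False), (a4, True)):
        if is_update:                        # update: even samples from odds
            x[2:n:2] += c * (x[1:n-1:2] + x[3:n:2])
            x[0] += 2 * c * x[1]             # mirrored boundary
        else:                                # predict: odd samples from evens
            x[1:n-1:2] += c * (x[0:n-2:2] + x[2:n:2])
            x[n-1] += 2 * c * x[n-2]
    return x[0::2] * k, x[1::2] / k          # approximation, detail

s, d = cdf97_forward_1d(np.ones(16))
```

Because the 9/7 high-pass filter has vanishing moments, a constant signal yields near-zero detail coefficients, which is the property a fixed-point port must preserve to within quantization error.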
Optimal Feedback Scheme and Universal Time Scaling for Hamiltonian Parameter Estimation
NASA Astrophysics Data System (ADS)
Yuan, Haidong; Fung, Chi-Hang Fred
2015-09-01
Time is a valuable resource and it is expected that a longer time period should lead to better precision in Hamiltonian parameter estimation. However, recent studies in quantum metrology have shown that in certain cases more time may even lead to worse estimations, which puts this intuition into question. In this Letter we show that by including feedback controls this intuition can be restored. By deriving asymptotically optimal feedback controls we quantify the maximal improvement feedback controls can provide in Hamiltonian parameter estimation and show a universal time scaling for the precision limit under the optimal feedback scheme. Our study reveals an intriguing connection between noncommutativity in the dynamics and the gain of feedback controls in Hamiltonian parameter estimation.
Optimized method for dissolved hydrogen sampling in groundwater.
Alter, Marcus D; Steiof, Martin
2005-06-01
Dissolved hydrogen concentrations are used to characterize the redox conditions of contaminated aquifers. The currently accepted and recommended bubble strip method for hydrogen sampling (Wiedemeier et al., 1998) requires relatively long sampling times and immediate field analysis. In this study we present methods for optimized sampling and for sample storage. The bubble strip sampling method was examined for various flow rates, bubble sizes (headspace volume in the sampling bulb), and two different H2 concentrations, and the results were compared to a theoretical equilibration model. Turbulent flow in the sampling bulb was optimized for gas transfer by reducing the inlet diameter. Extraction with a 5 mL headspace volume and flow rates higher than 100 mL/min resulted in 95-100% equilibrium within 10-15 min. To investigate sample storage, gas samples from the sampling bulb were kept in headspace vials for varying periods. Hydrogen samples (4.5 ppmv, corresponding to 3.5 nM in the liquid phase) could be stored for up to 48 h and 72 h with recovery rates of 100.1+/-2.6% and 94.6+/-3.2%, respectively. These results are promising and demonstrate the feasibility of storage for 2-3 days before laboratory analysis. The optimized method was tested at a field site contaminated with chlorinated solvents. Duplicate gas samples were stored in headspace vials and analyzed after 24 h. Concentrations were measured in the range of 2.5-8.0 nM, corresponding to known concentrations in reduced aquifers.
Urine sampling and collection system optimization and testing
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Geating, J. A.; Koesterer, M. G.
1975-01-01
A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use for solving optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations, and the acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
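The MSSD criterion and annealing loop that spsann implements (in R) can be sketched in a toy form. The Python version below is an illustration with invented parameters, not the package itself: it spreads sample points over the unit square by minimizing the mean squared distance from a fine evaluation grid to its nearest sample, with the perturbation distance shrinking linearly and the temperature decaying exponentially over iterations, as the abstract describes.

```python
import numpy as np

def mssd(points, grid):
    """Mean squared shortest distance from grid nodes to the sample."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

def anneal_sample_pattern(n_pts=12, iters=400, t0=0.01, seed=0):
    rng = np.random.default_rng(seed)
    g = np.linspace(0.0, 1.0, 25)
    grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    pts = rng.random((n_pts, 2))             # initial random pattern
    e = mssd(pts, grid)
    best_pts, best_e = pts.copy(), e
    for it in range(iters):
        step = 0.3 * (1.0 - it / iters)      # max perturbation shrinks linearly
        temp = t0 * 0.99**it                  # exponential cooling law
        cand = pts.copy()
        i = rng.integers(n_pts)               # perturb one random point
        cand[i] = np.clip(cand[i] + rng.uniform(-step, step, 2), 0.0, 1.0)
        e_new = mssd(cand, grid)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if e_new < e or rng.random() < np.exp(-(e_new - e) / temp):
            pts, e = cand, e_new
            if e < best_e:
                best_pts, best_e = pts.copy(), e
    return best_pts, best_e

best_pts, best_e = anneal_sample_pattern()
```

Swapping `mssd` for a kriging-variance or point-pairs-per-lag objective gives the other criteria the package offers; the annealing machinery is unchanged.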
Menezes, Angela; Woods, Kate; Chanthongthip, Anisone; Dittrich, Sabine; Opoku-Boateng, Agatha; Kimuli, Maimuna; Chalker, Victoria
2016-01-01
Background Rapid typing of Leptospira is currently impaired by the requirement for time-consuming culture of leptospires. The objective of this study was to develop an assay that provides multilocus sequence typing (MLST) data directly from patient specimens while minimising costs for subsequent sequencing. Methodology and Findings An existing PCR-based MLST scheme was modified by designing nested primers including anchors for facilitated subsequent sequencing. The assay was applied to various specimen types from patients diagnosed with leptospirosis between 2014 and 2015 in the United Kingdom (UK) and the Lao People's Democratic Republic (Lao PDR). Of 44 clinical samples (23 serum, 6 whole blood, 3 buffy coat, 12 urine) PCR-positive for pathogenic Leptospira spp., at least one allele was amplified in 22 samples (50%) and used for phylogenetic inference. Full allelic profiles were obtained from ten specimens, representing all sample types (23%). No nonspecific amplicons were observed in any of the samples. Of twelve PCR-positive urine specimens, three gave full allelic profiles (25%) and two gave partial profiles. Phylogenetic analysis allowed for species assignment. The predominant species detected was L. interrogans (10/14 and 7/8 from the UK and Lao PDR, respectively). All other species were detected in samples from only one country (Lao PDR: L. borgpetersenii [1/8]; UK: L. kirschneri [1/14], L. santarosai [1/14], L. weilii [2/14]). Conclusion Typing information for pathogenic Leptospira spp. was obtained directly from a variety of clinical samples using a modified MLST assay. This assay negates the need for time-consuming culture of Leptospira prior to typing and will be of use both in surveillance, as single alleles enable species determination, and in outbreaks for the rapid identification of clusters. PMID:27654037
White, S L; Smith, W C; Fisher, L F; Gatlin, C L; Hanasono, G K; Jordan, W H
1998-01-01
Proton pump inhibitors and H2-receptor antagonists suppress gastric acid secretion and secondarily induce hypergastrinemia. Sustained hypergastrinemia has a trophic effect on stomach fundic mucosa, including enterochromaffin-like (ECL) cell hypertrophy and hyperplasia. Histomorphometric quantitation of the pharmacologic gastric effects was conducted on 10 male and 10 female rats treated orally with LY307640 sodium, a proton pump inhibitor, at daily doses of 25, 60, 130, or 300 mg/kg for 3 mo. Histologic sections of glandular stomach, stained for chromogranin A, were evaluated by image analysis to determine stomach mucosal thickness, mucosal and nonmucosal (submucosa and muscularis) area, gastric glandular area, ECL cell number/area and cross-sectional area. Total mucosal and nonmucosal tissue volumes per animal were derived from glandular stomach volumetric and area data. Daily oral doses of compound LY307640 sodium caused slight to moderate dose-related mucosal hypertrophy and ECL cell hypertrophy and hyperplasia in all treatment groups as compared with controls. All observed effects were prominent in both sexes but were generally greater in females. The morphometric sampling schemes were explored to optimize the data collection efficiency for future studies. A comparison between the sampling schemes used in this study and alternative schemes was conducted by estimating the probability of detecting a specific percentage of change between the male control and high-dose groups based on Tukey's trend test. The sampling scheme analysis indicated that mucosal thickness and mass had been oversampled. ECL cell density quantitation efficiency would have been increased by sampling the basal mucosa only for short-term studies. The ECL cell size sampling scheme was deemed appropriate for this type of study.
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses
Optimized Sample Handling Strategy for Metabolic Profiling of Human Feces.
Gratton, Jasmine; Phetcharaburanin, Jutarop; Mullish, Benjamin H; Williams, Horace R T; Thursz, Mark; Nicholson, Jeremy K; Holmes, Elaine; Marchesi, Julian R; Li, Jia V
2016-05-03
Fecal metabolites are being increasingly studied to unravel the host-gut microbial metabolic interactions. However, there are currently no guidelines for fecal sample collection and storage based on a systematic evaluation of the effect of time, storage temperature, storage duration, and sampling strategy. Here we derive an optimized protocol for fecal sample handling with the aim of maximizing metabolic stability and minimizing sample degradation. Samples obtained from five healthy individuals were analyzed to assess topographical homogeneity of feces and to evaluate storage duration-, temperature-, and freeze-thaw cycle-induced metabolic changes in crude stool and fecal water using a (1)H NMR spectroscopy-based metabolic profiling approach. Interindividual variation was much greater than that attributable to storage conditions. Individual stool samples were found to be heterogeneous and spot sampling resulted in a high degree of metabolic variation. Crude fecal samples were remarkably unstable over time and exhibited distinct metabolic profiles at different storage temperatures. Microbial fermentation was the dominant driver in time-related changes observed in fecal samples stored at room temperature and this fermentative process was reduced when stored at 4 °C. Crude fecal samples frozen at -20 °C manifested elevated amino acids and nicotinate and depleted short chain fatty acids compared to crude fecal control samples. The relative concentrations of branched-chain and aromatic amino acids significantly increased in the freeze-thawed crude fecal samples, suggesting a release of microbial intracellular contents. The metabolic profiles of fecal water samples were more stable compared to crude samples. Our recommendation is that intact fecal samples should be collected, kept at 4 °C or on ice during transportation, and extracted ideally within 1 h of collection, or a maximum of 24 h. Fecal water samples should be extracted from a representative amount (∼15 g
The dependence of optimal fractionation schemes on the spatial dose distribution
NASA Astrophysics Data System (ADS)
Unkelbach, Jan; Craft, David; Salari, Ehsan; Ramakrishnan, Jagdish; Bortfeld, Thomas
2013-01-01
We consider the fractionation problem in radiation therapy. Tumor sites in which the dose-limiting organ at risk (OAR) receives a substantially lower dose than the tumor bear potential for hypofractionation even if the α/β-ratio of the tumor is larger than the α/β-ratio of the OAR. In this work, we analyze the interdependence of the optimal fractionation scheme and the spatial dose distribution in the OAR. In particular, we derive a criterion under which a hypofractionation regimen is indicated for both a parallel and a serial OAR. The approach is based on the concept of the biologically effective dose (BED). For a hypothetical homogeneously irradiated OAR, it has been shown that hypofractionation is suggested by the BED model if the α/β-ratio of the OAR is larger than the α/β-ratio of the tumor times the sparing factor, i.e. the ratio of the dose received by the tumor and the OAR. In this work, we generalize this result to inhomogeneous dose distributions in the OAR. For a parallel OAR, we determine the optimal fractionation scheme by minimizing the integral BED in the OAR for a fixed BED in the tumor. For a serial structure, we minimize the maximum BED in the OAR. This leads to analytical expressions for an effective sparing factor for the OAR, which provides a criterion for hypofractionation. The implications of the model are discussed for lung tumor treatments. It is shown that the model supports hypofractionation for small tumors treated with rotation therapy, i.e. highly conformal techniques where a large volume of lung tissue is exposed to low but nonzero dose. For larger tumors, the model suggests hyperfractionation. We further discuss several non-intuitive interdependencies between optimal fractionation and the spatial dose distribution. For instance, lowering the dose in the lung via proton therapy does not necessarily provide a biological rationale for hypofractionation.
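The BED bookkeeping behind this criterion can be written out explicitly. Under the standard linear-quadratic model, and writing the sparing factor as δ with the OAR receiving δd when the tumor receives d per fraction (one common notational convention, assumed here rather than taken from the abstract), the quantities involved are:

```latex
% Biologically effective dose of n fractions of size d:
\mathrm{BED} = n d \left( 1 + \frac{d}{\alpha/\beta} \right)

% Homogeneously irradiated OAR receiving \delta d per fraction:
% hypofractionation is indicated by the BED model when
\left( \frac{\alpha}{\beta} \right)_{\mathrm{OAR}}
    > \delta \left( \frac{\alpha}{\beta} \right)_{\mathrm{tumor}} .
% The paper's generalization replaces \delta by an effective sparing
% factor derived from the inhomogeneous OAR dose distribution.
```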
163 years of refinement: the British Geological Survey sample registration scheme
NASA Astrophysics Data System (ADS)
Howe, M. P.
2011-12-01
The British Geological Survey manages the largest UK geoscience samples collection, including: - 15,000 onshore boreholes, including over 250 km of drillcore - vibrocores, gravity cores and grab samples from over 32,000 UK marine sample stations and 640 boreholes - over 3 million UK fossils, including a "type and stratigraphic" reference collection of 250,000 fossils, 30,000 of which are "type, figured or cited" - a comprehensive microfossil collection, including many borehole samples - 290 km of drillcore and 4.5 million cuttings samples from over 8000 UK continental shelf hydrocarbon wells - over one million mineralogical and petrological samples, including 200,000 thin sections. The current registration scheme was introduced in 1848 and is similar to that used by Charles Darwin on the Beagle. Every Survey collector or geologist has been issued with a unique prefix code of one or more letters, and these were handwritten on preprinted numbers, arranged in books of 1-5,000 and 5,001-10,000. Similar labels are now computer printed. Other prefix codes are used for corporate collections, such as borehole samples, thin sections, microfossils, macrofossil sections, museum reference fossils, display-quality rock samples and fossil casts. Such numbers convey significant immediate information to the curator, without the need to consult detailed registers. The registration numbers have been recorded in a series of over 1,000 registers, complete with metadata including sample ID, locality, horizon, collector and date. Citations are added as appropriate. Parent-child relationships are noted when re-registering subsamples. For example, a borehole sample BDA1001 could have been subsampled for a petrological thin section and off-cut (E14159), a fossil thin section (PF365), micropalynological slides (MPA273), one of which included a new holotype (MPK111), and a figured macrofossil (GSE1314). All main corporate collections now have publicly available online databases, such as Palaeo
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers, so relevant studies on the optimization of schemes for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were taken as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for natural ecology planning of urban rivers can be made by the ANP method, which can be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for the optimization of schemes for urban green space planning and design.
Osai, L.N.
1983-03-01
The Kolo Creek field is a 5 x 10 km, faulted, rollover structure with the E2.0 reservoir as the main oil-bearing sand. The reservoir is a 200-ft thick, complex, deltaic sandstone package with a 1.9-tcf gas cap underlain by a 200-ft thick oil rim containing ca 440 x 10^6 bbl STOIIP. The sand is penetrated by 34 wells, 25 of which are completed as producers. To date a 16% drop in pressure has occurred. A reservoir engineering study, based on the early pressure decline, led to the implementation of a water injection scheme. Immediately prior to the initial phase of the scheme, cores were taken in 2 wells. These cores, sidewall samples from other wells, and the detailed correlation made possible by a denser well pattern have resulted in a realistic geologic model. This model will influence the optimal location of future injection and production wells based on the structural and sedimentologic characteristics of the reservoir
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, this brings new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance trade-off in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this trade-off between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to
Singal, Ashok K.
2014-07-01
We examine the consistency of the unified scheme of Fanaroff-Riley type II radio galaxies and quasars with their observed number and size distributions in the 3CRR sample. We separate the low-excitation galaxies from the high-excitation ones, as the former might not harbor a quasar within and thus may not be partaking in the unified scheme models. In the updated 3CRR sample, at low redshifts (z < 0.5), the relative number and luminosity distributions of high-excitation galaxies and quasars roughly match the expectations from the orientation-based unified scheme model. However, a foreshortening in the observed sizes of quasars, which is a must in the orientation-based model, is not seen with respect to radio galaxies even when the low-excitation galaxies are excluded. This dashes the hope that the unified scheme might still work if one includes only the high-excitation galaxies.
Relevance of sampling schemes in light of Ruelle's linear response theory
NASA Astrophysics Data System (ADS)
Lucarini, Valerio; Kuna, Tobias; Wouters, Jeroen; Faranda, Davide
2012-05-01
We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that using a general functional decomposition for space-time dependent forcings, we can define elementary susceptibilities that allow us to construct the linear response of the system to general perturbations. Starting from the definition of SRB measure, we then study the consequence of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows us to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results in terms of strategies for analysing the outputs of numerical experiments by providing a critical review of a formula proposed by Reick.
2014-11-01
content (i.e., low-pass response): (1) compare damping character of Artificial Dissipation and Filtering; (2) formulate filter as an equivalent ... Artificial Dissipation scheme - consequence of filter damping for stiff problems; (3) insight on achieving "ideal" low-pass response for general ... require very high order for low-pass response - overly dissipative for small time-steps; implicit filters can be efficiently designed for low-pass
Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.
Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly
2015-09-01
Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.
NASA Astrophysics Data System (ADS)
Liu, Feng; Beck, Barbara L.; Fitzsimmons, Jeffrey R.; Blackband, Stephen J.; Crozier, Stuart
2005-11-01
In this paper, numerical simulations are used in an attempt to find optimal source profiles for high frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches have been proposed for the determination of the drives for individual elements in the RF resonator. The first method is an iterative optimization technique with a kernel for the evaluation of RF fields inside an imaging plane of a human head model using pre-characterized sensitivity profiles of the individual rungs of a resonator; the second method is a regularization-based technique. In the second approach, a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the first optimization method is more flexible as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for the determination of multi-element RF drives in a variety of applications.
Michaelis-Menten reaction scheme as a unified approach towards the optimal restart problem.
Rotbart, Tal; Reuveni, Shlomi; Urbakh, Michael
2015-12-01
We study the effect of restart, and retry, on the mean completion time of a generic process. The need to do so arises in various branches of the sciences and we show that it can naturally be addressed by taking advantage of the classical reaction scheme of Michaelis and Menten. Stopping a process in its midst-only to start it all over again-may prolong, leave unchanged, or even shorten the time taken for its completion. Here we are interested in the optimal restart problem, i.e., in finding a restart rate which brings the mean completion time of a process to a minimum. We derive the governing equation for this problem and show that it is exactly solvable in cases of particular interest. We then continue to discover regimes at which solutions to the problem take on universal, details independent forms which further give rise to optimal scaling laws. The formalism we develop, and the results obtained, can be utilized when optimizing stochastic search processes and randomized computer algorithms. An immediate connection with kinetic proofreading is also noted and discussed.
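The restart effect described above is easy to reproduce numerically. The sketch below (with illustrative parameters, not taken from the paper) simulates a generic process whose completion time is drawn from a bimodal distribution, a fast branch and a rare slow branch, and restarts it at a fixed rate; comparing restart rates shows that stopping a process midway only to start it all over again can shorten the mean completion time.

```python
import random

def completion_time(rng):
    # Hypothetical bimodal process: a fast branch and a much slower branch.
    return rng.expovariate(1.0) if rng.random() < 0.5 else rng.expovariate(0.01)

def mean_completion(restart_rate, n=20000, seed=1):
    """Monte Carlo estimate of the mean completion time under Poissonian
    restart at rate restart_rate (0 means no restart)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = 0.0
        while True:
            c = completion_time(rng)
            if restart_rate == 0.0:
                t += c
                break
            r = rng.expovariate(restart_rate)  # time until the next restart
            if c <= r:                         # process finishes before restart
                t += c
                break
            t += r                             # restart fires: start over
        total += t
    return total / n
```

With these parameters the unrestarted mean is dominated by the slow branch, while a moderate restart rate cuts the mean completion time by an order of magnitude; scanning `restart_rate` numerically locates the optimum that the paper characterizes analytically.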
Alessandri, Angelo; Gaggero, Mauro; Zoppoli, Riccardo
2012-06-01
Optimal control for systems described by partial differential equations is investigated by proposing a methodology to design feedback controllers in approximate form. The approximation stems from constraining the control law to take on a fixed structure, where a finite number of free parameters can be suitably chosen. The original infinite-dimensional optimization problem is then reduced to a mathematical programming one of finite dimension that consists in optimizing the parameters. The solution of such a problem is performed by using sequential quadratic programming. Linear combinations of fixed and parameterized basis functions are used as the structure for the control law, thus giving rise to two different finite-dimensional approximation schemes. The proposed paradigm is general since it allows one to treat problems with distributed and boundary controls within the same approximation framework. It can be applied to systems described by either linear or nonlinear elliptic, parabolic, and hyperbolic equations in arbitrary multidimensional domains. Simulation results obtained in two case studies show the potentials of the proposed approach as compared with dynamic programming.
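A minimal sketch of the fixed-structure idea, on a toy scalar system rather than a PDE: the control law is constrained to a linear combination of basis functions (here x and x³), and the resulting finite-dimensional parameter optimization is solved numerically. A simple pattern/coordinate search stands in for the sequential quadratic programming used in the paper; the dynamics, cost, and all parameter values are illustrative.

```python
import itertools

def simulate_cost(theta, x0=1.0, dt=0.01, steps=500):
    # Fixed-structure feedback law: u = theta[0]*x + theta[1]*x^3
    x, cost = x0, 0.0
    for _ in range(steps):
        u = theta[0] * x + theta[1] * x * x * x
        cost += (x * x + u * u) * dt      # quadratic running cost
        x += (x + u) * dt                 # Euler step of xdot = x + u (unstable plant)
    return cost

def optimize(theta=(0.0, 0.0), step=0.5, tol=1e-3):
    """Pattern search over the free parameters of the control structure."""
    best = simulate_cost(theta)
    while step > tol:
        improved = False
        for i, d in itertools.product(range(len(theta)), (+step, -step)):
            cand = list(theta)
            cand[i] += d
            c = simulate_cost(tuple(cand))
            if c < best:
                theta, best, improved = tuple(cand), c, True
        if not improved:
            step /= 2                     # refine the search when stuck
    return theta, best
```

For this linear-quadratic toy problem the search settles near the known LQR gain (u ≈ -2.4x), illustrating how constraining the control law to a fixed structure reduces the infinite-dimensional problem to a small mathematical program.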
Optimal allocation of point-count sampling effort
Barker, R.J.; Sauer, J.R.; Link, W.A.
1993-01-01
Both unlimited and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and sampling characteristics of population size. Using information about the relationship between proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data of cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
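As an illustration of criterion (3), maximum expected total count, the sketch below assumes an exponential detection function p(t) = 1 - exp(-λt) and a fixed time budget split between counting and travelling between points; the count duration is then chosen by a simple grid search. All parameter values (budget, travel time, detection rate) are hypothetical, not taken from the Hawaiian forest bird data.

```python
import math

def expected_total_count(t, total_time=180.0, travel=5.0, lam=0.2, density=1.0):
    """Expected total count when each point gets duration t (minutes)."""
    p = 1.0 - math.exp(-lam * t)          # proportion detected after t minutes
    n_points = total_time / (t + travel)  # points visited within the budget
    return density * n_points * p

def optimal_duration(grid=None, **kw):
    # Grid search over candidate durations (0.5 .. 60 min in 0.5 min steps).
    grid = grid or [0.5 * k for k in range(1, 121)]
    return max(grid, key=lambda t: expected_total_count(t, **kw))
```

The trade-off is visible directly: very short counts waste the travel overhead on poorly sampled points, very long counts visit too few points, and the optimum sits in between (here near 6 minutes for the assumed rates).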
[Optimized sample preparation for metabolome studies on Streptomyces coelicolor].
Li, Yihong; Li, Shanshan; Ai, Guomin; Wang, Weishan; Zhang, Buchang; Yang, Keqian
2014-04-01
Streptomycetes produce many antibiotics and are important model microorganisms for scientific research and antibiotic production. Metabolomics is an emerging technological platform to analyze low molecular weight metabolites in a given organism qualitatively and quantitatively. Compared with other omics platforms, metabolomics has a greater advantage in monitoring metabolic flux distribution and thus identifying key metabolites related to a target metabolic pathway. The present work aims at establishing a rapid, accurate sample preparation protocol for metabolomics analysis in streptomycetes. Several sample preparation steps, including cell quenching time, cell separation method, and conditions for metabolite extraction and derivatization, were optimized. Then, the metabolic profiles of Streptomyces coelicolor during different growth stages were analyzed by GC-MS. The optimal sample preparation conditions were as follows: low-temperature quenching for 4 min, cell separation by fast filtration, freeze-thaw for 45 s/3 min, and metabolite derivatization at 40 degrees C for 90 min. By using this optimized protocol, 103 metabolites were identified from a sample of S. coelicolor, distributed across central metabolic pathways (glycolysis, pentose phosphate pathway and citrate cycle), amino acid, fatty acid, and nucleotide metabolic pathways, etc. By comparing the temporal profiles of these metabolites, the amino acid and fatty acid metabolic pathways were found to stay at a high level during stationary phase; therefore, these pathways may play an important role during the transition between primary and secondary metabolism. An optimized protocol of sample preparation was established and applied for metabolomics analysis of S. coelicolor, and 103 metabolites were identified. The temporal profiles of metabolites reveal that amino acid and fatty acid metabolic pathways may play an important role in the transition from primary to
NASA Astrophysics Data System (ADS)
Schwientek, Marc; Guillet, Gaelle; Kuch, Bertram; Rügner, Hermann; Grathwohl, Peter
2014-05-01
Xenobiotic contaminants such as pharmaceuticals or personal care products are typically introduced continuously into receiving water bodies via wastewater treatment plant (WWTP) outfalls and, episodically, via combined sewer overflows during precipitation events. Little is known about how these chemicals behave in the environment and how they affect ecosystems and human health. Examples of traditional persistent organic pollutants reveal that they may still be present in the environment even decades after their release. In this study a sampling strategy was developed which gives valuable insights into the environmental behaviour of xenobiotic chemicals. The method is based on the Lagrangian sampling scheme, by which a parcel of water is sampled repeatedly as it moves downstream, while the chemical, physical, and hydrologic processes altering the characteristics of the water mass can be investigated. The Steinlach is a tributary of the River Neckar in Southwest Germany with a catchment area of 140 km². It receives the effluents of a WWTP with 99,000 inhabitant equivalents 4 km upstream of its mouth. The varying flow rate of effluents induces temporal patterns of electrical conductivity in the river water which make it possible to track parcels of water along the subsequent urban river section. These parcels of water were sampled (a) close to the outlet of the WWTP and (b) 4 km downstream at the confluence with the Neckar. Sampling was repeated at a 15 min interval over a complete diurnal cycle and 2 h composite samples were prepared. A model-based analysis demonstrated, on the one hand, that substances behaved reactively to a varying extent along the studied river section. On the other hand, it revealed that the observed degradation rates are likely dependent on the time of day. Some chemicals were degraded mainly during daytime (e.g. the disinfectant Triclosan or the phosphorous flame retardant TDCP), others during nighttime as well (e.g. the musk fragrance
Layered HEVC/H.265 video transmission scheme based on hierarchical QAM optimization
NASA Astrophysics Data System (ADS)
Feng, Weidong; Zhou, Cheng; Xiong, Chengyi; Chen, Shaobo; Wang, Junxi
2015-12-01
High Efficiency Video Coding (HEVC) is the state-of-the-art video compression standard, which fully supports scalability features and can generate layered video streams of unequal importance. Unfortunately, when the base layer (BL), which is more important to the stream, is lost during transmission, the enhancement layer (EL) that depends on it must be discarded by the receiver. Obviously, using the same transmission strategy for the BL and EL is unreasonable. This paper proposes an unequal error protection (UEP) system using hierarchical quadrature amplitude modulation (HQAM). The BL data with high priority are mapped onto the most reliable HQAM mode, while the EL data with low priority are mapped onto an HQAM mode with higher transmission efficiency. Simulations on a scalable HEVC codec show that the proposed optimized video transmission system is more attractive than the traditional equal error protection (EEP) scheme because it effectively balances transmission efficiency and reconstructed video quality.
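The hierarchical mapping idea can be sketched as follows: in a hierarchical 16-QAM constellation, two high-priority (base-layer) bits select the quadrant and two low-priority (enhancement-layer) bits select the point within it, with a spacing parameter alpha that widens the gap between quadrants and so protects the HP bits at the expense of the LP bits. This is a generic illustration of HQAM, not the paper's exact modulation scheme.

```python
def hqam16_map(hp_bits, lp_bits, alpha=2.0):
    """Map 2 high-priority + 2 low-priority bits to an (I, Q) symbol of a
    hierarchical 16-QAM constellation. alpha = 2 gives uniform 16-QAM;
    alpha > 2 pushes the quadrants apart, protecting the HP bits."""
    def pam(hp, lp):
        # Per axis: the HP bit picks the half-plane, the LP bit the level.
        sign = 1.0 if hp else -1.0
        return sign * (alpha + (1.0 if lp else -1.0))
    return (pam(hp_bits[0], lp_bits[0]), pam(hp_bits[1], lp_bits[1]))
```

With alpha = 2 the levels are the usual ±1, ±3; increasing alpha raises the minimum distance between quadrants, 2(alpha - 1), while the within-quadrant distance stays at 2, which is exactly the unequal-protection knob the UEP scheme exploits.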
The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes
NASA Technical Reports Server (NTRS)
Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark
2000-01-01
Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log(base 10) (L(sub [OII])/W) approximately > 35 [or radio luminosity log(base 10) (L(sub 151)/W/Hz.sr) approximately > 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle theta(sub trans) approximately equal 53 deg. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in theta(sub trans) and/or a gradual increase in the fraction of lightly-reddened (0 approximately < A(sub V) approximately < 5) lines-of-sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
NASA Astrophysics Data System (ADS)
Xue, Lulin; Pan, Zaitao
2008-05-01
Carbon exchange between the atmosphere and terrestrial ecosystem is a key component affecting climate changes. Because the in situ measurements are not dense enough to resolve CO2 exchange spatial variation on various scales, the variation has been mainly simulated by numerical ecosystem models. These models contain large uncertainties in estimating CO2 exchange owing to incorporating a number of empirical parameters on different scales. This study applied a global optimization algorithm and ensemble approach to a surface CO2 flux scheme to (1) identify sensitive photosynthetic and respirational parameters, and (2) optimize the sensitive parameters in the modeling sense and improve the model skills. The photosynthetic and respirational parameters of corn (C4 species) and soybean (C3 species) in NCAR land surface model (LSM) are calibrated against observations from AmeriFlux site at Bondville, IL during 1999 and 2000 growing seasons. Results showed that the most sensitive parameters are maximum carboxylation rate at 25°C and its temperature sensitivity parameter (Vcmax25 and avc), quantum efficiency at 25°C (Qe25), temperature sensitivity parameter for maintenance respiration (arm), and temperature sensitivity parameter for microbial respiration (amr). After adopting calibrated parameter values, simulated seasonal averaged CO2 fluxes were improved for both the C4 and the C3 crops (relative bias reduced from 0.09 to -0.02 for the C4 case and from 0.28 to -0.01 for the C3 case). An updated scheme incorporating new parameters and a revised flux-integration treatment is also proposed.
Classifier-Guided Sampling for Complex Energy System Optimization
Backlund, Peter B.; Eddy, John P.
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
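A toy version of the classifier-guided idea (not Sandia's implementation) can be written in a few lines: a plain evolutionary loop over bit-string designs, with a simple nearest-centroid classifier standing in for the Bayesian network, gating which candidate designs are passed to the expensive objective. The objective and all parameters are stand-ins for illustration.

```python
import random

def expensive_objective(x):
    # Stand-in for a costly simulation: count bits mismatching a target design.
    target = (1, 0, 1, 1, 0, 1, 0, 0)
    return sum(a != b for a, b in zip(x, target))

def centroid(designs):
    return [sum(d[i] for d in designs) / len(designs) for i in range(len(designs[0]))]

def sqdist(x, c):
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def cgs(n_bits=8, pop=20, gens=15, seed=0):
    rng = random.Random(seed)
    evaluated = {}                                  # design -> objective (cache)
    def evaluate(d):
        if d not in evaluated:
            evaluated[d] = expensive_objective(d)
    for _ in range(pop):
        evaluate(tuple(rng.randint(0, 1) for _ in range(n_bits)))
    for _ in range(gens):
        ranked = sorted(evaluated, key=evaluated.get)
        half = len(ranked) // 2
        cg, cb = centroid(ranked[:half]), centroid(ranked[half:])  # good/bad classes
        children, tries = [], 0
        while len(children) < pop:
            tries += 1
            a, b = rng.sample(ranked[:pop], 2)      # crossover of two elite parents
            child = tuple(ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b))
            child = tuple(bit ^ (rng.random() < 0.05) for bit in child)  # mutation
            # Classifier gate: only evaluate designs predicted to be promising
            # (fail-safe: accept anyway after too many rejections).
            if sqdist(child, cg) <= sqdist(child, cb) or tries > 50 * pop:
                children.append(child)
                evaluate(child)
    best = min(evaluated, key=evaluated.get)
    return best, evaluated[best], len(evaluated)
```

The gate discards candidates that look like past poor designs before they ever reach `expensive_objective`, which is the evaluation-count saving the report describes; the cache additionally ensures no design is evaluated twice.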
Efficient infill sampling for unconstrained robust optimization problems
NASA Astrophysics Data System (ADS)
Rehman, Samee Ur; Langelaar, Matthijs
2016-08-01
A novel infill sampling criterion is proposed for efficient estimation of the global robust optimum of expensive computer simulation based problems. The algorithm is especially geared towards addressing problems that are affected by uncertainties in design variables and problem parameters. The method is based on constructing metamodels using Kriging and adaptively sampling the response surface via a principle of expected improvement adapted for robust optimization. Several numerical examples and an engineering case study are used to demonstrate the ability of the algorithm to estimate the global robust optimum using a limited number of expensive function evaluations.
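The core ingredient, expected improvement under a Gaussian surrogate prediction, has a closed form. The sketch below shows the standard (non-robust) criterion that the proposed method adapts; in practice `mu` and `sigma` would come from a Kriging model's predictive mean and standard deviation at a candidate point.

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Expected improvement (for minimization) at a candidate point, given the
    surrogate's predictive mean mu, predictive std sigma, and the best
    objective value f_min observed so far."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)       # no predictive uncertainty
    z = (f_min - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (f_min - mu) * cdf + sigma * pdf
```

The two terms encode the exploitation-exploration balance: the first rewards points whose predicted value beats the incumbent, the second rewards points where the surrogate is uncertain, and the infill point is the maximizer of this criterion.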
Learning approach to sampling optimization: Applications in astrodynamics
NASA Astrophysics Data System (ADS)
Henderson, Troy Allen
A new, novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA) is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models---with varying complexity---of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.
Simultaneous beam sampling and aperture shape optimization for SPORT
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu
2015-02-15
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and
Optimized robust plasma sampling for glomerular filtration rate studies.
Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L
2012-09-01
In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how a few samples will allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement.
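One classical way to recover the area under a biexponential clearance curve from a few samples is curve stripping (log-linear peeling): fit the slow exponential to the late samples, subtract it, then fit the fast exponential to the early residuals. The sketch below uses illustrative tracer parameters (not the audited clinical data); GFR would then follow as injected dose divided by the estimated AUC.

```python
import math

def linfit(ts, ys):
    # Least-squares fit of y = a + b*t; returns (a, b).
    n = len(ts)
    tm, ym = sum(ts) / n, sum(ys) / n
    b = sum((t - tm) * (y - ym) for t, y in zip(ts, ys)) / \
        sum((t - tm) ** 2 for t in ts)
    return ym - b * tm, b

def auc_curve_stripping(times, conc, n_late=2):
    """Estimate the 0-to-infinity AUC of a biexponential clearance curve
    A*exp(-alpha*t) + B*exp(-beta*t) from a handful of timed samples."""
    # 1) fit the slow exponential B*exp(-beta*t) to the late samples (log scale)
    a2, b2 = linfit(times[-n_late:], [math.log(c) for c in conc[-n_late:]])
    B, beta = math.exp(a2), -b2
    # 2) strip it off and fit the fast exponential to the early residuals
    early = list(zip(times, conc))[:-n_late]
    a1, b1 = linfit([t for t, _ in early],
                    [math.log(c - B * math.exp(-beta * t)) for t, c in early])
    A, alpha = math.exp(a1), -b1
    # AUC of the fitted biexponential over [0, infinity)
    return A / alpha + B / beta
```

With the five judiciously timed samples recommended above (5, 10, 15, 90 and 180 min), this recovers the true AUC of a synthetic biexponential curve to within a fraction of a percent, illustrating why so few samples can suffice when the curve shape is pragmatically biexponential.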
Accelerated Simplified Swarm Optimization with Exploitation Search Scheme for Data Clustering
Yeh, Wei-Chang; Lai, Chyh-Ming
2015-01-01
Data clustering is commonly employed in many disciplines. The aim of clustering is to partition a set of data into clusters, in which objects within the same cluster are similar to each other and dissimilar to objects that belong to different clusters. Over the past decade, evolutionary algorithms have been commonly used to solve clustering problems. This study presents a novel algorithm based on simplified swarm optimization, an emerging population-based stochastic optimization approach with the advantages of simplicity, efficiency, and flexibility. The approach combines variable vibrating search (VVS) and rapid centralized strategy (RCS) in dealing with the clustering problem. VVS is an exploitation search scheme that can refine the quality of solutions by searching the extreme points near the global best position. RCS is developed to accelerate the convergence rate of the algorithm by using the arithmetic average. To empirically evaluate the performance of the proposed algorithm, experiments are conducted using 12 benchmark datasets, and the corresponding results are compared with recent works. Results of statistical analysis indicate that the proposed algorithm is competitive in terms of the quality of solutions. PMID:26348483
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.
1987-01-01
A concept was developed for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include derivation of necessary conditions for optimality for the general problem formulation, and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories arbitrarily close to the unconstrained gain solution for each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
Sorella, S.; Devaux, N.; Dagrada, M.; Mazzola, G.; Casula, M.
2015-12-28
We introduce an efficient method to construct optimal, system-adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, those containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and high-pressure liquid hydrogen.
NASA Astrophysics Data System (ADS)
Kojima, Sadaoki; Zhe, Zhang; Sawada, Hiroshi; Firex Team
2015-11-01
In Fast Ignition Inertial Confinement Fusion, optimization of the relativistic electron beam (REB) accelerated by a high-intensity laser pulse is critical for efficient core heating. The high-energy tail of the electron spectrum is generated by the laser interaction with a long-scale-length plasma and does not couple efficiently to the fuel core. In the cone-in-shell scheme, long-scale-length plasmas can be produced inside the cone by the pedestal of the high-intensity laser, radiation heating of the inner cone wall, and the shock wave from the implosion core. We have investigated the relation between the presence of pre-plasma inside the cone and the REB energy distribution using the Gekko XII and 2 kJ-PW LFEX lasers at the Institute of Laser Engineering. The condition of the inner cone wall was monitored using VISAR and SOP systems on a cone-in-shell implosion. The generation of the REB was measured with an electron energy analyzer and a hard x-ray spectrometer on a separate shot by injecting the LFEX laser into an imploded target. The results show a strong correlation between preheat and high-energy tail generation. Optimization of the cone-wall thickness for fast ignition will be discussed. This work is supported by NIFS, MEXT/JSPS KAKENHI Grant and JSPS Fellows (Grant Number 14J06592).
Determining the Bayesian optimal sampling strategy in a hierarchical system.
Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre
2010-09-01
Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
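The preposterior reasoning described above can be sketched with Monte Carlo: for each candidate test allocation, simulate test data from the prior predictive distribution, average the resulting posterior variance, and pick the allocation with the smallest value. The Beta-Bernoulli reliability model below is an illustrative assumption, not the paper's system-of-systems model.

```python
import random

def expected_posterior_variance(a, b, n_tests, trials=4000, rng=None):
    """Average posterior variance of a Beta(a, b) reliability after
    n_tests pass/fail tests, with data drawn from the prior predictive
    (a Monte Carlo surrogate for the Bayes risk of a sampling strategy)."""
    rng = rng or random.Random()
    acc = 0.0
    for _ in range(trials):
        p = rng.betavariate(a, b)                         # reliability from the prior
        k = sum(rng.random() < p for _ in range(n_tests)) # simulated passes
        a2, b2 = a + k, b + (n_tests - k)                 # conjugate Beta update
        acc += a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1))
    return acc / trials

def best_strategy(test_counts, a=1.0, b=1.0, rng=None):
    """Choose the test allocation whose expected posterior variance is smallest."""
    return min(test_counts, key=lambda n: expected_posterior_variance(a, b, n, rng=rng))
```

With no tests the posterior equals the prior, so the expected variance is exactly that of Beta(a, b); more tests shrink it, at a cost a real planner would weigh in, as the abstract describes.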
Test samples for optimizing STORM super-resolution microscopy.
Metcalf, Daniel J; Edwards, Rebecca; Kumarswami, Neelam; Knight, Alex E
2013-09-06
STORM is a recently developed super-resolution microscopy technique offering up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way from normal, by building it up molecule-by-molecule, users face significant challenges in optimizing image acquisition. To aid this process and provide more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure, from the optics to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Optimization of sampling pattern and the design of Fourier ptychographic illuminator.
Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan
2015-03-09
Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, so we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images six-fold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
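One simple way to realize such a non-uniform, low-frequency-weighted sampling pattern is to draw Fourier-space positions with radii biased toward small |k|. The power-law bias and parameter values below are illustrative assumptions, not the authors' actual LED layout.

```python
import math
import random

def nonuniform_fourier_positions(n, k_max, exponent=2.0, rng=None):
    """Draw n Fourier-space sample positions whose density is concentrated
    at low spatial frequencies: radius = k_max * u**exponent with u uniform
    on [0, 1) (exponent > 1 biases samples toward the origin), angle uniform."""
    rng = rng or random.Random()
    pts = []
    for _ in range(n):
        r = k_max * rng.random() ** exponent
        theta = 2 * math.pi * rng.random()
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

With exponent 2, half of the samples land within a quarter of the maximum radius, concentrating measurements where most biological signal energy sits.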
A General Investigation of Optimized Atmospheric Sample Duration
Eslinger, Paul W.; Miley, Harry S.
2012-11-28
The International Monitoring System (IMS) consists of up to 80 aerosol and xenon monitoring systems spaced around the world, with collection systems sensitive enough to detect nuclear releases from underground nuclear tests at great distances (CTBT 1996; CTBTO 2011). Although a few of the IMS radionuclide stations are closer together than 1,000 km (such as the stations in Kuwait and Iran), many of them are 2,000 km or more apart. In the absence of a scientific basis for optimizing the duration of atmospheric sampling, scientists have historically used integration times from 24 hours to 14 days for radionuclides (Thomas et al. 1977). This was entirely adequate in the past because the sources of signals were far away and large, meaning they were smeared over many days by the time they had travelled 10,000 km. The Fukushima event highlighted the unacceptable delay (72 hours) between the start of sample acquisition and final data being shipped. A scientific basis for selecting a sample duration time is needed. This report considers plume migration of a non-decaying tracer using archived atmospheric data for 2011 in the HYSPLIT (Draxler and Hess 1998; HYSPLIT 2011) transport model. We present two related results: the temporal duration of the majority of the plume as a function of distance, and the behavior of the maximum plume concentration as a function of sample collection duration and distance. The modeled plume behavior can then be combined with external information about sampler design to optimize sample durations in a sampling network.
Mielke, Steven L; Truhlar, Donald G
2016-01-21
Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
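The core mechanism, obtaining samples that conform to a chosen importance function by rejecting draws from a cheaper proposal, can be sketched in one dimension. The Gaussians below are illustrative stand-ins for the free-particle and harmonically guided path distributions, not molecular data.

```python
import math
import random

def rejection_sample(target, proposal_draw, proposal_pdf, M, rng, max_tries=100000):
    """Draw one sample from `target` by rejection sampling from a proposal,
    assuming the envelope condition target(x) <= M * proposal_pdf(x) holds."""
    for _ in range(max_tries):
        x = proposal_draw(rng)
        if rng.random() * M * proposal_pdf(x) <= target(x):
            return x
    raise RuntimeError("envelope constant M too small for this target")

# Proposal: broad "free-particle-like" N(0, 1); target: narrower
# "harmonically guided" N(0, 0.25). The ratio target/proposal peaks at 2,
# so M = 2 is a valid envelope constant.
free_pdf = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
free_draw = lambda rng: rng.gauss(0.0, 1.0)
harmonic_pdf = lambda x: math.exp(-x * x / 0.5) / math.sqrt(0.5 * math.pi)
```

The high rejection rates reported in the abstract (98-99.9%) correspond to targets far narrower than the proposal; here the toy acceptance rate is about 1/2.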
Optimal sampling frequency in recording of resistance training exercises.
Bardella, Paolo; Carrasquilla García, Irene; Pozzo, Marco; Tous-Fajardo, Julio; Saez de Villareal, Eduardo; Suarez-Arrones, Luis
2017-03-01
The purpose of this study was to analyse the raw lifting speed collected during four different resistance training exercises to assess the optimal sampling frequency. Eight physically active participants performed sets of Squat Jumps, Countermovement Jumps, Squats and Bench Presses at a maximal lifting speed. A linear encoder was used to measure the instantaneous speed at a 200 Hz sampling rate. Subsequently, the power spectrum of the signal was computed by evaluating its Discrete Fourier Transform. The sampling frequency needed to reconstruct the signals with an error of less than 0.1% was f99.9 = 11.615 ± 2.680 Hz for the exercise exhibiting the largest bandwidth, with the absolute highest individual value being 17.467 Hz. There was no difference between sets in any of the exercises. Using the closest integer sampling frequency value (25 Hz) yielded a reconstruction of the signal up to 99.975 ± 0.025% of its total in the worst case. In conclusion, a sampling rate of 25 Hz or above is more than adequate to record raw speed data and compute power during resistance training exercises, even under the most extreme circumstances during explosive exercises. Higher sampling frequencies provide no increase in the recording precision and may instead have adverse effects on the overall data quality.
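The analysis above amounts to locating the frequency below which a target fraction (here 99.9%) of the signal's spectral energy lies. A plain-Python sketch of that computation (an O(n²) DFT, adequate for short records; the 200 Hz and 5 Hz figures used in the test are only examples):

```python
import cmath
import math

def f_threshold(signal, fs, fraction=0.999):
    """Smallest frequency f (Hz) such that the one-sided power spectrum
    from 0 to f contains `fraction` of the total spectral energy."""
    n = len(signal)
    half = n // 2 + 1  # one-sided spectrum of a real signal
    power = [
        abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))) ** 2
        for k in range(half)
    ]
    total = sum(power)
    acc = 0.0
    for k, p in enumerate(power):
        acc += p
        if acc >= fraction * total:
            return k * fs / n
    return (half - 1) * fs / n
```

Applied per set and exercise to 200 Hz encoder records, this yields the per-exercise f99.9 quantity the study reports.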
Optimization of the combined proton acceleration regime with a target composition scheme
Yao, W. P.; Li, B. W.; Zheng, C. Y.; Liu, Z. J.; Yan, X. Q.; Qiao, B.
2016-01-15
A target composition scheme to optimize the combined proton acceleration regime is presented and verified by two-dimensional particle-in-cell simulations using an ultra-intense circularly polarized (CP) laser pulse irradiating an overdense hydrocarbon (CH) target, instead of a pure hydrogen (H) one. The combined acceleration regime is a two-stage proton acceleration scheme combining the radiation pressure dominated acceleration (RPDA) stage and the laser wakefield acceleration (LWFA) stage sequentially. Protons are pre-accelerated in the first stage, when the ultra-intense CP laser pulse irradiates the overdense CH target. The wakefield is driven by the laser pulse after it penetrates through the overdense CH target and propagates in the underdense tritium plasma gas. With this pre-acceleration stage, protons can be trapped in the wakefield and accelerated to much higher energy by LWFA. Finally, protons with higher energies (from about 20 GeV up to about 30 GeV) and lower energy spreads (from about 18% down to about 5% in full-width at half-maximum, or FWHM) are generated, as compared to the use of a pure H target. This is because protons can be more stably pre-accelerated in the first RPDA stage when using CH targets. As the carbon-to-hydrogen density ratio increases, the energy spread decreases and the maximum proton energy increases. It is also shown that, for the same laser intensity of around 10^22 W cm^-2, using the CH target leads to a higher proton energy than a pure H target. Additionally, the proton energy can be further increased by employing a longitudinally negative gradient of the background plasma density.
Gossner, Martin M.; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W.; Zytynska, Sharon E.
2016-01-01
There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by an ethanol-containing sampling solution, we suggest ethylene glycol as a suitable sampling solution when genetic analysis
NASA Astrophysics Data System (ADS)
Izzuan Jaafar, Hazriq; Mohd Ali, Nursabillilah; Mohamed, Z.; Asmiza Selamat, Nur; Faiz Zainal Abidin, Amar; Jamian, J. J.; Kassim, Anuar Mohamed
2013-12-01
This paper presents the development of optimal PID and PD controllers for controlling a nonlinear gantry crane system. The proposed Binary Particle Swarm Optimization (BPSO) algorithm, which uses a Priority-based Fitness Scheme, is adopted to obtain five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses, including trolley displacement and payload oscillation. The dynamic model of the gantry crane system is derived using the Lagrange equation. Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). The proposed technique demonstrates that implementation of the Priority-based Fitness Scheme in BPSO is effective and able to move the trolley as fast as possible to the various desired positions.
Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Thompson, David R.; Hsiang, Kian
2013-01-01
This work was designed to find a way to sample spatiotemporal phenomena optimally (or near optimally) given limited sensing capability, and to create a model that can be run to estimate uncertainties and covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled by presuming a parametric distribution, and the model was then used to approximate the overall information gain, and consequently the objective function, from each potential sensing. These candidate sensings were then cross-checked against operational costs and feasibility. From this, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains and their subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
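The final planning step, trading estimated information gain against operational cost, can be sketched as a greedy budgeted selection. The candidate names, gains and costs below are hypothetical placeholders, not values from the work.

```python
def plan_observations(candidates, budget):
    """Greedy observation plan: candidates are (name, expected_gain, cost)
    tuples. Pick items in decreasing gain-per-cost order while they still
    fit within the remaining budget."""
    plan, remaining = [], budget
    for name, gain, cost in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
        if cost <= remaining:
            plan.append(name)
            remaining -= cost
    return plan
```

Greedy selection by gain-per-cost is a common heuristic for this budgeted-coverage structure; an exact planner would instead solve the underlying constrained optimization.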
Continuous quality control of the blood sampling procedure using a structured observation scheme
Seemann, Tine Lindberg; Nybo, Mads
2016-01-01
Introduction: An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. Materials and methods: The questionnaire from the EFLM Working Group for the Preanalytical Phase was adapted to local procedures. A pilot study of three months duration was conducted. Based on this, corrective actions were implemented and a follow-up study was conducted. All phlebotomists at the Department of Clinical Biochemistry and Pharmacology were observed. Three blood collections by each phlebotomist were observed at each session conducted at the phlebotomy ward and the hospital wards, respectively. Error frequencies were calculated for the phlebotomy ward and the hospital wards and for the two study phases. Results: A total of 126 blood drawings by 39 phlebotomists were observed in the pilot study, while 84 blood drawings by 34 phlebotomists were observed in the follow-up study. In the pilot study, the three major error items were hand hygiene (42% error), mixing of samples (22%), and order of draw (21%). Minor significant differences were found between the two settings. After focus on the major aspects, the follow-up study showed significant improvement for all three items at both settings (P < 0.01, P < 0.01, and P = 0.01, respectively). Conclusion: Continuous quality control of the phlebotomy procedure revealed a number of items not conducted in compliance with the local phlebotomy guideline. It supported significant improvements in the adherence to the recommended phlebotomy procedures and facilitated documentation of the phlebotomy quality. PMID:27812302
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between
Tan, Sirui; Huang, Lianjie
2014-11-01
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial, not temporal, dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling that controls dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function that minimizes the relative errors of the phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than optimized schemes using the standard stencil while achieving similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent for similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
Inhibition of viscous fluid fingering: A variational scheme for optimal flow rates
NASA Astrophysics Data System (ADS)
Miranda, Jose; Dias, Eduardo; Alvarez-Lacalle, Enrique; Carvalho, Marcio
2012-11-01
Conventional viscous fingering flow in radial Hele-Shaw cells employs a constant injection rate, resulting in the emergence of branched interfacial shapes. The search for mechanisms to prevent the development of these bifurcated morphologies is relevant to a number of areas in science and technology. A challenging problem is how best to choose the pumping rate in order to restrain growth of interfacial amplitudes. We use an analytical variational scheme to look for the precise functional form of such an optimal flow rate. We find it increases linearly with time in a specific manner so that interface disturbances are minimized. Experiments and nonlinear numerical simulations support the effectiveness of this particularly simple, but not at all obvious, pattern controlling process. J.A.M., E.O.D. and M.S.C. thank CNPq/Brazil for financial support. E.A.L. acknowledges support from Secretaria de Estado de IDI Spain under project FIS2011-28820-C02-01.
Gizaw, S; van Arendonk, J A M; Valle-Zárate, A; Haile, A; Rischkowsky, B; Dessie, T; Mwai, A O
2014-10-01
A simulation study was conducted to optimize a cooperative village-based sheep breeding scheme for Menz sheep of Ethiopia. Genetic gains and profits were estimated under nine levels of farmers' participation and three scenarios of controlled breeding achieved in the breeding programme, as well as under three cooperative flock sizes, ewe to ram mating ratios and durations of ram use for breeding. Under fully controlled breeding, that is, when there is no gene flow between participating (P) and non-participating (NP) flocks, profits ranged from Birr 36.9 at 90% of participation to Birr 21.3 at 10% of participation. However, genetic progress was not affected adversely. When there was gene flow from the NP to P flocks, profits declined from Birr 28.6 to Birr -3.7 as participation declined from 90 to 10%. Under the two-way gene flow model (i.e. when P and NP flocks are herded mixed in communal grazing areas), NP flocks benefited from the genetic gain achieved in the P flocks, but the benefits declined sharply when participation declined beyond 60%. Our results indicate that a cooperative breeding group can be established with as low as 600 breeding ewes mated at a ratio of 45 ewes to one ram, and the rams being used for breeding for a period of two years. This study showed that farmer cooperation is crucial to effect genetic improvement under smallholder low-input sheep farming systems.
Sampling of soil moisture fields and related errors: implications to the optimal sampling design
NASA Astrophysics Data System (ADS)
Yoo, Chulsang
Adequate knowledge of soil moisture storage, as well as evaporation and transpiration at the land surface, is essential to understanding and predicting the reciprocal influences between land surface processes and weather and climate. Traditional techniques for soil moisture measurement are ground-based, but space-based sampling is becoming available due to recent improvements in remote sensing techniques. A fundamental question regarding soil moisture observation is how to estimate the sampling error for a given sampling scheme [G.R. North, S. Nakamoto, J. Atmos. Ocean. Tech. 6 (1989) 985-992; G. Kim, J.B. Valdes, G.R. North, C. Yoo, J. Hydrol., submitted]. In this study we provide the formalism for estimating the sampling errors for ground-based sensors and space-based sensors used both separately and together. A model for soil moisture dynamics by D. Entekhabi, I. Rodriguez-Iturbe [Adv. Water Res. 17 (1994) 35-45] is introduced, and an example application is given for the Little Washita basin using the Washita '92 soil moisture data. We found that a ground-based sensor network is ineffective for large- or continental-scale observation and should be limited to small-scale intensive observations, such as for a preliminary study.
Optimization of Evans blue quantitation in limited rat tissue samples
NASA Astrophysics Data System (ADS)
Wang, Hwai-Lee; Lai, Ted Weita
2014-10-01
Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration based on a standard 96-well plate reader. First, ethanol ensured a consistent optic path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements as a result of biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful in the detection of EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting.
Optimal CCD readout by digital correlated double sampling
NASA Astrophysics Data System (ADS)
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not yet been modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of the continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can easily be optimized to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results, obtained with both time- and frequency-domain noise generation models for completeness, are presented to validate the theory.
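The core operation of DCDS (averaging a burst of digitized reset-level samples and a burst of signal-level samples, then differencing the two means) can be sketched in a few lines. The pixel value, offset, noise level and sample count below are invented for illustration; the paper's contribution is the optimal weighting of such samples, which this uniform average ignores:

```python
import random
random.seed(0)

def dcds_estimate(reset_samples, signal_samples):
    """Digital CDS: subtract the averaged reset level from the averaged
    signal level, cancelling the offset common to both bursts."""
    return (sum(signal_samples) / len(signal_samples)
            - sum(reset_samples) / len(reset_samples))

# Toy pixel read: a true charge of 100 ADU rides on an unknown reset
# offset; every digitized sample carries white readout noise.
offset, charge, sigma, n = 500.0, 100.0, 5.0, 64
reset = [offset + random.gauss(0.0, sigma) for _ in range(n)]
signal = [offset + charge + random.gauss(0.0, sigma) for _ in range(n)]
estimate = dcds_estimate(reset, signal)
```

With white noise of standard deviation sigma per sample, the variance of the estimate falls as 2*sigma**2/n, which is why oversampling pays off until correlated noise begins to dominate.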
NSECT sinogram sampling optimization by normalized mutual information
NASA Astrophysics Data System (ADS)
Viana, Rodrigo S.; Galarreta-Valverde, Miguel A.; Mekkaoui, Choukri; Yoriyaz, Hélio; Jackowski, Marcel P.
2015-03-01
Neutron Stimulated Emission Computed Tomography (NSECT) is an emerging noninvasive imaging technique that measures the distribution of isotopes in biological tissue using the fast-neutron inelastic scattering reaction. As a high-energy neutron beam illuminates the sample, the excited nuclei emit gamma rays whose energies are unique to the emitting nuclei. Tomographic images of each element in the spectrum can then be reconstructed, using a first-generation tomographic scan, to represent the spatial distribution of elements within the sample. NSECT's high radiation dose deposition, however, requires a sampling strategy that can yield maximum image quality under a reasonable radiation dose. In this work, we introduce an NSECT sinogram sampling technique based on the Normalized Mutual Information (NMI) of the reconstructed images. By applying the Radon Transform to the ground-truth image obtained from a carbon-based synthetic phantom, different NSECT sinogram configurations were simulated and compared using NMI as a similarity measure. The proposed methodology was also applied to NSECT images acquired using MCNP5 Monte Carlo simulations of the same phantom to validate our strategy. Results show that NMI can be used to robustly predict the quality of the reconstructed NSECT images, leading to an optimal NSECT acquisition and a minimal absorbed dose to the patient.
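The similarity measure itself is easy to sketch: a histogram-based NMI between two equally sized intensity lists. The toy data and bin count below are our own choices; the study applies NMI to full reconstructed images:

```python
import random
from math import log2
from collections import Counter

def nmi(a, b, bins=8):
    """Normalized mutual information NMI = (H(A)+H(B))/H(A,B) between two
    equally sized intensity lists after uniform binning; 1 <= NMI <= 2."""
    lo, hi = min(a + b), max(a + b)
    width = (hi - lo) / bins or 1.0
    qa = [min(int((v - lo) / width), bins - 1) for v in a]
    qb = [min(int((v - lo) / width), bins - 1) for v in b]
    n = len(a)
    def entropy(counts):
        return -sum(c / n * log2(c / n) for c in counts.values())
    h_ab = entropy(Counter(zip(qa, qb)))
    return (entropy(Counter(qa)) + entropy(Counter(qb))) / h_ab if h_ab else 2.0

random.seed(0)
truth = [float(i % 7) for i in range(100)]           # toy ground-truth image
noisy = [v + random.gauss(0.0, 2.0) for v in truth]  # degraded reconstruction
perfect, degraded = nmi(truth, truth), nmi(truth, noisy)
```

A configuration whose reconstruction scores higher NMI against the ground truth preserves more structure per unit dose, which is the ordering the paper exploits.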
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are widely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices with huge potential for applications involving measurement of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when it is subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and the DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes sample sensitivity is usually empirical and very time-consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities.
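The genetic-algorithm half of such a system can be sketched with a toy response surface standing in for the trained MLP. Everything below (the surrogate function, its optimum at dc_level = 80 and freq = 1.0, the parameter bounds and the GA settings) is invented for illustration:

```python
import random
random.seed(1)

def surrogate_sensitivity(dc_level, freq):
    """Stand-in for the trained MLP surrogate: a smooth toy surface whose
    single maximum (dc_level=80, freq=1.0) is an invented example."""
    return 10.0 - (dc_level - 80.0) ** 2 / 400.0 - (freq - 1.0) ** 2

def genetic_search(fitness, bounds, pop=40, gens=80):
    """Tiny real-coded GA: truncation selection, blend crossover and
    gaussian mutation; returns the best parameter vector found."""
    P = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]                  # keep the better half
        while len(elite) < pop:
            pa, pb = random.sample(elite[: pop // 2], 2)
            child = [min(max((x + y) / 2.0
                             + random.gauss(0.0, 0.05 * (hi - lo)), lo), hi)
                     for x, y, (lo, hi) in zip(pa, pb, bounds)]
            elite.append(child)
        P = elite
    return max(P, key=fitness)

best_dc, best_freq = genetic_search(lambda p: surrogate_sensitivity(*p),
                                    bounds=[(0.0, 200.0), (0.1, 5.0)])
```

In the real system the fitness calls would go to the trained network rather than a closed-form function; the GA itself is indifferent to the swap.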
Optimal probes for withdrawal of uncontaminated fluid samples
NASA Astrophysics Data System (ADS)
Sherwood, J. D.
2005-08-01
Withdrawal of fluid by a composite probe pushed against the face z =0 of a porous half-space z >0 is modeled assuming incompressible Darcy flow. The probe is circular, of radius a, with an inner sampling section of radius αa and a concentric outer guard probe αa
Jiang, Hai-ming; Xie, Kang; Wang, Ya-fei
2010-05-24
An effective pump scheme for the design of Raman fiber amplifiers with broadband, flat gain spectra is proposed. This approach uses a new shooting algorithm, based on a modified Newton-Raphson method and a contraction factor, to solve the two-point boundary-value problem of the coupled Raman equations more stably and efficiently. In combination with an improved particle swarm optimization method, which improves the efficiency and convergence rate by introducing a new parameter called the velocity acceptability probability, this scheme optimizes the wavelengths and power levels of the pumps quickly and accurately. Several broadband Raman fiber amplifiers in the C+L band with optimized pump parameters are designed. An amplifier with four pumps is designed to deliver an average on-off gain of 13.3 dB over a bandwidth of 80 nm, with a maximum in-band gain ripple of about ±0.5 dB.
NASA Astrophysics Data System (ADS)
Heckmann, Tobias; Gegg, Katharina; Becht, Michael
2013-04-01
Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared with heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that best explain the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim to find, empirically, an "optimal" sample size that avoids two problems. First, too large a sample will violate the independence assumption, because the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample-size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables selected, and hence the result of a single model calculation, will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
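The second problem (instability of the fitted model under repeated small samples) is easy to demonstrate generically. The sketch below fits a one-covariate logistic regression to repeated synthetic samples and compares the spread of the fitted slope at two sample sizes; the data-generating model, sample sizes and fitting routine are all invented for illustration and do not reproduce the study's terrain variables:

```python
import math
import random
random.seed(4)

def fit_slope(xs, ys, iters=150, lr=0.5):
    """One-covariate logistic regression fitted by plain gradient ascent
    on the average log-likelihood; returns the estimated slope."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            z = max(-30.0, min(30.0, b0 + b1 * x))  # clamp to avoid overflow
            p = 1.0 / (1.0 + math.exp(-z))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b1

def draw(n, slope=2.0, intercept=-1.0):
    """Synthetic event/non-event sample: event probability rises with x."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    ys = [1 if random.random() <
          1.0 / (1.0 + math.exp(-(intercept + slope * x))) else 0
          for x in xs]
    return xs, ys

def slope_spread(n, repeats=25):
    """Std-dev of the fitted slope across repeated samples of size n."""
    slopes = [fit_slope(*draw(n)) for _ in range(repeats)]
    m = sum(slopes) / repeats
    return (sum((s - m) ** 2 for s in slopes) / repeats) ** 0.5

spread_small, spread_large = slope_spread(40), slope_spread(400)
```

The spread at n = 40 should be several times that at n = 400, mirroring the argument for sampling enough non-event cells to stabilise variable selection.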
Improved scheme for Cross-track Infrared Sounder geolocation assessment and optimization
NASA Astrophysics Data System (ADS)
Wang, Likun; Zhang, Bin; Tremblay, Denis; Han, Yong
2017-01-01
An improved scheme for Cross-track Infrared Sounder (CrIS) geolocation assessment for all scan angles (from -48.5° to 48.5°) is developed in this study. The method uses spatially collocated radiance measurements from the Visible Infrared Imaging Radiometer Suite (VIIRS) image band I5 to evaluate the geolocation performance of the CrIS Sensor Data Records (SDR), taking advantage of the VIIRS band's high spatial resolution (375 m at nadir) and accurate geolocation. The basic idea is to perturb the CrIS line-of-sight vectors along the in-track and cross-track directions to find the position where the CrIS and VIIRS data match most closely. The perturbation angles at this best-matched position are then used to evaluate the CrIS geolocation accuracy. More importantly, the new method is capable of performing postlaunch on-orbit geometric calibration by optimizing the mapping angle parameters based on the assessment results, and can thus be extended to the CrIS sensors to follow on new satellites. Finally, the proposed method is employed to evaluate CrIS geolocation accuracy on the current Suomi National Polar-orbiting Partnership satellite. The error characteristics are revealed along the scan positions in the in-track and cross-track directions. Relatively large errors (~4 km) are found in the cross-track direction close to the end of the scan positions. With newly updated mapping angles, the geolocation accuracy is greatly improved for all scan positions (to less than 0.3 km). This aligns CrIS and VIIRS spatially and thus benefits applications that need combined CrIS and VIIRS measurements and products.
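The perturb-and-match idea can be sketched in miniature: shift one image against a reference over a small grid of trial offsets and keep the offset with the smallest mismatch. This is a toy integer-pixel version with an invented scene; the actual method perturbs line-of-sight vectors in angle space with subpixel precision:

```python
def shifted(grid, dr, dc):
    """Shift a 2-D list by (dr, dc), filling exposed borders with 0."""
    R, C = len(grid), len(grid[0])
    return [[grid[r - dr][c - dc] if 0 <= r - dr < R and 0 <= c - dc < C
             else 0.0 for c in range(C)] for r in range(R)]

def best_shift(ref, img, max_shift=3):
    """Grid-search integer (row, col) perturbations of img, returning the
    shift that minimizes the summed squared difference against ref."""
    R, C = len(ref), len(ref[0])
    best, best_err = (0, 0), float("inf")
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            cand = shifted(img, dr, dc)
            err = sum((cand[r][c] - ref[r][c]) ** 2
                      for r in range(R) for c in range(C))
            if err < best_err:
                best, best_err = (dr, dc), err
    return best

# Fine-resolution "VIIRS-like" reference scene and a mis-registered copy.
ref = [[float((r * 7 + c * 3) % 5) for c in range(16)] for r in range(16)]
img = shifted(ref, 2, -1)          # inject a 2-row, -1-column geolocation error
correction = best_shift(ref, img)  # recovers the opposite shift, (-2, 1)
```

The recovered correction is the perturbation that best re-registers the mis-geolocated image, which is exactly the quantity the assessment scheme turns into mapping-angle updates.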
NASA Astrophysics Data System (ADS)
Kristoffersen, Anders; Goa, Pål Erik
2011-09-01
The physiological noise in 3D image acquisition is shown to depend strongly on the sampling scheme. Five sampling schemes are considered: Linear, Centric, Segmented, Random and Tuned. Tuned acquisition means that data acquisition at k-space positions k and -k are separated by a specific time interval. We model physiological noise as a periodic temporal oscillation with arbitrary spatial amplitude in the physical object and develop a general framework to describe how this is rendered in the reconstructed image. Reconstructed noise can be decomposed into one component that is in phase with the signal (parallel) and one that is 90° out of phase (orthogonal). Only the former has a significant influence on the magnitude of the signal. The study focuses on fMRI using 3D EPI. Each k-space plane is acquired in a single shot, in a time much shorter than the period of the physiological noise. The above-mentioned sampling schemes are applied in the slow k-space direction, and noise propagates almost exclusively in this direction; the problem is then effectively one-dimensional. Numerical simulations and analytical expressions are presented. 3D noise measurements and 2D measurements with high temporal resolution are conducted. The measurements are performed under breath-hold to isolate the effect of cardiac-induced pulsatile motion. We compare the time-course stability of the sampling schemes and the extent to which noise propagates from a localized source into other parts of the imaging volume. Tuned and Linear acquisitions perform better than Centric, Segmented and Random.
NASA Astrophysics Data System (ADS)
Qiu, Yuzhuo
2013-04-01
The optimal weighting scheme and the role of coupling strength against load failures on symmetrically and asymmetrically coupled interdependent networks were investigated. The degree-based weighting scheme was extended to interdependent networks, with the flow dynamics dominated by global redistribution based on weighted betweenness centrality. Through contingency analysis of single-node removal, we demonstrated that an optimal weighting parameter still exists on interdependent networks, but it may shift compared with the case of isolated networks because of the breaking of symmetry. Moreover, symmetrically and asymmetrically coupled interdependent networks achieve robustness and a better cost configuration against the cascade of load failures induced by single-node removal more easily when the coupling strength is weaker. Our findings may have considerable generality for characterizing load-failure-induced cascading dynamics in real-world degree-based weighted interdependent networks.
Optimization for Peptide Sample Preparation for Urine Peptidomics
Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan; Dai, Hong; Qian, Weijun; Camp, David G.; Sarwal, Minnie M.
2014-02-25
when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis due to the optimized sample clean up that provided improved experimental inference from the confidently identified peptides.
NASA Astrophysics Data System (ADS)
Jeong, Younkoo; Jayanth, G. R.; Menq, Chia-Hsiang
2007-09-01
The control of tip-to-sample distance in atomic force microscopy (AFM) is achieved by controlling the vertical tip position of the AFM cantilever. In vertical tip-position control, the required z motion is commanded by laser reading of the vertical tip position in real time and might contain high-frequency components depending on the lateral scanning rate and the topographical variation of the sample. This paper presents a dual-actuator tip-motion control scheme that enables the AFM tip to track abrupt topographical variations. In the dual-actuator scheme, an additional magnetic mode actuator is employed to achieve high-bandwidth tip-motion control while the regular z scanner provides the necessary motion range. This added actuator serves to make the entire cantilever bandwidth available for tip positioning, and thus controls the tip-to-sample distance. A fast programmable electronics board was employed to realize the proposed dual-actuator control scheme, in which model cancellation algorithms were implemented to enlarge the bandwidth of the magnetic actuation and to compensate for the lightly damped dynamics of the cantilever. Experiments were conducted to illustrate the capabilities of the proposed dual-actuator tip-motion control in terms of response speed and travel range. It was shown that while the bandwidth of the regular z scanner was merely a small fraction of the cantilever's bandwidth, the dual-actuator control scheme led to a tip-motion control system whose bandwidth was comparable to that of the cantilever, whose dynamics were overdamped, and whose motion range was comparable to that of the z scanner.
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Interpreting PCB concentration... Â§ 761.79(b)(3) § 761.316 Interpreting PCB concentration measurements resulting from this sampling... concentration measured in that sample. If the sample surface concentration is not equal to or lower than...
Development of a Weather Radar Signal Simulator to Examine Sampling Rates and Scanning Schemes
2005-09-01
COMPRESSION AND RANGE AVERAGING AS A MEANS OF RAPIDLY OBTAINING INDEPENDENT SAMPLES... Limiting the number of samples required to achieve small variance when estimating average power calls for independent samples, meaning that the return
How old is this bird? The age distribution under some phase sampling schemes.
Hautphenne, Sophie; Massaro, Melanie; Taylor, Peter
2017-04-03
In this paper, we use a finite-state continuous-time Markov chain with one absorbing state to model an individual's lifetime. Under this model, the time of death follows a phase-type distribution, and the transient states of the Markov chain are known as phases. We then attempt to answer the simple question "What is the conditional age distribution of the individual, given its current phase?" We show that the answer depends on how we interpret the question and, in particular, on the phase observation scheme under consideration. We then apply our results to the computation of the age pyramid for the endangered Chatham Island black robin, Petroica traversi, during the monitoring period 2007-2014.
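How the conditional age distribution varies with the observed phase can be illustrated by direct simulation of a toy two-phase model. The rates and the census-style observation scheme below are invented for illustration, not taken from the black robin analysis:

```python
import random
random.seed(2)

def observe_phase(birth, census):
    """Simulate one life history from birth in phase 0 (juvenile); return
    the phase occupied at the census time, or None if already dead."""
    t, phase = birth, 0
    while True:
        if phase == 0:
            t += random.expovariate(1.0)   # total exit rate 0.8 + 0.2
            if t >= census:
                return 0
            phase = 1 if random.random() < 0.8 else None  # 0.8 -> adult
        elif phase == 1:
            t += random.expovariate(0.5)   # death rate in the adult phase
            if t >= census:
                return 1
            phase = None
        if phase is None:                  # absorbed (dead) before census
            return None

census, ages = 50.0, {0: [], 1: []}
for _ in range(20000):
    birth = random.uniform(0.0, census)    # a steady stream of births
    phase = observe_phase(birth, census)
    if phase is not None:
        ages[phase].append(census - birth)

mean_age = {ph: sum(v) / len(v) for ph, v in ages.items()}
```

Individuals observed in the later phase are, on average, markedly older, which is precisely the kind of phase-conditional age information the paper formalises for different observation schemes.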
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
NASA Astrophysics Data System (ADS)
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations on investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies: they optimize the expected impact of data on prediction confidence, or related objectives, prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulation, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
NASA Astrophysics Data System (ADS)
Chen, Zhou; Tong, Qiu-Nan; Zhang, Cong-Cong; Hu, Zhan
2015-04-01
Identification of acetone and its two isomers, and control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two sets of time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under irradiation by identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under irradiation by the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is effective and useful in studies of two molecules having common mass peaks, for which a traditional single mass spectrometer is unfeasible. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant No. 11374124).
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing natural, real-world scenes as we see them every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images by designing an efficient transform that reduces the redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvements in performance and complexity compared to recently proposed methods.
Anacker, Tony; Hill, J Grant; Friedrich, Joachim
2016-04-21
Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.
Steerable antenna with circular-polarization. 2. Selection of optimal scheme
Abranin, E.P.; Bazelyan, L.L.; Brazhenko, A.I.
1987-11-01
In order to study sporadic radio emission from the Sun, a polarimeter operating at 25 MHz was developed and constructed. It employs the steerable antenna array of the URAN-1 radio telescope. The results of numerical calculations of compensation schemes, intended for the emission (reception) of circularly polarized waves in an arbitrary direction with the help of crossed dipoles, are presented.
Warwicker, Jim
2004-10-01
Ionizable groups play critical roles in biological processes. Computation of pK(a)s is complicated by model approximations and multiple conformations. Calculated and experimental pK(a)s are compared for relatively inflexible active-site side chains, to develop an empirical model for hydration entropy changes upon charge burial. The modification is found to be generally small, but large for cysteine, consistent with small molecule ionization data and with partial charge distributions in ionized and neutral forms. The hydration model predicts significant entropic contributions for ionizable residue burial, demonstrated for components in the pyruvate dehydrogenase complex. Conformational relaxation in a pH-titration is estimated with a mean-field assessment of maximal side chain solvent accessibility. All ionizable residues interact within a low protein dielectric finite difference (FD) scheme, and more flexible groups also access water-mediated Debye-Hückel (DH) interactions. The DH method tends to match overall pH-dependent stability, while FD can be more accurate for active-site groups. Tolerance for side chain rotamer packing is varied, defining access to DH interactions, and the best fit with experimental pK(a)s obtained. The new (FD/DH) method provides a fast computational framework for making the distinction between buried and solvent-accessible groups that has been qualitatively apparent from previous work, and pK(a) calculations are significantly improved for a mixed set of ionizable residues. Its effectiveness is also demonstrated with computation of the pH-dependence of electrostatic energy, recovering favorable contributions to folded state stability and, in relation to structural genomics, with substantial improvement (reduction of false positives) in active-site identification by electrostatic strain.
A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.
Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani
2012-01-01
Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs that is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count, and average energy of both route and network. The method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) protocol in terms of energy consumption, balancing and efficiency.
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2011 CFR
2011-07-01
... composite is the measurement for the entire area. For example, when there is a composite of 10 standard wipe... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20 µg/100 cm2, not just the area in the 10 cm by 10 cm sampled areas. (c) For small surfaces having...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2012 CFR
2012-07-01
... composite is the measurement for the entire area. For example, when there is a composite of 10 standard wipe... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20 µg/100 cm2, not just the area in the 10 cm by 10 cm sampled areas. (c) For small surfaces having...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2014 CFR
2014-07-01
... composite is the measurement for the entire area. For example, when there is a composite of 10 standard wipe... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20 µg/100 cm2, not just the area in the 10 cm by 10 cm sampled areas. (c) For small surfaces having...
Forward flux sampling-type schemes for simulating rare events: efficiency analysis.
Allen, Rosalind J; Frenkel, Daan; ten Wolde, Pieter Rein
2006-05-21
We analyze the efficiency of several simulation methods which we have recently proposed for calculating rate constants for rare events in stochastic dynamical systems in or out of equilibrium. We derive analytical expressions for the computational cost of using these methods and for the statistical error in the final estimate of the rate constant for a given computational cost. These expressions can be used to determine which method to use for a given problem, to optimize the choice of parameters, and to evaluate the significance of the results obtained. We apply the expressions to the two-dimensional nonequilibrium rare event problem proposed by Maier and Stein [Phys. Rev. E 48, 931 (1993)]. For this problem, our analysis gives accurate quantitative predictions for the computational efficiency of the three methods.
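The basic machinery being analysed (an initial run that harvests crossings of the first interface, followed by stages of trial runs whose success fractions multiply into the rate estimate) can be sketched for a one-dimensional double-well toy problem. The potential, temperature, interfaces and basin definition below are illustrative choices, not the Maier-Stein system studied in the paper:

```python
import random
random.seed(3)

KT, DT = 0.2, 0.01

def step(x):
    """One overdamped Langevin step in the double well U(x) = (x**2 - 1)**2."""
    force = -4.0 * x * (x * x - 1.0)
    return x + force * DT + random.gauss(0.0, (2.0 * KT * DT) ** 0.5)

interfaces = [-0.6, -0.2, 0.2, 0.6]    # lambda_0 .. lambda_n
BASIN_A = -0.8                          # falling below this = back in basin A

# Stage 0: long run in basin A, storing configurations that cross lambda_0.
configs, x = [], -1.0
for _ in range(50000):
    xn = step(x)
    if xn > interfaces[-1]:             # reached basin B: restart in A (toy)
        xn = -1.0
    if x < interfaces[0] <= xn:
        configs.append(xn)
    x = xn
configs = configs[:200]                 # cap the number of trial starts

# Stages 1..n: fire a trial run from each stored configuration and count
# the fraction that reaches the next interface before returning to A.
probs = []
for i in range(len(interfaces) - 1):
    reached = []
    for x0 in configs:
        x = x0
        while BASIN_A < x < interfaces[i + 1]:
            x = step(x)
        if x >= interfaces[i + 1]:
            reached.append(x)
    probs.append(len(reached) / len(configs))
    configs = reached or configs        # (a real FFS run would resample)

rate_factor = 1.0                       # k_AB = flux(lambda_0) * prod(P_i)
for p in probs:
    rate_factor *= p
```

The product of the conditional probabilities, multiplied by the flux through the first interface (not computed here), estimates the rate constant; the paper's efficiency analysis asks how interface placement and trial allocation affect the cost and variance of exactly this product.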
Metcalfe, H; Milne, A E; Webster, R; Lark, R M; Murdoch, A J; Storkey, J
2016-02-01
Weeds tend to aggregate in patches within fields, and there is evidence that this is partly owing to variation in soil properties. Because the processes driving soil heterogeneity operate at various scales, the strength of the relations between soil properties and weed density would also be expected to be scale-dependent. Quantifying these effects of scale on weed patch dynamics is essential to guide the design of discrete sampling protocols for mapping weed distribution. We developed a general method that uses novel within-field nested sampling and residual maximum-likelihood (reml) estimation to explore scale-dependent relations between weeds and soil properties. We validated the method using a case study of Alopecurus myosuroides in winter wheat. Using reml, we partitioned the variance and covariance into scale-specific components and estimated the correlations between the weed counts and soil properties at each scale. We used variograms to quantify the spatial structure in the data and to map variables by kriging. Our methodology successfully captured the effect of scale on a number of edaphic drivers of weed patchiness. The overall Pearson correlations between A. myosuroides and soil organic matter and clay content were weak and masked the stronger correlations at >50 m. Knowing how the variance was partitioned across the spatial scales, we optimised the sampling design to focus sampling effort at those scales that contributed most to the total variance. The methods have the potential to guide patch spraying of weeds by identifying areas of the field that are vulnerable to weed establishment.
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for the cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met at the lowest possible BMP implementation cost. A genetic algorithm (GA) and NIMS were both applied and compared. NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach a near-optimal solution. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.
Estimating optimal sampling unit sizes for satellite surveys
NASA Technical Reports Server (NTRS)
Hallum, C. R.; Perry, C. R., Jr.
1984-01-01
This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that effect minimal costs. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis while conservatively guarding against expected component error variances.
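The core trade-off, sampling error variance falling with unit size while nonsampling error variance grows, reduces to a one-dimensional minimization once the two variance models are in closed form. A minimal sketch (the functional forms and constants below are illustrative, not the paper's fitted models):

```python
def total_variance(s, a=4.0, b=0.5, c=0.1):
    """Illustrative model: sampling error variance ~ a/s (Jessen/Cochran-style
    decay with unit size s), a conservative nonsampling error variance ~ c*s
    (larger units accumulate more registration/measurement error), plus a
    floor b. Total error variance is their sum."""
    return a / s + b + c * s

def best_unit_size(candidates):
    """Grid search for the unit size with minimal total error variance."""
    return min(candidates, key=total_variance)

sizes = [s / 10 for s in range(1, 201)]   # candidate unit sizes 0.1 .. 20.0
s_opt = best_unit_size(sizes)
```

With these constants the analytic optimum is at sqrt(a/c) ≈ 6.32, and the grid search lands on the nearest candidate.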
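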
Ricci, M; Sciarrino, F; Sias, C; De Martini, F
2004-01-30
By a significant modification of the standard protocol of quantum state teleportation, two processes "forbidden" by quantum mechanics in their exact form, the universal NOT gate and the universal optimal quantum cloning machine, have been implemented contextually and optimally by a fully linear method. In particular, the first experimental demonstration of the tele-UNOT gate, a novel quantum information protocol, has been reported. The experimental results are found in full agreement with theory.
Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization than when using parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for the parameterization of diagnostic carbon flux models.
Sampling Policy that Guarantees Reliability of Optimal Policy in Reinforcement Learning
NASA Astrophysics Data System (ADS)
Senda, Kei; Iwasaki, Yoshimitsu; Fujii, Shinji
This study defines certification sampling, which guarantees with a specified reliability that the optimal policy derived from an estimated transition probability is correct with respect to the real transition probability. It then discusses a sampling policy that efficiently achieves certification sampling, as follows. The transition probability is estimated by sampling, and the estimate yields an optimal policy. In parallel, the method calculates the desired accuracy of the estimated transition probability that is necessary to guarantee that this optimal policy is correct. The proposed sampling policy efficiently achieves certification sampling with the desired accuracy of the estimated transition probability. The method is efficient in the number of samples because it automatically selects the states and actions to be sampled and stops sampling when the accuracy condition is satisfied.
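One way to see the "desired accuracy" step: a Hoeffding bound gives the number of samples of a transition needed so that the empirical probability is within ε of the truth with reliability 1 − δ. A minimal sketch (the generic bound only, not the paper's policy-dependent stopping criterion):

```python
import math

def samples_needed(eps, delta):
    """Smallest n satisfying 2*exp(-2*n*eps**2) <= delta, i.e. the Hoeffding
    guarantee P(|p_hat - p| >= eps) <= delta for a Bernoulli transition
    probability estimated from n i.i.d. samples."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# Estimate a transition probability to within 0.1 with 95% reliability
n = samples_needed(eps=0.1, delta=0.05)
```

Tightening the accuracy requirement inflates the sample count quadratically in 1/ε, which is why selectively sampling only the states and actions that matter pays off.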
TOMOGRAPHIC RECONSTRUCTION OF DIFFUSION PROPAGATORS FROM DW-MRI USING OPTIMAL SAMPLING LATTICES
Ye, Wenxing; Entezari, Alireza; Vemuri, Baba C.
2010-01-01
This paper exploits the power of optimal sampling lattices in tomography based reconstruction of the diffusion propagator in diffusion weighted magnetic resonance imaging (DWMRI). Optimal sampling leads to increased accuracy of the tomographic reconstruction approach introduced by Pickalov and Basser [1]. Alternatively, the optimal sampling geometry allows for further reducing the number of samples while maintaining the accuracy of reconstruction of the diffusion propagator. The optimality of the proposed sampling geometry comes from the information theoretic advantages of sphere packing lattices in sampling multidimensional signals. These advantages are in addition to those accrued from the use of the tomographic principle used here for reconstruction. We present comparative results of reconstructions of the diffusion propagator using the Cartesian and the optimal sampling geometry for synthetic and real data sets. PMID:20596298
NASA Astrophysics Data System (ADS)
Kala, J.; De Kauwe, M. G.; Pitman, A. J.; Lorenz, R.; Medlyn, B. E.; Wang, Y.-P.; Lin, Y.-S.; Abramowitz, G.
2015-12-01
We implement a new stomatal conductance scheme, based on the optimality approach, within the Community Atmosphere Biosphere Land Exchange (CABLEv2.0.1) land surface model. Coupled land-atmosphere simulations are then performed using CABLEv2.0.1 within the Australian Community Climate and Earth Systems Simulator (ACCESSv1.3b) with prescribed sea surface temperatures. As in most land surface models, the default stomatal conductance scheme only accounts for differences in model parameters in relation to the photosynthetic pathway, but not in relation to plant functional types. The new scheme allows model parameters to vary by plant functional type, based on a global synthesis of observations of stomatal conductance under different climate regimes over a wide range of species. We show that the new scheme reduces the latent heat flux from the land surface over the boreal forests during the Northern Hemisphere summer by 0.5-1.0 mm day-1. This leads to warmer daily maximum and minimum temperatures by up to 1.0 °C and warmer extreme maximum temperatures by up to 1.5 °C. These changes generally improve the climate model's climatology of warm extremes and reduce existing biases by 10-20 %. The bias in minimum temperatures is, however, degraded, but overall this is outweighed by the improvement in maximum temperatures, as there is a net improvement in the diurnal temperature range in this region. In other regions, such as parts of South and North America, where ACCESSv1.3b has known large positive biases in both maximum and minimum temperatures (~ 5 to 10 °C), the new scheme worsens these biases by up to 1 °C. We conclude that, although several large biases remain in ACCESSv1.3b for temperature extremes, the improvements in the global climate model over large parts of the boreal forests during the Northern Hemisphere summer, which result from the new stomatal scheme constrained by a global synthesis of experimental data, provide a valuable advance in the long-term development
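Optimality-based stomatal schemes of the kind described here are commonly of the Medlyn et al. (2011) form, in which conductance responds to assimilation, CO2 concentration, and vapour pressure deficit, with the slope parameter g1 varying by plant functional type. A sketch of that functional form, assuming it here (the parameter values below are illustrative, not CABLE's):

```python
import math

def stomatal_conductance(A, Ca, D, g0=0.0, g1=4.0):
    """Medlyn-form optimal stomatal conductance (mol m-2 s-1).
    A:  net assimilation (umol m-2 s-1)
    Ca: CO2 mole fraction at the leaf surface (umol mol-1)
    D:  vapour pressure deficit (kPa)
    g1: slope parameter (kPa^0.5), varying by plant functional type."""
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca

gs = stomatal_conductance(A=10.0, Ca=400.0, D=1.0)   # -> 0.2 mol m-2 s-1
```

Because g1 enters through 1 + g1/sqrt(D), letting it vary by plant functional type changes how strongly conductance (and hence latent heat flux) is throttled as the air dries.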
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of the relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible, since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor, since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization (MFO) algorithm designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic and computational-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R.; Afshar, Baharak; Underwood, Anthony; Harrison, Timothy G.
2016-01-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current “gold standard” typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard “typing panel,” previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. PMID:27280420
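The indices of discrimination quoted above are conventionally computed as Simpson's index of diversity over the typing partition: the probability that two randomly drawn isolates belong to different types. A minimal sketch (the standard formula, not the ESGLI pipeline):

```python
def discrimination_index(type_counts):
    """Simpson's index of discrimination for a typing scheme:
    D = 1 - sum n_i*(n_i - 1) / (N*(N - 1)),
    where n_i is the number of isolates of type i and N the total."""
    N = sum(type_counts)
    return 1.0 - sum(n * (n - 1) for n in type_counts) / (N * (N - 1))

# Four isolates split into two types of two each
D = discrimination_index([2, 2])   # -> 2/3
```

A scheme that places every isolate in its own type scores 1.0; one that lumps them all together scores 0.0, which is why the SNP-based method (0.999) outperforms SBT (0.940) here.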
Optimal complexity scalable H.264/AVC video decoding scheme for portable multimedia devices
NASA Astrophysics Data System (ADS)
Lee, Hoyoung; Park, Younghyeon; Jeon, Byeungwoo
2013-07-01
Limited computing resources in portable multimedia devices are an obstacle in real-time video decoding of high resolution and/or high quality video contents. Ordinary H.264/AVC video decoders cannot decode video contents that exceed the limits set by their processing resources. However, in many real applications especially on portable devices, a simplified decoding with some acceptable degradation may be desirable instead of just refusing to decode such contents. For this purpose, a complexity-scalable H.264/AVC video decoding scheme is investigated in this paper. First, several simplified methods of decoding tools that have different characteristics are investigated to reduce decoding complexity and consequential degradation of reconstructed video. Then a complexity scalable H.264/AVC decoding scheme is designed by selectively combining effective simplified methods to achieve the minimum degradation. Experimental results with the H.264/AVC main profile bitstream show that its decoding complexity can be scalably controlled, and reduced by up to 44% without subjective quality loss.
A global earthquake discrimination scheme to optimize ground-motion prediction equation selection
Garcia, Daniel; Wald, David J.; Hearne, Michael
2012-01-01
We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
Optimization of sample size in controlled experiments: the CLAST rule.
Botella, Juan; Ximénez, Carmen; Revuelta, Javier; Suero, Manuel
2006-02-01
Sequential rules are explored in the context of null hypothesis significance testing. Several studies have demonstrated that the fixed-sample stopping rule, in which the sample size used by researchers is determined in advance, is less practical and less efficient than sequential stopping rules. It is proposed that a sequential stopping rule called CLAST (composite limited adaptive sequential test) is a superior variant of COAST (composite open adaptive sequential test), a sequential rule proposed by Frick (1998). Simulation studies are conducted to test the efficiency of the proposed rule in terms of sample size and power. Two statistical tests are used: the one-tailed t test of mean differences with two matched samples, and the chi-square independence test for twofold contingency tables. The results show that the CLAST rule is more efficient than the COAST rule and reflects more realistically the practice of experimental psychology researchers.
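The CLAST rule can be sketched as: test after each batch of observations, reject H0 if the p value falls below a lower criterion, accept if it rises above an upper criterion, and, unlike the open-ended COAST, stop regardless once a sample-size limit is reached. A minimal sketch with a one-sample z approximation standing in for the t test (the criteria and batch sizes are illustrative):

```python
import itertools
import math
import statistics

def one_tailed_p(data):
    """Normal-approximation p value for H0: mean <= 0 (one-tailed)."""
    n = len(data)
    z = statistics.fmean(data) / (statistics.stdev(data) / math.sqrt(n))
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def clast(sample_fn, alpha_low=0.01, alpha_high=0.36,
          n_init=10, n_step=5, n_max=100):
    """Composite limited adaptive sequential test (sketch)."""
    data = [sample_fn() for _ in range(n_init)]
    while True:
        p = one_tailed_p(data)
        if p <= alpha_low:
            return "reject", len(data)
        if p >= alpha_high:
            return "accept", len(data)
        if len(data) >= n_max:          # CLAST's limit; COAST would keep going
            return "undecided", len(data)
        data.extend(sample_fn() for _ in range(n_step))

# Strong, low-noise effect: the rule stops at the initial sample size
stream = itertools.cycle([1.2, 0.8, 1.0, 1.1, 0.9]).__next__
decision, n_used = clast(stream)
```

The efficiency gain over a fixed-sample rule comes from early stopping on clear-cut data, while the n_max cap bounds the cost on ambiguous data.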
Optimal Scheme for Search State Space and Scheduling on Multiprocessor Systems
NASA Astrophysics Data System (ADS)
Youness, Hassan A.; Sakanushi, Keishi; Takeuchi, Yoshinori; Salem, Ashraf; Wahdan, Abdel-Moneim; Imai, Masaharu
A scheduling algorithm aims to minimize the overall execution time of a program by properly allocating and arranging the execution order of its tasks on the core processors such that the precedence constraints among the tasks are preserved. In this paper, we present a new scheduling algorithm that uses geometric analysis of the task precedence graph (TPG) based on the A* search technique, together with a computationally efficient cost function for guiding the search with reduced complexity and with pruning techniques, to produce an optimal solution for the allocation/scheduling problem of a parallel application on parallel and multiprocessor architectures. The main goal of this work is to significantly reduce the search space and achieve an optimal or near-optimal solution. We applied the algorithm to the general task graph problems used in most related work and obtained optimal schedules with a small number of states. The proposed algorithm reduced the exhaustive search space by at least 50%. The viability and potential of the proposed algorithm are demonstrated by an illustrative example.
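An optimal A* search over partial schedules is involved; as a point of reference, the greedy list-scheduling baseline such methods improve on fits in a few lines: prioritize tasks by bottom level (longest path to an exit task, including the task's own weight) and start each task on the earliest-available processor once its predecessors finish. A sketch with an illustrative task graph (this heuristic is not guaranteed optimal in general):

```python
def list_schedule(w, succ, n_proc=2):
    """Greedy list scheduling of a weighted DAG. Sorting by bottom level is a
    valid topological order when all weights are positive, since every
    predecessor has a strictly larger bottom level than its successors."""
    pred = {t: [] for t in w}
    for t, ss in succ.items():
        for s in ss:
            pred[s].append(t)

    blevel = {}
    def bl(t):
        if t not in blevel:
            blevel[t] = w[t] + max((bl(s) for s in succ.get(t, [])), default=0)
        return blevel[t]

    finish, proc_free = {}, [0.0] * n_proc
    for t in sorted(w, key=bl, reverse=True):
        ready = max((finish[p] for p in pred[t]), default=0.0)
        i = min(range(n_proc), key=lambda k: proc_free[k])
        start = max(ready, proc_free[i])
        finish[t] = start + w[t]
        proc_free[i] = finish[t]
    return finish, max(finish.values())

# a -> c, b -> c, c -> d on two processors
w = {"a": 2, "b": 3, "c": 2, "d": 4}
succ = {"a": ["c"], "b": ["c"], "c": ["d"]}
finish, makespan = list_schedule(w, succ)
```

On this graph the heuristic happens to hit the critical-path lower bound (b, c, d: 3 + 2 + 4 = 9), so the schedule is optimal; search-based methods like the paper's A* are needed when greedy choices go wrong.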
NASA Astrophysics Data System (ADS)
da Jornada, Felipe H.; Qiu, Diana Y.; Louie, Steven G.
2017-01-01
First-principles calculations based on many-electron perturbation theory methods, such as the ab initio GW and GW plus Bethe-Salpeter equation (GW-BSE) approach, are reliable ways to predict quasiparticle and optical properties of materials, respectively. However, these methods involve more care in treating the electron-electron interaction and are considerably more computationally demanding when applied to systems with reduced dimensionality, since the electronic confinement leads to a slower convergence of sums over the Brillouin zone due to a much more complicated screening environment that manifests in the "head" and "neck" elements of the dielectric matrix. Here we present two schemes to sample the Brillouin zone for GW and GW-BSE calculations: the nonuniform neck subsampling method and the clustered sampling interpolation method, which can respectively be used for a family of single-particle problems, such as GW calculations, and for problems involving the scattering of two-particle states, such as when solving the BSE. We tested these methods on several few-layer semiconductors and graphene and show that they perform a much more efficient sampling of the Brillouin zone and yield two to three orders of magnitude reduction in the computer time. These two methods can be readily incorporated into several ab initio packages that compute electronic and optical properties through the GW and GW-BSE approaches.
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method, which guarantees great accuracy at small wavenumbers, while retaining the property of the MA method that the numerical errors stay within a limited bound. Thus, it leads to great accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional scheme for elastic modeling.
Optimal block sampling of routine, non-tumorous gallbladders.
Wong, Newton Acs
2017-03-08
Gallbladders are common specimens in routine histopathological practice and there is, at least in the United Kingdom and Australia, national guidance on how to sample gallbladders without macroscopically evident focal lesions/tumours (hereafter referred to as non-tumorous gallbladders).(1) Nonetheless, this author has seen considerable variation in the number of blocks used and the parts of the gallbladder sampled, even within one histopathology department. The recently re-issued 'Tissue pathways for gastrointestinal and pancreatobiliary pathology' from the Royal College of Pathologists (RCPath) first recommends sampling of the cystic duct margin and "at least one section each of neck, body and any focal lesion".(1) This recommendation is referenced by a textbook chapter which itself proposes that "cross-sections of the gallbladder fundus and lateral wall should be submitted, along with the sections from the neck of the gallbladder and cystic duct, including its margin".(2)
Optimized design and analysis of sparse-sampling FMRI experiments.
Perrachione, Tyler K; Ghosh, Satrajit S
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
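The "physiologically informed model" recommendation amounts to convolving the stimulus train with a canonical hemodynamic response function (HRF) and evaluating the result only at the sparse acquisition times. A minimal sketch using the common double-gamma canonical HRF shape (parameter values are the usual canonical defaults, not the paper's fitted models):

```python
import math

def hrf(t, a1=6.0, a2=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma HRF: a gamma density peaking near 5 s minus a
    scaled, later gamma density modeling the post-stimulus undershoot."""
    if t <= 0:
        return 0.0
    g = lambda a: t ** (a - 1) * math.exp(-t) / math.gamma(a)
    return g(a1) - ratio * g(a2)

def sparse_predictions(stim_times, tr_times):
    """Predicted BOLD amplitude at sparse acquisition times: the stimulus
    train convolved with the HRF, sampled only where volumes are acquired."""
    return [sum(hrf(t - s) for s in stim_times if s < t) for t in tr_times]

# Stimuli every 2 s; volumes acquired sparsely, one per 10 s silent-delay TR
y = sparse_predictions(stim_times=[0, 2, 4, 6, 8], tr_times=[10, 20, 30])
```

Fitting sparse data against this convolved regressor, rather than a boxcar, is what reduces model error; a high stimulation rate keeps the sampled response (y[0] here) large relative to the post-stimulus baseline.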
Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D
2013-10-01
Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations.
A combinatorial optimization scheme for parameter structure identification in ground water modeling.
Tsai, Frank T C; Sun, Ne-Zheng; Yeh, William W G
2003-01-01
This research develops a methodology for parameter structure identification in ground water modeling. For a given set of observations, parameter structure identification seeks to identify the parameter dimension, its corresponding parameter pattern and values. Voronoi tessellation is used to parameterize the unknown distributed parameter into a number of zones. Accordingly, the parameter structure identification problem is equivalent to finding the number and locations as well as the values of the basis points associated with the Voronoi tessellation. A genetic algorithm (GA) is allied with a grid search method and a quasi-Newton algorithm to solve the inverse problem. GA is first used to search for the near-optimal parameter pattern and values. Next, a grid search method and a quasi-Newton algorithm iteratively improve the GA's estimates. Sensitivities of state variables to parameters are calculated by the sensitivity-equation method. MODFLOW and MT3DMS are employed to solve the coupled flow and transport model as well as the derived sensitivity equations. The optimal parameter dimension is determined using criteria based on parameter uncertainty and parameter structure discrimination. Numerical experiments are conducted to demonstrate the proposed methodology, in which the true transmissivity field is characterized by either a continuous distribution or a distribution that can be characterized by zones. We conclude that the optimized transmissivity zones capture the trend and distribution of the true transmissivity field.
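The Voronoi parameterization at the core of the method maps each grid cell to its nearest basis point, so the unknown distributed parameter is described by the basis-point locations plus one value per zone. A minimal sketch of the zonation step only (the GA/grid-search/quasi-Newton search over basis points and the flow model are omitted):

```python
import numpy as np

def voronoi_zonation(cells, basis_pts, basis_vals):
    """Assign each grid cell the parameter value of its nearest basis point,
    i.e. evaluate a zoned field defined by a Voronoi tessellation."""
    cells = np.asarray(cells, dtype=float)
    basis_pts = np.asarray(basis_pts, dtype=float)
    d = np.linalg.norm(cells[:, None, :] - basis_pts[None, :, :], axis=2)
    return np.asarray(basis_vals, dtype=float)[d.argmin(axis=1)]

# Two transmissivity zones defined by two basis points (values illustrative)
cells = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 0.8]]
T = voronoi_zonation(cells,
                     basis_pts=[[0.0, 0.0], [1.0, 1.0]],
                     basis_vals=[10.0, 250.0])
```

The identification problem then reduces to choosing the number, locations, and values of the basis points, which is what the GA and quasi-Newton stages search over.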
Time-optimal path planning in dynamic flows using level set equations: theory and schemes
NASA Astrophysics Data System (ADS)
Lolla, Tapovan; Lermusiaux, Pierre F. J.; Ueckermann, Mattheus P.; Haley, Patrick J.
2014-10-01
We develop an accurate partial differential equation-based methodology that predicts the time-optimal paths of autonomous vehicles navigating in any continuous, strong, and dynamic ocean currents, obviating the need for heuristics. The goal is to predict a sequence of steering directions so that vehicles can best utilize or avoid currents to minimize their travel time. Inspired by the level set method, we derive and demonstrate that a modified level set equation governs the time-optimal path in any continuous flow. We show that our algorithm is computationally efficient and apply it to a number of experiments. First, we validate our approach through a simple benchmark application in a Rankine vortex flow for which an analytical solution is available. Next, we apply our methodology to more complex, simulated flow fields such as unsteady double-gyre flows driven by wind stress and flows behind a circular island. These examples show that time-optimal paths for multiple vehicles can be planned even in the presence of complex flows in domains with obstacles. Finally, we present and support through illustrations several remarks that describe specific features of our methodology.
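A crude discrete analogue of the continuous level-set computation is a shortest-time search on a grid, where the time to traverse an edge depends on the vehicle speed plus the current component along the heading. A minimal sketch using Dijkstra on a 4-connected grid with a uniform current (not the authors' PDE method; grid, speeds, and flow are illustrative):

```python
import heapq

def travel_time(start, goal, nx, ny, speed, flow):
    """Dijkstra over unit grid cells; edge time = 1 / (speed + current
    component along the move), with moves against too-strong currents barred."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        t, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            return t
        if t > dist[(x, y)]:
            continue
        for dx, dy in moves:
            cx, cy = x + dx, y + dy
            if not (0 <= cx < nx and 0 <= cy < ny):
                continue
            ground_speed = speed + flow[0] * dx + flow[1] * dy
            if ground_speed <= 0:
                continue   # cannot make headway against this current
            nt = t + 1.0 / ground_speed
            if nt < dist.get((cx, cy), float("inf")):
                dist[(cx, cy)] = nt
                heapq.heappush(pq, (nt, (cx, cy)))
    return float("inf")

# Uniform 1 m/s eastward current, 2 m/s vehicle: riding the current is faster
t_with = travel_time((0, 0), (3, 0), 4, 1, speed=2.0, flow=(1.0, 0.0))
t_against = travel_time((3, 0), (0, 0), 4, 1, speed=2.0, flow=(1.0, 0.0))
```

The level-set formulation in the paper achieves the same "utilize or avoid currents" behavior in continuous space and time-varying flows, without discretizing headings.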
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks, which can be formulated as a directed acyclic graph (DAG) problem. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG), and the super molecule to accelerate convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977
Sample of CFD optimization of a centrifugal compressor stage
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
An industrial centrifugal compressor stage is a complicated object for gas-dynamic design when the goal is to achieve maximum efficiency. The authors analyzed results of CFD performance modeling (NUMECA FINE/Turbo calculations). Performance prediction as a whole was modest or poor in all known cases; maximum efficiency prediction, on the contrary, was quite satisfactory. Flow structure in the stator elements was in good agreement with known data. An intermediate-type stage ("3D impeller + vaneless diffuser + return channel") was designed with principles well proven for stages with 2D impellers. CFD calculations of vaneless diffuser candidates demonstrated flow separation in the VLD with constant width. The candidate with a symmetrically tapered inlet part (b3/b2 = 0.73) appeared to be the best. Flow separation takes place in the crossover with the standard configuration; an alternative variant was developed and numerically tested. The obtained experience was formulated as corrected design recommendations. Several candidate impellers were compared by the maximum efficiency of the stage. The variant designed with standard gas-dynamic principles of blade cascade design appeared to be the best. Quasi-3D inviscid calculations were applied to optimize the blade velocity diagrams: non-incidence inlet, control of the diffusion factor, and control of the average blade load. The "geometric" principle of blade formation, with a linear change of blade angles along the blade length, appeared to be less effective. Candidates with different geometry parameters were designed with the 6th version of the math model and compared. The candidate with optimal parameters (number of blades, inlet diameter, and leading-edge meridian position) is 1% more effective than the stage of the initial design.
Optimization conditions of samples saponification for tocopherol analysis.
Souza, Aloisio Henrique Pereira; Gohara, Aline Kirie; Rodrigues, Ângela Claudia; Ströher, Gisely Luzia; Silva, Danielle Cristina; Visentainer, Jesuí Vergílio; Souza, Nilson Evelázio; Matsushita, Makoto
2014-09-01
A 2² full factorial design (two factors at two levels) with duplicates was performed to investigate the influence of agitation time (2 and 4 h) and KOH concentration (60% and 80% w/v) on the saponification of samples for the determination of α-, β- and γ+δ-tocopherols. The study used samples of peanuts (cultivar armadillo) produced and marketed in Maringá, PR. The factors % KOH and agitation time were significant, and an increase in their values contributed negatively to the responses. The interaction effect was not significant for the δ-tocopherol response, and its contribution to the other responses was positive but less than 10%. ANOVA and response-surface analysis showed that the most efficient saponification procedure used a 60% (w/v) KOH solution and an agitation time of 2 h.
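The effect estimates in a 2² design with duplicates can be computed directly from the cell means; a minimal sketch with hypothetical responses (not the paper's data):

```python
import numpy as np

def factorial_effects(y):
    """Main and interaction effects for a 2^2 full factorial design.
    y[a][b] holds replicate responses at level a of factor A (agitation
    time) and level b of factor B (%KOH), with 0 = low and 1 = high."""
    m = np.array([[np.mean(y[a][b]) for b in (0, 1)] for a in (0, 1)])
    effect_A = ((m[1, 0] + m[1, 1]) - (m[0, 0] + m[0, 1])) / 2
    effect_B = ((m[0, 1] + m[1, 1]) - (m[0, 0] + m[1, 0])) / 2
    effect_AB = ((m[0, 0] + m[1, 1]) - (m[0, 1] + m[1, 0])) / 2
    return effect_A, effect_B, effect_AB

# Hypothetical duplicate responses (not the paper's data): both factors
# at their high levels depress the response, with no interaction.
y = {0: {0: [10.0, 10.2], 1: [8.0, 8.2]},
     1: {0: [9.0, 9.2], 1: [7.0, 7.2]}}
eff_A, eff_B, eff_AB = factorial_effects(y)
```

Negative main effects, as here, correspond to the paper's finding that increasing either factor reduced the response.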
Optimizing analog-to-digital converters for sampling extracellular potentials.
Artan, N Sertac; Xu, Xiaoxiang; Shi, Wei; Chao, H Jonathan
2012-01-01
In neural implants, an analog-to-digital converter (ADC) provides the delicate interface between the analog signals generated by neurological processes and the digital signal processor tasked with interpreting these signals, for instance for epileptic seizure detection or limb control. In this paper, we propose a low-power ADC architecture for neural implants that process extracellular potentials. The proposed architecture uses the spike detector that is readily available on most of these implants in a closed loop with the ADC. The spike detector determines whether the current input signal is part of a spike or part of noise, and this decision adaptively sets the instantaneous sampling rate of the ADC. The proposed architecture can reduce the power consumption of a traditional ADC by 62% when sampling extracellular potentials, without any significant impact on spike detection accuracy.
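The closed-loop idea can be caricatured in a few lines: a threshold "spike detector" switches the sampler between full rate during spikes and a decimated rate during baseline noise. The signal, threshold, and rates below are all hypothetical, and the paper's detector and ADC control logic are more elaborate:

```python
import numpy as np

def adaptive_sample(signal, thresh, low_step=8):
    """Toy adaptive sampler: keep every sample while a threshold spike
    detector fires, otherwise keep every `low_step`-th sample only."""
    kept, i = [], 0
    while i < len(signal):
        kept.append(i)
        i += 1 if abs(signal[i]) >= thresh else low_step
    return kept

rng = np.random.default_rng(0)
sig = 0.05 * rng.standard_normal(1000)   # baseline noise
sig[300:320] += 1.0                      # one synthetic "spike"
idx = adaptive_sample(sig, thresh=0.5)
reduction = 1.0 - len(idx) / len(sig)    # fraction of conversions saved
```

Most conversions are saved in the baseline regions while the spike itself is sampled at full rate (aside from the first few samples before the decimated scan lands inside it).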
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning
2008-01-01
ranging from the income level to age, and her preference order over a set of products (e.g. movies in Netflix). The ranking task is to learn a mapping...learners in RankBoost. However, in both cases, the proposed strategy selects the samples which are estimated to produce a faster convergence from the...steps in Section 5. 2. Related Work A number of strategies have been proposed for active learning in the classification framework. Some of those center
Optimal weighting scheme for suppressing cascades and traffic congestion in complex networks.
Yang, Rui; Wang, Wen-Xu; Lai, Ying-Cheng; Chen, Guanrong
2009-02-01
This paper is motivated by the following two related problems in complex networks: (i) control of cascading failures and (ii) mitigation of traffic congestion. Both problems are of significant recent interest as they address, respectively, the security of and efficient information transmission on complex networks. Taking into account typical features of load distribution and weights in real-world networks, we have discovered an optimal solution to both problems. In particular, we provide numerical evidence and theoretical analysis that, by choosing a proper weighting parameter, a maximum level of robustness against cascades and traffic congestion can be achieved, which practically rids the network of occurrences of such catastrophic dynamics.
Inference for Optimal Dynamic Treatment Regimes using an Adaptive m-out-of-n Bootstrap Scheme
Chakraborty, Bibhas; Laber, Eric B.; Zhao, Yingqi
2013-01-01
Summary A dynamic treatment regime consists of a set of decision rules that dictate how to individualize treatment to patients based on available treatment and covariate history. A common method for estimating an optimal dynamic treatment regime from data is Q-learning, which involves nonsmooth operations on the data. This nonsmoothness causes standard asymptotic approaches to inference, such as the bootstrap or Taylor series arguments, to break down if applied without correction. Here, we consider the m-out-of-n bootstrap for constructing confidence intervals for the parameters indexing the optimal dynamic regime. We propose an adaptive choice of m and show that it produces asymptotically correct confidence sets under fixed alternatives. Furthermore, the proposed method has the advantage of being conceptually and computationally much simpler than competing methods possessing the same theoretical property. We provide an extensive simulation study comparing the proposed method with currently available inference procedures. The results suggest that the proposed method delivers nominal coverage while being less conservative than alternatives. The proposed methods are implemented in the qLearn R package, available on the Comprehensive R Archive Network (http://cran.r-project.org/). Analysis of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study is used as an illustrative example. PMID:23845276
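A generic percentile-type m-out-of-n bootstrap interval (not the paper's adaptive choice of m, which is tailored to Q-learning) might be sketched as follows; the data are synthetic:

```python
import numpy as np

def m_out_of_n_ci(x, stat, m, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-type m-out-of-n bootstrap confidence interval: resample
    m < n observations with replacement and rescale the bootstrap
    deviations by sqrt(m/n) back to the sqrt(n) convergence rate."""
    rng = np.random.default_rng(seed)
    n = len(x)
    theta = stat(x)
    boots = np.array([stat(rng.choice(x, size=m, replace=True))
                      for _ in range(n_boot)])
    dev = np.sqrt(m / n) * (boots - theta)
    q_lo, q_hi = np.quantile(dev, [alpha / 2, 1 - alpha / 2])
    return theta - q_hi, theta - q_lo

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=200)       # synthetic data
lo, hi = m_out_of_n_ci(x, np.mean, m=80)
```

For smooth statistics like the mean this reduces to an ordinary bootstrap up to rescaling; the benefit of m < n appears for nonsmooth functionals such as the Q-learning estimands in the paper.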
Comparison of Optimized Soft-Tissue Suppression Schemes for Ultra-short Echo Time (UTE) MRI
Li, Cheng; Magland, Jeremy F.; Rad, Hamidreza Saligheh; Song, Hee Kwon; Wehrli, Felix W.
2011-01-01
Ultra-short echo time (UTE) imaging with soft-tissue suppression reveals short-T2 components (typically hundreds of microseconds to milliseconds) ordinarily not captured or obscured by long-T2 tissue signals on the order of tens of milliseconds or longer. The technique therefore enables visualization and quantification of short-T2 proton signals such as those in highly collagenated connective tissues. This work compares the performance of the three most commonly used long-T2 suppression UTE sequences, i.e. echo subtraction (dual-echo UTE), saturation via dual-band saturation pulses (dual-band UTE), and inversion by adiabatic inversion pulses (IR-UTE) at 3T, via Bloch simulations and experimentally in vivo in the lower extremities of test subjects. For unbiased performance comparison, the acquisition parameters were optimized individually for each sequence to maximize short-T2 SNR and CNR between short- and long-T2 components. Results show that excellent short-T2 contrast is achieved with these optimized sequences. A combination of dual-band UTE with dual-echo UTE provides good short-T2 SNR and CNR with less sensitivity to B1 inhomogeneity. IR-UTE has the lowest short-T2 SNR efficiency but provides highly uniform short-T2 contrast and is well suited for imaging short-T2 species with relatively short T1, such as bone water. PMID:22161636
Müller, Hans-Helge; Pahl, Roman; Schäfer, Helmut
2007-12-01
We propose optimized two-stage designs for genome-wide case-control association studies, using a hypothesis-testing paradigm. To save genotyping costs, the complete marker set is genotyped in a sub-sample only (stage I). In stage II, the most promising markers are then genotyped in the remaining sub-sample. In recent publications, two-stage designs were proposed that minimize the overall genotyping costs. To achieve full design optimization, we additionally include sampling costs in both the cost function and the design optimization. The resulting optimal designs differ markedly from those optimized for genotyping costs only (partially optimized designs) and achieve considerable further cost reductions. Compared with partially optimized designs, fully optimized two-stage designs have a higher first-stage sample proportion. Furthermore, the increment of the sample size over the one-stage design, which is necessary in two-stage designs to compensate for the loss of power due to partial genotyping, is less pronounced for fully optimized two-stage designs. In addition, we address the scenario where the investigator wants to gain as much information as possible but is restricted by a budget. For this we develop two-stage designs that maximize the power under a cost constraint.
Optimized Sampling Strategies For Non-Proliferation Monitoring: Report
Kurzeja, R.; Buckley, R.; Werth, D.; Chiswell, S.
2015-10-20
Concentration data collected from the 2013 H-Canyon effluent reprocessing experiment were reanalyzed to improve the source term estimate. When errors in the model-predicted wind speed and direction were removed, the source term uncertainty was reduced to 30% of the mean. This explained the factor of 30 difference between the source term size derived from data at 5 km and 10 km downwind in terms of the time history of dissolution. The results show a path forward to develop a sampling strategy for quantitative source term calculation.
Optimizing fish sampling for fish - mercury bioaccumulation factors
Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.
2015-01-01
Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
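The length-standardization idea can be sketched as a regression of fish Hg on length evaluated at a common standard length; all numbers below are hypothetical, not the study's data:

```python
import numpy as np

def length_standardized_baf(length, hg_fish, hg_water, std_length):
    """Site BAF from a length-standardized fish-Hg estimate: regress Hg
    on length, predict Hg at a common standard length, and divide by
    the water Hg concentration (BAF = Hgfish / Hgwater)."""
    slope, intercept = np.polyfit(length, hg_fish, 1)
    hg_std = slope * std_length + intercept   # Hg at the standard length
    return hg_std / hg_water

# Hypothetical site data (lengths in mm, fish Hg in mg/kg, water Hg in mg/L)
length = np.array([250.0, 300.0, 350.0, 400.0, 450.0])
hg_fish = np.array([0.10, 0.14, 0.19, 0.22, 0.27])
baf = length_standardized_baf(length, hg_fish, hg_water=2e-7,
                              std_length=350.0)
```

Standardizing to a common length removes size-driven variability among sites, which is what reduced the BAF variance in the study.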
Roy, R; Sevick-Muraca, E
1999-05-10
The development of non-invasive, biomedical optical imaging from time-dependent measurements of near-infrared (NIR) light propagation in tissues depends upon two crucial advances: (i) the instrumental tools to enable photon "time-of-flight" measurement within rapid and clinically realistic times, and (ii) the computational tools enabling the reconstruction of interior tissue optical property maps from exterior measurements of photon "time-of-flight" or photon migration. In this contribution, the image reconstruction algorithm is formulated as an optimization problem in which an interior map of tissue optical properties of absorption and fluorescence lifetime is reconstructed from synthetically generated exterior measurements of frequency-domain photon migration (FDPM). The inverse solution is accomplished using a truncated Newton's method with trust region to match synthetic fluorescence FDPM measurements with those predicted by a finite element model. The computational overhead and error associated with computing the gradient numerically are minimized by using modified techniques of reverse automatic differentiation.
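A truncated Newton method with trust region is available in SciPy as `trust-ncg`, which solves each Newton step inexactly by conjugate gradients using only Hessian-vector products. Below is a toy linear inverse problem standing in for the FDPM reconstruction; the matrix forward model and parameter values are arbitrary illustrations, not a finite element model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy inverse problem: recover parameters mu from "measurements"
# y = A @ mu_true, with matrix A standing in for the forward model.
A = np.array([[3.0, 1.0], [1.0, 2.0], [0.5, 1.5]])
mu_true = np.array([0.4, 1.1])
y = A @ mu_true

def misfit(mu):
    """Least-squares data misfit 0.5 * ||A mu - y||^2."""
    r = A @ mu - y
    return 0.5 * r @ r

def grad(mu):
    return A.T @ (A @ mu - y)

# trust-ncg solves each Newton step inexactly by conjugate gradients,
# needing only Hessian-vector products (here H @ p = A^T A p).
res = minimize(misfit, x0=np.zeros(2), jac=grad, method='trust-ncg',
               hessp=lambda mu, p: A.T @ (A @ p))
```

In the actual reconstruction the gradient would come from reverse automatic differentiation of the finite element predictor rather than a closed form.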
A ground-state-directed optimization scheme for the Kohn-Sham energy.
Høst, Stinne; Jansík, Branislav; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Helgaker, Trygve
2008-09-21
Kohn-Sham density-functional calculations are used in many branches of science to obtain information about the electronic structure of molecular systems and materials. Unfortunately, the traditional method for optimizing the Kohn-Sham energy suffers from fundamental problems that may lead to divergence or, even worse, convergence to an energy saddle point rather than to the ground-state minimum--in particular, for the larger and more complicated electronic systems that are often studied by Kohn-Sham theory nowadays. We here present a novel method for Kohn-Sham energy minimization that does not suffer from the flaws of the conventional approach, combining reliability and efficiency with linear complexity. In particular, the proposed method converges by design to a minimum, avoiding the sometimes spurious solutions of the traditional method and bypassing the need to examine the structure of the provided solution.
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. With the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, the extrema of the current metamodel and the minimum points of a density function; more accurate metamodels are then obtained by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
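A stripped-down version of sequential metamodel refinement with radial basis functions: fit an RBF interpolant, add the interpolant's current minimizer as a new sample, and refit. This sketch uses only the extremum criterion (the paper additionally adds minimum points of a density function), and the test function is an arbitrary stand-in for an expensive simulation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize_scalar

def f(x):
    """'Expensive' 1-D test function standing in for a simulation."""
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

X = np.linspace(0.0, 1.0, 5)          # initial space-filling design
for _ in range(6):
    y = f(X)
    rbf = RBFInterpolator(X[:, None], y, kernel='thin_plate_spline')
    # new sample at the metamodel's current minimizer (extremum criterion)
    res = minimize_scalar(lambda t: rbf(np.array([[t]]))[0],
                          bounds=(0.0, 1.0), method='bounded')
    if np.min(np.abs(X - res.x)) < 1e-6:
        break                         # avoid duplicate sample points
    X = np.append(X, res.x)

best = X[np.argmin(f(X))]             # best sampled point so far
```

Each refit concentrates samples near the metamodel's extremum, which is where additional accuracy matters for optimization.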
Westfall, Jacob; Kenny, David A; Judd, Charles M
2014-10-01
Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
Optimal Sampling of a Reaction Coordinate in Molecular Dynamics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2005-01-01
Estimating how free energy changes with the state of a system is a central goal in applications of statistical mechanics to problems of chemical or biological interest. From these free energy changes it is possible, for example, to establish which states of the system are stable, what their probabilities are, and how the equilibria between these states are influenced by external conditions. Free energies are also of great utility in determining the kinetics of transitions between different states. A variety of methods have been developed to compute free energies of condensed-phase systems. Here, I will focus on one class of methods: those that allow for calculating free energy changes along one or several generalized coordinates in the system, often called reaction coordinates or order parameters. Considering that in almost all cases of practical interest a significant computational effort is required to determine free energy changes along such coordinates, it is hardly surprising that the efficiencies of different methods are of great concern. In most cases, the main difficulty is associated with the shape of the free energy along the reaction coordinate: if the free energy changes markedly along this coordinate, Boltzmann sampling of its different values becomes highly non-uniform. This, in turn, may have a considerable, detrimental effect on the performance of many methods for calculating free energies.
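The non-uniformity of Boltzmann sampling can be made concrete: a simple Metropolis walk on a double-well potential, with the free energy estimated from a histogram via F(x) = -kT ln P(x), concentrates samples in the wells and visits the barrier region rarely. A minimal sketch in arbitrary units:

```python
import numpy as np

kT = 1.0

def potential(x):
    """Double-well potential along the reaction coordinate (arbitrary units)."""
    return (x ** 2 - 1.0) ** 2

# Plain Metropolis sampling of the Boltzmann distribution exp(-U/kT)
rng = np.random.default_rng(2)
x, samples = 0.0, []
for _ in range(100_000):
    xp = x + rng.normal(0.0, 0.5)
    if rng.random() < np.exp(-(potential(xp) - potential(x)) / kT):
        x = xp
    samples.append(x)

# Free energy profile up to an additive constant: F(x) = -kT ln P(x)
hist, edges = np.histogram(samples[5_000:], bins=60, range=(-2, 2),
                           density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
F = -kT * np.log(np.maximum(hist, 1e-12))
F -= F.min()
```

With a higher barrier (in units of kT) the walker would rarely cross between wells, which is precisely why biased-sampling methods along the reaction coordinate are needed.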
NASA Astrophysics Data System (ADS)
Maity, Arnab; Padhi, Radhakant; Mallaram, Sanjeev; Mallikarjuna Rao, G.; Manickavasagam, M.
2016-10-01
A new nonlinear optimal and explicit guidance law is presented in this paper for launch vehicles propelled by solid motors. It can ensure very high terminal precision despite not having exact knowledge of the thrust-time curve a priori. The work was motivated by a carrier launch vehicle for a hypersonic mission, which demands an extremely narrow terminal accuracy window for successful initiation of the hypersonic vehicle's operation. The proposed explicit guidance scheme, which computes the optimal guidance command online, ensures the required stringent final conditions with high precision at the injection point. A key feature of the proposed guidance law is an innovative extension of the recently developed model predictive static programming guidance with flexible final time. A penalty function approach is also followed to meet the input and output inequality constraints throughout the vehicle trajectory. In this paper, the guidance law has been successfully validated in nonlinear six-degree-of-freedom simulation studies, with an inner-loop autopilot designed as well, which significantly enhances confidence in its usefulness. In addition to excellent nominal results, the proposed guidance law shows good robustness in perturbed cases.
NASA Astrophysics Data System (ADS)
Han, Mancheon; Lee, Choong-Ki; Choi, Hyoung Joon
Hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB) is a popular approach in real-material research because it allows one to treat non-density-density-type interactions. In conventional CT-HYB, one measures the Green's function and obtains the self-energy from the Dyson equation. Because this requires inverting statistically noisy data, the resulting self-energy is very sensitive to statistical noise, and the measurement is unreliable except at low frequencies. This error can be suppressed by measuring a special type of higher-order correlation function, an approach previously implemented for density-density-type interactions. With the help of the recently reported worm-sampling measurement, we developed an improved self-energy measurement scheme that can be applied to any type of interaction. As an illustration, we calculated the self-energy for a 3-orbital Hubbard-Kanamori-type Hamiltonian with our newly developed method. This work was supported by NRF of Korea (Grant No. 2011-0018306) and KISTI supercomputing center (Project No. KSC-2015-C3-039).
Gutiérrez-Cacciabue, Dolores; Teich, Ingrid; Poma, Hugo Ramiro; Cruz, Mercedes Cecilia; Balzarini, Mónica; Rajal, Verónica Beatriz
2014-01-01
Several recreational surface waters in Salta, Argentina, were selected to assess their quality. Seventy percent of the measurements exceeded at least one of the limits established by international legislation, rendering the waters unsuitable for their intended use. Multivariate techniques were applied to interpret these complex data. Due to the variability observed in the data, the Arenales River was divided in two, upstream and downstream, representing low- and high-pollution sites, respectively; Cluster Analysis supported that differentiation. The Arenales River downstream and Campo Alegre Reservoir were the most different environments, and the Vaqueros and La Caldera Rivers were the most similar. Canonical Correlation Analysis allowed exploration of correlations between physicochemical and microbiological variables, except in both parts of the Arenales River, and Principal Component Analysis revealed relationships among the 9 measured variables in all aquatic environments. Variable loadings showed that the Arenales River downstream was impacted by industrial and domestic activities, the Arenales River upstream was affected by agricultural activities, Campo Alegre Reservoir was disturbed by anthropogenic and ecological effects, and the La Caldera and Vaqueros Rivers were influenced by recreational activities. Discriminant Analysis identified the subgroups of variables responsible for seasonal and spatial variations. Enterococcus, dissolved oxygen, conductivity, E. coli, pH, and fecal coliforms are sufficient to spatially describe the quality of the aquatic environments. Regarding seasonal variations, dissolved oxygen, conductivity, fecal coliforms, and pH can be used to describe water quality during the dry season, and dissolved oxygen, conductivity, total coliforms, E. coli, and Enterococcus during the wet season. The use of multivariate techniques thus allowed monitoring tasks to be optimized and the costs involved to be minimized. PMID:25190636
Lonsinger, Robert C; Gese, Eric M; Dempsey, Steven J; Kluever, Bryan M; Johnson, Timothy R; Waits, Lisette P
2015-07-01
Noninvasive genetic sampling, or noninvasive DNA sampling (NDS), can be an effective monitoring approach for elusive, wide-ranging species at low densities. However, few studies have attempted to maximize sampling efficiency. We present a model for combining sample accumulation and DNA degradation to identify the most efficient (i.e. minimal cost per successful sample) NDS temporal design for capture-recapture analyses. We use scat accumulation and faecal DNA degradation rates for two sympatric carnivores, kit fox (Vulpes macrotis) and coyote (Canis latrans) across two seasons (summer and winter) in Utah, USA, to demonstrate implementation of this approach. We estimated scat accumulation rates by clearing and surveying transects for scats. We evaluated mitochondrial (mtDNA) and nuclear (nDNA) DNA amplification success for faecal DNA samples under natural field conditions for 20 fresh scats/species/season from <1-112 days. Mean accumulation rates were nearly three times greater for coyotes (0.076 scats/km/day) than foxes (0.029 scats/km/day) across seasons. Across species and seasons, mtDNA amplification success was ≥95% through day 21. Fox nDNA amplification success was ≥70% through day 21 across seasons. Coyote nDNA success was ≥70% through day 21 in winter, but declined to <50% by day 7 in summer. We identified a common temporal sampling frame of approximately 14 days that allowed species to be monitored simultaneously, further reducing time, survey effort and costs. Our results suggest that when conducting repeated surveys for capture-recapture analyses, overall cost-efficiency for NDS may be improved with a temporal design that balances field and laboratory costs along with deposition and degradation rates.
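The trade-off the authors formalize (longer accumulation intervals yield more scats, but older scats with harder-to-amplify DNA) can be sketched as a cost-per-successful-sample curve. All rates, costs, and the success curve below are hypothetical, not the paper's estimates:

```python
import numpy as np

def cost_per_success(interval_days, accum_rate, km_surveyed, amp_success,
                     field_cost_per_survey, lab_cost_per_sample):
    """Expected cost per successfully amplified faecal-DNA sample for a
    clear-and-resurvey design revisited every `interval_days` days.
    amp_success(age) is the amplification probability at scat age (days)."""
    n_scats = accum_rate * km_surveyed * interval_days
    ages = np.linspace(0.0, interval_days, 50)   # scat ages ~ uniform
    p = float(np.mean([amp_success(a) for a in ages]))
    total_cost = field_cost_per_survey + lab_cost_per_sample * n_scats
    return total_cost / (n_scats * p)

# Hypothetical success curve: high for a week, then linear decline
amp = lambda age: 0.9 if age <= 7 else max(0.0, 0.9 - 0.05 * (age - 7))

intervals = np.arange(7, 57, 7)                  # candidate revisit intervals
costs = [cost_per_success(t, accum_rate=0.076, km_surveyed=50.0,
                          amp_success=amp, field_cost_per_survey=500.0,
                          lab_cost_per_sample=30.0) for t in intervals]
best = int(intervals[int(np.argmin(costs))])     # cost-optimal interval (days)
```

Short intervals spread the fixed field cost over few scats, while long intervals collect many degraded ones; the minimum falls in between, analogous to the ~14-day frame the authors identify with their measured rates.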
NASA Astrophysics Data System (ADS)
Back, Pär-Erik
2007-04-01
A model is presented for estimating the value of information of sampling programs for contaminated soil. The purpose is to calculate the optimal number of samples when the objective is to estimate the mean concentration. A Bayesian risk-cost-benefit decision analysis framework is applied, and the approach is design-based. The model explicitly includes sample uncertainty at a complexity level that can be applied to practical contaminated-land problems with a limited amount of data. Prior information about the contamination level is modelled by probability density functions. The value of information is expressed in monetary terms. The most cost-effective sampling program is the one with the highest expected net value. The model was applied to a scrap yard in Göteborg, Sweden, contaminated by metals. The optimal number of samples was determined to be in the range of 16-18 for a remediation unit of 100 m². Sensitivity analysis indicates that the perspective of the decision-maker is important, and that the cost of failure and the future land use are the most important factors to consider. The model can also be applied to other sampling problems, for example, sampling and testing of wastes to meet landfill waste acceptance procedures.
Lü, Xiaoshu; Takala, Esa-Pekka; Toppila, Esko; Marjanen, Ykä; Kaila-Kangas, Leena; Lu, Tao
2016-12-01
Exposure to whole-body vibration (WBV) presents an occupational health risk, and several safety standards mandate that WBV be measured. The high cost of direct measurements in large epidemiological studies raises the question of the optimal sampling for estimating WBV exposures, given the large variation in exposure levels at real worksites. This paper presents a new approach to addressing this problem. Daily exposure to WBV was recorded for 9-24 days among 48 all-terrain vehicle drivers. Four data-sets based on root-mean-squared recordings were obtained from the measurements. The data were modelled using a semi-variogram with spectrum analysis, and the optimal sampling scheme was derived. The optimal sampling interval was 140 min. The result was verified and validated in terms of its accuracy and statistical power. Recordings of two to three hours are probably needed to obtain a sufficiently unbiased daily WBV exposure estimate at real worksites. The developed model is general enough to be applicable to other cumulative exposures or biosignals. Practitioner Summary: Exposure to whole-body vibration (WBV) presents an occupational health risk, and safety standards obligate measuring WBV. However, direct measurements can be expensive. This paper presents a new approach to addressing this problem. The developed model is general enough to be applicable to other cumulative exposures or biosignals.
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.
Ghose, Arup K; Ott, Gregory R; Hudkins, Robert L
2017-01-18
At the discovery stage, it is important to understand the drug design concepts for a CNS drug compared to those for a non-CNS drug. Previously, we published on ideal CNS drug property space and defined in detail the physicochemical property distribution of CNS versus non-CNS oral drugs, the application of radar charting (a graphical representation of multiple physicochemical properties used during CNS lead optimization), and a recursive partition classification tree to differentiate between CNS- and non-CNS drugs. The objective of the present study was to further understand the differentiation of physicochemical properties between CNS and non-CNS oral drugs by the development and application of a new CNS scoring scheme: Technically Extended MultiParameter Optimization (TEMPO). In this multiparameter method, we identified eight key physicochemical properties critical for accurately assessing CNS druggability: (1) number of basic amines, (2) carbon-heteroatom (non-carbon, non-hydrogen) ratio, (3) number of aromatic rings, (4) number of chains, (5) number of rotatable bonds, (6) number of H-acceptors, (7) computed octanol/water partition coefficient (AlogP), and (8) number of nonconjugated C atoms in nonaromatic rings. Significant features of the CNS-TEMPO penalty score are the extension of the multiparameter approach to generate an accurate weight factor for each physicochemical property, the use of limits on both sides of the computed property space range during the penalty calculation, and the classification of CNS and non-CNS drug scores. CNS-TEMPO significantly outperformed CNS-MPO and the Schrödinger QikProp CNS parameter (QP_CNS) in evaluating CNS drugs and has been extensively applied in support of CNS lead optimization programs.
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
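The two-point rule can be illustrated with a one-compartment oral plasma model: take the early sample at the first time the expected BAL concentration clears the LOQ, and the late sample at the time on the declining limb where the plasma concentration matches that at the early sample. All parameters, the LOQ, and the lung/plasma ratio R below are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

# One-compartment oral-absorption plasma model (hypothetical parameters)
ka, ke, A = 1.5, 0.2, 10.0                 # absorption/elimination rates, scale
C = lambda t: A * (np.exp(-ke * t) - np.exp(-ka * t))

LOQ = 1.0                                  # assumed LOQ of the BAL method
R = 0.5                                    # assumed minimum lung/plasma ratio

# Early sample: first time the expected BAL concentration clears the LOQ
t_grid = np.linspace(0.01, 24.0, 4000)
t_early = float(t_grid[np.argmax(R * C(t_grid) >= LOQ)])

# Late sample: time on the declining limb where the plasma concentration
# equals the plasma concentration at the early sample
t_peak = np.log(ka / ke) / (ka - ke)
t_late = brentq(lambda t: C(t) - C(t_early), t_peak, 48.0)
```

Matching the plasma concentrations at the two times is what lets the pair of BAL measurements separate the rate of pulmonary distribution from its extent.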
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of two approaches for selecting the initial design: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
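The two initial designs differ only in where each point sits within its stratum; a minimal generator for both variants (midpoint LHS at interval centers, random LHS jittered within intervals):

```python
import numpy as np

def lhs(n, d, midpoint=True, rng=None):
    """Latin hypercube sample of n points in [0, 1)^d.
    midpoint=True puts each point at its interval center (midpoint LHS);
    midpoint=False jitters it uniformly within the interval (random LHS).
    Either can seed an OLHS optimizer as the initial design."""
    rng = np.random.default_rng(rng)
    u = 0.5 * np.ones((n, d)) if midpoint else rng.random((n, d))
    # one random permutation of the n strata per dimension
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + u) / n

X_mid = lhs(8, 2, midpoint=True, rng=0)    # initial design, midpoint variant
X_rand = lhs(8, 2, midpoint=False, rng=0)  # initial design, random variant
```

Both variants occupy each of the n strata exactly once per dimension; only the within-stratum placement differs.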
Ogungbenro, Kayode; Aarons, Leon
2009-01-01
This paper describes an effective approach for optimizing sampling windows for population pharmacokinetic experiments. Sampling windows have been proposed for population pharmacokinetic experiments conducted in late-phase drug development programs, where patients are enrolled in many centers and out-patient clinic settings. Collection of samples under this uncontrolled environment at fixed times may be problematic and can result in uninformative data. A sampling windows approach is more practicable, as it provides the opportunity to control when samples are collected by allowing some flexibility while still providing satisfactory parameter estimation. This approach uses D-optimality to specify time intervals around fixed D-optimal time points that result in a specified level of efficiency. The sampling windows have different lengths and achieve two objectives: the joint sampling windows design attains a high specified efficiency level and also reflects the sensitivities of the plasma concentration-time profile to the parameters. It is shown that optimal sampling windows obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in terms of when samples are collected.
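The role of D-efficiency in a windows design can be illustrated with a toy mono-exponential model; the model C(t) = a·exp(-kt), the parameter values, the reference times, and the windows below are illustrative assumptions, not the paper's design.

```python
import numpy as np

def sensitivities(times, a=10.0, k=0.3):
    """Parameter sensitivities of C(t) = a * exp(-k t): columns dC/da, dC/dk."""
    t = np.asarray(times, float)
    e = np.exp(-k * t)
    return np.column_stack([e, -a * t * e])

def d_efficiency(times, ref_times, **params):
    """D-efficiency of `times` relative to `ref_times` (two parameters)."""
    fim = lambda t: sensitivities(t, **params).T @ sensitivities(t, **params)
    return (np.linalg.det(fim(times)) / np.linalg.det(fim(ref_times))) ** 0.5

# windows around two fixed time points: any sampling times inside the
# windows keep the design efficiency above a specified level
ref = [0.5, 6.0]
windows = [(0.3, 0.8), (5.0, 7.0)]
```

In a real windows design the window lengths are chosen so the worst-case efficiency over the windows meets the specified level.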
Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S
2014-06-01
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system, (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. Algorithm continues by pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provide an effective way to optimize simultaneously the large collection of station parameters and significantly improves
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from the statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
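Conclusion (3) refers to classical optimal (Neyman) allocation, and conclusion (2) to a sample-size formula driven by the observed mean and SD. A minimal sketch, assuming per-stratum sizes and standard deviations are known from a prior survey and unit sampling costs are equal:

```python
import numpy as np

def neyman_allocation(strata_sizes, strata_sds, n_total):
    """Optimal allocation: n_h proportional to N_h * S_h (equal unit costs)."""
    w = np.asarray(strata_sizes, float) * np.asarray(strata_sds, float)
    raw = n_total * w / w.sum()
    return np.maximum(np.round(raw).astype(int), 1)   # at least one sample each

def srs_sample_size(sd, mean, rel_error=0.1, z=1.96):
    """Simple-random-sample size for a target relative error of the mean."""
    return int(np.ceil((z * sd / (rel_error * mean)) ** 2))
```

More variable strata receive more samples under Neyman allocation, which is why it pays off when a full moisture profile is wanted.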
NASA Astrophysics Data System (ADS)
Shiau, Jenq-Tzong; Wu, Fu-Chun
2007-06-01
The temporal variations of natural flows are essential for preserving the ecological health of a river. This paper addresses them through environmental flow schemes that incorporate the intra-annual and interannual variability of the natural flow regime. We present an optimization framework to find the Pareto-optimal solutions for various flow schemes. The proposed framework integrates (1) the range of variability approach for evaluating the hydrologic alterations; (2) the standardized precipitation index approach for establishing the variation criteria for the wet, normal, and dry years; (3) a weir operation model for simulating the system of flows; and (4) a multiobjective optimization genetic algorithm for the search of Pareto-optimal solutions. The proposed framework is applied to the Kaoping diversion weir in Taiwan. The results reveal that time-varying schemes incorporating the intra-annual variability in the environmental flow prescriptions improve the fitness of both ecosystem and human needs. Incorporation of the interannual flow variability, using different criteria established for the three types of water year, further improves both fitnesses. The merit of incorporating the interannual variability may be superimposed on that of incorporating only the intra-annual flow variability. The Pareto-optimal solutions searched with a limited range of flows replicate satisfactorily those obtained with a full search range. The limited-range Pareto front may thus be used as a surrogate for the full-range one if feasible prescriptions are to be found among the regular flows.
A normative inference approach for optimal sample sizes in decisions from experience.
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
"Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.
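The flavor of such a normative computation can be sketched with a deliberately simplified setup: one Bernoulli option with a uniform prior on its success probability versus a known safe payoff, with a fixed cost per free draw. The prior, payoffs, and cost are illustrative assumptions, not the study's model.

```python
import numpy as np
from math import comb

def expected_utility(n, safe=0.5, cost=0.005, grid=np.linspace(0.0, 1.0, 201)):
    """Preposterior expected payoff of taking n free draws from a Bernoulli
    option with uniform prior on p, then choosing it iff its posterior
    mean beats the known safe payoff; a per-draw cost is subtracted."""
    u = 0.0
    for k in range(n + 1):
        post_mean = (k + 1) / (n + 2)                     # Beta(1,1) prior
        lik = comb(n, k) * grid**k * (1.0 - grid)**(n - k)
        weight = lik / len(grid)                          # crude integral over p
        payoff = grid if post_mean > safe else np.full_like(grid, safe)
        u += (weight * payoff).sum()
    return u - cost * n

# the "optimal" sample size is the n maximizing expected utility
best_n = max(range(0, 31), key=expected_utility)
```

With zero draws the decision maker earns only the safe payoff; a few draws buy information, and the per-draw cost eventually caps the optimal sample size.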
Sample Optimization for Five Plant-Parasitic Nematodes in an Alfalfa Field
Goodell, P. B.; Ferris, H.
1981-01-01
A data base representing nematode counts and soil weight from 1,936 individual soil cores taken from a 7-ha alfalfa field was used to investigate sample optimization for five plant-parasitic nematodes: Meloidogyne arenaria, Pratylenchus minyus, Merlinius brevidens, Helicotylenchus digonicus, and Paratrichodorus minor. Sample plans were evaluated by the accuracy and reliability of their estimation of the population and by the cost of collecting, processing, and counting the samples. Interactive FORTRAN programs were constructed to simulate four collecting patterns: random; division of the field into square sub-units (cells); and division of the field into rectangular sub-units (strips) running in two directions. Depending on the pattern, sample numbers varied from 1 to 25, with each sample representing from 1 to 50 cores. Each pattern, sample, and core combination was replicated 50 times. Strip stratification north/south was the optimal sampling pattern in this field because it isolated a streak of fine-textured soil. The mathematical optimum was not found because of data range limitations. When practical economic time constraints (5 hr to collect, process, and count nematode samples) are placed on the optimization process, all species estimates deviate no more than 25% from the true mean. If accuracy constraints are placed on the process (no more than 15% deviation from the true field mean), all species except Merlinius required less than 5 hr to complete the sample process. PMID:19300768
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
Sample size calculation for testing differences between cure rates with the optimal log-rank test.
Wu, Jianrong
2017-01-01
In this article, sample size calculations are developed for use when the main interest is in the differences between the cure rates of two groups. Following the work of Ewell and Ibrahim, the asymptotic distribution of the weighted log-rank test is derived under the local alternative. The optimal log-rank test under the proportional distributions alternative is discussed, and sample size formulas for the optimal and standard log-rank tests are derived. Simulation results show that the proposed formulas provide adequate sample size estimation for trial designs and that the optimal log-rank test is more efficient than the standard log-rank test, particularly when both cure rates and percentages of censoring are small.
Mottaz-Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott W.; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.
2008-01-01
Mass spectrometry-based proteomics is a powerful analytical tool for investigating pathogens and their interactions within a host. The sensitivity of such analyses provides broad proteome characterization, but the sample-handling procedures must first be optimized to ensure compatibility with the technique and to maximize the dynamic range of detection. The decision-making process for determining optimal growth conditions, preparation methods, sample analysis methods, and data analysis techniques in our laboratory is discussed herein with consideration of the balance in sensitivity, specificity, and biomass losses during analysis of host-pathogen systems. PMID:19183792
XAFSmass: a program for calculating the optimal mass of XAFS samples
NASA Astrophysics Data System (ADS)
Klementiev, K.; Chernikov, R.
2016-05-01
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements compared to the old Windows-based program XAFSmass: 1) it is truly platform independent, as provided by the Python language, and 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample, and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
NASA Astrophysics Data System (ADS)
Kiesewetter, Simon; Drummond, Peter D.
2017-03-01
A variance reduction method for stochastic integration of Fokker-Planck equations is derived. This unifies the cumulant hierarchy and stochastic equation approaches to obtaining moments, giving a performance superior to either. We show that the brute force method of reducing sampling error by just using more trajectories in a sampled stochastic equation is not the best approach. The alternative of using a hierarchy of moment equations is also not optimal, as it may converge to erroneous answers. Instead, through Bayesian conditioning of the stochastic noise on the requirement that moment equations are satisfied, we obtain improved results with reduced sampling errors for a given number of stochastic trajectories. The method used here converges faster in time-step than Ito-Euler algorithms. This parallel optimized sampling (POS) algorithm is illustrated by several examples, including a bistable nonlinear oscillator case where moment hierarchies fail to converge.
Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater
Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal
2016-01-01
Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled, and optimized sampling designs for selecting sampling locations are computed for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical, and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016
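Spatial simulated annealing of a sampling design can be sketched as follows; the perturbation size, linear cooling law, and mean-shortest-distance criterion are illustrative choices (the paper's criterion may instead be a kriging-variance objective).

```python
import numpy as np

def mmsd(design, grid):
    """Mean, over grid nodes, of the distance to the nearest design point."""
    d = np.linalg.norm(grid[:, None, :] - design[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def spatial_sa(n_pts, grid, n_iter=1500, t0=0.05, step=0.1, seed=0):
    """Anneal n_pts sampling locations in the unit square to minimize MMSD."""
    rng = np.random.default_rng(seed)
    design = rng.random((n_pts, 2))
    cur = start = mmsd(design, grid)
    best, best_design = cur, design
    for i in range(n_iter):
        temp = t0 * (1.0 - i / n_iter)                  # linear cooling law
        cand = design.copy()
        j = rng.integers(n_pts)
        cand[j] = np.clip(cand[j] + rng.normal(0.0, step, 2), 0.0, 1.0)
        c = mmsd(cand, grid)
        # accept improvements always, worse moves with Boltzmann probability
        if c < cur or rng.random() < np.exp(-(c - cur) / max(temp, 1e-12)):
            design, cur = cand, c
            if cur < best:
                best, best_design = cur, design
    return best_design, best, start
```

Constraints such as field boundaries or fixed existing locations enter by restricting the perturbation step.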
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
Multi-resolution imaging with an optimized number and distribution of sampling points.
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo
2014-05-05
We propose an approach, of interest in imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, the number of sampling points therein, and their spatial distribution, in the case of single-frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite-dimensional algebraic linear inverse problem. The dimension of the scanning region and the number and positions of the sampling points are determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis.
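The role of the singular value behavior can be illustrated on a toy discretization; the Green's-function-style matrix below is a stand-in for the paper's operator, and the threshold-based effective rank is one simple way to read the singular value spectrum.

```python
import numpy as np

def field_matrix(sample_pts, source_pts, wavelength=1.0, standoff=2.0):
    """Toy linear operator mapping source reflectivities to field samples."""
    k = 2.0 * np.pi / wavelength
    # distance between each sampling point and each source point
    r = np.hypot(sample_pts[:, None] - source_pts[None, :], standoff)
    return np.exp(1j * k * r) / r

def effective_rank(a, tol=1e-2):
    """Number of singular values above tol relative to the largest."""
    s = np.linalg.svd(a, compute_uv=False)
    return int(np.sum(s / s[0] > tol))
```

Adding sampling points beyond the operator's effective rank contributes little independent information, which is the quantity such an optimization targets.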
Validation of genetic algorithm-based optimal sampling for ocean data assimilation
NASA Astrophysics Data System (ADS)
Heaney, Kevin D.; Lermusiaux, Pierre F. J.; Duda, Timothy F.; Haley, Patrick J.
2016-10-01
Regional ocean models are capable of forecasting conditions for usefully long intervals of time (days) provided that initial and ongoing conditions can be measured. In resource-limited circumstances, the placement of sensors in optimal locations is essential. Here, a nonlinear optimization approach to determine optimal adaptive sampling that uses the genetic algorithm (GA) method is presented. The method determines sampling strategies that minimize a user-defined physics-based cost function. The method is evaluated using identical twin experiments, comparing hindcasts from an ensemble of simulations that assimilate data selected using the GA adaptive sampling and other methods. For skill metrics, we employ the reduction of the ensemble root mean square error (RMSE) between the "true" data-assimilative ocean simulation and the different ensembles of data-assimilative hindcasts. A five-glider optimal sampling study is set up for a 400 km × 400 km domain in the Middle Atlantic Bight region, along the New Jersey shelf-break. Results are compared for several ocean and atmospheric forcing conditions.
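A minimal GA of this kind can be sketched as follows, with a generic subset-selection encoding; the cost function passed in is a placeholder for the paper's physics-based cost (here any callable scoring a set of candidate site indices), and the operators are illustrative.

```python
import numpy as np

def ga_select(n_sites, k, cost, pop_size=30, gens=60, p_mut=0.3, seed=0):
    """Evolve subsets of k out of n_sites candidate locations to minimize cost()."""
    rng = np.random.default_rng(seed)
    pop = [rng.choice(n_sites, k, replace=False) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.choice(len(elite), 2, replace=False)
            genes = np.union1d(elite[a], elite[b])    # recombine two parents
            child = rng.choice(genes, k, replace=False)
            if rng.random() < p_mut:                  # mutate one gene
                child[rng.integers(k)] = rng.integers(n_sites)
            child = np.unique(child)
            while child.size < k:                     # repair duplicate genes
                child = np.unique(np.append(child, rng.integers(n_sites)))
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)
```

In the paper's setting the cost of a candidate glider deployment would be evaluated through the ensemble of data-assimilative simulations rather than a cheap analytic function.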
Shen, Xiong; Zong, Chao; Zhang, Guoqiang
2012-01-01
Finding the optimal sampling positions for measurement of ventilation rates in a naturally ventilated building using tracer gas is a challenge. Affected by the wind and the opening status, the representative positions inside the building may change dynamically at any time. An optimization procedure using the Response Surface Methodology (RSM) was conducted. In this method, the concentration field inside the building was estimated by a third-order RSM polynomial model. The experimental sampling positions used to develop the model were chosen from the cross-sectional area of a pitched-roof building. The optimal design method, which can decrease the bias of the model, was adopted to select these sampling positions. Experiments with a scale model building were conducted in a wind tunnel to obtain observed values for those positions. Finally, models for different cases of opening states and wind conditions were established, and the optimum sampling position was obtained with a desirability level of up to 92% inside the model building. The optimization was further confirmed by another round of experiments.
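A third-order response surface of this kind can be fitted by ordinary least squares; a minimal sketch for two spatial coordinates (the full cubic basis in two variables has ten terms; the paper's actual model structure may differ):

```python
import numpy as np

def cubic_design_matrix(x, y):
    """Monomials x^(deg-i) * y^i for deg <= 3: ten terms, as in a third-order RSM."""
    cols = [np.ones_like(x)]
    for deg in (1, 2, 3):
        for i in range(deg + 1):
            cols.append(x ** (deg - i) * y ** i)
    return np.column_stack(cols)

def fit_rsm(x, y, z):
    """Least-squares fit of a third-order response surface z = f(x, y)."""
    coef, *_ = np.linalg.lstsq(cubic_design_matrix(x, y), z, rcond=None)
    return coef

def predict_rsm(coef, x, y):
    return cubic_design_matrix(x, y) @ coef
```

Optimal-design selection of the measurement positions then amounts to choosing (x, y) locations that condition this design matrix well before the fit is made.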
NASA Astrophysics Data System (ADS)
Qu, Mengmeng; Jiang, Dazhi; Lu, Lucy X.
2016-11-01
To address the multiscale deformation and fabric development in Earth's ductile lithosphere, micromechanics-based self-consistent homogenization is commonly used to obtain macroscale rheological properties from properties of constituent elements. The homogenization is heavily based on the solution of an Eshelby viscous inclusion in a linear viscous medium and the extension of the solution to nonlinear viscous materials. The homogenization requires repeated numerical evaluation of Eshelby tensors for constituent elements and becomes ever more computationally challenging as the elements are deformed to more elongate or flattened shapes. In this paper, we develop an optimal scheme for evaluating Eshelby tensors, using a combination of a product Gaussian quadrature and the Lebedev quadrature. We first establish, through numerical experiments, an empirical relationship between the inclusion shape and the computational time it takes to evaluate its Eshelby tensors. We then use the relationship to develop an optimal scheme for selecting the most efficient quadrature to obtain the Eshelby tensors. The optimal scheme is applicable to general homogenizations. In this paper, it is implemented in a MATLAB package for investigating the evolution of solitary rigid or deformable inclusions and the development of shape preferred orientations in multi-inclusion systems during deformation. The MATLAB package, upgrading an earlier effort written in MathCad, can be downloaded online.
A Novel Method of Failure Sample Selection for Electrical Systems Using Ant Colony Optimization
Tian, Shulin; Yang, Chenglin; Liu, Cheng
2016-01-01
The influence of failure propagation is ignored in failure sample selection based on the traditional testability demonstration experiment method. Traditional failure sample selection generally omits some failures during the selection, and this omission can pose serious risks in use because the omitted failures can trigger severe propagation failures. This paper proposes a new failure sample selection method to solve the problem. First, the method uses a directed graph and ant colony optimization (ACO) to obtain a subsequent failure propagation set (SFPS) based on a failure propagation model, and then we propose a new failure sample selection method on the basis of the number of SFPS. Compared with the traditional sampling plan, this method is able to improve the coverage of tested failure samples, increase the capacity for diagnosis, and decrease the risk of use. PMID:27738424
Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples
NASA Astrophysics Data System (ADS)
Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav
2014-05-01
Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in waters with the ultra-low-background liquid scintillation spectrometer Quantulus 1220, we optimized the sample/scintillant ratio, chose an appropriate scintillation cocktail by comparing efficiency, background, and minimal detectable activity (MDA), and examined the effects of chemi- and photoluminescence and of the scintillant/vial combination. The ASTM D4107-08 (2006) method had been successfully applied in our laboratory for two years. During our most recent sample preparation, a serious quench effect in sample count rates was noticed, possibly a consequence of DMSO contamination. The goal of this paper is to demonstrate the development in our laboratory of the new direct method proposed by Pujol and Sanchez-Cabeza (1999), which proved faster and simpler than the ASTM method while we address the problem of neutralizing DMSO in the apparatus. The minimum detectable activity achieved was 2.0 Bq l⁻¹ for a total counting time of 300 min. In order to test the optimization of the system for this method, the tritium level was determined in Danube river samples and also in several samples within an intercomparison with the Ruđer Bošković Institute (IRB).
Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case
Schmerling, Edward; Janson, Lucas; Pavone, Marco
2015-01-01
Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
NASA Astrophysics Data System (ADS)
Kong, Weijing; Wan, Yuhang; Du, Kun; Zhao, Wenhui; Wang, Shuang; Zheng, Zheng
2016-11-01
The reflected intensity change of the Bloch-surface-wave (BSW) resonance influenced by the loss of a truncated one-dimensional photonic crystal structure is numerically analyzed and studied in order to enhance the sensitivity of Bloch-surface-wave-based sensors. The finite truncated one-dimensional photonic crystal structure is designed to excite the BSW mode for water (n=1.33) as the external medium and for p-polarized plane-wave incident light. The intensity interrogation scheme, which can be operated on a typical Kretschmann prism-coupling configuration by measuring the reflected intensity change of the resonance dip, is investigated to optimize the sensitivity. A figure of merit (FOM) is introduced to measure the performance of the one-dimensional photonic crystal multilayer structure under the scheme. The detection sensitivities are calculated under different device parameters with a refractive index change corresponding to different solutions of glycerol in de-ionized (DI) water. The results show that the intensity sensitivity curve varies similarly to the FOM curve and that the sensitivity of the Bloch-surface-wave sensor is greatly affected by the device loss, for which an optimized loss value can be obtained. For low-loss BSW devices, the intensity interrogation sensing sensitivity may drop sharply from the optimal value. On the other hand, the performance of the detection scheme is less affected by higher device loss. This observation is in accordance with BSW experimental sensing demonstrations as well. The results obtained could be useful for improving the performance of Bloch-surface-wave sensors for the investigated sensing scheme.
A method to optimize sampling locations for measuring indoor air distributions
NASA Astrophysics Data System (ADS)
Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan
2015-02-01
Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected for interpolation to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, labor costs, and accuracy of the field interpolation. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained were then interpolated to form field distributions by the ordinary kriging method. Our error analysis shows that the gradient-based sampling method has 32.6% smaller interpolation error than the grid sampling method. We derived the relationship between the interpolation error and the sampling size (the number of sampling points). According to this relationship, the sampling size has an optimal value, and the maximum sampling size can be determined from the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.
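The gradient-based idea, concentrating sensors where a coarse pre-scan of the field varies fastest, can be sketched as follows; the specific selection rule (sampling probability proportional to local gradient magnitude) is an illustrative variant, not necessarily the paper's exact rule.

```python
import numpy as np

def gradient_based_samples(field, n_pts, rng=None):
    """Pick n_pts grid cells with probability proportional to the local
    gradient magnitude of a coarse field estimate."""
    gy, gx = np.gradient(field)              # row- and column-direction slopes
    w = np.hypot(gx, gy).ravel()
    w = w + 1e-12                            # keep flat regions selectable
    p = w / w.sum()
    rng = np.random.default_rng(rng)
    idx = rng.choice(field.size, size=n_pts, replace=False, p=p)
    return np.column_stack(np.unravel_index(idx, field.shape))
```

For a field with a sharp front, nearly all selected cells land on the front, which is where interpolation error would otherwise concentrate.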
Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik
2016-04-01
The aim of this paper is to contribute to a more rapid determination of a series of samples containing (90)Sr by making the Cherenkov measurement of the daughter nuclide (90)Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, the need to maximize the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of (90)Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less than when using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm.
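The core of such a model can be sketched with a Currie-style MDA expression: the longer a sample's (90)Y has grown in, the shorter the counting time needed to reach a fixed MDA. The efficiency, sample volume, and background below are illustrative values, not the paper's calibration.

```python
import numpy as np

LAMBDA_Y90 = np.log(2) / 64.0   # per hour; 90Y half-life is about 64 h

def mda(t_meas_min, bkg_cpm, eff=0.6, vol_l=0.1):
    """Currie-style MDA (Bq/L) for a counting time in minutes."""
    ld = 2.71 + 4.65 * np.sqrt(bkg_cpm * t_meas_min)   # detection limit, counts
    return ld / (eff * t_meas_min * 60.0 * vol_l)

def required_time(target_mda, ingrowth_h, bkg_cpm=0.8, **kw):
    """Shortest counting time (minutes) so that MDA / ingrowth factor <= target."""
    f = 1.0 - np.exp(-LAMBDA_Y90 * ingrowth_h)         # 90Y ingrowth fraction
    for t in range(1, 100000):
        if mda(t, bkg_cpm, **kw) / f <= target_mda:
            return t
    return None
```

Iterating this per sample, later samples in the series (with more ingrowth) need markedly shorter counts, which is what shrinks the total analysis time.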
Sample volume optimization for radon-in-water detection by liquid scintillation counting.
Schubert, Michael; Kopitz, Juergen; Chałupnik, Stanisław
2014-08-01
Radon is used as an environmental tracer in a wide range of applications, particularly in aquatic environments. If liquid scintillation counting (LSC) is used as the detection method, the radon has to be transferred from the water sample into a scintillation cocktail. Whereas the volume of the cocktail is generally given by the size of standard LSC vials (20 ml), the water sample volume is not specified. The aim of the study was to optimize the water sample volume, i.e., to minimize it without risking a significant decrease in LSC count rate and hence in counting statistics. An equation is introduced which allows calculating the ²²²Rn concentration initially present in a water sample as a function of the volumes of the water sample, sample flask headspace and scintillation cocktail, the applicable radon partition coefficient, and the detected count-rate value. It was shown that water sample volumes exceeding about 900 ml do not result in a significant increase in count rate and hence counting statistics. On the other hand, sample volumes considerably smaller than about 500 ml lead to noticeably lower count rates (and poorer counting statistics). Thus water sample volumes of about 500-900 ml should be chosen for LSC radon-in-water detection if 20 ml vials are applied.
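A minimal sketch of this kind of mass-balance back-calculation is given below. The partition coefficients, counting efficiency, and flask volume are assumed placeholder values, and the equation form is a generic three-compartment equilibrium balance rather than the authors' exact expression:

```python
def radon_in_water(net_cps, v_water_l, v_head_l, v_cocktail_l=0.02,
                   eff=0.9, k_cw=40.0, k_wa=0.25):
    """Back-calculate the 222Rn concentration (Bq/L) initially present in the
    water from the LSC net count rate, assuming equilibrium partitioning between
    water, flask headspace and cocktail. k_cw (cocktail/water) and k_wa
    (water/air) partition coefficients and eff are assumed placeholder values."""
    a_cocktail = net_cps / eff                         # Bq residing in the cocktail
    c_water_eq = a_cocktail / (k_cw * v_cocktail_l)    # equilibrium water conc., Bq/L
    total_activity = c_water_eq * (v_water_l + v_head_l / k_wa + k_cw * v_cocktail_l)
    return total_activity / v_water_l

def cocktail_fraction(v_water_l, v_flask_l=1.0, v_cocktail_l=0.02,
                      k_cw=40.0, k_wa=0.25):
    """Fraction of the total radon that ends up in the cocktail for a given
    water volume in a fixed flask: more water means less radon-hungry headspace."""
    v_head = max(0.0, v_flask_l - v_water_l - v_cocktail_l)
    return (k_cw * v_cocktail_l) / (v_water_l + v_head / k_wa + k_cw * v_cocktail_l)
```

The `cocktail_fraction` curve saturates as the water volume approaches the flask volume, which is the diminishing-returns behaviour behind the 500-900 ml recommendation.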
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
Bilisoly, Roger L.; McKenna, Sean A.
2003-01-01
Previous work on sample design has focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no
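The first optimization criterion can be illustrated with a toy version of the annealing search, here reduced to placing a single straight transect on a square site. The exponential covariance model, site size, and cooling schedule are assumptions made for the sketch; note that the sum of the eigenvalues of the prediction-error covariance is its trace, i.e. the summed per-node kriging variances:

```python
import numpy as np

def cov(d, sill=1.0, rng_m=30.0):
    """Exponential covariance model (assumed sill and range)."""
    return sill * np.exp(-3.0 * d / rng_m)

def mpev(samples, grid, nugget=1e-6):
    """Mean simple-kriging prediction error variance over the reference grid
    (proportional to the trace, i.e. the eigenvalue sum, of the error covariance)."""
    d_ss = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
    d_gs = np.linalg.norm(grid[:, None] - samples[None, :], axis=-1)
    C = cov(d_ss) + nugget * np.eye(len(samples))
    c = cov(d_gs)
    w = np.linalg.solve(C, c.T)                  # kriging weights, one column per node
    return float((cov(0.0) - np.einsum('ij,ji->i', c, w)).mean())

def transect(y, n=20, length=100.0):
    """n point samples along a straight transect at height y."""
    return np.column_stack([np.linspace(0.0, length, n), np.full(n, y)])

def anneal_transect(grid, n_iter=300, t0=0.1, seed=1):
    """Simulated annealing over the y-position of one straight transect."""
    rng = np.random.default_rng(seed)
    y = float(rng.uniform(0.0, 100.0))
    cur = best = mpev(transect(y), grid)
    best_y = y
    for k in range(n_iter):
        temp = t0 * (1.0 - k / n_iter) + 1e-6
        y_new = float(np.clip(y + rng.normal(0.0, 10.0), 0.0, 100.0))
        e_new = mpev(transect(y_new), grid)
        if e_new < cur or rng.random() < np.exp((cur - e_new) / temp):
            y, cur = y_new, e_new
            if cur < best:
                best, best_y = cur, y
    return best_y, best
```

As expected, the search drives the transect toward the centre of the site, where the mean prediction variance over the grid is smallest.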
An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples
Riediger, Irina N.; Hoffmaster, Alex R.; Biondo, Alexander W.; Ko, Albert I.; Stoddard, Robyn A.
2016-01-01
Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed a strong linearity in a range of concentrations from 10⁶ to 10⁰ leptospires/mL and lower limits of detection ranging from <1 cell/mL for river water to 36 cells/mL for ultrapure water with E. coli as a carrier. In conclusion, we optimized a method to quantify pathogenic Leptospira in environmental waters (river, pond and sewage) which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden. PMID:27487084
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States
NASA Astrophysics Data System (ADS)
Sousan, Sinan Dhia Jameel
This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 × 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at IMPROVE and STN monitoring sites, respectively. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
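The core OI analysis step used in such assimilation can be sketched in a few lines; the toy three-cell state, covariances, and observation operator below are invented for illustration:

```python
import numpy as np

def optimal_interpolation(x_b, B, y, H, R):
    """One OI analysis step: blend a background model state x_b (covariance B)
    with observations y (operator H, error covariance R).
        K = B H^T (H B H^T + R)^-1,  x_a = x_b + K (y - H x_b)."""
    S = H @ B @ H.T + R                      # innovation covariance
    K = np.linalg.solve(S, H @ B).T          # gain (uses symmetry of B and S)
    x_a = x_b + K @ (y - H @ x_b)
    A = (np.eye(len(x_b)) - K @ H) @ B       # analysis error covariance
    return x_a, A
```

Because the background covariance carries spatial correlation, observed cells pull the estimate at unobserved cells too, and the analysis error covariance always has a smaller trace than the background one.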
Sturkenboom, Marieke G. G.; Mulder, Leonie W.; de Jager, Arthur; van Altena, Richard; Aarnoutse, Rob E.; de Lange, Wiel C. M.; Proost, Johannes H.; Kosterink, Jos G. W.; van der Werf, Tjip S.
2015-01-01
Rifampin, together with isoniazid, has been the backbone of the current first-line treatment of tuberculosis (TB). The ratio of the area under the concentration-time curve from 0 to 24 h (AUC0–24) to the MIC is the best predictive pharmacokinetic-pharmacodynamic parameter for determinations of efficacy. The objective of this study was to develop an optimal sampling procedure based on population pharmacokinetics to predict AUC0–24 values. Patients received rifampin orally once daily as part of their anti-TB treatment. A one-compartmental pharmacokinetic population model with first-order absorption and lag time was developed using observed rifampin plasma concentrations from 55 patients. The population pharmacokinetic model was developed using an iterative two-stage Bayesian procedure and was cross-validated. Optimal sampling strategies were calculated using Monte Carlo simulation (n = 1,000). The geometric mean AUC0–24 value was 41.5 (range, 13.5 to 117) mg · h/liter. The median time to maximum concentration of drug in serum (Tmax) was 2.2 h, ranging from 0.4 to 5.7 h. This wide range indicates that obtaining a concentration level at 2 h (C2) would not capture the peak concentration in a large proportion of the population. Optimal sampling using concentrations at 1, 3, and 8 h postdosing was considered clinically suitable with an r2 value of 0.96, a root mean squared error value of 13.2%, and a prediction bias value of −0.4%. This study showed that the rifampin AUC0–24 in TB patients can be predicted with acceptable accuracy and precision using the developed population pharmacokinetic model with optimal sampling at time points 1, 3, and 8 h. PMID:26055359
ERIC Educational Resources Information Center
Foster, Geraldine R. K.; Tickle, Martin
2013-01-01
Background and objective: Some districts in the United Kingdom (UK), where the level of child dental caries is high and water fluoridation has not been possible, implement school-based fluoridated milk (FM) schemes. However, process variables, such as consent to drink FM and loss of children as they mature, impede the effectiveness of these…
Damage identification in beams using speckle shearography and an optimal spatial sampling
NASA Astrophysics Data System (ADS)
Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.
2016-10-01
Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
Spectral gap optimization of order parameters for sampling complex molecular systems
Tiwary, Pratyush; Berne, B. J.
2016-01-01
In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2014-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P ≥ 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P ≥ 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
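A simplified Monte Carlo version of the sampling-probability estimate is sketched below, replacing the registered 3D tumour surfaces with a disc-shaped cross-section and assuming an isotropic Gaussian needle-delivery error with the quoted 3.5 mm RMS magnitude:

```python
import numpy as np

def hit_probability(radius_mm, rms_err_mm=3.5, n=200_000, seed=0):
    """Monte Carlo probability that a needle aimed at the centre of a disc-shaped
    tumour cross-section lands inside it, under an isotropic 2-D Gaussian
    delivery error with RMS radial magnitude rms_err_mm. (The disc is a
    simplification of the paper's registered, irregular tumour surfaces.)"""
    sigma = rms_err_mm / np.sqrt(2.0)            # per-axis standard deviation
    err = np.random.default_rng(seed).normal(0.0, sigma, size=(n, 2))
    return float((np.hypot(err[:, 0], err[:, 1]) < radius_mm).mean())

def cores_needed(radius_mm, target=0.95, **kw):
    """Independent-cores approximation: smallest m with 1 - (1-p)^m >= target."""
    p = hit_probability(radius_mm, **kw)
    m = 1
    while 1.0 - (1.0 - p) ** m < target:
        m += 1
        if m > 1000:                              # guard against vanishing p
            return None
    return m
```

Under these assumptions a tumour cross-section whose radius equals the RMS error is hit roughly 63% of the time by a single core, so small lesions need several cores to reach the 95% target.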
Morley, Shannon M.; Seiner, Brienne N.; Finn, Erin C.; Greenwood, Lawrence R.; Smith, Steven C.; Gregory, Stephanie J.; Haney, Morgan M.; Lucas, Dawn D.; Arrigo, Leah M.; Beacham, Tere A.; Swearingen, Kevin J.; Friese, Judah I.; Douglas, Matthew; Metz, Lori A.
2015-05-01
Mixed fission and activation materials resulting from various nuclear processes and events contain a wide range of isotopes for analysis spanning almost the entire periodic table. In some applications such as environmental monitoring, nuclear waste management, and national security a very limited amount of material is available for analysis and characterization so an integrated analysis scheme is needed to measure multiple radionuclides from one sample. This work describes the production of a complex synthetic sample containing fission products, activation products, and irradiated soil and determines the percent recovery of select isotopes through the integrated chemical separation scheme. Results were determined using gamma energy analysis of separated fractions and demonstrate high yields of Ag (76 ± 6%), Au (94 ± 7%), Cd (59 ± 2%), Co (93 ± 5%), Cs (88 ± 3%), Fe (62 ± 1%), Mn (70 ± 7%), Np (65 ± 5%), Sr (73 ± 2%) and Zn (72 ± 3%). Lower yields (< 25%) were measured for Ga, Ir, Sc, and W. Based on the results of this experiment, a complex synthetic sample can be prepared with low atom/fission ratios and isotopes of interest accurately and precisely measured following an integrated chemical separation method.
JR Bontha; GR Golcar; N Hannigan
2000-08-29
The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft ID and 15-ft height dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: Demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; Minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and Demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.
Frazier, M T; Finley, J; Harkness, W; Rajotte, E G
2000-06-01
The introduction of parasitic honey bee mites, the tracheal mite, Acarapis woodi (Rennie), in 1984 and the Varroa mite, Varroa jacobsoni, in 1987, has dramatically increased the winter mortality of honey bee, Apis mellifera L., colonies in many areas of the United States. Some beekeepers have minimized their losses by routinely treating their colonies with menthol, currently the only Environmental Protection Agency-approved and available chemical for tracheal mite control. Menthol is also expensive and can interfere with honey harvesting. Because of inadequate sampling techniques and a lack of information concerning treatment, this routine treatment strategy has increased the possibility that tracheal mites will develop resistance to menthol. It is important to establish economic thresholds and treat colonies with menthol only when treatment is warranted rather than treating all colonies regardless of infestation level. The use of sequential sampling may reduce the amount of time and effort expended in examining individual colonies and determining if treatment is necessary. Sequential sampling also allows statistically based estimates of the percentage of bees in standard Langstroth hives infested with mites while controlling for the possibility of incorrectly assessing the amount of infestation. On average, sequential sampling plans require fewer observations (bees) to reach a decision for specified probabilities of type I and type II errors than are required for fixed sampling plans, especially when the proportion of infested bees is either very low or very high. We developed a sequential sampling decision plan that allows the user to choose specific economic injury levels and the probabilities of making type I and type II errors, which can result in considerable savings in time, labor and expense.
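A sequential plan of this kind is commonly built on Wald's sequential probability ratio test; the sketch below uses illustrative infestation thresholds and error rates, not the plan parameters developed in the paper:

```python
import math

def sprt(observations, p0=0.05, p1=0.15, alpha=0.10, beta=0.10):
    """Wald sequential probability ratio test on a stream of per-bee results
    (1 = mite found, 0 = clean). Returns ('treat'|'no_treat'|'continue', n_used).
    Thresholds p0/p1 and error rates alpha/beta are illustrative values."""
    upper = math.log((1 - beta) / alpha)       # cross above -> accept H1: treat
    lower = math.log(beta / (1 - alpha))       # cross below -> accept H0: no treatment
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return 'treat', n
        if llr <= lower:
            return 'no_treat', n
    return 'continue', len(observations)
```

Colonies with very low or very high infestation cross a boundary after only a handful of bees, which is exactly where sequential plans beat fixed-size samples.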
Beliaeff, B.; Claisse, D.; Smith, P.J.
1995-12-31
In the French Monitoring Network, trace element and organic concentrations in biota have been measured for 15 years on a quarterly basis at over 80 sites scattered along the French coastline. A reduction in the sampling effort may be needed as a result of budget restrictions. A constant budget, however, would allow the advancement of certain research and development projects, such as the feasibility of new chemical analyses. The basic problem confronting the optimization of the program's sampling design is finding the optimal numbers of sites in a given non-heterogeneous area and of sampling events within a year at each site. First, they determine a site-specific cost function integrating analysis, personnel, and computer costs. Then, within-year and between-site variance components are estimated from the results of a linear model which includes a seasonal component. These two steps provide a cost-precision optimum for each contaminant. An example is given using the data from the 4 sites of the Loire estuary. Over all sites, significant U-shaped trends are estimated for Pb, PCBs, ΣDDT and α-HCH, while PAHs show a significant inverted U-shaped curve. For most chemicals the within-year variance appears to be much higher than the between-site variance. This leads to the conclusion that, for this case, reducing the number of sites by two is preferable economically and in terms of monitoring efficiency to reducing the sampling frequency by the same factor. Further implications for the French Monitoring Network are discussed.
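The site/frequency trade-off can be made concrete with a simple two-component variance model and an assumed cost function (both invented for illustration, not the authors' fitted model):

```python
def var_of_mean(n_sites, n_times, var_site, var_within):
    """Variance of the estimated network-wide mean under a two-component
    random-effects model: between-site plus within-year variance."""
    return var_site / n_sites + var_within / (n_sites * n_times)

def cost(n_sites, n_times, c_site=1000.0, c_sample=150.0):
    """Illustrative cost model: a fixed per-site cost (personnel, travel,
    logistics) plus a per-sampling-event analysis cost."""
    return n_sites * (c_site + n_times * c_sample)
```

When the within-year component dominates, the precision penalties of halving sites versus halving frequency are comparable, but halving the number of sites also saves the fixed per-site costs, echoing the authors' conclusion.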
Optimized sample preparation of endoscopic collected pancreatic fluid for SDS-PAGE analysis.
Paulo, Joao A; Lee, Linda S; Wu, Bechien; Repas, Kathryn; Banks, Peter A; Conwell, Darwin L; Steen, Hanno
2010-07-01
The standardization of methods for human body fluid protein isolation is a critical initial step for proteomic analyses aimed to discover clinically relevant biomarkers. Several caveats have hindered pancreatic fluid proteomics, including the heterogeneity of samples and protein degradation. We aim to optimize sample handling of pancreatic fluid that has been collected using a safe and effective endoscopic collection method (endoscopic pancreatic function test). Using SDS-PAGE protein profiling, we investigate (i) precipitation techniques to maximize protein extraction, (ii) auto-digestion of pancreatic fluid following prolonged exposure to a range of temperatures, (iii) effects of multiple freeze-thaw cycles on protein stability, and (iv) the utility of protease inhibitors. Our experiments revealed that TCA precipitation resulted in the most efficient extraction of protein from pancreatic fluid of the eight methods we investigated. In addition, our data reveal that although auto-digestion of proteins is prevalent at 23 and 37 degrees C, incubation on ice significantly slows such degradation. Similarly, when the sample is maintained on ice, proteolysis is minimal during multiple freeze-thaw cycles. We have also determined the addition of protease inhibitors to be assay-dependent. Our optimized sample preparation strategy can be applied to future proteomic analyses of pancreatic fluid.
Optimized Ar(+)-ion milling procedure for TEM cross-section sample preparation.
Dieterle, Levin; Butz, Benjamin; Müller, Erich
2011-11-01
High-quality samples are indispensable for every reliable transmission electron microscopy (TEM) investigation. In order to predict optimized parameters for the final Ar(+)-ion milling preparation step, topographical changes of symmetrical cross-section samples caused by the sputtering process were modeled by two-dimensional Monte-Carlo simulations. Due to its well-known sputtering yield for Ar(+)-ions and its ease of mechanical preparation, Si was used as the model system. The simulations are based on a modified parameterized description of the sputtering yield of Ar(+)-ions on Si summarized from the literature. The formation of a wedge-shaped profile, as commonly observed during double-sector ion milling of cross-section samples, was reproduced by the simulations, independent of the sputtering angle. Moreover, the preparation of wide, plane-parallel sample areas by alternating single-sector ion milling is predicted by the simulations. These findings were validated by a systematic ion-milling study (single-sector vs. double-sector milling at various sputtering angles) using Si cross-section samples as well as two other material-science examples. The presented systematic single-sector ion-milling procedure is applicable to most Ar(+)-ion mills which allow simultaneous milling from both sides of a TEM sample (top and bottom) in an azimuthally restricted sector perpendicular to the central epoxy line of that cross-sectional TEM sample. The procedure is based on alternating milling of the two halves of the TEM sample instead of double-sector milling of the whole sample. Furthermore, various other practical aspects are discussed, such as the dependency of the topographical quality of the final sample on parameters like epoxy thickness and incident angle.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with an aim to improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
NASA Astrophysics Data System (ADS)
Ren, Danping; Wu, Shanshan; Zhang, Lijing
2016-09-01
In view of the global control and flexible monitoring capabilities of software-defined networking (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network, greatly reducing the system time delay and energy consumption.
Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L.; Date, Hiroyuki
2015-11-15
Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionations. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for tumor and normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved by a graphical method under the constraint that the radiation effect on the tumor is fixed. Here, the damage effect on the OAR was estimated based on the dose–volume histogram. Results: It was found that the optimization of the fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell-survival models. The graphical method considering the repopulation of tumor cells and a rectilinear response in the high dose range enables one to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., dose–volume histogram) by the graphical method considering the effects on the tumor and OARs around the tumor. This method may stipulate a new guideline to optimize the fractionation regimen for physics-guided fractionation.
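The fixed-tumour-effect minimization can be sketched with a plain LQ model, omitting the repopulation term and replacing the dose-volume histogram with a single uniform OAR dose scaled by an assumed sparing factor; all parameter values are illustrative:

```python
import math

def dose_per_fraction(n, target_bed, ab_tumor=10.0):
    """Solve n*d*(1 + d/ab_tumor) = target_bed for d (positive quadratic root),
    i.e. fix the tumour effect for every candidate fraction number n."""
    a, b, c = 1.0 / ab_tumor, 1.0, -target_bed / n
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def oar_bed(n, d, sparing=0.8, ab_oar=3.0):
    """OAR biologically effective dose when it receives sparing*d per fraction
    (a single uniform OAR dose stands in for the full dose-volume histogram)."""
    sd = sparing * d
    return n * sd * (1.0 + sd / ab_oar)

def best_fractionation(target_bed=72.0, sparing=0.8, n_range=range(1, 41)):
    """Enumerate n, fix the tumour BED, and pick the n minimising OAR damage."""
    designs = [(n, dose_per_fraction(n, target_bed)) for n in n_range]
    return min(designs, key=lambda nd: oar_bed(nd[0], nd[1], sparing=sparing))
```

Consistent with LQ theory, a low sparing factor favours a single large fraction, while a high sparing factor pushes the optimum toward many small fractions.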
Analysis of the optimal sampling rate for state estimation in sensor networks with delays.
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo
2017-03-27
When addressing the problem of state estimation in sensor networks, the effects of communications on estimator performance are often neglected. High accuracy requires a high sampling rate, but this leads to higher channel load and longer delays, which in turn worsens estimation performance. This paper studies the problem of determining the optimal sampling rate for state estimation in sensor networks from a theoretical perspective that takes into account traffic generation, a model of network behaviour and the effect of delays. Some theoretical results about Riccati and Lyapunov equations applied to sampled systems are derived, and a solution was obtained for the ideal case of perfect sensor information. This result is also interesting for non-ideal sensors, as in some cases it works as an upper bound of the optimisation solution.
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
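Latin hypercube sampling itself is easy to sketch: each parameter range is split into n equal-probability strata, each stratum is used exactly once per parameter, and the pairing across parameters is randomized. The parameter bounds below are placeholders, not the ranges used in the paper:

```python
import random

def latin_hypercube(n, bounds, seed=42):
    """n samples over len(bounds) parameters: each parameter's range is cut into
    n equal strata, one draw per stratum, with the stratum-to-sample pairing
    shuffled independently for every parameter."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        order = list(range(n))
        rng.shuffle(order)
        cols.append([lo + (hi - lo) * (k + rng.random()) / n for k in order])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

Each marginal is guaranteed to cover all n strata, which is why LHS needs far fewer patient scenarios than plain random sampling to span the parameter space.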
Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil
Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W
2016-01-01
Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.
Chenel, Marylore; Ogungbenro, Kayode; Duval, Vincent; Laveille, Christian; Jochemsen, Roeline; Aarons, Leon
2005-12-01
The objective of this paper is to determine optimal blood sampling time windows for the estimation of pharmacokinetic (PK) parameters by a population approach within the clinical constraints. A population PK model was developed to describe a reference phase II PK dataset. Using this model and the parameter estimates, D-optimal sampling times were determined by optimising the determinant of the population Fisher information matrix (PFIM) using PFIM 1.2 and the modified Fedorov exchange algorithm. Optimal sampling time windows were then determined by allowing the D-optimal windows design to result in a specified level of efficiency when compared to the fixed-times D-optimal design. The best results were obtained when K(a) and IIV on K(a) were fixed. Windows were determined using this approach assuming a 90% level of efficiency and uniform sample distribution. Four optimal sampling time windows were determined as follows: at trough, between 22 h and the next drug administration; between 2 and 4 h after dose for all patients; and, for 1/3 of the patients only, 2 sampling time windows between 4 and 10 h after dose, equal to [4 h-5 h 05] and [9 h 10-10 h]. This work permitted the determination of an optimal design, with suitable sampling time windows, which was then evaluated by simulations. The sampling time windows will be used to define the sampling schedule in a prospective phase II study.
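The core of D-optimal design is maximizing det(FIM) over candidate sampling times. The sketch below replaces the paper's population FIM and Fedorov exchange with a much simpler stand-in: an individual least-squares information matrix S'S for an assumed one-compartment oral-absorption model, searched exhaustively over small candidate sets. The model, parameter values and time grid are all illustrative.

```python
import numpy as np
from itertools import combinations

def conc(t, ka, ke, V, dose=100.0):
    """One-compartment oral-absorption model (illustrative stand-in,
    not the population model of the paper)."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim_det(times, theta=(1.5, 0.2, 10.0)):
    """det of the least-squares Fisher information S'S, with sensitivities
    dC/dtheta computed by central finite differences."""
    times = np.asarray(times, float)
    S = np.empty((len(times), len(theta)))
    for j, p in enumerate(theta):
        h = 1e-5 * p
        up, dn = list(theta), list(theta)
        up[j] += h
        dn[j] -= h
        S[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * h)
    return np.linalg.det(S.T @ S)

candidates = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24])   # hours post-dose
best = max(combinations(candidates, 3), key=fim_det)        # exhaustive D-optimal search
```

For realistic grids the exhaustive search explodes combinatorially, which is exactly why exchange algorithms like the modified Fedorov method mentioned above are used instead.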
Stauffer, Eric
2006-09-01
This paper reviews the literature on the analysis of vegetable (and animal) oil residues from fire debris samples. The examination sequence starts with the solvent extraction of the residues from the substrate. The extract is then prepared for instrumental analysis by derivatizing fatty acids (FAs) into fatty acid methyl esters. The analysis is then carried out by gas chromatography or gas chromatography-mass spectrometry. The interpretation of the results is a difficult operation seriously limited by a lack of research on the subject. The present data analysis scheme utilizes FA ratios to determine the presence of vegetable oils and their propensity to self-heat and possibly, to spontaneously ignite. Preliminary work has demonstrated that it is possible to detect chemical compounds specific to an oil that underwent spontaneous ignition. Guidelines to conduct future research in the analysis of vegetable oil residues from fire debris samples are also presented.
NASA Astrophysics Data System (ADS)
Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.
2016-06-01
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
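The "maximum information, minimum redundancy" idea above can be sketched with histogram-based entropies: greedily add the candidate cross section whose stage series carries the most entropy penalized by its mutual information with the sections already chosen. This MIMR-style scoring rule and the synthetic data are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def entropy(x, bins=8):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(x, y, bins=8):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

def greedy_select(series, k):
    """Start with the highest-entropy series, then repeatedly add the one
    maximizing entropy minus redundancy with the already-chosen set."""
    chosen = [max(range(len(series)), key=lambda i: entropy(series[i]))]
    while len(chosen) < k:
        score = lambda i: entropy(series[i]) - max(
            mutual_info(series[i], series[j]) for j in chosen)
        chosen.append(max((i for i in range(len(series)) if i not in chosen),
                          key=score))
    return chosen

# synthetic stage series at 6 candidate cross sections (toy data)
rng = np.random.default_rng(0)
base = rng.random(200)
series = [base + 0.05 * i * rng.random(200) for i in range(6)]
picked = greedy_select(series, 3)
```

The redundancy term is what spaces the selected sections apart: two nearly identical hydraulic records share high mutual information, so the second one scores poorly.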
Tavakoli, Rouhollah
2016-01-01
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed by combining David Eyre's time stepping scheme with a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. In the case of the latter problem, a penalization strategy is developed to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights:
•Extension of Eyre's convex–concave splitting scheme to multiphase systems.
•Efficient solution of spinodal decomposition in multi-component systems.
•Efficient solution of the least-perimeter periodic space partitioning problem.
•Development of a penalization strategy to avoid trivial solutions.
•Presentation of a MATLAB implementation of the introduced algorithm.
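The flavor of Eyre-type splitting is easy to show in the simplest setting: the binary 1-D Cahn–Hilliard equation with polynomial energy, treated with a linearly stabilized semi-implicit Fourier scheme (stiff biharmonic term implicit, polynomial term explicit plus a stabilization S ≥ max|f''|). This is a sketch of the idea only; the paper's scheme handles the multi-component Cahn–Morral system via a Schur complement, which is not reproduced here.

```python
import numpy as np

def cahn_hilliard_1d(u0, eps=0.05, dt=0.1, steps=200, S=2.0):
    """Linearly stabilized convex-concave (Eyre-type) splitting for
    u_t = Laplace(u**3 - u - eps**2 * Laplace(u)) on a periodic unit
    interval, advanced in Fourier space."""
    n = u0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers * 2*pi
    k2, k4 = k**2, k**4
    u = u0.copy()
    for _ in range(steps):
        uh = np.fft.fft(u)
        # explicit polynomial term, implicit stiff term, stabilization S
        rhs = uh - dt * k2 * np.fft.fft(u**3 - u) + dt * S * k2 * uh
        u = np.real(np.fft.ifft(rhs / (1.0 + dt * S * k2 + dt * eps**2 * k4)))
    return u

rng = np.random.default_rng(1)
u0 = 0.05 * (rng.random(128) - 0.5)   # small perturbation of the mixed state
u = cahn_hilliard_1d(u0)               # spinodal decomposition: u coarsens toward +/-1
```

Two properties carry over from the abstract's claims: the k = 0 mode is untouched, so mass is conserved exactly, and the update stays stable at a time step far larger than an explicit scheme would tolerate.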
Madisch, Ijad; Wölfel, Roman; Harste, Gabi; Pommer, Heidi; Heim, Albert
2006-09-01
Precise typing of human adenoviruses (HAdV) is fundamental for epidemiology and the detection of infection chains. As only a few of the 51 adenovirus types are associated with life-threatening disseminated diseases in immunodeficient patients, detection of one of these types may have prognostic value and lead to immediate therapeutic intervention. A recently published molecular typing scheme consisting of two steps (sequencing of a generic PCR product closely adjacent to loop 1 of the main neutralization determinant epsilon, and for species HAdV-B, -C, and -D the sequencing of loop 2 [Madisch et al., 2005]) was applied to 119 clinical samples. HAdV DNA was typed unequivocally even in cases of culture-negative samples, for example in immunodeficient patients before HAdV causes high virus loads and disseminated disease. Direct typing results demonstrated the predominance of HAdV-1, -2, -5, and -31 in immunodeficient patients, suggesting the significance of the persistence of these viruses for the pathogenesis of disseminated disease. In contrast, HAdV-3 predominated in immunocompetent patients, and cocirculation of four subtypes was demonstrated. Typing of samples from a conjunctivitis outbreak in multiple military barracks demonstrated various HAdV types (2, 4, 8, 19) and not the suspected unique adenovirus etiology. This suggests that our molecular typing scheme will also be useful for epidemiological investigations. In conclusion, our two-step molecular typing system will permit the precise and rapid typing of clinical HAdV isolates and even of HAdV DNA in clinical samples without the need for time-consuming virus isolation prior to typing.
Stemkens, Bjorn; Tijssen, Rob H.N.; Senneville, Baudouin D. de
2015-03-01
Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.
An S/H circuit with parasitics optimized for IF-sampling
NASA Astrophysics Data System (ADS)
Xuqiang, Zheng; Fule, Li; Zhijun, Wang; Weitao, Li; Wen, Jia; Zhihua, Wang; Shigang, Yue
2016-06-01
An IF-sampling S/H is presented, which adopts a flip-around structure, bottom-plate sampling technique and improved input bootstrapped switches. To achieve high sampling linearity over a wide input frequency range, the floating well technique is utilized to optimize the input switches. Besides, techniques of transistor load linearization and layout improvement are proposed to further reduce and linearize the parasitic capacitance. The S/H circuit has been fabricated in a 0.18-μm CMOS process as the front-end of a 14 bit, 250 MS/s pipeline ADC. For a 30 MHz input, the measured SFDR/SNDR of the ADC is 94.7 dB/68.5 dB, which remains above 84.3 dB/65.4 dB for input frequencies up to 400 MHz. The ADC presents excellent dynamic performance at high input frequency, which is mainly attributed to the parasitics-optimized S/H circuit. Project supported by the Shenzhen Project (No. JSGG20150512162029307).
1980-03-01
[Garbled OCR fragment of a March 1980 report; only table-of-contents entries and a table are recoverable. Contents include: Deployer's Cheating Strategy; Characteristics; Legal Distribution; MCPD Distribution; Cooper's Sample and Search. The table compares expected numbers of sets of declared missiles under 0 versus 120 illegal missiles, where MCPD denotes the Minimum Common Probability of Detection.]
Easton, D.F.; Goldgar, D.E.
1994-09-01
As genes underlying susceptibility to human disease are identified through linkage analysis, it is becoming increasingly clear that genetic heterogeneity is the rule rather than the exception. The focus of the present work is to examine the power and optimal sampling design for localizing a second disease gene when one disease gene has previously been identified. In particular, we examined the case when the unknown locus had lower penetrance, but higher frequency, than the known locus. Three scenarios regarding knowledge about locus 1 were examined: no linkage information (i.e. standard heterogeneity analysis), tight linkage with a known highly polymorphic marker locus, and mutation testing. Exact expected LOD scores (ELODs) were calculated for a number of two-locus genetic models under the 3 scenarios of heterogeneity for nuclear families containing 2, 3 or 4 affected children, with 0 or 1 affected parents. A cost function based upon the cost of ascertaining and genotyping sufficient samples to achieve an ELOD of 3.0 was used to evaluate the designs. As expected, the power and the optimal pedigree sampling strategy were dependent on the underlying model and the heterogeneity testing status. When the known locus had higher penetrance than the unknown locus, three affected siblings with unaffected parents proved to be optimal for all levels of heterogeneity. In general, mutation testing at the first locus provided substantially more power for detecting the second locus than linkage evidence alone. However, when both loci had relatively low penetrance, mutation testing provided little improvement in power, since most families could be expected to be segregating the high-risk allele at both loci.
Analysis and Optimization of Bulk DNA Sampling with Binary Scoring for Germplasm Characterization
Reyes-Valdés, M. Humberto; Santacruz-Varela, Amalio; Martínez, Octavio; Simpson, June; Hayano-Kanashiro, Corina; Cortés-Romero, Celso
2013-01-01
The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus. PMID:24260321
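The diversity-minus-noise decomposition above can be made concrete for one binary-scored allele. For a bulk of n diploid plants drawn from a population with allele frequency p, the allele is detected with probability 1 - (1 - p)^(2n); the marker information is then the binary entropy of the mixture minus the mean conditional entropy. The sketch assumes equiprobable source populations, and the frequencies below are toy values, not the maize data.

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def bulk_marker_info(freqs, n_plants):
    """Mutual information (bits) between presence/absence of one allele in a
    bulk of n diploid plants and the source population, assuming
    equiprobable populations: I = H(presence) - H(presence | source)."""
    freqs = np.asarray(freqs, float)
    present = 1.0 - (1.0 - freqs) ** (2 * n_plants)  # P(allele seen in bulk)
    diversity = binary_entropy(present.mean())       # entropy of the mixture
    noise = binary_entropy(present).mean()           # conditional entropy
    return diversity - noise

# toy allele frequencies in four hypothetical populations
f = [0.05, 0.40, 0.80, 0.01]
info = {n: bulk_marker_info(f, n) for n in (1, 5, 10, 30, 100)}
```

The dilution problem is visible directly: for very large bulks the allele is detected almost everywhere, the presence signal stops discriminating among sources, and the information drops, which motivates the divided-bulk design discussed in the abstract.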
Optimization of multi-channel neutron focusing guides for extreme sample environments
NASA Astrophysics Data System (ADS)
Di Julio, D. D.; Lelièvre-Berna, E.; Courtois, P.; Andersen, K. H.; Bentley, P. M.
2014-07-01
In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
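Particle-swarm optimization, one of the population-based algorithms investigated above, is compact enough to sketch in full. The real objective there was a Monte-Carlo ray-tracing figure of merit; here a toy quadratic "gain surface" stands in for it, so everything below the function definition is illustrative.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle-swarm minimizer: inertia w, cognitive pull c1
    toward each particle's best, social pull c2 toward the swarm best."""
    dim = len(bounds)
    rnd = random.Random(0)
    X = [[rnd.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = pbest[min(range(n_particles), key=lambda i: pval[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rnd.random() * (pbest[i][d] - X[i][d])
                           + c2 * rnd.random() * (g[d] - X[i][d]))
                # clamp positions to the feasible box
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < f(g):
                    g = X[i][:]
    return g, f(g)

# toy stand-in objective: optimum "channel geometry" at (1.0, 2.0)
best, val = pso(lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2,
                [(-5, 5), (-5, 5)])
```

Because each evaluation of the real objective is an expensive ray trace, the practical appeal of PSO and its cousins (artificial bee colony, differential evolution) is that they need only function values, no gradients.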
In-line e-beam inspection with optimized sampling and newly developed ADC
NASA Astrophysics Data System (ADS)
Ikota, Masami; Miura, Akihiro; Fukunishi, Munenori; Hiroi, Takashi; Sugimoto, Aritoshi
2003-07-01
An electron beam inspection is strongly required for HARI to detect contact and via defects that an optical inspection cannot detect. Conventionally, an e-beam inspection system is used as an analytical tool for checking the process margin. Due to its low throughput speed, it has not been used for in-line QC. Therefore, we optimized the inspection area and developed a new auto defect classification (ADC) to use with e-beam inspection as an in-line inspection tool. A 10% interval scan sampling proved able to estimate defect densities, and inspection could be completed within 1 hour. We specifically adapted the developed ADC for use with e-beam inspection because the voltage contrast images were not sufficiently clear for classifications to be made with a conventional ADC based on defect geometry. The new ADC used the off-pattern area of the defect to discriminate particles from other voltage contrast defects with an accuracy of greater than 90%. Using sampling optimization and the new ADC, we achieved inspection and auto defect review with a throughput of less than one and one-half hours. We implemented the system as a procedure for product defect QC and proved its effectiveness for in-line e-beam inspection.
Mimicry among Unequally Defended Prey Should Be Mutualistic When Predators Sample Optimally.
Aubier, Thomas G; Joron, Mathieu; Sherratt, Thomas N
2017-03-01
Understanding the conditions under which moderately defended prey evolve to resemble better-defended prey and whether this mimicry is parasitic (quasi-Batesian) or mutualistic (Müllerian) is central to our understanding of warning signals. Models of predator learning generally predict quasi-Batesian relationships. However, predators' attack decisions are based not only on learning alone but also on the potential future rewards. We identify the optimal sampling strategy of predators capable of classifying prey into different profitability categories and contrast the implications of these rules for mimicry evolution with a classical Pavlovian model based on conditioning. In both cases, the presence of moderately unprofitable mimics causes an increase in overall consumption. However, in the case of the optimal sampling strategy, this increase in consumption is typically outweighed by the increase in overall density of prey sharing the model appearance (a dilution effect), causing a decrease in mortality. It suggests that if predators forage efficiently to maximize their long-term payoff, genuine quasi-Batesian mimicry should be rare, which may explain the scarcity of evidence for it in nature. Nevertheless, we show that when moderately defended mimics are profitable to attack by hungry predators, then they can be parasitic on their models, just as classical Batesian mimics are.
Ma, Li; Wang, Lin; Tang, Jie; Yang, Zhaoguang
2016-08-01
Statistical experimental designs were employed to optimize the extraction conditions for arsenic species (As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA)) in paddy rice by a simple solvent extraction using water as the extraction reagent. The effects of the variables were estimated by a two-level Plackett-Burman factorial design. A five-level central composite design was subsequently employed to optimize the significant factors. The desirability settings of the significant factors were confirmed as 60 min of shaking time and 85°C extraction temperature, as a compromise between experimental duration and extraction efficiency. The analytical performances, such as linearity, method detection limits, relative standard deviation and recovery, were examined, and these data exhibited a broad linear range, high sensitivity and good precision. The proposed method was applied to real rice samples. The species As(III), As(V) and DMA were detected in all the rice samples, mostly in the order As(III)>As(V)>DMA.
Optimization of a miniaturized DBD plasma chip for mercury detection in water samples.
Abdul-Majeed, Wameath S; Parada, Jaime H Lozano; Zimmerman, William B
2011-11-01
In this work, an optimization study was conducted to investigate the performance of a custom-designed miniaturized dielectric barrier discharge (DBD) microplasma chip to be utilized as a radiation source for mercury determination in water samples. The experimental work was implemented by using experimental design, and the results were assessed by applying statistical techniques. The proposed DBD chip was designed and fabricated in a simple way by using a few microscope glass slides aligned together and held by a Perspex chip holder, which proved useful for miniaturization purposes. Argon gas at 75-180 mL/min was used in the experiments as a discharge gas, while AC power in the range 75-175 W at 38 kHz was supplied to the load from a custom-made power source. A UV-visible spectrometer was used, and the spectroscopic parameters were optimized thoroughly and applied in the later analysis. Plasma characteristics were determined theoretically by analysing the recorded spectroscopic data. The estimated electron temperature (T(e) = 0.849 eV) was found to be higher than the excitation temperature (T(exc) = 0.55 eV) and the rotational temperature (T(rot) = 0.064 eV), which indicates that non-thermal plasma is generated in the proposed chip. Mercury cold vapour generation experiments were conducted according to the experimental plan by examining four parameters (HCl and SnCl(2) concentrations, argon flow rate, and the applied power) and considering the recorded intensity of the mercury line (253.65 nm) as the objective function. Furthermore, an optimization technique and statistical approaches were applied to investigate the individual and interaction effects of the tested parameters on the system performance. The calculated analytical figures of merit (LOD = 2.8 μg/L and RSD = 3.5%) indicate reasonable precision for a system to be adopted as the basis of a miniaturized portable device for mercury detection in water samples.
Noblet, Vincent; Heinrich, Christian; Heitz, Fabrice; Armspach, Jean-Paul
2005-05-01
This paper deals with topology preservation in three-dimensional (3-D) deformable image registration. This work is a nontrivial extension of earlier work addressing the case of two-dimensional (2-D) topology-preserving mappings. In both cases, the deformation map is modeled as a hierarchical displacement field, decomposed on a multiresolution B-spline basis. Topology preservation is enforced by controlling the Jacobian of the transformation. Finding the optimal displacement parameters amounts to solving a constrained optimization problem: the residual energy between the target image and the deformed source image is minimized under constraints on the Jacobian. Unlike the 2-D case, in which simple linear constraints are derived, the 3-D B-spline-based deformable mapping yields a difficult (until now, unsolved) optimization problem. In this paper, we tackle the problem by resorting to interval analysis optimization techniques. Care is taken to keep the computational burden as low as possible. Results on multipatient 3-D MRI registration illustrate the ability of the method to preserve topology on the continuous image domain.
Nassar, Ala F; Wisnewski, Adam V; Raddassi, Khadir
2017-03-01
Analysis of multiplexed assays is highly important for clinical diagnostics and other analytical applications. Mass cytometry enables multi-dimensional, single-cell analysis of cell type and state. In mass cytometry, the rare earth metals used as reporters on antibodies allow determination of marker expression in individual cells. Barcode-based bioassays for CyTOF are able to encode and decode for different experimental conditions or samples within the same experiment, facilitating progress in producing straightforward and consistent results. Herein, an integrated protocol for automated sample preparation for barcoding used in conjunction with mass cytometry for clinical bioanalysis samples is described; we offer results of our work with barcoding protocol optimization. In addition, we present some points to be considered in order to minimize the variability of quantitative mass cytometry measurements. For example, we discuss the importance of having multiple populations during titration of the antibodies and effect of storage and shipping of labelled samples on the stability of staining for purposes of CyTOF analysis. Data quality is not affected when labelled samples are stored either frozen or at 4 °C and used within 10 days; we observed that cell loss is greater if cells are washed with deionized water prior to shipment or are shipped in lower concentration. Once the labelled samples for CyTOF are suspended in deionized water, the analysis should be performed expeditiously, preferably within the first hour. Damage can be minimized if the cells are resuspended in phosphate-buffered saline (PBS) rather than deionized water while waiting for data acquisition.
NASA Astrophysics Data System (ADS)
Chu, Jou-Mei
The Fleet Level Environmental Evaluation Tool (FLEET) can assess the impacts of various levels of technology and environmental policies on fleet-level carbon emissions and airline operations. FLEET consists of different models to mimic airlines' behaviors and a resource allocation problem to simulate airlines' aircraft deployments on their networks. Additionally, the Multiactors Biofuel Model can conduct biofuel life-cycle assessments, evaluate biofuel developments, and assess the effects of new technology on biofuel production costs and unit carbon emissions. In addition, the European Union (EU) initiated an Emission Trading Scheme (ETS) in the European Economic Area, while the International Civil Aviation Organization (ICAO) is designing a Global Market-Based Measure (GMBM) scheme to limit civil aviation fleet-level carbon emissions after 2021. This work integrates FLEET and the Multiactors Biofuel Model to investigate the interactions between airline operations, biofuel production chains, and environmental policies. The interfaces between the two models are a bio-refinery firm profit-maximization problem and a farmers' profit-maximization problem. The two maximization problems mimic the behaviors of bio-refinery firms and farmers based on environmental policies, airline performance, and biofuel developments. In the current study, the impacts of biofuels on fleet-level emissions were found to be limited, owing to the mismatch between biofuel demand and the distribution of feedstock resources and supplies. Furthermore, the main driving factor for biofuel development besides newer technologies was identified: conventional jet fuel prices have complex impacts on biofuel development, because they increase biofuel prices and decrease potential biofuel demand at the same time. In the end, with simplified EU ETS and ICAO GMBM models, the integrated tool shows that the EU ETS model yields lower emissions in a short
Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.
Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène
2016-06-01
Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L(-1) and 10% for 10 mBq L(-1). While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L(-1), a conservative experimental estimate is rather 5 mBq L(-1), corresponding to 0.14 fg g(-1). The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported.
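The role of "sufficiently long counting times" follows directly from Poisson counting statistics. The sketch below uses the textbook error model for a net count with a separately measured background of equal duration, var(net) = gross + background; the detection efficiency, background rate and sample volume are illustrative assumptions, not the paper's calibration, and decay and emanation corrections are omitted.

```python
import math

def relative_uncertainty(activity_mBq_L, volume_L, efficiency,
                         background_cps, count_time_s):
    """1-sigma relative uncertainty of the net count when signal and
    background are counted for the same duration (Poisson statistics)."""
    rate = activity_mBq_L * 1e-3 * volume_L * efficiency   # net signal, counts/s
    net = rate * count_time_s
    var = (rate + background_cps) * count_time_s + background_cps * count_time_s
    return math.sqrt(var) / net

# longer counting shrinks the relative uncertainty like 1/sqrt(t)
u_1d = relative_uncertainty(100, 0.1, 0.5, 0.01, 86400)        # one day
u_4d = relative_uncertainty(100, 0.1, 0.5, 0.01, 4 * 86400)    # four days
```

Quadrupling the counting time halves the relative uncertainty, which is why week-long counts make sub-10 mBq/L activities accessible with simple scintillation flasks.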
NASA Astrophysics Data System (ADS)
Santaren, D.; Peylin, P.; Bacour, C.; Ciais, P.; Longdoz, B.
2014-12-01
Terrestrial ecosystem models can provide major insights into the responses of Earth's ecosystems to environmental changes and rising levels of atmospheric CO2. To achieve this goal, biosphere models need mechanistic formulations of the processes that drive the ecosystem functioning from diurnal to decadal timescales. However, the subsequent complexity of model equations is associated with unknown or poorly calibrated parameters that limit the accuracy of long-term simulations of carbon or water fluxes and their interannual variations. In this study, we develop a data assimilation framework to constrain the parameters of a mechanistic land surface model (ORCHIDEE) with eddy-covariance observations of CO2 and latent heat fluxes made during the years 2001-2004 at the temperate beech forest site of Hesse, in eastern France. As a first technical issue, we show that for a complex process-based model such as ORCHIDEE with many (28) parameters to be retrieved, a Monte Carlo approach (genetic algorithm, GA) provides more reliable optimal parameter values than a gradient-based minimization algorithm (variational scheme). The GA allows the global minimum to be found more efficiently, whilst the variational scheme often provides values corresponding to local minima. The ORCHIDEE model is then optimized for each year, and for the whole 2001-2004 period. We first find that a reduced (<10) set of parameters can be tightly constrained by the eddy-covariance observations, with a typical error reduction of 90%. We then show that including contrasted weather regimes (dry in 2003 and wet in 2002) is necessary to optimize a few specific parameters (like the temperature dependence of the photosynthetic activity). Furthermore, we find that parameters inverted from 4 years of flux measurements are successful at enhancing the model fit to the data on several timescales (from monthly to interannual), resulting in a typical modeling efficiency of 92% over the 2001-2004 period (Nash
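The GA-versus-gradient point above is easy to demonstrate on a multimodal surface. Below is a bare-bones real-coded genetic algorithm (truncation selection, blend crossover, Gaussian mutation, elitism) run on the Rastrigin function, a standard stand-in for a rugged cost landscape; the operators, rates and test surface are all assumptions for illustration, not the study's actual GA or the ORCHIDEE cost function.

```python
import numpy as np

def rastrigin(x):
    """Multimodal test surface: global minimum 0 at the origin,
    surrounded by a lattice of local minima that trap gradient descent."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def genetic_algorithm(f, dim=4, pop=60, gens=150, sigma=0.3, rng=0):
    """Bare-bones real-coded GA: truncation selection, blend crossover,
    Gaussian mutation, and elitism (best individual always survives)."""
    r = np.random.default_rng(rng)
    P = r.uniform(-5.12, 5.12, (pop, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, P)
        idx = np.argsort(fit)
        elite = P[idx[: pop // 2]]                       # keep the better half
        a = elite[r.integers(0, pop // 2, pop)]
        b = elite[r.integers(0, pop // 2, pop)]
        w = r.random((pop, 1))
        P = w * a + (1 - w) * b + r.normal(0, sigma, (pop, dim))  # blend + mutate
        P[0] = elite[0]                                  # elitism
    fit = np.apply_along_axis(f, 1, P)
    return P[np.argmin(fit)], fit.min()

best, val = genetic_algorithm(rastrigin)
```

A gradient method started from a random point in the same box would typically stall in the nearest lattice minimum; the population and mutation noise are what let the GA keep sampling distant basins, mirroring the behavior reported for the 28-parameter ORCHIDEE inversion.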
Silvestre, A M; Petim-Batista, F; Colaço, J
2006-05-01
Daily milk yield over the course of the lactation follows a curvilinear pattern, so a suitable function is required to model this curve. In this study, 7 functions (Wood, Wilmink, Ali and Schaeffer, cubic splines, and 3 Legendre polynomials) were used to model the lactation curve at the phenotypic level, using both daily observations and data from commonly used recording schemes. The number of observations per lactation varied from 4 to 11. Several criteria based on the analysis of the real error were used to compare models. The performance of models showed few discrepancies in the comparison criteria when daily or 4-weekly (with first test at days in milk 8) data by lactation were used. The performance of the Wood, Wilmink, and Ali and Schaeffer models were highly affected by the reduction of the sample dimension. The results of this work support the idea that the performance of these models depends on the sample properties but also shows considerable variation within the sampling groups.
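The Wood function, the first of the models compared above, is y(t) = a·t^b·exp(-c·t) and can be fitted by ordinary least squares after a log transform, since ln y = ln a + b·ln t - c·t is linear in the coefficients. The synthetic "lactation" data below are an illustration with made-up parameters, not the study's records.

```python
import numpy as np

def fit_wood(t, y):
    """Fit the Wood lactation curve y = a * t**b * exp(-c*t) by ordinary
    least squares on the log scale: ln y = ln a + b*ln t - c*t."""
    X = np.column_stack([np.ones_like(t), np.log(t), t])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1], -coef[2]   # a, b, c

# synthetic daily yields from known parameters plus mild multiplicative noise
t = np.arange(5.0, 306.0)
rng = np.random.default_rng(3)
y = 15.0 * t**0.25 * np.exp(-0.004 * t) * np.exp(rng.normal(0, 0.02, t.size))
a, b, c = fit_wood(t, y)
```

With daily data the three parameters are recovered tightly; the study's finding is essentially that this stops being true once the curve is sampled at only 4 test-day records, where such parametric models degrade faster than splines.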
Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate
Brunelli, Davide; Caione, Carlo
2015-01-01
Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of CS when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction, exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring. PMID:26184203
Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate.
Brunelli, Davide; Caione, Carlo
2015-07-10
Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of CS when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction, exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring.
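The core CS idea above, recovering a sparse signal from far fewer measurements than its length, can be illustrated with a toy 1-sparse example. This is a sketch of a single matched-filter step (the inner loop of greedy solvers such as OMP), not the paper's reconstruction algorithms; all sizes and values are assumptions.

```python
import random

random.seed(7)

N, M = 64, 32          # signal length vs number of sub-Nyquist measurements
k_true = 23            # index of the single nonzero entry (assumed 1-sparse)
amp = 5.0

# Dense Gaussian sensing matrix A (M x N), as with dense encoding on a node.
A = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(M)]

# Sparse signal x and compressed measurements y = A @ x.
x = [0.0] * N
x[k_true] = amp
y = [sum(A[i][j] * x[j] for j in range(N)) for i in range(M)]

# One matched-filter step (the core of greedy recovery): the column of A
# most correlated with y identifies the support of a 1-sparse signal.
corr = [abs(sum(A[i][j] * y[i] for i in range(M))) for j in range(N)]
k_hat = max(range(N), key=lambda j: corr[j])
```

With a dense random matrix and enough measurements, the correlation peak lands on the true support index, which is why random embeddings are a reasonable default on constrained nodes.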
Fünfstück, Tillmann; Arandjelovic, Mimi; Morgan, David B; Sanz, Crickette; Reed, Patricia; Olson, Sarah H; Cameron, Ken; Ondzie, Alain; Peeters, Martine; Vigilant, Linda
2015-02-01
Populations of an organism living in marked geographical or evolutionary isolation from other populations of the same species are often termed subspecies and are expected to show some degree of genetic distinctiveness. The common chimpanzee (Pan troglodytes) is currently described as four geographically delimited subspecies: the western (P. t. verus), the Nigeria-Cameroon (P. t. ellioti), the central (P. t. troglodytes) and the eastern (P. t. schweinfurthii) chimpanzees. Although these taxa would be expected to be reciprocally monophyletic, studies have not always consistently resolved the central and eastern chimpanzee taxa. Most studies, however, used data from individuals of unknown or approximate geographic provenance. Thus, genetic data from samples of known origin may shed light on the evolutionary relationship of these subspecies. We generated microsatellite genotypes from noninvasively collected fecal samples of 185 central chimpanzees sampled across large parts of their range and analyzed them together with 283 published eastern chimpanzee genotypes from known localities. We observed a clear signal of isolation by distance across both subspecies. Further, we found that a large proportion of comparisons between groups taken from the same subspecies showed higher genetic differentiation than the least differentiated between-subspecies comparison. This proportion decreased substantially when we simulated a more clumped sampling scheme by including fewer groups. Our results support the general concept that the distribution of the sampled individuals can dramatically affect the inference of genetic population structure. With regard to chimpanzees, our results emphasize the close relationship of chimpanzees from central and eastern equatorial Africa and the difficult nature of subspecies definitions.
Menikarachchi, Lochana C; Gascón, José A
2008-06-01
This work presents new developments of the moving-domain QM/MM (MoD-QM/MM) method for modeling protein electrostatic potentials. The underlying goal of the method is to map the electronic density of a specific protein configuration into a point-charge distribution. Important modifications of the general strategy of the MoD-QM/MM method involve new partitioning and fitting schemes and the incorporation of dynamic effects via a single-step free energy perturbation (FEP) approach. Selection of moderately sized QM domains partitioned between the Cα and the carbonyl C atoms, with incorporation of delocalization of electrons over neighboring domains, results in a marked improvement of the calculated molecular electrostatic potential (MEP). More importantly, we show that the evaluation of the electrostatic potential can be carried out in a dynamic framework by evaluating the free energy difference between a non-polarized MEP and a polarized MEP. A simplified form of the potassium ion channel protein gramicidin A from Bacillus brevis is used as the model system for the calculation of the MEP.
NASA Technical Reports Server (NTRS)
Emmitt, G. D.; Seze, G.
1991-01-01
Simulated cloud/hole fields as well as Landsat imagery are used in a computer model to evaluate several proposed sampling patterns and shot management schemes for pulsed space-based Doppler lidars. Emphasis is placed on two proposed sampling strategies: one obtained from a conically scanned single telescope and the other from four fixed telescopes that are sequentially used by one laser. The question of whether there are any sampling patterns that maximize the number of resolution areas with vertical soundings into the planetary boundary layer (PBL) is addressed.
Severtson, Dustin; Flower, Ken; Nansen, Christian
2016-08-01
The cabbage aphid is a significant pest worldwide in brassica crops, including canola. This pest has shown considerable ability to develop resistance to insecticides, so these should only be applied on a "when and where needed" basis. Thus, optimized sampling plans to accurately assess cabbage aphid densities are critically important to determine the potential need for pesticide applications. In this study, we developed a spatially optimized binomial sequential sampling plan for cabbage aphids in canola fields. Based on five sampled canola fields, sampling plans were developed using 0.1, 0.2, and 0.3 proportions of plants infested as action thresholds. Average sample numbers required to make a decision ranged from 10 to 25 plants. Decreasing acceptable error from 10 to 5% was not considered practically feasible, as it substantially increased the number of samples required to reach a decision. We determined the relationship between the proportions of canola plants infested and cabbage aphid densities per plant, and proposed a spatially optimized sequential sampling plan for cabbage aphids in canola fields, in which spatial features (i.e., edge effects) and optimization of sampling effort (i.e., sequential sampling) are combined. Two forms of stratification were performed to reduce spatial variability caused by edge effects and large field sizes. Spatially optimized sampling, starting at the edge of fields, reduced spatial variability and therefore increased the accuracy of infested plant density estimates. The proposed spatially optimized sampling plan may be used to spatially target insecticide applications, resulting in cost savings, insecticide resistance mitigation, conservation of natural enemies, and reduced environmental impact.
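A common basis for the binomial sequential sampling plans described above is Wald's sequential probability ratio test (SPRT), which decides plant-by-plant whether the infested proportion is above or below an action threshold. The sketch below uses illustrative thresholds and error rates, not the plan fitted in the study.

```python
import math

# Wald's SPRT boundaries for a binomial sequential sampling plan.
p0, p1 = 0.10, 0.30          # below-threshold vs above-threshold infestation
alpha = beta = 0.10          # acceptable error rates (illustrative)

upper = math.log((1 - beta) / alpha)    # cross above -> decide "treat"
lower = math.log(beta / (1 - alpha))    # cross below -> decide "no treat"

def classify(observations):
    """Walk plant-by-plant; return the decision and plants inspected."""
    llr = 0.0
    for n, infested in enumerate(observations, start=1):
        if infested:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "treat", n
        if llr <= lower:
            return "no_treat", n
    return "continue", len(observations)

decision, n_used = classify([1, 1, 1, 1])  # four infested plants in a row
```

A run of infested plants triggers a "treat" decision after only a few samples, which is why sequential plans reach decisions with the small average sample numbers (10 to 25 plants) reported above.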
NASA Astrophysics Data System (ADS)
Chen, Haizhou; Wang, Jiaxu; Li, Junyang; Tang, Baoping
2017-03-01
This paper presents a new scheme for rolling bearing fault diagnosis using texture features extracted from time-frequency representations (TFRs) of the signal. To derive the proposed texture features, firstly, adaptive optimal kernel time-frequency representation (AOK-TFR) is applied to extract TFRs of the signal, which essentially describe the energy distribution characteristics of the signal over the time and frequency domains. Since AOK-TFR uses a signal-dependent radially Gaussian kernel that adapts over time, it can track minor variations in the signal and provide excellent time-frequency concentration in noisy environments. Simulation experiments are also performed to compare it with common time-frequency analysis methods under different noise conditions. Secondly, the uniform local binary pattern (uLBP), a computationally simple and noise-resistant texture analysis method, is used to calculate histograms from the TFRs that characterize rolling bearing fault information. Finally, the obtained histogram feature vectors are input into a multi-SVM classifier for pattern recognition. We validate the effectiveness of the proposed scheme through several experiments; comparative results demonstrate that the new fault diagnosis technique performs better than most state-of-the-art techniques, and the proposed algorithm possesses adaptivity and noise-resistance qualities that could be very useful in real industrial applications.
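The uLBP step above relies on the standard "uniform pattern" criterion: an 8-bit local binary pattern is uniform if its circular bit string has at most two 0/1 transitions. A minimal sketch of that criterion and the resulting histogram size follows (this is the generic uLBP definition, not the paper's full pipeline).

```python
def is_uniform(pattern, bits=8):
    """An LBP code is 'uniform' if its circular bit string has at most
    two 0/1 transitions (the uLBP criterion used for texture histograms)."""
    transitions = sum(
        ((pattern >> i) & 1) != ((pattern >> ((i + 1) % bits)) & 1)
        for i in range(bits)
    )
    return transitions <= 2

# For 8 neighbours there are exactly 58 uniform patterns; the uLBP
# histogram keeps one bin per uniform code plus one shared bin for the rest.
uniform_codes = [p for p in range(256) if is_uniform(p)]
n_bins = len(uniform_codes) + 1
```

Collapsing the 198 non-uniform codes into a single bin is what makes the 59-bin uLBP histogram both compact and robust to noise in the TFR image.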
Baykal, Cenk; Torres, Luis G; Alterovitz, Ron
2015-09-28
Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.
Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning
Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron
2015-01-01
Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790
Clague, D; Weisgraber, T; Rockway, J; McBride, K
2006-02-12
The focus of the research effort described here is to develop novel simulation tools to address design and optimization needs in the general class of problems that involve species and fluid (liquid and gas phase) transport through sieving media. This was primarily motivated by the heightened attention on Chem/Bio early-detection systems, which, among other needs, require high-efficiency filtration, collection, and sample preparation systems. Hence, the goal was to develop the computational analysis tools necessary to optimize these critical operations. This new capability is designed to characterize system efficiencies based on the details of the microstructure and environmental effects. To accomplish this, new lattice Boltzmann simulation capabilities were developed to include detailed microstructure descriptions, the relevant surface forces that mediate species capture and release, and temperature effects for both liquid- and gas-phase systems. While developing the capability, actual demonstration and model systems (and subsystems) of national and programmatic interest were targeted to demonstrate the capability. As a result, where possible, experimental verification of the computational capability was performed either directly, using Digital Particle Image Velocimetry, or against published results.
A Procedure to Determine the Optimal Sensor Positions for Locating AE Sources in Rock Samples
NASA Astrophysics Data System (ADS)
Duca, S.; Occhiena, C.; Sambuelli, L.
2015-03-01
Within a research work aimed to better understand frost weathering mechanisms of rocks, laboratory tests have been designed to specifically assess a theoretical model of crack propagation due to ice segregation process in water-saturated and thermally microcracked cubic samples of Arolla gneiss. As the formation and growth of microcracks during freezing tests on rock material is accompanied by a sudden release of stored elastic energy, the propagation of elastic waves can be detected, at the laboratory scale, by acoustic emission (AE) sensors. The AE receiver array geometry is a sensitive factor influencing source location errors, for it can greatly amplify the effect of small measurement errors. Despite the large literature on the AE source location, little attention, to our knowledge, has been paid to the description of the experimental design phase. As a consequence, the criteria for sensor positioning are often not declared and not related to location accuracy. In the present paper, a tool for the identification of the optimal sensor position on a cubic shape rock specimen is presented. The optimal receiver configuration is chosen by studying the condition numbers of each of the kernel matrices, used for inverting the arrival time and finding the source location, and obtained for properly selected combinations between sensors and sources positions.
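The condition-number criterion described above can be illustrated in two dimensions: for a travel-time location problem, the kernel (Jacobian) rows are the unit vectors from the source to each sensor divided by the wave velocity, and a well-spread receiver array yields a much better-conditioned matrix than a clustered one. This is a simplified 2-D sketch, not the cubic-specimen setup of the paper.

```python
import math

def condition_number_2d(sensors, source, v=1.0):
    """Condition number of the travel-time Jacobian for a 2-D AE location
    problem: each row is the unit vector from source to a sensor, / v."""
    rows = []
    for (sx, sy) in sensors:
        dx, dy = sx - source[0], sy - source[1]
        d = math.hypot(dx, dy)
        rows.append((dx / (d * v), dy / (d * v)))
    # Normal matrix A^T A (2x2 symmetric) and its closed-form eigenvalues.
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    c = sum(r[1] * r[1] for r in rows)
    disc = math.sqrt(((a - c) / 2) ** 2 + b * b)
    lam_max = (a + c) / 2 + disc
    lam_min = (a + c) / 2 - disc
    return math.sqrt(lam_max / max(lam_min, 1e-15))  # cond of A itself

source = (0.0, 0.0)
spread = [(1, 0), (0, 1), (-1, 0), (0, -1)]             # sensors surround source
clustered = [(1, 0.0), (1, 0.1), (1, -0.1), (1, 0.05)]  # nearly collinear
```

Sensors surrounding the source give a condition number of 1 (isotropic constraint), while a one-sided array inflates small arrival-time errors along the poorly constrained direction, which is exactly why receiver geometry must be designed before the test.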
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
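The hardware advantage of a Boolean sampling matrix noted above is that the projection y = Bx reduces to sums of selected samples, with no multipliers. The sketch below shows a random Boolean matrix (a baseline, not the paper's data-driven optimization), with illustrative dimensions.

```python
import random

random.seed(1)

def boolean_sampling_matrix(m, n, density=0.5):
    """Random Boolean (0/1) sampling matrix: each measurement is a plain
    sum of a subset of samples, so acquisition needs only adders."""
    return [[1 if random.random() < density else 0 for _ in range(n)]
            for _ in range(m)]

def measure(B, x):
    """y = B @ x, computable with additions only (no multiplications)."""
    return [sum(xj for bj, xj in zip(row, x) if bj) for row in B]

m, n = 32, 128                      # illustrative embedding sizes
B = boolean_sampling_matrix(m, n)
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = measure(B, x)
```

A data-driven method like the one described would replace the random `B` with one optimized for the signal class; the acquisition arithmetic, and hence the energy argument, stays the same.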
NASA Astrophysics Data System (ADS)
Santaren, D.; Peylin, P.; Bacour, C.; Ciais, P.; Longdoz, B.
2013-11-01
Terrestrial ecosystem models can provide major insights into the responses of Earth's ecosystems to environmental changes and rising levels of atmospheric CO2. To achieve this goal, biosphere models need mechanistic formulations of the processes that drive the ecosystem functioning from diurnal to decadal time-scales. However, the subsequent complexity of model equations is associated with unknown or poorly calibrated parameters that limit the accuracy of long-term simulations of carbon or water fluxes and their inter-annual variations. In this study, we develop a data assimilation framework to constrain the parameters of a mechanistic land surface model (ORCHIDEE) with eddy-covariance observations of CO2 and latent heat fluxes made during the years 2001-2004 at the temperate beech forest site of Hesse, in eastern France. As a first technical issue, we show that for a complex process-based model such as ORCHIDEE with many (28) parameters to be retrieved, a Monte Carlo approach (genetic algorithm, GA) provides more reliable optimal parameter values than a gradient-based minimization algorithm (variational scheme). The GA allows the global minimum to be found more efficiently, whilst the variational scheme often provides values relative to local minima. The ORCHIDEE model is then optimized for each year, and for the whole 2001-2004 period. We first find that a reduced (<10) set of parameters can be tightly constrained by the eddy-covariance observations, with a typical error reduction of 90%. We then show that including contrasted weather regimes (dry in 2003 and wet in 2002) is necessary to optimize a few specific parameters (like the temperature dependence of the photosynthetic activity). Furthermore, we find that parameters inverted from four years of flux measurements are successful at enhancing the model fit to the data at several time-scales (from monthly to interannual), resulting in a typical modeling efficiency of 92% over the 2001-2004 period (Nash
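The GA-versus-gradient finding above rests on a general property: on a rugged misfit surface, a population-based search can escape local minima that trap a local descent. The toy sketch below minimizes a multimodal 1-D test function with a tiny real-coded GA; it is illustrative only and is not the ORCHIDEE assimilation code.

```python
import math
import random

random.seed(3)

def objective(x):
    """Multimodal 1-D test function (Rastrigin-like): many local minima,
    global minimum at x = 0. A stand-in for a rugged model-data misfit."""
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def genetic_minimize(f, lo=-5.0, hi=5.0, pop=40, gens=80):
    """Tiny real-coded GA: tournament selection, blend crossover, Gaussian
    mutation, elitism. Hyperparameters are assumed, not from the study."""
    population = [random.uniform(lo, hi) for _ in range(pop)]
    best = min(population, key=f)
    for _ in range(gens):
        nxt = [best]                                      # elitism
        while len(nxt) < pop:
            a = min(random.sample(population, 3), key=f)  # tournament
            b = min(random.sample(population, 3), key=f)
            w = random.random()
            child = w * a + (1.0 - w) * b                 # blend crossover
            if random.random() < 0.2:                     # mutation
                child += random.gauss(0.0, 0.5)
            nxt.append(min(max(child, lo), hi))
        population = nxt
        best = min(population, key=f)
    return best

x_best = genetic_minimize(objective)
```

A pure gradient descent started at, say, x = 3 would settle in the nearby local basin; the GA's random initial population and mutation let it sample across basins, mirroring the GA/variational contrast reported for the 28-parameter ORCHIDEE inversion.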
Kasahara, Kota; Ma, Benson; Goto, Kota; Dasgupta, Bhaskar; Higo, Junichi; Fukuda, Ikuo; Mashimo, Tadaaki; Akiyama, Yutaka; Nakamura, Haruki
2016-01-01
Molecular dynamics (MD) is a promising computational approach to investigate the dynamical behavior of molecular systems at the atomic level. Here, we present a new MD simulation engine named “myPresto/omegagene” that is tailored for enhanced conformational sampling methods with a non-Ewald electrostatic potential scheme. Our enhanced conformational sampling methods, e.g., the virtual-system-coupled multi-canonical MD (V-McMD) method, replace a multi-process parallelized run with multiple independent runs to avoid inter-node communication overhead. In addition, adopting the non-Ewald-based zero-multipole summation method (ZMM) makes it possible to eliminate the Fourier-space calculations altogether. The combination of these state-of-the-art techniques realizes efficient and accurate calculations of the conformational ensemble at an equilibrium state. Taking advantage of these features, myPresto/omegagene is specialized for single-process execution with a graphics processing unit (GPU). We performed benchmark simulations for the 20-mer peptide, Trp-cage, with explicit solvent. One of the most thermodynamically stable conformations generated by the V-McMD simulation is very similar to an experimentally solved native conformation. Furthermore, the computation speed is four times faster than that of our previous simulation engine, myPresto/psygene-G. The new simulator, myPresto/omegagene, is freely available at the following URLs: http://www.protein.osaka-u.ac.jp/rcsfp/pi/omegagene/ and http://presto.protein.osaka-u.ac.jp/myPresto4/. PMID:27924276
Kasahara, Kota; Ma, Benson; Goto, Kota; Dasgupta, Bhaskar; Higo, Junichi; Fukuda, Ikuo; Mashimo, Tadaaki; Akiyama, Yutaka; Nakamura, Haruki
2016-01-01
Molecular dynamics (MD) is a promising computational approach to investigate the dynamical behavior of molecular systems at the atomic level. Here, we present a new MD simulation engine named "myPresto/omegagene" that is tailored for enhanced conformational sampling methods with a non-Ewald electrostatic potential scheme. Our enhanced conformational sampling methods, e.g., the virtual-system-coupled multi-canonical MD (V-McMD) method, replace a multi-process parallelized run with multiple independent runs to avoid inter-node communication overhead. In addition, adopting the non-Ewald-based zero-multipole summation method (ZMM) makes it possible to eliminate the Fourier-space calculations altogether. The combination of these state-of-the-art techniques realizes efficient and accurate calculations of the conformational ensemble at an equilibrium state. Taking advantage of these features, myPresto/omegagene is specialized for single-process execution with a graphics processing unit (GPU). We performed benchmark simulations for the 20-mer peptide, Trp-cage, with explicit solvent. One of the most thermodynamically stable conformations generated by the V-McMD simulation is very similar to an experimentally solved native conformation. Furthermore, the computation speed is four times faster than that of our previous simulation engine, myPresto/psygene-G. The new simulator, myPresto/omegagene, is freely available at the following URLs: http://www.protein.osaka-u.ac.jp/rcsfp/pi/omegagene/ and http://presto.protein.osaka-u.ac.jp/myPresto4/.
Li, Mu; Rai, Alex J; DeCastro, G Joel; Zeringer, Emily; Barta, Timothy; Magdaleno, Susan; Setterquist, Robert; Vlassov, Alexander V
2015-10-01
Exosomes are RNA- and protein-containing nanovesicles secreted by all cell types and found in abundance in body fluids, including blood, urine and cerebrospinal fluid. These vesicles seem to be a perfect source of biomarkers, as their cargo largely reflects the content of parental cells, and exosomes originating from all organs can be obtained from circulation through minimally invasive or non-invasive means. Here we describe an optimized procedure for exosome isolation and analysis using clinical samples, starting from quick and robust extraction of exosomes with Total Exosome Isolation reagent, then isolation of RNA followed by qRT-PCR. The effectiveness of this workflow is exemplified by analysis of the miRNA content of exosomes derived from serum samples obtained from patients with metastatic prostate cancer, treated prostate cancer patients who have undergone prostatectomy, and control patients without prostate cancer. Three promising exosomal microRNA biomarkers discriminating these groups were identified: hsa-miR-375, hsa-miR-21 and hsa-miR-574.
A simple optimized microwave digestion method for multielement monitoring in mussel samples
NASA Astrophysics Data System (ADS)
Saavedra, Y.; González, A.; Fernández, P.; Blanco, J.
2004-04-01
With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards, with matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels and has been found to perform well.
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
2001-01-01
A closed-loop optimal neural-network controller technique was developed to optimize rotorcraft aeromechanical behaviour. This technique utilizes a neural-network scheme to provide a general non-linear model of the rotorcraft. A modern constrained optimisation method is used to determine and update the constants in the neural-network plant model as well as to determine the optimal control vector. Current data is read, weighted, and added to a sliding data window. When the specified maximum number of data sets allowed in the data window is exceeded, the oldest data set is discarded and the remaining data sets are re-weighted. This procedure provides at least four additional degrees-of-freedom, in addition to the size and geometry of the neural-network itself, with which to optimize the overall operation of the controller. These additional degrees-of-freedom are: (1) the maximum length of the sliding data window, (2) the frequency of neural-network updates, (3) the weighting of the individual data sets within the sliding window, and (4) the maximum number of optimisation iterations used for the neural-network updates.
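The sliding data window described above (append weighted data, drop the oldest set when full, re-weight the remainder) can be sketched as a small data structure. The decay factor and window length here are assumptions for illustration, not values from the report.

```python
from collections import deque

class SlidingDataWindow:
    """Sketch of the sliding data window described above: new data sets are
    weighted and appended; when the window is full, the oldest set is
    dropped and the remainder re-weighted (decay factor is assumed)."""

    def __init__(self, max_len=8, decay=0.9):
        self.window = deque(maxlen=max_len)   # holds (weight, data) pairs
        self.decay = decay

    def add(self, data, weight=1.0):
        if len(self.window) == self.window.maxlen:
            # Down-weight older sets; the deque then drops the oldest
            # automatically when the new set is appended.
            self.window = deque(((w * self.decay, d) for w, d in self.window),
                                maxlen=self.window.maxlen)
        self.window.append((weight, data))

win = SlidingDataWindow(max_len=3)
for i in range(5):
    win.add({"step": i})
```

After five additions to a length-3 window, only the three newest data sets remain, with geometrically decaying weights, which is the mechanism that lets the controller emphasize recent flight data.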
Gostic, T; Klemenc, S; Stefane, B
2009-05-30
The pyrolysis behaviour of pure cocaine base as well as the influence of various additives was studied using conditions that are relevant to the smoking of illicit cocaine by humans. For this purpose an aerobic pyrolysis device was developed and the experimental conditions were optimized. In the first part of our study the optimization of some basic experimental parameters of the pyrolysis was performed, i.e., the furnace temperature, the sampling start time, the heating period, the sampling time, and the air-flow rate through the system. The second part of the investigation focused on the volatile products formed during the pyrolysis of a pure cocaine free base and mixtures of cocaine base and adulterants. The anaesthetics lidocaine, benzocaine, procaine, the analgesics phenacetine and paracetamol, and the stimulant caffeine were used as the adulterants. Under the applied experimental conditions complete volatilization of the samples was achieved, i.e., the residuals of the studied compounds were not detected in the pyrolysis cell. Volatilization of the pure cocaine base showed that the cocaine recovery available for inhalation (adsorbed on traps) was approximately 76%. GC-MS and NMR analyses of the smoke condensate revealed the presence of some additional cocaine pyrolytic products, such as anhydroecgonine methyl ester (AEME), benzoic acid (BA) and carbomethoxycycloheptatrienes (CMCHTs). Experiments with different cocaine-adulterant mixtures showed that the addition of the adulterants changed the thermal behaviour of the cocaine. The most significant of these was the effect of paracetamol. The total recovery of the cocaine (adsorbed on traps and in a glass tube) from the 1:1 cocaine-paracetamol mixture was found to be only 3.0 ± 0.8%, versus 81.4 ± 2.9% for the pure cocaine base. The other adulterants showed less-extensive effects on the recovery of cocaine, but the pyrolysis of the cocaine-procaine mixture led to the formation of some unique pyrolytic products
Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors
Riva-Murray, Karen; Bradley, Paul M.; Journey, Celeste; Brigham, Mark E.; Scudder Eikenberry, Barbara C.; Knightes, Christopher; Button, Daniel T.
2013-01-01
Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hgwater sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hgwater estimates. Models were evaluated for parsimony, using Akaike’s Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg - UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics.
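The two quantities central to the study above, the flow-weighted mean water concentration and the bioaccumulation factor BAF = Hg_fish / Hg_water, are simple to compute. The sketch below uses invented numbers purely for illustration, not data from the 11 study sites.

```python
# Flow-weighted mean water Hg and the resulting bioaccumulation factor.
# All values are illustrative, not from the study.
flows = [2.0, 5.0, 12.0, 4.0]          # discharge at each sampling (m^3/s)
fmehg = [0.05, 0.08, 0.20, 0.10]       # filtered MeHg in water (ng/L)

# Flow-weighted mean: samples taken at high flow count proportionally more.
hg_water = sum(q * c for q, c in zip(flows, fmehg)) / sum(flows)

hg_fish = 250.0                        # game-fish Hg (ng/g wet weight)
baf = hg_fish / hg_water
```

Because high MeHg concentrations often coincide with particular flow conditions (as the study found during the growing season), the flow-weighted mean can differ noticeably from a simple arithmetic mean, which directly changes the computed BAF.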
NASA Astrophysics Data System (ADS)
Mönkölä, Sanna
2013-06-01
This study considers developing numerical solution techniques for the computer simulations of time-harmonic fluid-structure interaction between acoustic and elastic waves. The focus is on the efficiency of an iterative solution method based on a controllability approach and spectral elements. We concentrate on the model, in which the acoustic waves in the fluid domain are modeled by using the velocity potential and the elastic waves in the structure domain are modeled by using displacement. Traditionally, the complex-valued time-harmonic equations are used for solving the time-harmonic problems. Instead of that, we focus on finding periodic solutions without solving the time-harmonic problems directly. The time-dependent equations can be simulated with respect to time until a time-harmonic solution is reached, but the approach suffers from poor convergence. To overcome this challenge, we follow the approach first suggested and developed for the acoustic wave equations by Bristeau, Glowinski, and Périaux. Thus, we accelerate the convergence rate by employing a controllability method. The problem is formulated as a least-squares optimization problem, which is solved with the conjugate gradient (CG) algorithm. Computation of the gradient of the functional is done directly for the discretized problem. A graph-based multigrid method is used for preconditioning the CG algorithm.
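The least-squares controllability problem above is solved with the conjugate gradient (CG) algorithm. A minimal, self-contained CG for a small symmetric positive-definite system is sketched below; it shows only the optimization kernel, not the spectral-element discretization or the multigrid preconditioner.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Plain CG for a symmetric positive-definite system A x = b
    (sketch of the optimization kernel; unpreconditioned)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual r = b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
```

In exact arithmetic CG converges in at most n iterations; in the paper's setting the gradient of the least-squares functional is computed directly for the discretized problem and a graph-based multigrid preconditioner accelerates this same iteration.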
Nie Xiaobo; Liang Jian; Yan Di
2012-12-15
Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient-specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient-specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days, depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the functional form of the coefficients. Eleven head-and-neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during H&N cancer radiotherapy can be represented using the first 3 to 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
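The PCA-based sample-generation idea above, fit eigenvectors to observed daily variations, then draw new samples as mean plus random coefficients times eigenvectors, can be illustrated in two dimensions with a closed-form eigendecomposition. The data here are synthetic and the setup is drastically simplified relative to the 3-D organ geometries in the study.

```python
import math
import random

random.seed(5)

# Synthetic "daily observations" with one dominant mode of variation.
days = 30
obs = []
for _ in range(days):
    t = random.gauss(0.0, 2.0)          # dominant mode coefficient
    obs.append((3.0 + 0.8 * t + random.gauss(0, 0.1),
                1.0 + 0.6 * t + random.gauss(0, 0.1)))

mx = sum(p[0] for p in obs) / days
my = sum(p[1] for p in obs) / days

# 2x2 covariance and its principal eigenpair (closed form for 2-D PCA).
sxx = sum((p[0] - mx) ** 2 for p in obs) / days
syy = sum((p[1] - my) ** 2 for p in obs) / days
sxy = sum((p[0] - mx) * (p[1] - my) for p in obs) / days
disc = math.sqrt(((sxx - syy) / 2) ** 2 + sxy * sxy)
lam1 = (sxx + syy) / 2 + disc                 # largest eigenvalue
theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # principal direction
v1 = (math.cos(theta), math.sin(theta))

def sample_organ():
    """Draw a random organ state: mean + random coefficient * eigenvector."""
    c = random.gauss(0.0, math.sqrt(lam1))
    return (mx + c * v1[0], my + c * v1[1])
```

As in the study, only a few leading eigenvectors are needed when the variation is dominated by a small number of modes; here a single mode already captures almost all of the synthetic variation.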
D'Hondt, Matthias; Van Dorpe, Sylvia; Mehuys, Els; Deforce, Dieter; DeSpiegeleer, Bart
2010-12-01
A sensitive and selective HPLC method for the assay and degradation of salmon calcitonin, a 32-amino acid peptide drug, formulated at low concentrations (400 ppm m/m) in a bioadhesive nasal powder containing polymers, was developed and validated. The sample preparation step was optimized using Plackett-Burman and Onion experimental designs. The response functions evaluated were calcitonin recovery and analytical stability. The best results were obtained by treating the sample with 0.45% (v/v) trifluoroacetic acid at 60 degrees C for 40 min. These extraction conditions did not yield any observable degradation, while a maximum recovery for salmon calcitonin of 99.6% was obtained. The HPLC-UV/MS methods used a reversed-phase C(18) Vydac Everest column, with a gradient system based on aqueous acid and acetonitrile. UV detection, using trifluoroacetic acid in the mobile phase, was used for the assay of calcitonin and related degradants. Electrospray ionization (ESI) ion trap mass spectrometry, using formic acid in the mobile phase, was implemented for the confirmatory identification of degradation products. Validation results showed that the methodology was fit for the intended use, with accuracy of 97.4+/-4.3% for the assay and detection limits for degradants ranging between 0.5 and 2.4%. Pilot stability tests of the bioadhesive powder under different storage conditions showed a temperature-dependent decrease in salmon calcitonin assay value, with no equivalent increase in degradation products, explained by the chemical interaction between salmon calcitonin and the carbomer polymer.
NASA Astrophysics Data System (ADS)
Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang
2010-05-01
Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Given the limited financial resources available for a data acquisition campaign, information needs toward the prediction goal should be satisfied in an efficient and task-specific manner. To find the best among a set of design candidates, an objective function is commonly evaluated, which measures the expected impact of data on prediction confidence, prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters, and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility associated with the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of possible subjective prior assumptions that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) expected conditional variance and (ii) expected relative entropy of a given prediction goal. The
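Criterion (i), expected conditional variance, can be sketched in a GLUE-style Monte-Carlo framework by likelihood-weighting prior realizations and marginalizing over simulated future data values; the toy model below (scalar parameter, squared prediction goal, Gaussian noise of 0.5) is hypothetical, not the paper's subsurface model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy preposterior analysis: expected conditional variance of a prediction z
# under a candidate measurement type, estimated by GLUE-style likelihood
# weighting and marginalized over the yet-unknown data values.
n = 50_000
theta = rng.normal(size=n)                       # prior parameter realizations
z = theta**2                                     # prediction goal
d_sim = theta + rng.normal(scale=0.5, size=n)    # simulated noisy measurements

def expected_conditional_variance(n_outer=200):
    ecv = []
    for i in rng.integers(0, n, n_outer):        # plausible future data values
        w = np.exp(-0.5 * ((d_sim[i] - theta) / 0.5) ** 2)  # Gaussian likelihood
        w /= w.sum()
        mu = np.sum(w * z)
        ecv.append(np.sum(w * (z - mu) ** 2))
    return np.mean(ecv)

prior_var = np.var(z)
ecv = expected_conditional_variance()
print(prior_var, ecv)   # collecting the data is expected to shrink variance
```

A design candidate with a smaller expected conditional variance is preferred; comparing candidates only requires repeating the inner loop per measurement type.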
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-01-01
In the field of computer-aided mammographic mass detection, many different features and classifiers have been tested. Frequently, the relevant features and optimal topology for the artificial neural network (ANN)-based approaches at the classification stage are unknown, and thus determined by trial-and-error experiments. In this study, we analyzed a classifier that evolves ANNs using genetic algorithms (GAs), which combines feature selection with the learning task. The classifier named “Phased Searching with NEAT in a Time-Scaled Framework” was analyzed using a dataset with 800 malignant and 800 normal tissue regions in a 10-fold cross-validation framework. The classification performance measured by the area under a receiver operating characteristic (ROC) curve was 0.856 ± 0.029. The result was also compared with four other well-established classifiers that include fixed-topology ANNs, support vector machines (SVMs), linear discriminant analysis (LDA), and bagged decision trees. The results show that Phased Searching outperformed the LDA and bagged decision tree classifiers, and was only significantly outperformed by SVM. Furthermore, the Phased Searching method required fewer features and discarded superfluous structure or topology, thus incurring a lower feature computational and training and validation time requirement. Analyses performed on the network complexities evolved by Phased Searching indicate that it can evolve optimal network topologies based on its complexification and simplification parameter selection process. From the results, the study also concluded that the three classifiers – SVM, fixed-topology ANN, and Phased Searching with NeuroEvolution of Augmenting Topologies (NEAT) in a Time-Scaled Framework – are performing comparably well in our mammographic mass detection scheme. PMID:25392680
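The area under the ROC curve quoted above can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity; a minimal sketch that ignores tied scores:

```python
import numpy as np

def auc_roc(labels, scores):
    """AUC via the rank-sum identity: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case."""
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(order))
    ranks[order] = np.arange(1, len(order) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc_roc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # perfect separation -> 1.0
```

In a 10-fold cross-validation setting such as the study's, the AUC would be computed per fold and then averaged, with the spread giving the reported ± term.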
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so "data gaps", significant discontinuities, and multiple high outliers can be present. As a result, interpolation based on such data may fail to predict physically reasonable results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that optimal fitting should begin with a log-log scaling transform, which considerably reduces the non-uniformity of the sampled data distribution. The transformed data are then interpolated by the locally monotonicity-preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. The proposed technique gives the most accurate results with short computational time. This simple method is thus practical for problems involving the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
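The Steffen spline and the log-log workflow described above can be sketched as follows (the sample data are hypothetical; the original work presents a C++ implementation, this is a Python sketch of the same Steffen 1990 construction):

```python
import numpy as np

def steffen_interp(x, y, xq):
    """Steffen (1990) monotonicity-preserving cubic interpolation:
    piecewise-cubic, C^1, no spurious oscillations between data points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    s = np.diff(y) / h                      # secant slopes
    d = np.zeros_like(x)                    # limited derivatives at the nodes
    for i in range(1, len(x) - 1):
        p = (s[i-1] * h[i] + s[i] * h[i-1]) / (h[i-1] + h[i])
        d[i] = (np.sign(s[i-1]) + np.sign(s[i])) * min(
            abs(s[i-1]), abs(s[i]), 0.5 * abs(p))
    d[0], d[-1] = s[0], s[-1]               # simple one-sided end conditions
    xq = np.atleast_1d(np.asarray(xq, float))
    out = np.empty_like(xq)
    for j, xv in enumerate(xq):             # cubic Hermite evaluation
        i = max(min(np.searchsorted(x, xv) - 1, len(x) - 2), 0)
        t = xv - x[i]
        a = (d[i] + d[i+1] - 2 * s[i]) / h[i]**2
        b = (3 * s[i] - 2 * d[i] - d[i+1]) / h[i]
        out[j] = y[i] + d[i] * t + b * t**2 + a * t**3
    return out

# Log-log workflow: transform the non-uniformly sampled data, interpolate,
# map back (the sample ELF values below are made up).
x = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])
y = np.array([2.0, 5.0, 4.0, 4.5, 0.5])
xq = np.array([0.3, 3.0, 300.0])
yq = np.exp(steffen_interp(np.log(x), np.log(y), np.log(xq)))
print(yq)
```

The limiter on `d[i]` is what prevents overshoot: wherever the secant slopes change sign, the nodal derivative is forced to zero, so no extremum can appear between grid points.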
Lundin, Jessica I; Dills, Russell L; Ylitalo, Gina M; Hanson, M Bradley; Emmons, Candice K; Schorr, Gregory S; Ahmad, Jacqui; Hempelmann, Jennifer A; Parsons, Kim M; Wasser, Samuel K
2016-01-01
Biologic sample collection in wild cetacean populations is challenging. Most information on toxicant levels is obtained from blubber biopsy samples; however, sample collection is invasive and strictly regulated under permit, thus limiting sample numbers. Methods are needed to monitor toxicant levels that increase temporal and repeat sampling of individuals for population health and recovery models. The objective of this study was to optimize measuring trace levels (parts per billion) of persistent organic pollutants (POPs), namely polychlorinated-biphenyls (PCBs), polybrominated-diphenyl-ethers (PBDEs), dichlorodiphenyltrichloroethanes (DDTs), and hexachlorocyclobenzene, in killer whale scat (fecal) samples. Archival scat samples, initially collected, lyophilized, and extracted with 70 % ethanol for hormone analyses, were used to analyze POP concentrations. The residual pellet was extracted and analyzed using gas chromatography coupled with mass spectrometry. Method detection limits ranged from 11 to 125 ng/g dry weight. The described method is suitable for p,p'-DDE, PCBs-138, 153, 180, and 187, and PBDEs-47 and 100; other POPs were below the limit of detection. We applied this method to 126 scat samples collected from Southern Resident killer whales. Scat samples from 22 adult whales also had known POP concentrations in blubber and demonstrated significant correlations (p < 0.01) between matrices across target analytes. Overall, the scat toxicant measures matched previously reported patterns from blubber samples of decreased levels in reproductive-age females and a decreased p,p'-DDE/∑PCB ratio in J-pod. Measuring toxicants in scat samples provides an unprecedented opportunity to noninvasively evaluate contaminant levels in wild cetacean populations; these data have the prospect to provide meaningful information for vital management decisions.
Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability
2015-07-01
probability in design optimization problems. The buffered failure probability is more conservative and possesses properties that make it more convenient to compute and optimize. Since a failure
Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
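Among the propagation options listed, Cowell's method is the most direct: numerical integration of the equations of motion in Cartesian coordinates. A minimal two-body RK4 sketch (illustrative only, not IPOST's implementation):

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def cowell_propagate(r0, v0, dt, n_steps):
    """Cowell's method: direct RK4 integration of the two-body equations of
    motion; perturbing accelerations would simply be added inside `accel`."""
    def accel(state):
        r, v = state[:3], state[3:]
        return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])
    state = np.concatenate([r0, v0])
    for _ in range(n_steps):
        k1 = accel(state)
        k2 = accel(state + 0.5 * dt * k1)
        k3 = accel(state + 0.5 * dt * k2)
        k4 = accel(state + dt * k3)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[:3], state[3:]

# Circular low-Earth orbit: after one period the state returns to its start.
r0 = np.array([7000.0, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(MU / 7000.0), 0.0])
period = 2 * np.pi * np.sqrt(7000.0 ** 3 / MU)
r1, v1 = cowell_propagate(r0, v0, period / 2000, 2000)
print(np.linalg.norm(r1 - r0))  # closure error in km (small)
```

Encke and Multiconic methods instead integrate only the deviation from a reference conic, trading generality for speed on weakly perturbed arcs.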
Crook, D J; Fierke, M K; Mauromoustakos, A; Kinney, D L; Stephen, F M
2007-06-01
In the Ozark Mountains of northern Arkansas and southern Missouri, an oak decline event, coupled with epidemic populations of red oak borer (Enaphalodes rufulus Haldeman), has resulted in extensive red oak (Quercus spp., section Lobatae) mortality. Twenty-four northern red oak trees, Quercus rubra L., infested with red oak borer, were felled in the Ozark National Forest between March 2002 and June 2003. Infested tree boles were cut into 0.5-m sample bolts, and the following red oak borer population variables were measured: current generation galleries, live red oak borer, emergence holes, and previous generation galleries. Population density estimates from sampling plans using varying numbers of samples taken randomly and systematically were compared with total census measurements for the entire infested tree bole. Systematic sampling consistently yielded lower percent root mean square error (%RMSE) than random sampling. Systematic sampling of one half of the tree (every other 0.5-m sample along the tree bole) yielded the lowest values. Estimates from plans systematically sampling one half the tree and systematic proportional sampling using seven or nine samples did not differ significantly from each other and were within 25% RMSE of the "true" mean. Thus, we recommend systematically removing and dissecting seven 0.5-m samples from infested trees as an optimal sampling plan for monitoring red oak borer within-tree population densities. This optimal sampling plan should allow for collection of acceptably accurate within-tree population density data for this native wood-boring insect and reducing labor and costs of dissecting whole trees.
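The random-versus-systematic comparison can be sketched on synthetic census data; the counts and along-bole trend below are stand-ins for the dissected-tree data, not the field measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical censused tree: borer counts in 24 half-metre bolts, with
# density trending along the bole (a stand-in for the dissection data).
bolts = rng.poisson(5 + 3 * np.sin(np.linspace(0, 3, 24))).astype(float)
true_mean = bolts.mean()

def pct_rmse(estimates):
    """Percent root mean square error of estimates against the census mean."""
    est = np.asarray(estimates)
    return 100 * np.sqrt(np.mean((est - true_mean) ** 2)) / true_mean

n = 7                                     # bolts sampled per tree
random_est = [rng.choice(bolts, n, replace=False).mean() for _ in range(5000)]
k = len(bolts) // n                       # systematic: every k-th bolt
sys_est = [bolts[start::k][:n].mean() for start in range(k)]
print(pct_rmse(random_est), pct_rmse(sys_est))
```

Because the systematic design spreads the seven bolts along the whole bole, it captures the along-bole trend that random draws can miss, which is the mechanism behind the lower %RMSE reported above.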
Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek
2016-03-01
Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) in providing a more uniform coverage for a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device to validate the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus to improve them.
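A minimal sketch of plain Latin-Hypercube Sampling (the paper's modified variant is not detailed here): each of the n samples occupies a distinct equal-probability stratum in every dimension, which is what gives LHS its more uniform coverage than plain Monte Carlo at the same sample count:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Plain LHS: stratify [0, 1) into n_samples bins per dimension, place one
    point per bin, then permute each column independently."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

rng = np.random.default_rng(2)
pts = latin_hypercube(10, 3, rng)
print(np.sort(np.floor(pts * 10).astype(int), axis=0)[:, 0])  # strata 0..9, once each
```

In the paper's setting the unit hypercube would be mapped onto flux bounds at the FBA optimum, after which PCA over the accepted samples reveals the dominant alternative-pathway patterns.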
Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...
Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula
2015-01-01
Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2015-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
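The dominance of lateral/elevational error over axial error can be illustrated with a Monte Carlo estimate; the spherical-tumor assumption, the single-core hit criterion, and the error splits below are illustrative only, not the study's irregular-tumor model:

```python
import numpy as np

rng = np.random.default_rng(3)

def hit_probability(tumor_radius_mm, sigma_mm, n_trials=200_000):
    """P(needle tip lands inside a spherical tumor) under zero-mean Gaussian
    placement error with per-axis standard deviations sigma_mm."""
    err = rng.normal(scale=sigma_mm, size=(n_trials, 3))
    return np.mean(np.sum(err**2, axis=1) <= tumor_radius_mm**2)

# Two systems with the same 3.5 mm total RMS error, split differently:
iso = hit_probability(5.0, [2.02, 2.02, 2.02])   # evenly split over the axes
aniso = hit_probability(5.0, [2.45, 2.45, 0.5])  # lateral/elevational-dominated
print(iso, aniso)
```

Repeating the estimate across tumor volumes yields the kind of core-count recommendation reported above: once the hit probability for one core drops below the desired level, additional cores are required.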
Pooler, P.S.; Smith, D.R.
2005-01-01
We compared the ability of simple random sampling (SRS) and a variety of systematic sampling (SYS) designs to estimate abundance, quantify spatial clustering, and predict spatial distribution of freshwater mussels. Sampling simulations were conducted using data obtained from a census of freshwater mussels in a 40 × 33 m section of the Cacapon River near Capon Bridge, West Virginia, and from a simulated spatially random population generated to have the same abundance as the real population. Sampling units that were 0.25 m² gave more accurate and precise abundance estimates and generally better spatial predictions than 1-m² sampling units. Systematic sampling with ≥2 random starts was more efficient than SRS. Estimates of abundance based on SYS were more accurate when the distance between sampling units across the stream was less than or equal to the distance between sampling units along the stream. Three measures for quantifying spatial clustering were examined: the Hopkins Statistic, the Clumping Index, and Morisita's Index. Morisita's Index was the most reliable, and the Hopkins Statistic was prone to false rejection of complete spatial randomness. SYS designs with units spaced equally across and upstream provided the most accurate predictions when estimating the spatial distribution by kriging. Our research indicates that SYS designs with sampling units equally spaced both across and along the stream would be appropriate for sampling freshwater mussels even if no information about the true underlying spatial distribution of the population were available to guide the design choice. © 2005 by The North American Benthological Society.
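Morisita's Index, judged the most reliable of the three clustering measures, is simple to compute from quadrat counts; values near 1 indicate spatial randomness, above 1 clustering, below 1 uniformity:

```python
import numpy as np

def morisita_index(counts):
    """Morisita's Index of dispersion: I = n * sum(c*(c-1)) / (N*(N-1)),
    for n quadrats with counts c summing to N."""
    c = np.asarray(counts, float)
    n, total = len(c), c.sum()
    return n * np.sum(c * (c - 1)) / (total * (total - 1))

print(morisita_index([10, 0, 0, 10]))  # clustered: index > 1
print(morisita_index([5, 5, 5, 5]))    # uniform: index < 1
```

Unlike the Hopkins Statistic, the index depends only on per-quadrat counts, not on distances between points, which is part of why it behaves more robustly under the sampling designs compared here.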
Tan, A A; Azman, S N; Abdul Rani, N R; Kua, B C; Sasidharan, S; Kiew, L V; Othman, N; Noordin, R; Chen, Y
2011-12-01
There is a great diversity of protein sample types and origins, so the optimal procedure for each sample type must be determined empirically. To obtain a reproducible and complete sample presentation that displays as many proteins as possible on the 2DE gel, it is critical to perform additional sample preparation steps that improve the quality of the final results without selectively losing proteins. To address this, we developed a general method suitable for diverse sample types based on the phenol-chloroform extraction method (represented by TRI reagent). This method yielded good results when used to analyze a human breast cancer cell line (MCF-7), Vibrio cholerae, Cryptocaryon irritans cysts and liver abscess fat tissue, representing cell line, bacteria, parasite cyst and pus, respectively. For each sample type, several attempts were made to methodically compare protein isolation methods using the TRI-reagent Kit, EasyBlue Kit, PRO-PREP™ Protein Extraction Solution and lysis buffer. The most useful protocol allows the extraction and separation of a wide diversity of protein samples and is reproducible among repeated experiments. Our results demonstrated that the modified TRI-reagent Kit had the highest protein yield as well as the greatest total protein spot count for all sample types. Distinctive differences in spot patterns were also observed in the 2DE gels of the different extraction methods used for each sample type.
NASA Astrophysics Data System (ADS)
Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.
2013-12-01
Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used for an OSI scenario, on-site sampling conditions, required sampling volumes, and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF6, Xe-127, and Ar-37 was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. The effectiveness of the various sampling approaches and the results of the tracer gas measurements will be presented.
Technology Transfer Automated Retrieval System (TEKTRAN)
The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
Shaw, Milton Sam; Coe, Joshua D; Sewell, Thomas D
2009-01-01
An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The 'full' system of interest is calculated using density functional theory (DFT) with a 6-31G* basis set for the configurational energies. The 'reference' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
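The nested trial-move construction can be sketched on a 1D toy problem; the harmonic reference and quartic-corrected full potential below are stand-ins for the model potential and the DFT energies, and the ensemble is simplified to canonical rather than NPT:

```python
import numpy as np

rng = np.random.default_rng(4)

# 1D toy version of nested Markov chain Monte Carlo: the cheap "reference"
# potential is harmonic; the expensive "full" potential adds a quartic term.
beta = 1.0
E_ref = lambda x: 0.5 * x**2
E_full = lambda x: 0.5 * x**2 + 0.1 * x**4

def nested_step(x, n_inner=20, dx=0.8):
    y = x
    for _ in range(n_inner):                  # Metropolis chain in the reference
        y_new = y + rng.uniform(-dx, dx)
        if rng.random() < np.exp(-beta * (E_ref(y_new) - E_ref(y))):
            y = y_new
    # Accept the whole trial move using only the endpoint full-minus-reference
    # energy change, so the outer chain samples the full distribution.
    dE = (E_full(y) - E_ref(y)) - (E_full(x) - E_ref(x))
    return y if rng.random() < np.exp(-beta * dE) else x

x, samples = 0.0, []
for _ in range(10_000):
    x = nested_step(x)
    samples.append(x)
print(np.var(samples))  # narrower than the reference variance of 1.0
```

The efficiency comes from the acceptance test: the expensive (here, quartic; in the paper, DFT) energy is evaluated only at the endpoints of each long reference-system excursion, never at the inner steps.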
NASA Astrophysics Data System (ADS)
Tang, Gao; Jiang, FanHuag; Li, JunFeng
2015-11-01
Near-Earth asteroids have attracted considerable interest, and developments in low-thrust propulsion technology make complex deep-space exploration missions possible. A mission from low-Earth orbit using a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid and bring a sample back is investigated. By dividing the mission into five segments, the complex mission is solved segment by segment, with different methods used to find optimal trajectories for each. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumed escaping from the Earth. To avoid possible numerical difficulties of indirect methods, a direct method that parameterizes the switching moment and direction of the thrust vector is proposed. To maximize the mass of the sample, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques including both direct and indirect methods are investigated to optimize trajectories for the different segments, and they can be easily extended to other missions and more precise dynamic models.
Selecting registration schemes in case of interstitial lung disease follow-up in CT
Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra; Kalogeropoulou, Christina; Pratikakis, Ioannis; Costaridou, Lena
2015-08-15
Purpose: The primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes is evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD-affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine; one nonrigid: third-order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, the evaluation methodology is based on the distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD-affected regions. Statistical analysis was performed in order to select near-optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near-optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the
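The first-stage check, rejecting schemes whose deformation fields fold, can be sketched as a determinant-of-Jacobian test on a discretized displacement field (a generic implementation; voxel spacing and field layout are assumptions, not the study's exact pipeline):

```python
import numpy as np

def has_singularities(def_field, spacing=1.0):
    """True if the mapping x -> x + u(x) folds anywhere, i.e. the determinant
    of its Jacobian is non-positive at some voxel. def_field has shape
    (X, Y, Z, 3) holding the displacement vector u at each voxel."""
    grads = np.stack(
        [np.stack(np.gradient(def_field[..., k], spacing), axis=-1)
         for k in range(3)], axis=-2)           # (X, Y, Z, 3, 3): du_k/dx_j
    jac = np.eye(3) + grads                     # Jacobian of x + u(x)
    return bool(np.any(np.linalg.det(jac) <= 0))

identity = np.zeros((4, 4, 4, 3))
print(has_singularities(identity))  # identity mapping never folds -> False
```

A scheme whose optimized field fails this test produces a physically impossible (self-intersecting) warp, so it is discarded before the landmark-distance evaluation.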
Cardellicchio, Nicola; Di Leo, Antonella; Giandomenico, Santina; Santoro, Stefania
2006-01-01
Optimization of acid digestion method for mercury determination in marine biological samples (dolphin liver, fish and mussel tissues) using a closed vessel microwave sample preparation is presented. Five digestion procedures with different acid mixtures were investigated: the best results were obtained when the microwave-assisted digestion was based on sample dissolution with HNO3-H2SO4-K2Cr2O7 mixture. A comparison between microwave digestion and conventional reflux digestion shows there are considerable losses of mercury in the open digestion system. The microwave digestion method has been tested satisfactorily using two certified reference materials. Analytical results show a good agreement with certified values. The microwave digestion proved to be a reliable and rapid method for decomposition of biological samples in mercury determination.
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Whereas many existing studies use only acidic extraction, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more pronounced for polar compounds than for non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples from a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
Aubier, Thomas G; Sherratt, Thomas N
2015-11-01
The convergent evolution of warning signals in unpalatable species, known as Müllerian mimicry, has been observed in a wide variety of taxonomic groups. This form of mimicry is generally thought to have arisen as a consequence of local frequency-dependent selection imposed by sampling predators. However, despite clear evidence for local selection against rare warning signals, there appears to be an almost embarrassing amount of polymorphism in natural warning colors, both within and among populations. Because the model of predator cognition widely invoked to explain Müllerian mimicry (Müller's "fixed n(k)" model) is highly simplified and has not been empirically supported, here we explore the dynamical consequences of the optimal strategy for sampling unfamiliar prey. This strategy, based on a classical exploration-exploitation trade-off, not only allows for a variable number of prey sampled, but also accounts for predator neophobia under some conditions. In contrast to Müller's "fixed n(k)" sampling rule, the optimal sampling strategy is capable of generating a variety of dynamical outcomes, including mimicry but also regional and local polymorphism. Moreover, the heterogeneity of predator behavior across space and time that a more nuanced foraging strategy allows can further facilitate the emergence of both local and regional polymorphism in prey warning color.
Sorzano, Carlos Oscar S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar
2015-06-01
Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (a cost constraint). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be adapted to a particular patient, can accommodate any dosing regimen, and allows flexible therapeutic strategies.
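As a sketch of this minimax idea, the snippet below optimizes four sampling times for a one-compartment IV-bolus model by minimizing the largest parameter variance obtained from the Fisher information matrix. The model, all parameter values, and the use of SciPy's differential evolution (a genetic-style global optimizer standing in for the paper's genetic algorithm) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import differential_evolution

# One-compartment IV bolus model C(t) = (D/V) * exp(-k t); V and k are the
# parameters to identify (model and numeric values are assumptions).
D, V, k, sigma = 100.0, 20.0, 0.3, 0.1

def fisher_info(times):
    t = np.asarray(times, float)
    dC_dV = -(D / V**2) * np.exp(-k * t)       # sensitivity w.r.t. V
    dC_dk = -(D / V) * t * np.exp(-k * t)      # sensitivity w.r.t. k
    S = np.column_stack([dC_dV, dC_dk])
    return S.T @ S / sigma**2                  # FIM for i.i.d. Gaussian noise

def worst_uncertainty(times):
    F = fisher_info(times)
    if np.linalg.cond(F) > 1e12:
        return 1e12                            # near-singular design: penalize
    return np.diag(np.linalg.inv(F)).max()     # minimax criterion

res = differential_evolution(worst_uncertainty, bounds=[(0.1, 12.0)] * 4,
                             seed=0, tol=1e-8, maxiter=300)
best_times = np.sort(res.x)                    # optimized sampling schedule (h)
```

In a population setting, the Fisher matrix would be averaged over Monte Carlo draws of (V, k) rather than evaluated at point estimates, as the abstract describes.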
Siqueira, Glécio Machado; Dafonte, Jorge Dafonte; Bueno Lema, Javier; Valcárcel Armesto, Montserrat; Silva, Ênio Farias França e
2014-01-01
This study presents a combined application of an EM38DD for assessing soil apparent electrical conductivity (ECa) and a dual-sensor vertical penetrometer Veris-3000 for measuring soil electrical conductivity (ECveris) and soil resistance to penetration (PR). The measurements were made at a 6 ha field cropped with forage maize under no-tillage after sowing and located in Northwestern Spain. The objective was to use data from ECa for improving the estimation of soil PR. First, data of ECa were used to determine the optimized sampling scheme of the soil PR in 40 points. Then, correlation analysis showed a significant negative relationship between soil PR and ECa, ranging from −0.36 to −0.70 for the studied soil layers. The spatial dependence of soil PR was best described by spherical models in most soil layers. However, below 0.50 m the spatial pattern of soil PR showed pure nugget effect, which could be due to the limited number of PR data used in these layers as the values of this parameter often were above the range measured by our equipment (5.5 MPa). The use of ECa as secondary variable slightly improved the estimation of PR by universal cokriging, when compared with kriging.
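The kriging and cokriging estimates above rest on a fitted variogram model such as the spherical models reported. A minimal ordinary-kriging sketch with a spherical variogram might look as follows; the nugget, sill, range, coordinates, and data values are illustrative assumptions, and cokriging with ECa as a secondary variable would extend this with cross-variograms:

```python
import numpy as np

def spherical(h, nugget=0.1, sill=1.0, rng_=2.0):
    # spherical variogram: gamma(0) = 0, rising to the sill at the range
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h == 0, 0.0, np.where(h >= rng_, sill, g))

def ordinary_kriging(xy, z, x0):
    # solve the ordinary-kriging system with a Lagrange-multiplier row
    n = len(z)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = spherical(np.linalg.norm(xy[:, None] - xy[None, :], axis=-1))
    A[n, :] = A[:, n] = 1.0
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = spherical(np.linalg.norm(xy - x0, axis=1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)
    return w[:n] @ z, w @ b            # estimate and kriging variance

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([3.1, 4.0, 3.5, 4.4])     # e.g. penetration resistance (MPa)
est, kvar = ordinary_kriging(xy, z, np.array([0.4, 0.6]))
```

The Lagrange row constrains the weights to sum to one, which makes the estimator unbiased regardless of the (unknown) mean.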
O'Connell, Steven G; McCartney, Melissa A; Paulik, L Blair; Allan, Sarah E; Tidwell, Lane G; Wilson, Glenn; Anderson, Kim A
2014-10-01
Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be absorbed more efficiently by silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported, along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the best-performing silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties with which to compare polymers (log Kow 0.2-5.3), and are transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring.
OPTIMIZING MINIRHIZOTRON SAMPLE FREQUENCY FOR ESTIMATING FINE ROOT PRODUCTION AND TURNOVER
The most frequent reason for using minirhizotrons in natural ecosystems is the determination of fine root production and turnover. Our objective is to determine the optimum sampling frequency for estimating fine root production and turnover using data from evergreen (Pseudotsuga ...
Optimization of sample preparation for accurate results in quantitative NMR spectroscopy
NASA Astrophysics Data System (ADS)
Yamazaki, Taichi; Nakamura, Satoe; Saito, Takeshi
2017-04-01
Quantitative nuclear magnetic resonance (qNMR) spectroscopy is highly regarded as a measurement tool because it does not require a reference standard of the same compound as the analyte. Measurement parameters have been discussed in detail, and high-resolution balances have been used for sample preparation. However, high-resolution balances, such as an ultra-microbalance, are not general-purpose analytical tools, and many analysts may find them difficult to use, thereby hindering accurate sample preparation for qNMR measurement. In this study, we examined the relationship between the resolution of the balance and the amount of sample weighed during sample preparation. We were able to confirm the accuracy of the assay results for samples weighed on a high-resolution balance, such as the ultra-microbalance. Furthermore, when an appropriate tare and amount of sample were weighed on a given balance, accurate assay results were obtained with another high-resolution balance. Although this is a fundamental result, it offers important evidence that would enhance the versatility of the qNMR method.
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
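The iterative "most dissimilar site" selection can be approximated without the full MaxEnt machinery. The sketch below uses greedy farthest-point selection in standardized environmental-factor space as a simple stand-in for the paper's procedure; the synthetic candidate grid and four-factor setup are assumptions, not NEON data:

```python
import numpy as np

rng = np.random.default_rng(1)
# 500 candidate sites x 4 environmental factors (synthetic stand-in data)
env = rng.normal(size=(500, 4))
envz = (env - env.mean(axis=0)) / env.std(axis=0)   # standardize each factor

def select_dissimilar(envz, k):
    # greedy farthest-point selection: each new site is the candidate most
    # dissimilar (largest minimum distance) from the sites already chosen
    chosen = [0]
    dmin = np.linalg.norm(envz - envz[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dmin.argmax())
        chosen.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(envz - envz[nxt], axis=1))
    return chosen

sites = select_dissimilar(envz, 8)   # eight sites, as in the study
```

Each selected site maximizes its minimum environmental distance to the sites already chosen, so the set spreads across the environmental envelope rather than across geographic space.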
Prévot, V; Tweepenninckx, F; Van Nerom, E; Linden, A; Content, J; Kimpe, A
2007-01-01
Botulism is a rare but serious paralytic illness caused by a nerve toxin that is produced by the bacterium Clostridium botulinum. The economic, medical and alimentary consequences can be catastrophic in case of an epizooty. A polymerase chain reaction (PCR)-based assay was developed for the detection of C. botulinum toxigenic strains type C and D in bovine samples. This assay has proved to be less expensive, faster and simpler to use than the mouse bioassay, the current reference method for diagnosis of C. botulinum toxigenic strains. Three pairs of primers were designed, one for global detection of C. botulinum types C and D (primer pair Y), and two strain-specific pairs specifically designed for types C (primer pair VC) and D (primer pair VD). The PCR amplification conditions were optimized and evaluated on 13 bovine and two duck samples that had been previously tested by the mouse bioassay. In order to assess the impact of sample treatment, both DNA extracted from crude samples and three different enrichment broths (TYG, CMM, CMM followed by TYG) were tested. A 100% sensitivity was observed when samples were enriched for 5 days in CMM followed by 1 day in TYG broth. False-negative results were encountered when C. botulinum was screened for in crude samples. These findings indicate that the current PCR is a reliable method for the detection of C. botulinum toxigenic strains type C and D in bovine samples but only after proper enrichment in CMM and TYG broth.
Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system
NASA Astrophysics Data System (ADS)
He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang
2016-08-01
In the 6th edition of the Chinese Space Trajectory Design Competition, held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parked in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences reported a solution with an asteroid sample mass of 328 tons, which ranked first in the competition. In this article, we present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.
Dynamically optimized Wang-Landau sampling with adaptive trial moves and modification factors.
Koh, Yang Wei; Lee, Hwee Kuan; Okabe, Yutaka
2013-11-01
The density of states of continuous models is known to span many orders of magnitude at different energies due to the small volume of phase space near the ground state. Consequently, traditional Wang-Landau sampling, which uses the same trial move at all energies, faces difficulties sampling the low-entropy states. We developed an adaptive variant of the Wang-Landau algorithm that very effectively samples the density of states of continuous models across the entire energy range. By extending the acceptance ratio method of Bouzida, Kumar, and Swendsen such that the step size of the trial move and the acceptance rate are adapted in an energy-dependent fashion, the random walker efficiently adapts its sampling according to the local phase space structure. The Wang-Landau modification factor is also made energy dependent in accordance with the step size, enhancing the accumulation of the density of states. Numerical simulations show that our proposed method performs much better than traditional Wang-Landau sampling.
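A minimal Wang-Landau loop with energy-dependent trial-move sizes can illustrate the idea. This is a generic sketch on a toy 1D model with E(x) = x^2, not the authors' algorithm; the Bouzida-style adaptation rule, bin count, and flatness threshold are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):                     # toy continuous model
    return x * x

E_LO, E_HI, NBINS = 0.0, 4.0, 10
def ebin(E):
    return min(int((E - E_LO) / (E_HI - E_LO) * NBINS), NBINS - 1)

lng = np.zeros(NBINS)              # running estimate of ln g(E)
step = np.full(NBINS, 0.5)         # per-bin (energy-dependent) step sizes
lnf = 1.0                          # Wang-Landau modification factor, ln f
x, E = 0.5, energy(0.5)
b = ebin(E)

for _ in range(150):
    hist = np.zeros(NBINS)
    acc = np.zeros(NBINS)
    tries = np.zeros(NBINS)
    for _ in range(4000):
        tries[b] += 1
        xn = x + rng.uniform(-step[b], step[b])
        En = energy(xn)
        if E_LO <= En < E_HI:
            bn = ebin(En)
            if lng[bn] <= lng[b] or rng.random() < np.exp(lng[b] - lng[bn]):
                acc[b] += 1
                x, E, b = xn, En, bn
        lng[b] += lnf              # penalize the currently occupied bin
        hist[b] += 1
    # adapt each bin's step size toward ~50% acceptance (Bouzida-style)
    rate = np.where(tries > 0, acc / np.maximum(tries, 1.0), 0.5)
    step = np.clip(step * np.clip(2.0 * rate, 0.5, 2.0), 1e-3, 2.0)
    if hist.min() > 0.6 * hist.mean():   # flat-histogram check
        lnf *= 0.5                        # refine the modification factor
    if lnf < 1e-2:
        break
```

For E(x) = x^2 in one dimension the true density of states decreases with energy (g proportional to E^(-1/2)), so the estimated lng should be largest in the lowest-energy bin.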
Optimized Design and Analysis of Sparse-Sampling fMRI Experiments
Perrachione, Tyler K.; Ghosh, Satrajit S.
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
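The first of these recommendations, an HRF-convolved model basis, can be sketched for a simulated sparse acquisition. Everything below is an illustrative assumption rather than the authors' protocol: the double-gamma HRF shape, a 5 s stimulation period, and a 2 s acquisition plus 6 s silent delay giving an effective TR of 8 s:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    # canonical-style double-gamma hemodynamic response (parameters assumed)
    return gamma.pdf(t, 6.0) - gamma.pdf(t, 16.0) / 6.0

dt = 0.1
t = np.arange(0.0, 480.0, dt)          # fine-grained simulation timeline (s)
stim = np.zeros_like(t)
stim[(t % 5.0) < dt] = 1.0             # one stimulus event every 5 s
neural = np.convolve(stim, hrf(np.arange(0.0, 32.0, dt)))[: t.size] * dt

# sparse sampling: 2 s acquisition + 6 s silent delay -> effective TR = 8 s
tr = 8.0
acq = np.arange(0.0, t[-1], tr)
X = np.column_stack([neural[(acq / dt).astype(int)], np.ones(acq.size)])

# noiseless simulated voxel with true effect 2.0 over baseline 10.0;
# the HRF-convolved regressor recovers both exactly by least squares
y = 2.0 * X[:, 0] + 10.0
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the stimulation period differs from the TR, successive sparse volumes sample different phases of the hemodynamic response, which is what makes the convolved regressor informative.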
Optimizing Sampling Design to Deal with Mist-Net Avoidance in Amazonian Birds and Bats
Marques, João Tiago; Ramos Pereira, Maria J.; Marques, Tiago A.; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M.
2013-01-01
Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys, doing so could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist-netting sampling strategies in other tropical areas.
Fillers, W Steven
2004-12-01
Modular approaches to sample management allow staged implementation and progressive expansion of libraries within existing laboratory space. A completely integrated, inert-atmosphere system for the storage and processing of a variety of microplate and microtube formats is currently available as an integrated series of individual modules. Liquid handling for reformatting and replication into microplates, plus high-capacity cherry picking, can be performed within the inert environmental envelope to maximize compound integrity. Complete process automation provides on-demand access to samples and improved process control. Expansion of such a system provides a low-risk tactic for implementing a large-scale storage and processing system.
Trojanowski, S.; Ciszek, M.
2009-10-15
In this paper we present an analytical calculation method for determining the sensitivity of a pulsed-field magnetometer working with a first-order gradiometer. Our considerations are especially focused on the case of magnetic moment measurements of very small samples. The analytical equations derived in this work allow a quick estimation of the magnetometer's sensitivity and also provide a way to calibrate it using the sample-simulating coil method. On the basis of these calculations we designed and constructed a simple homemade magnetometer and performed its sensitivity calibration.
Alizadeh, Taher; Ganjali, Mohammad Reza; Nourozi, Parviz; Zare, Mashaalah
2009-04-13
In this work a parathion-selective molecularly imprinted polymer (MIP) was synthesized and applied as a highly selective adsorbent for parathion extraction and determination in aqueous samples. The method was based on sorption of parathion in the MIP using a simple batch procedure, followed by desorption with methanol and measurement by square wave voltammetry. Plackett-Burman and Box-Behnken designs were used to optimize the solid-phase extraction, in order to enhance the recovery and improve the pre-concentration factor. Using the screening design, the effect of six factors on the extraction recovery was investigated: pH, stirring rate (rpm), sample volume (V(1)), eluent volume (V(2)), organic solvent content of the sample (org%), and extraction time (t). The response surface design was carried out considering the three factors (V(2)), (V(1)), and (org%), which were found to be the main effects. A mathematical model for the recovery was obtained as a function of these main effects, and the main effects were then adjusted according to the defined desirability function. Recoveries of more than 95% could easily be obtained with the optimized method. Using the experimental conditions obtained in the optimization step, the method allowed selective determination of parathion in the linear dynamic range of 0.20-467.4 microg L(-1), with a detection limit of 49.0 ng L(-1) and R.S.D. of 5.7% (n=5). The parathion content of water samples was successfully determined, demonstrating the potential of the developed procedure.
NASA Astrophysics Data System (ADS)
Zawadowicz, M. A.; Del Negro, L. A.
2010-12-01
Hazardous air pollutants (HAPs) are usually present in the atmosphere at the pptv level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money, and time that is often prohibitive for primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of the ambient gaseous matrix, a cost-effective technique for selective VOC extraction that is accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature, and laminar air flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.
Kilambi, Himabindu V.; Manda, Kalyani; Sanivarapu, Hemalatha; Maurya, Vineet K.; Sharma, Rameshwar; Sreelakshmi, Yellamaraju
2016-01-01
An optimized protocol was developed for shotgun proteomics of tomato fruit, which is a recalcitrant tissue due to a high percentage of sugars and secondary metabolites. A number of protein extraction and fractionation techniques were examined for optimal protein extraction from tomato fruits, followed by peptide separation on nanoLCMS. Of all evaluated extraction agents, buffer-saturated phenol was the most efficient. In-gel digestion [SDS-PAGE followed by separation on LCMS (GeLCMS)] of the phenol-extracted sample yielded the maximal number of proteins. For in-solution digested samples, fractionation by strong anion exchange chromatography (SAX) also gave similarly high proteome coverage. For shotgun proteomic profiling, optimization of mass spectrometry parameters such as automatic gain control targets (5E+05 for MS, 1E+04 for MS/MS); ion injection times (500 ms for MS, 100 ms for MS/MS); resolution of 30,000; signal threshold of 500; top-N value of 20; and fragmentation by collision-induced dissociation yielded the highest number of proteins. Validation of the above protocol in two tomato cultivars demonstrated its reproducibility, consistency, and robustness, with a CV of < 10%. The protocol facilitated the detection of a five-fold higher number of proteins compared to published reports on tomato fruits. The protocol outlined would be useful for high-throughput proteome analysis of tomato fruits and can be applied to other recalcitrant tissues.
Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2011-01-01
Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*.
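The cost-to-arrive computation over random samples can be illustrated with a much simpler PRM-style sketch: sample points, connect those within a fixed radius, and run Dijkstra from the start. FMT*'s lazy recursion avoids much of this work, so this is only a stand-in; the obstacle, sample count, and connection radius below are assumptions:

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)

def collision_free(p):
    # toy world: unit square with one disk obstacle at (0.5, 0.5), radius 0.25
    return np.linalg.norm(p - 0.5) > 0.25

n, r = 800, 0.15                           # sample count, connection radius
raw = rng.random((n, 2))
pts = np.vstack([[0.05, 0.05], [0.95, 0.95],
                 raw[[collision_free(p) for p in raw]]])

# Dijkstra over the r-disk graph: cost-to-arrive from the start (index 0)
dist = np.full(len(pts), np.inf)
dist[0] = 0.0
pq = [(0.0, 0)]
while pq:
    d, i = heapq.heappop(pq)
    if d > dist[i]:
        continue
    gap = np.linalg.norm(pts - pts[i], axis=1)
    for j in np.flatnonzero((gap > 0.0) & (gap < r)):
        # crude edge check: test only the midpoint against the obstacle
        if collision_free(0.5 * (pts[i] + pts[j])) and d + gap[j] < dist[j]:
            dist[j] = d + gap[j]
            heapq.heappush(pq, (dist[j], int(j)))

cost_to_goal = dist[1]                     # goal is index 1
```

As the number of samples grows and the radius shrinks appropriately, the graph shortest path approaches the true optimal path length, which is the sense in which such planners are asymptotically optimal.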
Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image
Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.
2014-01-01
It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing.
Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.
2008-12-19
The elucidation of critical functional pathways employed by pathogens and hosts during an infectious cycle is both challenging and central to our understanding of infectious diseases. In recent years, mass spectrometry-based proteomics has been used as a powerful tool to identify key pathogenesis-related proteins and pathways. Despite the analytical power of mass spectrometry-based technologies, samples must be appropriately prepared to characterize the functions of interest (e.g., host response to a pathogen or pathogen response to a host). The preparation of these protein samples requires multiple decisions about what aspect of infection is being studied, and it may require the isolation of host and/or pathogen cellular material.
NASA Technical Reports Server (NTRS)
Hague, D. S.; Merz, A. W.
1976-01-01
Atmospheric sampling has been carried out by flights using an available high-performance supersonic aircraft. Altitude potential of an off-the-shelf F-15 aircraft is examined. It is shown that the standard F-15 has a maximum altitude capability in excess of 100,000 feet for routine flight operation by NASA personnel. This altitude is well in excess of the minimum altitudes which must be achieved for monitoring the possible growth of suspected aerosol contaminants.
Optimal Sampling Efficiency in Monte Carlo Simulation With an Approximate Potential
2009-02-01
Boltzmann sampling of an approximate potential (the "reference" system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the end points of the chain, the energy is evaluated at a more accurate level (the "full" system). In the isothermal-isobaric ensemble, for which the corresponding potential is the Gibbs free energy, the configurational weight is W_i = −β(U_i + P V_i) + N ln V_i.
Optimal sampling efficiency in Monte Carlo simulation with an approximate potential.
Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam
2009-04-28
Building on the work of Iftimie et al. [J. Chem. Phys. 113, 4852 (2000)] and Gelb [J. Chem. Phys. 118, 7747 (2003)], Boltzmann sampling of an approximate potential (the "reference" system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the end points of the chain, the energy is evaluated at a more accurate level (the "full" system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory potentials are discussed.
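The composite-move scheme described above can be sketched as a nested Markov chain: a cheap reference potential drives the inner walk, and the expensive full potential is consulted only at the end points of each inner chain. This is a toy 1D sketch under assumed potentials, not the authors' implementation; function names are illustrative.

```python
import math
import random

def _acc(d_e, beta):
    """Metropolis acceptance probability min(1, exp(-beta * dE))."""
    return math.exp(min(0.0, -beta * d_e))

def nested_metropolis(u_full, u_ref, x0, beta=1.0, n_outer=2000,
                      n_inner=20, step=0.5, seed=7):
    """Composite-move sampling: the inner walk is driven only by the cheap
    reference potential; the expensive full potential is evaluated at the
    end points, and the whole block is accepted via a modified Metropolis
    test on the full/reference mismatch."""
    random.seed(seed)
    x = x0
    samples = []
    for _ in range(n_outer):
        y = x
        for _ in range(n_inner):  # inner chain: reference potential only
            z = y + random.uniform(-step, step)
            if random.random() < _acc(u_ref(z) - u_ref(y), beta):
                y = z
        # composite move: correct for the mismatch between the potentials
        d = (u_full(y) - u_full(x)) - (u_ref(y) - u_ref(x))
        if random.random() < _acc(d, beta):
            x = y
        samples.append(x)
    return samples
```

When the reference potential equals the full one, the correction term vanishes and every composite move is accepted, recovering plain Metropolis sampling of the target.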
Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.
2010-01-01
Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998
Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland
NASA Astrophysics Data System (ADS)
Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan
2014-05-01
Soil moisture (SM) exhibits a high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil, and vegetation characteristics. This large variability does not allow reliable estimation of SM in the surface layer from ground point measurements, especially at large spatial scales. Remote sensing measurements estimate the spatial distribution of surface-layer SM better than point measurements; however, they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, numbers, and distributions of measuring points at the scales of an arable field, a wetland, and a commune (areas of 0.01, 1, and 140 km2, respectively), under different SM conditions. Mean SM values were only slightly sensitive to changes in the number and arrangement of sampling points, whereas the parameters describing dispersion responded more strongly. Spatial analysis showed autocorrelation of SM, whose range depended on the number and distribution of points within the adopted grids. Directional analysis revealed a differentiated anisotropy of SM for different grids and numbers of measuring points. Both the number of samples and their layout over the experimental area were therefore reflected in the parameters characterizing the SM distribution. This suggests the need for at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points each. This is due to the standard error and the range of spatial variability, which show little change as the number of samples increases above this figure. Gravimetric method
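The diminishing return from extra sampling points noted above follows from the 1/sqrt(n) decay of the standard error of the mean, shown in this minimal sketch (a simplification that ignores the spatial autocorrelation the study measures):

```python
import math

def standard_error(sd, n):
    """Standard error of the mean for n independent point samples."""
    return sd / math.sqrt(n)

# The gain from adding samples flattens quickly: going from 10 to 20
# points removes more error than going from 20 to 40 points.
gains = {n: standard_error(1.0, n) for n in (5, 10, 20, 40, 80)}
```

With spatially correlated samples the effective n is smaller still, which only strengthens the case that extra points beyond a modest threshold buy little precision.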
Vinks, A A; Mouton, J W; Touw, D J; Heijerman, H G; Danhof, M; Bakker, W
1996-01-01
Postinfusion data obtained from 17 patients with cystic fibrosis participating in two clinical trials were used to develop population models for ceftazidime pharmacokinetics during continuous infusion. Determinant (D)-optimal sampling strategy (OSS) was used to evaluate the benefits of merging four maximally informative sampling times with population modeling. Full and sparse D-optimal sampling data sets were analyzed with the nonparametric expectation maximization (NPEM) algorithm and compared with the model obtained by the traditional standard two-stage approach. Individual pharmacokinetic parameter estimates were calculated by weighted nonlinear least-squares regression and by maximum a posteriori probability Bayesian estimator. Individual parameter estimates obtained with four D-optimally timed serum samples (OSS4) showed excellent correlation with parameter estimates obtained by using full data sets. The parameters of interest, clearance and volume of distribution, showed excellent agreement (R2 = 0.89 and R2 = 0.86). The ceftazidime population models were described as two-compartment k-slope models, relating elimination constants to renal function. The NPEM-OSS4 model was described by the equations kel = 0.06516 + (0.00708 × CLCR) and V1 = 0.1773 ± 0.0406 liter/kg, where CLCR is creatinine clearance in milliliters per minute per 1.73 m2, V1 is the volume of distribution of the central compartment, and kel is the elimination rate constant. Predictive performance evaluation for 31 patients with data which were not part of the model data sets showed that the NPEM-ALL model performed best, with significantly better precision than that of the standard two-stage model (P < 0.001). Predictions with the NPEM-OSS4 model were as precise as those with the NPEM-ALL model but slightly biased (-2.2 mg/liter; P < 0.01). D-optimal monitoring strategies coupled with population modeling result in useful and cost-effective population models and will be of advantage in clinical
NASA Astrophysics Data System (ADS)
Sasaki, T.; Yoshida, N.; Takahashi, M.; Tomita, M.
2008-12-01
In order to determine an appropriate incident angle of a low-energy (350-eV) oxygen ion beam for achieving the highest sputtering rate without degrading depth resolution in SIMS analysis, a delta-doped sample was analyzed at incident angles from 0° to 60° without oxygen bleeding. As a result, 45° incidence was found to be the best analytical condition, and AFM confirmed that no surface roughness developed on the sputtered surface at 100-nm depth. By applying the optimized incident angle, the sputtering rate becomes more than twice that of the normal-incidence condition.
Csikai, J; Dóczi, R
2009-01-01
The advantages and limitations of epithermal neutrons in the qualification of hydrocarbons via their H contents and C/H atomic ratios have been investigated systematically. The sensitivity of this method and the dimensions of the interrogated regions were determined for various types of hydrogenous samples. The results clearly demonstrate the advantages of direct neutron detection, e.g. by BF(3) counters, as compared with the foil activation method, as well as of the harder spectral shape of Pu-Be neutrons relative to that of a (252)Cf source.
Optimization of clamped beam geometry for fracture toughness testing of micron-scale samples
NASA Astrophysics Data System (ADS)
Nagamani Jaya, B.; Bhowmick, Sanjit; Syed Asif, S. A.; Warren, Oden L.; Jayaram, Vikram
2015-06-01
Fracture toughness measurements at the small scale have gained prominence over the years due to the continuing miniaturization of structural systems. Measurements carried out on bulk materials cannot be extrapolated to smaller length scales, either due to the complexity of the microstructure or due to size and geometry effects. Many new geometries have been proposed for fracture property measurements at small length scales, depending on the material behaviour and the type of device used in service. In situ testing provides the necessary environment to observe fracture at these length scales so as to determine the actual failure mechanism in these systems. In this paper, several improvements are incorporated into a previously proposed geometry of bending a doubly clamped beam for fracture toughness measurements. Both monotonic and cyclic loading conditions have been imposed on the beam to study R-curve and fatigue effects. In addition to the advantages that in situ SEM-based testing offers in such tests, FEM has been used as a simulation tool to optimize the geometry, replacing cumbersome and expensive experiments. A description of all the improvements made to this specific clamped beam bending geometry to enable a variety of fracture property measurements is given in this paper.
Optimal media for use in air sampling to detect cultivable bacteria and fungi in the pharmacy.
Weissfeld, Alice S; Joseph, Riya Augustin; Le, Theresa V; Trevino, Ernest A; Schaeffer, M Frances; Vance, Paula H
2013-10-01
Current guidelines for air sampling for bacteria and fungi in compounding pharmacies require the use of a medium for each type of organism. U.S. Pharmacopeia (USP) chapter <797> (http://www.pbm.va.gov/linksotherresources/docs/USP797PharmaceuticalCompoundingSterileCompounding.pdf) calls for tryptic soy agar with polysorbate and lecithin (TSApl) for bacteria and malt extract agar (MEA) for fungi. In contrast, the Controlled Environment Testing Association (CETA), the professional organization for individuals who certify hoods and clean rooms, states in its 2012 certification application guide (http://www.cetainternational.org/reference/CAG-009v3.pdf?sid=1267) that a single-plate method is acceptable, implying that it is not always necessary to use an additional medium specifically for fungi. In this study, we reviewed 5.5 years of data from our laboratory to determine the utility of TSApl versus yeast malt extract agar (YMEA) for the isolation of fungi. Our findings, from 2,073 air samples obtained from compounding pharmacies, demonstrated that the YMEA yielded >2.5 times more fungal isolates than TSApl.
Minică, Camelia C; Genovese, Giulio; Hultman, Christina M; Pool, René; Vink, Jacqueline M; Neale, Michael C; Dolan, Conor V; Neale, Benjamin M
2017-04-01
Sequence-based association studies are at a critical inflexion point with the increasing availability of exome-sequencing data. A popular test of association is the sequence kernel association test (SKAT). Weights are embedded within SKAT to reflect the hypothesized contribution of the variants to the trait variance. Because the true weights are generally unknown, and so are subject to misspecification, we examined the efficiency of a data-driven weighting scheme. We propose the use of a set of theoretically defensible weighting schemes, of which, we assume, the one that gives the largest test statistic is likely to capture best the allele frequency-functional effect relationship. We show that the use of alternative weights obviates the need to impose arbitrary frequency thresholds. As both the score test and the likelihood ratio test (LRT) may be used in this context, and may differ in power, we characterize the behavior of both tests. The two tests have equal power if the set includes weights resembling the correct ones. However, if the weights are badly specified, the LRT shows superior power (due to its robustness to misspecification). With this data-driven weighting procedure the LRT detected significant signal in genes located in regions already confirmed as associated with schizophrenia - the PRRC2A (p = 1.020e-06) and the VARS2 (p = 2.383e-06) - in the Swedish schizophrenia case-control cohort of 11,040 individuals with exome-sequencing data. The score test is currently preferred for its computational efficiency and power. Indeed, assuming correct specification, in some circumstances the score test is the most powerful test. However, the LRT has the advantageous properties of being generally more robust and more powerful under weight misspecification. This is an important result given that, arguably, misspecified models are likely to be the rule rather than the exception in weighting-based approaches.
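The data-driven weighting idea, picking whichever defensible weighting scheme maximizes the test statistic, can be sketched with a burden-style statistic and Beta(MAF) weights. This is a simplified stand-in for the SKAT statistic; the function names and the toy statistic are illustrative, not the authors' procedure.

```python
import math

def beta_weight(maf, a, b):
    """Beta(a, b) density evaluated at the minor allele frequency,
    the usual SKAT-style variant weight."""
    norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return maf ** (a - 1) * (1 - maf) ** (b - 1) / norm

def max_weighted_stat(genotypes, phenotype,
                      schemes=((1, 25), (1, 1), (0.5, 0.5))):
    """Burden-style score statistic under several Beta weighting schemes;
    return the largest, mimicking the data-driven weight selection."""
    n = len(phenotype)
    ybar = sum(phenotype) / n
    mafs = [sum(col) / (2 * n) for col in zip(*genotypes)]
    best = -1.0
    for a, b in schemes:
        w = [beta_weight(m, a, b) if 0 < m < 1 else 0.0 for m in mafs]
        score = sum(w[j] * sum(g[j] * (y - ybar)
                               for g, y in zip(genotypes, phenotype))
                    for j in range(len(mafs)))
        best = max(best, score * score)
    return best
```

The Beta(1, 25) scheme upweights rare variants sharply, Beta(1, 1) weights all variants equally, and Beta(0.5, 0.5) sits in between; maximizing over the set sidesteps committing to any single frequency threshold.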
Design Of A Sorbent/desorbent Unit For Sample Pre-treatment Optimized For QMB Gas Sensors
Pennazza, G.; Cristina, S.; Santonico, M.; Martinelli, E.; Di Natale, C.; D'Amico, A.; Paolesse, R.
2009-05-23
Sample pre-treatment is a typical procedure in analytical chemistry aimed at improving the performance of analytical systems. In the case of gas sensors, sample pre-treatment systems are devised to overcome sensor limitations in terms of selectivity and sensitivity. For this purpose, systems based on adsorption and desorption processes driven by temperature conditioning have been described. The involvement of large temperature ranges may pose problems when QMB gas sensors are used. In this work, a study of such influences on the overall sensing properties of QMB sensors is presented. The results allowed the design of a pre-treatment unit, coupled with a QMB gas sensor array, optimized to operate in a suitable temperature range. The performance of the system is illustrated by the partial separation of water vapor from a gas mixture and by a substantial improvement of the signal-to-noise ratio.
Amaro, Rosa; Murillo, Miguel; González, Zurima; Escalona, Andrés; Hernández, Luís
2009-01-01
The treatment of wheat samples was optimized before the determination of phytic acid by high-performance liquid chromatography with refractive index detection. Drying by lyophilization and oven drying were studied; drying by lyophilization gave better results, confirming that this step is critical in preventing significant loss of analyte. In the extraction step, washing of the residue and collection of this water before retention of the phytates in the NH2 Sep-Pak cartridge were important. The retention of phytates in the NH2 Sep-Pak cartridge and elimination of the HCl did not produce significant loss (P = 0.05) in the phytic acid content of the sample. Recoveries of phytic acid averaged 91%, which is a substantial improvement with respect to values reported by others using this methodology.
Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M
2017-03-08
Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0-2 points/question. A combinations algorithm was developed to assess street segments' representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score(®), a validated neighborhood walkability measure. Street segment quality scores ranged 10-47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172-475 (Mean = 352.3 ± 63.6). Walk scores(®) ranged 0-91 (Mean = 46.7 ± 26.3). Street segment combinations' correlation coefficients ranged 0.75-1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores(®) (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes.
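The scoring and validation steps described here, per-question points of 0-2 summed into segment scores and then correlated with a walkability measure via Spearman's rank correlation, can be sketched as follows (illustrative helper names; the Spearman implementation assumes no tied ranks for brevity):

```python
def segment_score(item_points):
    """Sum of per-question audit points, each scored 0, 1, or 2."""
    assert all(p in (0, 1, 2) for p in item_points)
    return sum(item_points)

def spearman(xs, ys):
    """Spearman rank correlation between two equal-length lists (no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A neighborhood's overall quality score would then be the sum of its twelve segment scores, and the validation step correlates those totals against an external walkability score across neighborhoods.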
Northern Arabian Sea Circulation - Autonomous Research: Optimal Planning Systems (NASCar-OPS)
2015-09-30
Our long-term goal is to apply our theory and schemes for rigorous optimal path planning and persistent ocean sampling with swarms of autonomous vehicles. PI contact: pierrel@mit.edu. Grant Number: N00014-15-1-2616, 04/25/2015 - 09/30/2018, http://mseas.mit.edu/Research/NASCar-OPS/index.html
Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R
2008-08-01
The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. Application of MAES to this group of contaminants in aquaculture samples, which had not been previously applied to this type of analytes, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g(-1) (except for BB-15, which was 1.43 ng g(-1)). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of appropriate certified reference material (CRM), WMF-01.
Chen, DI-WEN
2001-11-21
Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results from the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information
Smiley Evans, Tierra; Barry, Peter A.; Gilardi, Kirsten V.; Goldstein, Tracey; Deere, Jesse D.; Fike, Joseph; Yee, JoAnn; Ssebide, Benard J; Karmacharya, Dibesh; Cranfield, Michael R.; Wolking, David; Smith, Brett; Mazet, Jonna A. K.; Johnson, Christine K.
2015-01-01
Free-ranging nonhuman primates are frequent sources of zoonotic pathogens due to their physiologic similarity and in many tropical regions, close contact with humans. Many high-risk disease transmission interfaces have not been monitored for zoonotic pathogens due to difficulties inherent to invasive sampling of free-ranging wildlife. Non-invasive surveillance of nonhuman primates for pathogens with high potential for spillover into humans is therefore critical for understanding disease ecology of existing zoonotic pathogen burdens and identifying communities where zoonotic diseases are likely to emerge in the future. We developed a non-invasive oral sampling technique using ropes distributed to nonhuman primates to target viruses shed in the oral cavity, which through bite wounds and discarded food, could be transmitted to people. Optimization was performed by testing paired rope and oral swabs from laboratory colony rhesus macaques for rhesus cytomegalovirus (RhCMV) and simian foamy virus (SFV) and implementing the technique with free-ranging terrestrial and arboreal nonhuman primate species in Uganda and Nepal. Both ubiquitous DNA and RNA viruses, RhCMV and SFV, were detected in oral samples collected from ropes distributed to laboratory colony macaques and SFV was detected in free-ranging macaques and olive baboons. Our study describes a technique that can be used for disease surveillance in free-ranging nonhuman primates and, potentially, other wildlife species when invasive sampling techniques may not be feasible. PMID:26046911
Zhumadilov, Kassym; Ivannikov, Alexander; Skvortsov, Valeriy; Stepanenko, Valeriy; Zhumadilov, Zhaxybay; Endo, Satoru; Tanaka, Kenichi; Hoshi, Masaharu
2005-12-01
In order to improve the accuracy of the tooth enamel EPR dosimetry method, EPR spectra recording conditions were optimized. The uncertainty of dose determination was obtained as the mean square deviation of doses, determined with the use of a spectra deconvolution program, from the nominal doses for ten enamel samples irradiated in the range from 0 to 500 mGy. The spectra were recorded at different microwave powers and accumulation times. It was shown that minimal uncertainty is achieved at a microwave power of about 2 mW for the JEOL JES-FA100 spectrometer used. It was found that a limit of the accumulation time exists beyond which uncertainty reduction is ineffective. At an established total time of measurement, reduced uncertainty is obtained by averaging the experimental doses determined from spectra recorded following intermittent sample shaking and sample tube rotation, rather than from one spectrum recorded at longer accumulation time. The effect of sample mass on the spectrometer's sensitivity was investigated in order to find out how to make appropriate corrections.
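The uncertainty figure used here, the root-mean-square deviation of reconstructed doses from the nominal doses, can be computed with a short helper (a hypothetical function, not code from the paper):

```python
import math

def dose_uncertainty(determined, nominal):
    """Root-mean-square deviation of doses reconstructed from EPR spectra
    from the known nominal doses of the irradiated samples."""
    assert len(determined) == len(nominal) and nominal
    return math.sqrt(sum((d - n) ** 2 for d, n in zip(determined, nominal))
                     / len(nominal))
```

Comparing this figure across recording conditions (microwave power, accumulation time) is what identifies the optimal settings.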
Sanz, C; Ansorena, D; Bello, J; Cid, C
2001-03-01
Equilibration time and temperature were the factors studied to choose the best conditions for analyzing volatiles in roasted ground Arabica coffee by a static headspace sampling extraction method. Three temperatures of equilibration were studied: 60, 80, and 90 degrees C. A larger quantity of volatile compounds was extracted at 90 degrees C than at 80 or 60 degrees C, although the same qualitative profile was found for each. The extraction of the volatile compounds was studied at seven different equilibration times: 30, 45, 60, 80, 100, 120, and 150 min. The best time of equilibration for headspace analysis of roasted ground Arabica coffee should be selected depending on the chemical class or compound studied. One hundred and twenty-two volatile compounds were identified, including 26 furans, 20 ketones, 20 pyrazines, 9 alcohols, 9 aldehydes, 8 esters, 6 pyrroles, 6 thiophenes, 4 sulfur compounds, 3 benzenic compounds, 2 phenolic compounds, 2 pyridines, 2 thiazoles, 1 oxazole, 1 lactone, 1 alkane, 1 alkene, and 1 acid.
NASA Astrophysics Data System (ADS)
Taniai, G.; Oda, H.; Kurihara, M.; Hashimoto, S.
2010-12-01
Halogenated volatile organic compounds (HVOCs) produced in the marine environment are thought to play a key role in atmospheric reactions, particularly those involved in the global radiation budget and the depletion of tropospheric and stratospheric ozone. To evaluate HVOC concentrations in various natural samples, we developed an automated dynamic headspace extraction method for the determination of 15 HVOCs, namely chloromethane, bromomethane, bromoethane, iodomethane, iodoethane, bromochloromethane, 1-iodopropane, 2-iodopropane, dibromomethane, bromodichloromethane, chloroiodomethane, chlorodibromomethane, bromoiodomethane, tribromomethane, and diiodomethane. A dynamic headspace system (GERSTEL DHS) was used to purge the gas phase above samples and to trap HVOCs from the purge gas on an adsorbent column. We measured the HVOC concentrations on the adsorbent column with a gas chromatograph (Agilent 6890N) coupled to a mass spectrometer (Agilent 5975C). In the dynamic headspace system, a glass tube containing Tenax TA or Tenax GR was used as the adsorbent column for the collection of the 15 HVOCs. The parameters for purge-and-trap extraction, such as purge flow rate (ml/min), purge volume (ml), incubation time (min), and agitator speed (rpm), were optimized. The detection limits of HVOCs in water samples were 1270 pM (chloromethane), 103 pM (bromomethane), 42.1 pM (iodomethane), and 1.4 to 10.2 pM (other HVOCs). The repeatability (relative standard deviation) for the 15 HVOCs was < 9% except for chloromethane (16.2%) and bromomethane (11.0%). On the basis of the measurements for various samples, we concluded that this analytical method is useful for the determination of a wide range of HVOCs with boiling points between −24°C (chloromethane) and 181°C (diiodomethane) for liquid or viscous samples.
Eblé, P L; Orsel, K; van Hemert-Kluitenberg, F; Dekker, A
2015-05-15
We aimed to quantify transmission of FMDV Asia-1 in sheep and to evaluate which samples would be optimal for detection of an FMDV infection in sheep. For this, we used 6 groups of 4 non-vaccinated and 6 groups of 4 vaccinated sheep. In each group, 2 sheep were inoculated and placed in contact with 2 pen-mates. Viral excretion was detected for a long period (>21 days post-inoculation, dpi). Transmission of FMDV occurred in the non-vaccinated groups (R0=1.14), but only in the first week after infection, when virus shedding was highest. In the vaccinated groups no transmission occurred (Rv<1, p=0.013). The viral excretion of the vaccinated sheep and the viral load in their pens were significantly lower than those of the non-vaccinated sheep. FMDV could be detected in plasma samples from 12 of 17 infected non-vaccinated sheep, for an average of 2.1 days, but in none of the 10 infected vaccinated sheep. In contrast, FMDV could readily be isolated from mouth swab samples from both non-vaccinated and vaccinated infected sheep, starting at 1-3 dpi and, in 16 of 27 infected sheep, up to 21 dpi. Serologically, after 3-4 weeks, all but one of the infected sheep were detected using the NS-ELISA. We conclude that vaccination of a sheep population would likely stop an epidemic of FMDV and that mouth swab samples would be a good alternative (instead of vesicular lesions or blood samples) for detecting an FMD infection in a sheep population both early and late after infection.
Zhou, Fuqun; Zhang, Aining
2016-01-01
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two of Random Forests’ features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152
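As a rough illustration of the variable-importance-based band selection described above, the sketch below trains a Random Forest on synthetic data and keeps the top-ranked half of the variables. The data, class rule, and forest settings are all invented for the example, and scikit-learn's RandomForestClassifier stands in for the Random Forests implementation used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for one season of composites: 36 variables
# (dates x bands), of which only the first 6 carry any class signal.
n_samples, n_vars = 500, 36
X = rng.normal(size=(n_samples, n_vars))
y = (X[:, :6].sum(axis=1) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank variables by impurity-based importance and keep roughly half,
# mirroring the "about half of the variables" result reported above.
order = np.argsort(rf.feature_importances_)[::-1]
subset = order[: n_vars // 2]
rf_half = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, subset], y)
```

In practice the columns would be actual 10-day composite bands and the labels crop-type samples; the ranking step is unchanged.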
Wang, Man-Juing; Tsai, Chih-Hsin; Hsu, Wei-Ya; Liu, Ju-Tsung; Lin, Cheng-Huang
2009-02-01
The optimal separation conditions and online sample concentration for N,N-dimethyltryptamine (DMT) and related compounds, including alpha-methyltryptamine (AMT), 5-methoxy-AMT (5-MeO-AMT), N,N-diethyltryptamine (DET), N,N-dipropyltryptamine (DPT), N,N-dibutyltryptamine (DBT), N,N-diisopropyltryptamine (DiPT), 5-methoxy-DMT (5-MeO-DMT), and 5-methoxy-N,N-DiPT (5-MeO-DiPT), using micellar EKC (MEKC) with UV-absorbance detection are described. The LODs (S/N = 3) for MEKC ranged from 1.0 to 1.8 microg/mL. Use of online sample concentration methods, including sweeping-MEKC and cation-selective exhaustive injection-sweep-MEKC (CSEI-sweep-MEKC), improved the LODs to 2.2-8.0 ng/mL and 1.3-2.7 ng/mL, respectively. In addition, the order of migration of the nine tryptamines was investigated. A urine sample, obtained by spiking urine collected from a human volunteer with DMT, was also successfully examined.
NASA Astrophysics Data System (ADS)
Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.
2014-01-01
Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e. equivalent to ~ 8 ng of amino sugar carbon. Our results obtained from δ13C analysis of amino sugars in selected marine sediment samples showed that muramic acid had isotopic imprints from indigenous bacterial activities, whereas glucosamine and galactosamine were mainly derived from organic detritus. The analysis of stable carbon isotopic compositions of amino sugars opens a promising window for the investigation of microbial metabolisms in marine sediments and the deep marine biosphere.
Yu, Yuqi; Wang, Jinan; Shao, Qiang; Zhu, Weiliang; Shi, Jiye E-mail: Jiye.Shi@ucb.com
2015-03-28
The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives accurate evaluations of the structural and thermodynamic properties of the conformational transition that are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD can substantially increase computational efficiency and thus expand the application of REMD simulation to larger protein systems.
Tisdale, Evgenia; Kennedy, Devin; Xu, Xiaodong; Wilkins, Charles
2014-01-15
The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of small amount of CuBr2) than is the pentafluorostyrene component distribution.
Lee, Jae Hwan; Jia, Chunrong; Kim, Yong Doo; Kim, Hong Hyun; Pham, Tien Thang; Choi, Young Seok; Seo, Young Un; Lee, Ike Woo
2012-01-01
Trimethylsilanol (TMSOH) can cause damage to the surfaces of scanner lenses in the semiconductor industry, and there is a critical need to measure and control airborne TMSOH concentrations. This study develops a thermal desorption (TD)-gas chromatography (GC)-mass spectrometry (MS) method for measuring trace-level TMSOH in occupational indoor air. Laboratory method optimization gave the best performance when using a dual-bed tube configuration (100 mg of Tenax TA followed by 100 mg of Carboxen 569), n-decane as a solvent, and a TD temperature of 300°C. The optimized method demonstrated high recovery (87%), satisfactory precision (<15% for spiked amounts exceeding 1 ng), good linearity (R2 = 0.9999), a wide dynamic mass range (up to 500 ng), a low method detection limit (2.8 ng m−3 for a 20-L sample), and negligible losses over 3-4 days of storage. The field study showed performance comparable to that in the laboratory and yielded the first measurements of TMSOH, ranging from 1.02 to 27.30 μg m−3, in the semiconductor industry. We suggest the future development of real-time monitoring techniques for TMSOH and other siloxanes for better maintenance and control of scanner lenses in semiconductor wafer manufacturing. PMID:22966229
NASA Astrophysics Data System (ADS)
Oroza, C.; Zheng, Z.; Zhang, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.
2015-12-01
Recent advancements in wireless sensing technologies are enabling real-time application of spatially representative point-scale measurements to model hydrologic processes at the basin scale. A major impediment to the large-scale deployment of these networks is the difficulty of finding representative sensor locations and resilient wireless network topologies in complex terrain. Currently, observatories are structured manually in the field, which provides no metric for the number of sensors required for extrapolation, does not guarantee that point measurements are representative of the basin as a whole, and often produces unreliable wireless networks. We present a methodology that combines LiDAR data, pattern recognition, and stochastic optimization to simultaneously identify representative sampling locations, optimal sensor number, and resilient network topologies prior to field deployment. We compare the results of the algorithm to an existing 55-node wireless snow and soil network at the Southern Sierra Critical Zone Observatory. Existing data show that the algorithm is able to capture a broader range of key attributes affecting snow and soil moisture, defined by a combination of terrain, vegetation and soil attributes, and thus is better suited to basin-wide monitoring. We believe that adopting this structured, analytical approach could improve data quality, increase reliability, and decrease the cost of deployment for future networks.
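A minimal sketch of the representative-placement idea above, with plain k-means clustering standing in for the pattern-recognition step (the study's actual feature set, algorithm, and stochastic optimization of network topology are not reproduced here); the per-cell feature matrix is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical standardized per-cell features derived from LiDAR and imagery
# (e.g. elevation, slope, canopy cover) for 2000 candidate grid cells.
features = rng.normal(size=(2000, 3))

def representative_locations(x, k, n_iter=50):
    # Plain k-means; each converged centroid is then mapped back to the
    # nearest real candidate cell, which becomes one proposed sensor site.
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        centroids = np.array([
            x[labels == c].mean(axis=0) if np.any(labels == c) else centroids[c]
            for c in range(k)
        ])
    d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=0)  # index of the nearest cell to each centroid

sites = representative_locations(features, k=10)
```

Varying k and checking how well the chosen cells span the feature space gives one simple metric for the number of sensors required.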
Design and Sampling Plan Optimization for RT-qPCR Experiments in Plants: A Case Study in Blueberry.
Die, Jose V; Roman, Belen; Flores, Fernando; Rowland, Lisa J
2016-01-01
The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges, which center on minimizing the variability in results and on transparency when reporting technical data in support of the conclusions of a study. A number of aspects of the pre- and post-assay workflow contribute to variability of results. Here, by studying how error is introduced into qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. We found that the stage for which increasing the number of replicates would be most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, and more sampling (RNA extraction) replicates when working with stems or fruits, to obtain optimal results. Using more qPCR replicates provides the least benefit, as this is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner and produce more consistent and reproducible data.
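The budget-constrained choice of replicate numbers can be illustrated with a nested-design variance model: the variance of the grand mean shrinks with each replication level in proportion to that level's variance component. All variance components and unit costs below are hypothetical placeholders, not the blueberry estimates.

```python
from itertools import product

# Hypothetical variance components (squared SDs) and per-replicate costs for
# the three nested stages; real values would come from the error study.
var_samp, var_rt, var_qpcr = 0.30, 0.15, 0.02
cost_samp, cost_rt, cost_qpcr = 10.0, 4.0, 1.0
budget = 120.0

def variance_of_mean(ns, nr, nq):
    # Variance of the grand mean in a fully nested design: ns sampling
    # replicates, nr RT replicates each, nq qPCR replicates each.
    return var_samp / ns + var_rt / (ns * nr) + var_qpcr / (ns * nr * nq)

def total_cost(ns, nr, nq):
    return ns * cost_samp + ns * nr * cost_rt + ns * nr * nq * cost_qpcr

# Exhaustive search over small replicate counts for the lowest-variance
# plan that stays within budget.
best = min(
    (p for p in product(range(1, 9), repeat=3) if total_cost(*p) <= budget),
    key=lambda p: variance_of_mean(*p),
)
```

With these illustrative numbers the sampling stage dominates, so the search spends the budget on extra RNA extractions rather than qPCR replicates, mirroring the qualitative recommendation above.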
Beringer, Paul; Aminimanizani, Amir; Synold, Timothy; Scott, Christy
2002-04-01
High-dose ibuprofen therapy has been demonstrated to slow the deterioration in pulmonary function in children with cystic fibrosis with mild lung disease. Therapeutic drug monitoring has been recommended to maintain peak concentrations within the range of 50 to 100 mg/L to ensure efficacy. Current methods for dosage individualization are based on dose proportionality using visual inspection of the peak concentration; however, because of interpatient variability in the absorption of the various formulations, this method may result in incorrect assessments of the peak concentration achieved. Maximum a posteriori Bayesian analysis (MAP-B) has proven to be a useful and precise method of individualizing the dose of aminoglycosides but requires a description of the structural model. In this study we performed a parametric population modeling analysis of plasma concentrations of ibuprofen after single doses of a 20- to 30-mg/kg tablet or suspension in children with cystic fibrosis. Patients evaluated in this study were part of a single-dose pharmacokinetic study that has been published previously. A one-compartment model with first-order absorption and a lag time best described the data. The pharmacokinetic parameters differed significantly depending on the formulation administered. D-optimal sampling times for the suspension and tablet formulations are 0, 0.25 to 0.5, 1, and 3 to 4 hours, and 0, 0.25 to 0.5, 1 to 1.5, and 5 hours, respectively. MAP-B analysis performed with the 4 D-optimal sampling times resulted in accurate and precise estimates of the pharmacokinetic parameters when compared with maximum likelihood analysis using the complete plasma concentration data set. Further studies are needed to evaluate the performance of these models and their impact on patient outcomes.
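The structural model named above (one compartment, first-order absorption, lag time) has the closed form C(t) = F·D·ka / [V·(ka − ke)] · (e^(−ke·(t−tlag)) − e^(−ka·(t−tlag))) for t > tlag, and 0 before the lag. A sketch with illustrative parameter values (not the fitted population estimates from the study):

```python
import math

# One-compartment model with first-order absorption and a lag time; the
# parameter values below are illustrative, not the fitted estimates.
def conc(t, dose=600.0, f=1.0, ka=2.0, ke=0.35, v=10.0, tlag=0.25):
    """Plasma concentration (mg/L) at t hours after an oral dose (mg)."""
    if t <= tlag:
        return 0.0
    tt = t - tlag
    return (f * dose * ka) / (v * (ka - ke)) * (math.exp(-ke * tt) - math.exp(-ka * tt))

# The peak time follows analytically from dC/dt = 0:
# tmax = tlag + ln(ka/ke) / (ka - ke).
tmax = 0.25 + math.log(2.0 / 0.35) / (2.0 - 0.35)
```

The early peak produced by this shape is consistent with sampling densely in the 0.25-1.5 h window of the D-optimal designs quoted above.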
Cirugeda-Roldán, E M; Cuesta-Frau, D; Miró-Martínez, P; Oltra-Crespo, S; Vigil-Medina, L; Varela-Entrecanales, M
2014-05-01
This paper describes a new method to optimize the computation of the quadratic sample entropy (QSE) metric. The objective is to enhance its segmentation capability between pathological and healthy subjects for short and unevenly sampled biomedical records, like those obtained using ambulatory blood pressure monitoring (ABPM). In ABPM, blood pressure is measured every 20-30 min over 24 h while patients undergo normal daily activities. ABPM is indicated for a number of applications such as white-coat, suspected, borderline, or masked hypertension. Hypertension is a very important clinical issue that can lead to serious health implications, so its identification and characterization are of paramount importance. Nonlinear processing of signals by means of entropy calculation algorithms has been used in many medical applications to distinguish among signal classes. However, most of these methods do not perform well if the records are not long enough and/or not uniformly sampled, which is the case for ABPM records: these signals are extremely short and scattered with outliers or missing/resampled data. This is why ABPM blood pressure signal screening using nonlinear methods is a largely unexplored field. We propose an additional stage for the computation of QSE that is independent of its parameter r and of the input signal length, which enabled us to apply a segmentation process to ABPM records successfully. The experimental dataset consisted of 61 blood pressure records of control and pathological subjects with only 52 samples per time series. The entropy estimates obtained led to the segmentation of the two groups, while other standard nonlinear methods failed.
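In one common formulation, quadratic sample entropy is sample entropy corrected by +ln(2r), which removes the dependence of the value on the chosen tolerance r; the paper's additional stage for r- and length-independence is not reproduced here. A minimal sketch of that baseline:

```python
import math
import random

def sampen(x, m=2, r=0.2):
    # Sample entropy: -ln(A/B), where B and A count template pairs of
    # length m and m + 1 within Chebyshev tolerance r (no self-matches).
    n = len(x)
    def matches(k):
        count = 0
        for i in range(n - k + 1):
            for j in range(i + 1, n - k + 1):
                if max(abs(x[i + t] - x[j + t]) for t in range(k)) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return float("inf") if a == 0 else -math.log(a / b)

def qse(x, m=2, r=0.2):
    # Quadratic sample entropy: SampEn + ln(2r).
    return sampen(x, m, r) + math.log(2 * r)

periodic = [0, 1] * 50                      # highly regular -> low entropy
rnd = random.Random(0)
noisy = [rnd.random() for _ in range(100)]  # irregular -> higher entropy
```

The 100-point series here are already near the short-record regime (the ABPM records above have only 52 samples), which is where the estimate becomes unstable and motivates the paper's optimization.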
Peuchen, Elizabeth H; Sun, Liangliang; Dovichi, Norman J
2016-07-01
Xenopus laevis is an important model organism in developmental biology. While there is a large literature on changes in the organism's transcriptome during development, the study of its proteome is at an embryonic state. Several papers have been published recently that characterize the proteome of X. laevis eggs and early-stage embryos; however, proteomic sample preparation optimizations have not been reported. Sample preparation is challenging because a large fraction (~90 % by weight) of the egg or early-stage embryo is yolk. We compared three common protein extraction buffer systems, mammalian Cell-PE LB(TM) lysing buffer (NP40), sodium dodecyl sulfate (SDS), and 8 M urea, in terms of protein extraction efficiency and protein identifications. SDS extracts contained the highest concentration of proteins, but this extract was dominated by a high concentration of yolk proteins. In contrast, NP40 extracts contained ~30 % of the protein concentration as SDS extracts, but excelled in discriminating against yolk proteins, which resulted in more protein and peptide identifications. We then compared digestion methods using both SDS and NP40 extraction methods with one-dimensional reverse-phase liquid chromatography-tandem mass spectrometry (RPLC-MS/MS). NP40 coupled to a filter-aided sample preparation (FASP) procedure produced nearly twice the number of protein and peptide identifications compared to alternatives. When NP40-FASP samples were subjected to two-dimensional RPLC-ESI-MS/MS, a total of 5171 proteins and 38,885 peptides were identified from a single stage of embryos (stage 2), increasing the number of protein identifications by 23 % in comparison to other traditional protein extraction methods.
NASA Astrophysics Data System (ADS)
Robert, D.; Braud, I.; Cohard, J.; Zin, I.; Vauclin, M.
2010-12-01
Physically based hydrological models involve a large number of parameters and data. Each of them is associated with uncertainties, because some characteristics are measured indirectly and others vary in space or time. Thus, even when many data are measured in the field or in the laboratory, ignorance and uncertainty about the data persist, and a large degree of freedom remains for modeling. Moreover, the choice of physical parameterization also induces uncertainties and errors in model behavior and simulation results. To address this problem, sensitivity analyses are useful. They allow the determination of the influence of each parameter on modeling results and the adjustment of an optimal parameter set by minimizing a cost function. However, the larger the number of parameters, the higher the computational cost of exploring the whole parameter space. In this context, we carried out an original approach in the hydrology domain to perform this sensitivity analysis using a 1D Soil-Vegetation-Atmosphere Transfer model. The chosen method is a global one: it focuses on the output variability due to the input parameter uncertainties. Latin hypercube sampling is adopted to sample the analyzed input parameter space; this method has the advantage of reducing the computational cost. The method is applied using the SiSPAT (Simple Soil-Plant-Atmosphere Transfer) model over a complete year with observations collected in a small catchment in Benin, within the AMMA project. It involves sensitivity to 30 parameters sampled in 40 intervals. The quality of the modeled results is evaluated by calculating several criteria: the bias, the root mean square error, and the Nash-Sutcliffe efficiency coefficient between modeled and observed time series of net radiation, heat fluxes, soil temperatures, and volumetric water contents. To hierarchize the influence of the various input parameters on the results, the study of
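The Latin hypercube scheme mentioned above (here 40 strata for each of 30 parameters) can be sketched in a few lines: values are drawn on the unit interval and would then be rescaled to each parameter's physical range.

```python
import random

def latin_hypercube(n_samples, n_params, seed=42):
    # One column per parameter: [0, 1) is cut into n_samples equal strata,
    # one point is drawn inside each stratum, and the stratum order is
    # shuffled so that columns are independent while every parameter's
    # marginal coverage stays uniform.
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return [list(row) for row in zip(*columns)]  # one row per model run

# 40 runs over 30 parameters, as in the sensitivity study set-up.
runs = latin_hypercube(40, 30)
```

Each of the 40 rows defines one SiSPAT-style model run; compared with plain random sampling, every parameter is guaranteed to be probed once in each of its 40 intervals.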
Fakanya, Wellington M.; Tothill, Ibtisam E.
2014-01-01
The development of an electrochemical immunosensor for the biomarker, C-reactive protein (CRP), is reported in this work. CRP has been used to assess inflammation and is also used in a multi-biomarker system as a predictive biomarker for cardiovascular disease risk. A gold-based working electrode sensor was developed, and the types of electrode printing inks and ink curing techniques were then optimized. The electrodes with the best performance parameters were then employed for the construction of an immunosensor for CRP by immobilizing anti-human CRP antibody on the working electrode surface. A sandwich enzyme-linked immunosorbent assay (ELISA) was then constructed after sample addition by using anti-human CRP antibody labelled with horseradish peroxidase (HRP). The signal was generated by the addition of a mediator/substrate system composed of 3,3′,5,5′-tetramethylbenzidine dihydrochloride (TMB) and hydrogen peroxide (H2O2). Measurements were conducted using chronoamperometry at −200 mV against an integrated Ag/AgCl reference electrode. A CRP limit of detection (LOD) of 2.2 ng·mL−1 was achieved in spiked serum samples, and performance agreement was obtained with reference to a commercial ELISA kit. The developed CRP immunosensor was able to detect a diagnostically relevant range of the biomarker in serum without the need for signal amplification using nanoparticles, paving the way for future development on a cardiac panel electrochemical point-of-care diagnostic device. PMID:25587427
Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon
2016-10-01
Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania. The disease is prevalent mainly in the Indian sub-continent, East Africa, and Brazil. VL can be diagnosed by PCR amplifying the ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of these the primers were designed based on regions of the ITS1 gene shared among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real-time LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real-time LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR.
NASA Astrophysics Data System (ADS)
Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.
2014-09-01
Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment, employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e., equivalent to ~8 ng of amino sugar carbon. Compound-specific stable carbon isotopic analysis of amino sugars obtained from marine sediment extracts indicated that glucosamine and galactosamine were mainly derived from organic detritus, whereas muramic acid showed isotopic imprints from indigenous bacterial activities. The δ13C analysis of amino sugars provides a valuable addition to the biomarker-based characterization of microbial metabolism in the deep marine biosphere, which so far has been lipid oriented and biased towards the detection of archaeal signals.
Amorim, Fábio Alan Carqueija; Costa, Vinicius Câmara; Silva, Erik Galvão P da; Lima, Daniel de Castro; Jesus, Raildo Mota de; Bezerra, Marcos de Almeida
2017-07-15
A slurry sampling procedure has been developed for Fe and Mg determination in cassava starch using flame atomic absorption spectrometry. The optimization step was performed using a univariate methodology for the sample mass (200 mg) and a multivariate methodology, using a Box-Behnken design, for the other variables: solvent (HNO3:HCl), final concentration (1.7 mol L-1), and time (26 min). This procedure allowed the determination of iron and magnesium with detection limits of 1.01 and 3.36 mg kg-1, respectively. Precision, expressed as relative standard deviation (%RSD), was 5.8 and 4.1% (n=10) for Fe (17.8 mg kg-1) and Mg (64.5 mg kg-1), respectively. Accuracy was confirmed by analysis of a standard reference material for wheat flour (NIST 1567a), with certified concentrations of 14.1±0.5 mg kg-1 for Fe and 40±2.0 mg kg-1 for Mg; the concentrations found using the proposed method were 13.7±0.3 mg kg-1 for Fe and 40.8±1.5 mg kg-1 for Mg. A comparison with concentrations obtained using closed-vessel microwave digestion was also performed. The concentrations obtained varied between 7.85 and 17.8 mg kg-1 for Fe and between 23.7 and 64.5 mg kg-1 for Mg. Its simplicity, speed, and satisfactory analytical characteristics indicate that the proposed analytical procedure is a good alternative for the determination of Fe and Mg in cassava starch samples.
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim of filling this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of the MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by a design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that a short CT significantly improves the quality of automatic peak detection, which means that full-scan acquisition without additional MS/MS experiments is suggested for nontarget screening. Under optimal parameter settings, MZmine 2 detected 75-100% of the peaks found by manual peak detection at an intensity level of 10^5 in a validation dataset of both spiked and real water samples. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
Sharma, M; Todor, D; Fields, E
2014-06-01
Purpose: To present a novel method allowing fast, true volumetric optimization of tandem and ovoid (T and O) HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4–5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering the high-risk (HR)-CTV with D90 > 90% and minimizing D2cc to the rectum, bladder, and sigmoid with manual optimization; these were approved and delivered. For the second step, each case was re-planned by adding a new structure, created from the 100% prescription isodose line of the manually optimized plan, to the existing physician-delineated HR-CTV, rectum, bladder, and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for the OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc values for all three OARs while preserving good D90 values for the HR-CTV. On plans with “excellent” CTV coverage, average D90 of 96% (range 91–102), sigmoid D2cc was reduced on average by 37% (range 16–73), bladder by 28% (range 20–47), and rectum by 27% (range 15–45). Similar reductions were obtained on plans with “good” coverage, with an average D90 of 93% (range 90–99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18%, and 11% for sigmoid, bladder, and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal increase in planning time, but with the potential for dramatic and systematic reductions of D2cc for OARs and, in some cases, concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4–5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.
Abdulra'uf, Lukman Bola; Sirhan, Ala Yahya; Tan, Guan Huat
2015-01-01
Sample preparation has been identified as the most important step in analytical chemistry and has been tagged as the bottleneck of analytical methodology. The current trend is aimed at developing cost-effective, miniaturized, simplified, and environmentally friendly sample preparation techniques. The fundamentals and applications of multivariate statistical techniques for the optimization of microextraction sample preparation and chromatographic analysis of pesticide residues are described in this review. The use of Plackett-Burman, Doehlert matrix, and Box-Behnken designs is discussed. As observed in this review, a number of analytical chemists have combined chemometrics and microextraction techniques, which has helped to streamline sample preparation and improve sample throughput.
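As an illustration of one of the designs reviewed: a Box-Behnken design places runs at the midpoints of the edges of the coded design cube plus centre replicates. The sketch below generates the coded matrix directly (packages such as pyDOE2 provide equivalent generators; the centre-point count is a free choice).

```python
from itertools import combinations

def box_behnken(n_factors, n_center=3):
    # Edge midpoints of the design cube: each pair of factors takes the four
    # (+/-1, +/-1) combinations while every other factor sits at its
    # midpoint 0; centre replicates estimate pure error.
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * n_factors
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([0] * n_factors for _ in range(n_center))
    return runs

design = box_behnken(3)  # 12 edge runs + 3 centre points for 3 factors
```

Each coded level (-1, 0, +1) would then be mapped to the low, middle, and high settings of an actual extraction variable (e.g. extraction time, temperature, salt content).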
NASA Astrophysics Data System (ADS)
Guarieiro, Lílian Lefol Nani; Pereira, Pedro Afonso de Paula; Torres, Ednildo Andrade; da Rocha, Gisele Olimpio; de Andrade, Jailson B.
Biodiesel is emerging as a renewable fuel and hence is becoming a promising alternative to fossil fuels. Biodiesel can form blends with diesel in any ratio, and thus could replace diesel fuel in diesel engines partially, or even totally, which would bring a number of environmental, economic and social advantages. Although a number of studies are available on regulated substances, there is a gap in studies on unregulated substances, such as carbonyl compounds (CC), emitted during the combustion of biodiesel, biodiesel-diesel and/or ethanol-biodiesel-diesel blends. CC are a class of hazardous pollutants known to participate in photochemical smog formation. In this work a comparison was carried out between the two most widely used CC collection methods: C18 cartridges coated with an acid solution of 2,4-dinitrophenylhydrazine (2,4-DNPH) and impinger bottles filled with 2,4-DNPH solution. Sampling optimization was performed using a 2² factorial design tool. Samples were collected from the exhaust emissions of a diesel engine fueled with biodiesel and operated on a steady-state dynamometer. In the central body of the factorial design, the average of the sum of CC concentrations collected using impingers was 33.2 ppmV, but it was only 6.5 ppmV for C18 cartridges. In addition, the relative standard deviation (RSD) was 4% for impingers and 37% for C18 cartridges. Clearly, the impinger system is able to collect CC more efficiently, with lower error, than the C18 cartridge system. Furthermore, propionaldehyde was hardly sampled by the C18 system at all. For these reasons, the impinger system was chosen in our study. The optimized sampling conditions applied throughout this study were: two serially connected impingers, each containing 10 mL of 2,4-DNPH solution, at a flow rate of 0.2 L min-1 during 5 min. A profile study of the C1-C4 vapor-phase carbonyl compound emissions was obtained from the exhaust of pure diesel (B0), pure biodiesel (B100) and biodiesel-diesel mixtures (B2, B5, B10, B20, B50, B
Multidimensional explicit difference schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Van Leer, B.
1984-01-01
First- and second-order explicit difference schemes are derived for a three-dimensional hyperbolic system of conservation laws, without recourse to dimensional factorization. All schemes are upwind biased and optimally stable.
Multidimensional explicit difference schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Vanleer, B.
1983-01-01
First and second order explicit difference schemes are derived for a three dimensional hyperbolic system of conservation laws, without recourse to dimensional factorization. All schemes are upwind (backward) biased and optimally stable.
Vollmer, Tanja; Schottstedt, Volkmar; Bux, Juergen; Walther-Wenke, Gabriele; Knabbe, Cornelius; Dreier, Jens
2014-01-01
Background There is growing concern on the residual risk of bacterial contamination of platelet concentrates in Germany, despite the reduction of the shelf-life of these concentrates and the introduction of bacterial screening. In this study, the applicability of the BactiFlow flow cytometric assay for bacterial screening of platelet concentrates on day 2 or 3 of their shelf-life was assessed in two German blood services. The results were used to evaluate currently implemented or newly discussed screening strategies. Materials and methods Two thousand and ten apheresis platelet concentrates were tested on day 2 or day 3 after donation using BactiFlow flow cytometry. Reactive samples were confirmed by the BacT/Alert culture system. Results Twenty-four of the 2,100 platelet concentrates tested were reactive in the first test by BactiFlow. Of these 24 platelet concentrates, 12 were false-positive and the other 12 were initially reactive. None of the microbiological cultures of the initially reactive samples was positive. Parallel examination of 1,026 platelet concentrates by culture revealed three positive platelet concentrates with bacteria detected only in the anaerobic culture bottle and identified as Staphylococcus species. Two platelet concentrates were confirmed positive for Staphylococcus epidermidis by culture. Retrospective analysis of the growth kinetics of the bacteria indicated that the bacterial titres were most likely below the diagnostic sensitivity of the BactiFlow assay (<300 CFU/mL) and probably had no transfusion relevance. Conclusions The BactiFlow assay is very convenient for bacterial screening of platelet concentrates independently of the testing day and the screening strategy. Although the optimal screening strategy could not be defined, this study provides further data to help achieve this goal. PMID:24887230
Rogeberg, Magnus; Vehus, Tore; Grutle, Lene; Greibrokk, Tyge; Wilson, Steven Ray; Lundanes, Elsa
2013-09-01
The single-run resolving power of current 10 μm i.d. porous-layer open-tubular (PLOT) columns has been optimized. The columns studied had a poly(styrene-co-divinylbenzene) porous layer (~0.75 μm thickness). In contrast to many previous studies that have employed complex plumbing or compromising set-ups, the SPE-PLOT-LC-MS system was assembled without the use of additional hardware/noncommercial parts, additional valves or sample splitting. A comprehensive study of various flow rates, gradient times, and column length combinations was undertaken. Maximum resolution for <400 bar was achieved using a 40 nL/min flow rate, a 400 min gradient and an 8 m long column. We obtained a 2.3-fold increase in peak capacity compared to previous PLOT studies (950 versus the previously obtained 400, using the peak width = 2σ definition). Our system also meets or surpasses peak capacities obtained in recent reports using nano-ultra-performance LC conditions or long silica monolith nanocolumns. Nearly 500 proteins (1958 peptides) could be identified in just one single injection of an extract corresponding to 1000 BxPC3 beta-catenin (-/-) cells, and ~1200 and 2500 proteins in extracts of 10,000 and 100,000 cells, respectively, allowing detection of central members and regulators of the Wnt signaling pathway.
NASA Astrophysics Data System (ADS)
Khajeh, Mostafa; Golzary, Ali Reza
2014-10-01
In this work, a zinc oxide nanoparticles-chitosan based solid phase extraction has been developed for the separation and preconcentration of trace amounts of methyl orange from water samples. An artificial neural network-cuckoo optimization algorithm has been employed to develop the model for simulation and optimization of this method. The pH, volume of elution solvent, mass of zinc oxide nanoparticles-chitosan, and flow rates of sample and elution solvent were the input variables, while the recovery of methyl orange was the output. The optimum conditions were obtained by the cuckoo optimization algorithm. At the optimum conditions, a limit of detection of 0.7 μg L-1 was obtained for methyl orange. The developed procedure was then applied to the separation and preconcentration of methyl orange from water samples.
Wan, Xinlong; Kim, Min Jee; Kim, Iksoo
2013-11-01
We newly sequenced mitochondrial genomes of Spodoptera litura and Cnaphalocrocis medinalis belonging to Lepidoptera to obtain further insight into mitochondrial genome evolution in this group and investigated the influence of optimality strategies on phylogenetic reconstruction of Lepidoptera. Estimation of p-distances of each mitochondrial gene for available taxonomic levels has shown the highest value in ND6, whereas the lowest values are in COI and COII at the nucleotide level, suggesting different utility of each gene for different hierarchical groups when individual genes are utilized for phylogenetic analysis. Phylogenetic analyses mainly yielded the relationships (((((Bombycoidea + Geometroidea) + Noctuoidea) + Pyraloidea) + Papilionoidea) + Tortricoidea), evidencing the polyphyly of Macrolepidoptera. The Noctuoidea concordantly recovered the familial relationships (((Arctiidae + Lymantriidae) + Noctuidae) + Notodontidae). The tests of optimality strategies, such as exclusion of third codon positions, inclusion of rRNA and tRNA genes, data partitioning, the RY recoding approach, and recoding nucleotides into amino acids, suggested that the majority of the strategies did not substantially alter phylogenetic topologies or nodal supports, except for the sister relationship between Lycaenidae and Pieridae only in the amino acid dataset, which was in contrast to the sister relationship between Lycaenidae and Nymphalidae in Papilionoidea in the remaining datasets.
NASA Technical Reports Server (NTRS)
Drusano, George L.
1991-01-01
The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988), and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with the NONMEM approach (Sheiner et al., 1977), in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
Mitsouras, Dimitris; Mulkern, Robert V; Rybicki, Frank J
2008-08-01
A recently developed method for exact density compensation of nonuniformly arranged samples relies on the analytically known cross-correlations of Fourier basis functions corresponding to the traced k-space trajectory. This method produces a linear system whose solution represents compensated samples that normalize the contribution of each independent element of information that can be expressed by the underlying trajectory. Unfortunately, linear system-based density compensation approaches quickly become computationally demanding with an increasing number of samples (i.e., image resolution). Here, it is shown that when a trajectory is composed of rotationally symmetric interleaves, such as spiral and PROPELLER trajectories, this cross-correlations method leads to a highly simplified system of equations. Specifically, it is shown that the system matrix is circulant block-Toeplitz, so that the linear system is easily block-diagonalized. The method is described and demonstrated for 32-way interleaved spiral trajectories designed for 256 image matrices; samples are compensated noniteratively in a few seconds by solving the small independent block-diagonalized linear systems in parallel. Because the method is exact and considers all the interactions between all acquired samples, a reduction of up to 10% in reconstruction error and a concurrent increase of up to 30% in signal-to-noise ratio are achieved compared to standard density compensation methods.
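The block-diagonalization exploited here can be illustrated on a generic block-circulant system (random made-up blocks, not actual k-space data): because multiplying by a block-circulant matrix is a block circular convolution, a DFT across the block index decouples one large solve into m independent p×p solves.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 8, 4                                  # m symmetric interleaves, p unknowns each
C = rng.standard_normal((m, p, p))
C[0] += 3 * m * np.eye(p)                    # keep the system comfortably nonsingular

# Dense block-circulant system: block (i, j) of A equals C[(i - j) % m]
A = np.zeros((m * p, m * p))
for i in range(m):
    for j in range(m):
        A[i*p:(i+1)*p, j*p:(j+1)*p] = C[(i - j) % m]
b = rng.standard_normal(m * p)
x_dense = np.linalg.solve(A, b)              # reference: one large solve

# FFT block-diagonalization: A x is a block circular convolution with the C[k],
# so in the DFT domain the system splits into m small independent solves
Lam = np.fft.fft(C, axis=0)                  # m eigen-blocks, each p x p
b_hat = np.fft.fft(b.reshape(m, p), axis=0)
x_hat = np.stack([np.linalg.solve(Lam[j], b_hat[j]) for j in range(m)])
x_fft = np.fft.ifft(x_hat, axis=0).real.ravel()

assert np.allclose(x_dense, x_fft)
```

The small per-frequency solves are mutually independent, which is what allows the parallel, noniterative compensation described in the abstract.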
ERIC Educational Resources Information Center
Geldhof, G. John; Gestsdottir, Steinunn; Stefansson, Kristjan; Johnson, Sara K.; Bowers, Edmond P.; Lerner, Richard M.
2015-01-01
Intentional self-regulation (ISR) undergoes significant development across the life span. However, our understanding of ISR's development and function remains incomplete, in part because the field's conceptualization and measurement of ISR vary greatly. A key sample case involves how Baltes and colleagues' Selection, Optimization,…
Results from the NIST-EPA Interagency Agreement on Measurements and Standards in Aerosol Carbon: Sampling Regional PM2.5 for the Chemometric Optimization of Thermal-Optical Analysis Study will be presented at the American Association for Aerosol Research (AAAR) 24th Annual Confer...
Liu Yu; Guo Qiuquan; Nie Hengyong; Lau, W. M.; Yang Jun
2009-12-15
The mechanism of dynamic force modes has been successfully applied to many atomic force microscopy (AFM) applications, such as tapping mode and phase imaging. The high-order flexural vibration modes are a recent advancement of AFM dynamic force modes. AFM optical lever detection sensitivity plays a major role in dynamic force modes because it determines the accuracy in mapping surface morphology, distinguishing various tip-surface interactions, and measuring the strength of the tip-surface interactions. In this work, we have analyzed optimization and calibration of the optical lever detection sensitivity for an AFM cantilever-tip ensemble vibrating in high-order flexural modes and simultaneously experiencing a wide range and variety of tip-sample interactions. It is found that the optimal detection sensitivity depends on the vibration mode, the ratio of the force constant of tip-sample interactions to the cantilever stiffness, as well as the incident laser spot size and its location on the cantilever. It is also found that the optimal detection sensitivity is less dependent on the strength of tip-sample interactions for high-order flexural modes relative to the fundamental mode, i.e., tapping mode. When the force constant of tip-sample interactions significantly exceeds the cantilever stiffness, the optimal detection sensitivity occurs only when the laser spot is located at a certain distance from the cantilever-tip end. Thus, in addition to the 'globally optimized detection sensitivity', the 'tip optimized detection sensitivity' is also determined. Finally, we have proposed a calibration method to determine the actual AFM detection sensitivity in high-order flexural vibration modes against the static end-load sensitivity that is obtained traditionally by measuring a force-distance curve on a hard substrate in the contact mode.
Kwak, Minjung; Jung, Sin-Ho
2014-05-30
Phase II clinical trials are often conducted to determine whether a new treatment is sufficiently promising to warrant a major controlled clinical evaluation against a standard therapy. We consider single-arm phase II clinical trials with right-censored survival time responses, where the ordinary one-sample logrank test is commonly used for testing treatment efficacy. For planning such clinical trials, this paper presents two-stage designs that are optimal in the sense that the expected sample size is minimized if the new regimen has low efficacy, subject to constraints on the type I and type II errors. Two-stage designs that minimize the maximal sample size are also determined. Optimal and minimax designs for a range of design parameters are tabulated along with examples.
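The underlying optimization can be sketched in the simpler binary-endpoint (Simon-type) setting rather than the paper's one-sample logrank setting: enumerate candidate two-stage designs (n1, r1, n, r), keep those satisfying the type I/II error constraints, and select the design minimizing the expected sample size under low efficacy. All numerical values below are illustrative, not taken from the paper.

```python
import math
from functools import lru_cache
import numpy as np

@lru_cache(maxsize=None)
def pmf(n, p):
    """Binomial pmf over 0..n as a vector."""
    k = np.arange(n + 1)
    c = np.array([math.comb(n, i) for i in k], dtype=float)
    return c * p**k * (1.0 - p)**(n - k)

def reject_prob(n1, r1, n, r, p):
    """P(X1 > r1 and X1 + X2 > r), X1 ~ Bin(n1, p), X2 ~ Bin(n - n1, p)."""
    pmf1 = pmf(n1, p)
    sf2 = 1.0 - np.cumsum(pmf(n - n1, p))      # sf2[k] = P(X2 > k)
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        need = r - x1                          # still need X2 > need successes
        if need < 0:
            total += pmf1[x1]
        elif need < n - n1:
            total += pmf1[x1] * sf2[need]
    return total

def optimal_two_stage(p0, p1, alpha, beta, n_max=20):
    """Search (n1, r1, n, r) minimizing E[N | p0] under error constraints."""
    best, best_en = None, math.inf
    for n1 in range(2, n_max):
        for r1 in range(n1):
            pet0 = float(np.cumsum(pmf(n1, p0))[r1])   # P(early stop | p0)
            for n in range(n1 + 1, n_max + 1):
                en0 = n1 + (1.0 - pet0) * (n - n1)     # expected N under H0
                if en0 >= best_en:
                    continue
                for r in range(r1 + 1, n):
                    if reject_prob(n1, r1, n, r, p0) > alpha:
                        continue               # type I error too large; raise r
                    if reject_prob(n1, r1, n, r, p1) >= 1.0 - beta:
                        best, best_en = (n1, r1, n, r), en0
                    break                      # a larger r only reduces power
    return best, best_en

design, expected_n = optimal_two_stage(p0=0.05, p1=0.25, alpha=0.05, beta=0.2)
```

The paper's designs replace the binomial rejection probabilities with one-sample logrank operating characteristics under censored survival, but the search structure is analogous.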
Risticevic, Sanja; DeEll, Jennifer R; Pawliszyn, Janusz
2012-08-17
Metabolomics currently represents one of the fastest growing high-throughput molecular analysis platforms, referring to the simultaneous and unbiased analysis of metabolite pools constituting a particular biological system under investigation. In response to the ever increasing interest in the development of reliable methods capable of obtaining a complete and accurate metabolomic snapshot for subsequent identification, quantification and profiling studies, the purpose of the current investigation is to test the feasibility of solid phase microextraction for advanced fingerprinting of volatile and semivolatile metabolites in complex samples. In particular, the current study is focused on the development and optimization of solid phase microextraction (SPME) - comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC × GC-ToFMS) methodology for metabolite profiling of apples (Malus × domestica Borkh.). For the first time, GC × GC attributes in terms of molecular structure-retention relationships and utilization of two-dimensional separation space on an orthogonal GC × GC setup were exploited in the field of SPME method optimization for complex sample analysis. Analytical performance data were assessed in terms of method precision when commercial coatings are employed in spiked metabolite aqueous sample analysis. The optimized method consisted of the implementation of the direct immersion SPME (DI-SPME) extraction mode and its application to metabolite profiling of apples, and resulted in a tentative identification of 399 metabolites and the composition of a metabolite database far more comprehensive than those obtainable with classical one-dimensional GC approaches. Considering that specific metabolome constituents were reported for the first time in the current study, a valuable approach for future advanced fingerprinting studies in the field of fruit biology is proposed. The current study also intensifies the understanding of SPME
NASA Astrophysics Data System (ADS)
Brum, Daniel M.; Lima, Claudio F.; Robaina, Nicolle F.; Fonseca, Teresa Cristina O.; Cassella, Ricardo J.
2011-05-01
The present paper reports the optimization of Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS), employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized with respect to the experimental conditions for emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as the dependent variable. The three factors related to the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of this solution. At optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure, and recovery rates in the ranges of 88-105%, 94-118% and 95-120% were verified for Cu, Fe and Pb, respectively.
Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea
2014-03-15
To address the objective in a clinical trial of estimating the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with the objective of comparing the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gains (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method.
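The flavor of the optimal phase-two selection probabilities can be sketched with a Neyman-type allocation rule, a simplification of the paper's semiparametric development: sample Y with probability proportional to the cost-standardized conditional standard deviation of Y given W. The strata, costs, and budget below are hypothetical.

```python
import numpy as np

# Hypothetical strata of the cheap auxiliary W: conditional SD of Y given W,
# per-unit cost of a phase-two measurement, and the population share of each stratum
sd = np.array([1.0, 2.0, 8.0])       # sd(Y | W = w)
cost = np.array([1.0, 1.0, 4.0])     # cost of measuring Y in stratum w
freq = np.array([0.5, 0.3, 0.2])     # P(W = w)
budget = 0.4 * np.sum(freq * cost)   # afford ~40% of a full phase-two census

# Neyman-type rule: select for phase two with probability ~ sd / sqrt(cost)
raw = sd / np.sqrt(cost)
pi = np.minimum(1.0, budget / np.sum(freq * cost * raw) * raw)

# Leading variance term of the IPW-type mean estimator: sum freq * sd^2 / pi;
# compare against equal selection probabilities at the same expected cost
pi_srs = np.full_like(pi, budget / np.sum(freq * cost))
var_opt = np.sum(freq * sd**2 / pi)
var_srs = np.sum(freq * sd**2 / pi_srs)
assert var_opt < var_srs             # optimal allocation is more efficient
```

As the abstract notes, the gain grows with the variability of the cost-standardized conditional variance across strata; with homogeneous strata, `pi` collapses to the simple-random-sampling probabilities.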
Dispersion-relation-preserving schemes for computational aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Webb, Jay C.
1992-01-01
Finite difference schemes that have the same dispersion relations as the original partial differential equations are referred to as dispersion-relation-preserving (DRP) schemes. A method to construct time marching DRP schemes by optimizing the finite difference approximations of the space and time derivatives in the wave number and frequency space is presented. A sequence of numerical simulations is then performed.
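The construction can be sketched for a single 7-point spatial stencil: impose a few Taylor (order) constraints, then use the remaining coefficient freedom to minimize the mismatch between the exact and modified wavenumbers over a wavenumber band. This is a generic least-squares variant of the idea, not Tam and Webb's exact procedure.

```python
import numpy as np

# 7-point antisymmetric stencil: du/dx ~ (1/h) * sum_{j=1..3} a_j (u_{i+j} - u_{i-j})
# Modified wavenumber: kbar(k) h = 2 * sum_j a_j sin(j k h)
kh = np.linspace(0.01, np.pi / 2, 200)         # optimize over kh in (0, pi/2]
A = 2 * np.sin(np.outer(kh, [1, 2, 3]))        # kbar = A @ a
b = kh                                         # exact wavenumber

# Accuracy constraints from Taylor expansion: 2*sum j a_j = 1, sum j^3 a_j = 0
C = np.array([[2.0, 4.0, 6.0], [1.0, 8.0, 27.0]])
d = np.array([1.0, 0.0])

# Null-space method for min ||A a - b|| subject to C a = d
a_part = np.linalg.lstsq(C, d, rcond=None)[0]
_, _, vt = np.linalg.svd(C)
N = vt[2:].T                                    # 1-dimensional null space of C
z = np.linalg.lstsq(A @ N, b - A @ a_part, rcond=None)[0]
a_drp = a_part + N @ z                          # dispersion-optimized coefficients

# Compare with the standard 6th-order central-difference coefficients
a_c6 = np.array([45.0, -9.0, 1.0]) / 60.0
err = lambda a: np.max(np.abs(A @ a - b))
assert err(a_drp) < err(a_c6)                  # smaller dispersion error on the band
```

The maximum-order scheme is most accurate near k = 0, while the optimized coefficients trade a little low-wavenumber accuracy for a much better dispersion relation across the whole resolved band, which is the DRP design goal.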
Lee, Seunggeun; Emond, Mary J.; Bamshad, Michael J.; Barnes, Kathleen C.; Rieder, Mark J.; Nickerson, Deborah A.; Christiani, David C.; Wurfel, Mark M.; Lin, Xihong
2012-01-01
We propose in this paper a unified approach for testing the association between rare variants and phenotypes in sequencing association studies. This approach maximizes power by adaptively using the data to optimally combine the burden test and the nonburden sequence kernel association test (SKAT). Burden tests are more powerful when most variants in a region are causal and the effects are in the same direction, whereas SKAT is more powerful when a large fraction of the variants in a region are noncausal or the effects of causal variants are in different directions. The proposed unified test maintains the power in both scenarios. We show that the unified test corresponds to the optimal test in an extended family of SKAT tests, which we refer to as SKAT-O. The second goal of this paper is to develop a small-sample adjustment procedure for the proposed methods for the correction of conservative type I error rates of SKAT family tests when the trait of interest is dichotomous and the sample size is small. Both small-sample-adjusted SKAT and the optimal unified test (SKAT-O) are computationally efficient and can easily be applied to genome-wide sequencing association studies. We evaluate the finite sample performance of the proposed methods using extensive simulation studies and illustrate their application using the acute-lung-injury exome-sequencing data of the National Heart, Lung, and Blood Institute Exome Sequencing Project. PMID:22863193
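The family of statistics interpolating between the burden test and SKAT can be written as Q_rho = (1 - rho) Q_SKAT + rho Q_burden. The sketch below uses simulated genotypes, flat weights, and a permutation null in place of the paper's analytic mixture-of-chi-squares distribution and small-sample adjustment; it only illustrates the rho grid, not the full SKAT-O calibration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 500, 10                                        # subjects, rare variants
G = rng.binomial(2, 0.02, size=(n, m)).astype(float)  # made-up genotype matrix
beta = np.zeros(m)
beta[:3] = 0.8                                        # three causal variants, same direction
y = G @ beta + rng.standard_normal(n)                 # continuous trait
w = np.ones(m)                                        # flat weights for simplicity

def q_rho(y, G, w, rho):
    """Q_rho = (1 - rho) * Q_SKAT + rho * Q_burden via the weighted score vector."""
    s = w * (G.T @ (y - y.mean()))
    return (1 - rho) * np.sum(s**2) + rho * np.sum(s)**2

rhos = [0.0, 0.25, 0.5, 0.75, 1.0]                    # rho = 0: SKAT; rho = 1: burden
obs = np.array([q_rho(y, G, w, r) for r in rhos])

# Permutation null (same permutation across the rho grid each round)
B = 500
null = np.empty((B, len(rhos)))
for b in range(B):
    yp = rng.permutation(y)
    null[b] = [q_rho(yp, G, w, r) for r in rhos]
pvals = (1 + np.sum(null >= obs, axis=0)) / (B + 1)
p_omnibus = pvals.min()                               # minimum over the grid
```

In the actual SKAT-O method the minimum p-value over rho is itself recalibrated against its null distribution; here it is left uncorrected for brevity.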
Lucchinetti, E; Stüssi, E
2004-01-01
Measuring the elasticity constants of biological materials is often subject to important constraints, such as the limited size or the irregular geometry of the samples. In this paper, the identification approach as applied to the specific problem of accurately retrieving the material properties of small bone samples from a measured displacement field is discussed. The identification procedure can be formulated as an optimization problem with the goal of minimizing the difference between computed and measured displacements by searching for an appropriate set of material parameters using dedicated algorithms. Alternatively, the back-calculation of the material properties from displacement maps can be implemented using artificial neural networks. In a practical situation, however, measurement errors strongly affect the identification results, calling for robust optimization approaches in order to accurately retrieve the material properties from error-polluted sample deformation maps. Using a simple model problem, the performances of both classical and neural-network-driven optimization are compared. When performed before the collection of experimental data, this evaluation can be very helpful in pinpointing potential problems with the envisaged experiments, such as the need for a sufficient signal-to-noise ratio, which is particularly important when working with small tissue samples such as specimens cut from rodent bones or single bone trabeculae.
Dorn-In, Samart; Bassitta, Rupert; Schwaiger, Karin; Bauer, Johann; Hölzel, Christina S
2015-06-01
Universal primers targeting the bacterial 16S rRNA gene allow quantification of the total bacterial load in variable sample types by qPCR. However, many universal primer pairs also amplify DNA of plants or even of archaea and other eukaryotic cells. By using these primers, the total bacterial load might be misevaluated whenever samples contain high amounts of non-target DNA. Thus, this study aimed to provide primer pairs which are suitable for quantification and identification of bacterial DNA in samples such as feed, spices and sample material from digesters. For 42 primers, mismatches to the sequences of chloroplasts and mitochondria of plants were evaluated. Six primer pairs were further analyzed with regard to the question of whether they anneal to DNA of archaea, animal tissue and fungi. Subsequently they were tested with sample matrices such as plants, feed, feces, soil and environmental samples. For this purpose, the target DNA in the samples was quantified by qPCR. The PCR products of plant and feed samples were further processed for the single-strand conformation polymorphism method, followed by sequence analysis. The sequencing results revealed that primer pair 335F/769R amplified only bacterial DNA in samples such as plants and animal feed, in which the DNA of plants prevailed.
1972-08-01
35609 Advanced Techniques Branch, Plans and Programs Analysis Division, Directorate for Product Assurance, U.S. Army Missile Command, Redstone Arsenal... Ray Heathcock... The Directorate for Product Assurance has established a rather unique computer program for handling a variety of chain sampling schemes and is available for
An adaptive additive inflation scheme for Ensemble Kalman Filters
NASA Astrophysics Data System (ADS)
Sommer, Matthias; Janjic, Tijana
2016-04-01
Data assimilation for atmospheric dynamics requires an accurate estimate of the uncertainty of the forecast in order to obtain an optimal combination with available observations. This uncertainty has two components: firstly, the uncertainty which originates in the initial condition of the forecast itself, and secondly, the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad-hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent. The additive schemes rely on samples of the model error, e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However, since these methods work in ensemble space (i.e. act directly on the ensemble perturbations), the sampling error is fixed and can be expected to affect the skill substantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.
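The two ad-hoc inflation families mentioned above can be sketched in a few lines; the ensemble and model-error sample pool below are random stand-in data, not actual forecasts or forecast tendencies.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ens, n_state = 20, 5
ens = 1.0 + 0.5 * rng.standard_normal((n_ens, n_state))   # toy forecast ensemble

def multiplicative_inflation(ens, factor):
    """Scale perturbations about the ensemble mean by a fixed factor."""
    mean = ens.mean(axis=0)
    return mean + factor * (ens - mean)

def additive_inflation(ens, pool, rng):
    """Add random draws from a pool of model-error samples to each member."""
    idx = rng.integers(0, len(pool), size=len(ens))
    return ens + pool[idx]

# Stand-in model-error pool (in practice e.g. short-term forecast tendencies)
pool = 0.3 * rng.standard_normal((100, n_state))

spread = lambda e: e.std(axis=0, ddof=1).mean()
infl_mult = multiplicative_inflation(ens, 1.1)            # spread grows by exactly 1.1
avg_add_spread = np.mean([spread(additive_inflation(ens, pool, rng))
                          for _ in range(200)])           # grows on average
```

The abstract's point is visible even in this toy: the additive scheme acts through a fixed, finite sample pool, so its effect carries a sampling error of its own, which motivates generalizing inflation to more degrees of freedom.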
Wójciak-Kosior, Magdalena; Szwerc, Wojciech; Strzemski, Maciej; Wichłacz, Zoltan; Sawicki, Jan; Kocjan, Ryszard; Latalski, Michał; Sowa, Ireneusz
2017-04-01
Trace analysis plays an important role in medicine for the diagnosis of various disorders; however, appropriate sample preparation, mostly including mineralization, is required. Although graphite furnace atomic absorption spectrometry (GF AAS) allows the investigation of biological samples such as blood, serum, and plasma without this step, it is rarely used for direct analysis because the residues of the rich organic matrix inside the furnace are difficult to remove, which may cause spectral/matrix interferences and decrease the lifetime of the graphite tube. In our work, a procedure for the determination of Se, Cr, Mn, Co, Ni, Cd and Pb in whole blood samples with minimal sample pre-treatment was elaborated, using the high-resolution continuum source GF AAS technique. The pyrolysis and atomization temperatures as well as the signal integration time were optimized to obtain the highest intensity and repeatability of the analytical signal. Moreover, owing to a modification of the apparatus, an additional step with minimal argon flow and maximal air flow was added to the graphite furnace temperature program during the pyrolysis stage to increase the oxidative conditions for better matrix removal. The accuracy and precision of the optimized method were verified using the certified reference material (CRM) Seronorm Trace Elements Whole Blood L-1, and the developed method was applied to the trace analysis of blood samples from volunteer patients of the Orthopedics Department.
NASA Astrophysics Data System (ADS)
Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.
2016-03-01
Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieve the best registration performance with a specific algorithm. Methods: In this paper, we investigated a parameter optimization strategy for optical-flow-based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The metric for registration error for a given parameter set was computed using the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
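A plain simulated-annealing loop (not the authors' FSA-AMC variant, which adds adaptive Monte Carlo sampling) conveys the core idea: propose parameter perturbations scaled by a temperature, accept worse candidates with Metropolis probability, and cool geometrically. The "registration error" below is a made-up non-convex surrogate for landmark mTRE, not a real DIR metric.

```python
import numpy as np

rng = np.random.default_rng(5)

def registration_error(params):
    """Stand-in for mTRE as a function of two registration parameters;
    non-convex with a basin near (2, -1)."""
    x, y = params
    return (x - 2)**2 + (y + 1)**2 + 0.5 * np.sin(3 * x) * np.cos(3 * y) + 0.5

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=4000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best, fbest, t = x.copy(), fx, t0
    for _ in range(steps):
        cand = x + rng.normal(scale=np.sqrt(t), size=x.size)  # temperature-scaled move
        fc = f(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):   # Metropolis acceptance
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= cooling                                          # geometric cooling law
    return best, fbest

x_opt, f_opt = simulated_annealing(registration_error, x0=[0.0, 0.0])
```

In the paper's setting, each call to the objective is a GPU optical-flow registration followed by a landmark mTRE evaluation, which is why reducing the number of objective evaluations matters so much.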
Cardozo, Manuelle C; Cavalcante, Dannuza D; Silva, Daniel L F; Santos, Walter N L Dos; Bezerra, Marcos A
2016-09-01
A method was developed for the determination of total antimony in hair samples from patients undergoing chemotherapy against leishmaniasis based on the administration of pentavalent antimonial drugs. The method is based on microwave-assisted digestion of the samples in a pressurized system, reduction of Sb5+ to Sb3+ with KI solution (10% w/v) in ascorbic acid (2% w/v), and its subsequent determination by hydride generation atomic fluorescence spectrometry (HG-AFS). The proportions of each component (HCl, HNO3 and water) used in the digestion were studied by applying a constrained mixture design. The optimal proportions found were 50% water, 25% HNO3 and 25% HCl. Variables involved in the generation of antimony hydride were optimized using a Doehlert design, revealing that good sensitivity is found when using 2.0% w/v NaBH4 and 4.4 mol L-1 HCl. Under the optimum experimental conditions, the method allows the determination of antimony in hair samples with detection and quantification limits of 1.4 and 4.6 ng g-1, respectively, and precision, expressed as relative standard deviation (RSD), of 2.8% (n = 10, at 10.0 mg L-1). The developed method was applied in the analysis of hair samples from patients who take medication against leishmaniasis.
FRESCO: flexible alignment with rectangle scoring schemes.
Dalca, A V; Brudno, M
2008-01-01
While popular DNA sequence alignment tools incorporate powerful heuristics to allow for fast and accurate alignment of DNA, most of them still optimize the classical Needleman-Wunsch scoring scheme. The development of novel scoring schemes is often hampered by the difficulty of finding an optimizing algorithm for each non-trivial scheme. In this paper we define the broad class of rectangle scoring schemes and describe an algorithm and tool that can align two sequences under an arbitrary rectangle scoring scheme in polynomial time. Rectangle scoring schemes encompass some of the popular alignment scoring metrics currently in use, as well as many other functions. We investigate a novel scoring function based on minimizing the expected number of random diagonals observed with the given scores and show that it rivals the LAGAN and Clustal-W aligners, without using any biological or evolutionary parameters. The FRESCO program, freely available at http://compbio.cs.toronto.edu/fresco, gives bioinformatics researchers the ability to quickly compare the performance of other complex scoring formulas without having to implement new algorithms to optimize them.
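For contrast with the rectangle scoring schemes introduced here, the classical Needleman-Wunsch scheme that most aligners optimize can be written as a short dynamic program. This is a textbook sketch with assumed match/mismatch/gap scores, not FRESCO's algorithm:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Classical global alignment score by dynamic programming."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # (mis)match
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[n][m]

score = needleman_wunsch("GATTACA", "GCATGCU")
```

The point of a rectangle scoring scheme is precisely that the objective need not decompose into such per-cell additive terms, which is why a different optimization algorithm is required.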
Noss, Ilka; Doekes, Gert; Sander, Ingrid; Heederik, Dick J J; Thorne, Peter S; Wouters, Inge M
2010-08-01
We recently introduced a passive dust sampling method for airborne endotoxin and glucan exposure assessment-the electrostatic dustfall collector (EDC). In this study, we assessed the effects of different storage and extraction procedures on measured endotoxin and glucan levels, using 12 parallel EDC samples from 10 low exposed indoor environments. Additionally, we compared 2- and 4-week sampling with the prospect of reaching higher dust yields. Endotoxin concentrations were highest after extraction with pyrogen-free water (pf water) + Tween. Phosphate-buffered saline (PBS)-Tween yielded significantly (44%) lower levels, and practically no endotoxin was detected after extraction in pf water without Tween. Glucan levels were highest after extraction in PBS-Tween at 120 degrees C, whereas extracts made in NaOH at room temperature or 120 degrees C were completely negative. Direct extraction from the EDC cloth or sequential extraction after a preceding endotoxin extraction yielded comparable glucan levels. Sample storage at different temperatures before extraction did not affect endotoxin and glucan concentrations. Doubling the sampling duration yielded similar endotoxin and only 50% higher glucan levels. In conclusion, of the tested variables, the extraction medium was the predominant factor affecting endotoxin and glucan yields.
Ou, Chunping; St-Hilaire, André; Ouarda, Taha B M J; Conly, F Malcolm; Armstrong, Nicole; Khalil, Bahaa; Proulx-McInnis, Sandra
2012-12-01
The assessment of the adequacy of sampling locations is an important aspect of validating an effective and efficient water quality monitoring network. Two geostatistical approaches (kriging and Moran's I) are presented for assessing multiple sampling locations. A flexible and comprehensive framework was developed for the selection of multiple sampling locations for multiple variables, accomplished by coupling the geostatistical approaches with principal component analysis (PCA) and a fuzzy optimal model (FOM). The FOM was used in the integrated assessment of both multiple principal components and multiple geostatistical approaches. These integrated methods were successfully applied to the assessment of two independent water quality monitoring networks (WQMNs) of Lake Winnipeg, Canada, which included 14 and 30 stations, respectively, from 2006 to 2010.
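As a concrete anchor for one of the two geostatistical approaches, Moran's I for a set of stations with a spatial weights matrix can be computed directly from its definition, I = (n/W) Σᵢⱼ wᵢⱼ zᵢ zⱼ / Σᵢ zᵢ², with z the mean-centered values and W the sum of the weights. A generic sketch; the rook-adjacency weights below are a toy example, not the Lake Winnipeg network:

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation statistic."""
    z = np.asarray(values, float)
    z = z - z.mean()                       # center the attribute values
    w = np.asarray(weights, float)
    return (len(z) / w.sum()) * (z @ w @ z) / (z @ z)

# Four stations on a line (rook adjacency), with values trending upward:
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
i_stat = morans_i([1, 2, 3, 4], w)   # positive: neighbors have similar values
```

Values of I above the expectation of -1/(n-1) indicate positive spatial autocorrelation, which is what makes the statistic useful for judging whether nearby stations are redundant.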
Zeeman, Matthias J; Werner, Roland A; Eugster, Werner; Siegwolf, Rolf T W; Wehrle, Günther; Mohn, Joachim; Buchmann, Nina
2008-12-01
The application of ¹³C/¹²C in ecosystem-scale tracer models for CO₂ in air requires accurate measurements of the mixing ratios and stable isotope ratios of CO₂. To increase measurement reliability and data intercomparability, as well as to shorten analysis times, we have improved an existing field sampling setup with portable air sampling units and developed a laboratory setup for the analysis of the δ¹³C of CO₂ in air by isotope ratio mass spectrometry (IRMS). The changes consist of (a) optimization of sample and standard gas flow paths, (b) additional software configuration, and (c) automation of liquid nitrogen refilling for the cryogenic trap. We achieved a precision better than 0.1‰ and an accuracy of 0.11 ± 0.04‰ for the measurement of δ¹³C of CO₂ in air, and unattended operation of measurement sequences up to 12 h.
Soylak, Mustafa; Tuzen, Mustafa; Souza, Anderson Santos; das Graças Andrade Korn, Maria; Ferreira, Sérgio Luis Costa
2007-10-22
The present paper describes the development of a microwave-assisted digestion procedure for the determination of zinc, copper and nickel in tea samples by flame atomic absorption spectrometry (FAAS). The optimization step was performed using a full factorial design (2³) involving the factors: composition of the acid mixture (CMA), microwave power (MP) and radiation time (RT). The experiments of this factorial design were carried out using a certified reference material of tea, GBW 07605, furnished by the National Research Centre for Certified Reference Materials, China, with the metal recoveries taken as the response. The relative standard deviations of the method were below 8% for the three elements. The proposed procedure was used for the determination of copper, zinc and nickel in several samples of tea from Turkey. For the 10 tea samples analyzed, the concentrations of copper, zinc and nickel ranged over 6.4-13.1, 7.0-16.5 and 3.1-5.7 µg g⁻¹, respectively.
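A 2³ full factorial design such as the one used here simply enumerates every combination of low/high coded levels for the three factors. A minimal sketch: the factor names follow the abstract, the coded levels -1/+1 are the usual convention, and the actual physical settings behind each level are not given in the abstract, so none are assumed here.

```python
from itertools import product

# Coded levels (-1 = low, +1 = high) for the three factors:
# acid-mixture composition (CMA), microwave power (MP), radiation time (RT).
factors = ["CMA", "MP", "RT"]
design = [dict(zip(factors, levels)) for levels in product((-1, 1), repeat=3)]
# A 2^3 full factorial yields 8 runs covering every combination of levels.
```

Running all 8 combinations (plus replicates of the reference material) is what allows main effects and interactions of CMA, MP, and RT to be estimated simultaneously.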
Farooq, Hashim; Courtier-Murias, Denis; Soong, Ronald; Masoom, Hussain; Maas, Werner; Fey, Michael; Kumar, Rajeev; Monette, Martine; Stronks, Henry; Simpson, Myrna J; Simpson, André J
2013-03-01
A method is presented that combines Carr-Purcell-Meiboom-Gill (CPMG) during acquisition with either selective or nonselective excitation to produce a considerable intensity enhancement and a simultaneous loss in chemical shift information. A range of parameters can theoretically be optimized very rapidly on the basis of the signal from the entire sample (hard excitation) or a spectral subregion (soft excitation), and should prove useful for biological, environmental, and polymer samples that often exhibit highly dispersed and broad spectral profiles. To demonstrate the concept, we focus on the application of our method to T₁ determination, specifically for the slowest-relaxing components in a sample, which ultimately determine the optimal recycle delay in quantitative NMR. The traditional inversion recovery (IR) pulse program is combined with a CPMG sequence during acquisition. The slowest-relaxing components are selected with a shaped pulse, and then low-power CPMG echoes are applied during acquisition with intervals shorter than chemical shift evolution (RCPMG), thus producing a single peak with an SNR commensurate with the sum of the signal integrals in the selected region. A traditional ¹³C IR experiment is compared with the selective ¹³C IR-RCPMG sequence and yields the same T₁ values for samples of lysozyme and riverine dissolved organic matter within error. For lysozyme, the RCPMG approach is ~70 times faster, and in the case of dissolved organic matter it is over 600 times faster. This approach can be adapted for the optimization of a host of parameters where chemical shift information is not necessary, such as cross-polarization/mixing times and pulse lengths.
NASA Astrophysics Data System (ADS)
Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert
2015-06-01
Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real time. However, with current sensor capabilities, vehicles must exit the traffic stream and slow down for accurate measurement. Hence, agencies need devices capable of higher vehicle passing speeds to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented, with deployment configurations and settings depending mainly on the experience of operating engineers. To assure adequate data, most practitioners use very high-frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition for high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
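One way to read "sample rate as a function of vehicle speed" is that a pavement feature of fixed length (e.g. a sensor gauge length or tire footprint) traversed at speed v must still be covered by enough samples. The helper below is an illustrative back-of-envelope sketch under that assumption, not the study's validated requirement; the feature length and samples-per-feature count are hypothetical.

```python
def min_sample_rate(speed_mps, feature_length_m, samples_per_feature=10):
    """Samples per second needed so that a feature of the given length
    is covered by at least `samples_per_feature` samples at the given
    vehicle speed: f >= k * v / L."""
    return samples_per_feature * speed_mps / feature_length_m

# A 0.3 m feature at highway speed (30 m/s) with 10 samples per pass:
rate = min_sample_rate(30.0, 0.3)   # ~1000 samples/s
```

The linear scaling with speed is the key point: a rate tuned for ramp speeds is heavily oversampled (or a mainline rate undersampled) once vehicle speed changes, which is the fragmentation the study addresses.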
Chen, Laiguo; Huang, Yumei; Han, Shuang; Feng, Yongbin; Jiang, Guo; Tang, Caiming; Ye, Zhixiang; Zhan, Wei; Liu, Ming; Zhang, Sukun
2013-01-25
Accurately quantifying short chain chlorinated paraffins (SCCPs) in soil samples with gas chromatograph coupled with electron capture negative ionization mass spectrometry (GC-ECNI-MS) is difficult because many other polychlorinated pollutants are present in the sample matrices. These pollutants (e.g., polychlorinated biphenyls (PCBs), organochlorine pesticides (OCPs) and toxaphene) can cause serious interferences during SCCPs analysis with GC-MS. Four main columns packed with different adsorbents, including silica gel, Florisil and alumina, were investigated in this study to determine their performance for separating interfering pollutants from SCCPs. These experimental results suggest that the optimum cleanup procedure uses a silica gel column and a multilayer silica gel-Florisil composite column. This procedure completely separated 22 PCB congeners, 23 OCPs and three toxaphene congeners from SCCPs. However, p,p'-DDD, cis-nonachlor and o,p'-DDD were not completely removed and only 53% of the total toxaphene was removed. This optimized method was successfully and effectively applied for removing interfering pollutants from real soil samples. SCCPs in 17 soil samples from different land use areas within a suburban region were analyzed with the established method. The concentrations of SCCPs in these samples were between 7 and 541 ng g(-1) (mean: 84 ng g(-1)). Similar homologue SCCPs patterns were observed between the soil samples collected from different land use areas. In addition, lower chlorinated (Cl(6/7)) C(10)- and C(11)- SCCPs were the dominant congeners.
Zuloaga, O; Etxebarria, N; Fernández, L A; Madariaga, J M
2000-08-01
The microwave-assisted extraction (MAE), accelerated solvent extraction (ASE) and Soxhlet extraction of two isomers of hexachlorocyclohexane, alpha-HCH and gamma-HCH, from a polluted landfill soil were optimized following different experimental designs. For microwave-assisted extraction, the following variables were considered: pressure, extraction time, microwave power, percentage of acetone in the n-hexane mixture, and solvent volume. For ASE, the variables were pressure, temperature and extraction time. Finally, the percentage of acetone in the n-hexane mixture and the extraction time were the only variables studied for Soxhlet extraction. The concentrations obtained by the three extraction techniques were, within their experimental uncertainties, in good agreement, which supports the use of both ASE and MAE in the routine determination of lindane in polluted soils and sediments.
NASA Astrophysics Data System (ADS)
Moczo, P.; Kristek, J.; Galis, M.; Pazak, P.
2009-12-01
Numerical prediction of earthquake ground motion in sedimentary basins and valleys often has to account for P-wave to S-wave speed ratios (Vp/Vs) as large as 5 and even larger, mainly in sediments below the groundwater level. The ratio can attain values larger than 10 in unconsolidated sediments (e.g. in Ciudad de México). In the process of developing 3D optimally accurate finite-difference schemes, we encountered a serious problem with accuracy in media with large Vp/Vs ratios, which led us to investigate the fundamental reasons for the inaccuracy. In order to identify the basic inherent aspects of the numerical schemes responsible for their behavior with varying Vp/Vs ratio, we restricted ourselves to the most basic 2nd-order 2D numerical schemes on a uniform grid in a homogeneous medium. Although basic in this sense, the schemes comprise the decisive features for the accuracy of a wide class of numerical schemes. We investigated six numerical schemes: finite-difference, displacement formulation, conventional grid (FD_D_CG); finite-element, Lobatto integration (FE_L); finite-element, Gauss integration (FE_G); finite-difference, displacement-stress formulation, partly-staggered grid (FD_DS_PSG); finite-difference, displacement-stress formulation, staggered grid (FD_DS_SG); and finite-difference, velocity-stress formulation, staggered grid (FD_VS_SG). We defined and calculated local errors of the schemes in amplitude and polarization. Because different schemes use different time steps, they need different numbers of time levels to calculate the solution for a desired time window. Therefore, we normalized errors for a unit time. The normalization allowed a direct comparison of the errors of different schemes. Extensive numerical calculations for wide ranges of values of the Vp/Vs ratio, spatial sampling ratio, stability ratio, and the entire range of directions of propagation with respect to the spatial grid led to interesting and surprising findings. The accuracy of FD_D_CG, FE_L and FE_G strongly depends on the Vp/Vs ratio. The schemes are not
NASA Astrophysics Data System (ADS)
Linnér, Elisabeth Schold; Morén, Max; Smed, Karl-Oskar; Nysjö, Johan; Strand, Robin
In this paper, we present LatticeLibrary, a C++ library for general processing of 2D and 3D images sampled on arbitrary lattices. The current implementation supports the Cartesian Cubic (CC), Body-Centered Cubic (BCC) and Face-Centered Cubic (FCC) lattices, and is designed to facilitate addition of other sampling lattices. We also introduce BccFccRaycaster, a plugin for the existing volume renderer Voreen, making it possible to view CC, BCC and FCC data, using different interpolation methods, with the same application. The plugin supports nearest neighbor and trilinear interpolation at interactive frame rates. These tools will enable further studies of the possible advantages of non-Cartesian lattices in a wide range of research areas.
Zhuang, Joanna J; Zondervan, Krina; Nyberg, Fredrik; Harbron, Chris; Jawaid, Ansar; Cardon, Lon R; Barratt, Bryan J; Morris, Andrew P
2010-01-01
Genome-wide association (GWA) studies have proved extremely successful in identifying novel genetic loci contributing effects to complex human diseases. In doing so, they have highlighted the fact that many potential loci of modest effect remain undetected, partly due to the need for samples consisting of many thousands of individuals. Large-scale international initiatives, such as the Wellcome Trust Case Control Consortium, the Genetic Association Information Network, and the database of genetic and phenotypic information, aim to facilitate discovery of modest-effect genes by making genome-wide data publicly available, allowing information to be combined for the purpose of pooled analysis. In principle, disease or control samples from these studies could be used to increase the power of any GWA study via judicious use as “genetically matched controls” for other traits. Here, we present the biological motivation for the problem and the theoretical potential for expanding the control group with publicly available disease or reference samples. We demonstrate that a naïve application of this strategy can greatly inflate the false-positive error rate in the presence of population structure. As a remedy, we make use of genome-wide data and model selection techniques to identify “axes” of genetic variation which are associated with disease. These axes are then included as covariates in association analysis to correct for population structure, which can result in increases in power over standard analysis of genetic information from the samples in the original GWA study. Genet. Epidemiol. 34: 319–326, 2010. © 2010 Wiley-Liss, Inc. PMID:20088020
NASA Astrophysics Data System (ADS)
Mao, Zhiyi; Shan, Ruifeng; Wang, Jiajun; Cai, Wensheng; Shao, Xueguang
2014-07-01
Polyphenols in plant samples have been extensively studied because phenolic compounds are ubiquitous in plants and can be used as antioxidants in promoting human health. A method for rapid determination of three phenolic compounds (chlorogenic acid, scopoletin and rutin) in plant samples using near-infrared diffuse reflectance spectroscopy (NIRDRS) is studied in this work. Partial least squares (PLS) regression was used for building the calibration models, and the effects of spectral preprocessing and variable selection on the models are investigated for optimization of the models. The results show that individual spectral preprocessing and variable selection has no or slight influence on the models, but the combination of the techniques can significantly improve the models. The combination of continuous wavelet transform (CWT) for removing the variant background, multiplicative scatter correction (MSC) for correcting the scattering effect and randomization test (RT) for selecting the informative variables was found to be the best way for building the optimal models. For validation of the models, the polyphenol contents in an independent sample set were predicted. The correlation coefficients between the predicted values and the contents determined by high performance liquid chromatography (HPLC) analysis are as high as 0.964, 0.948 and 0.934 for chlorogenic acid, scopoletin and rutin, respectively.
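Of the preprocessing steps combined above, multiplicative scatter correction (MSC) is simple enough to sketch: each spectrum is regressed on a reference (here the mean spectrum) and the fitted offset and slope are divided out. A minimal, generic sketch with synthetic spectra, not the authors' NIRDRS pipeline:

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum on the mean
    spectrum (x ≈ a + b * ref) and return (x - a) / b."""
    X = np.asarray(spectra, float)
    ref = X.mean(axis=0)
    corrected = np.empty_like(X)
    for i, x in enumerate(X):
        b, a = np.polyfit(ref, x, 1)     # slope, intercept of x vs. reference
        corrected[i] = (x - a) / b
    return corrected

base = np.array([0.1, 0.4, 0.9, 0.3, 0.7])
spectra = np.vstack([base, 2 * base + 1])   # second row: pure scatter offset/scale
corrected = msc(spectra)
```

After MSC, two spectra differing only by an additive offset and a multiplicative scale collapse onto the same corrected curve, which is why it pairs well with background-removal steps such as CWT.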
Latrous El Atrache, Latifa; Ben Sghaier, Rafika; Bejaoui Kefi, Bochra; Haldys, Violette; Dachraoui, Mohamed; Tortajada, Jeanine
2013-12-15
An experimental design was applied to the optimization of the extraction of carbamate pesticides from surface water samples. Solid-phase extraction (SPE) of the carbamate compounds and their determination by liquid chromatography coupled to an electrospray mass spectrometry detector were considered. A two-level full factorial design (2ᵏ) was used for selecting the variables that affected the extraction procedure. Eluent and sample volumes were statistically the most significant parameters. These significant variables were optimized using a Doehlert matrix. The developed SPE method used 200 mg of C-18 sorbent, 143.5 mL of water sample and 5.5 mL of acetonitrile in the elution step. For validation of the technique, accuracy, precision, detection and quantification limits, linearity, sensitivity and selectivity were evaluated. Extraction recoveries of all the carbamates were above 90%, with relative standard deviations (R.S.D.) in the range of 3-11%. The extraction method was selective, and the detection and quantification limits were between 0.1 and 0.5 µg L⁻¹ and between 1 and 3 µg L⁻¹, respectively.
Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Picotte, Joshua J.; Howard, Danny; Smith, Kelcy; Nelson, Kurtis
2016-01-01
Regression tree models have been widely used for remote-sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (training MAD = 2.5, testing MAD = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
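The randomized train/test replication idea can be sketched as follows. This is a generic illustration: a trivial mean predictor stands in for the regression tree model, and the split fraction and replication count are assumed, not the study's settings.

```python
import random

def split_mad(pairs, train_frac=0.8, n_reps=20, seed=1):
    """Repeatedly shuffle (x, y) pairs into train/test shares, fit a
    stand-in model (the training mean) and report the average test MAD
    and its spread across replications (a stability indicator)."""
    rng = random.Random(seed)
    mads = []
    for _ in range(n_reps):
        data = pairs[:]
        rng.shuffle(data)
        cut = int(train_frac * len(data))
        train, test = data[:cut], data[cut:]
        yhat = sum(y for _, y in train) / len(train)   # stand-in "model"
        mads.append(sum(abs(y - yhat) for _, y in test) / len(test))
    return sum(mads) / len(mads), max(mads) - min(mads)
```

The average MAD tracks accuracy while the spread across replications tracks robustness, mirroring the two criteria the study uses to pick the data-usage fraction and rule count.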
Togashi, Kazutaka; Mutaguchi, Kuninori; Komuro, Setsuko; Kataoka, Makoto; Yamazaki, Hiroshi; Yamashita, Shinji
2016-08-01
In current approaches for new drug development, highly sensitive and robust analytical methods for the determination of test compounds in biological samples are essential. These analytical methods should be optimized for every target compound. However, for biological samples that contain multiple compounds as new drug candidates obtained by cassette dosing tests, it would be preferable to develop a single method that allows the determination of all compounds at once. This study aims to establish a systematic approach that enables a selection of the most appropriate pretreatment method for multiple target compounds without the use of their chemical information. We investigated the retention times of 27 known compounds under different mobile phase conditions and determined the required pretreatment of human plasma samples using several solid-phase and liquid-liquid extractions. From the relationship between retention time and recovery in a principal component analysis, appropriate pretreatments were categorized into several types. Based on the category, we have optimized a pretreatment method for the identification of three calcium channel blockers in human plasma. Plasma concentrations of these drugs in a cassette-dose clinical study at microdose level were successfully determined with a lower limit of quantitation of 0.2 pg/mL for diltiazem, 1 pg/mL for nicardipine, and 2 pg/mL for nifedipine.
Sarrut, Morgan; Rouvière, Florent; Heinisch, Sabine
2017-01-23
This study was devoted to the search for conditions leading to highly efficient sub-hour separations of complex peptide samples, with the objective of coupling to mass spectrometry. In this context, conditions for one-dimensional reversed-phase liquid chromatography (1D-RPLC) were optimized on the basis of a kinetic approach, while conditions for on-line comprehensive two-dimensional liquid chromatography using reversed phase in both dimensions (on-line RPLCxRPLC) were optimized on the basis of a Pareto-optimal approach. Maximizing the peak capacity while minimizing the dilution factor for different analysis times (down to 5 min) were the two objectives under consideration. For gradient times between 5 and 60 min, 15 cm was found to be the best column length in RPLC with sub-2 μm particles under 800 bar as the system pressure. In RPLCxRPLC, for first-dimension gradient times of less than one hour, the sampling rate was found to be a key parameter in addition to conventional parameters including column dimensions, particle size, flow rate and gradient conditions in both dimensions. It was shown that the optimum sampling rate was as low as one fraction per peak for very short gradient times (i.e. below 10 min). The quality descriptors obtained under optimized RPLCxRPLC conditions were compared to those obtained under optimized RPLC conditions. Our experimental results for peptides, obtained with state-of-the-art instrumentation, showed that RPLCxRPLC could outperform 1D-RPLC for gradient times longer than 5 min. In 60 min, the same peak intensity (same dilution) was observed with both techniques but with a 3-fold lower injected amount in RPLCxRPLC. A significant increase of the signal-to-noise ratio, mainly due to a strong noise reduction, was observed in RPLCxRPLC-MS compared to 1D-RPLC-MS, making RPLCxRPLC-MS a promising technique for peptide identification in complex matrices.
NASA Astrophysics Data System (ADS)
Metzger, Stefan; Burba, George; Burns, Sean P.; Blanken, Peter D.; Li, Jiahong; Luo, Hongyan; Zulueta, Rommel C.
2016-03-01
Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) are set to provide the ability of unbiased ecological inference across ecoclimatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analyzers are widely employed for eddy covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties and gas sampling systems, and requires correction. Here, we show that components of the gas sampling system can substantially contribute to such high-frequency attenuation, but their effects can be significantly reduced by careful system design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. For different models of rain caps, this frequency falls into the ranges 2.5-16.5 Hz for CO2 and 2.4-14.3 Hz for H2O; for different particulate filters, 8.3-21.8 Hz for CO2 and 1.4-19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyzer cell. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor-capacitor theory, and NEON's final gas sampling system was developed on this basis.
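The resistor-capacitor analogy used to reconcile these tests treats each tube/filter element as a first-order low-pass with magnitude response |H(f)| = 1/sqrt(1 + (f/fc)²), so the 50 % attenuation point quoted throughout lies at f = fc·sqrt(3). A minimal sketch under that first-order assumption; the 10 Hz cutoff is an arbitrary example, not a NEON value.

```python
import math

def rc_gain(f, fc):
    """First-order low-pass magnitude response with -3 dB cutoff fc."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

def half_signal_frequency(fc):
    """Frequency at which the signal is attenuated to 50 % (|H| = 0.5),
    i.e. fc * sqrt(3) for a first-order element."""
    return fc * math.sqrt(3.0)

f50 = half_signal_frequency(10.0)   # ~17.3 Hz for a 10 Hz cutoff
```

Cascading several such elements (cap, filter, tube) multiplies their responses, which is why the slowest component of the sampling train dominates the overall 50 % attenuation frequency.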
Vaz, Sharmila; Cordier, Reinie; Boyes, Mark; Parsons, Richard; Joosten, Annette; Ciccarelli, Marina; Falkmer, Marita; Falkmer, Torbjorn
2016-01-01
An important characteristic of a screening tool is its discriminant ability, that is, the measure's accuracy in distinguishing between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of the SDQ at scale, subscale and item levels, with a view to identifying the items that have the most informant discrepancies and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) intraclass correlations between parent and teacher ratings of children's mental health using the SDQ were fair at the individual child level; b) the SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio, or the likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent, was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted. PMID:26771673
NASA Astrophysics Data System (ADS)
Liu, Junwen; Li, Jun; Ding, Ping; Zhang, Yanlin; Liu, Di; Shen, Chengde; Zhang, Gan
2017-04-01
Radiocarbon (¹⁴C) analysis is a unique tool that can be used to directly apportion organic carbon (OC) and elemental carbon (EC) into fossil and non-fossil fractions. In this study, a coupled carbon analyzer and high-vacuum setup was established to collect atmospheric OC and EC. We thoroughly investigated the correlations between ¹⁴C levels and mass recoveries of OC and EC using urban PM2.5 samples collected from a city in central China and found that: (1) the ¹⁴C signal of the OC fraction collected in the helium phase of the EUSAAR_2 protocol (200 °C for 120 s, 300 °C for 150 s, 450 °C for 180 s, and 650 °C for 180 s) was representative of the entire OC fraction, with a relative error of approximately 6%; and (2) after thermal treatments of 120 s at 200 °C, 150 s at 300 °C, and 180 s at 475 °C in an oxidative atmosphere (10% oxygen, 90% helium) and 180 s at 650 °C in helium, the remaining EC fraction sufficiently represented the ¹⁴C level of the entire EC, with a relative error of <10%. The average recoveries of the OC and EC fractions for ¹⁴C analysis were 64 ± 7% (n = 5) and 87 ± 5% (n = 5), respectively. The fractions of modern carbon in the OC and EC of reference material (RM) 8785 were 0.564 ± 0.013 and 0.238 ± 0.006, respectively. Analysis of ¹⁴C levels in four selected PM2.5 samples from Xinxiang, China revealed that the relative contributions of fossil sources to OC and EC in the PM2.5 samples were 50.5 ± 5.8% and 81.4 ± 2.6%, respectively, which are comparable to findings of previous studies conducted in other Chinese cities. We confirmed that most urban EC derives from fossil fuel combustion processes, whereas both fossil and non-fossil sources have comparable and important impacts on OC. Our results suggest that water-soluble organic carbon (WSOC) and its pyrolytic carbon can be completely removed before EC collection via the method employed in this study.
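The apportionment step rests on a two-endmember mixing model: fossil carbon contains no ¹⁴C, so the fossil fraction follows from the measured fraction of modern carbon relative to a non-fossil endmember. A one-line sketch; the reference value of 1.0 is a simplifying assumption, since real studies use an endmember somewhat above 1 to account for bomb-¹⁴C enrichment.

```python
def fossil_fraction(f_modern_sample, f_modern_nonfossil=1.0):
    """Two-endmember mixing: fossil carbon has fM = 0, so
    f_fossil = 1 - fM(sample) / fM(non-fossil endmember)."""
    return 1.0 - f_modern_sample / f_modern_nonfossil

# With the RM 8785 OC value from the abstract (fM = 0.564):
oc_fossil = fossil_fraction(0.564)
```

With a more realistic non-fossil endmember above 1, the inferred fossil contribution increases slightly, which is why the choice of endmember is always reported alongside such apportionments.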
Optimization of a gas sampling system for measuring eddy-covariance fluxes of H2O and CO2
NASA Astrophysics Data System (ADS)
Metzger, S.; Burba, G.; Burns, S. P.; Blanken, P. D.; Li, J.; Luo, H.; Zulueta, R. C.
2015-10-01
Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere in a standardized way over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) will enable unbiased ecological inference across eco-climatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analysers are widely employed for eddy-covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, the remaining high-frequency attenuation varies with site properties and requires correction. Here, we show that the gas sampling system contributes substantially to high-frequency attenuation, which can be minimized by careful design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. This frequency falls into the ranges 2.5-16.5 Hz for CO2 and 2.4-14.3 Hz for H2O across different models of rain caps, and 8.3-21.8 Hz for CO2 and 1.4-19.9 Hz for H2O across different particulate filters. A short and thin stainless steel intake tube was found not to limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) in the infrared gas analyser cell by 50 %. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor-capacitor theory, and NEON's final gas sampling system was developed on this basis. The design consists of the stainless steel intake tube, a pleated mesh
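The 50 %-attenuation frequencies above can be related through the resistor-capacitor (first-order low-pass) theory the authors used to reconcile laboratory and field results. A toy sketch, assuming each component acts as an independent first-order filter whose responses multiply in series (the combination rule and cutoff values here are illustrative assumptions, not NEON's measured transfer functions):

```python
import math

def first_order_gain(f, f_cut):
    """Amplitude response of a first-order (RC) low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (f / f_cut) ** 2)

def f50_series(f_cuts, lo=1e-3, hi=1e3):
    """Frequency (Hz) at which a chain of independent first-order
    elements with cutoffs `f_cuts` attenuates the amplitude to 50 %,
    found by bisection. For a single element, f50 = sqrt(3) * f_cut."""
    def gain(f):
        g = 1.0
        for fc in f_cuts:
            g *= first_order_gain(f, fc)
        return g
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gain(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Adding components (e.g. a rain cap followed by a particulate filter) always lowers the combined 50 %-attenuation frequency, which is why each part of the sampling system matters for the overall frequency response.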
A classification scheme for risk assessment methods.
Stamp, Jason Edwin; Campbell, Philip LaRoche
2004-08-01
This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail, and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation given. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses; those arrangements shift gradually as one moves through the table, with each cell optimal for a particular situation. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method', though oftentimes we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In Section 2 we provide context for this report.
Thomson, Amara C; Ramos, Joyce S; Fassett, Robert G; Coombes, Jeff S; Dalleck, Lance C
2015-01-01
This study sought to determine the optimal criteria and sampling interval to detect a V̇O2 plateau at V̇O2max in patients with metabolic syndrome. Twenty-three participants with criteria-defined metabolic syndrome underwent a maximal graded exercise test. Four different sampling intervals and three different V̇O2 plateau criteria were analysed to determine the effect of each parameter on the incidence of V̇O2 plateau at V̇O2max. Seventeen tests were classified as maximal based on attainment of at least two out of three criteria. There was a significant (p < 0.05) effect of the 15-breath (b) sampling interval on the incidence of V̇O2 plateau at V̇O2max across the ≤ 50 and ≤ 80 mL ∙ min(-1) conditions. Strength of association was established by the Cramer's V statistic (φc); (≤ 50 mL ∙ min(-1) [φc = 0.592, p < 0.05], ≤ 80 mL ∙ min(-1) [φc = 0.383, p < 0.05], ≤ 150 mL ∙ min(-1) [φc = 0.246, p > 0.05]). When conducting maximal stress tests on patients with metabolic syndrome, a 15-b sampling interval and the ≤ 50 mL ∙ min(-1) criterion should be implemented to increase the likelihood of detecting V̇O2 plateau at V̇O2max.
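The criterion described above amounts to averaging breath-by-breath V̇O2 over fixed-size windows and checking whether the final increment stays below a threshold. A hedged sketch (the function name and the simple back-to-back window rule are illustrative; the study's exact windowing procedure may differ):

```python
def vo2_plateau(vo2_ml_min, interval=15, threshold=50.0):
    """Check for a VO2 plateau: average breath-by-breath VO2 (mL/min)
    over consecutive `interval`-breath windows and flag a plateau when
    the increase from the penultimate to the final window is at most
    `threshold` (e.g. <= 50 mL/min with 15-breath windows)."""
    means = [sum(vo2_ml_min[i:i + interval]) / interval
             for i in range(0, len(vo2_ml_min) - interval + 1, interval)]
    if len(means) < 2:
        return False           # not enough complete windows to judge
    return (means[-1] - means[-2]) <= threshold
```

With this rule, a wider threshold (e.g. 150 mL/min) flags plateaus more liberally, which is consistent with the weaker association reported for that criterion.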
Chai, Xutian; Dong, Rui; Liu, Wenxian; Wang, Yanrong; Liu, Zhipeng
2017-03-31
Common vetch (Vicia sativa subsp. sativa L.) is a self-pollinating annual forage legume with worldwide importance. Here, we investigate the optimal number of individuals that may represent the genetic diversity of a single population, using Start Codon Targeted (SCoT) markers. Two cultivated varieties and two wild accessions were evaluated using five SCoT primers, also testing different sampling sizes: 1, 2, 3, 5, 8, 10, 20, 30, 40, 50, and 60 individuals. The results showed that the number of alleles and the Polymorphism Information Content (PIC) were different among the four accessions. Cluster analysis by Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and STRUCTURE placed the 240 individuals into four distinct clusters. The Expected Heterozygosity (HE) and PIC increased along with an increase in sampling size from 1 to 10 plants but did not change significantly when the sample sizes exceeded 10 individuals. At least 90% of the genetic variation in the four germplasms was represented when the sample size was 10. Finally, we concluded that 10 individuals could effectively represent the genetic diversity of one vetch population based on the SCoT markers. This study provides theoretical support for genetic diversity, cultivar identification, evolution, and marker-assisted selection breeding in common vetch.
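The diversity statistics tracked above as sample size grows, Expected Heterozygosity (HE) and PIC, are standard functions of allele frequencies. A sketch of the usual formulas (Nei's HE and the Botstein et al. form of PIC; the study's exact estimators for dominant SCoT markers may differ):

```python
def expected_heterozygosity(p):
    """Nei's expected heterozygosity: He = 1 - sum(p_i^2),
    where p is the list of allele frequencies at one locus."""
    return 1.0 - sum(pi ** 2 for pi in p)

def pic(p):
    """Polymorphism Information Content (Botstein et al. form):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    s2 = sum(pi ** 2 for pi in p)
    cross = sum(2.0 * p[i] ** 2 * p[j] ** 2
                for i in range(len(p)) for j in range(i + 1, len(p)))
    return 1.0 - s2 - cross

# A biallelic locus with equal frequencies: He = 0.5, PIC = 0.375.
print(expected_heterozygosity([0.5, 0.5]), pic([0.5, 0.5]))
```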
Wüst, Thomas; Landau, David P
2012-08-14
Coarse-grained (lattice-) models have a long tradition in aiding efforts to decipher the physical or biological complexity of proteins. Despite the simplicity of these models, however, numerical simulations are often computationally very demanding and the quest for efficient algorithms is as old as the models themselves. Expanding on our previous work [T. Wüst and D. P. Landau, Phys. Rev. Lett. 102, 178101 (2009)], we present a complete picture of a Monte Carlo method based on Wang-Landau sampling in combination with efficient trial moves (pull, bond-rebridging, and pivot moves) which is particularly suited to the study of models such as the hydrophobic-polar (HP) lattice model of protein folding. With this generic and fully blind Monte Carlo procedure, all currently known putative ground states for the most difficult benchmark HP sequences could be found. For most sequences we could also determine the entire energy density of states and, together with suitably designed structural observables, explore the thermodynamics and intricate folding behavior in the virtually inaccessible low-temperature regime. We analyze the differences between random and protein-like heteropolymers for sequence lengths up to 500 residues. Our approach is powerful both in terms of robustness and speed, yet flexible and simple enough for the study of many related problems in protein folding.
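Wang-Landau sampling, the core of the method above, estimates the density of states g(E) by penalizing already-visited energies until the energy histogram is flat, then refining the modification factor. A minimal sketch on a toy model of independent spins (not the HP lattice model, whose pull, bond-rebridging, and pivot moves are far more involved):

```python
import math
import random

def wang_landau(n=10, lnf_final=1e-5, flatness=0.8, seed=1):
    """Wang-Landau estimate of ln g(E) for a toy model: n independent
    spins with E = number of 'up' spins, so exactly g(E) = C(n, E)."""
    random.seed(seed)
    state = [0] * n
    E = 0
    lng = [0.0] * (n + 1)      # running estimate of ln g(E)
    hist = [0] * (n + 1)       # visit histogram for the flatness check
    lnf = 1.0                  # modification factor, halved when flat
    while lnf > lnf_final:
        for _ in range(1000):
            i = random.randrange(n)
            E_new = E + (1 - 2 * state[i])   # flipping spin i shifts E by +/-1
            # accept with probability min(1, g(E) / g(E_new))
            if random.random() < math.exp(min(0.0, lng[E] - lng[E_new])):
                state[i] = 1 - state[i]
                E = E_new
            lng[E] += lnf                    # penalize the current energy
            hist[E] += 1
        if min(hist) > flatness * sum(hist) / len(hist):
            hist = [0] * (n + 1)
            lnf *= 0.5
    return lng
```

Since only ratios of g(E) matter, the returned ln g values are defined up to an additive constant; normalizing by lng[0] recovers the binomial coefficients for this toy model.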
Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter
2016-09-01
A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to develop standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins and served as quality criteria. A five-way analysis of variance (ANOVA) was applied on the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract: MALDI imaging experiments were planned according to fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as an input for the ANOVA. The parameters with the highest impact on the quality were deduced and SOPs recommended.
SU-C-207-03: Optimization of a Collimator-Based Sparse Sampling Technique for Low-Dose Cone-Beam CT
Lee, T; Cho, S; Kim, I; Han, B
2015-06-15
Purpose: In computed tomography (CT) imaging, radiation dose delivered to the patient is one of the major concerns. Sparse-view CT takes projections at sparser view angles and provides a viable option for reducing dose. However, the fast power switching of an X-ray tube needed for sparse-view sampling can be challenging in many CT systems. We have earlier proposed a many-view under-sampling (MVUS) technique as an alternative to sparse-view CT. In this study, we investigated the effects of collimator parameters on the image quality and aimed to optimize the collimator design. Methods: We used a bench-top circular cone-beam CT system together with a CatPhan600 phantom, and took 1440 projections from a single rotation. A multi-slit collimator made of tungsten was mounted on the X-ray source for beam blocking. For image reconstruction, we used a total-variation minimization (TV) algorithm and modified the backprojection step so that only the data measured through the collimator slits are used in the computation. The number of slits and the reciprocation frequency were varied, and their effects on the image quality were investigated. We also analyzed the sampling efficiency, i.e., the sampling density and data incoherence in each case. We tested three collimators with 6, 12, and 18 slits, each at reciprocation frequencies of 10, 30, 50 and 70 Hz/ro. As image quality indices, we used the CNR and the detectability. Results: The image quality results were consistent with the sampling efficiency analysis, and the optimum condition was found to be 12 slits at 30 Hz/ro. Conclusion: We conducted an experiment with a moving multi-slit collimator to realize sparse-sampled cone-beam CT. The effects of collimator parameters on the image quality were systematically investigated, and the optimum condition was determined.
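The modified backprojection step, in which only rays passing the collimator slits contribute to the data term, can be illustrated in one dimension: a least-squares fit restricted to the measured samples plus a total-variation penalty. A toy sketch with a smoothed TV term and plain gradient descent (these stand in for the actual iterative TV-minimization reconstruction; all parameters are illustrative):

```python
import math

def tv_reconstruct(b, mask, n, lam=0.1, iters=3000, lr=0.2, eps=1e-2):
    """Recover a 1-D signal from the samples where mask[i] is True by
    gradient descent on
        0.5 * sum_{measured i} (x_i - b_i)^2
        + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps),
    i.e. least squares with a smoothed total-variation penalty and a
    data term restricted to the measured positions (the 1-D analogue
    of backprojecting only the rays seen through the slits)."""
    x = [0.0] * n
    for _ in range(iters):
        g = [0.0] * n
        for i in range(n):
            if mask[i]:                      # data term only where measured
                g[i] += x[i] - b[i]
        for i in range(n - 1):               # smoothed TV gradient
            d = x[i + 1] - x[i]
            t = lam * d / math.sqrt(d * d + eps)
            g[i] -= t
            g[i + 1] += t
        for i in range(n):
            x[i] -= lr * g[i]
    return x
```

On a piecewise-constant signal sampled at every other position, the TV term fills in the unmeasured points from their neighbors, which is the intuition behind recovering images from incoherently under-sampled rays.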
Twin Signature Schemes, Revisited
NASA Astrophysics Data System (ADS)
Schäge, Sven
In this paper, we revisit the twin signature scheme by Naccache, Pointcheval and Stern from CCS 2001, which is secure under the Strong RSA (SRSA) assumption, and improve its efficiency in several ways. First, we present a new twin signature scheme that is based on the Strong Diffie-Hellman (SDH) assumption in bilinear groups and allows for very short signatures and key material. A big advantage of this scheme is that, in contrast to the original scheme, it does not require a computationally expensive function for mapping messages to primes. We prove this new scheme secure under adaptive chosen message attacks. Second, we present a modification that significantly increases efficiency when signing long messages. This construction uses collision-resistant hash functions as its basis. As a result, our improvements make the signature length independent of the message size. Our construction deviates from the standard hash-and-sign approach, in which the hash value of the message is signed in place of the message itself. We show that in the case of twin signatures, one can exploit the properties of the hash function as an integral part of the signature scheme. This improvement can be applied to both the SRSA-based and the SDH-based twin signature scheme.
Hu, Lingzhi; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Traughber, Melanie; Muzic, Raymond F.
2014-01-01
Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2∗ = 1/T2∗, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2∗ of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2∗ of human skull was measured as 0.2–0.3 ms−1 depending on the specific region, which is more than ten times greater than the R2∗ of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in brain. High-quality, bone
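The R2* values reported above come from analyzing UTE images at multiple echo times. A standard way to estimate R2* from such data is a log-linear least-squares fit of the mono-exponential decay S(TE) = S0·exp(−R2*·TE); the authors' exact fitting procedure is not given here, so this sketch is an assumption:

```python
import math

def fit_r2star(echo_times_ms, signals):
    """Estimate (R2*, S0) by ordinary least squares on the
    log-linearized mono-exponential model
        ln S(TE) = ln S0 - R2* * TE.
    echo_times_ms: echo times in ms; signals: magnitudes at those TEs.
    Returns R2* in 1/ms, matching the units in the abstract."""
    xs = list(echo_times_ms)
    ys = [math.log(s) for s in signals]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope, math.exp(my - slope * mx)
```

With an R2* of 0.2-0.3 ms^-1, skull signal decays an order of magnitude faster than soft tissue, which is why ultrashort echo times are needed to capture cortical bone before the signal vanishes.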
Practical formulation of a positively conservative scheme
NASA Astrophysics Data System (ADS)
Obayashi, Shigeru; Wada, Yasuhiro
1994-05-01
Approximate Riemann solvers have been highly successful for computing the Euler/Navier-Stokes equations, but linearized Riemann solvers are known to fail occasionally by predicting non-physical states with negative density or internal energy. Positively conservative schemes, in contrast, guarantee physical solutions from realistic input. The Harten-Lax-van Leer-Einfeldt (HLLE) scheme is a typical example of a positively conservative scheme. However, the HLLE scheme is highly dissipative at contact discontinuities and shear layers and thus is not applicable to practical simulations. An existing modification to the HLLE scheme, known as HLLEM, enhances the resolution to that of the Roe scheme. However, this modification violates the positivity of density and internal energy. Precise derivation of the modification yields a quadratic inequality and thus requires a case-by-case treatment. This Note describes a new, modified HLLE scheme that satisfies the positively conservative condition approximately. Sample computations are included to demonstrate the resolution and the robustness of the scheme.
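For reference, the unmodified HLLE flux discussed above can be written compactly for the 1-D Euler equations. A hedged sketch using simple Davis-type wavespeed bounds (Einfeldt's original bounds use Roe-averaged quantities, and the Note's modified scheme adds anti-diffusive terms not shown here):

```python
import math

GAMMA = 1.4  # ratio of specific heats for a diatomic ideal gas

def euler_flux(rho, u, p):
    """Physical flux F(U) of the 1-D Euler equations."""
    E = p / (GAMMA - 1.0) + 0.5 * rho * u * u
    return (rho * u, rho * u * u + p, u * (E + p))

def hlle_flux(left, right):
    """HLLE numerical flux at an interface between two primitive
    states (rho, u, p). Wavespeed bounds are clamped so that
    b- <= 0 <= b+, which gives the positivity-preserving average."""
    rL, uL, pL = left
    rR, uR, pR = right
    aL = math.sqrt(GAMMA * pL / rL)          # sound speeds
    aR = math.sqrt(GAMMA * pR / rR)
    bm = min(0.0, uL - aL, uR - aR)          # slowest signal speed
    bp = max(0.0, uL + aL, uR + aR)          # fastest signal speed
    EL = pL / (GAMMA - 1.0) + 0.5 * rL * uL * uL
    ER = pR / (GAMMA - 1.0) + 0.5 * rR * uR * uR
    UL = (rL, rL * uL, EL)
    UR = (rR, rR * uR, ER)
    FL = euler_flux(rL, uL, pL)
    FR = euler_flux(rR, uR, pR)
    # F_HLLE = (b+ F_L - b- F_R + b+ b- (U_R - U_L)) / (b+ - b-)
    return tuple((bp * fl - bm * fr + bp * bm * (ur - ul)) / (bp - bm)
                 for fl, fr, ul, ur in zip(FL, FR, UL, UR))
```

The extra b+ b- (U_R − U_L) term is the dissipation that makes the scheme robust; at a stationary contact it smears the discontinuity, which is exactly the weakness the HLLEM-type modifications address.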
Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.
Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua
2016-09-05
In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate pulse-peak sampling, which in turn introduces errors into parameter estimation. Consequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme consisting of two parts, i.e., a dynamic delay-adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the data acquisition precision of pulse-peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change of the statistical power of the sampled data. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
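The statistical power feedback idea above can be caricatured as follows: among candidate sampling delays, the one aligned with the pulse peak maximizes the average power of the sampled values. A toy sketch (the function names and the exhaustive scan are illustrative stand-ins for the dynamic delay-adjusting module and its feedback loop):

```python
def calibrate_delay(sample_at, delays, n_pulses=200):
    """Pick the sampling delay that maximizes the statistical (mean)
    power of the sampled pulse values: the delay aligned with the
    pulse peak yields the largest average |sample|^2.

    sample_at(delay, k): returns the k-th pulse sampled at `delay`
        (a hypothetical stand-in for the ADC plus delay hardware).
    delays: candidate delay settings to scan.
    """
    best, best_power = None, -1.0
    for d in delays:
        power = sum(sample_at(d, k) ** 2 for k in range(n_pulses)) / n_pulses
        if power > best_power:
            best_power, best = power, d
    return best
```

In the actual scheme the delay is adjusted continuously from the power statistic rather than by a one-shot scan, so that drifts of the peak position (or a tampered local oscillator) are detected and tracked.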
Chung, Wu-Hsun; Tzing, Shin-Hwa; Ding, Wang-Hsien
2015-09-11
A solvent-free method for the rapid analysis of six benzophenone-type UV absorbers in water samples is described. The method involves the use of dispersive micro solid-phase extraction (DmSPE) followed by the simultaneous silylation and thermal desorption (SSTD) gas chromatography-mass spectrometry (GC-MS) operating in the selected-ion-storage (SIS) mode. A Plackett-Burman design was used for screening, and a central composite design (CCD) was applied for optimizing the significant factors. The optimal experimental conditions involved immersing 1.5mg of the Oasis HLB adsorbent in a 10mL portion of water sample. After vigorous shaking for 1min, the adsorbents were transferred to a micro-vial and dried at 122°C for 3.5min; after cooling, 2μL of the BSTFA silylating reagent was added. For SSTD, the injection-port temperature was held at 70°C for 2.5min for derivatization, and the temperature was then rapidly increased to 340°C to allow the thermal desorption of the TMS-derivatives into the GC for 5.7min. The limits of quantitation (LOQs) were determined to be 1.5-5.0ng/L. Precision, as indicated by relative standard deviations (RSDs), was equal to or less than 11% for both intra- and inter-day analysis. Accuracy, expressed as the mean extraction recovery, was between 87% and 95%. A preliminary analysis of the municipal wastewater treatment plant (MWTP) effluent and river water samples revealed that 2-hydroxy-4-methoxybenzophenone (BP-3) was the most common benzophenone-type UV absorber present. Using a standard addition method, the total concentrations of these compounds ranged from 5.1 to 74.8ng/L.
Ferreirós, N; Iriarte, G; Alonso, R M; Jiménez, R M
2006-05-15
A chemometric approach was applied for the optimization of the extraction and separation of the antihypertensive drug eprosartan from human plasma samples. The MultiSimplex program was used to optimize the HPLC-UV method due to the number of experimental and response variables to be studied. The measured responses were the corrected area, the separation of the eprosartan chromatographic peak from plasma interference peaks, and the retention time of the analyte. The use of an Atlantis dC18, 100 mm x 3.9 mm i.d. chromatographic column with 0.026% trifluoroacetic acid (TFA) in the organic phase and 0.031% TFA in the aqueous phase, an initial composition of 80% aqueous phase in the mobile phase, an acetonitrile gradient steepness of 3% during the gradient elution mode, a flow rate of 1.25mL/min and a column temperature of 35 ± 0.2 °C allowed the separation of eprosartan and irbesartan (used as internal standard) from plasma endogenous compounds. In the solid phase extraction procedure, experimental design was used in order to achieve a maximum recovery percentage. Firstly, the significant variables were chosen by way of a fractional factorial design; then, a central composite design was run to obtain the most adequate values of the significant variables. Thus, the extraction procedure for spiked human plasma samples was carried out using C8 cartridges, phosphate buffer pH 2 as conditioning agent, a drying step of 10min, a washing step with methanol-phosphate buffer (20:80, v/v) and methanol as eluent liquid. The SPE-HPLC-UV method developed allowed the separation and quantitation of eprosartan from human plasma samples with adequate resolution and a total analysis time of 1h.
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... sampling scheme and the guidance document are available on EPA's PCB Web site at http://www.epa.gov/pcb, or... sampling