Annotating user-defined abstractions for optimization
Quinlan, D; Schordan, M; Vuduc, R; Yi, Q
2005-12-05
This paper discusses the features of an annotation language that we believe to be essential for optimizing user-defined abstractions. These features should capture semantics of function, data, and object-oriented abstractions, express abstraction equivalence (e.g., a class represents an array abstraction), and permit extension of traditional compiler optimizations to user-defined abstractions. Our future work will include developing a comprehensive annotation language for describing the semantics of general object-oriented abstractions, as well as automatically verifying and inferring the annotated semantics.
Max, N. (California Univ., Davis, CA)
1992-12-17
Radiosity algorithms for global illumination, either "gathering" or "shooting" versions, depend on the calculation of form factors. It is possible to calculate the form factors analytically, but this is difficult when occlusion is involved, so sampling methods are usually preferred. The necessary visibility information can be obtained by ray tracing in the sampled directions. However, area coherence makes it more efficient to project and scan-convert the scene onto a number of planes, for example, the faces of a hemicube. The hemicube faces have traditionally been divided into equal square pixels, but more general subdivisions are practical, and can reduce the variance of the form factor estimates. The hemicube estimates of form factors are based on a finite set of sample directions. We obtain several optimal arrangements of sample directions, which minimize the variance of this estimate. Four approaches are changing the size of the pixels, the shape of the pixels, the shape of the hemicube, or using non-uniform pixel grids. The best approach reduces the variance by 43%. The variance calculation is based on the assumption that the errors in the estimate are caused by the projections of single edges of polygonal patches, and that the positions and orientations of these edges are random.
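The hemicube tally the abstract describes amounts to counting sample directions that reach a patch, with cosine weighting absorbed into the sampling. A minimal sketch under an illustrative geometry (a differential patch facing a parallel disk, where the analytic form factor is R²/(R²+h²)); the function and parameter names are assumptions, not the paper's:

```python
import math
import random

def form_factor_mc(disk_radius, disk_height, n_dirs, seed=0):
    """Monte Carlo form-factor estimate from a differential patch at the
    origin to a parallel disk centered at (0, 0, disk_height).

    Directions are cosine-weighted over the hemisphere, so the form
    factor is simply the fraction of sampled directions that hit the
    disk -- the same quantity a hemicube accumulates via its pixels."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_dirs):
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(u1)                     # cosine-weighted hemisphere sample
        phi = 2.0 * math.pi * u2
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = math.sqrt(1.0 - u1)
        t = disk_height / z                   # scale the direction to the disk's plane
        if (x * t) ** 2 + (y * t) ** 2 <= disk_radius ** 2:
            hits += 1
    return hits / n_dirs

# Analytic form factor to a parallel disk: F = R^2 / (R^2 + h^2) = 0.5 here.
est = form_factor_mc(1.0, 1.0, 200_000)
exact = 0.5
```

The variance of this estimator is what the paper's pixel-shape and grid choices are designed to reduce.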
Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples
Shine, E. P.; Poirier, M. R.
2013-10-29
Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
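The region-of-optimization step can be sketched as binning logged operating points and keeping the densest bins; the (rpm, load) representation, bin sizes, and all names below are illustrative assumptions, not the patent's method:

```python
from collections import Counter

def most_common_region(samples, rpm_bin=500, load_bin=10, top_k=3):
    """Bin (rpm, load) operating points and return the top_k most
    frequently visited bins as the region of optimization."""
    counts = Counter(
        (rpm // rpm_bin, load // load_bin) for rpm, load in samples
    )
    return {cell for cell, _ in counts.most_common(top_k)}

def in_region(region, rpm, load, rpm_bin=500, load_bin=10):
    """Gate for the optimization routine: run only inside the region."""
    return (rpm // rpm_bin, load // load_bin) in region

# Hypothetical usage log: mostly highway cruise near 2000 rpm / ~40% load.
log = [(2100, 42)] * 50 + [(1900, 38)] * 30 + [(4500, 90)] * 5 + [(800, 10)] * 8
region = most_common_region(log, top_k=2)
```

The optimization routine would then be triggered only when `in_region(...)` holds for the current operating condition.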
Urine sampling and collection system optimization and testing
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Geating, J. A.; Koesterer, M. G.
1975-01-01
A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.
Defining Analytical Strategies for Mars Sample Return with Analogue Missions
NASA Astrophysics Data System (ADS)
Osinski, G. R.; Sapers, H. M.; Francis, R.; Pontefract, A.; Tornabene, L. L.; Haltigin, T.
2016-05-01
The characterization of biosignatures in MSR samples will require integrated, cross-platform laboratory analyses carefully correlated and calibrated with Rover-based technologies. Analogue missions provide context for implementation and assessment.
(Sample) Size Matters: Defining Error in Planktic Foraminiferal Isotope Measurement
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2015-12-01
Planktic foraminifera have been used as carriers of stable isotopic signals since the pioneering work of Urey and Emiliani. In those heady days, instrumental limitations required hundreds of individual foraminiferal tests to return a usable value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population, which generally turns over monthly, removing that potential noise from each sample. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. This has been a tremendous advantage, allowing longer time series with the same investment of time and energy. Unfortunately, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Moreover, most workers (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB or ~1°C. Additionally, and perhaps more importantly, we show that under unrealistically ideal conditions (perfect preservation, etc.) it takes ~5 individuals from the mixed layer to achieve an error of less than 0.1‰. Including just the unavoidable vital effects inflates that number to ~10 individuals to achieve ~0.1‰. Combining these errors with the typical machine error inherent in mass spectrometers makes this a vital consideration moving forward.
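The sample-size effect the abstract quantifies can be reproduced with a toy Monte Carlo. The authors' model is in R; this Python sketch uses illustrative parameter values (vital-effect spread, diagenetic fraction and offset), not theirs:

```python
import random
import statistics

def d18o_error(n_specimens, vital_sd=0.25, diag_frac=0.1, diag_offset=0.3,
               n_replicates=20_000, seed=1):
    """Standard error of the mean d18O for a pooled sample of n specimens.

    Each specimen carries the true signal (0.0 here) plus Gaussian
    'vital effect' noise; a random fraction is diagenetically altered
    by a fixed offset. All parameter values are illustrative."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_replicates):
        vals = [
            rng.gauss(0.0, vital_sd)
            + (diag_offset if rng.random() < diag_frac else 0.0)
            for _ in range(n_specimens)
        ]
        means.append(sum(vals) / n_specimens)
    return statistics.stdev(means)

# Error shrinks roughly as 1/sqrt(n) as more specimens are pooled:
se5, se10, se20 = (d18o_error(n) for n in (5, 10, 20))
```

Running the same model with `diag_frac=0` reproduces the "ideal preservation" case the abstract contrasts against.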
Defining the Mars Ascent Problem for Sample Return
Whitehead, J
2008-07-31
Lifting geology samples off of Mars is both a daunting technical problem for propulsion experts and a cultural challenge for the entire community that plans and implements planetary science missions. The vast majority of science spacecraft require propulsive maneuvers that are similar to what is done routinely with communication satellites, so most needs have been met by adapting hardware and methods from the satellite industry. While it is even possible to reach Earth from the surface of the moon using such traditional technology, ascending from the surface of Mars is beyond proven capability for either solid or liquid propellant rocket technology. Miniature rocket stages for a Mars ascent vehicle would need to be over 80 percent propellant by mass. It is argued that the planetary community faces a steep learning curve toward nontraditional propulsion expertise, in order to successfully accomplish a Mars sample return mission. A cultural shift may be needed to accommodate more technical risk acceptance during the technology development phase.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. ?? 1984 Plenum Publishing Corporation.
Ecological and sampling constraints on defining landscape fire severity
Key, C.H.
2006-01-01
Ecological definition and detection of fire severity are influenced by factors of spatial resolution and timing. Resolution determines the aggregation of effects within a sampling unit or pixel (alpha variation), hence limiting the discernible ecological responses, and controlling the spatial patchiness of responses distributed throughout a burn (beta variation). As resolution decreases, alpha variation increases, extracting beta variation and complexity from the spatial model of the whole burn. Seasonal timing impacts the quality of radiometric data in terms of transmittance, sun angle, and potential contrast between responses within burns. Detection sensitivity can degrade toward the end of many fire seasons when low sun angles, vegetation senescence, incomplete burning, hazy conditions, or snow are common. Thus, a need exists to supersede many rapid response applications when remote sensing conditions improve. Lag timing, or time since fire, notably shapes the ecological character of severity through first-order effects that only emerge with time after fire, including delayed survivorship and mortality. Survivorship diminishes the detected magnitude of severity, as burned vegetation remains viable and resprouts, though at first it may appear completely charred or consumed above ground. Conversely, delayed mortality increases the severity estimate when apparently healthy vegetation is in fact damaged by heat to the extent that it dies over time. Both responses depend on fire behavior and various species-specific adaptations to fire that are unique to the pre-fire composition of each burned area. Both responses can lead initially to either over- or underestimating severity. Based on such implications, three sampling intervals for short-term burn severity are identified: rapid, initial, and extended assessment, sampled within about two weeks, two months, and, depending on the ecotype, from three months to one year after fire, respectively. Spatial and temporal
Resolution optimization with irregularly sampled Fourier data
NASA Astrophysics Data System (ADS)
Ferrara, Matthew; Parker, Jason T.; Cheney, Margaret
2013-05-01
Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer-Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications.
Learning approach to sampling optimization: Applications in astrodynamics
NASA Astrophysics Data System (ADS)
Henderson, Troy Allen
A new, novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA) is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models---with varying complexity---of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.
Summers, R. J.; Boudreaux, D. P.; Srinivasan, V. R.
1979-01-01
Steady-state continuous culture was used to optimize lean chemically defined media for a Cellulomonas sp. and Bacillus cereus strain T. Both organisms were extremely sensitive to variations in trace-metal concentrations. However, medium optimization by this technique proved rapid, and multifactor screening was easily conducted by using a minimum of instrumentation. The optimized media supported critical dilution rates of 0.571 and 0.467 h⁻¹ for Cellulomonas and Bacillus, respectively. These values approximated maximum growth rate values observed in batch culture. PMID:16345417
NASA Technical Reports Server (NTRS)
Byrnes, C. I.
1980-01-01
It is noted that recent work by Kamen (1979) on the stability of half-plane digital filters shows that the problem of the existence of a feedback law also arises for other Banach algebras in applications. This situation calls for a realization theory and stabilizability criteria for systems defined over a Banach or Frechet algebra A. Such a theory is developed here, with special emphasis placed on the construction of finitely generated realizations, the existence of coprime factorizations for T(s) defined over A, and the solvability of the quadratic optimal control problem and the associated algebraic Riccati equation over A.
A Source-to-Source Architecture for User-Defined Optimizations
Schordan, M; Quinlan, D
2003-02-06
The performance of object-oriented applications often suffers from the inefficient use of high-level abstractions provided by underlying libraries. Since these library abstractions are user-defined and not part of the programming language itself, only limited information on their high-level semantics can be leveraged through program analysis by the compiler, and thus most often no appropriate high-level optimizations are performed. In this paper we outline an approach based on source-to-source transformation that allows users to define optimizations which are not performed by the compiler they use. These techniques are intended to be as easy and intuitive as possible for potential users, i.e., designers of object-oriented libraries, who most often have only basic compiler expertise.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999385
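The idea of "robust power across the effect size range of interest" can be illustrated with a simple fixed-sample power curve for a two-sample z-test; this is a generic sketch of the evaluation criterion, not the authors' optimality criterion or their adaptive designs:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(n_per_arm, effect_size, alpha=0.05):
    """Power of a two-sided two-sample z-test with unit standard
    deviation. A toy stand-in for comparing candidate designs: a
    robust design keeps power high across the plausible effect-size
    range, not just at one planning value."""
    z_crit = 1.959963984540054            # Phi^{-1}(1 - alpha/2) for alpha = 0.05
    return normal_cdf(effect_size / math.sqrt(2.0 / n_per_arm) - z_crit)

# Worst-case (minimum) power over an effect-size range, per design:
effects = [0.3, 0.4, 0.5, 0.6]
min_power_small = min(power_two_sample(50, d) for d in effects)
min_power_large = min(power_two_sample(100, d) for d in effects)
```

Group sequential and sample size re-estimation designs would replace the fixed `n_per_arm` with an adaptive rule, and the same minimum-power comparison applies.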
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
Towards optimal sampling schedules for integral pumping tests
NASA Astrophysics Data System (ADS)
Leschik, Sebastian; Bayer-Raich, Marti; Musolff, Andreas; Schirmer, Mario
2011-06-01
Conventional point sampling may miss plumes in groundwater due to an insufficient density of sampling locations. The integral pumping test (IPT) method overcomes this problem by increasing the sampled volume. One or more wells are pumped for a long duration (several days) and samples are taken during pumping. The obtained concentration-time series are used for the estimation of average aquifer concentrations Cav and mass flow rates MCP. Although the IPT method is a well accepted approach for the characterization of contaminated sites, no substantiated guideline for the design of IPT sampling schedules (optimal number of samples and optimal sampling times) is available. This study provides a first step towards optimal IPT sampling schedules by a detailed investigation of 30 high-frequency concentration-time series. Different sampling schedules were tested by modifying the original concentration-time series. The results reveal that the relative error in the Cav estimation increases with a reduced number of samples and higher variability of the investigated concentration-time series. Maximum errors of up to 22% were observed for sampling schedules with the lowest number of samples of three. The sampling scheme that relies on constant time intervals ∆t between different samples yielded the lowest errors.
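With a constant pump rate, the pumped-volume weighting in the Cav estimate cancels and the average reduces to a time-weighted mean of the concentration-time series. A sketch via trapezoidal integration, with an illustrative linear concentration trend (the constant-rate assumption and all values are ours, not the study's):

```python
def average_concentration(times, concs):
    """Flow-proportional average concentration over an integral pumping
    test with a constant pump rate Q: Cav = (Q * integral of C dt) / (Q * T),
    so Q cancels; computed here by trapezoidal integration."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += 0.5 * (concs[i] + concs[i - 1]) * dt
    return total / (times[-1] - times[0])

# A sparse sampling schedule can reproduce the dense series' Cav
# when the concentration trend is smooth:
dense_t = [float(h) for h in range(0, 49)]            # hourly samples, 48 h
dense_c = [2.0 + 0.05 * t for t in dense_t]           # slowly rising signal
sparse_t = [0.0, 12.0, 24.0, 36.0, 48.0]              # five samples only
sparse_c = [2.0 + 0.05 * t for t in sparse_t]
```

The study's observed errors for few-sample schedules correspond to concentration-time series that are far less smooth than this toy trend.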
In-depth analysis of sampling optimization methods
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Kim, Myoungsoo; Habets, Boris; Buhl, Stefan; Guhlemann, Steffen; Rößiger, Martin; Bellmann, Enrico; Kim, Seop
2016-03-01
High order overlay and alignment models require good coverage of overlay or alignment marks on the wafer. But dense sampling plans are not possible for throughput reasons. Therefore, sampling plan optimization has become a key issue. We analyze the different methods for sampling optimization and discuss the different knobs to fine-tune the methods to constraints of high volume manufacturing. We propose a method to judge sampling plan quality with respect to overlay performance, run-to-run stability and dispositioning criteria using a number of use cases from the most advanced lithography processes.
Optimal sampling schedule for chemical exchange saturation transfer.
Tee, Y K; Khrapitchev, A A; Sibson, N R; Payne, S J; Chappell, M A
2013-11-01
The sampling schedule for chemical exchange saturation transfer imaging is normally uniformly distributed across the saturation frequency offsets. When this kind of evenly distributed sampling schedule is used to quantify the chemical exchange saturation transfer effect using model-based analysis, some of the collected data are minimally informative to the parameters of interest. For example, changes in labile proton exchange rate and concentration mainly affect the magnetization near the resonance frequency of the labile pool. In this study, an optimal sampling schedule was designed for a more accurate quantification of amine proton exchange rate and concentration, and water center frequency shift based on an algorithm previously applied to magnetization transfer and arterial spin labeling. The resulting optimal sampling schedule samples repeatedly around the resonance frequency of the amine pool and also near to the water resonance to maximize the information present within the data for quantitative model-based analysis. Simulation and experimental results on tissue-like phantoms showed that greater accuracy and precision (>30% and >46%, respectively, for some cases) were achieved in the parameters of interest when using optimal sampling schedule compared with evenly distributed sampling schedule. Hence, the proposed optimal sampling schedule could replace evenly distributed sampling schedule in chemical exchange saturation transfer imaging to improve the quantification of the chemical exchange saturation transfer effect and parameter estimation. PMID:23315799
Protocol optimization for long-term liquid storage of goat semen in a chemically defined extender.
Zhao, B-T; Han, D; Xu, C-L; Luo, M-J; Chang, Z-L; Tan, J-H
2009-12-01
A specific problem in the preservation of goat semen has been the detrimental effect of seminal plasma on the viability of spermatozoa in extenders containing egg yolk or milk. The use of chemically defined extenders will have obvious advantages in liquid storage of buck semen. Our previous study showed that the self-made mZAP extender performed better than commercial extenders, and maintained a sperm motility of 34% for 9 days and a fertilizing potential for successful pregnancies for 7 days. The aim of this study was to extend the viability and fertilizing potential of liquid-stored goat spermatozoa by optimizing procedures for semen processing and storage in the mZAP extender. Semen samples collected from five goat bucks of the Lubei White and Boer breeds were diluted with the extender, cooled and stored at 5 degrees C. Stored semen was evaluated for sperm viability parameters, every 48 h of storage. Data from three ejaculates of different bucks were analysed for each treatment. The percentage data were arcsine-transformed before being analysed with anova and Duncan's multiple comparison test. While cooling at the rate of 0.1-0.25 degrees C/min did not affect sperm viability parameters, doing so at the rate of 0.6 degrees C/min from 30 to 15 degrees C reduced goat sperm motility and membrane integrity. Sperm motility and membrane integrity were significantly higher in semen coated with the extender containing 20% egg yolk than in non-coated semen. Sperm motility, membrane integrity and acrosomal intactness were significantly higher when coated semen was 21-fold diluted than when it was 11- or 51-fold diluted and when extender was renewed at 48-h intervals than when it was not renewed during storage. When goat semen coated with the egg yolk-containing extender was 21-fold diluted, cooled at the rate of 0.07-0.25 degrees C/min, stored at 5 degrees C and the extender renewed every 48 h, a sperm motility of 48% was maintained for 13 days, and an in vitro
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
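The coarse-quantization versus more-looks trade-off under a fixed bit budget can be illustrated with a toy quality model; all constants and functional forms below are illustrative stand-ins, not the paper's derived relationship:

```python
def image_snr(bits_per_sample, total_bits, samples_per_look=1000,
              speckle_var=1.0):
    """Toy quality model for a fixed SAR bit budget: more looks average
    down speckle (variance ~ 1/L), while coarser quantization adds
    noise (~ 2^(-2b) for b bits). Illustrative constants only."""
    looks = total_bits // (bits_per_sample * samples_per_look)
    if looks < 1:
        return 0.0                        # budget cannot afford even one look
    quant_var = 2.0 ** (-2 * bits_per_sample)
    return 1.0 / (speckle_var / looks + quant_var)

# With the budget fixed, the best bits-per-sample is small: coarse
# quantization buys extra looks, matching the abstract's conclusion.
budget = 8_000
best_b = max(range(1, 9), key=lambda b: image_snr(b, budget))
```

Under this toy model the optimum lands at a low bit depth, qualitatively echoing the paper's recommendation to quantize coarsely and maximize the number of looks.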
Chen, Yu; Dong, Fengqing; Wang, Yonghong
2016-09-01
With determined components and experimental reducibility, the chemically defined medium (CDM) and the minimal chemically defined medium (MCDM) are used in many metabolism and regulation studies. This research aimed to develop the chemically defined medium supporting high cell density growth of Bacillus coagulans, which is a promising producer of lactic acid and other bio-chemicals. In this study, a systematic methodology combining the experimental technique with flux balance analysis (FBA) was proposed to design and simplify a CDM. The single omission technique and single addition technique were employed to determine the essential and stimulatory compounds, before the optimization of their concentrations by the statistical method. In addition, to improve the growth rationally, in silico omission and addition were performed by FBA based on the construction of a medium-size metabolic model of B. coagulans 36D1. Thus, CDMs were developed to obtain considerable biomass production of at least five B. coagulans strains, in which two model strains B. coagulans 36D1 and ATCC 7050 were involved. PMID:27262567
Kishishita, Shohei; Katayama, Satoshi; Kodaira, Kunihiko; Takagi, Yoshinori; Matsuda, Hiroki; Okamoto, Hiroshi; Takuma, Shinya; Hirashima, Chikashi; Aoyagi, Hideki
2015-07-01
Chinese hamster ovary (CHO) cells are the most commonly used mammalian host for large-scale commercial production of therapeutic monoclonal antibodies (mAbs). Chemically defined media are currently used for CHO cell-based mAb production. An adequate supply of nutrients, especially specific amino acids, is required for cell growth and mAb production, and developing chemically defined fed-batch processes that support rapid cell growth, high cell density, and high levels of mAb production remains challenging. Many studies have highlighted the benefits of various media designs, supplements, and feed addition strategies in cell cultures. In the present study, we used a strategy involving optimization of a chemically defined feed medium to improve mAb production. Amino acids that were consumed in substantial amounts during a control culture were added to the feed medium as supplements. Supplementation was controlled to minimize accumulation of waste products such as lactate and ammonia. In addition, we evaluated supplementation with tyrosine, which has poor solubility, in the form of a dipeptide or tripeptide to improve its solubility. Supplementation with serine, cysteine, and tyrosine enhanced mAb production, cell viability, and metabolic profiles. A cysteine-tyrosine-serine tripeptide showed high solubility and produced beneficial effects similar to those observed with the free amino acids and with a dipeptide in improving mAb titers and metabolic profiles. PMID:25678240
Wilborn, Bill; Knapp, Kathryn; Farnham, Irene; Marutzky, Sam
2013-07-01
The Underground Test Area (UGTA) activity is responsible for assessing and evaluating the effects of the underground nuclear weapons tests on groundwater at the Nevada National Security Site (NNSS), formerly the Nevada Test Site (NTS), and implementing a corrective action closure strategy. The UGTA strategy is based on a combination of characterization, modeling studies, monitoring, and institutional controls (i.e., monitored natural attenuation). The closure strategy verifies through appropriate monitoring activities that contaminants of concern do not exceed Safe Drinking Water Act (SDWA) standards at the regulatory boundary and that adequate institutional controls are established and administered to ensure protection of the public. Other programs conducted at the NNSS supporting the environmental mission include the Routine Radiological Environmental Monitoring Program (RREMP), Waste Management, and the Infrastructure Program. Given the current programmatic and operational demands for various water-monitoring activities at the same locations, and the ever-increasing resource challenges, cooperative and collaborative approaches to conducting the work are necessary. For this reason, an integrated sampling plan is being developed by the UGTA activity to define sampling and analysis objectives, reduce duplication, eliminate unnecessary activities, and minimize costs. The sampling plan will ensure the right data sets are developed to support closure and efficient transition to long-term monitoring. The plan will include an integrated reporting mechanism for communicating results and integrating process improvements within the UGTA activity as well as between other U.S. Department of Energy (DOE) Programs. (authors)
Wilborn, Bill; Marutzky, Sam; Knapp, Kathryn
2013-02-24
The Underground Test Area (UGTA) activity is responsible for assessing and evaluating the effects of the underground nuclear weapons tests on groundwater at the Nevada National Security Site (NNSS), formerly the Nevada Test Site (NTS), and implementing a corrective action closure strategy. The UGTA strategy is based on a combination of characterization, modeling studies, monitoring, and institutional controls (i.e., monitored natural attenuation). The closure strategy verifies through appropriate monitoring activities that contaminants of concern do not exceed Safe Drinking Water Act (SDWA) standards at the regulatory boundary and that adequate institutional controls are established and administered to ensure protection of the public. Other programs conducted at the NNSS supporting the environmental mission include the Routine Radiological Environmental Monitoring Program (RREMP), Waste Management, and the Infrastructure Program. Given the current programmatic and operational demands for various water-monitoring activities at the same locations, and the ever-increasing resource challenges, cooperative and collaborative approaches to conducting the work are necessary. For this reason, an integrated sampling plan is being developed by the UGTA activity to define sampling and analysis objectives, reduce duplication, eliminate unnecessary activities, and minimize costs. The sampling plan will ensure the right data sets are developed to support closure and efficient transition to long-term monitoring. The plan will include an integrated reporting mechanism for communicating results and integrating process improvements within the UGTA activity as well as between other U.S. Department of Energy (DOE) Programs.
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method widely used to solve optimization problems in the soil and geo-sciences, mainly because of its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag-distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, used when the model of spatial variation is known. PPL, ACDC and MSSD can be combined (PAN) for sampling when the model of spatial variation is unknown. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
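A minimal sketch of spatial simulated annealing with the MSSD criterion, assuming a square candidate grid and a simplified exponential acceptance schedule (an illustration of the principle, not spsann's actual implementation):

```python
import math, random

def mssd(sample, grid):
    # Mean squared shortest distance (the MSSD criterion): average
    # squared distance from each prediction node to its nearest
    # sample point; lower is better for spatial interpolation.
    return sum(min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in sample)
               for gx, gy in grid) / len(grid)

def spatial_sa(grid, n_points, n_iter=2000, seed=42):
    # Toy spatial simulated annealing: perturb one sample point per
    # iteration, accept worse configurations with an exponentially
    # decaying probability, and remember the best state visited.
    rng = random.Random(seed)
    sample = rng.sample(grid, n_points)
    energy = mssd(sample, grid)
    best, best_e = sample, energy
    for it in range(n_iter):
        accept_p = 0.5 * math.exp(-5.0 * it / n_iter)
        cand = list(sample)
        cand[rng.randrange(n_points)] = rng.choice(grid)
        e = mssd(cand, grid)
        if e < energy or rng.random() < accept_p:
            sample, energy = cand, e
            if e < best_e:
                best, best_e = cand, e
    return best, best_e

grid = [(x, y) for x in range(10) for y in range(10)]
pattern, energy = spatial_sa(grid, 5)
```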
Optimization of protein samples for NMR using thermal shift assays.
Kozak, Sandra; Lercher, Lukas; Karanth, Megha N; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane
2016-04-01
Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor(®) provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies. PMID:26984476
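The core readout of a thermal shift assay — the melting temperature Tm, taken as the peak of dF/dT along the melt curve — can be sketched on synthetic data (the sigmoid below is simulated, not real assay output; a stabilizing buffer component would shift Tm upward):

```python
import numpy as np

def melting_temp(temps, fluorescence):
    # Estimate Tm as the temperature where dF/dT is maximal,
    # the usual readout of a ThermoFluor-style melt curve.
    dfdt = np.gradient(np.asarray(fluorescence, float),
                       np.asarray(temps, float))
    return temps[int(np.argmax(dfdt))]

# Simulated melt curve: sigmoidal unfolding centred at 55 degrees C.
temps = np.arange(25.0, 95.0, 0.5)
fluor = 1.0 / (1.0 + np.exp(-(temps - 55.0) / 2.0))
tm = melting_temp(temps, fluor)
```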
Hatjimihail, Aristides T.
2009-01-01
Background An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system and estimation of the risk caused by the measurement error. Methodology/Principal Findings Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure, and the state mean time matrices. The optimal sampling time intervals can be defined as those that minimize a QC-related cost measure while keeping the rr acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. Conclusions/Significance It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system so as to sustain an acceptable residual risk with the minimum QC-related cost. The optimization requires reliability analysis of the analytical system and risk analysis of the measurement error. PMID:19513124
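A heavily simplified sketch of the interval-selection idea, assuming a single failure mode with exponential failure times and perfect detection at each QC event (far cruder than the paper's state-based model, but it shows the trade-off: longer intervals cost less QC but carry more residual risk):

```python
import math

def residual_risk(interval_h, failure_rate_per_h):
    # Simplified residual-risk measure (an assumption, not the paper's
    # rr): the expected fraction of a QC interval spent in an
    # undetected failure state, with exponential failure times and
    # failures caught only at the next QC event.
    lam_t = failure_rate_per_h * interval_h
    return 1.0 - (1.0 - math.exp(-lam_t)) / lam_t

def optimal_interval(failure_rate_per_h, acceptable_rr,
                     candidates=range(1, 169)):
    # Longest (hence cheapest, fewest QC events) sampling interval
    # whose residual risk is still acceptable.
    feasible = [t for t in candidates
                if residual_risk(t, failure_rate_per_h) <= acceptable_rr]
    return max(feasible) if feasible else None

best_t = optimal_interval(0.001, 0.01)  # hours
```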
NASA Astrophysics Data System (ADS)
Lorber, K.; Czaja, A. D.
2014-12-01
Recent studies suggest that Mars contains more potentially life-supporting habitats (either in the present or past) than once thought. The key to finding life on Mars, whether extinct or extant, is to first understand which biomarkers and biosignatures are strictly biogenic in origin. Studying ancient habitats and fossil organisms of the early Earth can help to characterize potential Martian habitats and preserved life. This study, which focuses on the preservation of fossil microorganisms from the Archean Eon, aims to help define the science methods needed for a Mars sample return mission, of which the Mars 2020 rover mission is the first step. Here we report variations in the geochemical and morphological preservation of filamentous fossil microorganisms (microfossils) collected from the 2.5-billion-year-old Gamohaan Formation of the Kaapvaal Craton of South Africa. Samples of carbonaceous chert were collected from outcrop and drill core within ~1 km of each other. Specimens from each location were located within thin sections, and their biologic morphologies were confirmed using confocal laser scanning microscopy. Raman spectroscopic analyses documented the carbonaceous nature of the specimens and also revealed variations in the level of geochemical preservation of the kerogen that comprises the fossils. The geochemical preservation of kerogen is principally thought to be a function of thermal alteration, but the regional geology indicates that all of the specimens experienced the same thermal history. It is hypothesized that the fossils contained within the outcrop samples were altered by surface weathering, whereas the drill core samples, buried to a depth of ~250 m, were not. This differential weathering is unusual for cherts, which have extremely low porosities. Through morphological and geochemical characterization of the earliest known forms of fossilized life on Earth, a greater understanding of the origin and evolution of life on Earth is gained
'Optimal thermal range' in ectotherms: Defining criteria for tests of the temperature-size-rule.
Walczyńska, Aleksandra; Kiełbasa, Anna; Sobczyk, Mateusz
2016-08-01
Thermal performance curves for population growth rate r (a measure of fitness) were estimated over a wide range of temperature for three species: Coleps hirtus (Protista), Lecane inermis (Rotifera) and Aeolosoma hemprichi (Oligochaeta). We measured individual body size and examined if predictions for the temperature-size rule (TSR) were valid for different temperatures. All three organisms investigated follow the TSR, but only over a specific range between minimal and optimal temperatures, while maintenance at temperatures beyond this range showed the opposite pattern in these taxa. We consider minimal and optimal temperatures to be species-specific, and moreover delineate a physiological range outside of which an ectotherm is constrained against displaying size plasticity in response to temperature. This thermal range concept has important implications for general size-temperature studies. Furthermore, the concept of 'operating thermal conditions' may provide a new approach to (i) defining criteria required for investigating and interpreting temperature effects, and (ii) providing a novel interpretation for many cases in which species do not conform to the TSR. PMID:27503715
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The resulting biomass estimates can inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-meter transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses
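Using LAI as a biomass proxy amounts to regressing clip-harvest biomass on co-located LAI measurements. A sketch with hypothetical paired data (the numbers below are invented for illustration, not NEON measurements):

```python
import numpy as np

# Hypothetical paired measurements: LAI from a canopy analyzer and
# dry biomass (g/m^2) from co-located clip-harvest plots.
lai     = np.array([0.4, 0.7, 1.1, 1.5, 2.0, 2.6, 3.1, 3.8])
biomass = np.array([55., 90., 140., 185., 250., 320., 380., 470.])

# Ordinary least-squares line: biomass ~ slope * LAI + intercept.
slope, intercept = np.polyfit(lai, biomass, 1)
predicted = slope * lai + intercept
r2 = 1 - np.sum((biomass - predicted) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
```

A high r2 on calibration plots is what justifies substituting the faster LAI measurement for clip harvesting during operations.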
Defining an optimal surface chemistry for pluripotent stem cell culture in 2D and 3D
NASA Astrophysics Data System (ADS)
Zonca, Michael R., Jr.
Surface chemistry is critical for growing pluripotent stem cells in an undifferentiated state. There is great potential to engineer the surface chemistry at the nanoscale level to regulate stem cell adhesion. However, the challenge is to identify the optimal surface chemistry of the substrata for ES cell attachment and maintenance. Using a high-throughput polymerization and screening platform, a chemically defined, synthetic polymer-grafted coating that supports strong attachment and high expansion capacity of pluripotent stem cells has been discovered using mouse embryonic stem (ES) cells as a model system. This optimal substrate, N-[3-(Dimethylamino)propyl] methacrylamide (DMAPMA) grafted on a 2D synthetic poly(ether sulfone) (PES) membrane, sustains the self-renewal of ES cells (up to 7 passages). DMAPMA supports attachment of ES cells through integrin beta1 in an RGD-independent manner and is similar to another recently reported polymer surface. Next, DMAPMA was transferred to 3D by grafting onto synthetic, polymeric, PES fibrous matrices through both photo-induced and plasma-induced polymerization. These 3D modified fibers exhibited higher cell proliferation and greater expression of pluripotency markers of mouse ES cells than 2D PES membranes. Our results indicated that desirable surfaces in 2D can be scaled to 3D and that both surface chemistry and structural dimension strongly influence the growth and differentiation of pluripotent stem cells. Lastly, the feasibility of incorporating DMAPMA into a widely used natural polymer, alginate, has been tested. Novel adhesive alginate hydrogels have been successfully synthesized by either direct polymerization of DMAPMA and methacrylic acid blended with alginate, or photo-induced DMAPMA polymerization on alginate nanofibrous hydrogels. In particular, DMAPMA-coated alginate hydrogels support strong ES cell attachment, exhibiting a concentration dependency of DMAPMA. This research provides a
Optimized Sample Handling Strategy for Metabolic Profiling of Human Feces.
Gratton, Jasmine; Phetcharaburanin, Jutarop; Mullish, Benjamin H; Williams, Horace R T; Thursz, Mark; Nicholson, Jeremy K; Holmes, Elaine; Marchesi, Julian R; Li, Jia V
2016-05-01
Fecal metabolites are being increasingly studied to unravel the host-gut microbial metabolic interactions. However, there are currently no guidelines for fecal sample collection and storage based on a systematic evaluation of the effect of time, storage temperature, storage duration, and sampling strategy. Here we derive an optimized protocol for fecal sample handling with the aim of maximizing metabolic stability and minimizing sample degradation. Samples obtained from five healthy individuals were analyzed to assess topographical homogeneity of feces and to evaluate storage duration-, temperature-, and freeze-thaw cycle-induced metabolic changes in crude stool and fecal water using a (1)H NMR spectroscopy-based metabolic profiling approach. Interindividual variation was much greater than that attributable to storage conditions. Individual stool samples were found to be heterogeneous and spot sampling resulted in a high degree of metabolic variation. Crude fecal samples were remarkably unstable over time and exhibited distinct metabolic profiles at different storage temperatures. Microbial fermentation was the dominant driver in time-related changes observed in fecal samples stored at room temperature and this fermentative process was reduced when stored at 4 °C. Crude fecal samples frozen at -20 °C manifested elevated amino acids and nicotinate and depleted short chain fatty acids compared to crude fecal control samples. The relative concentrations of branched-chain and aromatic amino acids significantly increased in the freeze-thawed crude fecal samples, suggesting a release of microbial intracellular contents. The metabolic profiles of fecal water samples were more stable compared to crude samples. Our recommendation is that intact fecal samples should be collected, kept at 4 °C or on ice during transportation, and extracted ideally within 1 h of collection, or a maximum of 24 h. Fecal water samples should be extracted from a representative amount (∼15 g
SamACO: variable sampling ant colony optimization algorithm for continuous optimization.
Hu, Xiao-Min; Zhang, Jun; Chung, Henry Shu-Hung; Li, Yun; Liu, Ou
2010-12-01
An ant colony optimization (ACO) algorithm offers algorithmic techniques for optimization by simulating the foraging behavior of a group of ants to perform incremental solution constructions and to realize a pheromone laying-and-following mechanism. Although ACO was first designed for solving discrete (combinatorial) optimization problems, the ACO procedure is also applicable to continuous optimization. This paper presents a new way of extending ACO to solve continuous optimization problems by focusing on continuous variable sampling as a key to transforming ACO from discrete optimization to continuous optimization. The proposed SamACO algorithm consists of three major steps, i.e., the generation of candidate variable values for selection, the ants' solution construction, and the pheromone update process. The distinct characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient incremental solution construction method based on the sampled values. The performance of SamACO is tested using continuous numerical functions with unimodal and multimodal features. Compared with some state-of-the-art algorithms, including traditional ant-based algorithms and representative computational intelligence algorithms for continuous optimization, the performance of SamACO is competitive and promising. PMID:20371409
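SamACO itself is not reproduced here; the sketch below is a generic ant-inspired continuous sampler (Gaussian sampling around an archive of good solutions, with a shrinking sampling width standing in for pheromone reinforcement), illustrating only the variable-sampling idea:

```python
import random

def sphere(x):
    # Simple unimodal test function, minimized at the origin.
    return sum(v * v for v in x)

def continuous_ant_sampler(f, dim=5, n_ants=20, n_iter=400, seed=0):
    # Toy continuous ant-inspired optimizer (NOT SamACO): ants sample
    # new variable values as Gaussian perturbations of archived good
    # solutions; the sorted archive acts as a crude pheromone memory.
    rng = random.Random(seed)
    archive = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_ants)]
    archive.sort(key=f)
    for it in range(n_iter):
        sigma = 2.0 * (1 - it / n_iter) + 1e-3      # shrink sampling width
        guide = archive[rng.randrange(n_ants // 2)]  # bias toward better half
        ant = [g + rng.gauss(0, sigma) for g in guide]
        if f(ant) < f(archive[-1]):                  # replace current worst
            archive[-1] = ant
            archive.sort(key=f)
    return archive[0], f(archive[0])

best, val = continuous_ant_sampler(sphere)
```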
A firmware-defined digital direct-sampling NMR spectrometer for condensed matter physics
Pikulski, M.; Shiroka, T.; Ott, H.-R.; Mesot, J.
2014-09-15
We report on the design and implementation of a new digital, broad-band nuclear magnetic resonance (NMR) spectrometer suitable for probing condensed matter. The spectrometer uses direct sampling in both transmission and reception. It relies on a single, commercially-available signal processing device with a user-accessible field-programmable gate array (FPGA). Its functions are defined exclusively by the FPGA firmware and the application software. Besides allowing for fast replication, flexibility, and extensibility, our software-based solution preserves the option to reuse the components for other projects. The device operates up to 400 MHz without, and up to 800 MHz with undersampling, respectively. Digital down-conversion with ±10 MHz passband is provided on the receiver side. The system supports high repetition rates and has virtually no intrinsic dead time. We describe briefly how the spectrometer integrates into the experimental setup and present test data which demonstrates that its performance is competitive with that of conventional designs.
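The receiver-side digital down-conversion can be sketched in a few lines: mix the directly sampled real signal with a complex local oscillator at the carrier, then low-pass filter to keep the passband around DC. The moving-average filter below is a crude stand-in for a proper FIR/CIC decimator, and the frequencies are illustrative:

```python
import numpy as np

fs = 400e6            # direct-sampling rate (Hz), per the spectrometer's limit
f_nmr = 75e6          # illustrative NMR carrier frequency (Hz)
n = 4096
t = np.arange(n) / fs

# Directly sampled real-valued NMR-like signal.
signal = np.cos(2 * np.pi * f_nmr * t)

# Digital down-conversion: complex mix to baseband, then low-pass.
lo = np.exp(-2j * np.pi * f_nmr * t)
mixed = signal * lo                       # 0.5 at DC + image at 2*f_nmr
kernel = np.ones(64) / 64                 # moving average ~ crude low-pass
baseband = np.convolve(mixed, kernel, mode="same")
```

After filtering, the baseband samples sit near 0.5 + 0j: the DC term survives while the double-frequency image is rejected.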
Automatic optimization of metrology sampling scheme for advanced process control
NASA Astrophysics Data System (ADS)
Chue, Chuei-Fu; Huang, Chun-Yen; Shih, Chiang-Lin
2011-03-01
In order to ensure long-term profitability, driving down operational costs and improving the yield of a DRAM manufacturing process are continuous efforts. This includes optimal utilization of the capital equipment. The costs of metrology needed to ensure yield contribute to the overall costs. As device dimensions continue to shrink, the costs of metrology increase because of the associated tightening of on-product specifications, which requires more metrology effort. The cost-of-ownership reduction is tackled by increasing the throughput and availability of metrology systems. However, this is not the only way to reduce metrology effort. In this paper, we discuss how the costs of metrology can be improved by optimizing the recipes in terms of the sampling layout, thereby eliminating metrology that does not contribute to yield. We discuss results of sampling scheme optimization for on-product overlay control of two DRAM manufacturing processes at Nanya Technology Corporation. For a 6x DRAM production process, we show that the reduction of metrology waste can be as high as 27% and overlay can be improved by 36%, compared with a baseline sampling scheme. For a 4x DRAM process, which has tighter overlay specs, a gain of ca. 0.5 nm in on-product overlay could be achieved without increasing the metrology effort relative to the original sampling plan.
Defining optimal cutoff scores for cognitive impairment using MDS Task Force PD-MCI criteria
Goldman, Jennifer G.; Holden, Samantha; Bernard, Bryan; Ouyang, Bichun; Goetz, Christopher G.; Stebbins, Glenn T.
2014-01-01
Background The recently proposed Movement Disorder Society (MDS) Task Force diagnostic criteria for mild cognitive impairment in Parkinson’s disease (PD-MCI) represent a first step towards a uniform definition of PD-MCI across multiple clinical and research settings. Several questions regarding specific criteria, however, remain unanswered including optimal cutoff scores by which to define impairment on neuropsychological tests. Methods Seventy-six non-demented PD patients underwent comprehensive neuropsychological assessment and were classified as PD-MCI or PD with normal cognition (PD-NC). Concordance of PD-MCI diagnosis by MDS Task Force Level II criteria (comprehensive assessment), using a range of standard deviation (SD) cutoff scores, was compared to our consensus diagnosis of PD-MCI or PD-NC. Sensitivity, specificity, positive and negative predictive values were examined for each cutoff score. PD-MCI subtype classification and distribution of cognitive domains impaired were evaluated. Results Concordance for PD-MCI diagnosis was greatest for defining impairment on neuropsychological tests using a 2 SD cutoff score below appropriate norms. This cutoff also provided the best discriminatory properties for separating PD-MCI from PD-NC, compared to other cutoff scores. With the MDS PD-MCI criteria, multiple domain impairment was more frequent than single domain impairment, with predominant executive function, memory, and visuospatial function deficits. Conclusions Application of the MDS Task Force PD-MCI Level II diagnostic criteria demonstrates good sensitivity and specificity at a 2 SD cutoff score. The predominance of multiple domain impairment in PD-MCI with the Level II criteria suggests not only influences of testing abnormality requirements, but also the widespread nature of cognitive deficits within PD-MCI. PMID:24123267
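The cutoff-score evaluation reduces to computing sensitivity and specificity of an impairment rule against the consensus diagnosis. A sketch with an invented mini-cohort (the rule and the z-scores below are illustrative simplifications, not the MDS Level II criteria or study data):

```python
def flags_mci(z_scores, cutoff_sd, tests_required=2):
    # Simplified stand-in for a Level II-style rule: flag impairment
    # when at least `tests_required` test z-scores fall below -cutoff_sd.
    return sum(z < -cutoff_sd for z in z_scores) >= tests_required

def sens_spec(patients, consensus, cutoff_sd):
    tp = fp = tn = fn = 0
    for z_scores, is_mci in zip(patients, consensus):
        flagged = flags_mci(z_scores, cutoff_sd)
        if flagged and is_mci:
            tp += 1
        elif flagged:
            fp += 1
        elif is_mci:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical mini-cohort: per-patient test z-scores + consensus labels.
patients = [[-2.5, -2.2, -0.5], [-3.0, -2.4, -1.0],
            [-1.2, -1.4, -0.3], [-0.5, 0.2, 0.1],
            [-2.1, -1.0, 0.0], [-1.8, -1.6, -0.2]]
consensus = [True, True, False, False, False, False]

sens2, spec2 = sens_spec(patients, consensus, 2.0)    # 2 SD cutoff
sens15, spec15 = sens_spec(patients, consensus, 1.5)  # looser cutoff
```

In this toy cohort the looser 1.5 SD cutoff flags a cognitively normal patient, lowering specificity relative to the 2 SD cutoff, which mirrors the direction of the study's finding.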
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
Multi-resolution imaging with an optimized number and distribution of sampling points.
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo
2014-05-01
We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis. PMID:24921717
Classifier-Guided Sampling for Complex Energy System Optimization
Backlund, Peter B.; Eddy, John P.
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
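A toy version of the CGS idea, with a naive Bayes classifier standing in for the Bayesian network: candidates produced by mutation only reach the (notionally expensive) objective if the classifier's posterior says they are promising. All names and parameters below are illustrative assumptions:

```python
import math, random

def onemax(bits):
    # Stand-in for an expensive objective (maximize the number of 1s).
    return sum(bits)

def train_nb(designs, labels):
    # Bernoulli naive Bayes with Laplace smoothing:
    # p1[c][i] = P(bit i == 1 | class c), where class 1 = "promising".
    n_bits = len(designs[0])
    ones = {c: [1.0] * n_bits for c in (0, 1)}
    counts = {0: 2.0, 1: 2.0}
    for d, l in zip(designs, labels):
        counts[l] += 1
        for i, b in enumerate(d):
            ones[l][i] += b
    return {c: [o / counts[c] for o in ones[c]] for c in (0, 1)}, counts

def posterior_promising(p1, counts, d):
    logp = {c: math.log(counts[c]) for c in (0, 1)}
    for c in (0, 1):
        for i, b in enumerate(d):
            logp[c] += math.log(p1[c][i] if b else 1.0 - p1[c][i])
    m = max(logp.values())
    return math.exp(logp[1] - m) / sum(math.exp(v - m) for v in logp.values())

def cgs_toy(n_bits=20, pop=20, gens=15, seed=3):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    designs, scores = [], []
    for _ in range(gens):
        for d in population:                     # the expensive evaluations
            designs.append(d)
            scores.append(onemax(d))
        median = sorted(scores)[len(scores) // 2]
        labels = [1 if s >= median else 0 for s in scores]
        p1, counts = train_nb(designs, labels)
        parents = [d for d, s in zip(designs, scores) if s >= median]
        population, attempts = [], 0
        while len(population) < pop:
            child = list(rng.choice(parents))
            child[rng.randrange(n_bits)] ^= 1    # one-bit mutation
            attempts += 1
            # Only classifier-approved candidates get evaluated
            # (with a cap so generation never stalls).
            if posterior_promising(p1, counts, child) > 0.5 or attempts > 20 * pop:
                population.append(child)
    return max(scores)

best = cgs_toy()
```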
Efficient infill sampling for unconstrained robust optimization problems
NASA Astrophysics Data System (ADS)
Rehman, Samee Ur; Langelaar, Matthijs
2016-08-01
A novel infill sampling criterion is proposed for efficient estimation of the global robust optimum of expensive computer simulation based problems. The algorithm is especially geared towards addressing problems that are affected by uncertainties in design variables and problem parameters. The method is based on constructing metamodels using Kriging and adaptively sampling the response surface via a principle of expected improvement adapted for robust optimization. Several numerical examples and an engineering case study are used to demonstrate the ability of the algorithm to estimate the global robust optimum using a limited number of expensive function evaluations.
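The infill criterion rests on expected improvement; shown below is the standard deterministic EI formula for minimization (the paper's robust adaptation is not reproduced), applied to surrogate predictions at candidate points. The candidate (mu, sigma) pairs are invented for illustration:

```python
import math

def expected_improvement(mu, sigma, f_best):
    # Standard EI for minimization at a candidate whose surrogate
    # (e.g. Kriging) prediction is N(mu, sigma^2), given the best
    # objective value observed so far.
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # std normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # std normal cdf
    return (f_best - mu) * cdf + sigma * pdf

# Infill choice: evaluate the expensive simulation next where EI is largest.
candidates = [(0.2, 0.05), (0.9, 0.40), (1.1, 0.01)]  # (mu, sigma) pairs
best_idx = max(range(len(candidates)),
               key=lambda i: expected_improvement(*candidates[i], f_best=1.0))
```

EI balances exploitation (low predicted mean) against exploration (high predictive uncertainty), which is why it suits expensive-simulation settings with few affordable evaluations.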
Optimizing passive acoustic sampling of bats in forests.
Froidevaux, Jérémy S P; Zellweger, Florian; Bollmann, Kurt; Obrist, Martin K
2014-12-01
Passive acoustic methods are increasingly used in biodiversity research and monitoring programs because they are cost-effective and permit the collection of large datasets. However, the accuracy of the results depends on the bioacoustic characteristics of the focal taxa and their habitat use. In particular, this applies to bats which exhibit distinct activity patterns in three-dimensionally structured habitats such as forests. We assessed the performance of 21 acoustic sampling schemes with three temporal sampling patterns and seven sampling designs. Acoustic sampling was performed in 32 forest plots, each containing three microhabitats: forest ground, canopy, and forest gap. We compared bat activity, species richness, and sampling effort using species accumulation curves fitted with the Clench equation. In addition, we estimated the sampling costs to undertake the best sampling schemes. We recorded a total of 145,433 echolocation call sequences of 16 bat species. Our results indicated that to generate the best outcome, it was necessary to sample all three microhabitats of a given forest location simultaneously throughout the entire night. Sampling only the forest gaps and the forest ground simultaneously was the second best choice and proved to be a viable alternative when the number of available detectors is limited. When assessing bat species richness at the 1-km² scale, the implementation of these sampling schemes at three to four forest locations yielded the highest labor cost-benefit ratios but increased equipment costs. Our study illustrates that multiple passive acoustic sampling schemes require testing based on the target taxa and habitat complexity and should be performed with reference to cost-benefit ratios. Choosing a standardized and replicated sampling scheme is particularly important to optimize the level of precision in inventories, especially when rare or elusive species are expected. PMID:25558363
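The species accumulation curves were fitted with the Clench equation, S(n) = a·n/(1 + b·n), whose asymptote a/b estimates total species richness. A minimal sketch, using a linearised least-squares fit (1/S regressed on 1/n) rather than the study's actual fitting procedure:

```python
def clench(n, a, b):
    """Clench species-accumulation model: expected species after n samples."""
    return a * n / (1 + b * n)

def fit_clench(ns, species):
    """Fit via the linearisation 1/S = (1/a)(1/n) + b/a: an ordinary
    least-squares line through (1/n, 1/S) recovers a and b."""
    xs = [1 / n for n in ns]
    ys = [1 / s for s in species]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    a = 1 / slope
    b = intercept * a
    return a, b

# Synthetic accumulation data generated from a=10, b=0.5 (asymptote a/b = 20 species).
ns = [1, 2, 4, 8, 16]
obs = [clench(n, 10, 0.5) for n in ns]
a, b = fit_clench(ns, obs)
print(round(a / b, 1))  # estimated richness asymptote → 20.0
```

On noise-free data the linearised fit recovers the generating parameters exactly; with field data a nonlinear fit is usually preferred.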
Defining Adult Experiences: Perspectives of a Diverse Sample of Young Adults
Lowe, Sarah R.; Dillon, Colleen O.; Rhodes, Jean E.; Zwiebach, Liza
2013-01-01
This study explored the roles and psychological experiences identified as defining adult moments using mixed methods with a racially, ethnically, and socioeconomically diverse sample of young adults both enrolled and not enrolled in college (N = 726; ages 18-35). First, we evaluated results from a single survey item that asked participants to rate how adult they feel. Consistent with previous research, the majority of participants (56.9%) reported feeling “somewhat like an adult,” and older participants had significantly higher subjective adulthood, controlling for other demographic variables. Next, we analyzed responses from an open-ended question asking participants to describe instances in which they felt like an adult. Responses covered both traditional roles (e.g., marriage, childbearing; 36.1%) and nontraditional social roles and experiences (e.g., moving out of parent’s home, cohabitation; 55.6%). Although we found no differences by age and college status in the likelihood of citing a traditional or nontraditional role, participants who had achieved more traditional roles were more likely to cite them in their responses. In addition, responses were coded for psychological experiences, including responsibility for self (19.0%), responsibility for others (15.3%), self-regulation (31.1%), and reflected appraisals (5.1%). Older participants were significantly more likely to include self-regulation and reflected appraisals, whereas younger participants were more likely to include responsibility for self. College students were more likely than noncollege students to include self-regulation and reflected appraisals. Implications for research and practice are discussed. PMID:23554545
Simultaneous beam sampling and aperture shape optimization for SPORT
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu
2015-02-15
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds the new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques together, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and
ERIC Educational Resources Information Center
Brown Univ., Providence, RI. Thomas J. Watson, Jr. Inst. for International Studies.
Clearly, the United States cannot respond to every crisis, but what is meant precisely by the phrase "American interests"? How is the U.S. national interest defined and by whom? Does its definition affect the decision of how to respond to a crisis? This lesson deals with these complex and intertwined questions. By defining the national interest…
Optimized robust plasma sampling for glomerular filtration rate studies.
Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L
2012-09-01
In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how few samples will allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement. PMID:22825040
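Under the biexponential clearance model used above, the area under the curve, and hence the clearance, has a closed form. A sketch with invented fit parameters (real values come from nonlinear regression on the timed plasma samples):

```python
import math

def biexp(t, A, alpha, B, beta):
    """Biexponential plasma clearance curve C(t) = A*exp(-alpha*t) + B*exp(-beta*t)."""
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

def gfr_from_biexp(dose, A, alpha, B, beta):
    """Plasma clearance (GFR) = dose / AUC, where the area under the
    biexponential curve from 0 to infinity is A/alpha + B/beta."""
    return dose / (A / alpha + B / beta)

# Illustrative parameters only: AUC = 2.0/0.1 + 0.5/0.01 = 70, so
# clearance = 40/70 in whatever dose-per-concentration units apply.
print(round(gfr_from_biexp(40.0, 2.0, 0.1, 0.5, 0.01), 3))  # → 0.571
```

The sensitivity of this AUC to dropping or mistiming individual samples is what the paper's Monte Carlo audit quantifies.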
Test samples for optimizing STORM super-resolution microscopy.
Metcalf, Daniel J; Edwards, Rebecca; Kumarswami, Neelam; Knight, Alex E
2013-01-01
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of between 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition and density related problems resulting in the 'mislocalization' phenomenon. PMID:24056752
Determining the Bayesian optimal sampling strategy in a hierarchical system.
Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre
2010-09-01
Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
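The core preposterior calculation (how much a candidate test campaign is expected to shrink the reliability variance) can be illustrated with a Beta prior on a single component's reliability, a deliberate simplification of the system-of-systems model described above:

```python
import math

def beta_var(a, b):
    """Variance of a Beta(a, b) reliability distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def expected_posterior_var(a, b, n):
    """Preposterior analysis: expected posterior variance of a Beta(a, b)
    reliability after n further pass/fail tests, averaging over the
    Beta-Binomial predictive distribution of the number of passes k."""
    total = 0.0
    for k in range(n + 1):
        log_pmf = (math.log(math.comb(n, k))
                   + log_beta(a + k, b + n - k) - log_beta(a, b))
        total += math.exp(log_pmf) * beta_var(a + k, b + n - k)
    return total

# Spending the same 5 tests on the least-known component (widest prior)
# buys the larger expected variance reduction.
well_known = expected_posterior_var(50, 5, 5)
poorly_known = expected_posterior_var(2, 1, 5)
print(beta_var(2, 1) - poorly_known > beta_var(50, 5) - well_known)  # → True
```

Ranking candidate campaigns by expected variance reduction per unit cost is the single-component analogue of minimizing the Bayes risk over sampling strategies.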
Rate-distortion optimization for compressive video sampling
NASA Astrophysics Data System (ADS)
Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee
2014-05-01
The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
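With block PSNR modelled as a quadratic in quantization bit-depth, setting the first derivative to zero gives the stationary bit-depth in closed form. A sketch with invented (not fitted) coefficients:

```python
def optimal_bit_depth(c2, c1, c0):
    """If block PSNR is modelled as PSNR(b) = c2*b**2 + c1*b + c0 with
    c2 < 0 (concave), then dPSNR/db = 2*c2*b + c1 = 0 gives the
    stationary, PSNR-maximizing bit-depth b* = -c1 / (2*c2)."""
    assert c2 < 0, "model must be concave for a maximum"
    return -c1 / (2 * c2)

# Illustrative coefficients chosen so the PSNR model peaks at 8 bits.
print(optimal_bit_depth(-0.25, 4.0, 20.0))  # → 8.0
```

In the paper the coefficients are fitted per block from training data and updated online, so the optimal bit-depth adapts to block sparsity.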
A General Investigation of Optimized Atmospheric Sample Duration
Eslinger, Paul W.; Miley, Harry S.
2012-11-28
The International Monitoring System (IMS) consists of up to 80 aerosol and xenon monitoring systems spaced around the world that have collection systems sensitive enough to detect nuclear releases from underground nuclear tests at great distances (CTBT 1996; CTBTO 2011). Although a few of the IMS radionuclide stations are closer together than 1,000 km (such as the stations in Kuwait and Iran), many of them are 2,000 km or more apart. In the absence of a scientific basis for optimizing the duration of atmospheric sampling, scientists historically used integration times from 24 hours to 14 days for radionuclides (Thomas et al. 1977). This was entirely adequate in the past because the sources of signals were far away and large, meaning that they were smeared over many days by the time they had travelled 10,000 km. The Fukushima event pointed out the unacceptable delay time (72 hours) between the start of sample acquisition and final data being shipped. A scientific basis for selecting a sample duration time is needed. This report considers plume migration of a nondecaying tracer using archived atmospheric data for 2011 in the HYSPLIT (Draxler and Hess 1998; HYSPLIT 2011) transport model. We present two related results: the temporal duration of the majority of the plume as a function of distance and the behavior of the maximum plume concentration as a function of sample collection duration and distance. The modeled plume behavior can then be combined with external information about sampler design to optimize sample durations in a sampling network.
Fixed-sample optimization using a probability density function
Barnett, R.N.; Sun, Zhiwei; Lester, W.A. Jr.
1997-12-31
We consider the problem of optimizing parameters in a trial function that is to be used in fixed-node diffusion Monte Carlo calculations. We employ a trial function with a Boys-Handy correlation function and a one-particle basis set of high quality. By employing sample points picked from a positive definite distribution, parameters that determine the nodes of the trial function can be varied without introducing singularities into the optimization. For CH as a test system, we find that a trial function of high quality is obtained and that this trial function yields an improved fixed-node energy. This result sheds light on the important question of how to improve the nodal structure and, thereby, the accuracy of diffusion Monte Carlo.
Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Thompson, David R.; Hsiang, Kian
2013-01-01
This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function from each potential sense. These candidate sensings were then crosschecked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
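The combination of modelled sensing gain with operational cost and feasibility can be caricatured as a greedy budgeted selection. The actions, gains, and costs below are invented placeholders for the model-based information gains described above:

```python
def plan_observations(candidates, budget):
    """Greedy operations plan: repeatedly take the feasible sensing action
    with the best (information gain - cost) score until the budget runs
    out or no action has positive net value.
    `candidates` maps action name -> (gain, cost)."""
    plan, spent = [], 0.0
    remaining = dict(candidates)
    while remaining:
        name, (gain, cost) = max(remaining.items(),
                                 key=lambda it: it[1][0] - it[1][1])
        remaining.pop(name)
        if gain - cost <= 0 or spent + cost > budget:
            continue
        plan.append(name)
        spent += cost
    return plan

# Toy candidate sensings: (modelled gain, operational cost).
actions = {"site_a": (5.0, 2.0), "site_b": (1.0, 3.0), "site_c": (4.0, 1.0)}
print(plan_observations(actions, budget=3.0))  # → ['site_a', 'site_c']
```

A real planner would also recompute gains after each selection, since information gains are not additive, and check vehicle-class feasibility constraints.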
Gossner, Martin M.; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W.; Zytynska, Sharon E.
2016-01-01
There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions we suggest ethylene glycol as a suitable sampling solution when genetic analysis
NASA Astrophysics Data System (ADS)
Bostamam, Anas Muhamad; Sanada, Yukitoshi; Minami, Hideki
In this paper, a new fractional sample rate conversion (SRC) scheme based on a direct insertion/cancellation scheme is proposed. This scheme is suitable for signals that are sampled at a high sample rate and converted to a lower sample rate. The direct insertion/cancellation scheme may achieve lower complexity and lower power consumption compared to other SRC techniques. However, the direct insertion/cancellation technique suffers from large aliasing and distortion. The aliasing from an adjacent channel interferes with the desired signal and degrades the performance. Therefore, a modified direct insertion/cancellation scheme is proposed in order to realize high-performance resampling.
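Plain direct cancellation, the baseline that the modified scheme improves on, simply drops input samples at a fractional stride so that the desired fraction of them survives. A minimal sketch (the paper's modified scheme additionally suppresses the aliasing this naive version introduces):

```python
def direct_cancellation(samples, in_rate, out_rate):
    """Fractional sample-rate down-conversion by direct cancellation:
    accumulate the ratio out_rate/in_rate per input sample and keep a
    sample each time the accumulator crosses 1, so out_rate/in_rate of
    the input samples survive. No anti-aliasing is performed."""
    assert 0 < out_rate <= in_rate
    out, acc = [], 0.0
    step = out_rate / in_rate
    for s in samples:
        acc += step
        if acc >= 1.0:      # this sample survives the cancellation
            out.append(s)
            acc -= 1.0
    return out

# Converting 8 samples from rate 8 to rate 6 keeps 6 of them.
print(direct_cancellation(list(range(8)), 8, 6))  # → [1, 2, 3, 5, 6, 7]
```

Because samples are dropped without filtering, spectral images fold back into the band, which is exactly the aliasing the abstract says degrades performance.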
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
A life test sampling plan is a technique that consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time and reduce testing cost, but can also positively affect the image of the product, and thus attract more consumers to buy it. This paper develops the frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
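The cost-minimising choice of failure threshold can be sketched with a simple binomial pass/fail model and placeholder costs; the paper itself works with exponential and Weibull lifetimes, not this simplification:

```python
import math

def binom_cdf(n, c, p):
    """P(at most c failures among n tested units, per-unit failure prob p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def expected_cost(n, c, cost_test=1.0, p_good=0.05, p_bad=0.3,
                  prior_good=0.8, cost_reject_good=50.0, cost_accept_bad=100.0):
    """Expected cost of testing n units and accepting the batch iff at most
    c fail. All probabilities and costs are illustrative placeholders for
    the sampling/inspection/warranty/rejection costs in the paper."""
    accept_good = binom_cdf(n, c, p_good)
    accept_bad = binom_cdf(n, c, p_bad)
    return (n * cost_test
            + prior_good * (1 - accept_good) * cost_reject_good
            + (1 - prior_good) * accept_bad * cost_accept_bad)

# The cost-minimising failure threshold for a 10-unit sample:
best_c = min(range(11), key=lambda c: expected_cost(10, c))
print(best_c)  # → 1
```

A too-small threshold rejects good batches; a too-large one accepts bad ones; the optimum balances the two cost terms.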
Optimization of Evans blue quantitation in limited rat tissue samples
NASA Astrophysics Data System (ADS)
Wang, Hwai-Lee; Lai, Ted Weita
2014-10-01
Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration based on a standard 96-well plate reader. First, ethanol ensured a consistent optic path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements as a result of biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful in the detection of EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting.
Optimal CCD readout by digital correlated double sampling
NASA Astrophysics Data System (ADS)
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
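The simplest digital filter for DCDS is a uniform (boxcar) average of the oversampled reset and video levels. Real systems apply optimized sample weights (the subject of the analysis above), but the flat filter shows the principle:

```python
def dcds_estimate(reference_samples, signal_samples):
    """Digital correlated double sampling with a flat (boxcar) filter:
    the pixel value is the mean of the video-level samples minus the
    mean of the reset-level samples, cancelling the reset offset that
    is common to both windows. Uniform weighting is shown for
    illustration; optimized DCDS weights the samples."""
    ref = sum(reference_samples) / len(reference_samples)
    sig = sum(signal_samples) / len(signal_samples)
    return sig - ref

# A reset level near 100 common to both windows cancels, leaving the
# ~50-unit video step; averaging suppresses the uncorrelated noise.
print(dcds_estimate([100.2, 99.8, 100.0], [150.2, 149.8, 150.0]))
```

The paper's contribution is a discrete-time noise model that tells you how to choose the sampling frequency, ADC resolution, and filter weights so the output SNR is maximized.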
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize spatial soil sampling scheme taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
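The MMSD criterion and the spatial simulated annealing search can be sketched as follows, on a toy grid rather than the MSANOS implementation:

```python
import math
import random

def mmsd(design, candidates):
    """Mean of the shortest distances (MMSD): average, over every candidate
    grid point, of its distance to the nearest sampling location."""
    return sum(min((c[0] - s[0]) ** 2 + (c[1] - s[1]) ** 2 for s in design) ** 0.5
               for c in candidates) / len(candidates)

def anneal(design, candidates, iters=500, temp=1.0, cool=0.99, seed=0):
    """Spatial simulated annealing sketch: move one sampling point at a
    time, accepting worse designs with Boltzmann probability while the
    temperature cools, and remembering the best design seen."""
    rng = random.Random(seed)
    cur = list(design)
    best = list(cur)
    for _ in range(iters):
        trial = list(cur)
        trial[rng.randrange(len(trial))] = rng.choice(candidates)
        delta = mmsd(trial, candidates) - mmsd(cur, candidates)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            cur = trial
            if mmsd(cur, candidates) < mmsd(best, candidates):
                best = list(cur)
        temp *= cool
    return best

grid = [(x, y) for x in range(5) for y in range(5)]
init = [(0, 0), (0, 1)]            # two clustered points in one corner
optimized = anneal(init, grid)
print(mmsd(optimized, grid) <= mmsd(init, grid))  # → True
```

The MWMSD variant simply weights each candidate point's distance by the local ECa gradient, and MAOKV swaps the distance criterion for a kriging-variance one.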
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples' sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. PMID:26775132
Clever particle filters, sequential importance sampling and the optimal proposal
NASA Astrophysics Data System (ADS)
Snyder, Chris
2014-05-01
Particle filters rely on sequential importance sampling, and it is well known that their performance can depend strongly on the choice of the proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, despite the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
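The degeneracy mechanism is easy to reproduce numerically. In the sketch below (an illustration of the i.i.d. argument, not any specific filter), each particle's log-weight is a sum of `dim` i.i.d. terms, mimicking independent degrees of freedom; the maximum normalized weight climbs toward one as `dim` grows.

```python
import math
import random

def mean_max_weight(n_particles, dim, trials, rng):
    """Average (over trials) of the largest normalized importance weight
    when each particle's log-weight is a sum of `dim` i.i.d. N(0,1) terms."""
    acc = 0.0
    for _ in range(trials):
        logw = [sum(rng.gauss(0, 1) for _ in range(dim))
                for _ in range(n_particles)]
        m = max(logw)                       # subtract the max for stability
        w = [math.exp(l - m) for l in logw]
        acc += max(w) / sum(w)
    return acc / trials

rng = random.Random(7)
low_dim = mean_max_weight(100, 1, 20, rng)    # small system: weights spread out
high_dim = mean_max_weight(100, 50, 20, rng)  # large system: one weight dominates
```

With 100 particles, the dominant-weight effect is already severe at 50 i.i.d. degrees of freedom, which is the collapse the abstract attributes to any proposal, clever or not.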
Optimal sampling and sample preparation for NIR-based prediction of field scale soil properties
NASA Astrophysics Data System (ADS)
Knadel, Maria; Peng, Yi; Schelde, Kirsten; Thomsen, Anton; Deng, Fan; Humlekrog Greve, Mogens
2013-04-01
The representation of local soil variability with acceptable accuracy and precision depends on the spatial sampling strategy and can vary with the soil property. Therefore, soil mapping can be expensive when conventional soil analyses are involved. Visible near-infrared spectroscopy (vis-NIR) is considered a cost-effective method due to labour savings and relative accuracy. However, savings may be offset by the costs associated with the number of samples and sample preparation. The objective of this study was to find the optimal way to predict field-scale total organic carbon (TOC) and texture. To optimize the vis-NIR calibrations, the effects of sample preparation and the number of samples on the predictive ability of models with regard to the spatial distribution of TOC and texture were investigated. The conditioned Latin hypercube sampling (cLHS) method was used to select 125 sampling locations from an agricultural field in Denmark, using electromagnetic induction (EMI) and digital elevation model (DEM) data. The soil samples were scanned in three states (field moist, air dried, and sieved to 2 mm) with a vis-NIR spectrophotometer (LabSpec 5100, ASD Inc., USA). The Kennard-Stone algorithm was applied to select 50 representative soil spectra for the laboratory analysis of TOC and texture. In order to investigate how to minimize the costs of reference analysis, additional smaller subsets (15, 30 and 40 samples) were selected for calibration. The performance of field calibrations using spectra of soils in the three states, as well as using different numbers of calibration samples, was compared. Final models were then used to predict the remaining 75 samples. Maps of predicted soil properties were generated with Empirical Bayesian Kriging. The results demonstrated that, regardless of the state of the scanned soil, the regression models and the final prediction maps were similar for most of the soil properties. Nevertheless, as expected, models based on spectra from field
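The Kennard-Stone step is compact enough to sketch. Below is a generic pure-Python version over points in feature (spectral) space, not the authors' code: it seeds the selection with the two most distant points, then repeatedly adds the point farthest from everything already selected.

```python
import math

def kennard_stone(points, k):
    """Select k representative points by the Kennard-Stone rule:
    start from the two mutually most distant points, then iteratively
    add the point whose minimal distance to the selected set is maximal."""
    n = len(points)
    d = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda ij: d[ij[0]][ij[1]])
    sel = [i0, j0]
    while len(sel) < k:
        rest = [i for i in range(n) if i not in sel]
        nxt = max(rest, key=lambda i: min(d[i][s] for s in sel))
        sel.append(nxt)
    return sel
```

Because every new pick is the "worst covered" candidate, the chosen subset spans the spectral space, which is why the 50 reference samples can calibrate models applied to the remaining 75.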
Vandermeulen, Eva; De Sadeleer, Carlos; Piepsz, Amy; Ham, Hamphrey R; Dobbeleir, André A; Vermeire, Simon T; Van Hoek, Ingrid M; Daminet, Sylvie; Slegers, Guido; Peremans, Kathelijne Y
2010-08-01
Estimation of the glomerular filtration rate (GFR) is a useful tool in the evaluation of kidney function in feline medicine. GFR can be determined by measuring the rate of tracer disappearance from the blood, and although these measurements are generally performed by multi-sampling techniques, simplified methods are more convenient in clinical practice. The optimal times for a simplified sampling strategy with two blood samples (2BS) for GFR measurement in cats using plasma (51)chromium ethylene diamine tetra-acetic acid ((51)Cr-EDTA) clearance were investigated. After intravenous administration of (51)Cr-EDTA, seven blood samples were obtained in 46 cats (19 euthyroid and 27 hyperthyroid cats, none with previously diagnosed chronic kidney disease (CKD)). The plasma clearance was then calculated from the seven point blood kinetics (7BS) and used for comparison to define the optimal sampling strategy by correlating different pairs of time points to the reference method. Mean GFR estimation for the reference method was 3.7+/-2.5 ml/min/kg (mean+/-standard deviation (SD)). Several pairs of sampling times were highly correlated with this reference method (r(2) > or = 0.980), with the best results when the first sample was taken 30 min after tracer injection and the second sample between 198 and 222 min after injection; or with the first sample at 36 min and the second at 234 or 240 min (r(2) for both combinations=0.984). Because of the similarity of GFR values obtained with the 2BS method in comparison to the values obtained with the 7BS reference method, the simplified method may offer an alternative for GFR estimation. Although a wide range of GFR values was found in the included group of cats, the applicability should be confirmed in cats suspected of renal disease and with confirmed CKD. Furthermore, although no indications of age-related effect were found in this study, a possible influence of age should be included in future studies. PMID:20452793
Horowitz, A.J.; Lum, K.R.; Garbarino, J.R.; Hall, G.E.M.; Lemieux, C.; Demas, C.R.
1996-01-01
Field and laboratory experiments indicate that a number of factors associated with filtration other than just pore size (e.g., diameter, manufacturer, volume of sample processed, amount of suspended sediment in the sample) can produce significant variations in the 'dissolved' concentrations of such elements as Fe, Al, Cu, Zn, Pb, Co, and Ni. The bulk of these variations result from the inclusion/exclusion of colloidally associated trace elements in the filtrate, although dilution and sorption/desorption from filters also may be factors. Thus, dissolved trace element concentrations quantitated by analyzing filtrates generated by processing whole water through similar pore-sized filters may not be equal or comparable. As such, simple filtration of unspecified volumes of natural water through unspecified 0.45-µm membrane filters may no longer represent an acceptable operational definition for a number of dissolved chemical constituents.
Optimization of the development process for air sampling filter standards
NASA Astrophysics Data System (ADS)
Mena, RaJah Marie
Air monitoring is an important analysis technique in health physics. However, creating standards which can be used to calibrate detectors used in the analysis of the filters deployed for air monitoring can be challenging. The activity of a standard should be well understood; this includes understanding how the location within the filter affects the final surface emission rate. The purpose of this research is to determine the parameters which most affect uncertainty in an air filter standard and to optimize these parameters such that calibrations made with them most accurately reflect the true activity contained inside. A deposition pattern was chosen from the literature to provide the best approximation of uniform deposition of material across the filter. Sample sets were created varying the type of radionuclide, the amount of activity (high activity at 6.4-306 Bq/filter and low activity at 0.05-6.2 Bq/filter), and the filter type. For samples analyzed for gamma or beta contaminants, the standards created with this procedure were deemed sufficient. Additional work is needed to reduce errors and to ensure this is a viable procedure, especially for alpha contaminants.
Homosexual, gay, and lesbian: defining the words and sampling the populations.
Donovan, J M
1992-01-01
The lack of both specificity and consensus about definitions for homosexual, homosexuality, gay, and lesbian is first shown to confound comparative research and cumulative understanding, because criteria for inclusion within the subject populations are often not consistent. The Description section examines sociolinguistic variables which determine patterns of preferred choice of terminology, and considers how these might impact gay and lesbian studies. Attitudes and style are found to influence word choice. These results are used in the second section to devise recommended definitional limits which would satisfy both communication needs and methodological purposes, especially those of sampling. PMID:1299702
Optimization for Peptide Sample Preparation for Urine Peptidomics
Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan; Dai, Hong; Qian, Weijun; Camp, David G.; Sarwal, Minnie M.
2014-02-25
In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis, due to the optimized sample clean-up that provided improved experimental inference from the confidently identified peptides.
NASA Astrophysics Data System (ADS)
Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.
2016-06-01
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
Defining the Enterovirus Diversity Landscape of a Fecal Sample: A Methodological Challenge?
Faleye, Temitope Oluwasegun Cephas; Adewumi, Moses Olubusuyi; Adeniji, Johnson Adekunle
2016-01-01
Enteroviruses are a group of over 250 naked icosahedral virus serotypes that have been associated with clinical conditions that range from intrauterine enterovirus transmission with fatal outcome, through encephalitis and meningitis, to paralysis. Classically, enterovirus detection was done by assaying for the development of the classic enterovirus-specific cytopathic effect in cell culture. The isolates were then historically identified by a neutralization assay. More recently, identification has been done by reverse transcriptase-polymerase chain reaction (RT-PCR). However, in recent times, there is a move towards direct detection and identification of enteroviruses from clinical samples using the cell culture-independent RT semi-nested PCR (RT-snPCR) assay. This RT-snPCR procedure amplifies the VP1 gene, which is then sequenced and used for identification. However, while cell culture-based strategies tend to show a preponderance of certain enterovirus species depending on the cell lines included in the isolation protocol, the RT-snPCR strategies tilt in a different direction. Consequently, it is becoming apparent that the diversity observed in certain enterovirus species, e.g., enterovirus species B (EV-B), might not be because they are the most evolutionarily successful. Rather, it might stem from cell line-specific bias accumulated over several years of use of the cell culture-dependent isolation protocols. Furthermore, it might also be a reflection of the impact of the relative genome concentration on the result of pan-enterovirus VP1 RT-snPCR screens used during the identification of cell culture isolates. This review highlights the impact of these two processes on the current diversity landscape of enteroviruses and the need to re-assess enterovirus detection and identification algorithms in a bid to better balance our understanding of the enterovirus diversity landscape. PMID:26771630
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
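Latin hypercube sampling itself is simple to sketch. In the pure-Python version below, each dimension is split into n equal-probability strata, one draw is taken per stratum, and the strata are randomly paired across dimensions; the parameter ranges are placeholders, not the paper's HCV parameter bounds.

```python
import random

def latin_hypercube(n, ranges, seed=0):
    """Draw n points with one sample per stratum in each dimension.
    `ranges` is a list of (lo, hi) bounds, one per parameter
    (illustrative stand-ins for, e.g., infection and death rates)."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in ranges:
        # one draw from each of n equal-width strata, then shuffle the
        # strata so pairings across dimensions are random
        col = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))
```

Compared with plain random sampling, every marginal range is guaranteed to be covered evenly, which is why a moderate number of "patient scenarios" can probe the whole parameter space.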
Defining the optimal animal model for translational research using gene set enrichment analysis.
Weidner, Christopher; Steinfath, Matthias; Opitz, Elisa; Oelgeschläger, Michael; Schönfelder, Gilbert
2016-01-01
The mouse is the main model organism used to study the functions of human genes because most biological processes in the mouse are highly conserved in humans. Recent reports that compared identical transcriptomic datasets of human inflammatory diseases with datasets from mouse models using traditional gene-to-gene comparison techniques resulted in contradictory conclusions regarding the relevance of animal models for translational research. To reduce susceptibility to biased interpretation, all genes of interest for the biological question under investigation should be considered. Thus, standardized approaches for systematic data analysis are needed. We analyzed the same datasets using gene set enrichment analysis, focusing on pathways assigned to inflammatory processes in either humans or mice. The analyses revealed a moderate overlap between all human and mouse datasets, with average positive and negative predictive values of 48% and 57%, respectively, for significant correlations. Subgroups of the septic mouse models (i.e., Staphylococcus aureus injection) correlated very well with most human studies. These findings support the applicability of targeted strategies to identify the optimal animal model and protocol to improve the success of translational research. PMID:27311961
Defining the face processing network: optimization of the functional localizer in fMRI.
Fox, Christopher J; Iaria, Giuseppe; Barton, Jason J S
2009-05-01
Functional localizers that contrast brain signal when viewing faces versus objects are commonly used in functional magnetic resonance imaging studies of face processing. However, current protocols do not reliably show all regions of the core system for face processing in all subjects when conservative statistical thresholds are used, which is problematic in the study of single subjects. Furthermore, arbitrary variations in the applied thresholds are associated with inconsistent estimates of the size of face-selective regions-of-interest (ROIs). We hypothesized that the use of more natural dynamic facial images in localizers might increase the likelihood of identifying face-selective ROIs in individual subjects, and we also investigated the use of a method to derive the statistically optimal ROI cluster size independent of thresholds. We found that dynamic facial stimuli were more effective than static stimuli, identifying 98% (versus 72% for static) of ROIs in the core face processing system and 69% (versus 39% for static) of ROIs in the extended face processing system. We then determined for each core face processing ROI, the cluster size associated with maximum statistical face-selectivity, which on average was approximately 50 mm(3) for the fusiform face area, the occipital face area, and the posterior superior temporal sulcus. We suggest that the combination of (a) more robust face-related activity induced by a dynamic face localizer and (b) a cluster-size determination based on maximum face-selectivity increases both the sensitivity and the specificity of the characterization of face-related ROIs in individual subjects. PMID:18661501
Intentional sampling by goal optimization with decoupling by stochastic perturbation
NASA Astrophysics Data System (ADS)
Lauretto, Marcelo De Souza; Nakano, Fábio; Pereira, Carlos Alberto de Bragança; Stern, Julio Michael
2012-10-01
Intentional sampling methods are non-probabilistic procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. Intentional sampling methods are intended for exploratory research or pilot studies where tight budget constraints preclude the use of traditional randomized representative sampling. The possibility of subsequently generalizing statistically from such deterministic samples to the general population has been the subject of long-standing arguments and debates. Nevertheless, the intentional sampling techniques developed in this paper explore pragmatic strategies for overcoming some of the real or perceived shortcomings and limitations of intentional sampling in practical applications.
Wang, Christopher J; Feng, Szi Fei; Duncan, Paul
2014-01-01
The application of next-generation sequencing (also known as deep sequencing or massively parallel sequencing) for adventitious agent detection is an evolving field that is steadily gaining acceptance in the biopharmaceutical industry. In order for this technology to be successfully applied, a robust method that can isolate viral nucleic acids from a variety of biological samples (such as host cell substrates, cell-free culture fluids, viral vaccine harvests, and animal-derived raw materials) must be established by demonstrating recovery of model virus spikes. In this report, we implement the sample preparation workflow developed by Feng et al. and assess the sensitivity of virus detection in a next-generation sequencing readout using the Illumina MiSeq platform. We describe a theoretical model to estimate the detection of a target virus in a cell lysate or viral vaccine harvest sample. We show that nuclease treatment can be used for samples that contain a high background of non-relevant nucleic acids (e.g., host cell DNA) in order to effectively increase the sensitivity of sequencing target viruses and reduce the complexity of data analysis. Finally, we demonstrate that at defined spike levels, nucleic acids from a panel of model viruses spiked into representative cell lysate and viral vaccine harvest samples can be confidently recovered by next-generation sequencing. PMID:25475632
NASA Astrophysics Data System (ADS)
Taylor, Ted L.; Makimura, Eri
2007-03-01
Micron Technology, Inc., explores the challenges of defining specific wafer sampling scenarios for users of multiple integrated metrology modules within a Tokyo Electron Limited (TEL) CLEAN TRACK™ LITHIUS™. With the introduction of integrated metrology (IM) into the photolithography coater/developer, users are faced with the challenge of determining what data must be collected to adequately monitor the photolithography tools and the manufacturing process. Photolithography coaters/developers have a metrology block that is capable of integrating three metrology modules into the standard wafer flow. Taking into account the complexity of multiple metrology modules and varying across-wafer sampling plans per metrology module, users must optimize the module wafer sampling to obtain their desired goals. Users must also understand the complexity of the coater/developer handling systems that deliver wafers to each module. Coater/developer systems typically process wafers sequentially through each module to ensure consistent processing. In these systems, the first wafer must process through a module before the next wafer can process through that module, and the first wafer must return to the cassette before the second wafer can return to the cassette. IM modules within this type of system can reduce throughput and limit flexible wafer selections. Finally, users must have the ability to select specific wafer samplings for each IM module. This case study explores how to optimize wafer sampling plans and how to identify limitations with the complexity of multiple integrated modules to ensure maximum metrology throughput without impacting the productivity of processing wafers through the photolithography cell (litho cell).
Estimating optimal sampling unit sizes for satellite surveys
NASA Technical Reports Server (NTRS)
Hallum, C. R.; Perry, C. R., Jr.
1984-01-01
This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that effect minimal costs. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis while conservatively guarding against expected component error variances.
Analysis and optimization of bulk DNA sampling with binary scoring for germplasm characterization.
Reyes-Valdés, M Humberto; Santacruz-Varela, Amalio; Martínez, Octavio; Simpson, June; Hayano-Kanashiro, Corina; Cortés-Romero, Celso
2013-01-01
The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus. PMID:24260321
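The diversity-minus-noise decomposition can be sketched for a single binary-scored allele. The pool-presence probability 1-(1-p)^(2m) for a bulk of m diploid plants and the assumption of equiprobable source populations are simplifications for illustration, not the paper's full formulas.

```python
import math

def hbin(q):
    """Binary entropy in bits."""
    return 0.0 if q in (0.0, 1.0) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def marker_info(freqs, m):
    """Mutual information (bits) between the binary bulk score of one allele
    and population identity, for bulks of m diploid plants.
    `freqs` gives the allele frequency in each (equiprobable) population."""
    # probability the allele is seen at all in a pool of 2m allele copies
    present = [1 - (1 - p) ** (2 * m) for p in freqs]
    diversity = hbin(sum(present) / len(present))         # H(G): entropy of bulk score
    noise = sum(hbin(q) for q in present) / len(present)  # H(G|P): score entropy within source
    return diversity - noise
```

An allele fixed in one population and absent in the other is maximally informative (1 bit), while an allele with identical frequency everywhere carries none; increasing m raises all presence probabilities toward 1, which is the dilution problem the divided-bulk design addresses.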
Bhalala, Utpal S; Hemani, Malvi; Shah, Meehir; Kim, Barbara; Gu, Brian; Cruz, Angelo; Arunachalam, Priya; Tian, Elli; Yu, Christine; Punnoose, Joshua; Chen, Steven; Petrillo, Christopher; Brown, Alisa; Munoz, Karina; Kitchen, Grant; Lam, Taylor; Bosemani, Thangamadhan; Huisman, Thierry A G M; Allen, Robert H; Acharya, Soumyadipta
2016-01-01
Head-tilt maneuver assists with achieving airway patency during resuscitation. However, the relationship between the angle of head-tilt and airway patency has not been defined. Our objective was to define an optimal head-tilt position for airway patency in neonates (age: 0-28 days) and young infants (age: 29 days-4 months). We performed a retrospective study of head and neck magnetic resonance imaging (MRI) of neonates and infants to define the angle of head-tilt for airway patency. We excluded those with an artificial airway or an airway malformation. We defined the head-tilt angle a priori as the angle between the occipito-opisthion line and the opisthion-C7 spinous process line on the sagittal MR images. We evaluated medical records for Hypoxic Ischemic Encephalopathy (HIE) and exposure to sedation during MRI. We analyzed MRI of head and neck regions of 63 children (53 neonates and 10 young infants). Of these 63 children, 17 had evidence of airway obstruction and 46 had a patent airway on MRI. Also, 16/63 had underlying HIE and 47/63 newborn infants had exposure to sedative medications during MRI. In spontaneously breathing and neurologically depressed newborn infants, the head-tilt angle (median ± SD) associated with a patent airway (125.3° ± 11.9°) was significantly different from that of a blocked airway (108.2° ± 17.1°) (Mann-Whitney U-test, p = 0.0045). The logistic regression analysis showed that the proportion of patent airways progressively increased with an increasing head-tilt angle, with > 95% probability of a patent airway at a head-tilt angle of 144-150°. PMID:27003759
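The reported angle-patency relationship has the shape of a one-predictor logistic model. The sketch below fits such a model by gradient descent; the angle/patency pairs used in practice would come from the MRI measurements, so any example data fed to it are illustrative, not the study's.

```python
import math

def fit_logistic(angles, patent, steps=20000, lr=0.01):
    """One-predictor logistic regression by gradient descent on the
    standardized head-tilt angle. `patent` holds 0/1 outcomes.
    Returns a function mapping an angle to the fitted patency probability."""
    n = len(angles)
    mu = sum(angles) / n
    sd = (sum((a - mu) ** 2 for a in angles) / n) ** 0.5
    xs = [(a - mu) / sd for a in angles]
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, patent):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))
            g0 += p - y          # gradient of the log-loss w.r.t. intercept
            g1 += (p - y) * x    # ... and w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return lambda a: 1 / (1 + math.exp(-(b0 + b1 * (a - mu) / sd)))
```

A positive fitted slope reproduces the abstract's qualitative finding: the probability of a patent airway rises monotonically with the head-tilt angle.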
Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate
Brunelli, Davide; Caione, Carlo
2015-01-01
Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of CS when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction, exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring. PMID:26184203
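As an illustrative aside, the kind of sparse recovery CS relies on can be sketched with a minimal orthogonal matching pursuit (OMP) in NumPy; the sensing matrix, dimensions, and sparsity below are invented for the demo and are not taken from the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                          # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                     # m << n sub-Nyquist measurements
x_hat = omp(A, y, k)
print(np.linalg.norm(x - x_hat))              # near-zero reconstruction error
```

With 64 random measurements of a length-256, 5-sparse signal, OMP recovers the signal essentially exactly; real deployments trade this recovery quality against the node's energy budget, as the abstract describes.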
Khanlou, Khosro Mehdi; Vandepitte, Katrien; Asl, Leila Kheibarshekan; Van Bockstaele, Erik
2011-04-01
Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using less than 15 per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-cultivar sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation. PMID:21734826
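For reference, the two diversity parameters reported above, expected heterozygosity (He) and the Shannon diversity index (I), can be computed directly from per-locus allele frequencies; the frequencies below are illustrative and not from the study:

```python
import numpy as np

def expected_heterozygosity(freqs):
    """He = 1 - sum(p_i^2), averaged over loci; freqs is a list of per-locus allele-frequency arrays."""
    return float(np.mean([1.0 - np.sum(np.square(p)) for p in freqs]))

def shannon_index(freqs):
    """I = -sum(p_i * ln p_i), averaged over loci (zero frequencies contribute nothing)."""
    vals = []
    for p in freqs:
        p = p[p > 0]
        vals.append(-np.sum(p * np.log(p)))
    return float(np.mean(vals))

# Illustrative AFLP-like data: two bi-allelic loci with different allele frequencies.
loci = [np.array([0.7, 0.3]), np.array([0.5, 0.5])]
print(expected_heterozygosity(loci))  # (0.42 + 0.5) / 2 = 0.46
print(shannon_index(loci))
```

Undersampling a cultivar biases these statistics downward, which is why the study finds He and I greatly underestimated below 15 samples per cultivar.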
Optimized design and analysis of sparse-sampling FMRI experiments.
Perrachione, Tyler K; Ghosh, Satrajit S
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
A Sample Time Optimization Problem in a Digital Control System
NASA Astrophysics Data System (ADS)
Mitkowski, Wojciech; Oprzędkiewicz, Krzysztof
In the paper, the phenomenon of a sample time that minimizes the settling time in a digital control system is described. An experimental heat object was used as the control plant. The control system was built using a SIEMENS SIMATIC soft PLC system. A finite-dimensional dynamic compensator was applied as the control algorithm. During tests of the control system it was observed that there exists a value of the sample time which minimizes the settling time in the system. An attempt is made to explain this phenomenon.
Ant colony optimization as a method for strategic genotype sampling.
Spangler, M L; Robbins, K R; Bertrand, J K; Macneil, M; Rekaya, R
2009-06-01
A simulation study was carried out to develop an alternative method of selecting animals to be genotyped. Simulated pedigrees included 5000 animals, each assigned genotypes for a bi-allelic single nucleotide polymorphism (SNP) based on assumed allelic frequencies of 0.7/0.3 and 0.5/0.5. In addition to simulated pedigrees, two beef cattle pedigrees, one from field data and the other from a research population, were used to test selected methods using simulated genotypes. The proposed method of ant colony optimization (ACO) was evaluated based on the number of alleles correctly assigned to ungenotyped animals (AK(P)), the probability of assigning true alleles (AK(G)) and the probability of correctly assigning genotypes (APTG). The proposed animal selection method of ant colony optimization was compared to selection using the diagonal elements of the inverse of the relationship matrix (A(-1)). Comparisons of these two methods showed that ACO yielded an increase in AK(P) ranging from 4.98% to 5.16% and an increase in APTG from 1.6% to 1.8% using simulated pedigrees. Gains in field data and research pedigrees were slightly lower. These results suggest that ACO can provide a better genotyping strategy, when compared to A(-1), with different pedigree sizes and structures. PMID:19220227
Optimization of strawberry volatile sampling by odor representativeness
Technology Transfer Automated Retrieval System (TEKTRAN)
The aim of this work was to choose a suitable sampling headspace technique to study 'Festival' aroma, the main strawberry cultivar grown in Florida. For that, the aromatic quality of extracts from different headspace techniques was evaluated using direct gas chromatography-olfactometry (D-GC-O), a s...
Optimization of strawberry volatile sampling by direct gas chromatography olfactometry
Technology Transfer Automated Retrieval System (TEKTRAN)
The aim of this work was to choose a suitable sampling headspace technique to study ‘Festival’ aroma, the main strawberry cultivar grown in Florida. For that, the aromatic quality of extracts from different headspace techniques was evaluated using direct gas chromatography-olfactometry (D-GC-O), a s...
Sample of CFD optimization of a centrifugal compressor stage
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
An industrial centrifugal compressor stage is a complicated object for gas-dynamic design when the goal is to achieve maximum efficiency. The authors analyzed results of CFD performance modeling (NUMECA Fine Turbo calculations). Performance prediction as a whole was modest or poor in all known cases; maximum efficiency prediction, on the contrary, was quite satisfactory. Flow structure in stator elements was in good agreement with known data. The intermediate-type stage "3D impeller + vaneless diffuser + return channel" was designed with principles well proven for stages with 2D impellers. CFD calculations of vaneless diffuser candidates demonstrated flow separation in a VLD of constant width. The candidate with a symmetrically tapered inlet part, b3/b2 = 0.73, appeared to be the best. Flow separation takes place in the crossover with the standard configuration; an alternative variant was developed and numerically tested. The obtained experience was formulated as corrected design recommendations. Several impeller candidates were compared by maximum efficiency of the stage. The variant following standard gas-dynamic principles of blade cascade design appeared to be the best. Quasi-3D inviscid calculations were applied to optimize blade velocity diagrams: non-incidence inlet, control of the diffusion factor and of the average blade load. A "geometric" principle of blade formation, with linear change of blade angles along the blade length, appeared to be less effective. Candidates with different geometry parameters were designed with the 6th version of the math model and compared. The candidate with optimal parameters (number of blades, inlet diameter and leading-edge meridian position) is 1% more efficient than the stage of the initial design.
Determination and optimization of spatial samples for distributed measurements.
Huo, Xiaoming; Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong
2010-10-01
There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.
Optimizing analog-to-digital converters for sampling extracellular potentials.
Artan, N Sertac; Xu, Xiaoxiang; Shi, Wei; Chao, H Jonathan
2012-01-01
In neural implants, an analog-to-digital converter (ADC) provides the delicate interface between the analog signals generated by neurological processes and the digital signal processor that is tasked to interpret these signals for instance for epileptic seizure detection or limb control. In this paper, we propose a low-power ADC architecture for neural implants that process extracellular potentials. The proposed architecture uses the spike detector that is readily available on most of these implants in a closed-loop with an ADC. The spike detector determines whether the current input signal is part of a spike or it is part of noise to adaptively determine the instantaneous sampling rate of the ADC. The proposed architecture can reduce the power consumption of a traditional ADC by 62% when sampling extracellular potentials without any significant impact on spike detection accuracy. PMID:23366227
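A toy simulation of the closed-loop idea, a spike detector gating the ADC between a low baseline rate and full-rate capture around spikes, might look as follows; the signal shape, threshold, rates, and window length are all invented for illustration and are not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 20_000                              # full-rate record: 1 s at 20 kHz (illustrative)
signal = 0.05 * rng.standard_normal(fs)  # background noise
for spike_start in (2000, 9000, 15000):  # inject three crude "spikes"
    signal[spike_start:spike_start + 40] += 1.0

threshold = 0.5                          # simple amplitude spike detector
decimate = 16                            # keep 1-in-16 samples while only noise is present
keep = np.zeros(fs, dtype=bool)
keep[::decimate] = True                  # baseline low-rate sampling
window = 64                              # full-rate window around each detected spike
for i in np.flatnonzero(np.abs(signal) > threshold):
    keep[max(0, i - window):i + window] = True

kept = int(keep.sum())
print(kept, fs, round(1 - kept / fs, 3))  # most samples are skipped, spikes kept intact
```

The point of the sketch is only that adaptive gating preserves every spike at full resolution while discarding the bulk of the noise-only samples, which is the mechanism behind the power savings the abstract reports.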
Statistically optimal analysis of samples from multiple equilibrium states
Shirts, Michael R.; Chodera, John D.
2008-01-01
We present a new estimator for computing free energy differences and thermodynamic expectations as well as their uncertainties from samples obtained from multiple equilibrium states via either simulation or experiment. The estimator, which we call the multistate Bennett acceptance ratio estimator (MBAR) because it reduces to the Bennett acceptance ratio estimator (BAR) when only two states are considered, has significant advantages over multiple histogram reweighting methods for combining data from multiple states. It does not require the sampled energy range to be discretized to produce histograms, eliminating bias due to energy binning and significantly reducing the time complexity of computing a solution to the estimating equations in many cases. Additionally, an estimate of the statistical uncertainty is provided for all estimated quantities. In the large sample limit, MBAR is unbiased and has the lowest variance of any known estimator for making use of equilibrium data collected from multiple states. We illustrate this method by producing a highly precise estimate of the potential of mean force for a DNA hairpin system, combining data from multiple optical tweezer measurements under constant force bias. PMID:19045004
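As a hedged sketch of the two-state case (where MBAR reduces to BAR), the BAR self-consistent equation can be solved by bisection; the Gaussian work distributions below are synthetic, constructed to satisfy the Crooks relation, and the simplifications (β = 1, equal sample sizes) are assumptions of the demo, not of the paper:

```python
import numpy as np

def bar_delta_f(w_forward, w_reverse, lo=-50.0, hi=50.0, tol=1e-10):
    """Two-state BAR (beta = 1, equal sample sizes): solve
    sum_F fermi(W_F - dF) = sum_R fermi(W_R + dF) for dF by bisection."""
    fermi = lambda x: 1.0 / (1.0 + np.exp(np.clip(x, -500, 500)))
    g = lambda dF: np.sum(fermi(w_forward - dF)) - np.sum(fermi(w_reverse + dF))
    # g is monotonically increasing in dF, so bisection finds the unique root.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(7)
dF_true, sigma, n = 2.0, 1.0, 20_000
# Gaussian work distributions consistent with the Crooks fluctuation theorem.
w_f = rng.normal(dF_true + 0.5 * sigma**2, sigma, n)   # forward-process work
w_r = rng.normal(-dF_true + 0.5 * sigma**2, sigma, n)  # reverse-process work
dF_hat = bar_delta_f(w_f, w_r)
print(dF_hat)   # close to the true value 2.0
```

Because the two work distributions overlap well here, the estimate is tight; MBAR generalizes this machinery to arbitrarily many states and also supplies the uncertainty estimates the abstract mentions.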
[Study on the optimization methods of common-batch identification of amphetamine samples].
Zhang, Jianxin; Zhang, Daming
2008-07-01
This paper introduces the technology of amphetamine identification and its optimization. Impurity profiles of amphetamine were analyzed by GC-MS. Common-batch identification of amphetamine samples could be successfully accomplished by data transformation and pre-treatment of the peak areas. The analytical method was improved by optimizing the techniques of sample extraction, gas chromatography, sample separation and detection. PMID:18839544
Optimized Sampling Strategies For Non-Proliferation Monitoring: Report
Kurzeja, R.; Buckley, R.; Werth, D.; Chiswell, S.
2015-10-20
Concentration data collected from the 2013 H-Canyon effluent reprocessing experiment were reanalyzed to improve the source term estimate. When errors in the model-predicted wind speed and direction were removed, the source term uncertainty was reduced to 30% of the mean. This explained the factor of 30 difference between the source term size derived from data at 5 km and 10 km downwind in terms of the time history of dissolution. The results show a path forward to develop a sampling strategy for quantitative source term calculation.
Optimal sampling of visual information for lightness judgments.
Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R
2013-07-01
The variable resolution and limited processing capacity of the human visual system requires us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. PMID:23776251
Optimizing fish sampling for fish-mercury bioaccumulation factors.
Scudder Eikenberry, Barbara C; Riva-Murray, Karen; Knightes, Christopher D; Journey, Celeste A; Chasar, Lia C; Brigham, Mark E; Bradley, Paul M
2015-09-01
Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop total maximum daily load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish-sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements. PMID:25592462
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of the metamodel and minimum points of a density function. More accurate metamodels are then constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
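A minimal Gaussian RBF metamodel, the building block such sequential sampling methods refine, can be sketched as follows; the test function, sample count, and shape parameter are arbitrary choices for illustration, not the paper's settings:

```python
import numpy as np

def rbf_fit(x, y, eps=2.0):
    """Interpolating Gaussian RBF metamodel: solve Phi @ w = y for the weights."""
    phi = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
    return np.linalg.solve(phi, y)

def rbf_eval(x_train, w, x_new, eps=2.0):
    """Evaluate the fitted metamodel at new points."""
    phi = np.exp(-(eps * (x_new[:, None] - x_train[None, :]))**2)
    return phi @ w

# "Expensive" test function sampled at a handful of points (illustrative).
f = lambda x: np.sin(3 * x) + 0.5 * x
x_train = np.linspace(0.0, 2.0, 9)
w = rbf_fit(x_train, f(x_train))
x_dense = np.linspace(0.0, 2.0, 201)
err = np.max(np.abs(rbf_eval(x_train, w, x_dense) - f(x_dense)))
print(err)   # small approximation error over the whole interval
```

A sequential scheme in the spirit of the abstract would now add the metamodel's extrema (and low-density regions) as new sample points and refit, shrinking `err` with each round.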
Fast and Statistically Optimal Period Search in Uneven Sampled Observations
NASA Astrophysics Data System (ADS)
Schwarzenberg-Czerny, A.
1996-04-01
The classical methods for searching for a periodicity in unevenly sampled observations suffer from a poor match of the model and true signals and/or use of a statistic with poor properties. We present a new method employing periodic orthogonal polynomials to fit the observations and the analysis of variance (ANOVA) statistic to evaluate the quality of the fit. The orthogonal polynomials constitute a flexible and numerically efficient model of the observations. Among all popular statistics, ANOVA has optimum detection properties as the uniformly most powerful test. Our recurrence algorithm for expansion of the observations into the orthogonal polynomials is fast and numerically stable. The expansion is equivalent to an expansion into Fourier series. Aside from its use of an inefficient statistic, the Lomb-Scargle power spectrum can be considered a special case of our method. Tests of our new method on simulated and real light curves of nonsinusoidal pulsators demonstrate its excellent performance. In particular, dramatic improvements are gained in detection sensitivity and in the damping of alias periods.
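The core idea, folding unevenly sampled data at trial periods and scoring each fold with an ANOVA statistic, can be sketched with a simple phase-binning stand-in for the paper's orthogonal-polynomial model; the light curve and period grid below are synthetic:

```python
import numpy as np

def anova_periodogram(t, y, periods, nbins=8):
    """One-way ANOVA F statistic of phase-binned data for each trial period."""
    stats = []
    grand = y.mean()
    for p in periods:
        phase = (t / p) % 1.0
        bins = np.minimum((phase * nbins).astype(int), nbins - 1)
        between, within, groups = 0.0, 0.0, 0
        for b in range(nbins):
            yb = y[bins == b]
            if yb.size == 0:
                continue
            groups += 1
            between += yb.size * (yb.mean() - grand)**2
            within += np.sum((yb - yb.mean())**2)
        stats.append((between / (groups - 1)) / (within / (y.size - groups)))
    return np.array(stats)

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 20, 300))            # uneven sampling times
true_period = 0.7
y = np.sin(2 * np.pi * t / true_period) + 0.3 * rng.standard_normal(300)
periods = np.linspace(0.5, 1.0, 501)
best = periods[np.argmax(anova_periodogram(t, y, periods))]
print(best)   # close to 0.7
```

The true period maximizes the between-bin variance relative to the within-bin scatter; the paper's orthogonal-polynomial fit plays the role of the phase bins here, with better statistical properties for nonsinusoidal signals.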
Optimal Sampling of a Reaction Coordinate in Molecular Dynamics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2005-01-01
Estimating how free energy changes with the state of a system is a central goal in applications of statistical mechanics to problems of chemical or biological interest. From these free energy changes it is possible, for example, to establish which states of the system are stable, what their probabilities are, and how the equilibria between these states are influenced by external conditions. Free energies are also of great utility in determining kinetics of transitions between different states. A variety of methods have been developed to compute free energies of condensed phase systems. Here, I will focus on one class of methods - those that allow for calculating free energy changes along one or several generalized coordinates in the system, often called reaction coordinates or order parameters. Considering that in almost all cases of practical interest a significant computational effort is required to determine free energy changes along such coordinates, it is hardly surprising that efficiencies of different methods are of great concern. In most cases, the main difficulty is associated with the shape of the free energy along the reaction coordinate. If the free energy changes markedly along this coordinate, Boltzmann sampling of its different values becomes highly non-uniform. This, in turn, may have a considerable, detrimental effect on the performance of many methods for calculating free energies.
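The basic relation behind such calculations, F(ξ) = −kT ln P(ξ), can be illustrated by histogramming direct Boltzmann samples along a coordinate; the harmonic potential below is a toy stand-in for a real reaction coordinate:

```python
import numpy as np

rng = np.random.default_rng(5)
kT = 1.0
# Direct Boltzmann sampling of U(x) = x^2 / 2 at kT = 1 gives x ~ N(0, 1).
x = rng.standard_normal(200_000)
hist, edges = np.histogram(x, bins=61, range=(-3.05, 3.05), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
F = -kT * np.log(hist)          # free energy up to an additive constant
F -= F.min()                    # shift so the minimum is zero
i0 = np.argmin(np.abs(centers))         # bin centered at x = 0
i1 = np.argmin(np.abs(centers - 1.0))   # bin centered at x = 1
print(F[i1] - F[i0])            # approximately U(1) - U(0) = 0.5
```

The sketch also shows the problem the abstract raises: bins where the free energy is high receive exponentially few samples, so direct Boltzmann sampling becomes inefficient exactly where the profile is most interesting, motivating biased-sampling methods.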
Yin, Jingjing; Samawi, Hani; Linder, Daniel
2016-07-01
A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject to be either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity -1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically and we prove that the estimators based on ranked set sampling are relatively more efficient than that of simple random sampling and both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed for illustrating the proposed method. PMID:26756282
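An empirical (non-kernel) version of the Youden-index cut-off under simple random sampling can be sketched as follows; the normal biomarker distributions are synthetic, with a true optimal cut-off of 1.0, and the grid search stands in for the paper's kernel-smoothed optimization:

```python
import numpy as np

def youden_cutoff(healthy, diseased, cuts):
    """Empirical Youden index: cut-off maximizing sensitivity + specificity - 1."""
    sens = np.array([(diseased > c).mean() for c in cuts])
    spec = np.array([(healthy <= c).mean() for c in cuts])
    youden = sens + spec - 1.0
    k = int(np.argmax(youden))
    return cuts[k], youden[k]

rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, 5000)    # biomarker in healthy subjects
diseased = rng.normal(2.0, 1.0, 5000)   # biomarker in diseased subjects
cut, jmax = youden_cutoff(healthy, diseased, np.linspace(-1.0, 3.0, 401))
print(cut, jmax)   # cut-off near 1.0, Youden index near 0.68
```

Kernel density estimation, as in the paper, smooths the jagged empirical curve before maximizing; ranked set sampling improves the estimate further by spreading the measured subjects across the full biomarker range.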
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention. PMID:25019136
Teoh, Wei Lin; Khoo, Michael B C; Teh, Sin Yin
2013-01-01
Designs of the double sampling (DS) X̄ chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the rightly skewed run length distribution. Since the DS X̄ chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X̄ chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X̄ and Shewhart X̄ charts demonstrate the superiority of the proposed optimal MRL-based DS X̄ chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X̄ chart in reducing the sample size needed. PMID:23935873
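For a memoryless (Shewhart-type) chart the in-control run length is geometric, which makes the ARL-versus-MRL contrast easy to show numerically; the 3-sigma signal probability below is the textbook value for a Shewhart chart, not a figure from this paper:

```python
import math

def arl(p):
    """Average run length of a memoryless chart with per-sample signal probability p."""
    return 1.0 / p

def mrl(p):
    """Median run length: smallest L with P(RL <= L) >= 0.5 for a geometric run length."""
    return math.ceil(math.log(0.5) / math.log(1.0 - p))

p = 0.0027            # in-control false-alarm probability of a 3-sigma Shewhart chart
print(arl(p))         # about 370
print(mrl(p))         # 257 -- well below the ARL for this right-skewed distribution
```

The median sits some 30% below the mean here, which is the abstract's point: for skewed run-length distributions the ARL overstates the "typical" run, and designing on the MRL is fairer.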
Lonsinger, Robert C; Gese, Eric M; Dempsey, Steven J; Kluever, Bryan M; Johnson, Timothy R; Waits, Lisette P
2015-07-01
Noninvasive genetic sampling, or noninvasive DNA sampling (NDS), can be an effective monitoring approach for elusive, wide-ranging species at low densities. However, few studies have attempted to maximize sampling efficiency. We present a model for combining sample accumulation and DNA degradation to identify the most efficient (i.e. minimal cost per successful sample) NDS temporal design for capture-recapture analyses. We use scat accumulation and faecal DNA degradation rates for two sympatric carnivores, kit fox (Vulpes macrotis) and coyote (Canis latrans) across two seasons (summer and winter) in Utah, USA, to demonstrate implementation of this approach. We estimated scat accumulation rates by clearing and surveying transects for scats. We evaluated mitochondrial (mtDNA) and nuclear (nDNA) DNA amplification success for faecal DNA samples under natural field conditions for 20 fresh scats/species/season from <1-112 days. Mean accumulation rates were nearly three times greater for coyotes (0.076 scats/km/day) than foxes (0.029 scats/km/day) across seasons. Across species and seasons, mtDNA amplification success was ≥95% through day 21. Fox nDNA amplification success was ≥70% through day 21 across seasons. Coyote nDNA success was ≥70% through day 21 in winter, but declined to <50% by day 7 in summer. We identified a common temporal sampling frame of approximately 14 days that allowed species to be monitored simultaneously, further reducing time, survey effort and costs. Our results suggest that when conducting repeated surveys for capture-recapture analyses, overall cost-efficiency for NDS may be improved with a temporal design that balances field and laboratory costs along with deposition and degradation rates. PMID:25454561
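A toy version of the cost-per-successful-sample model, linear scat accumulation combined with exponentially decaying amplification success, might look as follows; every rate and cost here is hypothetical, only loosely scaled to the magnitudes in the abstract, and the exponential-decay form is an assumption of the demo:

```python
import numpy as np

def cost_per_successful_sample(days, accum_rate_km_day, km, field_cost,
                               lab_cost_per_sample, half_life_days):
    """Expected cost per successfully genotyped sample when collecting `days`
    after clearing. Scats accumulate linearly; success decays exponentially."""
    n_scats = accum_rate_km_day * km * days
    # Average amplification success over scat ages 0..days (hypothetical decay).
    lam = np.log(2) / half_life_days
    mean_success = (1 - np.exp(-lam * days)) / (lam * days)
    successes = n_scats * mean_success
    return (field_cost + lab_cost_per_sample * n_scats) / successes

days = np.arange(1, 61)
costs = cost_per_successful_sample(days, accum_rate_km_day=0.076, km=50,
                                   field_cost=500.0, lab_cost_per_sample=20.0,
                                   half_life_days=25.0)
best = int(days[np.argmin(costs)])
print(best)   # an intermediate interval minimizes cost per usable sample
```

Short intervals waste fixed field costs on few scats, while long intervals pay lab costs on degraded samples; the interior minimum is the balance the study formalizes, landing near its ~14-day common sampling frame for suitable rates.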
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs. PMID:26316105
Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S
2014-06-01
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized together. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm yet exists to implement it. The purpose of this work is to propose an optimization algorithm that simultaneously optimizes the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the apertures and updating the beam angles along the gradient. The algorithm then continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique was applied to a series of patient cases and significantly improved plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose were reduced by 10%, 7%, 24% and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: The combined use of column generation, gradient search and pattern search provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves
Optimal sampling efficiency in Monte Carlo sampling with an approximate potential
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Building on the work of Iftimie et al., Boltzmann sampling of an approximate potential (the 'reference' system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is evaluated at a higher level of approximation (the 'full' system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory (DFT) potentials are discussed.
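The composite-move scheme described above can be sketched in a few lines. This is a toy illustration, not the authors' code: the "full" quartic and "reference" harmonic potentials, beta = 1, the NVT (rather than isothermal-isobaric) setting, and all step/chain-length settings are invented for demonstration.

```python
import math
import random

random.seed(1)
beta = 1.0

def e_full(x):
    # "Full" system: the expensive potential (a toy quartic here).
    return 0.5 * x**2 + 0.1 * x**4

def e_ref(x):
    # "Reference" system: a cheap harmonic approximation.
    return 0.5 * x**2

def ref_chain(x0, n_steps, step=0.5):
    """Ordinary Metropolis walk driven only by the reference potential."""
    x = x0
    for _ in range(n_steps):
        y = x + random.uniform(-step, step)
        if math.log(random.random()) < -beta * (e_ref(y) - e_ref(x)):
            x = y
    return x

def composite_move(x, n_ref_steps=20):
    """Run a reference sub-chain, then accept/reject the whole block so the
    composite chain samples exp(-beta * e_full)."""
    y = ref_chain(x, n_ref_steps)
    d_full = e_full(y) - e_full(x)
    d_ref = e_ref(y) - e_ref(x)
    # Modified Metropolis criterion: the reference-chain bias cancels out,
    # leaving only the full/reference energy difference.
    if math.log(random.random()) < -beta * (d_full - d_ref):
        return y, True
    return x, False

x, accepted = 0.0, 0
samples = []
for _ in range(2000):
    x, ok = composite_move(x)
    accepted += ok
    samples.append(x)
print(f"acceptance = {accepted/2000:.2f}, <x^2> = {sum(s*s for s in samples)/2000:.2f}")
```

The full energy is evaluated only once per composite move (twice per acceptance test), which is the point of the method: the reference chain decorrelates configurations cheaply between full evaluations.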
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, then a simple random sampling procedure based on the observed mean and SD for data from a single field should be used; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
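Conclusion (3) above refers to stratified sampling with optimal (Neyman) allocation, which assigns more samples to larger and more variable strata. A minimal sketch; the stratum sizes and standard deviations are hypothetical, not values from the study:

```python
def neyman_allocation(n_total, strata):
    """Optimal (Neyman) allocation: strata is a list of (N_h, S_h) pairs,
    stratum size and stratum standard deviation. Sample sizes are
    proportional to N_h * S_h."""
    weights = [N * S for N, S in strata]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical soil-moisture strata: three depth layers, with variability
# decreasing with depth (consistent with conclusion (1) above).
strata = [(100, 8.0), (100, 4.0), (100, 2.0)]  # (size, std dev of moisture %)
print(neyman_allocation(42, strata))  # → [24, 12, 6]
```

More variable strata get proportionally more of the fixed sample budget, which is what makes the prespecified-total case favor stratification.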
A normative inference approach for optimal sample sizes in decisions from experience.
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
"Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which distribution they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
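One way to make the idea of a normative sample-size benchmark concrete is a brute-force sketch: for two Bernoulli payoff distributions, compute the probability of identifying the better option after n draws from each, then maximize expected final payoff net of a per-draw cost. All numbers (p, q, payoff, cost) are hypothetical, and the decision rule ("more observed successes wins, ties broken by a coin flip") is a deliberate simplification of the paper's inference framework.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_correct(n, p, q):
    """Probability the better option (p > q) wins a 'most successes in n
    draws each' comparison, ties broken by a fair coin."""
    prob = 0.0
    for i in range(n + 1):
        for j in range(n + 1):
            w = binom_pmf(i, n, p) * binom_pmf(j, n, q)
            if i > j:
                prob += w
            elif i == j:
                prob += 0.5 * w
    return prob

def optimal_n(p=0.6, q=0.4, payoff=1.0, cost=0.004, n_max=40):
    """Maximize expected final payoff minus total sampling cost (2n draws)."""
    def net(n):
        pc = p_correct(n, p, q)
        return (pc * p + (1 - pc) * q) * payoff - cost * 2 * n
    return max(range(1, n_max + 1), key=net)

print(optimal_n())
```

The tension the paper formalizes is visible even here: p_correct rises with n but saturates, while sampling cost grows linearly, so the net value peaks at a small, finite sample size.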
Kirkpatrick, John P.; Wang, Zhiheng; Sampson, John H.; McSherry, Frances; Herndon, James E.; Allen, Karen J.; Duffy, Eileen; Hoang, Jenny K.; Chang, Zheng; Yoo, David S.; Kelsey, Chris R.; Yin, Fang-Fang
2015-01-01
Purpose: To identify an optimal margin about the gross target volume (GTV) for stereotactic radiosurgery (SRS) of brain metastases, minimizing toxicity and local recurrence. Methods and Materials: Adult patients with 1 to 3 brain metastases less than 4 cm in greatest dimension, no previous brain radiation therapy, and Karnofsky performance status (KPS) above 70 were eligible for this institutional review board–approved trial. Individual lesions were randomized to 1- or 3- mm uniform expansion of the GTV defined on contrast-enhanced magnetic resonance imaging (MRI). The resulting planning target volume (PTV) was treated to 24, 18, or 15 Gy marginal dose for maximum PTV diameters less than 2, 2 to 2.9, and 3 to 3.9 cm, respectively, using a linear accelerator–based image-guided system. The primary endpoint was local recurrence (LR). Secondary endpoints included neurocognition Mini-Mental State Examination, Trail Making Test Parts A and B, quality of life (Functional Assessment of Cancer Therapy-Brain), radionecrosis (RN), need for salvage radiation therapy, distant failure (DF) in the brain, and overall survival (OS). Results: Between February 2010 and November 2012, 49 patients with 80 brain metastases were treated. The median age was 61 years, the median KPS was 90, and the predominant histologies were non–small cell lung cancer (25 patients) and melanoma (8). Fifty-five, 19, and 6 lesions were treated to 24, 18, and 15 Gy, respectively. The PTV/GTV ratio, volume receiving 12 Gy or more, and minimum dose to PTV were significantly higher in the 3-mm group (all P<.01), and GTV was similar (P=.76). At a median follow-up time of 32.2 months, 11 patients were alive, with median OS 10.6 months. LR was observed in only 3 lesions (2 in the 1 mm group, P=.51), with 6.7% LR 12 months after SRS. Biopsy-proven RN alone was observed in 6 lesions (5 in the 3-mm group, P=.10). The 12-month DF rate was 45.7%. Three months after SRS, no significant change in
Protocol for optimal quality and quantity pollen DNA isolation from honey samples.
Lalhmangaihi, Ralte; Ghatak, Souvik; Laha, Ramachandra; Gurusubramanian, Guruswami; Kumar, Nachimuthu Senthil
2014-12-01
The present study illustrates an optimized sample preparation method for efficient DNA isolation from low quantities of honey samples. A conventional PCR-based method was validated, which potentially enables characterization of plant species from as little as 3 ml of bee-honey sample. In the present study, an anionic detergent was used to lyse the hard outer pollen shell, and DTT was used for isolation of thiolated DNA, as it may facilitate protein digestion, assist in releasing the DNA into solution, and reduce cross-links between DNA and other biomolecules. Both the quantity of honey sample and the time required for DNA isolation were optimized during development of the method. With this method, chloroplast DNA was successfully PCR amplified and sequenced from honey DNA samples. PMID:25365793
Optimal number of samples to test for institutional respiratory infection outbreaks in Ontario.
Peci, A; Marchand-Austin, A; Winter, A-L; Winter, A-J; Gubbay, J B
2013-08-01
The objective of this study was to determine the optimal number of respiratory samples per outbreak to be tested for institutional respiratory outbreaks in Ontario. We reviewed respiratory samples tested for respiratory viruses by multiplex PCR as part of outbreak investigations. We documented outbreaks that were positive for any respiratory virus and for influenza alone. At least one virus was detected in 1454 (85.2%) outbreaks. The ability to detect influenza or any respiratory virus increased as the number of samples tested increased. When analysed by the chronological order in which samples were received at the laboratory, the percentage of outbreaks testing positive for any respiratory virus, including influenza, increased with the number of samples tested up to the ninth sample, with minimal benefit beyond the fourth sample tested. Testing up to four respiratory samples per outbreak was sufficient to detect viral organisms and resulted in significant savings for outbreak investigations. PMID:23146341
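A simple model shows why the marginal benefit of additional samples falls off so quickly: if each sample from a truly positive outbreak independently tests positive with some per-sample sensitivity s, the probability of at least one positive among n samples is 1 - (1 - s)^n. The 55% per-sample sensitivity below is a made-up figure for illustration, not a value from the study.

```python
def detection_prob(n, per_sample_sensitivity):
    """P(at least one positive result) among n samples, assuming each
    independently tests positive with the given sensitivity (a
    simplifying independence assumption)."""
    return 1 - (1 - per_sample_sensitivity) ** n

def samples_needed(target, s):
    """Smallest n reaching the target detection probability."""
    n = 1
    while detection_prob(n, s) < target:
        n += 1
    return n

# Hypothetical 55% per-sample sensitivity: diminishing returns set in fast.
for n in range(1, 7):
    print(n, round(detection_prob(n, 0.55), 3))
print(samples_needed(0.95, 0.55))  # → 4
```

Under these invented numbers, four samples already give about 96% detection probability, mirroring the study's finding of minimal benefit beyond the fourth sample.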
XAFSmass: a program for calculating the optimal mass of XAFS samples
NASA Astrophysics Data System (ADS)
Klementiev, K.; Chernikov, R.
2016-05-01
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements compared to the old Windows-based program XAFSmass: 1) it is truly platform independent, as provided by the Python language, and 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample, and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
NASA Astrophysics Data System (ADS)
Bisceglia, E.; Cubizolles, M.; Mallard, F.; Pineda, F.; Francais, O.; Le Pioufle, B.
2013-05-01
Sample preparation is a key issue of modern analytical methods for in vitro diagnostics of diseases with microbiological origins: methods to separate bacteria from other elements of the complex biological samples are of great importance. In the present study, we investigated the DEP force as a way to perform such a de-complexification of the sample by extracting micro-organisms from a complex biological sample under a highly non-uniform electric field in a micro-system based on an interdigitated electrodes array. Different parameters were investigated to optimize the capture efficiency, such as the size of the gap between the electrodes and the height of the capture channel. These parameters are decisive for the distribution of the electric field inside the separation chamber. To optimize these relevant parameters, we performed numerical simulations using COMSOL Multiphysics and correlated them with experimental results. The optimization of the capture efficiency of the device was first tested on a micro-organism solution and then also investigated on human blood samples spiked with micro-organisms, thereby mimicking real biological samples.
Shen, Xiong; Zong, Chao; Zhang, Guoqiang
2012-01-01
Finding the optimal sampling positions for measurement of ventilation rates in a naturally ventilated building using tracer gas is a challenge. Affected by the wind and the opening status, the representative positions inside the building may change dynamically at any time. An optimization procedure using the Response Surface Methodology (RSM) was conducted. In this method, the concentration field inside the building was estimated by a third-order RSM polynomial model. The experimental sampling positions to develop the model were chosen from the cross-section area of a pitched-roof building. The Optimal Design method, which can decrease the bias of the model, was adopted to select these sampling positions. Experiments with a scale model building were conducted in a wind tunnel to obtain observed values at those positions. Finally, models for different cases of opening states and wind conditions were established, and the optimum sampling position was obtained with a desirability level of up to 92% inside the model building. The optimization was further confirmed by another round of experiments.
Ejnik, J W; Hamilton, M M; Adams, P R; Carmichael, A J
2000-12-15
Kinetic phosphorescence analysis (KPA) is a proven technique for rapid, precise, and accurate determination of uranium in aqueous solutions. Uranium analysis of biological samples requires dry-ashing in a muffle furnace between 400 and 600 degrees C, followed by wet-ashing with concentrated nitric acid and hydrogen peroxide to digest the organic component in the sample that interferes with uranium determination by KPA. The optimal dry-ashing temperature was determined to be 450 degrees C. At dry-ashing temperatures greater than 450 degrees C, uranium loss was attributed to vaporization. High temperatures also caused increased background values, attributed to uranium leaching from the glass vials. Dry-ashing temperatures less than 450 degrees C result in the samples needing additional wet-ashing steps. The recovery of uranium in urine samples was 99.2+/-4.02% between spiked concentrations of 1.98-1980 ng (0.198-198 microg l(-1)) uranium, whereas the recovery in whole blood was 89.9+/-7.33% over the same spiked concentrations. The limit of quantification at which uranium in urine and blood could be accurately measured above background was determined to be 0.05 and 0.6 microg l(-1), respectively. PMID:11130202
Optimization of low-background alpha spectrometers for analysis of thick samples.
Misiaszek, M; Pelczar, K; Wójcik, M; Zuzel, G; Laubenstein, M
2013-11-01
Results of alpha spectrometric measurements performed deep underground and above ground with and without active veto show that the underground measurement of thick samples is the most sensitive method due to significant reduction of the muon-induced background. In addition, the polonium diffusion requires for some samples an appropriate selection of an energy region in the registered spectrum. On the basis of computer simulations the best counting conditions are selected for a thick lead sample in order to optimize the detection limit. PMID:23628514
Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples
NASA Astrophysics Data System (ADS)
Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav
2014-05-01
Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis of water samples on the ultra-low background liquid scintillation spectrometer Quantulus 1220, we optimized the sample/scintillant ratio, compared appropriate scintillation cocktails in terms of efficiency, background and minimal detectable activity (MDA), and examined the effect of chemi- and photoluminescence and the scintillant/vial combination. The ASTM D4107-08 (2006) method had been applied successfully in our laboratory for two years. During our most recent sample preparation we noticed a serious quench effect in the count rates of samples, possibly a consequence of DMSO contamination. The goal of this paper is to present the development in our laboratory of the direct method proposed by Pujol and Sanchez-Cabeza (1999), which turned out to be faster and simpler than the ASTM method while we deal with the problem of neutralizing DMSO in the apparatus. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. To test the optimization of the system for this method, tritium levels were determined in Danube river samples and in several samples within an intercomparison with the Ruđer Bošković Institute (IRB).
Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case
Schmerling, Edward; Janson, Lucas; Pavone, Marco
2015-01-01
Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041
NASA Astrophysics Data System (ADS)
Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal
Searching for low-energy β contamination in industrial environments requires liquid scintillation counting. This indirect measurement method presupposes fine control of every step from sampling to the measurement itself. In this paper we therefore focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated using these optimized parameters. For this purpose we used PerkinElmer Tri-Carb counters; nevertheless, apart from results tied to parameters specific to PerkinElmer instruments, most of the results presented here can be extended to other counters.
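The figure-of-merit optimization mentioned above is conventionally FoM = E²/B (E the counting efficiency in the energy window, B the background rate in it), maximized over candidate windows. The sketch below scans all contiguous windows of a toy spectrum; the per-channel efficiencies and background rates are invented numbers, not data from this work.

```python
def figure_of_merit(efficiency_pct, background_cpm):
    """FoM = E^2 / B, the standard LSC window-selection criterion
    (E in %, B in counts per minute)."""
    return efficiency_pct ** 2 / background_cpm

# Toy per-channel spectra (hypothetical): fraction of source counts and
# background rate in each energy channel.
source = [0.02, 0.10, 0.25, 0.30, 0.20, 0.09, 0.03, 0.01]
bkg    = [0.50, 0.40, 0.30, 0.30, 0.40, 0.60, 0.90, 1.20]

# Exhaustively score every contiguous window [lo, hi] and keep the best.
best = max(
    ((lo, hi) for lo in range(len(source)) for hi in range(lo, len(source))),
    key=lambda w: figure_of_merit(
        100 * sum(source[w[0]:w[1] + 1]), sum(bkg[w[0]:w[1] + 1])),
)
print("best window (channel range):", best)
```

Note that the best window excludes low-efficiency, high-background channels even though including them would raise raw efficiency; that trade-off is exactly what E²/B encodes.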
Sampling design optimization for multivariate soil mapping, case study from Hungary
NASA Astrophysics Data System (ADS)
Szatmári, Gábor; Pásztor, László; Barta, Károly
2014-05-01
Direct observations of the soil are important for two main reasons in Digital Soil Mapping (DSM). First, they are used to characterize the relationship between the soil property of interest and the auxiliary information. Second, they are used to improve the predictions based on the auxiliary information. Hence there is a strong need to elaborate a well-established soil sampling strategy, based on geostatistical tools, prior knowledge and available resources, before the samples are actually collected from the area of interest. Fieldwork and laboratory analyses are the most expensive and labor-intensive parts of DSM, while the collected samples and the measured data have a remarkable influence on the spatial predictions and their uncertainty. Numerous sampling strategy optimization techniques have been developed in the past decades. One of these is Spatial Simulated Annealing (SSA), which has frequently been used in soil surveys to minimize the average universal kriging variance. The benefit of the technique is that the surveyor can optimize the sampling design for a fixed number of observations, taking auxiliary information, previously collected samples and inaccessible areas into account. The requirements are a known form of the regression model and the spatial structure of the residuals of the model. Another restriction is that the technique can optimize the sampling design for just one target soil variable, whereas in practice a soil survey usually aims to describe the spatial distribution of not just one but several pedological variables. In the present paper we describe a procedure, implemented in R, to simultaneously optimize the sampling design by SSA for two soil variables, using the spatially averaged universal kriging variance as the optimization criterion. Soil Organic Matter (SOM) content and rooting depth were chosen for this purpose. The methodology is illustrated with a legacy data set from a study area in Central Hungary. Legacy soil
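Spatial Simulated Annealing itself can be sketched compactly: repeatedly relocate one sample point and accept the move by the usual annealing rule. As a stand-in for the spatially averaged universal kriging variance (which requires a regression model and variogram), this sketch minimizes a crude proxy, the mean distance from prediction points to their nearest sample; the grid, sample count, temperature and cooling schedule are all illustrative choices.

```python
import math
import random

random.seed(0)

def mean_nearest_sample_dist(samples, grid):
    """Proxy objective: average distance from each prediction point to its
    nearest sample (the real SSA criterion would be mean kriging variance)."""
    return sum(min(math.dist(g, s) for s in samples) for g in grid) / len(grid)

grid = [(x / 9, y / 9) for x in range(10) for y in range(10)]
samples = [(random.random(), random.random()) for _ in range(8)]

temp, crit = 0.1, mean_nearest_sample_dist(samples, grid)
for it in range(3000):
    cand = list(samples)
    i = random.randrange(len(cand))
    cand[i] = (random.random(), random.random())  # relocate one observation
    c = mean_nearest_sample_dist(cand, grid)
    # Accept improvements always, worsenings with annealing probability.
    if c < crit or random.random() < math.exp(-(c - crit) / temp):
        samples, crit = cand, c
    temp *= 0.999  # geometric cooling schedule
print(f"final mean nearest-sample distance: {crit:.3f}")
```

In a real SSA design optimization, the relocation step would also respect inaccessible areas and keep previously collected samples fixed, as the abstract describes.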
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
A method to optimize sampling locations for measuring indoor air distributions
NASA Astrophysics Data System (ADS)
Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan
2015-02-01
Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data is collected for interpolation to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, labor costs and the accuracy of the interpolated field. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained was then interpolated to form field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method has a 32.6% smaller interpolation error than the grid sampling method. We derived the relationship between the interpolation error and the sampling size (the number of sampling points). According to this relationship, the sampling size has an optimal value, and the maximum sampling size can be determined from the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.
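A minimal version of a gradient-based placement rule: estimate the local gradient of a coarse field by central differences and place sensors where the field changes fastest. This is only a sketch of the general idea, not the paper's actual procedure, and the toy temperature field below is invented.

```python
def gradient_magnitude(field):
    """Central-difference gradient magnitude on a 2-D grid (list of rows)."""
    ny, nx = len(field), len(field[0])
    g = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            gx = (field[j][i + 1] - field[j][i - 1]) / 2
            gy = (field[j + 1][i] - field[j - 1][i]) / 2
            g[j][i] = (gx * gx + gy * gy) ** 0.5
    return g

def pick_sampling_points(field, n):
    """Return the n interior grid cells where the field changes fastest."""
    g = gradient_magnitude(field)
    cells = [(g[j][i], (j, i))
             for j in range(1, len(field) - 1)
             for i in range(1, len(field[0]) - 1)]
    return [idx for _, idx in sorted(cells, reverse=True)[:n]]

# Toy temperature field with a hot spot: chosen points cluster at its edges,
# where interpolation from sparse samples would err most.
field = [[20 + 10 * (abs(i - 3) + abs(j - 3) <= 1) for i in range(7)]
         for j in range(7)]
print(pick_sampling_points(field, 4))
```

Intuitively, smooth regions interpolate well from few samples, so sampling effort is better spent where gradients (and hence interpolation errors) are largest.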
Optimizing Spatio-Temporal Sampling Designs of Synchronous, Static, or Clustered Measurements
NASA Astrophysics Data System (ADS)
Helle, Kristina; Pebesma, Edzer
2010-05-01
When sampling spatio-temporal random variables, the cost of a measurement may differ according to the setup of the whole sampling design: static measurements, i.e. repeated measurements at the same location, synchronous measurements or clustered measurements may be cheaper per measurement than completely individual sampling. Such "grouped" measurements may however not be as good as individually chosen ones because of redundancy. Often, the overall cost rather than the total number of measurements is fixed. A sampling design with grouped measurements may allow for a larger number of measurements thus outweighing the drawback of redundancy. The focus of this paper is to include the tradeoff between the number of measurements and the freedom of their location in sampling design optimisation. For simple cases, optimal sampling designs may be fully determined. To predict e.g. the mean over a spatio-temporal field having known covariance, the optimal sampling design often is a grid with density determined by the sampling costs [1, Ch. 15]. For arbitrary objective functions sampling designs can be optimised relocating single measurements, e.g. by Spatial Simulated Annealing [2]. However, this does not allow to take advantage of lower costs when using grouped measurements. We introduce a heuristic that optimises an arbitrary objective function of sampling designs, including static, synchronous, or clustered measurements, to obtain better results at a given sampling budget. Given the cost for a measurement, either within a group or individually, the algorithm first computes affordable sampling design configurations. The number of individual measurements as well as kind and number of grouped measurements are determined. Random locations and dates are assigned to the measurements. Spatial Simulated Annealing is used on each of these initial sampling designs (in parallel) to improve them. In grouped measurements either the whole group is moved or single measurements within the
Jauzein, Cécile; Fricke, Anna; Mangialajo, Luisa; Lemée, Rodolphe
2016-06-15
In the framework of monitoring of benthic harmful algal blooms (BHABs), the most commonly reported sampling strategy is based on the collection of macrophytes. However, this methodology has some inherent problems. A potential alternative method uses artificial substrates that collect resuspended benthic cells. The current study defines key improvements to this technique, through the use of fiberglass screens during a bloom of Ostreopsis cf. ovata. A novel set-up for the deployment of artificial substrates in the field was tested, using an easy clip-in system that held the substrates perpendicular to the water flow. An experiment was run to compare the cell collection efficiency of different mesh sizes of fiberglass screens, and the results suggested an optimal porosity of 1-3 mm. The present study further demonstrates that artificial substrates, such as fiberglass screens, are efficient tools for the monitoring and mitigation of BHABs. PMID:27048690
Optimizing Diagnostic Yield for EUS-Guided Sampling of Solid Pancreatic Lesions: A Technical Review
Weston, Brian R.
2013-01-01
Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) has a higher diagnostic accuracy for pancreatic cancer than other techniques. This article will review the current advances and considerations for optimizing diagnostic yield for EUS-guided sampling of solid pancreatic lesions. Preprocedural considerations include patient history, confirmation of appropriate indication, review of imaging, method of sedation, experience required by the endoscopist, and access to rapid on-site cytologic evaluation. New EUS imaging techniques that may assist with differential diagnoses include contrast-enhanced harmonic EUS, EUS elastography, and EUS spectrum analysis. FNA techniques vary, and multiple FNA needles are now commercially available; however, neither techniques nor available FNA needles have been definitively compared. The need for suction depends on the lesion, and the need for a stylet is equivocal. No definitive endosonographic finding can predict the optimal number of passes for diagnostic yield. Preparation of good smears and communication with the cytopathologist are essential to optimize yield. PMID:23935542
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
BILISOLY, ROGER L.; MCKENNA, SEAN A.
2003-01-01
Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared through using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site, can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no
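The two design-quality criteria above need no eigendecomposition: the sum of the eigenvalues of the prediction-error variance matrix equals its trace, and their product equals its determinant (the classical A- and D-optimality criteria). A stdlib-only sketch, with two invented 2×2 covariance matrices standing in for rival transect designs:

```python
def trace(m):
    # Sum of eigenvalues of a square matrix equals the sum of its diagonal.
    return sum(m[i][i] for i in range(len(m)))

def det(m):
    """Product of eigenvalues equals the determinant; computed here by
    Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d  # row swap flips the determinant's sign
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

# Two hypothetical prediction-error covariance matrices for rival designs.
design_a = [[1.0, 0.2], [0.2, 0.5]]
design_b = [[0.8, 0.6], [0.6, 0.8]]
for name, v in (("A", design_a), ("B", design_b)):
    print(name, "sum criterion:", trace(v), "product criterion:", round(det(v), 3))
```

A design can win on one criterion and lose on the other (here design A has the larger trace deficit but the larger determinant), which is why the abstract treats the two objectives separately.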
Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame
NASA Technical Reports Server (NTRS)
Le Bail, Karine; Gordon, David
2010-01-01
Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they are a good compromise among different criteria, such as statistical stability and sky distribution, as well as having a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 are mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ by up to 20% of the sources. Improvements in observing, recording, and networks are some of the causes, showing better stability for the CRF over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily valid for some sources.
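The Allan variance used above to rank source stability can be sketched in a few lines. This is the basic non-overlapping estimator applied to a generic time series, not the authors' VLBI analysis pipeline; the white-noise input is illustrative:

```python
import numpy as np

def allan_variance(y, tau_m):
    """Non-overlapping Allan variance of series y for cluster size tau_m:
    half the mean squared difference of successive cluster averages."""
    n = len(y) // tau_m
    means = y[:n * tau_m].reshape(n, tau_m).mean(axis=1)  # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(0)
white = rng.normal(size=4000)  # for white noise, AVAR falls off as 1/tau
print(allan_variance(white, 1), allan_variance(white, 100))
```

A perfectly stationary (constant) position series has zero Allan variance at every cluster size, which is why the statistic separates stable sources from drifting ones.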
An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples
Riediger, Irina N.; Hoffmaster, Alex R.; Biondo, Alexander W.; Ko, Albert I.; Stoddard, Robyn A.
2016-01-01
Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed a strong linearity in a range of concentrations from 10⁶ to 10⁰ leptospires/mL and lower limits of detection ranging from <1 cell/mL for river water to 36 cells/mL for ultrapure water with E. coli as a carrier. In conclusion, we optimized a method to quantify pathogenic Leptospira in environmental waters (river, pond and sewage) which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden. PMID:27487084
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-01
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space in each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by most existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available at https://sourceforge.net/projects/multivariateanalysis/files/VISSA/. PMID:25083512
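The WBMS step described above amounts to drawing a binary inclusion matrix whose columns follow per-variable weights. A hedged sketch of that idea (the weights and sub-model count are invented for illustration; the paper's Matlab code is the authoritative implementation):

```python
import numpy as np

def wbms(weights, n_submodels, seed=0):
    """Weighted binary matrix sampling: row i encodes one sub-model;
    variable j is included with probability weights[j]."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights)
    return (rng.random((n_submodels, weights.size)) < weights).astype(int)

# three variables with illustrative inclusion weights
B = wbms([0.9, 0.5, 0.1], n_submodels=1000)
print(B.mean(axis=0))  # column means approximate the weights
```

In the full VISSA loop the weights would then be updated from the performance of the best sub-models, shrinking the variable space step by step.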
Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2014-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
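The tumor sampling probability P above can be illustrated with a deliberately simplified Monte Carlo model: a spherical tumor centered on the target and an isotropic 3D Gaussian needle-placement error with the quoted 3.5 mm RMS. The paper accounts for irregular tumor shapes and registration errors; this sketch does not:

```python
import numpy as np

def hit_probability(tumor_radius_mm, rms_error_mm, n=200_000, seed=1):
    """Monte Carlo estimate of sampling a spherical tumor centered on
    the target, under isotropic 3D Gaussian needle-placement error."""
    rng = np.random.default_rng(seed)
    sigma = rms_error_mm / np.sqrt(3.0)  # per-axis sigma from the 3D RMS
    pts = rng.normal(scale=sigma, size=(n, 3))
    return np.mean(np.linalg.norm(pts, axis=1) <= tumor_radius_mm)

print(hit_probability(5.0, 3.5))  # larger tumors are easier to sample
```

Under this toy model, small tumors fall well short of P >= 95% with a single core, which is the abstract's motivation for taking multiple cores.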
Sampling optimization, at site scale, in contamination monitoring with moss, pine and oak.
Aboal, J R; Fernández, J A; Carballeira, A
2001-01-01
With the aim of optimizing protocols for sampling moss, pine and oak for biomonitoring of atmospheric contamination and also for inclusion in an Environmental Specimen Bank, 50 sampling units of each species were collected from the study area for individual analysis. Levels of Ca, Cu, Fe, Hg, Ni, and Zn in the plants were determined and the distributions of the concentrations studied. In moss samples, the concentrations of Cu, Ni and Zn, considered to be trace pollutants in this species, showed highly variable log-normal distributions; in pine and oak samples only Ni concentrations were log-normally distributed. In addition to analytical error, the two main sources of error associated with making a collective sample were: (1) not carrying out measurements on individual sampling units; and (2) the number of sampling units collected and the corresponding sources of variation (microspatial, age and interindividual). We recommend that a minimum of 30 sampling units be collected when contamination is suspected. PMID:11706804
Tiwari, P; Xie, Y; Chen, Y; Deasy, J
2014-06-01
Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the preset sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality.
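The clustering step above, grouping voxels by similar influence-matrix rows and keeping a few representatives per cluster, can be sketched with a small hand-rolled k-means (the cluster count, per-cluster sampling rate, and random influence matrix are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def sample_voxels(influence, n_clusters=4, per_cluster=2, iters=20, seed=0):
    """Cluster voxels by their influence-matrix rows (simple k-means)
    and keep a few representative voxels per cluster."""
    rng = np.random.default_rng(seed)
    centers = influence[rng.choice(len(influence), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((influence[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = influence[labels == k].mean(axis=0)
    keep = []
    for k in range(n_clusters):
        members = np.flatnonzero(labels == k)
        keep.extend(rng.choice(members, min(per_cluster, members.size),
                               replace=False))
    return sorted(int(i) for i in keep)

infl = np.random.default_rng(1).random((100, 8))  # 100 voxels x 8 beamlets
picked = sample_voxels(infl)
print(picked)
```

Only the picked interior voxels (plus all boundary voxels) would then enter the optimization, shrinking the constraint set.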
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in either the external or the internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas the repeat purchase rate has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, yielding a two-stage method to estimate the impact of the relevant parameters when the parameter estimates are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
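The external and internal coefficients discussed above are the p and q of a Bass-type diffusion model. A discrete-time sketch shows how cumulative adoption evolves in a fixed potential market (parameter values are illustrative, and the paper's model additionally lets the potential market change over time):

```python
def bass_adoption(p, q, m, periods):
    """Discrete-time Bass diffusion: p is the external (innovation)
    coefficient, q the internal (imitation) coefficient, m the potential
    market. Returns the cumulative-adopter path."""
    cum = 0.0
    path = []
    for _ in range(periods):
        new = (p + q * cum / m) * (m - cum)  # adopters this period
        cum += new
        path.append(cum)
    return path

path = bass_adoption(p=0.03, q=0.38, m=1000.0, periods=25)
```

A larger q accelerates word-of-mouth adoption, which is the mechanism by which free samples can reduce the sampling level needed to seed the market.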
JR Bontha; GR Golcar; N Hannigan
2000-08-29
The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft ID, 15-ft tall dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.
Hasan, S M Mahmudul; Rancourt, Samantha N; Austin, Mark W; Ploughman, Michelle
2016-01-01
Although poststroke aerobic exercise (AE) increases markers of neuroplasticity and protects perilesional tissue, the degree to which it enhances complex motor or cognitive outcomes is unknown. Previous research suggests that timing and dosage of exercise may be important. We synthesized data from clinical and animal studies in order to determine optimal AE training parameters and recovery outcomes for future research. Using predefined criteria, we included clinical trials of stroke of any type or duration and animal studies employing any established models of stroke. Of the 5,259 titles returned, 52 articles met our criteria, measuring the effects of AE on balance, lower extremity coordination, upper limb motor skills, learning, processing speed, memory, and executive function. We found that early-initiated low-to-moderate intensity AE improved locomotor coordination in rodents. In clinical trials, AE improved balance and lower limb coordination irrespective of intervention modality or parameter. In contrast, fine upper limb recovery was relatively resistant to AE. In terms of cognitive outcomes, poststroke AE in animals improved memory and learning, except when training was too intense. However, in clinical trials, combined training protocols more consistently improved cognition. We noted a paucity of studies examining the benefits of AE on recovery beyond cessation of the intervention. PMID:26881101
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach over LHS: (1) It performs more effectively and efficiently; for example, the simulation time required to generate 1000 behavioral parameter sets is about 9 times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy for the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
Quality Control Methods for Optimal BCR-ABL1 Clinical Testing in Human Whole Blood Samples
Stanoszek, Lauren M.; Crawford, Erin L.; Blomquist, Thomas M.; Warns, Jessica A.; Willey, Paige F.S.; Willey, James C.
2014-01-01
Reliable breakpoint cluster region (BCR)–Abelson (ABL) 1 measurement is essential for optimal management of chronic myelogenous leukemia. There is a need to optimize quality control, sensitivity, and reliability of methods used to measure a major molecular response and/or treatment failure. The effects of room temperature storage time, different primers, and RNA input in the reverse transcription (RT) reaction on BCR-ABL1 and β-glucuronidase (GUSB) cDNA yield were assessed in whole blood samples mixed with K562 cells. BCR-ABL1 was measured relative to GUSB to control for sample loading, and each gene was measured relative to known numbers of respective internal standard molecules to control for variation in quality and quantity of reagents, thermal cycler conditions, and presence of PCR inhibitors. Clinical sample and reference material measurements with this test were concordant with results reported by other laboratories. BCR-ABL1 per 10³ GUSB values were significantly reduced (P = 0.004) after 48-hour storage. Gene-specific primers yielded more BCR-ABL1 cDNA than random hexamers at each RNA input. In addition, increasing RNA inhibited the RT reaction with random hexamers but not with gene-specific primers. Consequently, the yield of BCR-ABL1 was higher with gene-specific RT primers at all RNA inputs tested, by as much as 158-fold. We conclude that optimal measurement of BCR-ABL1 per 10³ GUSB in whole blood is obtained when gene-specific primers are used in RT and samples are analyzed within 24 hours after blood collection. PMID:23541592
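The reported unit, BCR-ABL1 per 10³ GUSB, is a simple loading-normalized ratio; a one-line sketch (the molecule counts below are invented for illustration):

```python
def bcr_abl_per_1000_gusb(bcr_abl_molecules, gusb_molecules):
    """Normalize BCR-ABL1 to the GUSB loading control,
    reported per 10^3 GUSB molecules."""
    return 1000.0 * bcr_abl_molecules / gusb_molecules

ratio = bcr_abl_per_1000_gusb(250, 50_000)  # -> 5.0
```

Normalizing to a control gene this way cancels sample-to-sample differences in input quantity, which is why degradation during storage shows up as a drop in the ratio rather than just in raw counts.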
Dynamic reconstruction of sub-sampled data using Optimal Mode Decomposition
NASA Astrophysics Data System (ADS)
Krol, Jakub; Wynn, Andrew
2015-11-01
The Nyquist-Shannon criterion indicates the sample rate necessary to identify information with particular frequency content from a dynamical system. However, in experimental applications such as the interrogation of a flow field using Particle Image Velocimetry (PIV), it may be expensive to obtain data at the desired temporal resolution. To address this problem, we propose a new approach to identify temporal information from undersampled data, using ideas from modal decomposition algorithms such as Dynamic Mode Decomposition (DMD) and Optimal Mode Decomposition (OMD). The novel method takes a vector-valued signal sampled at random time instances (at a sub-Nyquist rate) and projects it onto a low-order subspace. Dynamical characteristics are then approximated by iteratively fitting a low-order model to the flow evolution and solving a certain convex optimization problem. Furthermore, it is shown that constraints may be added to the optimization problem to improve the spatial resolution of missing data points. The methodology is demonstrated on two dynamical systems, a cylinder flow at Re = 60 and the Kuramoto-Sivashinsky equation. In both cases the algorithm correctly identifies the characteristic frequencies and oscillatory structures present in the flow.
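For context, the standard uniformly-sampled DMD that the method above generalizes can be sketched compactly: fit a low-rank linear map between successive snapshots and read frequencies off its eigenvalues. Here it recovers the angular frequency of a pure oscillation (this is plain DMD on uniform samples, not the paper's random sub-Nyquist algorithm):

```python
import numpy as np

def dmd_eigs(X, r=2):
    """Basic DMD: eigenvalues of the rank-r linear map X1 -> X2
    built from successive snapshot pairs."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(Atilde)

# snapshots of a pure oscillation at angular frequency w, dt = 0.1
dt = 0.1
t = np.arange(0, 20, dt)
w = 2.0
X = np.vstack([np.cos(w * t), np.sin(w * t)])
lam = dmd_eigs(X, r=2)
freq = np.abs(np.log(lam) / dt).max()  # recovers w
```

With random, sub-Nyquist sample times this snapshot-pair construction breaks down, which is the gap the iterative OMD-based optimization in the abstract is designed to fill.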
NASA Astrophysics Data System (ADS)
Oroza, C.; Zheng, Z.; Zhang, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.
2015-12-01
Recent advancements in wireless sensing technologies are enabling real-time application of spatially representative point-scale measurements to model hydrologic processes at the basin scale. A major impediment to the large-scale deployment of these networks is the difficulty of finding representative sensor locations and resilient wireless network topologies in complex terrain. Currently, observatories are structured manually in the field, which provides no metric for the number of sensors required for extrapolation, does not guarantee that point measurements are representative of the basin as a whole, and often produces unreliable wireless networks. We present a methodology that combines LiDAR data, pattern recognition, and stochastic optimization to simultaneously identify representative sampling locations, optimal sensor number, and resilient network topologies prior to field deployment. We compare the results of the algorithm to an existing 55-node wireless snow and soil network at the Southern Sierra Critical Zone Observatory. Existing data show that the algorithm is able to capture a broader range of key attributes affecting snow and soil moisture, defined by a combination of terrain, vegetation and soil attributes, and thus is better suited to basin-wide monitoring. We believe that adopting this structured, analytical approach could improve data quality, increase reliability, and decrease the cost of deployment for future networks.
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km²) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km² cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
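A simple greedy stand-in for "select the most environmentally dissimilar site next" is farthest-point selection in the feature space of the four environmental factors. This is a simplification of the paper's maximum entropy modeling, with random feature vectors as placeholders for real site attributes:

```python
import numpy as np

def select_dissimilar_sites(features, k, start=0):
    """Greedy farthest-point selection: repeatedly add the site with the
    largest minimum Euclidean distance to the sites already chosen."""
    chosen = [start]
    d = np.linalg.norm(features - features[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

# 500 candidate sites x 4 factors (temp, precip, elevation, vegetation)
feats = np.random.default_rng(2).random((500, 4))
sites = select_dissimilar_sites(feats, k=8)
```

As in the abstract, each iteration adds the candidate least like those already selected, so a small number of sites spans most of the environmental envelope.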
Sampling plan optimization for detection of lithography and etch CD process excursions
NASA Astrophysics Data System (ADS)
Elliott, Richard C.; Nurani, Raman K.; Lee, Sung Jin; Ortiz, Luis G.; Preil, Moshe E.; Shanthikumar, J. G.; Riley, Trina; Goodwin, Greg A.
2000-06-01
Effective sample planning requires a careful combination of statistical analysis and lithography engineering. In this paper, we present a complete sample planning methodology including baseline process characterization, determination of the dominant excursion mechanisms, and selection of sampling plans and control procedures to effectively detect the yield-limiting excursions with a minimum of added cost. We discuss the results of our novel method in identifying critical dimension (CD) process excursions and present several examples of poly gate Photo and Etch CD excursion signatures. Using these results in a Sample Planning model, we determine the optimal sample plan and statistical process control (SPC) chart metrics and limits for detecting these excursions. The key observations are that there are many different yield-limiting excursion signatures in photo and etch, and that a given photo excursion signature turns into a different excursion signature at etch with different yield and performance impact. In particular, field-to-field variance excursions are shown to have a significant impact on yield. We show how current sampling plans and monitoring schemes miss these excursions and suggest an improved procedure for effective detection of CD process excursions.
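The SPC chart limits mentioned above are conventionally set from baseline characterization as the mean ± 3 sigma of the in-control process; a minimal sketch with hypothetical poly gate CD measurements:

```python
import numpy as np

def control_limits(baseline, k=3.0):
    """Shewhart-style SPC limits from baseline measurements:
    (mean - k*sigma, mean + k*sigma)."""
    mu = np.mean(baseline)
    sigma = np.std(baseline, ddof=1)  # sample standard deviation
    return mu - k * sigma, mu + k * sigma

cds = [180.1, 179.8, 180.3, 180.0, 179.9, 180.2]  # hypothetical CDs, nm
lcl, ucl = control_limits(cds)
```

A mean chart alone misses the field-to-field variance excursions highlighted in the abstract, which is why the paper pairs it with additional chart metrics.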
Model reduction algorithms for optimal control and importance sampling of diffusions
NASA Astrophysics Data System (ADS)
Hartmann, Carsten; Schütte, Christof; Zhang, Wei
2016-08-01
We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.
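The importance sampling idea above, biasing sampling toward the rare region and reweighting by the density ratio, can be shown in its simplest static form: estimating a Gaussian tail probability with a mean-shifted proposal (a textbook sketch, not the paper's diffusion setting):

```python
import numpy as np

def importance_sample_tail(a=4.0, n=100_000, seed=3):
    """Importance sampling estimate of P(X > a) for X ~ N(0,1),
    using the mean-shifted proposal N(a, 1)."""
    rng = np.random.default_rng(seed)
    y = rng.normal(loc=a, size=n)               # draws from the proposal
    # likelihood ratio phi(y) / phi(y - a) of target over proposal
    w = np.exp(-0.5 * y**2 + 0.5 * (y - a)**2)
    return np.mean((y > a) * w)

est = importance_sample_tail()  # true value is about 3.17e-5
```

Naive sampling would need on the order of 10⁷ draws to see a handful of tail events here; the shifted proposal concentrates samples where they matter and corrects with the weights, which is the same variance-reduction principle the paper applies to controlled diffusions.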
Stemkens, Bjorn; Tijssen, Rob H.N.; Senneville, Baudouin D. de
2015-03-01
Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.
An S/H circuit with parasitics optimized for IF-sampling
NASA Astrophysics Data System (ADS)
Xuqiang, Zheng; Fule, Li; Zhijun, Wang; Weitao, Li; Wen, Jia; Zhihua, Wang; Shigang, Yue
2016-06-01
An IF-sampling S/H is presented, which adopts a flip-around structure, bottom-plate sampling technique and improved input bootstrapped switches. To achieve high sampling linearity over a wide input frequency range, the floating well technique is utilized to optimize the input switches. Besides, techniques of transistor load linearization and layout improvement are proposed to further reduce and linearize the parasitic capacitance. The S/H circuit has been fabricated in a 0.18-μm CMOS process as the front-end of a 14 bit, 250 MS/s pipeline ADC. For a 30 MHz input, the measured SFDR/SNDR of the ADC is 94.7 dB/68.5 dB, and remains over 84.3 dB/65.4 dB for input frequencies up to 400 MHz. The ADC presents excellent dynamic performance at high input frequency, which is mainly attributed to the parasitics-optimized S/H circuit. Project supported by the Shenzhen Project (No. JSGG20150512162029307).
Optimization of Sample Site Selection Imaging for OSIRIS-REx Using Asteroid Surface Analog Images
NASA Astrophysics Data System (ADS)
Tanquary, Hannah E.; Sahr, Eric; Habib, Namrah; Hawley, Christopher; Weber, Nathan; Boynton, William V.; Kinney-Spano, Ellyne; Lauretta, Dante
2014-11-01
OSIRIS-REx will return a sample of regolith from the surface of asteroid 101955 Bennu. The mission will obtain high resolution images of the asteroid in order to create detailed maps which will satisfy multiple mission objectives. To select a site, we must (i) identify hazards to the spacecraft and (ii) characterize a number of candidate sites to determine the optimal location for sampling. To further characterize the site, a long-term science campaign will be undertaken to constrain the geologic properties. To satisfy these objectives, the distribution and size of blocks at the sample site and backup sample site must be determined. This will be accomplished through the creation of rock size frequency distribution maps. The primary goal of this study is to optimize the creation of these map products by assessing techniques for counting blocks on small bodies, and assessing the methods of analysis of the resulting data. We have produced a series of simulated surfaces of Bennu which have been imaged, and the images processed to simulate Polycam images during the Reconnaissance phase. These surface analog images allow us to explore a wide range of imaging conditions, both ideal and non-ideal. The images have been “degraded”, and are displayed as thumbnails representing the limits of Polycam resolution from an altitude of 225 meters. Specifically, this study addresses the mission requirement that the rock size frequency distribution of regolith grains < 2 cm in longest dimension must be determined for the sample sites during Reconnaissance. To address this requirement, we focus on the range of available lighting angles. Varying illumination and phase angles in the simulated images, we can compare the size-frequency distributions calculated from the degraded images with the known size frequency distributions of the Bennu simulant material, and thus determine the optimum lighting conditions for satisfying the 2 cm requirement.
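The rock size-frequency distribution requirement above boils down to cumulative counts of blocks at or above each diameter bin; a minimal sketch with invented block measurements:

```python
import numpy as np

def cumulative_sfd(diameters_cm, bins_cm):
    """Cumulative size-frequency distribution: for each bin edge,
    the count of blocks with diameter >= that edge."""
    d = np.asarray(diameters_cm)
    return np.array([(d >= b).sum() for b in bins_cm])

rocks = [0.5, 0.8, 1.2, 1.5, 2.0, 2.4, 3.1, 5.0]  # measured long axes, cm
bins = [0.5, 1.0, 2.0, 4.0]
counts = cumulative_sfd(rocks, bins)
print(counts)  # non-increasing cumulative counts
```

Comparing such counts from degraded images against the known simulant distribution is how the study scores each lighting condition against the 2 cm requirement.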
Quality assessment and optimization of purified protein samples: why and how?
Raynal, Bertrand; Lenormand, Pascal; Baron, Bruno; Hoos, Sylviane; England, Patrick
2014-01-01
Purified protein quality control is the final and critical checkpoint of any protein production process. Unfortunately, it is too often overlooked and performed hastily, resulting in irreproducible and misleading observations in downstream applications. In this review, we aim to propose a simple-to-follow workflow, based on an ensemble of widely available physico-chemical technologies, to assess sequentially the essential properties of any protein sample: purity and integrity, homogeneity and activity. Approaches are then suggested to optimize the homogeneity, time-stability and storage conditions of purified protein preparations, as well as methods to rapidly evaluate their reproducibility and lot-to-lot consistency. PMID:25547134
Optimizing the Operating Temperature for an array of MOX Sensors on an Open Sampling System
NASA Astrophysics Data System (ADS)
Trincavelli, M.; Vergara, A.; Rulkov, N.; Murguia, J. S.; Lilienthal, A.; Huerta, R.
2011-09-01
Chemo-resistive transduction is essential for capturing the spatio-temporal structure of chemical compounds dispersed in different environments. Due to gas dispersion mechanisms, namely diffusion, turbulence and advection, the sensors in an open sampling system, i.e. directly exposed to the environment to be monitored, encounter low, strongly fluctuating gas concentrations; as a consequence, identifying and monitoring the gases is even more complicated and challenging than in a controlled laboratory setting. Therefore, tuning the operating temperature becomes crucial for successfully identifying and monitoring the pollutant gases, particularly in applications such as exploration of hazardous areas, air pollution monitoring, and search and rescue. In this study we demonstrate the benefit of optimizing the sensor's operating temperature when the sensors are deployed in an open sampling system.
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years, overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, new challenges must be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance trade-off in modeling. The first challenge in a high-order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high-quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this trade-off between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher-order overlay models means more degrees of freedom, which increases the capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is known as the bias-variance trade-off. A high-order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, and by lot-to-lot and wafer-to-wafer model term monitoring.
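The bias-variance trade-off described in this abstract can be illustrated with a small simulation; the wafer signature, noise level, and model orders below are invented for illustration and are not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 25)             # measurement sites across a wafer (normalized)
true_signature = 0.5 * x - 0.3 * x**3  # hypothetical overlay signature (nm)

def fit_residual_and_spread(order, n_wafers=200, noise=0.05):
    """Fit a polynomial model per simulated wafer; return the mean fit
    residual (bias proxy) and the wafer-to-wafer spread of the leading
    coefficient (variance proxy)."""
    residuals, lead_coefs = [], []
    for _ in range(n_wafers):
        y = true_signature + rng.normal(0.0, noise, x.size)  # metrology noise
        coefs = np.polyfit(x, y, order)
        residuals.append(np.sqrt(np.mean((np.polyval(coefs, x) - y) ** 2)))
        lead_coefs.append(coefs[0])
    return np.mean(residuals), np.std(lead_coefs)

bias_low, var_low = fit_residual_and_spread(order=1)    # low-order model
bias_high, var_high = fit_residual_and_spread(order=7)  # high-order model
```

With a fixed seed, the high-order fit leaves a smaller per-wafer residual (lower bias) but shows a much larger wafer-to-wafer spread in its fitted coefficients (higher variance), which is the trade-off the abstract describes.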
Tuomas, V.; Jaakko, L.
2013-07-01
This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1], and the first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic technique for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections; in an HTGR case examined in this paper, the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativeness of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and as yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. With this new approach it may be possible to decrease the factors to as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven, so these performance figures should be considered preliminary. (authors)
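The rejection-sampling idea behind TMS (draw a target velocity, evaluate the 0 K cross section at the relative energy, accept against a majorant) can be sketched in a deliberately simplified one-dimensional toy; the cross-section shape and kinematics below are invented for illustration and are not Serpent's:

```python
import math
import random

random.seed(1)

def sigma_0K(E):
    # Hypothetical 0 K total cross section (barns): smooth 1/v-like shape.
    return 2.0 + 5.0 / math.sqrt(E)

def sample_collision(E_neutron, kT, sigma_maj):
    """TMS-style rejection sampling: draw toy thermal target velocities
    until the 0 K cross section at the relative energy passes the
    majorant test; return the accepted relative energy."""
    while True:
        v_t = random.gauss(0.0, math.sqrt(kT))          # toy 1-D thermal speed
        E_rel = max(1e-6, E_neutron + 0.5 * v_t * v_t)  # toy relative energy
        # Accept with probability sigma_0K(E_rel) / sigma_maj (must be <= 1).
        if random.random() < sigma_0K(E_rel) / sigma_maj:
            return E_rel

E_rel = sample_collision(E_neutron=1.0, kT=0.025, sigma_maj=10.0)
```

The "conservativeness" parameter mentioned in the abstract corresponds here to how tight `sigma_maj` is: a looser majorant is always valid but wastes rejection cycles, which is exactly the overhead being tuned.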
Optimization of multi-channel neutron focusing guides for extreme sample environments
NASA Astrophysics Data System (ADS)
Di Julio, D. D.; Lelièvre-Berna, E.; Courtois, P.; Andersen, K. H.; Bentley, P. M.
2014-07-01
In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
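Of the population-based algorithms mentioned, differential evolution is the easiest to sketch; the two-parameter "gain" function below is a hypothetical stand-in for the Monte-Carlo ray-tracing figure of merit, not the authors' actual objective:

```python
import random

random.seed(0)

def gain(params):
    # Hypothetical figure of merit with a single peak at (2.4, 0.7),
    # standing in for the simulated neutron gain of a guide geometry.
    x, y = params
    return -((x - 2.4) ** 2 + (y - 0.7) ** 2)

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200):
    """Minimal DE/rand/1/bin maximizer over box-constrained parameters."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Mutate and crossover component-wise, then clip to bounds.
            trial = [ai + F * (bi - ci) if random.random() < CR else xi
                     for xi, ai, bi, ci in zip(pop[i], a, b, c)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) >= f(pop[i]):   # greedy selection
                pop[i] = trial
    return max(pop, key=f)

best = differential_evolution(gain, bounds=[(0.0, 5.0), (0.0, 2.0)])
```

In the paper's setting, each call to the objective would be a full Monte-Carlo ray-tracing run, so the population size and generation count are the main cost knobs.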
Ma, Li; Wang, Lin; Tang, Jie; Yang, Zhaoguang
2016-08-01
Statistical experimental designs were employed to optimize the extraction conditions for arsenic species (As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA)) in paddy rice, using a simple solvent extraction with water as the extraction reagent. The effects of the variables were estimated by a two-level Plackett-Burman factorial design. A five-level central composite design was subsequently employed to optimize the significant factors. Based on the desirability function, the significant factors were set to 60 min of shaking time and 85 °C of extraction temperature, as a compromise between experiment duration and extraction efficiency. The analytical performance, including linearity, method detection limits, relative standard deviation and recovery, was examined; the data exhibited a broad linear range, high sensitivity and good precision. The proposed method was applied to real rice samples. The species As(III), As(V) and DMA were detected in all the rice samples, mostly in the order As(III) > As(V) > DMA. PMID:26988503
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
NASA Astrophysics Data System (ADS)
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations on investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best among a set of investigation strategies: they optimize the expected impact of data on prediction confidence, or related objectives, prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools; provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect); allows for any desired task-driven formulation; and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to provide these crucial advantages simultaneously; our method buys them at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
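PreDIA's central move (marginalizing a utility measure over the yet unknown data values, using an ensemble and bootstrap-filter-style weights) can be sketched on a toy linear model; the prior, noise level, and the two candidate designs below are invented, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ensemble: an uncertain parameter and a model prediction of interest.
n = 5000
theta = rng.normal(0.0, 1.0, n)   # e.g., a log-conductivity-like parameter
prediction = 2.0 * theta          # prediction whose uncertainty we want to reduce
noise = 0.5                       # measurement error standard deviation

def expected_posterior_var(h):
    """Preposterior analysis for a candidate measurement y = h*theta + eps:
    simulate data from the ensemble itself (marginalizing over the unknown
    data values), weight members by the likelihood, and average the
    resulting posterior variance of the prediction."""
    y_sim = h * theta + rng.normal(0.0, noise, n)
    total = 0.0
    for y in y_sim[:200]:                     # subsample data realizations for speed
        w = np.exp(-0.5 * ((y - h * theta) / noise) ** 2)
        w /= w.sum()                          # bootstrap-filter weights
        mean = np.sum(w * prediction)
        total += np.sum(w * (prediction - mean) ** 2)
    return total / 200

var_weak = expected_posterior_var(h=0.1)    # weakly informative design
var_strong = expected_posterior_var(h=1.0)  # strongly informative design
```

The design with the smaller expected posterior variance is preferred; no linearization is involved, which is the property the abstract emphasizes.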
Optimization of a pre-MEKC separation SPE procedure for steroid molecules in human urine samples.
Olędzka, Ilona; Kowalski, Piotr; Dziomba, Szymon; Szmudanowski, Piotr; Bączek, Tomasz
2013-01-01
Many steroid hormones can be considered potential biomarkers, and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop new, relatively low-cost and rapidly implemented methodologies for their determination in biological samples. Because there is little literature data on concentrations of steroid hormones in urine samples, we have made attempts at the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid-phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate the analytes, a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes, a running buffer (pH 9.2) composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers--both men and women (students and amateur bodybuilders, with and without steroid doping). The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples. PMID:24232737
Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.
Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène
2016-06-01
Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air-to-liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved with scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L(-1) and 10% for 10 mBq L(-1). While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L(-1), a conservative experimental estimate is closer to 5 mBq L(-1), corresponding to 0.14 fg g(-1). The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot easily be measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. PMID:26998570
Karoly, Paul; Jung Mun, Chung; Okun, Morris
2015-03-30
Patterns of problematic volitional control in schizotypal personality disorder pertaining to goal process representation (GPR), approach and avoidance temperament, and aberrant salience have not been widely investigated in emerging adults. The present study aimed to provide preliminary evidence for the utility of examining these three motivational constructs as predictors of high versus low levels of psychometrically defined schizotypy in a non-clinical sample. When college students with high levels of self-reported schizotypy (n = 88) were compared to those with low levels (n = 87) by means of logistic regression, aberrant salience, avoidant temperament, and the self-criticism component of GPR together accounted for 51% of the variance in schizotypy group assignment. Higher scores on these three motivational dimensions reflected a proclivity toward higher levels of schizotypy. The current findings justify the continued exploration of goal-related constructs as useful motivational elements in psychopathology research. PMID:25638536
Pianciola, L; Mazzeo, M; Flores, D; Hozbor, D
2010-01-01
Pertussis, or whooping cough, is an acute, highly contagious respiratory infection, which is particularly severe in infants under one year old. In classic disease, clinical diagnosis may present no difficulties; in other cases, it requires laboratory confirmation. The generally used methods are culture, serology and PCR. For the latter, the sample of choice is a nasopharyngeal aspirate, and the simplest method for processing these samples uses proteinase K. Although results are generally satisfactory, difficulties often arise from the mucosal nature of the specimens. Moreover, uncertainties exist regarding the optimal conditions for sample storage. This study evaluated various techniques for processing and storing samples. The results enabled us to select a method for optimized sample processing, with performance comparable to commercial methods at far lower cost. The experiments designed to assess sample conservation yielded valuable information to guide the referral of samples from patient care centres to the laboratories where they are processed by molecular methods. PMID:20589331
Severtson, Dustin; Flower, Ken; Nansen, Christian
2016-08-01
The cabbage aphid is a significant pest worldwide in brassica crops, including canola. This pest has shown considerable ability to develop resistance to insecticides, so these should only be applied on a "when and where needed" basis. Thus, optimized sampling plans to accurately assess cabbage aphid densities are critically important to determine the potential need for pesticide applications. In this study, we developed a spatially optimized binomial sequential sampling plan for cabbage aphids in canola fields. Based on five sampled canola fields, sampling plans were developed using 0.1, 0.2, and 0.3 proportions of plants infested as action thresholds. Average sample numbers required to make a decision ranged from 10 to 25 plants. Decreasing acceptable error from 10 to 5% was not considered practically feasible, as it substantially increased the number of samples required to reach a decision. We determined the relationship between the proportions of canola plants infested and cabbage aphid densities per plant, and proposed a spatially optimized sequential sampling plan for cabbage aphids in canola fields, in which spatial features (i.e., edge effects) and optimization of sampling effort (i.e., sequential sampling) are combined. Two forms of stratification were performed to reduce spatial variability caused by edge effects and large field sizes. Spatially optimized sampling, starting at the edge of fields, reduced spatial variability and therefore increased the accuracy of infested plant density estimates. The proposed spatially optimized sampling plan may be used to spatially target insecticide applications, resulting in cost savings, insecticide resistance mitigation, conservation of natural enemies, and reduced environmental impact. PMID:27371709
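A binomial sequential sampling plan of the kind discussed here is commonly built on Wald's sequential probability ratio test; the thresholds below (0.1 vs. 0.3 proportion of plants infested, 10% error rates) are illustrative choices, not necessarily the authors' exact plan:

```python
import math

def sprt_bounds(p0, p1, alpha=0.1, beta=0.1):
    """Wald SPRT decision lines for a binomial proportion: cumulative
    infested count d versus plants inspected n, d = slope*n + intercept."""
    a = math.log((1 - beta) / alpha)   # upper (treat) log-threshold
    b = math.log(beta / (1 - alpha))   # lower (no-treat) log-threshold
    s = math.log((1 - p0) / (1 - p1))
    denom = math.log(p1 / p0) + s
    return s / denom, b / denom, a / denom

def decide(infested_seq, p0=0.1, p1=0.3):
    """Inspect plants one by one; stop as soon as a decision is reached."""
    slope, lo, hi = sprt_bounds(p0, p1)
    d = 0
    for n, infested in enumerate(infested_seq, start=1):
        d += infested
        if d >= slope * n + hi:
            return "treat", n
        if d <= slope * n + lo:
            return "no-treat", n
    return "keep sampling", len(infested_seq)

decision, n_used = decide([1, 1, 1, 1, 1, 1, 1, 1])  # heavily infested run
```

With a run of infested plants the plan stops after only a few samples, which is why average sample numbers in such plans can be as low as the 10-25 plants reported in the abstract.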
NASA Astrophysics Data System (ADS)
Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc
2015-03-01
The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found, or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well defined for standard reconstructions, e.g. filtered backprojection, the iterative algorithms lack these metrics. To overcome this issue, alternative methodologies like model observers have been proposed recently to allow quantification of a usually task-dependent image quality metric. As an alternative, we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image of highest diagnostic quality that provides high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding, we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.
Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models
NASA Astrophysics Data System (ADS)
Thon, Ingo
One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximate methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation, through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution, which is normally prohibitively slow.
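For context, the bootstrap particle filter that the paper upgrades uses the transition model itself as the proposal distribution; a minimal numeric sketch, with invented Gaussian transition and observation models rather than the paper's relational ones, is:

```python
import math
import random

random.seed(3)

def particle_filter(observations, n_particles=500):
    """Minimal bootstrap particle filter for a 1-D latent state.
    The paper's contribution replaces this transition-prior proposal
    with an optimal, BDD-compiled proposal for relational models."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propose from the transition model (the 'bootstrap' proposal).
        particles = [x + random.gauss(0.0, 0.3) for x in particles]
        # Weight by the Gaussian observation likelihood (std 0.5).
        weights = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

est = particle_filter([0.5, 0.6, 0.7, 0.8])
```

The better the proposal matches the true posterior, the less weight degeneracy the resampling step has to repair, which is why an optimal proposal is worth the compilation effort.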
Clague, D; Weisgraber, T; Rockway, J; McBride, K
2006-02-12
The focus of the research effort described here is to develop novel simulation tools to address design and optimization needs in the general class of problems that involve species and fluid (liquid and gas phase) transport through sieving media. This was primarily motivated by the heightened attention on Chem/Bio early detection systems, which, among other needs, require high-efficiency filtration, collection and sample preparation systems. Hence, the goal was to develop the computational analysis tools necessary to optimize these critical operations. This new capability is designed to characterize system efficiencies based on the details of the microstructure and environmental effects. To accomplish this, new lattice Boltzmann simulation capabilities were developed to include detailed microstructure descriptions, the relevant surface forces that mediate species capture and release, and temperature effects for both liquid- and gas-phase systems. While developing the capability, demonstration and model systems (and subsystems) of national and programmatic interest were targeted to demonstrate it. As a result, where possible, experimental verification of the computational capability was performed, either directly using Digital Particle Image Velocimetry or against published results.
A sampling optimization analysis of soil-bugs diversity (Crustacea, Isopoda, Oniscidea).
Messina, Giuseppina; Cazzolla Gatti, Roberto; Droutsa, Angeliki; Barchitta, Martina; Pezzino, Elisa; Agodi, Antonella; Lombardo, Bianca Maria
2016-01-01
Biological diversity analysis is among the most informative approaches for describing communities and regional species compositions. Soil ecosystems include large numbers of invertebrates, among which soil bugs (Crustacea, Isopoda, Oniscidea) play significant ecological roles. The aim of this study was to provide advice on optimizing the sampling effort, to efficiently monitor the diversity of this taxon, to analyze its seasonal patterns of species composition, and ultimately to better understand the coexistence of so many species over a relatively small area. Terrestrial isopods were collected at the Natural Reserve "Saline di Trapani e Paceco" (Italy), using pitfall traps monitored monthly over 2 years. We analyzed parameters of α- and β-diversity and calculated a number of indexes and measures to disentangle diversity patterns. We also used various approaches to analyze changes in biodiversity over time, such as distributions of species abundances and accumulation and rarefaction curves. In terms of species richness and total abundance of individuals, spring proved to be the best season to monitor Isopoda, allowing reduced sampling effort and saving resources without losing information, while in both years abundances peaked between summer and autumn. This suggests that evaluations of β-diversity are maximized if samples are first collected during spring and then between summer and autumn. Sampling during these paired seasons allows collection of a number of species close to the γ-diversity (24 species) of the area. Finally, our results show that seasonal shifts in community composition (i.e., dynamic fluctuations in species abundances across the four seasons) may minimize competitive interactions, contribute to stabilizing total abundances, and allow the coexistence of phylogenetically close species within the ecosystem. PMID:26811784
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the optimization results, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, providing an effective means, as well as a theoretical basis, for determining sampling configurations and mapping the spatial distribution of soil organic matter at low cost and high efficiency. PMID:26211074
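Simulated annealing for spatial sampling design, as used above, can be sketched with a toy one-dimensional coverage criterion; the candidate sites, design criterion, and cooling schedule below are invented for illustration, not the study's configuration:

```python
import math
import random

random.seed(4)

# Candidate sample sites along a road (toy 1-D positions, km).
candidates = [i * 0.5 for i in range(40)]

def criterion(design):
    # Coverage criterion: worst-case distance from any candidate location
    # to its nearest sampled site (smaller = better spatial coverage).
    return max(min(abs(c - d) for d in design) for c in candidates)

def anneal(k=8, T0=2.0, cooling=0.995, steps=4000):
    """Simulated annealing over k-site designs: perturb one site at a time,
    always accept improvements, accept worse moves with Boltzmann probability."""
    design = random.sample(candidates, k)
    cost = criterion(design)
    T = T0
    for _ in range(steps):
        new = design[:]
        new[random.randrange(k)] = random.choice(candidates)  # move one site
        if len(set(new)) == k:                                # keep sites distinct
            c = criterion(new)
            if c < cost or random.random() < math.exp((cost - c) / T):
                design, cost = new, c
        T *= cooling
    return design, cost

design, cost = anneal()
```

In the study, the criterion would instead score how well the configuration captures the soil-landscape relationship, but the accept/cool loop is the same.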
Verant, Michelle L; Bohuski, Elizabeth A; Lorch, Jeffery M; Blehert, David S
2016-03-01
The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine quantification capabilities of this assay. PMID:26965231
NASA Astrophysics Data System (ADS)
Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.
2012-01-01
There are numerous anomaly detection algorithms proposed for hyperspectral imagery (HSI). Robust parameter design (RPD) techniques provide an avenue to select robust settings capable of operating consistently across a large variety of image scenes. Many researchers in this area are faced with a paucity of data, and unfortunately there are no data-splitting methods for model validation of datasets with small sample sizes; typically, training and test sets of hyperspectral images are chosen randomly. Previous research has developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher's score, ratio of target pixels and number of clusters. We have developed a method for selecting hyperspectral image training and test subsets that yields consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. The small-sample training and test selection method is contrasted with randomly selected training sets, as well as training sets chosen by the CADEX and DUPLEX algorithms, for the well-known Reed-Xiaoli anomaly detector.
A simple optimized microwave digestion method for multielement monitoring in mussel samples
NASA Astrophysics Data System (ADS)
Saavedra, Y.; González, A.; Fernández, P.; Blanco, J.
2004-04-01
With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. The optimal conditions were found to be 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards, with matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, with good results.
Fauvelle, Vincent; Mazzella, Nicolas; Belles, Angel; Moreira, Aurélie; Allan, Ian J; Budzinski, Hélène
2014-05-01
This paper presents an optimization of the pharmaceutical Polar Organic Chemical Integrative Sampler (POCIS-200) under controlled laboratory conditions for the sampling of acidic (2,4-dichlorophenoxyacetic acid (2,4-D), acetochlor ethanesulfonic acid (ESA), acetochlor oxanilic acid, bentazon, dicamba, mesotrione, and metsulfuron) and polar (atrazine, diuron, and desisopropylatrazine) herbicides in water. Indeed, the conventional configuration of the POCIS-200 (46 cm(2) exposure window, 200 mg of Oasis® hydrophilic-lipophilic balance (HLB) receiving phase) is not appropriate for the sampling of very polar and acidic compounds, because they rapidly reach thermodynamic equilibrium with the Oasis HLB receiving phase. Thus, we investigated several ways to extend the initial linear accumulation. On the one hand, increasing the mass of sorbent to 600 mg resulted in sampling rates (Rs values) twice as high as those observed with 200 mg (e.g., 287 vs. 157 mL day(-1) for acetochlor ESA). Although detection limits could thereby be reduced, most acidic analytes followed a biphasic uptake, precluding the use of the conventional first-order model and preventing us from estimating time-weighted average concentrations. On the other hand, reducing the exposure window (3.1 vs. 46 cm(2)) allowed linear accumulation of all analytes over 35 days, but Rs values were dramatically reduced (e.g., 157 vs. 11 mL day(-1) for acetochlor ESA). Moreover, the observation of biphasic releases of performance reference compounds (PRCs), mirroring the biphasic uptake of the acidic herbicides, might complicate the implementation of the PRC approach to correct for environmental exposure conditions. PMID:24691721
Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors.
Riva-Murray, Karen; Bradley, Paul M; Scudder Eikenberry, Barbara C; Knightes, Christopher D; Journey, Celeste A; Brigham, Mark E; Button, Daniel T
2013-06-01
Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hg(fish)) divided by the water Hg concentration (Hg(water)) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hg(water) sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hg(water) estimates. Models were evaluated for parsimony, using Akaike's Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg-UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics. PMID:23668662
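The quantities at the core of this abstract are simple to state in code: a BAF is a concentration ratio, and candidate water-Hg models are ranked by Akaike's Information Criterion. A minimal sketch, with entirely hypothetical fit numbers chosen only to illustrate the comparison:

```python
import math

def baf(hg_fish, hg_water):
    """Bioaccumulation factor: fish Hg concentration divided by water Hg concentration."""
    return hg_fish / hg_water

def aic(n, rss, k):
    """Akaike's Information Criterion for a least-squares fit:
    n observations, residual sum of squares rss, k fitted parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical fit results for two candidate water-Hg predictors (11 sites).
n = 11
aic_fmehg = aic(n, rss=0.8, k=3)   # filtered methylmercury (FMeHg) model
aic_fthg = aic(n, rss=2.4, k=3)    # filtered total mercury model
better = "FMeHg" if aic_fmehg < aic_fthg else "FTHg"  # lower AIC is more parsimonious
```

With equal parameter counts, the model with the smaller residual sum of squares wins, mirroring the paper's finding that methylmercury-based models were more parsimonious than filtered total mercury.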
Acharya, Munjal M.; Martirosian, Vahan; Christie, Lori-Ann; Riparip, Lara; Strnadel, Jan; Parihar, Vipan K.
2015-01-01
Past preclinical studies have demonstrated the capability of using human stem cell transplantation in the irradiated brain to ameliorate radiation-induced cognitive dysfunction. Intrahippocampal transplantation of human embryonic stem cells and human neural stem cells (hNSCs) was found to functionally restore cognition in rats 1 and 4 months after cranial irradiation. To optimize the potential therapeutic benefits of human stem cell transplantation, we have further defined optimal transplantation windows for maximizing cognitive benefits after irradiation and used induced pluripotent stem cell-derived hNSCs (iPSC-hNSCs) that may eventually help minimize graft rejection in the host brain. For these studies, animals given an acute head-only dose of 10 Gy were grafted with iPSC-hNSCs at 2 days, 2 weeks, or 4 weeks following irradiation. Animals receiving stem cell grafts showed improved hippocampal spatial memory and contextual fear-conditioning performance compared with irradiated sham-surgery controls when analyzed 1 month after transplantation surgery. Importantly, superior performance was evident when stem cell grafting was delayed by 4 weeks following irradiation compared with animals grafted at earlier times. Analysis of the 4-week cohort showed that the surviving grafted cells migrated throughout the CA1 and CA3 subfields of the host hippocampus and differentiated into neuronal (∼39%) and astroglial (∼14%) subtypes. Furthermore, radiation-induced inflammation was significantly attenuated across multiple hippocampal subfields in animals receiving iPSC-hNSCs at 4 weeks after irradiation. These studies expand our prior findings to demonstrate that protracted stem cell grafting provides improved cognitive benefits following irradiation that are associated with reduced neuroinflammation. PMID:25391646
Nie Xiaobo; Liang Jian; Yan Di
2012-12-15
Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the function form of the coefficients. Eleven head-and-neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during H&N cancer radiotherapy can be represented using the first 3-4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
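The core sampling step of such a generator, once the PCA modes are known, is to draw random coefficients and reconstruct a shape as the mean plus a weighted sum of eigenvectors. A minimal sketch under that assumption; the function name, the toy dimensions, and the Gaussian coefficient model are illustrative, not the authors' implementation:

```python
import random

def generate_organ_sample(mean_shape, eigenvectors, coeff_sigmas, rng=random):
    """Draw one random organ shape: mean + sum of random coefficients times
    PCA eigenvectors of the daily geometric variation. Coefficients are
    assumed Gaussian here (stationary case); time-varying coefficients
    would replace coeff_sigmas with functions of the elapsed treatment day."""
    coeffs = [rng.gauss(0.0, s) for s in coeff_sigmas]
    sample = list(mean_shape)
    for c, vec in zip(coeffs, eigenvectors):
        for i, v in enumerate(vec):
            sample[i] += c * v
    return sample
```

Repeated calls yield the random organ samples from the estimated variation PDF that the adaptive planner would then accumulate into an expected treatment dose.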
D'Hondt, Matthias; Van Dorpe, Sylvia; Mehuys, Els; Deforce, Dieter; DeSpiegeleer, Bart
2010-12-01
A sensitive and selective HPLC method for the assay and degradation of salmon calcitonin, a 32-amino acid peptide drug, formulated at low concentrations (400 ppm m/m) in a bioadhesive nasal powder containing polymers, was developed and validated. The sample preparation step was optimized using Plackett-Burman and Onion experimental designs. The response functions evaluated were calcitonin recovery and analytical stability. The best results were obtained by treating the sample with 0.45% (v/v) trifluoroacetic acid at 60 degrees C for 40 min. These extraction conditions did not yield any observable degradation, while a maximum recovery for salmon calcitonin of 99.6% was obtained. The HPLC-UV/MS methods used a reversed-phase C(18) Vydac Everest column, with a gradient system based on aqueous acid and acetonitrile. UV detection, using trifluoroacetic acid in the mobile phase, was used for the assay of calcitonin and related degradants. Electrospray ionization (ESI) ion trap mass spectrometry, using formic acid in the mobile phase, was implemented for the confirmatory identification of degradation products. Validation results showed that the methodology was fit for the intended use, with accuracy of 97.4+/-4.3% for the assay and detection limits for degradants ranging between 0.5 and 2.4%. Pilot stability tests of the bioadhesive powder under different storage conditions showed a temperature-dependent decrease in salmon calcitonin assay value, with no equivalent increase in degradation products, explained by the chemical interaction between salmon calcitonin and the carbomer polymer. PMID:20655159
Barau, Caroline; Furlan, Valérie; Debray, Dominique; Taburet, Anne-Marie; Barrail-Tran, Aurélie
2012-01-01
AIMS: The aims were to estimate the mycophenolic acid (MPA) population pharmacokinetic parameters in paediatric liver transplant recipients, to identify the factors affecting MPA pharmacokinetics and to develop a limited sampling strategy to estimate individual MPA AUC(0,12 h). METHODS: Twenty-eight children, 1.1 to 18.0 years old, received oral mycophenolate mofetil (MMF) therapy combined with either tacrolimus (n = 23) or ciclosporin (n = 5). The population parameters were estimated from a model-building set of 16 intensive pharmacokinetic datasets obtained from 16 children. The data were analyzed by nonlinear mixed effect modelling, using a one compartment model with first order absorption and first order elimination and random effects on the absorption rate (ka), the apparent volume of distribution (V/F) and apparent clearance (CL/F). RESULTS: Two covariates, time since transplantation (≤ and >6 months) and age affected MPA pharmacokinetics. ka, estimated at 1.7 h−1 at age 8.7 years, exhibited large interindividual variability (308%). V/F, estimated at 64.7 l, increased about 2.3 times in children during the immediate post transplantation period. This increase was due to the increase in the unbound MPA fraction caused by the low albumin concentration. CL/F was estimated at 12.7 l h−1. To estimate individual AUC(0,12 h), the pharmacokinetic parameters obtained with the final model, including covariates, were coded in Adapt II® software, using the Bayesian approach. The AUC(0,12 h) estimated from concentrations measured 0, 1 and 4 h after administration of MMF did not differ from reference values. CONCLUSIONS: This study allowed the estimation of the population pharmacokinetic MPA parameters. A simple sampling procedure is suggested to help to optimize paediatric patient care. PMID:22329639
Chiang, Emily C; Shen, Shuren; Kengeri, Seema S; Xu, Huiping; Combs, Gerald F; Morris, J Steven; Bostwick, David G; Waters, David J
2009-01-01
Our work in dogs has revealed a U-shaped dose response between selenium status and prostatic DNA damage that remarkably parallels the relationship between dietary selenium and prostate cancer risk in men, suggesting that more selenium is not necessarily better. Herein, we extend this canine work to show that the selenium dose that minimizes prostatic DNA damage also maximizes apoptosis, a cancer-suppressing death switch used by prostatic epithelial cells. These provocative findings suggest a new line of thinking about how selenium can reduce cancer risk. Mid-range selenium status (0.67-0.92 ppm in toenails) favors a process we call "homeostatic housecleaning", an upregulated apoptosis that preferentially purges damaged prostatic cells. Also, the U-shaped relationship provides valuable insight into stratifying individuals as selenium-responsive or selenium-refractory, based upon the likelihood of reducing their cancer risk by additional selenium. By studying elderly dogs, the only non-human animal model of spontaneous prostate cancer, we have established a robust experimental approach bridging the gap between laboratory and human studies that can help to define the optimal doses of cancer preventives for large-scale human trials. Moreover, our observations bring much needed clarity to the null results of the Selenium and Vitamin E Cancer Prevention Trial (SELECT) and set a new research priority: testing whether men with low, suboptimal selenium levels less than 0.8 ppm in toenails can achieve cancer risk reduction through daily supplementation. PMID:20877485
Acoustic investigations of lakes as justification for the optimal location of core sampling
NASA Astrophysics Data System (ADS)
Krylov, P.; Nourgaliev, D.; Yasonov, P.; Kuzin, D.
2014-12-01
Lacustrine sediments contain a long, high-resolution record of sedimentation processes associated with changes in the environment. Paleomagnetic study of these sediments provides a detailed trace of changes in the paleoenvironment. However, factors such as landslides, earthquakes and the presence of gas in the sediments can disturb sediment stratification. Seismic profiling makes it possible to investigate the bottom relief in detail and to obtain information about the thickness and structure of the deposits, which makes this method ideally suited for determining the configuration of the lake basin and the overlying lake sediment stratigraphy. Most seismic studies have concentrated on large and deep lakes containing a thick sedimentary sequence, but small and shallow lakes with a thinner sedimentary column, located in key geographic locations and geological settings, can also provide a valuable record of Holocene history. Seismic data are crucial when choosing the optimal location for core sampling. Thus, continuous seismic profiling should be used routinely before coring lake sediments for paleoclimate reconstruction. We have carried out seismic profiling on lakes Balkhash (Kazakhstan), Yarovoye, Beloe, Aslykul and Chebarkul (Russia). The results of the field work will be presented in the report. The work is performed according to the Russian Government Program of Competitive Growth of Kazan Federal University and by RFBR research projects No. 14-05-31376-a and 14-05-00785-a.
NASA Astrophysics Data System (ADS)
Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.
2007-07-01
The present paper describes the optimization of sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ environmental water rejects analysis. The optimal dimensions have been achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process has been performed for the proposed preliminary setup with measurements of thermal neutron flux by the activation technique of indium foils, bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.
Pietrzyńska, Monika; Voelkel, Adam
2014-11-01
In-needle extraction was applied for the preparation of aqueous samples. This technique was used for direct isolation of analytes from liquid samples, achieved by forcing the sample to flow through the sorbent layer: silica or polymer (styrene/divinylbenzene). A specially designed needle was packed with three different sorbents on which the analytes (phenol, p-benzoquinone, 4-chlorophenol, thymol and caffeine) were retained. Acceptable sampling conditions for direct analysis of liquid samples were selected. Experimental data collected from the series of liquid-sample analyses made with the in-needle device showed that the effectiveness of the system depends on parameters such as breakthrough volume, sorption capacity, sampling flow rate, the solvent used for elution, and the volume of solvent required for the elution step. The optimal sampling flow rate was in the range of 0.5-2 mL/min, and the minimum solvent volume was 400 µL. PMID:25127610
2015-01-01
criteria for paraphilia are too inclusive. Suggestions are given to improve the definition of pathological sexual interests, and the crucial difference between SF and sexual interest is underlined. Joyal CC. Defining “normophilic” and “paraphilic” sexual fantasies in a population‐based sample: On the importance of considering subgroups. Sex Med 2015;3:321–330. PMID:26797067
Bellanti, Francesco; Di Iorio, Vincenzo Luca; Danhof, Meindert; Della Pasqua, Oscar
2016-09-01
Despite wide clinical experience with deferiprone, the optimum dosage in children younger than 6 years remains to be established. This analysis aimed to optimize the design of a prospective clinical study for the evaluation of deferiprone pharmacokinetics in children. A 1-compartment model with first-order oral absorption was used for the purposes of the analysis. Different sampling schemes were evaluated under the assumption of a constrained population size. A sampling scheme with 5 samples per subject was found to be sufficient to ensure accurate characterization of the pharmacokinetics of deferiprone. Whereas the accuracy of parameter estimates was high, precision was slightly reduced because of the small sample size (CV% >30% for Vd/F and KA). Mean AUC ± SD was found to be 33.4 ± 19.2 and 35.6 ± 20.2 mg · h/mL, and mean Cmax ± SD was found to be 10.2 ± 6.1 and 10.9 ± 6.7 mg/L based on sparse and frequent sampling, respectively. The results showed that typical frequent sampling schemes and sample sizes do not guarantee accurate model and parameter identifiability. Expectation of the determinant (ED) optimality and simulation-based optimization concepts can be used to support pharmacokinetic bridging studies. Of importance is the accurate estimation of the magnitude of the covariate effects, as they partly determine the dose recommendation for the population of interest. PMID:26785826
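The structural model named in this abstract, a 1-compartment model with first-order oral absorption, has a standard closed form, and an AUC from a sampling scheme can be approximated by the trapezoidal rule. A minimal sketch (generic textbook equations, not the study's fitted model; all parameter values in the test are hypothetical):

```python
import math

def conc_oral_1cpt(t, dose, f, ka, cl, v):
    """Plasma concentration at time t for a one-compartment model with
    first-order absorption (ka) and first-order elimination (ke = CL/V),
    after a single oral dose with bioavailability f. Assumes ka != ke."""
    ke = cl / v
    return (f * dose * ka) / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def auc_trapz(times, concs):
    """Linear trapezoidal AUC over a (sparse or frequent) sampling scheme."""
    return sum((times[i + 1] - times[i]) * (concs[i] + concs[i + 1]) / 2
               for i in range(len(times) - 1))
```

Comparing `auc_trapz` over a 5-point sparse scheme against a dense grid is essentially the kind of design evaluation the analysis performs, before layering on population variability and ED-optimality.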
Pradeu, Thomas; Laplane, Lucie; Prévot, Karine; Hoquet, Thierry; Reynaud, Valentine; Fusco, Giuseppe; Minelli, Alessandro; Orgogozo, Virginie; Vervoort, Michel
2016-01-01
Is it possible, and in the first place is it even desirable, to define what "development" means and to determine the scope of the field called "developmental biology"? Though these questions appeared crucial for the founders of "developmental biology" in the 1950s, there seems to be no consensus today about the need to address them. Here, in a combined biological, philosophical, and historical approach, we ask whether it is possible and useful to define biological development, and, if such a definition is indeed possible and useful, which definition(s) can be considered as the most satisfactory. PMID:26969977
... of the American Society for Reproductive Medicine Defining infertility What is infertility? Infertility is “the inability to conceive after 12 months ... to conceive after 6 months is generally considered infertility. How common is it? Infertility affects 10%-15% ...
ERIC Educational Resources Information Center
Tholkes, Ben F.
1998-01-01
Defines camping risks and lists types and examples: (1) objective risk beyond control; (2) calculated risk based on personal choice; (3) perceived risk; and (4) reckless risk. Describes campers to watch ("immortals" and abdicators), and several "treatments" of risk: avoidance, safety procedures and well-trained staff, adequate insurance, and a…
Lundin, Jessica I; Dills, Russell L; Ylitalo, Gina M; Hanson, M Bradley; Emmons, Candice K; Schorr, Gregory S; Ahmad, Jacqui; Hempelmann, Jennifer A; Parsons, Kim M; Wasser, Samuel K
2016-01-01
Biologic sample collection in wild cetacean populations is challenging. Most information on toxicant levels is obtained from blubber biopsy samples; however, sample collection is invasive and strictly regulated under permit, thus limiting sample numbers. Methods are needed to monitor toxicant levels that increase temporal and repeat sampling of individuals for population health and recovery models. The objective of this study was to optimize measuring trace levels (parts per billion) of persistent organic pollutants (POPs), namely polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), dichlorodiphenyltrichloroethanes (DDTs), and hexachlorocyclobenzene, in killer whale scat (fecal) samples. Archival scat samples, initially collected, lyophilized, and extracted with 70 % ethanol for hormone analyses, were used to analyze POP concentrations. The residual pellet was extracted and analyzed using gas chromatography coupled with mass spectrometry. Method detection limits ranged from 11 to 125 ng/g dry weight. The described method is suitable for p,p'-DDE, PCBs-138, 153, 180, and 187, and PBDEs-47 and 100; other POPs were below the limit of detection. We applied this method to 126 scat samples collected from Southern Resident killer whales. Scat samples from 22 adult whales also had known POP concentrations in blubber and demonstrated significant correlations (p < 0.01) between matrices across target analytes. Overall, the scat toxicant measures matched previously reported patterns from blubber samples of decreased levels in reproductive-age females and a decreased p,p'-DDE/∑PCB ratio in J-pod. Measuring toxicants in scat samples provides an unprecedented opportunity to noninvasively evaluate contaminant levels in wild cetacean populations; these data have the prospect to provide meaningful information for vital management decisions. PMID:26298464
Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
Hilton, Paul; Robinson, Dudley
2011-06-01
This paper is a summary of the presentations made as Proposal 2-"Defining cure" to the 2nd Annual meeting of the ICI-Research Society, in Bristol, 16th June 2010. It reviews definitions of 'cure' and 'outcome', and considers the impact that varying definition may have on prevalence studies and cure rates. The difference between subjective and objective outcomes is considered, and the significance that these different outcomes may have for different stakeholders (e.g. clinicians, patients, carers, industry etc.) is discussed. The development of patient reported outcome measures and patient defined goals is reviewed, and consideration given to the use of composite end-points. A series of proposals are made by authors and discussants as to how currently validated outcomes should be applied, and where our future research activity in this area might be directed. PMID:21661023
Hewitt, Robert; Watson, Peter
2013-10-01
The term "biobank" first appeared in the scientific literature in 1996 and for the next five years was used mainly to describe human population-based biobanks. In recent years, the term has been used in a more general sense and there are currently many different definitions to be found in reports, guidelines and regulatory documents. Some definitions are general, including all types of biological sample collection facilities. Others are specific and limited to collections of human samples, sometimes just to population-based collections. In order to help resolve the confusion on this matter, we conducted a survey of the opinions of people involved in managing sample collections of all types. This survey was conducted using an online questionnaire that attracted 303 responses. The results show that there is consensus that the term biobank may be applied to biological collections of human, animal, plant or microbial samples; and that the term biobank should only be applied to sample collections with associated sample data, and to collections that are managed according to professional standards. There was no consensus on whether a collection's purpose, size or level of access should determine whether it is called a biobank. Putting these findings into perspective, we argue that a general, broad definition of biobank is here to stay, and that attention should now focus on the need for a universally-accepted, systematic classification of the different biobank types. PMID:24835262
Giuliano, Anna R.; Nielson, Carrie M.; Flores, Roberto; Dunne, Eileen F.; Abrahamsen, Martha; Papenfuss, Mary R.; Markowitz, Lauri E.; Smith, Danelle; Harris, Robin B.
2014-01-01
Background Human papillomavirus (HPV) infection in men contributes to infection and cervical disease in women as well as to disease in men. This study aimed to determine the optimal anatomic site(s) for HPV detection in heterosexual men. Methods A cross-sectional study of HPV infection was conducted in 463 men from 2003 to 2006. Urethral, glans penis/coronal sulcus, penile shaft/prepuce, scrotal, perianal, anal canal, semen, and urine samples were obtained. Samples were analyzed for sample adequacy and HPV DNA by polymerase chain reaction and genotyping. To determine the optimal sites for estimating HPV prevalence, site-specific prevalences were calculated and compared with the overall prevalence. Sites and combinations of sites were excluded until a recalculated prevalence was reduced by <5% from the overall prevalence. Results The overall prevalence of HPV was 65.4%. HPV detection was highest at the penile shaft (49.9% for the full cohort and 47.9% for the subcohort of men with complete sampling), followed by the glans penis/coronal sulcus (35.8% and 32.8%) and scrotum (34.2% and 32.8%). Detection was lowest in urethra (10.1% and 10.2%) and semen (5.3% and 4.8%) samples. Exclusion of urethra, semen, and either perianal, scrotal, or anal samples resulted in a <5% reduction in prevalence. Conclusions At a minimum, the penile shaft and the glans penis/coronal sulcus should be sampled in heterosexual men. A scrotal, perianal, or anal sample should also be included for optimal HPV detection. PMID:17955432
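The site-reduction procedure in this abstract (drop sampling sites until the recalculated prevalence falls less than 5% below the overall prevalence) can be sketched as a greedy search. This is an illustrative variant under that reading, not the paper's exact algorithm; all function names and the toy data in the test are hypothetical.

```python
def prevalence(men, sites):
    """Fraction of men positive at one or more of the given anatomic sites.
    Each element of `men` is the set of sites where that man tested HPV-positive."""
    return sum(1 for positives in men if positives & sites) / len(men)

def minimal_sites(men, all_sites, tolerance=0.05):
    """Greedily drop sites while the recalculated prevalence stays within
    `tolerance` (relative) of the prevalence computed over all sites."""
    full = prevalence(men, set(all_sites))
    kept = set(all_sites)
    improved = True
    while improved and len(kept) > 1:
        improved = False
        for site in sorted(kept):
            trial = kept - {site}
            if prevalence(men, trial) >= full * (1 - tolerance):
                kept = trial       # this site adds <5%: safe to exclude
                improved = True
                break
    return kept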
Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek
2016-03-01
Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) in providing more uniform coverage of a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device to validate the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus to improve them. PMID
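Latin hypercube sampling, the workhorse of the method described above, stratifies each dimension into as many intervals as there are samples and draws exactly one point per stratum. A minimal generic sketch on the unit cube (not the authors' modified variant, which additionally borders the FBA-optimal space):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random):
    """Latin hypercube sample on the unit cube: each dimension is divided
    into n_samples equal strata, and each stratum is sampled exactly once."""
    cols = []
    for _ in range(n_dims):
        # One point in each stratum [i/n, (i+1)/n), then shuffled so that
        # strata are paired randomly across dimensions.
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        cols.append(strata)
    return [list(point) for point in zip(*cols)]
```

The stratification is what gives LHS its more uniform coverage than plain Monte Carlo for the same sample count; PCA on the resulting flux samples then extracts the dominant variability patterns.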
Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2015-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
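The sampling probability P studied in this abstract can be approximated by Monte Carlo simulation: perturb the aimed needle position by the guidance system's error distribution and count hits. A minimal sketch for a spherical tumor and isotropic Gaussian error (the sphere, the isotropy assumption, and the function name are simplifications for illustration; the paper uses registered 3D tumor surfaces and anisotropic errors):

```python
import math
import random

def hit_probability(tumor_radius, rms_error, n_trials=100000, rng=random):
    """Monte Carlo estimate of the probability that one biopsy core aimed at
    the centre of a spherical tumour lands inside it, given isotropic
    Gaussian targeting error with the stated 3D RMS (per-axis sigma = rms/sqrt(3))."""
    sigma = rms_error / math.sqrt(3.0)
    hits = 0
    for _ in range(n_trials):
        dx, dy, dz = (rng.gauss(0.0, sigma) for _ in range(3))
        if dx * dx + dy * dy + dz * dz <= tumor_radius ** 2:
            hits += 1
    return hits / n_trials
```

Making the per-axis sigmas unequal in such a simulation is one way to see the paper's observation that lateral and elevational errors matter more than axial error for small targets.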
Hunt, Brian R.; Ott, Edward
2015-09-15
In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call “expansion entropy,” and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based.
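The finite-time flavor of an entropy-like expansion quantity can be illustrated on a one-dimensional map. This is a deliberately simplified sketch, not the paper's general definition (which involves Jacobians and a sup over restraining regions): for the doubling map the derivative is 2 everywhere, so the estimate should come out at ln 2, the value of the topological entropy to which expansion entropy is equivalent under appropriate conditions.

```python
import math
import random

def expansion_entropy_estimate(deriv, x0s, steps, step_fn):
    """Finite-time estimate E ~ (1/T) * ln(mean over trajectories of max(G, 1)),
    where G is the product of |f'(x)| accumulated along each trajectory."""
    total = 0.0
    for x in x0s:
        growth = 1.0
        for _ in range(steps):
            growth *= abs(deriv(x))
            x = step_fn(x)
        total += max(growth, 1.0)   # only count expansion, never contraction
    return math.log(total / len(x0s)) / steps

# doubling map on [0, 1): f(x) = 2x mod 1, with |f'(x)| = 2 everywhere
random.seed(1)
x0s = [random.random() for _ in range(100)]
E = expansion_entropy_estimate(lambda x: 2.0, x0s, steps=20,
                               step_fn=lambda x: (2.0 * x) % 1.0)
```

Here every trajectory expands by exactly 2 per step, so E equals ln 2 independently of the sampled initial conditions; for less uniform maps the sample average over initial conditions does real work.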
Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula
2015-12-01
Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E; Kaminski, Clemens F; Szabó, Gábor; Erdélyi, Miklós
2014-03-01
Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813
Tan, A A; Azman, S N; Abdul Rani, N R; Kua, B C; Sasidharan, S; Kiew, L V; Othman, N; Noordin, R; Chen, Y
2011-12-01
There is great diversity in protein sample types and origins; therefore, the optimal procedure for each sample type must be determined empirically. To obtain a reproducible and complete sample presentation that displays as many proteins as possible on the 2DE gel, it is critical to perform additional sample preparation steps that improve the quality of the final results without selectively losing proteins. To address this, we developed a general method suitable for diverse sample types based on the phenol-chloroform extraction method (represented by TRI reagent). This method yielded good results when used to analyze a human breast cancer cell line (MCF-7), Vibrio cholerae, Cryptocaryon irritans cysts and liver abscess fat tissue, representing a cell line, bacteria, parasite cysts and pus, respectively. For each sample type, several attempts were made to methodically compare protein isolation methods using the TRI-reagent Kit, EasyBlue Kit, PRO-PREP™ Protein Extraction Solution and lysis buffer. The most useful protocol allows the extraction and separation of a wide diversity of protein samples and is reproducible among repeated experiments. Our results demonstrated that the modified TRI-reagent Kit gave the highest protein yield as well as the greatest total protein spot count for all sample types. Distinctive differences in spot patterns were also observed in the 2DE gels of the different extraction methods for each sample type. PMID:22433892
NASA Astrophysics Data System (ADS)
Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.
2013-12-01
Underground nuclear tests may first be detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used in an OSI scenario, on-site sampling conditions, required sampling volumes and the establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion (UNE) site located in welded volcanic tuff. A mixture of SF6, Xe-127 and Ar-37 was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. The effectiveness of the various sampling approaches and the results of tracer gas measurements will be presented.
Reynolds, Kaycee N; Loecke, Terrance D; Burgin, Amy J; Davis, Caroline A; Riveros-Iregui, Diego; Thomas, Steven A; St Clair, Martin A; Ward, Adam S
2016-06-21
Understanding linked hydrologic and biogeochemical processes such as nitrate loading to agricultural streams requires that the sampling bias and precision of monitoring strategies be known. An existing spatially distributed, high-frequency nitrate monitoring network covering ∼40% of Iowa provided direct observations of in situ nitrate concentrations at a temporal resolution of 15 min. Systematic subsampling of nitrate records allowed for quantification of uncertainties (bias and precision) associated with estimates of various nitrate parameters, including: mean nitrate concentration, proportion of samples exceeding the nitrate drinking water standard (DWS), peak (>90th quantile) nitrate concentration, and nitrate flux. We subsampled continuous records for 47 site-year combinations mimicking common, but labor-intensive, water-sampling regimes (e.g., time-interval, stage-triggered, and dynamic-discharge storm sampling). Our results suggest that time-interval sampling most efficiently characterized all nitrate parameters, except at coarse frequencies for nitrate flux. Stage-triggered storm sampling most precisely captured nitrate flux when less than 0.19% of possible 15 min observations for a site-year were used. The time-interval strategy had the greatest return on sampling investment by most precisely and accurately quantifying nitrate parameters per sampling effort. These uncertainty estimates can aid in designing sampling strategies focused on nitrate monitoring in the tile-drained Midwest or similar agricultural regions. PMID:27192208
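The systematic-subsampling exercise described above can be sketched on synthetic data: build a 15-min record, thin it to a coarser time-interval regime, and compare parameter estimates against the full record. The seasonal-plus-noise series, the one-per-day and one-per-week intervals, and the error bounds are all illustrative, not the study's actual records or regimes.

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic 15-min nitrate record for one year: seasonal signal + noise (mg/L)
t = np.arange(365 * 24 * 4)
nitrate = (8.0 + 3.0 * np.sin(2 * np.pi * t / t.size)
           + rng.normal(0.0, 0.5, t.size))

def time_interval_subsample(series, every_n):
    """Mimic a fixed time-interval regime: keep every n-th observation."""
    return series[::every_n]

daily = time_interval_subsample(nitrate, 96)        # 96 x 15 min = 1 day
weekly = time_interval_subsample(nitrate, 96 * 7)   # 1 sample per week

# bias of the subsampled mean relative to the "true" full-record mean
bias_daily = abs(daily.mean() - nitrate.mean())
bias_weekly = abs(weekly.mean() - nitrate.mean())
```

The same thinning loop, applied over a grid of intervals and repeated with offset start times, yields the bias/precision curves per sampling effort that the study uses to rank strategies.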
The optimal process of self-sampling in fisheries: lessons learned in the Netherlands.
Kraan, M; Uhlmann, S; Steenbergen, J; Van Helmond, A T M; Van Hoof, L
2013-10-01
At-sea sampling of commercial fishery catches by observers is a relatively expensive exercise. The fact that an observer has to stay on board for the duration of the trip results in clustered samples and effectively small sample sizes, whereas the aim is to make inferences regarding several trips from an entire fleet. From this perspective, sampling by fishermen themselves (self-sampling) is an attractive alternative, because a larger number of trips can be sampled at lower cost. Self-sampling should not be used too casually, however, as there are often issues of data acceptance related to it. This article shows that these issues are not easily dealt with in a statistical manner. Improvements might be made if self-sampling is understood as a form of cooperative research, which has a number of dilemmas and benefits associated with it. This article suggests that if the guidelines for cooperative research are taken into account, the benefits are more likely to materialize. Second, acknowledging the dilemmas and consciously dealing with them might lay the basis for trust building, which is an essential element in the acceptance of data derived from self-sampling programmes. PMID:24090557
Technology Transfer Automated Retrieval System (TEKTRAN)
The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
NASA Astrophysics Data System (ADS)
Shaw, M. Sam; Coe, Joshua D.; Sewell, Thomas D.
2009-06-01
An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The ``full'' system of interest is calculated using density functional theory (DFT) with a 6-31G* basis set for the configurational energies. The ``reference'' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
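The nested scheme in this abstract can be sketched on a toy one-dimensional system: an inner Metropolis sub-chain runs on a cheap reference potential, and the composite move is then accepted using only the difference between the full and reference potentials, which restores sampling of the full distribution. The quadratic/quartic potentials, inverse temperature, and step sizes below are invented for illustration; the paper's DFT energies and NPT ensemble are far richer.

```python
import math
import random

random.seed(3)
beta = 1.0
U_ref = lambda x: 0.5 * x * x                  # cheap "reference" potential
U_full = lambda x: 0.5 * x * x + 0.1 * x ** 4  # "expensive" full potential

def nested_mcmc(n_outer, n_inner, step=0.5):
    x, samples = 0.0, []
    for _ in range(n_outer):
        # inner sub-chain: ordinary Metropolis on the reference system only
        y = x
        for _ in range(n_inner):
            y_new = y + random.uniform(-step, step)
            if random.random() < math.exp(-beta * (U_ref(y_new) - U_ref(y))):
                y = y_new
        # outer acceptance uses only the difference U_full - U_ref at the
        # endpoints, giving detailed balance w.r.t. the full distribution
        dU = (U_full(y) - U_ref(y)) - (U_full(x) - U_ref(x))
        if random.random() < math.exp(-beta * dU):
            x = y
        samples.append(x)
    return samples

samples = nested_mcmc(4000, 10)
```

The payoff is that the expensive potential is evaluated once per outer trial rather than once per inner step; the closer the reference is to the full system, the higher the outer acceptance rate.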
NASA Astrophysics Data System (ADS)
Tang, Gao; Jiang, FanHuag; Li, JunFeng
2015-11-01
Near-Earth asteroids have attracted great interest, and developments in low-thrust propulsion technology are making complex deep-space exploration missions possible. We investigate a mission that starts from low-Earth orbit and uses a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid and bring a sample back. By dividing the mission into five segments, the complex mission is solved separately, with different methods used to find optimal trajectories for each segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumption required to escape from the Earth. To avoid possible numerical difficulties of indirect methods, a direct method that parameterizes the switching times and the direction of the thrust vector is proposed. To maximize the mass of the sample, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques, including both direct and indirect methods, are investigated to optimize trajectories for the different segments, and they can easily be extended to other missions and more precise dynamic models.
Ramyachitra, D.; Sofia, M.; Manikandan, P.
2015-01-01
Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty thus lies in the high dimensionality of the data combined with the small sample size. This research work addresses the problem by classifying the resulting datasets using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improved Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city; macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils. PMID:26449847
NASA Astrophysics Data System (ADS)
Amat-Roldan, Ivan; Cormack, Iain G.; Artigas, David; Loza-Alvarez, Pablo
2004-09-01
In this paper we report the use of starch as a non-linear medium for characterising ultrashort pulses. A starch suspension in water is sandwiched between a microscope slide and a cover-slip and placed within the sample plane of the nonlinear microscope. This simple arrangement enables direct measurement of the pulses where they interact with the sample.
Yang, Yuanzhong; Boysen, Reinhard I; Hearn, Milton T W
2006-07-15
A versatile experimental approach is described to achieve very high sensitivity analysis of peptides by capillary electrophoresis-mass spectrometry with sheath flow configuration based on optimization of field-amplified sample injection. Compared to traditional hydrodynamic injection methods, signal enhancement in terms of detection sensitivity of the bioanalytes by more than 3000-fold can be achieved. The effects of injection conditions, composition of the acid and organic solvent in the sample solution, length of the water plug, sample injection time, and voltage on the efficiency of the sample stacking have been systematically investigated, with peptides in the low-nanomolar (10(-9) M) range readily detected under the optimized conditions. Linearity of the established stacking method was found to be excellent over 2 orders of magnitude of concentration. The method was further evaluated for the analysis of low concentration bioactive peptide mixtures and tryptic digests of proteins. A distinguishing feature of the described approach is that it can be employed directly for the analysis of low-abundance protein fragments generated by enzymatic digestion and a reversed-phase-based sample-desalting procedure. Thus, rapid identification of protein fragments as low-abundance analytes can be achieved with this new approach by comparison of the actual tandem mass spectra of selected peptides with the predicted fragmentation patterns using online database searching algorithms. PMID:16841892
Luczak, Magdalena; Marczak, Lukasz; Stobiecki, Maciej
2014-01-01
Shotgun proteomic methods involving iTRAQ (isobaric tags for relative and absolute quantitation) peptide labeling facilitate quantitative analyses of proteomes and searches for useful biomarkers. However, the plasma proteome's complexity and the highly dynamic plasma protein concentration range limit the ability of conventional approaches to analyze and identify a large number of proteins, including useful biomarkers. The goal of this paper is to elucidate the best approach for plasma sample pretreatment for MS- and iTRAQ-based analyses. Here, we systematically compared four approaches, which include centrifugal ultrafiltration, SCX chromatography with fractionation, affinity depletion, and plasma without fractionation, to reduce plasma sample complexity. We generated an optimized protocol for quantitative protein analysis using iTRAQ reagents and an UltrafleXtreme (Bruker Daltonics) MALDI TOF/TOF mass spectrometer. Moreover, we used a simple, rapid, efficient, but inexpensive sample pretreatment technique that generated an optimal opportunity for biomarker discovery. We discuss the results from the four sample pretreatment approaches and conclude that SCX chromatography without affinity depletion is the best plasma sample preparation pretreatment method for proteome analysis. Using this technique, we identified 1,780 unique proteins, including 1,427 that were quantified by iTRAQ with high reproducibility and accuracy. PMID:24988083
Alleviating Linear Ecological Bias and Optimal Design with Sub-sample Data
Glynn, Adam; Wakefield, Jon; Handcock, Mark S.; Richardson, Thomas S.
2009-01-01
Summary In this paper, we illustrate that combining ecological data with subsample data in situations in which a linear model is appropriate provides three main benefits. First, by including the individual level subsample data, the biases associated with linear ecological inference can be eliminated. Second, by supplementing the subsample data with ecological data, the information about parameters will be increased. Third, we can use readily available ecological data to design optimal subsampling schemes, so as to further increase the information about parameters. We present an application of this methodology to the classic problem of estimating the effect of a college degree on wages. We show that combining ecological data with subsample data provides precise estimates of this value, and that optimal subsampling schemes (conditional on the ecological data) can provide good precision with only a fraction of the observations. PMID:20052294
Improved estimates of forest vegetation structure and biomass with a LiDAR-optimized sampling design
NASA Astrophysics Data System (ADS)
Hawbaker, Todd J.; Keuler, Nicholas S.; Lesak, Adrian A.; Gobakken, Terje; Contrucci, Kirk; Radeloff, Volker C.
2009-06-01
LiDAR data are increasingly available from both airborne and spaceborne missions to map elevation and vegetation structure. Additionally, global coverage may soon become available with NASA's planned DESDynI sensor. However, substantial challenges remain to using the growing body of LiDAR data. First, the large volumes of data generated by LiDAR sensors require efficient processing methods. Second, efficient sampling methods are needed to collect the field data used to relate LiDAR data with vegetation structure. In this paper, we used low-density LiDAR data, summarized within pixels of a regular grid, to estimate forest structure and biomass across a 53,600 ha study area in northeastern Wisconsin. Additionally, we compared the predictive ability of models constructed from a random sample to that of models from a sample stratified using the mean and standard deviation of LiDAR heights. Our models explained 65% to 88% of the variability in DBH, basal area, tree height, and biomass. Prediction errors from models constructed using a random sample were up to 68% larger than those from the models built with a stratified sample. The stratified sample included a greater range of variability than the random sample. Thus, applying the random sample model to the entire population violated a tenet of regression analysis; namely, that models should not be used to extrapolate beyond the range of data from which they were constructed. Our results highlight that LiDAR data integrated with field data sampling designs can provide broad-scale assessments of vegetation structure and biomass, i.e., information crucial for carbon and biodiversity science.
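The stratification idea here (allocate field plots across quantile bins of a LiDAR covariate so the training sample spans its full range) can be sketched as follows. The gamma-distributed "mean height" metric, the number of strata, and the per-stratum allocation are illustrative, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic LiDAR mean-height metric for 10,000 grid cells (right-skewed)
heights = rng.gamma(shape=2.0, scale=6.0, size=10_000)

def stratified_sample(values, n_strata, per_stratum, rng):
    """Pick an equal number of cells from each quantile-based stratum
    of a covariate, so the sample covers the covariate's full range."""
    edges = np.quantile(values, np.linspace(0, 1, n_strata + 1))
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((values >= lo) & (values <= hi))[0]
        chosen.extend(rng.choice(idx, size=per_stratum, replace=False))
    return np.array(chosen)

strat_idx = stratified_sample(heights, n_strata=10, per_stratum=3, rng=rng)
rand_idx = rng.choice(heights.size, size=30, replace=False)
```

By construction the stratified draw always includes cells from the lowest and highest deciles, whereas a simple random draw of the same size may miss the tails; that coverage is what keeps a regression model from having to extrapolate.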
O'Connell, Steven G; McCartney, Melissa A; Paulik, L Blair; Allan, Sarah E; Tidwell, Lane G; Wilson, Glenn; Anderson, Kim A
2014-10-01
Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be more efficiently absorbed using silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the most optimal silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated-PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties to compare polymers (log Kow 0.2-5.3), and transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring. PMID:25009960
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
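The two initial-design choices contrasted in this abstract differ in one line: random LHS draws a uniform point inside each hypercube interval, while midpoint LHS uses the interval centers. A minimal sketch of both (the function name and sizes are illustrative, not from any OLHS code):

```python
import numpy as np

def lhs_initial(n, dim, rng, midpoint=False):
    """Initial LHS design on [0,1)^dim: one point per interval along each
    axis. midpoint=True uses interval centers (midpoint LHS); otherwise a
    uniform point within each interval (random LHS)."""
    design = np.empty((n, dim))
    for j in range(dim):
        perm = rng.permutation(n)                    # which interval per row
        offset = 0.5 if midpoint else rng.random(n)  # where inside it
        design[:, j] = (perm + offset) / n
    return design

rng = np.random.default_rng(5)
mid = lhs_initial(8, 2, rng, midpoint=True)
rnd = lhs_initial(8, 2, rng, midpoint=False)
```

Either design would then be handed to a space-filling optimizer (e.g. one maximizing minimum inter-point distance); the study's point is that starting from the midpoint variant tends to yield better-optimized designs.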
OPTIMIZING MINIRHIZOTRON SAMPLE FREQUENCY FOR ESTIMATING FINE ROOT PRODUCTION AND TURNOVER
The most frequent reason for using minirhizotrons in natural ecosystems is the determination of fine root production and turnover. Our objective is to determine the optimum sampling frequency for estimating fine root production and turnover using data from evergreen (Pseudotsuga ...
Verant, Michelle; Bohuski, Elizabeth A.; Lorch, Jeffrey M.; Blehert, David
2016-01-01
The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America have prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer–based qPCR test for P. destructans to refine quantification capabilities of this assay.
Bai, Fang; Liao, Sha; Gu, Junfeng; Jiang, Hualiang; Wang, Xicheng; Li, Honglin
2015-04-27
Metalloproteins, particularly zinc metalloproteins, are promising therapeutic targets, and recent efforts have focused on the identification of potent and selective inhibitors of these proteins. However, the ability of current drug discovery and design technologies, such as molecular docking and molecular dynamics simulations, to probe metal-ligand interactions remains limited because of their complicated coordination geometries and rough treatment in current force fields. Herein we introduce a robust, multiobjective optimization algorithm-driven metalloprotein-specific docking program named MpSDock, which runs on a scheme similar to consensus scoring consisting of a force-field-based scoring function and a knowledge-based scoring function. For this purpose, in this study, an effective knowledge-based zinc metalloprotein-specific scoring function based on the inverse Boltzmann law was designed and optimized using a dynamic sampling and iteration optimization strategy. This optimization strategy can dynamically sample and regenerate decoy poses used in each iteration step of refining the scoring function, thus dramatically improving both the effectiveness of the exploration of the binding conformational space and the sensitivity of the ranking of the native binding poses. To validate the zinc metalloprotein-specific scoring function and its special built-in docking program, denoted MpSDockZn, an extensive comparison was performed against six universal, popular docking programs: Glide XP mode, Glide SP mode, Gold, AutoDock, AutoDock4Zn, and EADock DSS. The zinc metalloprotein-specific knowledge-based scoring function exhibited prominent performance in accurately describing the geometries and interactions of the coordination bonds between the zinc ions and chelating agents of the ligands. In addition, MpSDockZn had a competitive ability to sample and identify native binding poses with a higher success rate than the other six docking programs. PMID:25746437
Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system
NASA Astrophysics Data System (ADS)
He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang
2015-10-01
In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.
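The full competition solution involves multi-body dynamics and low-thrust optimization, which is far beyond a short example. As a hedged, purely illustrative stand-in, the two-body Hohmann transfer below gives the kind of first-cut impulsive delta-v estimate an overall-analysis or target-selection pass might start from; the constants and the 1.3-AU target radius are assumptions, not values from the article.

```python
import math

MU_SUN = 1.32712440018e20   # Sun's GM, m^3/s^2 (standard value)
AU = 1.495978707e11         # astronomical unit, m

def hohmann_dv(r1, r2, mu=MU_SUN):
    """Total delta-v (m/s) of a two-impulse Hohmann transfer between
    circular, coplanar orbits of radii r1 and r2 (vis-viva equation)."""
    a_t = 0.5 * (r1 + r2)                        # transfer-ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                      # circular speed at r1
    v2 = math.sqrt(mu / r2)                      # circular speed at r2
    vp = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t))  # transfer speed at r1
    va = math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))  # transfer speed at r2
    return abs(vp - v1) + abs(v2 - va)
```

For a hypothetical 1.3-AU circular heliocentric target, `hohmann_dv(AU, 1.3 * AU)` comes to a few km/s; lunar gravity assists and resonant encounters, as in the article, aim to buy down exactly this kind of budget.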
NASA Astrophysics Data System (ADS)
Lin, Rongsheng; Burke, David T.; Burns, Mark A.
2004-03-01
In recent years, there has been tremendous interest in developing highly integrated DNA analysis systems using microfabrication techniques. With the success of incorporating sample injection, reaction, separation, and detection onto a monolithic silicon device, the addition of otherwise time-consuming macro-scale steps such as sample preparation is gaining more and more attention. In this paper, we designed and fabricated a miniaturized device capable of separating a size-fractionated DNA sample and extracting the band of interest. In order to obtain a pure target band, a novel technique utilizing a shaped electric field is demonstrated. Both theoretical analysis and experimental data show close agreement in designing appropriate electrode structures to achieve the desired electric field distribution. This technique has a very simple fabrication procedure and can readily be combined with other existing components to realize a highly integrated "lab-on-a-chip" system for DNA analysis.
Optimizing Sampling Design to Deal with Mist-Net Avoidance in Amazonian Birds and Bats
Marques, João Tiago; Ramos Pereira, Maria J.; Marques, Tiago A.; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M.
2013-01-01
Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance, or net shyness, can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day, it was no longer advantageous to move the nets frequently; in bird surveys it could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce the effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas. PMID:24058579
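The trade-off described above (moving nets resets shyness but can forfeit netting time) can be sketched with a toy expected-capture model. The constant geometric decline and all rates below are illustrative assumptions, not the study's data; the paper reports that most of the decline is concentrated between the first and second days rather than constant.

```python
def expected_captures(days, base_rate, daily_decline, move_daily,
                      days_lost_if_moving=0):
    """Toy expected-capture model. A net left in place suffers a
    compounding `daily_decline` per extra day in place (net shyness);
    moving nets daily resets shyness but forfeits
    `days_lost_if_moving` days of netting."""
    active_days = days - (days_lost_if_moving if move_daily else 0)
    total = 0.0
    for day in range(max(0, active_days)):
        age = 0 if move_daily else day   # days the net has sat in place
        total += base_rate * (1.0 - daily_decline) ** age
    return total
```

With a bat-like 47% decline, moving nets daily at no time cost raises expected captures sharply; charging a lost netting day shrinks the advantage, mirroring the survey-design trade-off the study quantifies.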
Optimal sampling strategy for estimation of spatial genetic structure in tree populations.
Cavers, S; Degen, B; Caron, H; Lemes, M R; Margis, R; Salgueiro, F; Lowe, A J
2005-10-01
Fine-scale spatial genetic structure (SGS) in natural tree populations is largely a result of restricted pollen and seed dispersal. Understanding the link between limitations to dispersal in gene vectors and SGS is of key interest to biologists and the availability of highly variable molecular markers has facilitated fine-scale analysis of populations. However, estimation of SGS may depend strongly on the type of genetic marker and sampling strategy (of both loci and individuals). To explore sampling limits, we created a model population with simulated distributions of dominant and codominant alleles, resulting from natural regeneration with restricted gene flow. SGS estimates from subsamples (simulating collection and analysis with amplified fragment length polymorphism (AFLP) and microsatellite markers) were correlated with the 'real' estimate (from the full model population). For both marker types, sampling ranges were evident, with lower limits below which estimation was poorly correlated and upper limits above which sampling became inefficient. Lower limits (correlation of 0.9) were 100 individuals, 10 loci for microsatellites and 150 individuals, 100 loci for AFLPs. Upper limits were 200 individuals, five loci for microsatellites and 200 individuals, 100 loci for AFLPs. The limits indicated by simulation were compared with data sets from real species. Instances where sampling effort had been either insufficient or inefficient were identified. The model results should form practical boundaries for studies aiming to detect SGS. However, greater sample sizes will be required in cases where SGS is weaker than for our simulated population, for example, in species with effective pollen/seed dispersal mechanisms. PMID:16030529
Zvolensky, Michael J; Sachs-Ericsson, Natalie; Feldner, Matthew T; Schmidt, Norman B; Bowman, Carrie J
2006-03-30
The present study evaluated a moderational model of neuroticism on the relation between smoking level and panic disorder using data from the National Comorbidity Survey. Participants (n=924) included current regular smokers, as defined by a report of smoking regularly during the past month. Findings indicated that a generalized tendency to experience negative affect (neuroticism) moderated the effects of maximum smoking frequency (i.e., number of cigarettes smoked per day during the period when smoking the most) on lifetime history of panic disorder even after controlling for drug dependence, alcohol dependence, major depression, dysthymia, and gender. These effects were specific to panic disorder, as no such moderational effects were apparent for other anxiety disorders. Results are discussed in relation to refining recent panic-smoking conceptual models and elucidating different pathways to panic-related problems. PMID:16499972
Fillers, W Steven
2004-12-01
Modular approaches to sample management allow staged implementation and progressive expansion of libraries within existing laboratory space. A completely integrated, inert-atmosphere system for the storage and processing of a variety of microplate and microtube formats is currently available as an integrated series of individual modules. Liquid handling for reformatting and replication into microplates, plus high-capacity cherry picking, can be performed within the inert environmental envelope to maximize compound integrity. Complete process automation provides on-demand access to samples and improved process control. Expansion of such a system provides a low-risk tactic for implementing a large-scale storage and processing system. PMID:15674027
NASA Astrophysics Data System (ADS)
Zawadowicz, M. A.; Del Negro, L. A.
2010-12-01
Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv levels, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated-canister methods require an overhead in space, money, and time that is often prohibitive for primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of the ambient gaseous matrix, a cost-effective technique for selective VOC extraction that is accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature, and laminar air-flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.
Kilambi, Himabindu V; Manda, Kalyani; Sanivarapu, Hemalatha; Maurya, Vineet K; Sharma, Rameshwar; Sreelakshmi, Yellamaraju
2016-01-01
An optimized protocol was developed for shotgun proteomics of tomato fruit, which is a recalcitrant tissue due to a high percentage of sugars and secondary metabolites. A number of protein extraction and fractionation techniques were examined for optimal protein extraction from tomato fruits followed by peptide separation on nanoLCMS. Of all evaluated extraction agents, buffer saturated phenol was the most efficient. In-gel digestion [SDS-PAGE followed by separation on LCMS (GeLCMS)] of phenol-extracted sample yielded a maximal number of proteins. For in-solution digested samples, fractionation by strong anion exchange chromatography (SAX) also gave similar high proteome coverage. For shotgun proteomic profiling, optimization of mass spectrometry parameters such as automatic gain control targets (5E+05 for MS, 1E+04 for MS/MS); ion injection times (500 ms for MS, 100 ms for MS/MS); resolution of 30,000; signal threshold of 500; top N-value of 20 and fragmentation by collision-induced dissociation yielded the highest number of proteins. Validation of the above protocol in two tomato cultivars demonstrated its reproducibility, consistency, and robustness with a CV of < 10%. The protocol facilitated the detection of five-fold higher number of proteins compared to published reports in tomato fruits. The protocol outlined would be useful for high-throughput proteome analysis from tomato fruits and can be applied to other recalcitrant tissues. PMID:27446192
Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2011-01-01
Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…
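The intraclass correlation structure mentioned above drives the required sample sizes. As a hedged illustration of why, here is the familiar two-level design-effect formula (the paper itself treats three-level designs within an ANCOVA framework, which this sketch does not reproduce):

```python
def design_effect(m, icc):
    """Variance inflation from randomizing clusters of size m with
    intraclass correlation icc: DEFF = 1 + (m - 1) * icc."""
    return 1.0 + (m - 1) * icc

def effective_n(n_total, m, icc):
    """Sample size a simple random sample would need to match the
    precision of a clustered sample of n_total units."""
    return n_total / design_effect(m, icc)
```

For example, 1000 students in clusters of 25 with an ICC of 0.1 carry the precision of roughly 294 independent observations, which is why power calculations must allocate units across levels rather than simply maximize the total count.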
An evaluation of optimal methods for avian influenza virus sample collection
Technology Transfer Automated Retrieval System (TEKTRAN)
Sample collection and transport are critical components of any diagnostic testing program and due to the amount of avian influenza virus (AIV) testing in the U.S. and worldwide, small improvements in sensitivity and specificity can translate into substantial cost savings from better test accuracy. ...
Siebenmann, K.
1993-10-01
The primary focus of the initial stages of a remedial investigation is to collect useful data for source identification and determination of the extent of soil contamination. To achieve this goal, soil samples should be collected at locations where the maximum concentration of contaminants exists. This study was conducted to determine the optimum strategy for selecting soil sample locations within a boring. Analytical results from soil samples collected during the remedial investigation of a Department of Defense Superfund site were used for the analysis. Trichloroethene (TCE) and tetrachloroethene (PCE) results were compared with organic vapor monitor (OVM) readings, lithologies, and organic carbon content to determine if these parameters can be used to choose soil sample locations in the field that contain the maximum concentration of these analytes within a soil boring or interval. The OVM was a handheld photoionization detector (PID) used for screening the soil core to indicate areas of VOC contamination. The TCE and PCE concentrations were compared across lithologic contacts and within each lithologic interval. The organic carbon content used for this analysis was visually estimated by the geologist during soil logging.
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT
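FMT*'s lazy recursion is too involved for a short example, but the shared substrate of sampling-based planners, random samples connected within a radius shrinking like ((log n)/n)^(1/d) and then searched for a shortest path, can be sketched as below. This is an obstacle-free unit square with plain Dijkstra: an illustration of the sampling/connection idea behind PRM*, RRT*, and FMT*, not the FMT* algorithm itself.

```python
import heapq
import math
import random

def plan(n_samples, start, goal, seed=0):
    """Shortest path through random samples in the obstacle-free unit
    square: connect pairs within radius r ~ ((log n)/n)^(1/d), then
    run Dijkstra from start (node 0) to goal (node 1)."""
    rng = random.Random(seed)
    pts = [start, goal] + [(rng.random(), rng.random()) for _ in range(n_samples)]
    n, d = len(pts), 2
    r = 2.0 * (math.log(n) / n) ** (1.0 / d)   # connection radius

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    nbrs = {i: [] for i in range(n)}           # radius graph
    for i in range(n):
        for j in range(i + 1, n):
            w = dist(pts[i], pts[j])
            if w <= r:
                nbrs[i].append((j, w))
                nbrs[j].append((i, w))

    best = {0: 0.0}
    heap = [(0.0, 0)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == 1:
            return cost                        # reached the goal
        if cost > best.get(u, math.inf):
            continue                           # stale queue entry
        for v, w in nbrs[u]:
            nc = cost + w
            if nc < best.get(v, math.inf):
                best[v] = nc
                heapq.heappush(heap, (nc, v))
    return None                                # disconnected at this radius
```

As n grows, the returned cost approaches the straight-line distance; the convergence-rate result in the abstract quantifies how fast this happens for FMT* specifically.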
Piao, Xinglin; Hu, Yongli; Sun, Yanfeng; Yin, Baocai; Gao, Junbin
2014-01-01
The emerging low-rank matrix approximation (LRMA) method provides an energy-efficient scheme for data collection in wireless sensor networks (WSNs) by randomly sampling a subset of sensor nodes for data sensing. However, the existing LRMA-based methods generally underutilize the spatial or temporal correlation of the sensing data, resulting in uneven energy consumption and thus shortening the network lifetime. In this paper, we propose a correlated spatio-temporal data collection method for WSNs based on LRMA. In the proposed method, both the temporal consistence and the spatial correlation of the sensing data are simultaneously integrated under a new LRMA model. Moreover, the network energy consumption issue is considered in the node sampling procedure. We use the Gini index to measure both the spatial distribution of the selected nodes and the evenness of the network energy status, then formulate and resolve an optimization problem to achieve optimized node sampling. The proposed method is evaluated on both simulated and real wireless networks and compared with state-of-the-art methods. The experimental results show that the proposed method efficiently reduces the energy consumption of the network and prolongs the network lifetime with high data recovery accuracy and good stability. PMID:25490583
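The Gini index used above to score both node spread and energy evenness is straightforward to compute. A minimal sketch using the sorted-values formula, assuming non-negative inputs:

```python
def gini(values):
    """Gini index of a non-negative sequence: 0 for a perfectly even
    distribution, approaching 1 as it concentrates on one element."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, x sorted ascending
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```

In a scheme like the one described, a lower Gini index over residual node energies would indicate more even energy consumption across the network.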
Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.
2008-12-19
The elucidation of critical functional pathways employed by pathogens and hosts during an infectious cycle is both challenging and central to our understanding of infectious diseases. In recent years, mass spectrometry-based proteomics has been used as a powerful tool to identify key pathogenesis-related proteins and pathways. Despite the analytical power of mass spectrometry-based technologies, samples must be appropriately prepared to characterize the functions of interest (e.g. host-response to a pathogen or a pathogen-response to a host). The preparation of these protein samples requires multiple decisions about what aspect of infection is being studied, and it may require the isolation of either host and/or pathogen cellular material.
Optimized methods for extracting circulating small RNAs from long-term stored equine samples.
Unger, Lucia; Fouché, Nathalie; Leeb, Tosso; Gerber, Vincent; Pacholewska, Alicja
2016-01-01
Circulating miRNAs in body fluids, particularly serum, are promising candidates for future routine biomarker profiling in various pathologic conditions in human and veterinary medicine. However, reliable standardized methods for miRNA extraction from equine serum and fresh or archived whole blood are sorely lacking. We systematically compared various miRNA extraction methods from serum and whole blood after short and long-term storage without addition of RNA stabilizing additives prior to freezing. Time of storage at room temperature prior to freezing did not affect miRNA quality in serum. Furthermore, we showed that miRNA of NGS-sufficient quality can be recovered from blood samples after >10 years of storage at -80 °C. This allows retrospective analyses of miRNAs from archived samples. PMID:27356979
NASA Technical Reports Server (NTRS)
Hague, D. S.; Merz, A. W.
1976-01-01
Atmospheric sampling has been carried out by flights using an available high-performance supersonic aircraft. Altitude potential of an off-the-shelf F-15 aircraft is examined. It is shown that the standard F-15 has a maximum altitude capability in excess of 100,000 feet for routine flight operation by NASA personnel. This altitude is well in excess of the minimum altitudes which must be achieved for monitoring the possible growth of suspected aerosol contaminants.
NASA Astrophysics Data System (ADS)
Pawcenis, Dominika; Koperska, Monika A.; Milczarek, Jakub M.; Łojewski, Tomasz; Łojewska, Joanna
2014-02-01
A direct goal of this paper was to improve the methods of sample preparation and separation for analyses of the fibroin polypeptide using size exclusion chromatography (SEC). The motivation for the study arises from our interest in natural polymers included in historic textile and paper artifacts, and is a logical response to the urgent need for developing rationale-based methods for materials conservation. The first step is to develop a reliable analytical tool that would give insight into fibroin structure and its changes caused by both natural and artificial ageing. To investigate the influence of preparation conditions, two sets of artificially aged samples were prepared (with and without NaCl in the sample solution) and measured by means of SEC with a multi-angle laser light scattering detector. It was shown that dialysis of fibroin dissolved in LiBr solution allows removal of the salt, which otherwise destroys chromatographic columns and prevents reproducible analyses. Salt-rich (NaCl) aqueous solutions of fibroin improved the quality of the chromatograms.
Anderson, R.; Christensen, C.; Horowitz, S.
2006-08-01
An optimization method based on the evaluation of a broad range of different combinations of specific energy efficiency and renewable-energy options is used to determine the least-cost pathway to the development of new homes with zero peak cooling demand. The optimization approach conducts a sequential search of a large number of possible option combinations and uses the most cost-effective alternatives to generate a least-cost curve to achieve home-performance levels ranging from a Title 24-compliant home to a home that uses zero net source energy on an annual basis. By evaluating peak cooling load reductions on the least-cost curve, it is then possible to determine the most cost-effective combination of energy efficiency and renewable-energy options that both maximize annual energy savings and minimize peak-cooling demand.
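The sequential search described above can be caricatured as a greedy loop that repeatedly adds the most cost-effective remaining option until a performance target is met. The option names and numbers below are invented for illustration, and the real method evaluates combinations of options against simulated whole-home performance rather than a fixed savings-per-cost ratio.

```python
def sequential_search(options, target_savings):
    """Greedy caricature of a least-cost sequential search.
    options maps name -> (annual energy savings, cost); at each step
    the best savings-per-cost option is added until the target is met.
    Returns (chosen option names, total cost, total savings)."""
    remaining = dict(options)
    chosen, total_cost, total_savings = [], 0.0, 0.0
    while total_savings < target_savings and remaining:
        name = max(remaining, key=lambda k: remaining[k][0] / remaining[k][1])
        savings, cost = remaining.pop(name)
        chosen.append(name)
        total_savings += savings
        total_cost += cost
    return chosen, total_cost, total_savings
```

Plotting total cost against achieved savings for successive steps of such a loop traces out exactly the kind of least-cost curve the abstract describes, on which peak-demand reductions can then be evaluated.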
ERIC Educational Resources Information Center
Layne, L.
2012-01-01
The author looks at the meaning of specific terminology commonly used in student surveys: "effective teaching." The research seeks to determine if there is a difference in how "effective teaching" is defined by those taking student surveys and those interpreting the results. To investigate this difference, a sample group of professors and students…
Curran, Sarah; Rijsdijk, Fruhling; Martin, Neilson; Marusic, Katja; Asherson, Philip; Taylor, Eric; Sham, Pak
2003-05-15
We are taking a quantitative trait approach to the molecular genetic study of attention deficit hyperactivity disorder (ADHD) using a truncated case-control association design. An epidemiological sample of children aged 5 to 15 years was evaluated for symptoms of ADHD using a parent rating scale. Individuals scoring high or low on this scale were selected for further investigation with additional questionnaires and DNA analysis. Data in studies like this are typically complicated. In the study reported on here, individuals have from 1 to 4 questionnaires completed on them and the sample is composed of a mixture of singletons and siblings. In this paper, we describe how we used a genetic hierarchical model to fit our data, together with a twin dataset, in order to estimate genetic factor loadings. Correlation matrices were estimated for our data using a maximum likelihood approach to account for missing data. We describe how we used these results to create a composite score, the heritability of which was estimated to be acceptably high using the twin dataset. This score measures a quantitative dimension onto which molecular genetic data will be mapped. PMID:12707944
NASA Astrophysics Data System (ADS)
Sasaki, T.; Yoshida, N.; Takahashi, M.; Tomita, M.
2008-12-01
In order to determine an appropriate incident angle of low-energy (350-eV) oxygen ion beam for achieving the highest sputtering rate without degradation of depth resolution in SIMS analysis, a delta-doped sample was analyzed with incident angles from 0° to 60° without oxygen bleeding. As a result, 45° incidence was found to be the best analytical condition, and it was confirmed that surface roughness did not occur on the sputtered surface at 100-nm depth by using AFM. By applying the optimized incident angle, sputtering rate becomes more than twice as high as that of the normal incident condition.
Hyberts, Sven G.; Frueh, Dominique P.; Arthanari, Haribabu; Wagner, Gerhard
2010-01-01
Non-uniform sampling (NUS) enables recording of multidimensional NMR data at resolutions matching the resolving power of modern instruments without using excessive measuring time. However, in order to obtain satisfying results, efficient reconstruction methods are needed. Here we describe an optimized version of the Forward Maximum entropy (FM) reconstruction method, which can reconstruct up to three indirect dimensions. For complex datasets, such as NOESY spectra, the performance of the procedure is enhanced by a distillation procedure that reduces artifacts stemming from intense peaks. PMID:19705283
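Forward maximum entropy reconstruction itself is beyond a short sketch, but the sampling side of NUS, choosing which indirect-dimension increments to record, can be illustrated. The exponential weighting, parameters, and function name below are generic assumptions about NUS schedules, not the specific scheme used with FM reconstruction.

```python
import math
import random

def nus_schedule(n_increments, n_sampled, decay=2.0, seed=0):
    """Pick n_sampled of n_increments indirect-dimension points,
    weighting early increments more heavily (exponential decay,
    loosely mirroring signal decay). Increment 0 is always kept."""
    rng = random.Random(seed)
    weights = [math.exp(-decay * i / n_increments) for i in range(n_increments)]
    chosen = {0}
    while len(chosen) < n_sampled:
        # weighted draw without replacement from the unchosen increments
        pool = [i for i in range(n_increments) if i not in chosen]
        target = rng.random() * sum(weights[i] for i in pool)
        acc = 0.0
        for i in pool:
            acc += weights[i]
            if acc >= target:
                chosen.add(i)
                break
    return sorted(chosen)
```

A reconstruction method such as FM then has to recover a clean spectrum from only these recorded increments, which is where artifact suppression for intense peaks becomes critical.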