Annotating user-defined abstractions for optimization
Quinlan, D; Schordan, M; Vuduc, R; Yi, Q
2005-12-05
This paper discusses the features of an annotation language that we believe to be essential for optimizing user-defined abstractions. These features should capture semantics of function, data, and object-oriented abstractions, express abstraction equivalence (e.g., a class represents an array abstraction), and permit extension of traditional compiler optimizations to user-defined abstractions. Our future work will include developing a comprehensive annotation language for describing the semantics of general object-oriented abstractions, as well as automatically verifying and inferring the annotated semantics.
Metamodel defined multidimensional embedded sequential sampling criteria.
Turner, C. J.; Campbell, M. I.; Crawford, R. H.
2004-01-01
Collecting data to characterize an unknown space presents a series of challenges. Where in the space should data be collected? What regions are more valuable than others to sample? When have sufficient samples been acquired to characterize the space with some level of confidence? Sequential sampling techniques offer an approach to answering these questions by intelligently sampling an unknown space. Sampling decisions are made with criteria intended to preferentially search the space for desirable features. However, N-dimensional applications need efficient and effective criteria. This paper discusses the evolution of several such criteria based on an understanding of the behaviors of existing criteria and of desired criteria properties. The resulting criteria are evaluated with a variety of planar functions, and preliminary results for higher dimensional applications are also presented. In addition, a set of convergence criteria, intended to evaluate the effectiveness of further sampling, is implemented. Using these sampling criteria, an effective metamodel representation of the unknown space can be generated at reasonable sampling costs. Furthermore, the use of convergence criteria allows conclusions to be drawn about the level of confidence in the metamodel, and forms the basis for evaluating the adequacy of the original sampling budget.
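The distance-based flavor of sequential sampling described above can be sketched with a generic maximin criterion: each new sample goes where the space is least explored. This is an illustrative assumption, not the paper's actual criteria, which also weight desirable features of the space.

```python
import numpy as np

def next_sample(existing, candidates):
    """Pick the candidate site maximizing its minimum distance to the
    samples already collected (a generic space-filling criterion)."""
    d = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=2)
    return candidates[np.argmax(d.min(axis=1))]

rng = np.random.default_rng(0)
existing = rng.random((5, 2))       # samples collected so far
candidates = rng.random((200, 2))   # candidate sites in the unit square
x_next = next_sample(existing, candidates)
```

In a full sequential loop, one would re-fit the metamodel after each sample and stop once a convergence criterion (e.g., small change in the metamodel between iterations) is met.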
Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples
Shine, E. P.; Poirier, M. R.
2013-10-29
Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
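A minimal sketch of the claimed flow, with hypothetical logged quantities (engine speed and load are assumptions for illustration; the patent does not specify these):

```python
import numpy as np

# Hypothetical logged engine operating points: (speed [rpm], load [%])
rng = np.random.default_rng(1)
log = np.column_stack([rng.normal(2200, 300, 5000), rng.normal(40, 8, 5000)])

# Define the region of optimization as the inter-percentile box covering
# the most commonly observed conditions (one simple reading of the claim)
lo, hi = np.percentile(log, [10, 90], axis=0)

def in_region(point):
    """True when current operating conditions fall in the defined region."""
    return bool(np.all(point >= lo) and np.all(point <= hi))

# Gate the optimization routine on the region membership check
ready = in_region(np.array([2250.0, 42.0]))
```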
Defining Predictive Probability Functions for Species Sampling Models.
Lee, Jaeyong; Quintana, Fernando A; Müller, Peter; Trippa, Lorenzo
2013-01-01
We review the class of species sampling models (SSM). In particular, we investigate the relation between the exchangeable partition probability function (EPPF) and the predictive probability function (PPF). It is straightforward to define a PPF from an EPPF, but the converse is not necessarily true. In this paper we introduce the notion of putative PPFs and show novel conditions for a putative PPF to define an EPPF. We show that all possible PPFs in a certain class have to define (unnormalized) probabilities for cluster membership that are linear in cluster size. We give a new necessary and sufficient condition for arbitrary putative PPFs to define an EPPF. Finally, we show posterior inference for a large class of SSMs with a PPF that is not linear in cluster size and discuss a numerical method to derive its PPF.
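The "linear in cluster size" case mentioned above is exemplified by the Chinese restaurant process PPF of the Dirichlet process, where the (unnormalized) probability of joining cluster j is its size n_j; a minimal sketch:

```python
import numpy as np

def crp_ppf(cluster_sizes, theta=1.0):
    """Predictive probability function of the Chinese restaurant process:
    probability of joining cluster j is proportional to its size n_j, and
    of opening a new cluster proportional to theta -- the case where
    cluster-membership probabilities are linear in cluster size."""
    w = np.append(np.asarray(cluster_sizes, dtype=float), theta)
    return w / w.sum()

p = crp_ppf([3, 1], theta=1.0)   # two existing clusters of sizes 3 and 1
# entries: join cluster 1, join cluster 2, open a new cluster
```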
Defining the Mars Ascent Problem for Sample Return
Whitehead, J
2008-07-31
Lifting geology samples off of Mars is both a daunting technical problem for propulsion experts and a cultural challenge for the entire community that plans and implements planetary science missions. The vast majority of science spacecraft require propulsive maneuvers that are similar to what is done routinely with communication satellites, so most needs have been met by adapting hardware and methods from the satellite industry. While it is even possible to reach Earth from the surface of the moon using such traditional technology, ascending from the surface of Mars is beyond proven capability for either solid or liquid propellant rocket technology. Miniature rocket stages for a Mars ascent vehicle would need to be over 80 percent propellant by mass. It is argued that the planetary community faces a steep learning curve toward nontraditional propulsion expertise, in order to successfully accomplish a Mars sample return mission. A cultural shift may be needed to accommodate more technical risk acceptance during the technology development phase.
Urine sampling and collection system optimization and testing
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Geating, J. A.; Koesterer, M. G.
1975-01-01
A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.
Initial data sampling in design optimization
NASA Astrophysics Data System (ADS)
Southall, Hugh L.; O'Donnell, Terry H.
2011-06-01
Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a Latin square, a statistical sampling technique which samples a square grid such that there is a single sample in any given row and column. The Latin hypercube is a generalization to any number of dimensions. However, a standard random Latin hypercube can result in initial data sets which may be highly correlated and may not have good space-filling properties. There are techniques which address these issues. We describe and use one technique in this paper.
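A minimal sketch of a random Latin hypercube plus a maximin filter, one common way to address the space-filling issue noted above (the specific technique used in the paper is not reproduced here):

```python
import numpy as np

def latin_hypercube(n, dims, rng):
    """Random Latin hypercube: one sample per stratum in every dimension."""
    u = (rng.random((n, dims)) + np.arange(n)[:, None]) / n  # stratify
    for d in range(dims):
        u[:, d] = u[rng.permutation(n), d]                   # decouple dims
    return u

def maximin(designs):
    """Keep the design with the largest minimum pairwise distance,
    discarding highly correlated, poorly space-filling draws."""
    def min_dist(x):
        d = np.linalg.norm(x[:, None] - x[None, :], axis=2)
        return d[np.triu_indices(len(x), 1)].min()
    return max(designs, key=min_dist)

rng = np.random.default_rng(0)
best = maximin([latin_hypercube(10, 2, rng) for _ in range(50)])
```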
Ecological and sampling constraints on defining landscape fire severity
Key, C.H.
2006-01-01
Ecological definition and detection of fire severity are influenced by factors of spatial resolution and timing. Resolution determines the aggregation of effects within a sampling unit or pixel (alpha variation), hence limiting the discernible ecological responses, and controlling the spatial patchiness of responses distributed throughout a burn (beta variation). As resolution decreases, alpha variation increases, extracting beta variation and complexity from the spatial model of the whole burn. Seasonal timing impacts the quality of radiometric data in terms of transmittance, sun angle, and potential contrast between responses within burns. Detection sensitivity can degrade toward the end of many fire seasons when low sun angles, vegetation senescence, incomplete burning, hazy conditions, or snow are common. Thus, a need exists to supersede many rapid response applications when remote sensing conditions improve. Lag timing, or time since fire, notably shapes the ecological character of severity through first-order effects that only emerge with time after fire, including delayed survivorship and mortality. Survivorship diminishes the detected magnitude of severity, as burned vegetation remains viable and resprouts, though at first it may appear completely charred or consumed above ground. Conversely, delayed mortality increases the severity estimate when apparently healthy vegetation is in fact damaged by heat to the extent that it dies over time. Both responses depend on fire behavior and various species-specific adaptations to fire that are unique to the pre-fire composition of each burned area. Both responses can lead initially to either over- or underestimating severity. Based on such implications, three sampling intervals for short-term burn severity are identified: rapid, initial, and extended assessment, sampled within about two weeks, two months, and, depending on the ecotype, from three months to one year after fire, respectively. Spatial and temporal
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
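The key point that no actual data values are needed can be illustrated with ordinary kriging: the estimation variance depends only on the site locations and the covariance model, so candidate designs can be scored before any sampling. This is a sketch with an assumed exponential covariance, not the paper's universal-kriging setup.

```python
import numpy as np

def ok_variance(sites, x0, cov):
    """Ordinary-kriging estimation variance at x0. Only site geometry and
    the covariance model enter -- no measured values are required."""
    n = len(sites)
    K = np.ones((n + 1, n + 1)); K[n, n] = 0.0
    K[:n, :n] = cov(np.linalg.norm(sites[:, None] - sites[None, :], axis=2))
    k = np.ones(n + 1)
    k[:n] = cov(np.linalg.norm(sites - x0, axis=1))
    lam = np.linalg.solve(K, k)      # kriging weights plus Lagrange multiplier
    return cov(0.0) - lam @ k

cov = lambda h: np.exp(-h / 0.3)     # assumed exponential covariance, range 0.3
grid = np.array([[x, y] for x in np.linspace(0, 1, 11)
                         for y in np.linspace(0, 1, 11)])

rng = np.random.default_rng(0)
design = rng.random((9, 2))          # a candidate 9-well design
errs = np.sqrt(np.maximum(0.0, [ok_variance(design, g, cov) for g in grid]))
avg_se, max_se = errs.mean(), errs.max()   # the two global efficiency indices
```

Design optimization then amounts to searching over site patterns to drive `avg_se` or `max_se` below the specified accuracy with as few sites as possible.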
Learning approach to sampling optimization: Applications in astrodynamics
NASA Astrophysics Data System (ADS)
Henderson, Troy Allen
A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm, comparing performance with other methods on two models, of varying complexity, of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.
Sampling Policy that Guarantees Reliability of Optimal Policy in Reinforcement Learning
NASA Astrophysics Data System (ADS)
Senda, Kei; Iwasaki, Yoshimitsu; Fujii, Shinji
This study defines certification sampling, which guarantees with a specified reliability that the optimal policy derived from an estimated transition probability is correct with respect to the real transition probability. It then discusses a sampling policy that efficiently achieves certification sampling, as follows. The transition probability is estimated by sampling, and the optimal policy is derived from the estimate. In parallel, the method calculates the accuracy of the estimated transition probability required to guarantee that the derived optimal policy is correct. This study proposes a sampling policy that efficiently achieves certification sampling at that required accuracy. The proposed method is efficient in the number of samples because it automatically selects the states and actions to be sampled and stops sampling once the condition is satisfied.
Sample size and optimal sample design in tuberculosis surveys
Sánchez-Crespo, J. L.
1967-01-01
Tuberculosis surveys sponsored by the World Health Organization have been carried out in different communities during the last few years. Apart from the main epidemiological findings, these surveys have provided basic statistical data for use in the planning of future investigations. In this paper an attempt is made to determine the sample size desirable in future surveys that include one of the following examinations: tuberculin test, direct microscopy, and X-ray examination. The optimum cluster sizes are found to be 100-150 children under 5 years of age in the tuberculin test, at least 200 eligible persons in the examination for excretors of tubercle bacilli (direct microscopy) and at least 500 eligible persons in the examination for persons with radiological evidence of pulmonary tuberculosis (X-ray). Modifications of the optimum sample size in combined surveys are discussed. PMID:5300008
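The trade-off behind such cluster-size recommendations can be sketched with standard survey arithmetic. These are illustrative textbook formulas with an assumed intra-cluster correlation, not the calculations of the WHO surveys.

```python
import math

def cluster_sample_size(p, d, cluster_size, rho, z=1.96):
    """Eligible persons needed to estimate prevalence p within +/-d at 95%
    confidence, inflated by the design effect of cluster sampling."""
    n_srs = z ** 2 * p * (1 - p) / d ** 2          # simple random sampling
    deff = 1 + (cluster_size - 1) * rho            # design effect
    n = math.ceil(n_srs * deff)
    return n, math.ceil(n / cluster_size)          # persons, clusters

# e.g. ~1% prevalence of bacillary excretors, precision +/-0.25%,
# clusters of 200 eligible persons, assumed intra-cluster correlation 0.005
n, m = cluster_sample_size(p=0.01, d=0.0025, cluster_size=200, rho=0.005)
```

Larger clusters are cheaper to visit but inflate the design effect, which is why an optimum cluster size exists at all.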
Optimal Sampling Strategies for Oceanic Applications
2009-01-01
The Bluelink ocean data assimilation system (BODAS; Oke et al. 2005, 2008) that underpins BRAN is based on Ensemble Optimal Interpolation (EnOI). Oke, P. R., G. B. Brassington, D. A. Griffin and A. Schiller, 2008: The Bluelink Ocean Data Assimilation System (BODAS). Ocean Modelling, 20, 46-70.
Blunden, Sarah; Galland, Barbara
2014-10-01
The main aim of this paper is to consider relevant theoretical and empirical factors defining optimal sleep, and assess the relative importance of each in developing a working definition for, or guidelines about, optimal sleep, particularly in children. We consider whether optimal sleep is an issue of sleep quantity or of sleep quality. Sleep quantity is discussed in terms of duration, timing, variability and dose-response relationships. Sleep quality is explored in relation to continuity, sleepiness, sleep architecture and daytime behaviour. Potential limitations of sleep research in children are discussed, specifically the loss of research precision inherent in sleep deprivation protocols involving children. We discuss which outcomes are the most important to measure. We consider the notion that insufficient sleep may be a totally subjective finding, is impacted by the age of the reporter, driven by socio-cultural patterns and sleep-wake habits, and that, in some individuals, the driver for insufficient sleep can be viewed in terms of a cost-benefit relationship, curtailing sleep in order to perform better while awake. We conclude that defining optimal sleep is complex. The only method of capturing this elusive concept may be by somnotypology, taking into account duration, quality, age, gender, race, culture, the task at hand, and an individual's position in both sleep-alert and morningness-eveningness continuums. At the experimental level, a unified approach by researchers to establish standardized protocols to evaluate optimal sleep across paediatric age groups is required.
NASA Technical Reports Server (NTRS)
Byrnes, C. I.
1980-01-01
It is noted that recent work by Kamen (1979) on the stability of half-plane digital filters shows that the problem of the existence of a feedback law also arises for other Banach algebras in applications. This situation calls for a realization theory and stabilizability criteria for systems defined over a Banach or Fréchet algebra A. Such a theory is developed here, with special emphasis placed on the construction of finitely generated realizations, the existence of coprime factorizations for T(s) defined over A, and the solvability of the quadratic optimal control problem and the associated algebraic Riccati equation over A.
A Source-to-Source Architecture for User-Defined Optimizations
Schordan, M; Quinlan, D
2003-02-06
The performance of object-oriented applications often suffers from the inefficient use of high-level abstractions provided by underlying libraries. Since these library abstractions are user-defined and not part of the programming language itself, only limited information on their high-level semantics can be leveraged through program analysis by the compiler, and thus most often no appropriate high-level optimizations are performed. In this paper we outline an approach based on source-to-source transformation that allows users to define optimizations which are not performed by the compiler they use. These techniques are intended to be as easy and intuitive as possible for potential users, i.e., designers of object-oriented libraries, who most often have only basic compiler expertise.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
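One way to make "select the best design among candidates" concrete is to score each candidate against the whole effect-size range of interest. The criterion below (smallest fixed design with at least 80% power everywhere in the range) is an illustrative stand-in, not the paper's optimality criterion.

```python
import math

def power(n, delta, sigma=1.0):
    """Power of a one-sided two-arm z-test with n subjects per arm
    (normal approximation, alpha = 0.025)."""
    z_alpha = 1.959964
    z = delta / (sigma * math.sqrt(2.0 / n)) - z_alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Candidate fixed designs, each sized under a different assumed effect size
candidates = [100, 150, 200, 300]

# Robust-power criterion: smallest n whose power stays at or above 80%
# across the whole effect-size range of interest
effect_range = [0.35, 0.4, 0.5]
best = min((n for n in candidates
            if all(power(n, d) >= 0.80 for d in effect_range)),
           default=None)
```

The same scoring loop extends to adaptive designs by replacing `power` with a simulation of the group sequential or sample size re-estimation rule.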
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations, in terms of spatial distribution and number, is called sampling design, and it has traditionally been addressed in the context of model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
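The underlying modularity index can be sketched in its generic Newman form for an unweighted graph; the WDN-oriented and sampling-oriented variants in the paper add pipe weights and task-specific terms not reproduced here.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a node partition of an undirected graph,
    given its adjacency matrix A. High Q means dense modules connected
    by few cuts -- the candidate locations for gates or meters."""
    k = A.sum(axis=1)        # node degrees
    m2 = A.sum()             # twice the edge count
    Q = 0.0
    for c in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        Q += A[np.ix_(idx, idx)].sum() / m2 - (k[idx].sum() / m2) ** 2
    return Q

# Two triangles joined by a single bridge edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
q_good = modularity(A, [0, 0, 0, 1, 1, 1])   # cut at the bridge
q_bad = modularity(A, [0, 0, 1, 1, 0, 1])    # arbitrary split
```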
Towards optimal sampling schedules for integral pumping tests
NASA Astrophysics Data System (ADS)
Leschik, Sebastian; Bayer-Raich, Marti; Musolff, Andreas; Schirmer, Mario
2011-06-01
Conventional point sampling may miss plumes in groundwater due to an insufficient density of sampling locations. The integral pumping test (IPT) method overcomes this problem by increasing the sampled volume. One or more wells are pumped for a long duration (several days) and samples are taken during pumping. The obtained concentration-time series are used for the estimation of average aquifer concentrations Cav and mass flow rates MCP. Although the IPT method is a well accepted approach for the characterization of contaminated sites, no substantiated guideline for the design of IPT sampling schedules (optimal number of samples and optimal sampling times) is available. This study provides a first step towards optimal IPT sampling schedules by a detailed investigation of 30 high-frequency concentration-time series. Different sampling schedules were tested by modifying the original concentration-time series. The results reveal that the relative error in the Cav estimation increases with a reduced number of samples and higher variability of the investigated concentration-time series. Maximum errors of up to 22% were observed for sampling schedules with the lowest number of samples of three. The sampling scheme that relies on constant time intervals ∆t between different samples yielded the lowest errors.
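A minimal simulation of the schedule comparison described above, with a synthetic concentration-time series standing in for the 30 measured ones (the series shape and the 5-day duration are assumptions for illustration):

```python
import numpy as np

# Synthetic high-frequency concentration-time series during pumping
t = np.linspace(0.0, 5.0, 1000)                         # time [days]
c = 2.0 + 0.8 * np.sin(1.3 * t) + 0.3 * np.cos(4.1 * t)  # concentration
c_av_true = c.mean()                                     # reference C_av

def c_av_estimate(n):
    """Estimate C_av from n samples at constant time intervals -- the
    schedule type found to yield the lowest errors."""
    ts = np.linspace(0.0, 5.0, n)
    return np.interp(ts, t, c).mean()

rel_err = {n: abs(c_av_estimate(n) - c_av_true) / c_av_true
           for n in (3, 5, 10, 50)}
```

As in the study, the relative error grows as the number of samples shrinks toward three, and it would grow further for a more variable series.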
Optimizing sparse sampling for 2D electronic spectroscopy
NASA Astrophysics Data System (ADS)
Roeding, Sebastian; Klimovich, Nikita; Brixner, Tobias
2017-02-01
We present a new data acquisition concept using optimized non-uniform sampling and compressed sensing reconstruction in order to substantially decrease the acquisition times in action-based multidimensional electronic spectroscopy. For this we acquire a regularly sampled reference data set at a fixed population time and use a genetic algorithm to optimize a reduced non-uniform sampling pattern. We then apply the optimal sampling for data acquisition at all other population times. Furthermore, we show how to transform two-dimensional (2D) spectra into a joint 4D time-frequency von Neumann representation. This leads to increased sparsity compared to the Fourier domain and to improved reconstruction. We demonstrate this approach by recovering transient dynamics in the 2D spectrum of a cresyl violet sample using just 25% of the originally sampled data points.
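The reconstruction side of this scheme can be sketched with a generic compressed-sensing recovery. Random Gaussian sampling and the ISTA solver below stand in for the paper's optimized non-uniform pattern, genetic algorithm, and von Neumann representation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 16                        # signal length; 25% as many samples
x_true = np.zeros(N)
x_true[[5, 19, 40]] = [1.0, -0.7, 0.5]          # sparse underlying signal
A = rng.standard_normal((M, N)) / np.sqrt(M)    # random sampling operator
y = A @ x_true                                   # the measured data

# ISTA: iterative soft-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1
lam, L = 0.01, np.linalg.norm(A, 2) ** 2         # step size 1/L
x = np.zeros(N)
for _ in range(500):
    g = x + (A.T @ (y - A @ x)) / L              # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
```

The abstract's point is that a sparser transform domain (the von Neumann representation) makes this kind of recovery work from fewer acquired points.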
Design optimality for models defined by a system of ordinary differential equations.
Rodríguez-Díaz, Juan M; Sánchez-León, Guillermo
2014-09-01
Many scientific processes, especially in pharmacokinetics (PK) and pharmacodynamics (PD) studies, are defined by a system of ordinary differential equations (ODE). If there are unknown parameters that need to be estimated, the optimal experimental design approach offers quality estimators for the different objectives of the practitioners. When computing optimal designs the standard procedure uses the linearization of the analytical expression of the ODE solution, which is not feasible when this analytical form does not exist. In this work some methods to solve this problem are described and discussed. Optimal designs for two well-known example models, Iodine and Michaelis-Menten, have been computed using the proposed methods. A thorough study has been done for a specific two-parameter PK model, the biokinetic model of ciprofloxacin and ofloxacin, computing the best designs for different optimality criteria and numbers of points. The designs have been compared according to their efficiency, and the goodness of the designs for the estimation of each parameter has been checked. Although the objectives of the paper are focused on the optimal design field, the methodology can be used as well for a sensitivity analysis of ordinary differential equation systems.
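When no analytical ODE solution exists, the sensitivities needed for the Fisher information can be obtained by finite differences on a numerical solution. A sketch for a Michaelis-Menten elimination model follows; parameter values, dose, and the two candidate designs are assumptions for illustration, not the paper's computed designs.

```python
import numpy as np

def solve_mm(theta, t_grid, c0=10.0):
    """RK4 solution of Michaelis-Menten elimination dC/dt = -Vmax*C/(Km+C)."""
    vmax, km = theta
    f = lambda c: -vmax * c / (km + c)
    cs, c, h = [c0], c0, t_grid[1] - t_grid[0]
    for _ in t_grid[1:]:
        k1 = f(c); k2 = f(c + h * k1 / 2); k3 = f(c + h * k2 / 2); k4 = f(c + h * k3)
        c += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        cs.append(c)
    return np.array(cs)

def d_criterion(times, theta=(1.0, 4.0), eps=1e-4):
    """log det of the Fisher information at the sampling times, with
    sensitivities from central finite differences on the numerical solution."""
    grid = np.linspace(0.0, times[-1], 400)
    idx = [np.abs(grid - s).argmin() for s in times]
    J = np.empty((len(times), 2))
    for j in range(2):
        up, dn = list(theta), list(theta)
        up[j] += eps; dn[j] -= eps
        J[:, j] = (solve_mm(up, grid)[idx] - solve_mm(dn, grid)[idx]) / (2 * eps)
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet

# Compare two candidate 4-point sampling designs by D-optimality
better = max([[1, 2, 4, 8], [0.5, 1, 1.5, 8]], key=d_criterion)
```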
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
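A toy version of the host-versus-vector switching rule: sample whichever population currently gives the higher detection probability. Growth rates, lags, and the sample size are illustrative assumptions, not fitted West Nile virus parameters.

```python
import numpy as np

# Hypothetical exponential-growth phase after introduction (linearized
# dynamics): (growth rate [1/day], lag [days]) per compartment; host
# infections are assumed to lag the vector population but grow faster.
params = {"vector": (0.20, 0.0), "host": (0.25, 5.0)}

def prevalence(t, which, i0=1e-6):
    r, lag = params[which]
    return min(1.0, i0 * np.exp(r * (t - lag)))

def detection_prob(t, n, which):
    """P(at least one positive) from n independent tests at time t."""
    return 1.0 - (1.0 - prevalence(t, which)) ** n

def best_population(t, n=100):
    """Time-dependent rule: sample the population with the higher
    detection probability -- hence the abrupt switching behavior."""
    return max(params, key=lambda w: detection_prob(t, n, w))
```

Early after introduction the rule samples the leading (vector) population, then switches to the faster-growing host population, mirroring the abrupt switch described in the abstract.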
An Optimization-Based Sampling Scheme for Phylogenetic Trees
NASA Astrophysics Data System (ADS)
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for some important special cases. We demonstrate the efficiency and versatility of the method in an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model.
Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.
Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong
2014-01-01
Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL) using alternating optimization between standard SVM solvers with a local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization, inherited from state-of-the-art MKL, is its SVM-tied overall complexity and its simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes updating the kernel weights a difficult, nonconvex quadratic problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. The associated sample-wise alternating optimization method is then conducted, in which the localized kernel weights can be obtained independently by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or in closed form (for the lp-norm). At test time, the kernel weights learnt for the training data are deployed via the nearest-neighbor rule. Hence, to ensure that these weights generalize to the test set, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Lorber, K.; Czaja, A. D.
2014-12-01
Recent studies suggest that Mars contains more potentially life-supporting habitats (either in the present or past) than once thought. The key to finding life on Mars, whether extinct or extant, is to first understand which biomarkers and biosignatures are strictly biogenic in origin. Studying ancient habitats and fossil organisms of the early Earth can help to characterize potential Martian habitats and preserved life. This study, which focuses on the preservation of fossil microorganisms from the Archean Eon, aims to help define in part the science methods needed for a Mars sample return mission, of which the Mars 2020 rover mission is the first step. Reported here are variations in the geochemical and morphological preservation of filamentous fossil microorganisms (microfossils) collected from the 2.5-billion-year-old Gamohaan Formation of the Kaapvaal Craton of South Africa. Samples of carbonaceous chert were collected from outcrop and drill core within ~1 km of each other. Specimens from each location were located within thin sections and their biologic morphologies were confirmed using confocal laser scanning microscopy. Raman spectroscopic analyses documented the carbonaceous nature of the specimens and also revealed variations in the level of geochemical preservation of the kerogen that comprises the fossils. The geochemical preservation of kerogen is principally thought to be a function of thermal alteration, but the regional geology indicates all of the specimens experienced the same thermal history. It is hypothesized that the fossils contained within the outcrop samples were altered by surface weathering, whereas the drill core samples, buried to a depth of ~250 m, were not. This differential weathering is unusual for cherts that have extremely low porosities. Through morphological and geochemical characterization of the earliest known forms of fossilized life on the earth, a greater understanding of the origin and evolution of life on Earth is gained.
Chen, Yu; Dong, Fengqing; Wang, Yonghong
2016-09-01
With determined components and experimental reducibility, the chemically defined medium (CDM) and the minimal chemically defined medium (MCDM) are used in many metabolism and regulation studies. This research aimed to develop the chemically defined medium supporting high cell density growth of Bacillus coagulans, which is a promising producer of lactic acid and other bio-chemicals. In this study, a systematic methodology combining the experimental technique with flux balance analysis (FBA) was proposed to design and simplify a CDM. The single omission technique and single addition technique were employed to determine the essential and stimulatory compounds, before the optimization of their concentrations by the statistical method. In addition, to improve the growth rationally, in silico omission and addition were performed by FBA based on the construction of a medium-size metabolic model of B. coagulans 36D1. Thus, CDMs were developed to obtain considerable biomass production of at least five B. coagulans strains, in which two model strains B. coagulans 36D1 and ATCC 7050 were involved.
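Flux balance analysis of the kind used above reduces to a linear program: maximize biomass flux subject to steady-state mass balance and flux bounds. The three-reaction network below is an invented toy, not the B. coagulans 36D1 model, and serves only to show the mechanics.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (illustrative, not B. coagulans): uptake of A (v1, capped at 10),
# conversion A -> B (v2), biomass drain from B (v3).  FBA maximizes the
# biomass flux subject to steady state S @ v = 0 and the flux bounds.
S = np.array([[1, -1, 0],    # metabolite A: produced by v1, consumed by v2
              [0, 1, -1]])   # metabolite B: produced by v2, consumed by v3
c = [0, 0, -1]               # linprog minimizes, so negate the biomass flux v3
bounds = [(0, 10), (0, None), (0, None)]
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
v_biomass = res.x[2]         # limited by the uptake cap: 10
```

In a medium-design setting, omitting a compound corresponds to closing its uptake bound and re-solving; a drop in achievable biomass flux marks it as essential in silico.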
Optimized method for dissolved hydrogen sampling in groundwater.
Alter, Marcus D; Steiof, Martin
2005-06-01
Dissolved hydrogen concentrations are used to characterize redox conditions of contaminated aquifers. The currently accepted and recommended bubble strip method for hydrogen sampling (Wiedemeier et al., 1998) requires relatively long sampling times and immediate field analysis. In this study we present methods for optimized sampling and for sample storage. The bubble strip sampling method was examined for various flow rates, bubble sizes (headspace volume in the sampling bulb) and two different H2 concentrations. The results were compared to a theoretical equilibration model. Turbulent flow in the sampling bulb was optimized for gas transfer by reducing the inlet diameter. Extraction with a 5 mL headspace volume and flow rates higher than 100 mL/min resulted in 95-100% equilibrium within 10-15 min. To investigate sample storage, gas samples from the sampling bulb were kept in headspace vials for varying periods. Hydrogen samples (4.5 ppmv, corresponding to 3.5 nM in liquid phase) could be stored up to 48 h and 72 h with recovery rates of 100.1+/-2.6% and 94.6+/-3.2%, respectively. These results demonstrate that samples can be stored for 2-3 days before laboratory analysis. The optimized method was tested at a field site contaminated with chlorinated solvents. Duplicate gas samples were stored in headspace vials and analyzed after 24 h. Concentrations were measured in the range of 2.5-8.0 nM, corresponding to known concentrations in reduced aquifers.
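The equilibration behavior referenced above is commonly modeled as a first-order approach to gas-liquid equilibrium. A minimal sketch, assuming a lumped transfer coefficient k (the value below is an illustrative assumption, not one fitted by the study):

```python
from math import exp, log

def fraction_equilibrated(t_min, k_per_min):
    """First-order approach to gas-liquid equilibrium in the sampling bulb:
    f(t) = 1 - exp(-k t).  k lumps flow rate, headspace volume and
    interfacial area; its value here is assumed for illustration."""
    return 1.0 - exp(-k_per_min * t_min)

def time_to_fraction(f, k_per_min):
    """Invert the model: sampling time needed to reach fraction f of equilibrium."""
    return -log(1.0 - f) / k_per_min

# With k = 0.25 /min, 95% equilibrium takes about 12 min, in the range of
# the 10-15 min reported for optimized flow and a 5 mL headspace.
t95 = time_to_fraction(0.95, 0.25)
```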
Wang, Li-Jun; Xiong, Xian-Rong; Zhang, Hui; Li, Yan-Yan; Li, Qian; Wang, Yong-Sheng; Xu, Wen-Bing; Hua, Song; Zhang, Yong
2012-12-01
The objective was to establish an efficient defined culture medium for bovine somatic cell nuclear transfer (SCNT) embryos. In this study, modified synthetic oviductal fluid (mSOF) without bovine serum albumin (BSA) was used as the basic culture medium (BCM), whereas the control medium was BCM with BSA. In Experiment 1, adding polyvinyl alcohol (PVA) to BCM supported development of SCNT embryos to blastocyst stage, but blastocyst formation rate and blastocyst cell number were both lower (P < 0.05) compared to the undefined group (6.1 vs. 32.6% and 67.3 ± 3.4 vs. 109.3 ± 4.5, respectively). In Experiment 2, myo-inositol, a combination of insulin, transferrin and selenium (ITS), and epidermal growth factor (EGF) were added separately to PVA-supplemented BCM. The blastocyst formation rate and blastocyst cell number of those three groups were dramatically improved compared with that of the PVA-supplemented group in Experiment 1 (18.5, 23.0, 24.1 vs. 6.1% and 82.7 ± 2.0, 84.3 ± 4.2, 95.3 ± 3.8 vs. 67.3 ± 3.4, respectively, P < 0.05), but were still lower compared with that of the undefined group (33.7% and 113.8 ± 3.4, P < 0.05). In Experiment 3, when a combination of myo-inositol, ITS and EGF was added to PVA-supplemented BCM, blastocyst formation rate and blastocyst cell number were similar to that of the undefined group (30.4 vs. 31.1% and 109.3 ± 4.4 vs. 112.0 ± 3.6, P > 0.05). In Experiment 4, when blastocysts were cryopreserved and subsequently thawed, there were no significant differences between the optimized defined group (Experiment 3) and the undefined group in survival rate and 24 and 48 h hatching blastocyst rates. Furthermore, there were no significant differences in expression levels of H19, HSP70 and BAX in blastocysts derived from optimized defined medium and undefined medium, although the relative expression abundance of IGF-2 was significantly decreased in the former. In conclusion, a defined culture medium containing PVA, myo-inositol, ITS, and EGF supported in vitro development of bovine SCNT embryos as effectively as the conventional BSA-containing medium.
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a computationally intensive method.
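The core loop of spatial simulated annealing with the MSSD criterion can be sketched compactly. This is a from-scratch illustration, not spsann code; the grid, pattern size, cooling schedule and iteration count are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Prediction grid over the unit square and an initial random sample pattern.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 25),
                            np.linspace(0, 1, 25)), -1).reshape(-1, 2)
pts = rng.random((15, 2))

def mssd(p):
    """Mean squared shortest distance from grid nodes to the sample pattern."""
    d2 = ((grid[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

init_obj = obj = mssd(pts)
best = init_obj
n_iter = 2000
for i in range(n_iter):
    temp = 0.01 * 0.995 ** i                   # acceptance probability decays exponentially
    max_shift = 0.3 * (1 - i / n_iter)         # perturbation distance shrinks linearly
    cand = pts.copy()
    j = rng.integers(len(pts))                 # perturb one sample point at a time
    cand[j] = np.clip(cand[j] + rng.uniform(-max_shift, max_shift, size=2), 0, 1)
    new = mssd(cand)
    if new < obj or rng.random() < np.exp((obj - new) / temp):  # Metropolis rule
        pts, obj = cand, new
        best = min(best, obj)
```

The annealed pattern spreads the points toward even spatial coverage; swapping in a different objective (e.g. a points-per-lag-class penalty) changes only `mssd`.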
A firmware-defined digital direct-sampling NMR spectrometer for condensed matter physics
Pikulski, M.; Shiroka, T.; Ott, H.-R.; Mesot, J.
2014-09-15
We report on the design and implementation of a new digital, broad-band nuclear magnetic resonance (NMR) spectrometer suitable for probing condensed matter. The spectrometer uses direct sampling in both transmission and reception. It relies on a single, commercially-available signal processing device with a user-accessible field-programmable gate array (FPGA). Its functions are defined exclusively by the FPGA firmware and the application software. Besides allowing for fast replication, flexibility, and extensibility, our software-based solution preserves the option to reuse the components for other projects. The device operates up to 400 MHz without, and up to 800 MHz with undersampling, respectively. Digital down-conversion with ±10 MHz passband is provided on the receiver side. The system supports high repetition rates and has virtually no intrinsic dead time. We describe briefly how the spectrometer integrates into the experimental setup and present test data which demonstrates that its performance is competitive with that of conventional designs.
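The receiver-side digital down-conversion described above (mix the directly sampled RF with a numerically controlled oscillator, then low-pass filter to a narrow passband) can be demonstrated in a few lines. Sample rate, tone and LO frequencies, and the boxcar filter are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np

fs = 100e6                                   # sample rate (illustrative)
n = 5000
t = np.arange(n) / fs
rf = np.cos(2 * np.pi * 21e6 * t)            # directly sampled RF tone at 21 MHz

# Digital down-conversion: mix with a numerically controlled oscillator at
# 20 MHz, then low-pass filter; the tone lands at +1 MHz, well inside a
# +/-10 MHz passband like the one quoted for the spectrometer.
lo = np.exp(-2j * np.pi * 20e6 * t)
baseband = np.convolve(rf * lo, np.ones(16) / 16, mode="same")

spec = np.abs(np.fft.fft(baseband))
freqs = np.fft.fftfreq(n, 1 / fs)
f_peak = freqs[np.argmax(spec)]              # close to +1 MHz
```

The image component at -41 MHz is strongly attenuated by the filter, so the spectrum peaks at the wanted +1 MHz offset.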
'Optimal thermal range' in ectotherms: Defining criteria for tests of the temperature-size-rule.
Walczyńska, Aleksandra; Kiełbasa, Anna; Sobczyk, Mateusz
2016-08-01
Thermal performance curves for population growth rate r (a measure of fitness) were estimated over a wide range of temperature for three species: Coleps hirtus (Protista), Lecane inermis (Rotifera) and Aeolosoma hemprichi (Oligochaeta). We measured individual body size and examined if predictions for the temperature-size rule (TSR) were valid for different temperatures. All three organisms investigated follow the TSR, but only over a specific range between minimal and optimal temperatures, while maintenance at temperatures beyond this range showed the opposite pattern in these taxa. We consider minimal and optimal temperatures to be species-specific, and moreover delineate a physiological range outside of which an ectotherm is constrained against displaying size plasticity in response to temperature. This thermal range concept has important implications for general size-temperature studies. Furthermore, the concept of 'operating thermal conditions' may provide a new approach to (i) defining criteria required for investigating and interpreting temperature effects, and (ii) providing a novel interpretation for many cases in which species do not conform to the TSR.
Defining the optimal sequence for the systemic treatment of metastatic breast cancer.
Mestres, J A; iMolins, A B; Martínez, L C; López-Muñiz, J I C; Gil, E C; de Juan Ferré, A; Del Barco Berrón, S; Pérez, Y F; Mata, J G; Palomo, A G; Gregori, J G; Pardo, P G; Mañas, J J I; Hernández, A L; de Dueñas, E M; Jáñez, N M; Murillo, S M; Bofill, J S; Auñón, P Z; Sanchez-Rovira, P
2017-02-01
Metastatic breast cancer is a heterogeneous disease that presents in varying forms, and a growing number of therapeutic options makes it difficult to determine the best choice in each particular situation. When selecting a systemic treatment, it is important to consider the medication administered in the previous stages, such as acquired resistance, type of progression, time to relapse, tumor aggressiveness, age, comorbidities, pre- and post-menopausal status, and patient preferences. Moreover, tumor genomic signatures can identify different subtypes, which can be used to create patient profiles and design specific therapies. However, there is no consensus regarding the best treatment sequence for each subgroup of patients. During the SABCC Congress of 2014, specialized breast cancer oncologists from referral hospitals in Europe met to define patient profiles and to determine specific treatment sequences for each one. Conclusions were then debated in a final meeting in which a relative degree of consensus for each treatment sequence was established. Four patient profiles were defined according to established breast cancer phenotypes: pre-menopausal patients with luminal subtype, post-menopausal patients with luminal subtype, patients with triple-negative subtype, and patients with HER2-positive subtype. A treatment sequence was then defined, consisting of hormonal therapy with tamoxifen, aromatase inhibitors, fulvestrant, and mTOR inhibitors for pre- and post-menopausal patients; a chemotherapy sequence for the first, second, and further lines for luminal and triple-negative patients; and an optimal sequence for treatment with new anti-HER2 therapies. Finally, a document detailing all treatment sequences, that had the agreement of all the oncologists, was drawn up as a guideline and advocacy tool for professionals treating patients with this disease.
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods, such as clip harvesting and direct measurements of Leaf Area Index (LAI), involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The calculated biomass estimates can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300 m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses
Defining an optimal surface chemistry for pluripotent stem cell culture in 2D and 3D
NASA Astrophysics Data System (ADS)
Zonca, Michael R., Jr.
Surface chemistry is critical for growing pluripotent stem cells in an undifferentiated state. There is great potential to engineer the surface chemistry at the nanoscale level to regulate stem cell adhesion. However, the challenge is to identify the optimal surface chemistry of the substrata for ES cell attachment and maintenance. Using a high-throughput polymerization and screening platform, a chemically defined, synthetic polymer grafted coating that supports strong attachment and high expansion capacity of pluripotent stem cells has been discovered using mouse embryonic stem (ES) cells as a model system. This optimal substrate, N-[3-(Dimethylamino)propyl] methacrylamide (DMAPMA) that is grafted on 2D synthetic poly(ether sulfone) (PES) membrane, sustains the self-renewal of ES cells (up to 7 passages). DMAPMA supports cell attachment of ES cells through integrin beta1 in a RGD-independent manner and is similar to another recently reported polymer surface. Next, DMAPMA has been able to be transferred to 3D by grafting to synthetic, polymeric, PES fibrous matrices through both photo-induced and plasma-induced polymerization. These 3D modified fibers exhibited higher cell proliferation and greater expression of pluripotency markers of mouse ES cells than 2D PES membranes. Our results indicated that desirable surfaces in 2D can be scaled to 3D and that both surface chemistry and structural dimension strongly influence the growth and differentiation of pluripotent stem cells. Lastly, the feasibility of incorporating DMAPMA into a widely used natural polymer, alginate, has been tested. Novel adhesive alginate hydrogels have been successfully synthesized by either direct polymerization of DMAPMA and methacrylic acid blended with alginate, or photo-induced DMAPMA polymerization on alginate nanofibrous hydrogels. In particular, DMAPMA-coated alginate hydrogels support strong ES cell attachment, exhibiting a concentration dependency of DMAPMA. This research provides a
Optimized Sample Handling Strategy for Metabolic Profiling of Human Feces.
Gratton, Jasmine; Phetcharaburanin, Jutarop; Mullish, Benjamin H; Williams, Horace R T; Thursz, Mark; Nicholson, Jeremy K; Holmes, Elaine; Marchesi, Julian R; Li, Jia V
2016-05-03
Fecal metabolites are being increasingly studied to unravel the host-gut microbial metabolic interactions. However, there are currently no guidelines for fecal sample collection and storage based on a systematic evaluation of the effect of time, storage temperature, storage duration, and sampling strategy. Here we derive an optimized protocol for fecal sample handling with the aim of maximizing metabolic stability and minimizing sample degradation. Samples obtained from five healthy individuals were analyzed to assess topographical homogeneity of feces and to evaluate storage duration-, temperature-, and freeze-thaw cycle-induced metabolic changes in crude stool and fecal water using a (1)H NMR spectroscopy-based metabolic profiling approach. Interindividual variation was much greater than that attributable to storage conditions. Individual stool samples were found to be heterogeneous and spot sampling resulted in a high degree of metabolic variation. Crude fecal samples were remarkably unstable over time and exhibited distinct metabolic profiles at different storage temperatures. Microbial fermentation was the dominant driver in time-related changes observed in fecal samples stored at room temperature and this fermentative process was reduced when stored at 4 °C. Crude fecal samples frozen at -20 °C manifested elevated amino acids and nicotinate and depleted short chain fatty acids compared to crude fecal control samples. The relative concentrations of branched-chain and aromatic amino acids significantly increased in the freeze-thawed crude fecal samples, suggesting a release of microbial intracellular contents. The metabolic profiles of fecal water samples were more stable compared to crude samples. Our recommendation is that intact fecal samples should be collected, kept at 4 °C or on ice during transportation, and extracted ideally within 1 h of collection, or a maximum of 24 h. Fecal water samples should be extracted from a representative amount (∼15 g
Defining Adult Experiences: Perspectives of a Diverse Sample of Young Adults
Lowe, Sarah R.; Dillon, Colleen O.; Rhodes, Jean E.; Zwiebach, Liza
2013-01-01
This study explored the roles and psychological experiences identified as defining adult moments using mixed methods with a racially, ethnically, and socioeconomically diverse sample of young adults both enrolled and not enrolled in college (N = 726; ages 18-35). First, we evaluated results from a single survey item that asked participants to rate how adult they feel. Consistent with previous research, the majority of participants (56.9%) reported feeling “somewhat like an adult,” and older participants had significantly higher subjective adulthood, controlling for other demographic variables. Next, we analyzed responses from an open-ended question asking participants to describe instances in which they felt like an adult. Responses covered both traditional roles (e.g., marriage, childbearing; 36.1%) and nontraditional social roles and experiences (e.g., moving out of parent’s home, cohabitation; 55.6%). Although we found no differences by age and college status in the likelihood of citing a traditional or nontraditional role, participants who had achieved more traditional roles were more likely to cite them in their responses. In addition, responses were coded for psychological experiences, including responsibility for self (19.0%), responsibility for others (15.3%), self-regulation (31.1%), and reflected appraisals (5.1%). Older participants were significantly more likely to include self-regulation and reflected appraisals, whereas younger participants were more likely to include responsibility for self. College students were more likely than noncollege students to include self-regulation and reflected appraisals. Implications for research and practice are discussed. PMID:23554545
Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.
Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly
2015-09-01
Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.
Optimal allocation of point-count sampling effort
Barker, R.J.; Sauer, J.R.; Link, W.A.
1993-01-01
Both unlimited and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and sampling characteristics of population size. Using information about the relationship between proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data of cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
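The trade-off between count duration and number of points can be sketched for the simplest of the criteria above, maximum expected total count, with an exponential detection function. The total time, travel time and detection rate below are illustrative assumptions, not the paper's Hawaiian forest bird estimates.

```python
import numpy as np

def expected_total(t_count, total_time=300.0, travel_time=5.0, lam=0.2):
    """Expected total detections for a survey: the number of points that fit
    in the fixed total time, multiplied by the per-point detection
    probability p(t) = 1 - exp(-lam * t) (the exponential detection model).
    total_time, travel_time (minutes) and lam are illustrative values."""
    n_points = np.floor(total_time / (t_count + travel_time))
    return n_points * (1.0 - np.exp(-lam * t_count))

durations = np.arange(1.0, 30.0, 0.5)
yields = expected_total(durations)
t_opt = durations[np.argmax(yields)]
```

Very short counts waste time on travel; very long counts saturate the detection curve, so an intermediate duration maximizes the expected total.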
[Optimized sample preparation for metabolome studies on Streptomyces coelicolor].
Li, Yihong; Li, Shanshan; Ai, Guomin; Wang, Weishan; Zhang, Buchang; Yang, Keqian
2014-04-01
Streptomycetes produce many antibiotics and are important model microorganisms for scientific research and antibiotic production. Metabolomics is an emerging technological platform to analyze low molecular weight metabolites in a given organism qualitatively and quantitatively. Compared to other omics platforms, metabolomics has a greater advantage in monitoring metabolic flux distribution and thus identifying key metabolites related to a target metabolic pathway. The present work aims at establishing a rapid, accurate sample preparation protocol for metabolomics analysis in streptomycetes. In the present work, several sample preparation steps, including cell quenching time, cell separation method, conditions for metabolite extraction and metabolite derivatization, were optimized. Then, the metabolic profiles of Streptomyces coelicolor during different growth stages were analyzed by GC-MS. The optimal sample preparation conditions were as follows: time of low-temperature quenching 4 min, cell separation by fast filtration, time of freeze-thaw 45 s/3 min and the conditions of metabolite derivatization at 40 degrees C for 90 min. By using this optimized protocol, 103 metabolites were finally identified from a sample of S. coelicolor, which are distributed among central metabolic pathways (glycolysis, pentose phosphate pathway and citrate cycle), amino acid, fatty acid, nucleotide metabolic pathways, etc. By comparing the temporal profiles of these metabolites, the amino acid and fatty acid metabolic pathways were found to stay at a high level during stationary phase; therefore, these pathways may play an important role during the transition between the primary and secondary metabolism. An optimized protocol of sample preparation was established and applied for metabolomics analysis of S. coelicolor, and 103 metabolites were identified. The temporal profiles of the metabolites reveal that the amino acid and fatty acid metabolic pathways may play an important role in the transition from primary to secondary metabolism.
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
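The expected quadratic cost under randomly timed information updates can be estimated by Monte Carlo with a simple surrogate model. The scalar dynamics, gain and exponential update intervals below are illustrative assumptions; the paper treats a linear airplane model with a Markov interval process and derives the optimal gain rather than fixing one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar surrogate for the vehicle dynamics: dx/dt = a*x + b*u, with the
# control u = -K*x held constant between information updates whose spacing
# is random.  All numbers are illustrative, not from the paper.
a, b, K, q, r = -0.5, 1.0, 0.8, 1.0, 0.1

def expected_cost(n_runs=200, horizon=20.0, mean_dt=0.5, euler_dt=0.01):
    """Monte Carlo estimate of E[ integral (q x^2 + r u^2) dt ] under
    randomly timed zero-order-hold updates of the feedback control."""
    costs = []
    for _ in range(n_runs):
        x, t, cost = 1.0, 0.0, 0.0
        while t < horizon:
            u = -K * x                          # control refreshed at an update
            dt = rng.exponential(mean_dt)       # random time to the next update
            steps = max(1, int(dt / euler_dt))
            for _ in range(steps):              # hold u, integrate the dynamics
                cost += (q * x * x + r * u * u) * euler_dt
                x += (a * x + b * u) * euler_dt
            t += dt
        costs.append(cost)
    return float(np.mean(costs))

J = expected_cost()
```

Sweeping `mean_dt` shows how the achievable cost degrades as information updates become sparser, the trade-off the optimal control law accounts for.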
Multi-resolution imaging with an optimized number and distribution of sampling points.
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo
2014-05-05
We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis.
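The singular-value criterion at the heart of the method can be illustrated on a small linear model. The Fourier-type kernel, geometry and threshold below are invented for illustration; the paper optimizes the sample positions themselves against the singular value behavior of the true operator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reflectivity on n unknown cells, field sampled at m scanning points; for a
# single-frequency configuration the data are linear in the reflectivity:
# data = A @ reflectivity.  The kernel below is an illustrative stand-in.
n, m = 20, 40
cells = np.linspace(-0.5, 0.5, n)
samples = rng.uniform(-0.5, 0.5, m)
A = np.exp(-2j * np.pi * 4 * np.outer(samples, cells))   # m x n linear operator

s = np.linalg.svd(A, compute_uv=False)
# The count of singular values above a noise-related threshold bounds the
# number of independently retrievable reflectivity coefficients; optimizing
# the sampling geometry amounts to flattening this spectrum.
effective_rank = int(np.sum(s > s[0] * 1e-3))
```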
Classifier-Guided Sampling for Complex Energy System Optimization
Backlund, Peter B.; Eddy, John P.
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
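The filtering idea can be sketched with a naive-Bayes stand-in for the Bayesian network classifier: label previously evaluated designs good or bad by a median split, then evaluate only new candidates whose posterior probability of being good clears a threshold. The toy objective, candidate generator and all parameters are assumptions for illustration; actual CGS also generates candidates with evolutionary operators.

```python
import math
import random

random.seed(3)
N_VARS, N_VALUES = 8, 4          # discrete design variables (illustrative problem)

def objective(design):
    """Stand-in for an expensive objective: minimize the sum of squares."""
    return sum(v * v for v in design)

def posterior_good(design, good, bad):
    """Naive-Bayes-style posterior that a design is 'good', built from
    per-variable value counts in already-evaluated designs, with Laplace
    smoothing.  A simplified stand-in for the Bayesian network classifier."""
    lp = math.log((len(good) + 1) / (len(bad) + 1))
    for i, v in enumerate(design):
        g = sum(1 for d in good if d[i] == v) + 1
        b = sum(1 for d in bad if d[i] == v) + 1
        lp += math.log(g / (len(good) + N_VALUES)) - math.log(b / (len(bad) + N_VALUES))
    return 1.0 / (1.0 + math.exp(-lp))

def random_design():
    return tuple(random.choices(range(N_VALUES), k=N_VARS))

evaluated = {d: objective(d) for d in {random_design() for _ in range(30)}}
init_best = min(evaluated.values())
for _ in range(10):
    cut = sorted(evaluated.values())[len(evaluated) // 2]   # median split
    good = [d for d, f in evaluated.items() if f <= cut]
    bad = [d for d, f in evaluated.items() if f > cut]
    for d in (random_design() for _ in range(50)):
        if d not in evaluated and posterior_good(d, good, bad) > 0.5:
            evaluated[d] = objective(d)       # evaluate only promising designs

best = min(evaluated.values())
```

Designs the classifier scores below the threshold never reach the expensive objective, which is where the evaluation savings come from.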
Efficient infill sampling for unconstrained robust optimization problems
NASA Astrophysics Data System (ADS)
Rehman, Samee Ur; Langelaar, Matthijs
2016-08-01
A novel infill sampling criterion is proposed for efficient estimation of the global robust optimum of expensive computer simulation based problems. The algorithm is especially geared towards addressing problems that are affected by uncertainties in design variables and problem parameters. The method is based on constructing metamodels using Kriging and adaptively sampling the response surface via a principle of expected improvement adapted for robust optimization. Several numerical examples and an engineering case study are used to demonstrate the ability of the algorithm to estimate the global robust optimum using a limited number of expensive function evaluations.
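The infill criterion builds on the classical expected-improvement (EI) formula; the paper adapts it for robust optimization, but the plain EI for minimization can be sketched as follows (the candidate points and their Kriging predictions (mu, sigma) are invented for illustration).

```python
import math

def expected_improvement(mu, sigma, best):
    """EI for minimization: E[max(best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu) * cdf + sigma * pdf

# hypothetical candidates with Kriging predictions (mean, standard deviation)
candidates = {"x1": (1.2, 0.1), "x2": (1.5, 1.0), "x3": (0.9, 0.05)}
best_observed = 1.0
infill = max(candidates,
             key=lambda k: expected_improvement(*candidates[k], best_observed))
```

Note that the highly uncertain candidate wins even though its predicted mean is worst, which is exactly the exploration behavior an infill criterion is meant to provide.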
Harlow, Siobán D.; Mitchell, Ellen S.; Crawford, Sybil; Nan, Bin; Little, Roderick; Taffe, John
2008-01-01
Study objective: Criteria for staging the menopausal transition are not established. This paper evaluates five bleeding criteria for defining the early transition and provides empirically based guidance regarding optimal criteria. Design/Setting: Prospective menstrual calendar data from four population-based cohorts: TREMIN, Melbourne Women's Midlife Health Project (MWMHP), Seattle Midlife Women's Health Study (SMWHS), and Study of Women's Health Across the Nation (SWAN), with annual serum follicle stimulating hormone (FSH) from MWMHP and SWAN. Participants: 735 TREMIN, 279 SMWHS, 216 MWMHP, and 2270 SWAN women aged 35-57 at baseline who maintained menstrual calendars. Main outcome measure(s): Age at and time to menopause for: standard deviation >6 and >8 days, persistent difference in consecutive segments >6 days, irregularity, and >=45-day segment; serum follicle stimulating hormone concentration. Results: Most women experienced each of the bleeding criteria. Except for the persistent >6-day difference, which occurs earlier, the criteria occur at a similar age, and at approximately the same age as the late transition, in a large proportion of women. FSH was associated with all proposed markers. Conclusions: The early transition may be best described by ovarian activity consistent with the persistent >6-day difference, but further study is needed, as the other proposed criteria are consistent with later menstrual changes. PMID:17681300
Optimizing passive acoustic sampling of bats in forests
Froidevaux, Jérémy S P; Zellweger, Florian; Bollmann, Kurt; Obrist, Martin K
2014-01-01
Passive acoustic methods are increasingly used in biodiversity research and monitoring programs because they are cost-effective and permit the collection of large datasets. However, the accuracy of the results depends on the bioacoustic characteristics of the focal taxa and their habitat use. In particular, this applies to bats, which exhibit distinct activity patterns in three-dimensionally structured habitats such as forests. We assessed the performance of 21 acoustic sampling schemes with three temporal sampling patterns and seven sampling designs. Acoustic sampling was performed in 32 forest plots, each containing three microhabitats: forest ground, canopy, and forest gap. We compared bat activity, species richness, and sampling effort using species accumulation curves fitted with the Clench equation. In addition, we estimated the sampling costs to undertake the best sampling schemes. We recorded a total of 145,433 echolocation call sequences of 16 bat species. Our results indicated that to generate the best outcome, it was necessary to sample all three microhabitats of a given forest location simultaneously throughout the entire night. Sampling only the forest gaps and the forest ground simultaneously was the second-best choice and proved to be a viable alternative when the number of available detectors is limited. When assessing bat species richness at the 1-km2 scale, the implementation of these sampling schemes at three to four forest locations yielded the highest labor cost-benefit ratios but increased equipment costs. Our study illustrates that multiple passive acoustic sampling schemes require testing based on the target taxa and habitat complexity and should be performed with reference to cost-benefit ratios. Choosing a standardized and replicated sampling scheme is particularly important to optimize the level of precision in inventories, especially when rare or elusive species are expected. PMID:25558363
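A species accumulation curve fitted with the Clench equation, S(n) = a*n / (1 + b*n), has asymptotic richness a/b. A minimal sketch, using a coarse grid-search least-squares fit on synthetic accumulation data (the study fits field data, presumably with a proper nonlinear optimizer; all numbers here are invented):

```python
def clench(n, a, b):
    """Clench accumulation model: S(n) = a*n / (1 + b*n), asymptote a/b."""
    return a * n / (1.0 + b * n)

def fit_clench(efforts, richness):
    """Coarse grid-search least-squares fit (a in 0.1..20, b in 0.01..2)."""
    best = None
    for ai in range(1, 201):
        a = ai * 0.1
        for bi in range(1, 201):
            b = bi * 0.01
            sse = sum((clench(n, a, b) - s) ** 2
                      for n, s in zip(efforts, richness))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# synthetic accumulation data generated from a = 4, b = 0.25 (16-species asymptote)
efforts = list(range(1, 21))
richness = [clench(n, 4.0, 0.25) for n in efforts]
a_hat, b_hat = fit_clench(efforts, richness)
asymptote = a_hat / b_hat    # estimated total richness if sampling continued
```

Comparing observed richness against the fitted asymptote is one way to quantify how complete a given sampling scheme is.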
Simultaneous beam sampling and aperture shape optimization for SPORT
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu
2015-02-15
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and
Optimized robust plasma sampling for glomerular filtration rate studies.
Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L
2012-09-01
In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how a few samples will allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement.
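For a biexponential clearance curve C(t) = A*exp(-a*t) + B*exp(-b*t), the area under the curve from zero to infinity is A/a + B/b, and GFR follows as dose/AUC. A sketch with invented parameter values, cross-checked by trapezoidal integration:

```python
import math

def biexp(t, A, a, B, b):
    """Plasma tracer concentration: A*exp(-a*t) + B*exp(-b*t)."""
    return A * math.exp(-a * t) + B * math.exp(-b * t)

# invented fit results: amplitudes (arbitrary conc. units) and rate constants (1/min)
A, a, B, b = 80.0, 0.08, 20.0, 0.008
auc_analytic = A / a + B / b        # closed-form integral over [0, infinity)

# numerical cross-check by the trapezoidal rule over a long horizon
dt, horizon = 0.5, 4000.0
ts = [i * dt for i in range(int(horizon / dt) + 1)]
ys = [biexp(t, A, a, B, b) for t in ts]
auc_numeric = sum((y0 + y1) * dt / 2.0 for y0, y1 in zip(ys, ys[1:]))

dose = 40000.0                      # injected activity, same arbitrary units
gfr = dose / auc_analytic           # clearance = dose / AUC
```

The study's question is essentially how few time points suffice to pin down both exponential terms of such a curve; the slow term dominates the AUC, which is why late samples (90-180 min and beyond) matter.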
Determining the Bayesian optimal sampling strategy in a hierarchical system.
Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre
2010-09-01
Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
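The preposterior idea, scoring each candidate sampling strategy by the expected reduction in posterior variance, can be sketched with Beta-Binomial component reliabilities. The priors, component names, and test budget below are invented for illustration; the report's method handles full system-of-systems structures, constraints, and costs.

```python
import math

def beta_var(a, b):
    """Variance of a Beta(a, b) reliability distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1.0))

def beta_binom_pmf(k, n, a, b):
    """Predictive probability of k successes in n future tests under a Beta(a, b) prior."""
    log_p = (math.lgamma(a + k) + math.lgamma(b + n - k) - math.lgamma(a + b + n)
             - (math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)))
    return math.comb(n, k) * math.exp(log_p)

def expected_posterior_var(a, b, n):
    """Preposterior variance: expected posterior variance over future outcomes."""
    return sum(beta_binom_pmf(k, n, a, b) * beta_var(a + k, b + n - k)
               for k in range(n + 1))

# two components with different amounts of prior test data; budget of 5 tests
priors = {"subsystem_A": (2.0, 1.0), "subsystem_B": (20.0, 10.0)}
budget = 5
gains = {name: beta_var(a, b) - expected_posterior_var(a, b, budget)
         for name, (a, b) in priors.items()}
best = max(gains, key=gains.get)   # strategy with the largest expected variance reduction
```

As expected, testing the poorly characterized component buys the most variance reduction, which is the intuition the Bayes-risk machinery formalizes.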
Test samples for optimizing STORM super-resolution microscopy.
Metcalf, Daniel J; Edwards, Rebecca; Kumarswami, Neelam; Knight, Alex E
2013-09-06
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of between 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition and density related problems resulting in the 'mislocalization' phenomenon.
A General Investigation of Optimized Atmospheric Sample Duration
Eslinger, Paul W.; Miley, Harry S.
2012-11-28
The International Monitoring System (IMS) consists of up to 80 aerosol and xenon monitoring systems spaced around the world that have collection systems sensitive enough to detect nuclear releases from underground nuclear tests at great distances (CTBT 1996; CTBTO 2011). Although a few of the IMS radionuclide stations are closer together than 1,000 km (such as the stations in Kuwait and Iran), many of them are 2,000 km or more apart. In the absence of a scientific basis for optimizing the duration of atmospheric sampling, scientists have historically used integration times from 24 hours to 14 days for radionuclides (Thomas et al. 1977). This was entirely adequate in the past because the sources of signals were far away and large, meaning that they were smeared over many days by the time they had travelled 10,000 km. The Fukushima event pointed out the unacceptable delay time (72 hours) between the start of sample acquisition and the final data being shipped. A scientific basis for selecting a sample duration time is needed. This report considers plume migration of a non-decaying tracer using archived atmospheric data for 2011 in the HYSPLIT (Draxler and Hess 1998; HYSPLIT 2011) transport model. We present two related results: the temporal duration of the majority of the plume as a function of distance, and the behavior of the maximum plume concentration as a function of sample collection duration and distance. The modeled plume behavior can then be combined with external information about sampler design to optimize sample durations in a sampling network.
Optimal sampling frequency in recording of resistance training exercises.
Bardella, Paolo; Carrasquilla García, Irene; Pozzo, Marco; Tous-Fajardo, Julio; Saez de Villareal, Eduardo; Suarez-Arrones, Luis
2017-03-01
The purpose of this study was to analyse the raw lifting speed collected during four different resistance training exercises to assess the optimal sampling frequency. Eight physically active participants performed sets of Squat Jumps, Countermovement Jumps, Squats and Bench Presses at a maximal lifting speed. A linear encoder was used to measure the instantaneous speed at a 200 Hz sampling rate. Subsequently, the power spectrum of the signal was computed by evaluating its Discrete Fourier Transform. The sampling frequency needed to reconstruct the signals with an error of less than 0.1% was f99.9 = 11.615 ± 2.680 Hz for the exercise exhibiting the largest bandwidth, with the absolute highest individual value being 17.467 Hz. There was no difference between sets in any of the exercises. Using the closest integer sampling frequency value (25 Hz) yielded a reconstruction of the signal up to 99.975 ± 0.025% of its total in the worst case. In conclusion, a sampling rate of 25 Hz or above is more than adequate to record raw speed data and compute power during resistance training exercises, even under the most extreme circumstances during explosive exercises. Higher sampling frequencies provide no increase in the recording precision and may instead have adverse effects on the overall data quality.
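The f99.9 statistic can be reproduced in miniature: compute the power spectrum and find the smallest frequency below which 99.9% of the signal power lies. The toy trace below (a 2 Hz movement plus a weak 8 Hz component) is invented; a direct O(n^2) DFT keeps the sketch dependency-free.

```python
import math

def f_cover(signal, fs, fraction=0.999):
    """Smallest frequency below which `fraction` of the one-sided spectral power lies."""
    n = len(signal)
    powers = []
    for k in range(n // 2 + 1):               # direct one-sided DFT, O(n^2)
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        powers.append(re * re + im * im)
    total = sum(powers)
    acc = 0.0
    for k, p in enumerate(powers):
        acc += p
        if acc >= fraction * total:
            return k * fs / n                  # bin index -> frequency in Hz
    return (n // 2) * fs / n

fs, n = 200.0, 400                             # 200 Hz encoder, 2 s record
sig = [math.sin(2 * math.pi * 2 * t / fs) + 0.05 * math.sin(2 * math.pi * 8 * t / fs)
       for t in range(n)]
```

Even though the weak component carries only ~0.25% of the power, the 99.9% criterion forces the bandwidth estimate out to it, mirroring why the study's f99.9 values exceed the dominant movement frequencies.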
Gossner, Martin M.; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W.; Zytynska, Sharon E.
2016-01-01
There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type, and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when genetic analysis
Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Thompson, David R.; Hsiang, Kian
2013-01-01
This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function from each potential sense. These candidate sensings were then crosschecked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
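For a Gaussian state, the information gain of a measurement is the entropy reduction 0.5*ln(prior_var/posterior_var), which can then be traded against operations cost. A one-dimensional sketch with invented candidate sensings and an arbitrary cost weight (the actual system models full covariances and vehicle-specific feasibility):

```python
import math

def entropy_gauss(var):
    """Differential entropy of a 1-D Gaussian with the given variance."""
    return 0.5 * math.log(2.0 * math.pi * math.e * var)

def info_gain(prior_var, noise_var):
    """Entropy reduction from one Gaussian measurement of a Gaussian state."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    return entropy_gauss(prior_var) - entropy_gauss(post_var)

# hypothetical candidate sensings: (prior variance at target, sensor noise variance, ops cost)
candidates = [("site_A", 4.0, 1.0, 2.0),
              ("site_B", 1.0, 1.0, 0.5),
              ("site_C", 9.0, 4.0, 5.0)]
cost_weight = 0.2
scored = {name: info_gain(pv, nv) - cost_weight * cost
          for name, pv, nv, cost in candidates}
best = max(scored, key=scored.get)
```

The uncertain-but-cheap-to-observe site wins; a noisy sensor at an even more uncertain site loses once its operations cost is charged.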
Validation of genetic algorithm-based optimal sampling for ocean data assimilation
NASA Astrophysics Data System (ADS)
Heaney, Kevin D.; Lermusiaux, Pierre F. J.; Duda, Timothy F.; Haley, Patrick J.
2016-10-01
Regional ocean models are capable of forecasting conditions for usefully long intervals of time (days) provided that initial and ongoing conditions can be measured. In resource-limited circumstances, the placement of sensors in optimal locations is essential. Here, a nonlinear optimization approach to determine optimal adaptive sampling that uses the genetic algorithm (GA) method is presented. The method determines sampling strategies that minimize a user-defined physics-based cost function. The method is evaluated using identical twin experiments, comparing hindcasts from an ensemble of simulations that assimilate data selected using the GA adaptive sampling and other methods. For skill metrics, we employ the reduction of the ensemble root mean square error (RMSE) between the "true" data-assimilative ocean simulation and the different ensembles of data-assimilative hindcasts. A five-glider optimal sampling study is set up for a 400 km × 400 km domain in the Middle Atlantic Bight region, along the New Jersey shelf-break. Results are compared for several ocean and atmospheric forcing conditions.
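A minimal genetic algorithm for sensor placement conveys the idea: individuals are candidate station subsets, and fitness is a physics-free stand-in cost (worst-case distance to the nearest sensor on a 1-D transect) with truncation selection, pooled crossover, and point mutation. Everything here is a toy stand-in for the paper's ensemble-based, physics-derived cost function.

```python
import random

def coverage_cost(field, sensors):
    """Worst-case distance from any field point to its nearest sensor (to minimize)."""
    return max(min(abs(p - s) for s in sensors) for p in field)

def ga_place_sensors(field, k, pop_size=30, generations=80, seed=7):
    """Minimal GA: truncation selection, pooled crossover, single-site mutation."""
    rng = random.Random(seed)
    pop = [frozenset(rng.sample(field, k)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda s: coverage_cost(field, s))
        parents = ranked[:pop_size // 2]
        children = list(parents)                    # elitism: keep the best half
        while len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            pool = list(p1 | p2)
            child = set(rng.sample(pool, min(k, len(pool))))
            if rng.random() < 0.3 and child:        # mutation: relocate one sensor
                child.discard(rng.choice(sorted(child)))
            while len(child) < k:                   # repair to exactly k sites
                child.add(rng.choice(field))
            children.append(frozenset(child))
        pop = children
    return min(pop, key=lambda s: coverage_cost(field, s))

field = list(range(100))       # candidate stations along a 1-D transect
best = ga_place_sensors(field, 5)
```

With five sensors on a 100-point transect, the ideal worst-case distance is 10, so the GA's result can be judged against a known optimum here; the paper instead judges designs by hindcast RMSE reduction.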
Chenel, Marylore; Ogungbenro, Kayode; Duval, Vincent; Laveille, Christian; Jochemsen, Roeline; Aarons, Leon
2005-12-01
The objective of this paper is to determine optimal blood sampling time windows for the estimation of pharmacokinetic (PK) parameters by a population approach within the clinical constraints. A population PK model was developed to describe a reference phase II PK dataset. Using this model and the parameter estimates, D-optimal sampling times were determined by optimising the determinant of the population Fisher information matrix (PFIM) using PFIM 1.2 and the modified Fedorov exchange algorithm. Optimal sampling time windows were then determined by allowing the D-optimal windows design to result in a specified level of efficiency when compared to the fixed-times D-optimal design. The best results were obtained when K(a) and IIV on K(a) were fixed. Windows were determined using this approach assuming a 90% level of efficiency and uniform sample distribution. Four optimal sampling time windows were determined as follows: at trough, between 22 h and the next drug administration; between 2 and 4 h after dose for all patients; and, for 1/3 of the patients only, 2 sampling time windows between 4 and 10 h after dose, equal to [4 h-5 h 05] and [9 h 10-10 h]. This work permitted the determination of an optimal design, with suitable sampling time windows, which was then evaluated by simulations. The sampling time windows will be used to define the sampling schedule in a prospective phase II study.
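D-optimal design maximizes the determinant of the Fisher information matrix. The paper optimizes the population FIM with PFIM; the sketch below shows the same principle on a simple fixed-effects quadratic regression, where exhaustive search over a time grid recovers the textbook endpoints-plus-midpoint design. The model and grid are illustrative, not the paper's PK model.

```python
import itertools

def fim_det(times):
    """det(X'X) for the linear model y = b0 + b1*t + b2*t^2 (3x3 cofactor expansion)."""
    X = [[1.0, t, t * t] for t in times]
    M = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(3)]
         for i in range(3)]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

grid = [t * 0.5 for t in range(0, 49)]     # candidate times 0..24 h in 0.5 h steps
best = max(itertools.combinations(grid, 3), key=fim_det)
```

Sampling windows then amount to relaxing each optimal time into an interval whose design efficiency, det ratio raised to 1/(number of parameters), stays above a chosen threshold such as 90%.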
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between
Optimization of Evans blue quantitation in limited rat tissue samples
NASA Astrophysics Data System (ADS)
Wang, Hwai-Lee; Lai, Ted Weita
2014-10-01
Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration based on a standard 96-well plate reader. First, ethanol ensured a consistent optic path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements as a result of biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful in the detection of EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting.
Optimal CCD readout by digital correlated double sampling
NASA Astrophysics Data System (ADS)
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
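The core benefit of DCDS under white noise can be simulated directly: averaging N samples of the reset and signal levels before differencing reduces the output noise by 1/sqrt(N). This sketch models white read noise only; the paper's analysis also covers correlated (e.g., 1/f) noise, where the gain saturates. All numbers are illustrative.

```python
import random
import statistics

def dcds_estimate(reset, signal):
    """Digital CDS: difference between the averaged signal and reset windows."""
    return sum(signal) / len(signal) - sum(reset) / len(reset)

def readout_noise(n_samples, step=100.0, noise=5.0, trials=2000, seed=3):
    """Std-dev of the CDS output over many simulated pixels with white read noise."""
    rng = random.Random(seed)
    outs = []
    for _ in range(trials):
        reset = [rng.gauss(0.0, noise) for _ in range(n_samples)]
        signal = [rng.gauss(step, noise) for _ in range(n_samples)]
        outs.append(dcds_estimate(reset, signal))
    return statistics.pstdev(outs)

s1 = readout_noise(1)      # single-sample CDS: noise ~ 5*sqrt(2) ~ 7.07
s16 = readout_noise(16)    # 16-sample averaging: noise ~ 7.07/4 ~ 1.77
```

The factor-of-sqrt(2) above the per-sample noise comes from differencing two noisy estimates; the closed-form SNR expression in the paper generalizes this to arbitrary digital filters and noise spectra.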
NSECT sinogram sampling optimization by normalized mutual information
NASA Astrophysics Data System (ADS)
Viana, Rodrigo S.; Galarreta-Valverde, Miguel A.; Mekkaoui, Choukri; Yoriyaz, Hélio; Jackowski, Marcel P.
2015-03-01
Neutron Stimulated Emission Computed Tomography (NSECT) is an emerging noninvasive imaging technique that measures the distribution of isotopes in biological tissue using the fast-neutron inelastic scattering reaction. As a high-energy neutron beam illuminates the sample, the excited nuclei emit gamma rays whose energies are unique to the emitting nuclei. Tomographic images of each element in the spectrum can then be reconstructed to represent the spatial distribution of elements within the sample using a first-generation tomographic scan. NSECT's high radiation dose deposition, however, requires a sampling strategy that can yield maximum image quality under a reasonable radiation dose. In this work, we introduce an NSECT sinogram sampling technique based on the Normalized Mutual Information (NMI) of the reconstructed images. By applying the Radon Transform to the ground-truth image obtained from a carbon-based synthetic phantom, different NSECT sinogram configurations were simulated and compared using the NMI as a similarity measure. The proposed methodology was also applied to NSECT images acquired using MCNP5 Monte Carlo simulations of the same phantom to validate our strategy. Results show that NMI can be used to robustly predict the quality of the reconstructed NSECT images, leading to an optimal NSECT acquisition and a minimal absorbed dose by the patient.
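Normalized mutual information between a reconstruction and a reference can be computed from histogram entropies as NMI = (H(A) + H(B)) / H(A,B), which equals 2 for identical images and is near 1 for independent ones. A sketch on synthetic flattened "images" (the study applies this to reconstructed NSECT slices; bin count and noise level are arbitrary):

```python
import math
import random
from collections import Counter

def entropy(counts, total):
    """Shannon entropy (nats) from histogram counts."""
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def nmi(img_a, img_b, bins=8):
    """Normalized mutual information (H(A)+H(B))/H(A,B); 2 = identical, ~1 = independent."""
    qa = [min(int(v * bins), bins - 1) for v in img_a]   # intensities assumed in [0, 1]
    qb = [min(int(v * bins), bins - 1) for v in img_b]
    n = len(qa)
    ha = entropy(Counter(qa).values(), n)
    hb = entropy(Counter(qb).values(), n)
    hab = entropy(Counter(zip(qa, qb)).values(), n)
    return (ha + hb) / hab if hab > 0 else 2.0

rng = random.Random(5)
truth = [rng.random() for _ in range(5000)]          # stands in for the ground truth
noisy = [min(1.0, max(0.0, v + rng.gauss(0.0, 0.05))) for v in truth]  # good reconstruction
unrelated = [rng.random() for _ in range(5000)]      # failed reconstruction
```

Ranking candidate sinogram configurations by the NMI of their reconstructions against a reference is the essence of the proposed dose-versus-quality trade-off.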
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used. The first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
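The MMSD criterion under spatial simulated annealing can be sketched as follows: perturb one sampling location at a time, accept by the Metropolis rule, and cool the temperature. The prediction grid, jitter size, and cooling schedule below are illustrative, and field boundaries and sampling constraints are ignored; the MSANOS implementation handles all of these.

```python
import math
import random

def mmsd(field, design):
    """Mean of the shortest distances from each field point to its nearest sample."""
    return sum(min(math.dist(p, s) for s in design) for p in field) / len(field)

def spatial_sa(field, n_points, steps=3000, t0=0.1, seed=11):
    """Spatial simulated annealing: single-point jitter, Metropolis acceptance."""
    rng = random.Random(seed)
    design = [(rng.random(), rng.random()) for _ in range(n_points)]
    cost = mmsd(field, design)
    for step in range(steps):
        temp = max(t0 * (0.995 ** step), 1e-12)     # geometric cooling law
        cand = list(design)
        i = rng.randrange(n_points)
        x, y = cand[i]
        cand[i] = (min(1.0, max(0.0, x + rng.gauss(0.0, 0.05))),
                   min(1.0, max(0.0, y + rng.gauss(0.0, 0.05))))
        c = mmsd(field, cand)
        if c < cost or rng.random() < math.exp(-(c - cost) / temp):
            design, cost = cand, c
    return design, cost

field = [(i / 9.0, j / 9.0) for i in range(10) for j in range(10)]  # prediction grid
design, cost = spatial_sa(field, 5)
```

Swapping mmsd for a weighted mean (MWMSD) or a kriging-variance evaluation (MAOKV) changes only the objective; the annealing loop is unchanged, which is why the three criteria can be combined in one framework.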
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are widely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and the DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms, so the search for the set of parameters that optimizes the samples' sensitivity is usually empirical and very time-consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities.
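The surrogate-plus-search pattern described above can be sketched as follows. Since the measured GMI data and the trained MLP are not available here, a smooth analytic function stands in for the neural-network surrogate, and its peak location is an arbitrary invention; the GA operators (tournament selection, uniform crossover, Gaussian mutation, elitism) are generic choices, not necessarily the authors' exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the trained MLP surrogate (hypothetical): a smooth
# function of (sample length, field, DC level, frequency), all
# normalized to [0, 1], peaking at an arbitrary "optimal" setting.
TARGET = np.array([0.3, 0.7, 0.5, 0.2])
def surrogate(x):
    return -np.sum((x - TARGET) ** 2, axis=-1)

def ga_maximize(f, dim=4, pop_size=40, gens=80, sigma=0.05):
    """Generic GA: maximize f over the unit hypercube [0, 1]^dim."""
    pop = rng.random((pop_size, dim))
    for _ in range(gens):
        fit = f(pop)
        elite = pop[np.argmax(fit)].copy()          # keep the best as-is
        # tournament selection between random pairs
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[a] > fit[b])[:, None], pop[a], pop[b])
        # uniform crossover followed by Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random(parents.shape) < 0.5
        pop = np.where(mask, parents, mates)
        pop = np.clip(pop + rng.normal(0, sigma, pop.shape), 0, 1)
        pop[0] = elite                              # elitism
    fit = f(pop)
    return pop[np.argmax(fit)], fit.max()

x_best, f_best = ga_maximize(surrogate)
```

In the paper's setting, `surrogate` would be the MLP's prediction of impedance phase sensitivity, so each GA evaluation is cheap even though each physical measurement is not.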
Optimal probes for withdrawal of uncontaminated fluid samples
NASA Astrophysics Data System (ADS)
Sherwood, J. D.
2005-08-01
Withdrawal of fluid by a composite probe pushed against the face z = 0 of a porous half-space z > 0 is modeled assuming incompressible Darcy flow. The probe is circular, of radius a, with an inner sampling section of radius αa and a concentric outer guard probe αa
Horowitz, A.J.; Lum, K.R.; Garbarino, J.R.; Hall, G.E.M.; Lemieux, C.; Demas, C.R.
1996-01-01
Field and laboratory experiments indicate that a number of factors associated with filtration other than just pore size (e.g., diameter, manufacturer, volume of sample processed, amount of suspended sediment in the sample) can produce significant variations in the 'dissolved' concentrations of such elements as Fe, Al, Cu, Zn, Pb, Co, and Ni. The bulk of these variations result from the inclusion/exclusion of colloidally associated trace elements in the filtrate, although dilution and sorption/desorption from filters also may be factors. Thus, dissolved trace element concentrations quantitated by analyzing filtrates generated by processing whole water through similar pore-sized filters may not be equal or comparable. As such, simple filtration of unspecified volumes of natural water through unspecified 0.45-µm membrane filters may no longer represent an acceptable operational definition for a number of dissolved chemical constituents.
NASA Astrophysics Data System (ADS)
Heckmann, Tobias; Gegg, Katharina; Becht, Michael
2013-04-01
Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used far more frequently than heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that best explain the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim to find empirically an "optimal" sample size in order to avoid two problems: First, a sample that is too large will violate the independent-sample assumption, as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
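The upper bound on sample size sketched in the abstract, spacing must stay above the autocorrelation range, can be approximated with a back-of-envelope formula. The study derives its threshold from an actual variogram analysis; here, purely for illustration, the Clark-Evans expected nearest-neighbour distance under complete spatial randomness (0.5·√(A/n)) stands in for the average inter-sample distance, and the 100 km² area and 250 m range are hypothetical numbers.

```python
import math

def max_sample_size(area_km2, autocorr_range_km):
    """Largest sample size whose expected nearest-neighbour spacing
    under complete spatial randomness, 0.5 * sqrt(A / n), still
    exceeds the variogram range of the predictor variables."""
    # 0.5 * sqrt(A / n) >= r  <=>  n <= A / (4 r^2)
    return math.floor(area_km2 / (4.0 * autocorr_range_km ** 2))

# e.g. a 100 km^2 catchment, predictors autocorrelated up to 250 m
n_max = max_sample_size(100.0, 0.25)
```

Anything much above `n_max` forces sampled cells closer together than the autocorrelation range, violating the independence assumption the abstract warns about.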
Arbellay, Estelle; Corona, Christophe; Stoffel, Markus; Fonti, Patrick; Decaulne, Armelle
2012-01-01
Vessels of broad-leaved trees have been analyzed to study how trees deal with various environmental factors. Cambial injury, in particular, has been reported to induce the formation of narrower conduits. Yet, little or no effort has been devoted to the elaboration of vessel sampling strategies for retrospective injury detection based on vessel lumen size reduction. To fill this methodological gap, four wounded individuals each of grey alder (Alnus incana (L.) Moench) and downy birch (Betula pubescens Ehrh.) were harvested in an avalanche path. Earlywood vessel lumina were measured and compared for each tree between the injury ring built during the growing season following wounding and the control ring laid down the previous year. Measurements were performed along a 10 mm wide radial strip, located directly next to the injury. Specifically, this study aimed at (i) investigating the intra-annual duration and local extension of vessel narrowing close to the wound margin and (ii) identifying an adequate sample of earlywood vessels (number and intra-ring location of cells) attesting to cambial injury. Based on the results of this study, we recommend analyzing at least 30 vessels in each ring. Within the 10 mm wide segment of the injury ring, wound-induced reduction in vessel lumen size did not fade with increasing radial and tangential distances, but we nevertheless advise favoring early earlywood vessels located closest to the injury. These findings, derived from two species widespread across subarctic, mountainous, and temperate regions, will assist retrospective injury detection in Alnus, Betula, and other diffuse-porous species as well as future related research on hydraulic implications after wounding.
Lin, Ming; Chen, Rong; Liang, Jie
2008-02-28
Proteins contain many voids, which are unfilled spaces enclosed in the interior. A few of them have shapes compatible with ligands and substrates and are important for protein functions. An important general question is how the need for maintaining functional voids is influenced by, and affects, other aspects of protein structure and properties (e.g., protein folding stability, kinetic accessibility, and evolutionary selection pressure). In this paper, we examine in detail the effects of maintaining voids of different shapes and sizes using two-dimensional lattice models. We study the propensity for conformations to form a void of specific shape, which is related to the entropic cost of void maintenance. We also study the locations where voids of a specific shape and size tend to form, and the influence of compactness on the formation of such voids. As enumeration is infeasible for long polymer chains, a key development in this work is the design of a novel sequential Monte Carlo strategy for generating a large number of sample conformations under very constraining restrictions. Our method is validated by comparing results obtained from sampling and from enumeration for short polymer chains. We succeeded in accurately estimating the entropic cost of void maintenance, with and without an increasing number of restrictive conditions, such as loops of fixed length forming the void wall, additionally with a fixed starting position in the sequence. Additionally, we have identified the key structural properties of voids that are important in determining the entropic cost of void formation. We have further developed a parametric model to quantitatively predict void entropy. Our model is highly effective, and these results indicate that voids representing functional sites can be used as an improved model for studying the evolution of protein functions and how protein function relates to protein stability.
Defining the Enterovirus Diversity Landscape of a Fecal Sample: A Methodological Challenge?
Faleye, Temitope Oluwasegun Cephas; Adewumi, Moses Olubusuyi; Adeniji, Johnson Adekunle
2016-01-12
Enteroviruses are a group of over 250 naked icosahedral virus serotypes that have been associated with clinical conditions that range from intrauterine enterovirus transmission with fatal outcome through encephalitis and meningitis, to paralysis. Classically, enterovirus detection was done by assaying for the development of the classic enterovirus-specific cytopathic effect in cell culture. Historically, the isolates were then identified by a neutralization assay. More recently, identification has been done by reverse transcriptase-polymerase chain reaction (RT-PCR). However, in recent times, there is a move towards direct detection and identification of enteroviruses from clinical samples using the cell culture-independent RT semi-nested PCR (RT-snPCR) assay. This RT-snPCR procedure amplifies the VP1 gene, which is then sequenced and used for identification. However, while cell culture-based strategies tend to show a preponderance of certain enterovirus species depending on the cell lines included in the isolation protocol, the RT-snPCR strategies tilt in a different direction. Consequently, it is becoming apparent that the diversity observed in certain enterovirus species, e.g., enterovirus species B (EV-B), might not be because they are the most evolutionarily successful. Rather, it might stem from cell line-specific bias accumulated over several years of use of the cell culture-dependent isolation protocols. Furthermore, it might also be a reflection of the impact of the relative genome concentration on the result of pan-enterovirus VP1 RT-snPCR screens used during the identification of cell culture isolates. This review highlights the impact of these two processes on the current diversity landscape of enteroviruses and the need to re-assess enterovirus detection and identification algorithms in a bid to better balance our understanding of the enterovirus diversity landscape.
Damage identification in beams using speckle shearography and an optimal spatial sampling
NASA Astrophysics Data System (ADS)
Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.
2016-10-01
Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
Optimization for Peptide Sample Preparation for Urine Peptidomics
Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan; Dai, Hong; Qian, Weijun; Camp, David G.; Sarwal, Minnie M.
2014-02-25
when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis due to the optimized sample clean up that provided improved experimental inference from the confidently identified peptides.
NASA Astrophysics Data System (ADS)
Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.
2016-06-01
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
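The "maximum information content, minimum redundancy" selection described above can be sketched as a greedy subset search over discretized series. This is an abstract-level illustration, not the paper's methodology: the candidate water-level series, their quantization, and the scoring (joint entropy minus pairwise mutual information with already-selected sections) are all simplifying assumptions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def H(*cols):
    """Joint Shannon entropy (bits) of one or more discrete series."""
    joint = list(zip(*cols))
    n = len(joint)
    return -sum(c / n * np.log2(c / n) for c in Counter(joint).values())

def greedy_select(series, k):
    """Greedily pick k series, scoring candidate j against the chosen
    set S as H(S, j) - sum_{i in S} I(i; j): information minus redundancy."""
    chosen = []
    while len(chosen) < k:
        best_j, best_score = None, -np.inf
        for j in range(len(series)):
            if j in chosen:
                continue
            info = H(*(series[i] for i in chosen), series[j])
            red = sum(H(series[i]) + H(series[j]) - H(series[i], series[j])
                      for i in chosen)
            if info - red > best_score:
                best_j, best_score = j, info - red
        chosen.append(best_j)
    return chosen

# Hypothetical quantized water levels at 6 candidate cross sections.
levels = [rng.integers(0, q, 200) for q in (2, 3, 4, 5, 6, 7)]
picked = greedy_select(levels, 3)
```

The first pick is simply the highest-entropy candidate; subsequent picks trade information gain against overlap with sections already in the subset.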
Cuypers, Koen; Thijs, Herbert; Meesen, Raf L J
2014-01-01
The goal of this study was to optimize the transcranial magnetic stimulation (TMS) protocol for acquiring a reliable estimate of corticospinal excitability (CSE) using single-pulse TMS. Moreover, the minimal number of stimuli required to obtain a reliable estimate of CSE was investigated. In addition, the effect of two frequently used stimulation intensities [110% relative to the resting motor threshold (rMT) and 120% rMT] and gender was evaluated. Thirty-six healthy young subjects (18 males and 18 females) participated in a double-blind crossover procedure. They received 2 blocks of 40 consecutive TMS stimuli at either 110% rMT or 120% rMT in a randomized order. Based upon our data, we advise that at least 30 consecutive stimuli are required to obtain the most reliable estimate for CSE. Stimulation intensity and gender had no significant influence on CSE estimation. In addition, our results revealed that for subjects with a higher rMT, fewer consecutive stimuli were required to reach a stable estimate of CSE. The current findings can be used to optimize the design of similar TMS experiments.
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
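The Latin hypercube sampling step used to generate the patient scenarios can be sketched compactly. This is a generic implementation over the unit hypercube, assuming the usual construction (one jittered point per axis-aligned stratum, columns permuted independently); mapping to the actual HCV model parameter ranges would be an additional affine rescaling.

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n, dim):
    """n points in [0, 1)^dim with exactly one point per stratum
    [i/n, (i+1)/n) along every axis."""
    # stratified positions: point i jittered inside its stratum
    u = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    # independently permute each column to decouple the axes
    for d in range(dim):
        u[:, d] = u[rng.permutation(n), d]
    return u

# e.g. 100 virtual patient scenarios over 5 model parameters
pts = latin_hypercube(100, 5)
```

Compared with plain random sampling, every one-dimensional marginal is guaranteed to be evenly covered, which is why the technique is popular for exploring parameter uncertainty with few runs.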
Chapman, Robert F; Karlsen, Trine; Resaland, Geir K; Ge, R-L; Harber, Matthew P; Witkowski, Sarah; Stray-Gundersen, James; Levine, Benjamin D
2014-03-15
Chronic living at altitudes of ∼2,500 m causes consistent hematological acclimatization in most, but not all, groups of athletes; however, responses of erythropoietin (EPO) and red cell mass to a given altitude show substantial individual variability. We hypothesized that athletes living at higher altitudes would experience greater improvements in sea level performance, secondary to greater hematological acclimatization, compared with athletes living at lower altitudes. After 4 wk of group sea level training and testing, 48 collegiate distance runners (32 men, 16 women) were randomly assigned to one of four living altitudes (1,780, 2,085, 2,454, or 2,800 m). All athletes trained together daily at a common altitude from 1,250-3,000 m following a modified live high-train low model. Subjects completed hematological, metabolic, and performance measures at sea level, before and after altitude training; EPO was assessed at various time points while at altitude. On return from altitude, 3,000-m time trial performance was significantly improved in groups living at the middle two altitudes (2,085 and 2,454 m), but not in groups living at 1,780 and 2,800 m. EPO was significantly higher in all groups at 24 and 48 h, but returned to sea level baseline after 72 h in the 1,780-m group. Erythrocyte volume was significantly higher within all groups after return from altitude and was not different between groups. These data suggest that, when completing a 4-wk altitude camp following the live high-train low model, there is a target altitude between 2,000 and 2,500 m that produces an optimal acclimatization response for sea level performance.
Dudefoi, William; Terrisse, Hélène; Richard-Plouet, Mireille; Gautron, Eric; Popa, Florin; Humbert, Bernard; Ropers, Marie-Hélène
2017-02-14
Titanium dioxide (TiO2) is a transition metal oxide widely used as a white pigment in various applications, including food. Due to the classification of TiO2 nanoparticles by the International Agency for Research on Cancer as potentially harmful for humans by inhalation, the presence of nanoparticles in food products needed to be confirmed by a set of independent studies. Seven samples of food-grade TiO2 (E171) were extensively characterised for their size distribution, crystallinity and surface properties by the currently recommended methods. All investigated E171 samples contained a fraction of nanoparticles, however, below the threshold defining the labelling of nanomaterial. On the basis of these results and a statistical analysis, E171 food-grade TiO2 totally differs from the reference material P25, confirming the few published data on this kind of particle. Therefore, the reference material P25 does not appear to be the most suitable model to study the fate of food-grade TiO2 in the gastrointestinal tract. The criteria currently used to obtain a representative food-grade sample of TiO2 are the following: (1) crystalline-phase anatase, (2) a powder with an isoelectric point very close to 4.1, (3) a fraction of nanoparticles between 15% and 45%, and (4) a low specific surface area, around 10 m² g⁻¹.
Estimating optimal sampling unit sizes for satellite surveys
NASA Technical Reports Server (NTRS)
Hallum, C. R.; Perry, C. R., Jr.
1984-01-01
This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that minimize cost. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment, where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis, while conservatively guarding against expected component error variances.
TOMOGRAPHIC RECONSTRUCTION OF DIFFUSION PROPAGATORS FROM DW-MRI USING OPTIMAL SAMPLING LATTICES
Ye, Wenxing; Entezari, Alireza; Vemuri, Baba C.
2010-01-01
This paper exploits the power of optimal sampling lattices in tomography based reconstruction of the diffusion propagator in diffusion weighted magnetic resonance imaging (DW-MRI). Optimal sampling leads to increased accuracy of the tomographic reconstruction approach introduced by Pickalov and Basser [1]. Alternatively, the optimal sampling geometry allows for further reducing the number of samples while maintaining the accuracy of reconstruction of the diffusion propagator. The optimality of the proposed sampling geometry comes from the information theoretic advantages of sphere packing lattices in sampling multidimensional signals. These advantages are in addition to those accrued from the use of the tomographic principle used here for reconstruction. We present comparative results of reconstructions of the diffusion propagator using the Cartesian and the optimal sampling geometry for synthetic and real data sets. PMID:20596298
Analysis and Optimization of Bulk DNA Sampling with Binary Scoring for Germplasm Characterization
Reyes-Valdés, M. Humberto; Santacruz-Varela, Amalio; Martínez, Octavio; Simpson, June; Hayano-Kanashiro, Corina; Cortés-Romero, Celso
2013-01-01
The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus. PMID:24260321
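The decomposition of marker informativeness into "diversity minus noise" described above can be written out directly for a single binary-scored allele. Assuming equiprobable germplasm sources and treating the bulk score as a Bernoulli variable per source (a simplification of the paper's framework), the mutual information is the entropy of the pooled score minus the mean within-source entropy:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits, safe at p = 0 or 1."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def marker_information(presence_probs):
    """Mutual information (bits) between the binary bulk score of one
    allele and the germplasm source, for equiprobable sources:
    I = H(mean p) - mean H(p_k), i.e. diversity minus noise."""
    p = np.asarray(presence_probs, float)
    return h2(p.mean()) - h2(p).mean()

# Allele always present in source A's bulks, never in source B's:
# one full bit of discrimination per scored bulk.
i_perfect = marker_information([1.0, 0.0])
i_useless = marker_information([0.5, 0.5])
```

An allele that appears with the same probability in every source contributes pure noise (zero information), which is exactly the dilution problem the divided-bulk design is meant to control.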
Optimization of sample size in controlled experiments: the CLAST rule.
Botella, Juan; Ximénez, Carmen; Revuelta, Javier; Suero, Manuel
2006-02-01
Sequential rules are explored in the context of null hypothesis significance testing. Several studies have demonstrated that the fixed-sample stopping rule, in which the sample size used by researchers is determined in advance, is less practical and less efficient than sequential stopping rules. It is proposed that a sequential stopping rule called CLAST (composite limited adaptive sequential test) is a superior variant of COAST (composite open adaptive sequential test), a sequential rule proposed by Frick (1998). Simulation studies are conducted to test the efficiency of the proposed rule in terms of sample size and power. Two statistical tests are used: the one-tailed t test of mean differences with two matched samples, and the chi-square independence test for twofold contingency tables. The results show that the CLAST rule is more efficient than the COAST rule and reflects more realistically the practice of experimental psychology researchers.
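The logic of a composite limited adaptive sequential test can be sketched as follows. This is a simplified illustration, not the authors' simulation code: the p-value bounds .01 and .36 follow Frick's COAST proposal, the batch sizes and `n_max` are arbitrary, and a normal approximation stands in for the t distribution to keep the sketch short.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)

def clast(sample_fn, n0=10, step=5, n_max=80, p_low=0.01, p_high=0.36):
    """CLAST-style sequential test on paired differences (one-tailed).
    Stop and reject H0 when p < p_low, stop and retain when p > p_high,
    otherwise keep sampling up to n_max (the 'limited' bound), where a
    conventional alpha = .05 decision is forced."""
    d = sample_fn(n0)
    while True:
        n = len(d)
        z = d.mean() / (d.std(ddof=1) / np.sqrt(n))
        p = 1 - NormalDist().cdf(z)          # one-tailed p (normal approx.)
        if p < p_low:
            return "reject", n
        if p > p_high:
            return "retain", n
        if n >= n_max:
            return ("reject", n) if p < 0.05 else ("retain", n)
        d = np.concatenate([d, sample_fn(step)])

# Matched-pair differences with a large true effect: stops early.
decision, n_used = clast(lambda n: rng.normal(1.5, 1.0, n))
```

The efficiency gain the abstract reports comes from exactly this mechanism: clear-cut data stop the experiment after a small initial batch, while ambiguous data are bounded by `n_max` instead of growing indefinitely as under COAST.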
Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate
Brunelli, Davide; Caione, Carlo
2015-01-01
Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of compressive sensing (CS) when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring. PMID:26184203
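The core recovery problem behind such CS pipelines, reconstructing a sparse signal from far fewer measurements than its length, can be sketched with a generic noiseless Orthogonal Matching Pursuit. This is a toy on synthetic data, not the paper's evaluated algorithms or sensor dataset; the dimensions and Gaussian sensing matrix are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse signal
    x from compressive measurements y = A @ x."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit on the selected columns, update residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

n, m, k = 64, 24, 3                        # length, measurements, sparsity
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # dense sensing matrix
x_hat = omp(A, A @ x_true, k)
```

The energy trade-off the abstract optimizes lives in `m`: fewer measurements mean less sampling and radio energy on the node, at the cost of harder (or eventually impossible) reconstruction at the sink.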
Optimal block sampling of routine, non-tumorous gallbladders.
Wong, Newton Acs
2017-03-08
Gallbladders are common specimens in routine histopathological practice and there is, at least in the United Kingdom and Australia, national guidance on how to sample gallbladders without macroscopically-evident, focal lesions/tumours (hereafter referred to as non-tumorous gallbladders).(1) Nonetheless, this author has seen considerable variation in the numbers of blocks used and the parts of the gallbladder sampled, even within one histopathology department. The recently re-issued 'Tissue pathways for gastrointestinal and pancreatobiliary pathology' from the Royal College of Pathologists (RCPath) first recommends sampling of the cystic duct margin and "at least one section each of neck, body and any focal lesion".(1) This recommendation is referenced by a textbook chapter which itself proposes that "cross-sections of the gallbladder fundus and lateral wall should be submitted, along with the sections from the neck of the gallbladder and cystic duct, including its margin".(2)
Optimized design and analysis of sparse-sampling FMRI experiments.
Perrachione, Tyler K; Ghosh, Satrajit S
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D
2013-10-01
Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations.
Sample of CFD optimization of a centrifugal compressor stage
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
An industrial centrifugal compressor stage is a complicated object for gas-dynamic design when the goal is to achieve maximum efficiency. The authors analyzed results of CFD performance modeling (NUMECA Fine Turbo calculations). Performance prediction as a whole was modest or poor in all known cases; maximum efficiency prediction, by contrast, was quite satisfactory. Flow structure in stator elements was in good agreement with known data. The intermediate-type stage "3D impeller + vaneless diffuser + return channel" was designed with principles well proven for stages with 2D impellers. CFD calculations of vaneless diffuser candidates demonstrated flow separation in a VLD with constant width. The candidate with a symmetrically tapered inlet part, b3/b2 = 0.73, appeared to be the best. Flow separation takes place in the crossover with the standard configuration; an alternative variant was developed and numerically tested. The experience obtained was formulated as corrected design recommendations. Several impeller candidates were compared by maximum stage efficiency; the variant following standard gas-dynamic principles of blade cascade design appeared to be the best. Quasi-3D inviscid calculations were applied to optimize blade velocity diagrams: non-incidence inlet, control of the diffusion factor and of the average blade load. The "geometric" principle of blade formation, with a linear change of blade angles along the blade length, appeared to be less effective. Candidates with different geometry parameters were designed with the 6th version of the mathematical model and compared. The candidate with optimal parameters (number of blades, inlet diameter and leading-edge meridian position) is 1% more effective than the stage of the initial design.
Optimization conditions of samples saponification for tocopherol analysis.
Souza, Aloisio Henrique Pereira; Gohara, Aline Kirie; Rodrigues, Ângela Claudia; Ströher, Gisely Luzia; Silva, Danielle Cristina; Visentainer, Jesuí Vergílio; Souza, Nilson Evelázio; Matsushita, Makoto
2014-09-01
A 2² full factorial design (two factors at two levels) with duplicates was performed to investigate the influence of the factors agitation time (2 and 4 h) and the percentage of KOH (60% and 80% w/v) in the saponification of samples for the determination of α-, β- and γ+δ-tocopherols. The study used samples of peanuts (cultivar armadillo), produced and marketed in Maringá, PR. The factors % KOH and agitation time were significant, and an increase in their values contributed negatively to the responses. The interaction effect was not significant for the response δ-tocopherol, and the contribution of this effect to the other responses was positive, but less than 10%. The ANOVA and response surface analysis showed that the most efficient saponification procedure was obtained using a 60% (w/v) solution of KOH and an agitation time of 2 h.
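The effect estimates behind such a design can be computed directly from the coded runs. The response values below are hypothetical, included only to show the arithmetic of a duplicated 2² factorial:

```python
# Hypothetical duplicate responses for a 2^2 design; coded levels -1/+1 for
# (agitation time, % KOH).  Values are invented for illustration only.
runs = {
    (-1, -1): [12.1, 12.3],   # 2 h, 60% KOH
    (+1, -1): [10.8, 11.0],   # 4 h, 60% KOH
    (-1, +1): [10.5, 10.7],   # 2 h, 80% KOH
    (+1, +1): [8.9, 9.1],     # 4 h, 80% KOH
}

def effect(contrast):
    """Mean response at the +1 runs of a coded contrast minus the mean at -1."""
    hi = [r for lv, rs in runs.items() for r in rs if contrast(lv) > 0]
    lo = [r for lv, rs in runs.items() for r in rs if contrast(lv) < 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

time_effect = effect(lambda lv: lv[0])
koh_effect = effect(lambda lv: lv[1])
interaction = effect(lambda lv: lv[0] * lv[1])
```

Both main effects come out negative (increasing either factor lowers the response), with a small interaction, mirroring the pattern the abstract reports.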
Optimizing analog-to-digital converters for sampling extracellular potentials.
Artan, N Sertac; Xu, Xiaoxiang; Shi, Wei; Chao, H Jonathan
2012-01-01
In neural implants, an analog-to-digital converter (ADC) provides the delicate interface between the analog signals generated by neurological processes and the digital signal processor that is tasked with interpreting these signals, for instance for epileptic seizure detection or limb control. In this paper, we propose a low-power ADC architecture for neural implants that process extracellular potentials. The proposed architecture uses the spike detector that is readily available on most of these implants in a closed loop with an ADC. The spike detector determines whether the current input signal is part of a spike or part of noise, adaptively determining the instantaneous sampling rate of the ADC. The proposed architecture can reduce the power consumption of a traditional ADC by 62% when sampling extracellular potentials without any significant impact on spike detection accuracy.
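The closed-loop idea can be caricatured in a few lines: a stand-in amplitude-threshold spike detector gates the instantaneous sampling rate. The threshold, decimation factor and trace are invented for illustration and are not the paper's architecture:

```python
def adaptive_sample(signal, threshold, low_every=8):
    """Keep every sample while a stand-in amplitude-threshold spike detector
    fires; otherwise keep only every `low_every`-th sample.  A toy version of
    adapting the instantaneous sampling rate to spike activity."""
    kept = []
    for i, v in enumerate(signal):
        if abs(v) >= threshold or i % low_every == 0:
            kept.append((i, v))
    return kept

# Mostly-noise trace with one synthetic "spike" at indices 38-42.
trace = [5.0 if 38 <= i <= 42 else 0.1 for i in range(100)]
kept = adaptive_sample(trace, threshold=1.0)
saving = 1 - len(kept) / len(trace)   # fraction of conversions avoided
```

The spike is captured at full rate while the noise floor is decimated, which is where the power saving comes from.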
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning
2008-01-01
ranging from the income level to age and her preference order over a set of products (e.g. movies in Netflix). The ranking task is to learn a mapping...learners in RankBoost. However, in both cases, the proposed strategy selects the samples which are estimated to produce a faster convergence from the...steps in Section 5. 2. Related Work A number of strategies have been proposed for active learning in the classification framework. Some of those center
Müller, Hans-Helge; Pahl, Roman; Schäfer, Helmut
2007-12-01
We propose optimized two-stage designs for genome-wide case-control association studies, using a hypothesis testing paradigm. To save genotyping costs, the complete marker set is genotyped in a sub-sample only (stage I). In stage II, the most promising markers are then genotyped in the remaining sub-sample. In recent publications, two-stage designs were proposed which minimize the overall genotyping costs. To achieve full design optimization, we additionally include sampling costs in both the cost function and the design optimization. The resulting optimal designs differ markedly from those optimized for genotyping costs only (partially optimized designs), and achieve considerable further cost reductions. Compared with partially optimized designs, fully optimized two-stage designs have a higher first-stage sample proportion. Furthermore, the increment of the sample size over the one-stage design, which is necessary in two-stage designs to compensate for the loss of power due to partial genotyping, is less pronounced for fully optimized two-stage designs. In addition, we address the scenario where the investigator is interested in gaining as much information as possible but is restricted by a budget. To that end, we develop two-stage designs that maximize the power under a given cost constraint.
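A toy cost function illustrates why two-stage genotyping saves money; all unit costs, sample sizes and the 10% sample-size inflation below are invented for illustration, not the paper's optimized values:

```python
def design_cost(n, m, c_geno, c_sample, pi1=1.0, alpha1=1.0):
    """Total cost of a (two-stage) genome-wide design: every subject incurs a
    sampling (recruitment/phenotyping) cost; stage I genotypes all m markers in
    a pi1 fraction of subjects, stage II genotypes the alpha1 fraction of
    markers carried forward in the remaining subjects.  Toy unit costs."""
    n1 = round(pi1 * n)
    return n * c_sample + (n1 * m + (n - n1) * alpha1 * m) * c_geno

m, c_geno, c_sample = 300_000, 0.01, 30.0
one_stage = design_cost(1000, m, c_geno, c_sample)
# Two-stage: full scan in 40% of a sample inflated by 10% (to offset the power
# loss from partial genotyping), then the top 1% of markers in the rest.
two_stage = design_cost(1100, m, c_geno, c_sample, pi1=0.4, alpha1=0.01)
```

Even with the inflated sample, the two-stage design is far cheaper because stage II genotypes only a small fraction of markers; including the per-subject sampling cost (as the paper does) is what shifts the optimum toward larger stage-I proportions.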
Optimized Sampling Strategies For Non-Proliferation Monitoring: Report
Kurzeja, R.; Buckley, R.; Werth, D.; Chiswell, S.
2015-10-20
Concentration data collected from the 2013 H-Canyon effluent reprocessing experiment were reanalyzed to improve the source term estimate. When errors in the model-predicted wind speed and direction were removed, the source term uncertainty was reduced to 30% of the mean. This explained the factor of 30 difference between the source term size derived from data at 5 km and 10 km downwind in terms of the time history of dissolution. The results show a path forward to develop a sampling strategy for quantitative source term calculation.
Optimizing fish sampling for fish - mercury bioaccumulation factors
Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.
2015-01-01
Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
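Length normalization of Hgfish can be sketched as fitting Hg against length and evaluating at a standard length. The fish data, the linear model, and the water concentration below are hypothetical; the study's actual normalization protocol may differ:

```python
def standardized_hg(lengths, hg, std_length):
    """Least-squares line through (length, Hg), evaluated at a standard length:
    one simple way to length-standardize Hg_fish before forming a BAF."""
    n = len(lengths)
    mx = sum(lengths) / n
    my = sum(hg) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(lengths, hg))
    sxx = sum((x - mx) ** 2 for x in lengths)
    slope = sxy / sxx
    return my + slope * (std_length - mx)

# Hypothetical bass: Hg (ppm) rising with length (mm), standardized to 350 mm.
lengths = [250, 300, 350, 400, 450]
hg = [0.20, 0.28, 0.35, 0.44, 0.50]
hg_std = standardized_hg(lengths, hg, std_length=350)
baf = hg_std / 2e-6   # BAF = Hg_fish / Hg_water, with Hg_water also in ppm
```

Evaluating all sites at a common standard length removes the length-driven spread in Hgfish, which is why the abstract reports up to 50% less BAF variability after normalization.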
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of metamodels and minimum points of density function. Afterwards, the more accurate metamodels would be constructed by the procedure above. The validity and effectiveness of proposed sampling method are examined by studying typical numerical examples. PMID:25133206
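A single sequential-sampling step with a Gaussian RBF metamodel might look as follows. The test function, shape parameter, and the "sample at the current metamodel minimum" criterion are simplified stand-ins for the paper's extrema and density-function criteria:

```python
import numpy as np

def rbf_fit(xs, ys, eps=2.0):
    """Gaussian RBF interpolant through (xs, ys): a minimal metamodel."""
    K = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)
    w = np.linalg.solve(K, ys)
    return lambda t: np.exp(-(eps * (t[:, None] - xs[None, :])) ** 2) @ w

f = lambda t: np.sin(3 * t) + 0.5 * t        # stand-in for an expensive simulation
x = np.linspace(0, 2, 5)                     # initial space-filling samples
model = rbf_fit(x, f(x))

# One sequential step: add a sample where the current metamodel is minimal,
# skipping locations already sampled, then rebuild the metamodel.
grid = np.linspace(0, 2, 401)
cand = grid[np.all(np.abs(grid[:, None] - x[None, :]) > 1e-9, axis=1)]
x_new = cand[np.argmin(model(cand))]
x2 = np.append(x, x_new)
model2 = rbf_fit(x2, f(x2))
```

Repeating this step (and alternating with density-based points, as the paper proposes) concentrates samples where the metamodel predicts interesting behavior.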
Westfall, Jacob; Kenny, David A; Judd, Charles M
2014-10-01
Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
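The power ceiling can be illustrated with a toy variance partition in which the stimulus variance term shrinks only with the number of stimuli. The variance components and the z-test approximation are invented for illustration, not the paper's model:

```python
from math import sqrt, erf

def crossed_power(d, n_part, n_stim, var_part=0.1, var_stim=0.1, var_resid=1.0):
    """Approximate two-sided z-test power for a standardized condition effect d
    in a fully crossed participants-by-stimuli design.  Toy variance partition:
    the stimulus term shrinks only with n_stim, so power plateaus as
    participants alone increase."""
    se = sqrt(4 * var_stim / n_stim
              + 4 * var_part / n_part
              + 4 * var_resid / (n_part * n_stim))
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return phi(d / se - 1.96) + phi(-d / se - 1.96)

moderate = crossed_power(0.5, n_part=100, n_stim=16)
ceiling = crossed_power(0.5, n_part=10**9, n_stim=16)  # more participants barely help
```

With 16 stimuli, even a billion participants cannot reach the power that a modest participant sample achieves with 64 stimuli, which is the paper's central caution about stimulus sampling.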
Optimal Sampling of a Reaction Coordinate in Molecular Dynamics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2005-01-01
Estimating how free energy changes with the state of a system is a central goal in applications of statistical mechanics to problems of chemical or biological interest. From these free energy changes it is possible, for example, to establish which states of the system are stable, what their probabilities are and how the equilibria between these states are influenced by external conditions. Free energies are also of great utility in determining kinetics of transitions between different states. A variety of methods have been developed to compute free energies of condensed phase systems. Here, I will focus on one class of methods - those that allow for calculating free energy changes along one or several generalized coordinates in the system, often called reaction coordinates or order parameters. Considering that in almost all cases of practical interest a significant computational effort is required to determine free energy changes along such coordinates, it is hardly surprising that the efficiencies of different methods are of great concern. In most cases, the main difficulty is associated with the shape of the free energy along the reaction coordinate. If the free energy changes markedly along this coordinate, Boltzmann sampling of its different values becomes highly non-uniform. This, in turn, may have a considerable, detrimental effect on the performance of many methods for calculating free energies.
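The non-uniformity of Boltzmann sampling is easy to quantify: for a hypothetical 5 kcal/mol free energy variation along the coordinate at room temperature, the high-free-energy region is visited thousands of times less often than the minimum:

```python
from math import exp

RT = 0.596       # RT in kcal/mol at ~300 K
barrier = 5.0    # hypothetical free energy difference along the coordinate, kcal/mol
# Boltzmann sampling visits the high-free-energy value less often than the
# minimum by a factor exp(barrier / RT) -- here several thousand-fold -- which
# is why unbiased sampling along such a coordinate converges poorly.
ratio = exp(barrier / RT)
```

This ratio is the quantitative content of "highly non-uniform": biased-sampling methods exist precisely to flatten it.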
Serrano, Joan; Casanova-Martí, Àngela; Blay, Mayte; Terra, Ximena; Ardévol, Anna; Pinent, Montserrat
2016-01-01
Food intake depends on homeostatic and non-homeostatic factors. In order to use grape seed proanthocyanidins (GSPE) as food intake limiting agents, it is important to define the key characteristics of their bioactivity within this complex function. We treated rats with acute and chronic treatments of GSPE at different doses to identify the importance of eating patterns and GSPE dose and the mechanistic aspects of GSPE. GSPE-induced food intake inhibition must be reproduced under non-stressful conditions and with a stable and synchronized feeding pattern. A minimum dose of around 350 mg GSPE/kg body weight (BW) is needed. GSPE components act by activating the Glucagon-like peptide-1 (GLP-1) receptor because their effect is blocked by Exendin 9-39. GSPE in turn acts on the hypothalamic center of food intake control probably because of increased GLP-1 production in the intestine. To conclude, GSPE inhibits food intake through GLP-1 signaling, but it needs to be dosed under optimal conditions to exert this effect. PMID:27775601
Lonsinger, Robert C; Gese, Eric M; Dempsey, Steven J; Kluever, Bryan M; Johnson, Timothy R; Waits, Lisette P
2015-07-01
Noninvasive genetic sampling, or noninvasive DNA sampling (NDS), can be an effective monitoring approach for elusive, wide-ranging species at low densities. However, few studies have attempted to maximize sampling efficiency. We present a model for combining sample accumulation and DNA degradation to identify the most efficient (i.e. minimal cost per successful sample) NDS temporal design for capture-recapture analyses. We use scat accumulation and faecal DNA degradation rates for two sympatric carnivores, kit fox (Vulpes macrotis) and coyote (Canis latrans) across two seasons (summer and winter) in Utah, USA, to demonstrate implementation of this approach. We estimated scat accumulation rates by clearing and surveying transects for scats. We evaluated mitochondrial (mtDNA) and nuclear (nDNA) DNA amplification success for faecal DNA samples under natural field conditions for 20 fresh scats/species/season from <1 to 112 days. Mean accumulation rates were nearly three times greater for coyotes (0.076 scats/km/day) than foxes (0.029 scats/km/day) across seasons. Across species and seasons, mtDNA amplification success was ≥95% through day 21. Fox nDNA amplification success was ≥70% through day 21 across seasons. Coyote nDNA success was ≥70% through day 21 in winter, but declined to <50% by day 7 in summer. We identified a common temporal sampling frame of approximately 14 days that allowed species to be monitored simultaneously, further reducing time, survey effort and costs. Our results suggest that when conducting repeated surveys for capture-recapture analyses, overall cost-efficiency for NDS may be improved with a temporal design that balances field and laboratory costs along with deposition and degradation rates.
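The accumulation/degradation trade-off can be sketched as a cost-per-successful-sample curve over the collection interval. The coyote accumulation rate is taken from the abstract; the exponential decay rate and all costs are invented for illustration:

```python
from math import exp

def cost_per_success(interval_days, accum=0.076, km=10, cost_survey=300.0,
                     cost_lab=25.0, success0=0.95, decay=0.04):
    """Expected cost per successfully amplified scat for a clear-and-collect
    interval.  Accumulation (scats/km/day) matches the abstract's coyote
    estimate; the exponential decay rate and the survey/lab costs are toy
    values for illustrating the field-vs-lab trade-off."""
    n_scats = accum * km * interval_days
    # Mean amplification success over scat ages 0..interval under e^(-decay*t).
    mean_success = success0 * (1 - exp(-decay * interval_days)) / (decay * interval_days)
    return (cost_survey + n_scats * cost_lab) / (n_scats * mean_success)

best = min(range(1, 61), key=cost_per_success)
```

Very short intervals waste fixed survey costs on few scats; very long intervals pay lab costs on degraded samples, so an interior optimum emerges, exactly the balance the paper formalizes.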
NASA Astrophysics Data System (ADS)
Back, Pär-Erik
2007-04-01
A model is presented for estimating the value of information of sampling programs for contaminated soil. The purpose is to calculate the optimal number of samples when the objective is to estimate the mean concentration. A Bayesian risk-cost-benefit decision analysis framework is applied and the approach is design-based. The model explicitly includes sample uncertainty at a complexity level that can be applied to practical contaminated land problems with limited amount of data. Prior information about the contamination level is modelled by probability density functions. The value of information is expressed in monetary terms. The most cost-effective sampling program is the one with the highest expected net value. The model was applied to a contaminated scrap yard in Göteborg, Sweden, contaminated by metals. The optimal number of samples was determined to be in the range of 16-18 for a remediation unit of 100 m2. Sensitivity analysis indicates that the perspective of the decision-maker is important, and that the cost of failure and the future land use are the most important factors to consider. The model can also be applied for other sampling problems, for example, sampling and testing of wastes to meet landfill waste acceptance procedures.
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.
Sampling of soil moisture fields and related errors: implications to the optimal sampling design
NASA Astrophysics Data System (ADS)
Yoo, Chulsang
Adequate knowledge of soil moisture storage as well as evaporation and transpiration at the land surface is essential to the understanding and prediction of the reciprocal influences between land surface processes and weather and climate. Traditional techniques for soil moisture measurement are ground-based, but space-based sampling is becoming available due to recent improvement of remote sensing techniques. A fundamental question regarding soil moisture observation is how to estimate the sampling error for a given sampling scheme [G.R. North, S. Nakamoto, J Atmos. Ocean Tech. 6 (1989) 985-992; G. Kim, J.B. Valdes, G.R. North, C. Yoo, J. Hydrol., submitted]. In this study we provide the formalism for estimating the sampling errors for the cases of ground-based sensors and space-based sensors used both separately and together. For the study a model for soil moisture dynamics by D. Entekhabi, I. Rodriguez-Iturbe [Adv. Water Res. 17 (1994) 35-45] is introduced and an example application is given for the Little Washita basin using the Washita '92 soil moisture data. As a result of the study we found that a ground-based sensor network is ineffective for large- or continental-scale observation and should be limited to small-scale intensive observations, such as for a preliminary study.
NASA Astrophysics Data System (ADS)
Thomas, I. A.; Jordan, P.; Shine, O.; Fenton, O.; Mellander, P.-E.; Dunlop, P.; Murphy, P. N. C.
2017-02-01
Defining critical source areas (CSAs) of diffuse pollution in agricultural catchments depends upon the accurate delineation of hydrologically sensitive areas (HSAs) at highest risk of generating surface runoff pathways. In topographically complex landscapes, this delineation is constrained by digital elevation model (DEM) resolution and the influence of microtopographic features. To address this, optimal DEM resolutions and point densities for spatially modelling HSAs were investigated, for onward use in delineating CSAs. The surface runoff framework was modelled using the Topographic Wetness Index (TWI) and maps were derived from 0.25 m LiDAR DEMs (40 bare-earth points m-2), resampled 1 m and 2 m LiDAR DEMs, and a radar generated 5 m DEM. Furthermore, the resampled 1 m and 2 m LiDAR DEMs were regenerated with reduced bare-earth point densities (5, 2, 1, 0.5, 0.25 and 0.125 points m-2) to analyse effects on elevation accuracy and important microtopographic features. Results were compared to surface runoff field observations in two 10 km2 agricultural catchments for evaluation. Analysis showed that the accuracy of modelled HSAs using different thresholds (5%, 10% and 15% of the catchment area with the highest TWI values) was much higher using LiDAR data compared to the 5 m DEM (70-100% and 10-84%, respectively). This was attributed to the DEM capturing microtopographic features such as hedgerow banks, roads, tramlines and open agricultural drains, which acted as topographic barriers or channels that diverted runoff away from the hillslope scale flow direction. Furthermore, the identification of 'breakthrough' and 'delivery' points along runoff pathways where runoff and mobilised pollutants could be potentially transported between fields or delivered to the drainage channel network was much higher using LiDAR data compared to the 5 m DEM (75-100% and 0-100%, respectively). Optimal DEM resolutions of 1-2 m were identified for modelling HSAs, which balanced the need
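The TWI used above is ln(a / tan β); a minimal sketch with illustrative grid-cell values:

```python
from math import log, tan, radians

def twi(upslope_area_m2, contour_width_m, slope_deg):
    """Topographic Wetness Index, TWI = ln(a / tan(beta)), where a is the
    specific catchment area (upslope area per unit contour width) and beta is
    the local slope.  Cell values below are illustrative, not catchment data."""
    a = upslope_area_m2 / contour_width_m
    beta = max(radians(slope_deg), 1e-6)   # guard against flat cells
    return log(a / tan(beta))

wet = twi(5000, 1.0, 2.0)   # large contributing area on a gentle slope
dry = twi(50, 1.0, 20.0)    # small contributing area on a steep slope
```

Cells with large contributing areas and gentle slopes score high TWI and fall into the HSA thresholds; fine DEMs matter because microtopography (banks, tramlines, drains) reroutes the contributing area term.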
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms.
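The over-sampling of discordant results has the flavor of classical Neyman allocation, sketched below with hypothetical first-phase counts and standard deviations (the paper derives variances specific to differences in sensitivity, specificity and predictive values):

```python
def neyman_allocation(strata, n_total):
    """Neyman allocation: second-phase sample sizes proportional to stratum
    size times within-stratum standard deviation.  A generic illustration of
    why rare-but-informative discordant strata get over-sampled."""
    weights = {h: N * s for h, (N, s) in strata.items()}
    total = sum(weights.values())
    return {h: round(n_total * w / total) for h, w in weights.items()}

# Hypothetical first-phase counts N and assumed outcome SDs s: the two
# classifiers disagree in the small but highly variable discordant strata.
strata = {
    "both_positive": (200, 0.10),
    "both_negative": (9000, 0.05),
    "discordant_12": (400, 0.50),
    "discordant_21": (400, 0.50),
}
phase2 = neyman_allocation(strata, n_total=600)
```

The discordant strata receive a far higher sampling fraction than the concordant-negative stratum, mirroring the paper's finding that over-sampling discordant results drives the variance reduction.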
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
Ogungbenro, Kayode; Aarons, Leon
2009-01-01
This paper describes an effective approach for optimizing sampling windows for population pharmacokinetic experiments. Sampling windows have been proposed for population pharmacokinetic experiments conducted in late-phase drug development programs, where patients are enrolled in many centers and out-patient clinic settings. Collection of samples under this uncontrolled environment at fixed times may be problematic and can result in uninformative data. A sampling windows approach is more practicable, as it provides the opportunity to control when samples are collected by allowing some flexibility and yet provides satisfactory parameter estimation. This approach uses D-optimality to specify time intervals around fixed D-optimal time points that result in a specified level of efficiency. The sampling windows have different lengths and achieve two objectives: the joint sampling windows design attains a high specified efficiency level and also reflects the sensitivities of the plasma concentration-time profile to the parameters. It is shown that optimal sampling windows obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in terms of when samples are collected.
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and the wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias toward the high-signal-energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, and more generally yields inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is robustly determined not by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry.
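The baseline prescription, weighting a random k-space pattern by the energy of the binarized geometry, can be sketched as follows. The uniform mixing floor and the strip phantom are illustrative assumptions, not the optimized distribution the paper derives:

```python
import numpy as np

def sampling_density(binary_geometry, frac, floor=0.5, seed=1):
    """k-space sampling density for a known binarized image geometry: weight by
    the geometry's k-space energy, mixed with a uniform floor so every k-space
    location stays reachable.  The mixing weight is a toy assumption."""
    k = np.fft.fftshift(np.fft.fft2(binary_geometry))
    energy = np.abs(k) ** 2
    p = (1 - floor) * energy / energy.sum() + floor / energy.size
    n_keep = int(frac * p.size)
    rng = np.random.default_rng(seed)
    idx = rng.choice(p.size, size=n_keep, replace=False, p=p.ravel())
    mask = np.zeros(p.size, dtype=bool)
    mask[idx] = True
    return p, mask.reshape(p.shape)

# A 1D-channel-like phantom: a horizontal strip across a 32 x 32 field of view.
geom = np.zeros((32, 32))
geom[14:18, :] = 1.0
p, mask = sampling_density(geom, frac=0.25)
```

For a strip phantom the energy collapses onto a few k-space lines, which is exactly the limiting case where pure energy weighting is far from optimal and the paper's marginal-change criterion improves on it.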
Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S
2014-06-01
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the plan quality.
NASA Astrophysics Data System (ADS)
Subramanian, Nithya
Optimization under uncertainty accounts for design variables and external parameters or factors with probabilistic distributions instead of fixed deterministic values; it enables problem formulations that might maximize or minimize an expected value while satisfying constraints using probabilities. For discrete optimization under uncertainty, a Monte Carlo Sampling (MCS) approach enables high-accuracy estimation of expectations but it also results in high computational expense. The Genetic Algorithm (GA) with a Population-Based Sampling (PBS) technique enables optimization under uncertainty with discrete variables at a lower computational expense than using Monte Carlo sampling for every fitness evaluation. Population-Based Sampling uses fewer samples in the exploratory phase of the GA and a larger number of samples when `good designs' start emerging over the generations. This sampling technique therefore reduces the computational effort spent on `poor designs' found in the initial phase of the algorithm. Parallel computation evaluates the expected value of the objective and constraints in parallel to facilitate reduced wall-clock time. A customized stopping criterion is also developed for the GA with Population-Based Sampling. The stopping criterion requires that the design with the minimum expected fitness value have at least 99% constraint satisfaction and have accumulated at least 10,000 samples. The average change in expected fitness values in the last ten consecutive generations is also monitored. The optimization of composite laminates using ply orientation angle as a discrete variable provides an example to demonstrate further developments of the GA with Population-Based Sampling for discrete optimization under uncertainty. The focus problem aims to reduce the expected weight of the composite laminate while treating the laminate's fiber volume fraction and externally applied loads as uncertain quantities following normal distributions. Construction of
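The sampling schedule described above (few Monte Carlo samples for the exploratory early generations, more as good designs emerge) can be sketched as follows. The function names, the linear ramp, and the sample counts are illustrative assumptions, not the dissertation's actual implementation:

```python
import statistics

def pbs_sample_size(generation, n_min=10, n_max=200, ramp_gens=20):
    """Population-based sampling schedule: few Monte Carlo samples in the
    exploratory early generations, ramping up as good designs emerge."""
    frac = min(generation / ramp_gens, 1.0)
    return int(n_min + frac * (n_max - n_min))

def expected_fitness(design, n_samples, noisy_eval):
    """Monte Carlo estimate of a design's expected fitness under uncertainty."""
    return statistics.mean(noisy_eval(design) for _ in range(n_samples))
```

Each fitness evaluation in generation g would then use `pbs_sample_size(g)` samples instead of a fixed (large) Monte Carlo budget, which is what saves effort on poor early designs.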
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
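The two recommended procedures, a simple random sample sized from an observed mean and SD, and stratified sampling with optimal (Neyman) allocation, can be sketched as follows. The confidence level, relative-error target, and function names are illustrative assumptions, not the paper's exact prescriptions:

```python
import math

def srs_sample_size(mean, sd, rel_error=0.15, z=1.96):
    """Simple random sample size needed to estimate the field mean within a
    given relative error, from an observed mean and SD of a single field."""
    return math.ceil((z * sd / (rel_error * mean)) ** 2)

def neyman_allocation(total_n, strata):
    """Stratified sampling with optimal (Neyman) allocation:
    n_h proportional to stratum size N_h times stratum SD S_h."""
    weights = [N * S for N, S in strata]
    total = sum(weights)
    return [round(total_n * w / total) for w in weights]
```

Neyman allocation concentrates samples in large, variable strata, which is why stratification benefits most from smaller, more homogeneous resolution cells.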
Kirkpatrick, John P.; Wang, Zhiheng; Sampson, John H.; McSherry, Frances; Herndon, James E.; Allen, Karen J.; Duffy, Eileen; Hoang, Jenny K.; Chang, Zheng; Yoo, David S.; Kelsey, Chris R.; Yin, Fang-Fang
2015-01-01
Purpose: To identify an optimal margin about the gross target volume (GTV) for stereotactic radiosurgery (SRS) of brain metastases, minimizing toxicity and local recurrence. Methods and Materials: Adult patients with 1 to 3 brain metastases less than 4 cm in greatest dimension, no previous brain radiation therapy, and Karnofsky performance status (KPS) above 70 were eligible for this institutional review board–approved trial. Individual lesions were randomized to 1- or 3-mm uniform expansion of the GTV defined on contrast-enhanced magnetic resonance imaging (MRI). The resulting planning target volume (PTV) was treated to 24, 18, or 15 Gy marginal dose for maximum PTV diameters less than 2, 2 to 2.9, and 3 to 3.9 cm, respectively, using a linear accelerator–based image-guided system. The primary endpoint was local recurrence (LR). Secondary endpoints included neurocognition Mini-Mental State Examination, Trail Making Test Parts A and B, quality of life (Functional Assessment of Cancer Therapy-Brain), radionecrosis (RN), need for salvage radiation therapy, distant failure (DF) in the brain, and overall survival (OS). Results: Between February 2010 and November 2012, 49 patients with 80 brain metastases were treated. The median age was 61 years, the median KPS was 90, and the predominant histologies were non–small cell lung cancer (25 patients) and melanoma (8). Fifty-five, 19, and 6 lesions were treated to 24, 18, and 15 Gy, respectively. The PTV/GTV ratio, volume receiving 12 Gy or more, and minimum dose to PTV were significantly higher in the 3-mm group (all P<.01), and GTV was similar (P=.76). At a median follow-up time of 32.2 months, 11 patients were alive, with median OS 10.6 months. LR was observed in only 3 lesions (2 in the 1-mm group, P=.51), with 6.7% LR 12 months after SRS. Biopsy-proven RN alone was observed in 6 lesions (5 in the 3-mm group, P=.10). The 12-month DF rate was 45.7%. Three months after SRS, no significant change in
A normative inference approach for optimal sample sizes in decisions from experience.
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
"Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.
Sample Optimization for Five Plant-Parasitic Nematodes in an Alfalfa Field
Goodell, P. B.; Ferris, H.
1981-01-01
A data base representing nematode counts and soil weight from 1,936 individual soil cores taken from a 7-ha alfalfa field was used to investigate sample optimization for five plant-parasitic nematodes: Meloidogyne arenaria, Pratylenchus minyus, Merlinius brevidens, Helicotylenchus digonicus, and Paratrichodorus minor. Sample plans were evaluated by the accuracy and reliability of their estimation of the population and by the cost of collecting, processing, and counting the samples. Interactive FORTRAN programs were constructed to simulate four collecting patterns: random; division of the field into square sub-units (cells); and division of the field into rectangular sub-units (strips) running in two directions. Depending on the pattern, sample numbers varied from 1 to 25 with each sample representing from 1 to 50 cores. Each pattern, sample, and core combination was replicated 50 times. Strip stratification north/south was the optimal sampling pattern in this field because it isolated a streak of fine-textured soil. The mathematical optimum was not found because of data range limitations. When practical economic time constraints (5 hr to collect, process, and count nematode samples) are placed on the optimization process, all species estimates deviate no more than 25% from the true mean. If accuracy constraints are placed on the process (no more than 15% deviation from the true field mean), all species except Merlinius required less than 5 hr to complete the sample process. PMID:19300768
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
Sample size calculation for testing differences between cure rates with the optimal log-rank test.
Wu, Jianrong
2017-01-01
In this article, sample size calculations are developed for use when the main interest is in the differences between the cure rates of two groups. Following the work of Ewell and Ibrahim, the asymptotic distribution of the weighted log-rank test is derived under the local alternative. The optimal log-rank test under the proportional distributions alternative is discussed, and sample size formulas for the optimal and standard log-rank tests are derived. Simulation results show that the proposed formulas provide adequate sample size estimation for trial designs and that the optimal log-rank test is more efficient than the standard log-rank test, particularly when both cure rates and percentages of censoring are small.
da Silva, Agnes Soares; Cardoso, Maria Regina; Meliefste, Kees; Brunekreef, Bert
2006-01-01
Background Air pollution in São Paulo is constantly being measured by the State of Sao Paulo Environmental Agency, however there is no information on the variation between places with different traffic densities. This study was intended to identify a gradient of exposure to traffic-related air pollution within different areas in São Paulo to provide information for future epidemiological studies. Methods We measured NO2 using Palmes' diffusion tubes in 36 sites on streets chosen to be representative of different road types and traffic densities in São Paulo in two one-week periods (July and August 2000). In each study period, two tubes were installed in each site, and two additional tubes were installed in 10 control sites. Results Average NO2 concentrations were related to traffic density, observed on the spot, to number of vehicles counted, and to traffic density strata defined by the city Traffic Engineering Company (CET). Average NO2 concentrations were 63μg/m3 and 49μg/m3 in the first and second periods, respectively. Dividing the sites by the observed traffic density, we found: heavy traffic (n = 17): 64μg/m3 (95% CI: 59μg/m3 – 68μg/m3); local traffic (n = 16): 48μg/m3 (95% CI: 44μg/m3 – 52μg/m3) (p < 0.001). Conclusion The differences in NO2 levels between heavy and local traffic sites are large enough to suggest the use of a more refined classification of exposure in epidemiological studies in the city. Number of vehicles counted, traffic density observed on the spot and traffic density strata defined by the CET might be used as a proxy for traffic exposure in São Paulo when more accurate measurements are not available. PMID:16772044
Mottaz-Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott W.; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.
2008-01-01
Mass spectrometry-based proteomics is a powerful analytical tool for investigating pathogens and their interactions within a host. The sensitivity of such analyses provides broad proteome characterization, but the sample-handling procedures must first be optimized to ensure compatibility with the technique and to maximize the dynamic range of detection. The decision-making process for determining optimal growth conditions, preparation methods, sample analysis methods, and data analysis techniques in our laboratory is discussed herein with consideration of the balance in sensitivity, specificity, and biomass losses during analysis of host-pathogen systems. PMID:19183792
XAFSmass: a program for calculating the optimal mass of XAFS samples
NASA Astrophysics Data System (ADS)
Klementiev, K.; Chernikov, R.
2016-05-01
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows-based program XAFSmass: 1) it is truly platform independent, as provided by the Python language; 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample, and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
NASA Astrophysics Data System (ADS)
Kiesewetter, Simon; Drummond, Peter D.
2017-03-01
A variance reduction method for stochastic integration of Fokker-Planck equations is derived. This unifies the cumulant hierarchy and stochastic equation approaches to obtaining moments, giving a performance superior to either. We show that the brute force method of reducing sampling error by just using more trajectories in a sampled stochastic equation is not the best approach. The alternative of using a hierarchy of moment equations is also not optimal, as it may converge to erroneous answers. Instead, through Bayesian conditioning of the stochastic noise on the requirement that moment equations are satisfied, we obtain improved results with reduced sampling errors for a given number of stochastic trajectories. The method used here converges faster in time-step than Ito-Euler algorithms. This parallel optimized sampling (POS) algorithm is illustrated by several examples, including a bistable nonlinear oscillator case where moment hierarchies fail to converge.
Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater
Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal
2016-01-01
Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016
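Spatial simulated annealing of the kind used here iteratively perturbs one sampling location and accepts worse designs with a temperature-dependent probability. The sketch below substitutes mean distance to the nearest sample as a stand-in objective for the mean kriging variance; the cooling schedule, objective, and function names are simplifying assumptions, not the paper's implementation:

```python
import math
import random

def mean_nearest_distance(design, grid):
    """Proxy objective for mean kriging variance: average distance from each
    grid node to its nearest sampling location (smaller is better)."""
    return sum(min(math.dist(g, s) for s in design) for g in grid) / len(grid)

def spatial_annealing(grid, n_points, iters=300, t0=1.0, seed=0):
    """Spatial simulated annealing: perturb one location at a time and accept
    worse designs with a probability that shrinks as the temperature cools."""
    rng = random.Random(seed)
    design = rng.sample(grid, n_points)
    cost = mean_nearest_distance(design, grid)
    for i in range(iters):
        temp = t0 * (1 - i / iters) + 1e-9  # linear cooling, never exactly zero
        cand = design[:]
        cand[rng.randrange(n_points)] = rng.choice(grid)  # move one point
        c = mean_nearest_distance(cand, grid)
        if c < cost or rng.random() < math.exp(-(c - cost) / temp):
            design, cost = cand, c
    return design, cost
```

A production version would reject candidate moves that duplicate an existing location and would plug in the actual kriging-variance objective.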
Optimal sample size allocation for Welch's test in one-way heteroscedastic ANOVA.
Shieh, Gwowen; Jan, Show-Li
2015-06-01
The determination of an adequate sample size is a vital aspect in the planning stage of research studies. A prudent strategy should incorporate all of the critical factors and cost considerations into sample size calculations. This study concerns the allocation schemes of group sizes for Welch's test in a one-way heteroscedastic ANOVA. Optimal allocation approaches are presented for minimizing the total cost while maintaining adequate power and for maximizing power performance for a fixed cost. The commonly recommended ratio of sample sizes is proportional to the ratio of the population standard deviations or the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Detailed numerical investigations have shown that these usual allocation methods generally do not give the optimal solution. The suggested procedures are illustrated using an example of the cost-efficiency evaluation in multidisciplinary pain centers.
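The "commonly recommended" allocation rule that the study examines (and shows to be generally suboptimal) can be written down directly; the function names below are illustrative, and this is the conventional rule being critiqued, not the paper's improved procedure:

```python
import math

def welch_allocation_ratio(sd1, sd2, cost1=1.0, cost2=1.0):
    """Conventional two-group ratio n1/n2: proportional to the ratio of the
    population SDs divided by the square root of the unit-cost ratio."""
    return (sd1 / sd2) * math.sqrt(cost2 / cost1)

def allocate(total_n, sds, costs):
    """Spread a fixed total sample size over k groups, n_i ∝ sd_i / sqrt(cost_i)."""
    weights = [s / math.sqrt(c) for s, c in zip(sds, costs)]
    total = sum(weights)
    return [round(total_n * w / total) for w in weights]
```

With equal unit costs the rule reduces to sampling in proportion to the group SDs, which is the special case most textbooks quote.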
van der Valk, J; Brunner, D; De Smet, K; Fex Svenningsen, A; Honegger, P; Knudsen, L E; Lindl, T; Noraberg, J; Price, A; Scarino, M L; Gstraunthaler, G
2010-06-01
Quality assurance is becoming increasingly important. Good laboratory practice (GLP) and good manufacturing practice (GMP) are now established standards. The biomedical field aims at an increasing reliance on the use of in vitro methods. Cell and tissue culture methods are generally fast, cheap, reproducible and reduce the use of experimental animals. Good cell culture practice (GCCP) is an attempt to develop a common standard for in vitro methods. The implementation of the use of chemically defined media is part of the GCCP. This will decrease the dependence on animal serum, a supplement with an undefined and variable composition. Defined media supplements are commercially available for some cell types. However, information on the formulation by the companies is often limited and such supplements can therefore not be regarded as completely defined. The development of defined media is difficult and often takes place in isolation. A workshop was organised in 2009 in Copenhagen to discuss strategies to improve the development and use of serum-free defined media. In this report, the results from the meeting are discussed and the formulation of a basic serum-free medium is suggested. Furthermore, recommendations are provided to improve information exchange on newly developed serum-free media.
Alizadeh, Taher; Ganjali, Mohammad Reza; Nourozi, Parviz; Zare, Mashaalah
2009-04-13
In this work a parathion-selective molecularly imprinted polymer (MIP) was synthesized and applied as a highly selective adsorbent material for parathion extraction and determination in aqueous samples. The method was based on the sorption of parathion in the MIP according to a simple batch procedure, followed by desorption with methanol and measurement with square wave voltammetry. Plackett-Burman and Box-Behnken designs were used to optimize the solid-phase extraction, in order to enhance the recovery percentage and improve the pre-concentration factor. Using the screening design, the effect of six factors on the extraction recovery was investigated: pH, stirring rate (rpm), sample volume (V(1)), eluent volume (V(2)), organic solvent content of the sample (org%) and extraction time (t). The response surface design was carried out considering the three factors (V(2)), (V(1)) and (org%), which were found to be the main effects. The mathematical model for the recovery percentage was obtained as a function of these main effects. Finally, the main effects were adjusted according to the defined desirability function. It was found that recovery percentages of more than 95% could easily be obtained with the optimized method. Using the experimental conditions obtained in the optimization step, the method allowed selective determination of parathion in the linear dynamic range of 0.20-467.4 microg L(-1), with a detection limit of 49.0 ng L(-1) and R.S.D. of 5.7% (n=5). The parathion content of water samples was successfully analyzed, demonstrating the potential of the developed procedure.
Molina, Mariana; Steinbach, Simone; Park, Young Mok; Yun, Su Yeong; Di Lorenzo Alho, Ana Tereza; Heinsen, Helmut; Grinberg, Lea T; Marcus, Katrin; Leite, Renata E Paraizo; May, Caroline
2015-07-01
Brain function in normal aging and neurological diseases has long been a subject of interest. With current technology, it is possible to go beyond descriptive analyses to characterize brain cell populations at the molecular level. However, the brain comprises over 100 billion highly specialized cells, and it is a challenge to discriminate different cell groups for analyses. Isolating intact neurons is not feasible with traditional methods, such as tissue homogenization techniques. The advent of laser microdissection techniques promises to overcome previous limitations in the isolation of specific cells. Here, we provide a detailed protocol for isolating and analyzing neurons from postmortem human brain tissue samples. We describe a workflow for successfully freezing, sectioning and staining tissue for laser microdissection. This protocol was validated by mass spectrometric analysis. Isolated neurons can also be employed for western blotting or PCR. This protocol will enable further examinations of brain cell-specific molecular pathways and aid in elucidating distinct brain functions.
Shen, Xiong; Zong, Chao; Zhang, Guoqiang
2012-01-01
Finding the optimal sampling positions for measurement of ventilation rates in a naturally ventilated building using tracer gas is a challenge. Affected by the wind and the opening status, the representative positions inside the building may change dynamically at any time. An optimization procedure using the Response Surface Methodology (RSM) was conducted. In this method, the concentration field inside the building was estimated by a third-order RSM polynomial model. The experimental sampling positions used to develop the model were chosen from the cross-section area of a pitched-roof building. The Optimal Design method, which can decrease the bias of the model, was adopted to select these sampling positions. Experiments with a scale model building were conducted in a wind tunnel to obtain observed values at those positions. Finally, models for different cases of opening states and wind conditions were established, and the optimum sampling position was obtained with a desirability level of up to 92% inside the model building. The optimization was further confirmed by another round of experiments.
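A third-order response-surface model of the kind described can be fitted by ordinary least squares over the sampled positions. The basis ordering, coordinate names, and function names below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def cubic_design_matrix(y, z):
    """Full third-order polynomial basis over two cross-section coordinates."""
    y, z = np.asarray(y, float), np.asarray(z, float)
    return np.column_stack([np.ones_like(y), y, z, y * z, y**2, z**2,
                            y**2 * z, y * z**2, y**3, z**3])

def fit_response_surface(y, z, conc):
    """Ordinary least-squares fit of the concentration field at sampled positions."""
    coef, *_ = np.linalg.lstsq(cubic_design_matrix(y, z),
                               np.asarray(conc, float), rcond=None)
    return coef

def predict(coef, y, z):
    """Evaluate the fitted response surface at new positions."""
    return cubic_design_matrix(np.atleast_1d(y), np.atleast_1d(z)) @ coef
```

Once fitted, the surface can be searched (e.g. on a fine grid) for the position whose prediction tracks the true ventilation rate best across opening and wind cases.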
A Novel Method of Failure Sample Selection for Electrical Systems Using Ant Colony Optimization
Tian, Shulin; Yang, Chenglin; Liu, Cheng
2016-01-01
The influence of failure propagation is ignored in failure sample selection based on the traditional testability demonstration experiment method. Traditional failure sample selection generally omits some failures during the selection, and this omission can pose serious usage risks because these failures can lead to severe propagation failures. This paper proposes a new failure sample selection method to solve the problem. First, the method uses a directed graph and ant colony optimization (ACO) to obtain a subsequent failure propagation set (SFPS) based on a failure propagation model, and then we propose a new failure sample selection method on the basis of the number of SFPS. Compared with the traditional sampling plan, this method is able to improve the coverage of testing failure samples, increase diagnostic capacity, and decrease the risk of use. PMID:27738424
Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples
NASA Astrophysics Data System (ADS)
Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav
2014-05-01
Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in water with the ultra-low-background liquid scintillation spectrometer Quantulus 1220, we optimized the sample/scintillant ratio, chose an appropriate scintillation cocktail by comparing efficiency, background and minimal detectable activity (MDA), and examined the effect of chemi- and photoluminescence and the scintillant/vial combination. The ASTM D4107-08 (2006) method had been successfully applied in our laboratory for two years. During our last sample preparation, we noticed a serious quench effect in the count rates of samples that could be a consequence of possible DMSO contamination. The goal of this paper is to describe the development in our laboratory of the new direct method proposed by Pujol and Sanchez-Cabeza (1999), which turned out to be faster and simpler than the ASTM method while the problem of neutralizing DMSO in the apparatus is addressed. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. To test the optimization of the system for this method, the tritium level was determined in Danube river samples and in several samples within an intercomparison with the Ruđer Bošković Institute (IRB).
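MDA comparisons like the one reported typically rest on a Currie-type detection limit. The sketch below uses the standard Currie expression with illustrative parameter names and unit conventions (background in counts per minute, counting time in minutes, volume in litres); it is not the paper's exact calibration:

```python
import math

def currie_mda(background_cpm, count_time_min, efficiency, volume_l):
    """Currie-type minimum detectable activity (Bq/L) for an LS counter:
    detection limit L_D = 2.71 + 4.65 * sqrt(background counts), converted
    to activity via counting efficiency, live time in seconds, and volume."""
    b_counts = background_cpm * count_time_min
    l_d = 2.71 + 4.65 * math.sqrt(b_counts)
    return l_d / (efficiency * count_time_min * 60.0 * volume_l)
```

The formula makes the optimization trade-offs explicit: MDA falls with longer counting time, higher efficiency (cocktail choice), larger sample volume, and lower background (vial choice).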
Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case
Schmerling, Edward; Janson, Lucas; Pavone, Marco
2015-01-01
Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
Kramer, C S M; Roelen, D L; Heidt, S; Claas, F H J
2017-04-05
Transplantation of a human leukocyte antigen (HLA) mismatched graft can lead to the development of donor-specific antibodies (DSA), which can result in antibody mediated rejection and graft loss as well as complicate repeat transplantation. These DSA are induced by foreign epitopes present on the mismatched HLA antigens of the donor. However, not all epitopes appear to be equally effective in their ability to induce DSA. Understanding the characteristics of HLA epitopes is crucial for optimal epitope matching in clinical transplantation. In this review, the latest insights on HLA epitopes are described with a special focus on the definition of immunogenicity and antigenicity of HLA epitopes. Furthermore, the use of this knowledge to prevent HLA antibody formation and to select the optimal donor for sensitised transplant candidates will be discussed.
Toropov, A A; Benfenati, E
2008-05-01
Additive SMILES-based optimal descriptors have been used to model bee toxicity. The influence of the relative prevalence of SMILES attributes in the training and test sets on the models of bee toxicity was analysed. Avoiding the use of rare attributes improves the statistical characteristics of the model on the external test set. The possibility of using the probability of the presence of SMILES attributes in the training and test sets for a rational definition of the applicability domain is discussed.
A method to optimize sampling locations for measuring indoor air distributions
NASA Astrophysics Data System (ADS)
Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan
2015-02-01
Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected and interpolated to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, the labor costs, and the accuracy of the interpolated fields. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air-parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained were then interpolated to form field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method yields a 32.6% smaller interpolation error than the grid sampling method. We derived a function relating the interpolation error to the sampling size (the number of sampling points). According to this function, the sampling size has an optimal value, and the maximum sampling size can be determined from the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.
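The grid-versus-gradient comparison can be illustrated with a toy 1-D analogue. This is a hedged sketch, not the study's method: linear interpolation stands in for ordinary Kriging, the "field" and the sampling budget are invented, and the gradient-based rule is a simple density-proportional-to-|f′| heuristic blended with a uniform density.

```python
import numpy as np

f = lambda x: np.tanh(20.0 * (x - 0.5))       # illustrative field with one steep front
xs = np.linspace(0.0, 1.0, 2001)              # dense reference grid
n = 10                                        # sampling budget (number of point sensors)

x_grid = np.linspace(0.0, 1.0, n)             # (a) uniform grid design

# (b) gradient-based design: blend a uniform density with one proportional to |f'|
# so flat regions keep anchor points while samples concentrate at the steep front.
gm = np.abs(np.gradient(f(xs), xs))
w = 0.5 / gm.size + 0.5 * gm / gm.sum()
cdf = np.cumsum(w)
cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])     # pin endpoints so x=0 and x=1 are sampled
x_grad = np.interp(np.linspace(0.0, 1.0, n), cdf, xs)

def rmse(xp):
    """Interpolation error on the dense grid (linear interp stands in for Kriging)."""
    return float(np.sqrt(np.mean((np.interp(xs, xp, f(xp)) - f(xs)) ** 2)))

err_grid, err_grad = rmse(x_grid), rmse(x_grad)
```

With the same budget, the gradient-aware design reduces interpolation error because samples cluster where the field changes fastest, mirroring the qualitative finding above.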
Sampling scheme optimization for diffuse optical tomography based on data and image space rankings
NASA Astrophysics Data System (ADS)
Sabir, Sohail; Kim, Changhwan; Cho, Sanghoon; Heo, Duchang; Kim, Kee Hyun; Ye, Jong Chul; Cho, Seungryong
2016-10-01
We present a methodology for the optimization of sampling schemes in diffuse optical tomography (DOT). The proposed method exploits singular value decomposition (SVD) of the sensitivity matrix, or weight matrix, in DOT. Two mathematical metrics are introduced to assess and determine the optimum source-detector measurement configuration in terms of data correlation and image-space resolution. The key idea of the work is to weight each data measurement (row of the sensitivity matrix) and, similarly, each unknown image basis (column of the sensitivity matrix) according to its contribution to the rank of the sensitivity matrix. The proposed metrics offer a perspective on the data sampling and provide an efficient way of optimizing the sampling schemes in DOT. We evaluated various acquisition geometries often used in DOT by use of the proposed metrics. By iteratively selecting an optimal sparse set of data measurements, we showed that one can design a DOT scanning protocol that provides essentially the same image quality with much sparser sampling.
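One plausible reading of the row/column weighting idea can be sketched with a random stand-in for the sensitivity matrix (the matrix, its size, and the scoring rule below are illustrative assumptions, not the paper's data or exact metrics):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((12, 8))   # hypothetical sensitivity matrix: 12 measurements x 8 image bases

U, s, Vt = np.linalg.svd(J, full_matrices=False)
w = s / s.sum()                    # relative contribution of each singular component

# Data-space score: how much each measurement (row) feeds the dominant components.
row_score = (U ** 2) @ w
# Image-space score: how well each image basis (column) is resolved.
col_score = (Vt.T ** 2) @ w

# Greedy sparse selection: keep the highest-scoring measurements.
keep = np.argsort(row_score)[::-1][:6]
```

Because the columns of U and rows of Vt are orthonormal, each score vector sums to one, so the scores can be read directly as fractional contributions when ranking candidate source-detector pairs.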
Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik
2016-04-01
The aim of this paper is to contribute to a more rapid determination of a series of samples containing (90)Sr by making the Cherenkov measurement of the daughter nuclide (90)Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, the need to maximize the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of (90)Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less than when using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm.
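The scheduling idea can be sketched as follows. This is a hedged illustration, not the authors' exact model: while sample n is being counted, later samples keep accumulating (90)Y, so each needs a shorter count to reach the same MDA. The detector efficiency and the Currie-style detection-limit expression are assumed values.

```python
import math

T_HALF_Y90_H = 64.0                      # 90Y half-life, hours
LAM = math.log(2) / T_HALF_Y90_H

def ingrowth_fraction(t_h):
    """Fraction of the equilibrium 90Y activity grown in t_h hours after separation."""
    return 1.0 - math.exp(-LAM * t_h)

def count_time_min(f_ingrowth, eff=0.5, bkg_cpm=0.8, mda_bq=1.0):
    """Smallest whole-minute count for which the detectable activity <= MDA.
    Currie-style detection limit: L_D ~ 2.71 + 4.65*sqrt(background counts).
    eff (Cherenkov counting efficiency) is an assumed value."""
    for t in range(1, 10_000):
        ld_counts = 2.71 + 4.65 * math.sqrt(bkg_cpm * t)
        if ld_counts / (eff * f_ingrowth * 60.0 * t) <= mda_bq:
            return t
    raise RuntimeError("MDA not reachable")

# Sequential schedule: sample n starts counting only when sample n-1 finishes,
# so its ingrowth time includes all earlier counting times.
clock_h, times = 2.0, []                 # first sample measured 2 h after separation
for _ in range(10):
    t_min = count_time_min(ingrowth_fraction(clock_h))
    times.append(t_min)
    clock_h += t_min / 60.0
total_h = clock_h
```

The individual counting times shrink down the queue, so the iterated schedule beats the naive scheme that applies the first sample's counting time to all ten samples.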
Zhao, Man; Chen, Xiaojing; Zhang, Hongtao; Yan, Husheng; Zhang, Huiqi
2014-05-12
A facile and highly efficient new approach (namely RAFT coupling chemistry) to obtain well-defined hydrophilic molecularly imprinted polymer (MIP) microspheres with excellent specific recognition ability toward small organic analytes in real, undiluted biological samples is described. It involves first synthesizing "living" MIP microspheres with surface-bound vinyl and dithioester groups via RAFT precipitation polymerization (RAFTPP), and subsequently grafting hydrophilic polymer brushes onto them by the simple coupling reaction of hydrophilic macro-RAFT agents (i.e., hydrophilic polymers with a dithioester end group) with the vinyl groups on the "living" MIP particles in the presence of a free-radical initiator. The successful grafting of hydrophilic polymer brushes onto the obtained MIP particles was confirmed by SEM, FT-IR, static contact angle and water-dispersion studies, elemental analyses, and template-binding experiments. Well-defined MIP particles with densely grafted hydrophilic polymer brushes (∼1.8 chains/nm²) of the desired chemical structures and molecular weights were readily obtained, which showed significantly improved surface hydrophilicity and could thus function properly in real biological media. The origin of the high grafting densities of the polymer brushes was clarified and the general applicability of the strategy was demonstrated. In particular, the well-defined characteristics of the resulting hydrophilic MIP particles allowed the first systematic study of the effects of various structural parameters of the grafted hydrophilic polymer brushes on their water compatibility, which is of great importance for the rational design of more advanced biological sample-compatible MIPs.
Sample volume optimization for radon-in-water detection by liquid scintillation counting.
Schubert, Michael; Kopitz, Juergen; Chałupnik, Stanisław
2014-08-01
Radon is used as an environmental tracer in a wide range of applications, particularly in aquatic environments. If liquid scintillation counting (LSC) is used as the detection method, the radon has to be transferred from the water sample into a scintillation cocktail. Whereas the volume of the cocktail is generally given by the size of standard LSC vials (20 ml), the water sample volume is not specified. The aim of the study was to optimize the water sample volume, i.e., to minimize it without risking a significant decrease in LSC count rate and hence in counting statistics. An equation is introduced which allows calculating the ²²²Rn concentration that was initially present in a water sample as a function of the volumes of the water sample, sample flask headspace and scintillation cocktail, the applicable radon partition coefficient, and the detected count rate. It was shown that water sample volumes exceeding about 900 ml do not result in a significant increase in count rate and hence counting statistics. On the other hand, sample volumes considerably smaller than about 500 ml lead to noticeably lower count rates (and poorer counting statistics). Thus, water sample volumes of about 500-900 ml should be chosen for LSC radon-in-water detection if 20 ml vials are used.
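The kind of closed-form relation described above can be reconstructed in hedged form: radon equilibrates between cocktail, water and flask headspace according to partition coefficients, and the initial water concentration is recovered from the net count rate. The symbols, coefficient values and detection efficiency below are illustrative, not the paper's.

```python
def c0_from_count_rate(R_cps, eff, V_c, V_w, V_a, K_wc, K_ac):
    """Initial 222Rn concentration in the water sample (Bq/mL).
    K_wc = C_water/C_cocktail and K_ac = C_air/C_cocktail at equilibrium (assumed)."""
    A_c = R_cps / eff                               # activity in the cocktail (Bq)
    C_c = A_c / V_c                                 # concentration in the cocktail (Bq/mL)
    total = C_c * (V_c + K_wc * V_w + K_ac * V_a)   # total radon in the flask (Bq)
    return total / V_w

def count_rate(C0, eff, V_c, V_w, V_a, K_wc, K_ac):
    """Forward model: count rate produced by initial water concentration C0."""
    frac_in_cocktail = V_c / (V_c + K_wc * V_w + K_ac * V_a)
    return eff * C0 * V_w * frac_in_cocktail

# Round trip with invented values: the inversion recovers the assumed concentration.
C0_true = 0.0123   # Bq/mL, arbitrary
pars = dict(eff=0.85, V_c=20.0, V_w=700.0, V_a=300.0, K_wc=0.02, K_ac=0.1)
R = count_rate(C0_true, **pars)
C0_recovered = c0_from_count_rate(R, **pars)
```

The forward and inverse forms are exact algebraic mirrors, which is what makes a single measured count rate sufficient once the volumes and partition coefficients are fixed.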
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
BILISOLY, ROGER L.; MCKENNA, SEAN A.
2003-01-01
Previous work on sample design has focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented: the first minimizes the sum of the eigenvalues of the prediction error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no
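The design-comparison idea admits a compact sketch. Under a Gaussian-process prior, the prediction-error covariance on the reference grid given samples S is Σ_gg − Σ_gs Σ_ss⁻¹ Σ_sg, and designs are scored by the sum (A-criterion) or product (D-criterion) of its eigenvalues, both computable before any samples are taken. The 1-D transect, exponential covariance model, and numbers below are illustrative, not the report's site data; only the sum variant is computed.

```python
import numpy as np

def cov(a, b, corr_len=0.3):
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-d / corr_len)          # assumed exponential covariance along a transect

grid = np.linspace(0.0, 1.0, 50)          # fixed reference grid

def error_eigs(samples):
    """Eigenvalues of the prediction-error covariance given sample locations."""
    K_ss = cov(samples, samples) + 1e-9 * np.eye(samples.size)   # jitter for stability
    K_gs = cov(grid, samples)
    post = cov(grid, grid) - K_gs @ np.linalg.solve(K_ss, K_gs.T)
    return np.linalg.eigvalsh(post)

design_a = np.array([0.5])                         # a single sample point
design_b = np.array([0.2, 0.5, 0.8])               # a denser transect
mpev_a = error_eigs(design_a).sum() / grid.size    # mean prediction error variance
mpev_b = error_eigs(design_b).sum() / grid.size
```

Because both scores are available before sampling, the drop in MPEV from one candidate design to the next can serve as the stopping rule the abstract describes.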
Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame
NASA Technical Reports Server (NTRS)
Le Bail, Karine; Gordon, David
2010-01-01
Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they are a good compromise among different criteria, such as statistical stability and sky distribution, as well as a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 are mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC, and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ in up to 20% of the sources. Improvements in observing, recording, and networks are some of the causes, showing better stability for the CRF over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily valid for some sources.
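The stability metric used above can be illustrated on synthetic data: the Allan variance of a coordinate that wanders linearly keeps growing at long averaging times, while pure measurement noise averages down. The series below are invented stand-ins with no relation to actual ICRF2 coordinates.

```python
import random

def allan_variance(y, m):
    """Non-overlapping Allan variance for an averaging window of m points."""
    n = len(y) // m
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n - 1)]
    return 0.5 * sum(diffs) / len(diffs)

rng = random.Random(42)
n = 1024
stable = [rng.gauss(0.0, 1.0) for _ in range(n)]                # white noise only
drift = [0.02 * t + rng.gauss(0.0, 1.0) for t in range(n)]      # noise + linear motion

taus = (1, 4, 16, 64)
avar_stable = [allan_variance(stable, m) for m in taus]
avar_drift = [allan_variance(drift, m) for m in taus]
```

This is why the study pairs the Allan variance with apparent linear motions: a source can look noisy but stationary, or quiet but drifting, and the two signatures separate cleanly at long averaging times.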
An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples
Riediger, Irina N.; Hoffmaster, Alex R.; Biondo, Alexander W.; Ko, Albert I.; Stoddard, Robyn A.
2016-01-01
Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed a strong linearity in a range of concentrations from 10⁶ to 10⁰ leptospires/mL and lower limits of detection ranging from <1 cell/mL for river water to 36 cells/mL for ultrapure water with E. coli as a carrier. In conclusion, we optimized a method to quantify pathogenic Leptospira in environmental waters (river, pond and sewage) which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden. PMID:27487084
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
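A minimal sketch of the weighted binary matrix sampling (WBMS) idea can be run on synthetic data in which only the first three of ten variables are informative. This is a hedged simplification: model assessment here is plain least squares on a holdout split, whereas the actual VISSA evaluates cross-validated PLS models on NIR spectra.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 10))
y = X[:, 0] + 2.0 * X[:, 1] - X[:, 2] + 0.1 * rng.standard_normal(80)
Xtr, Xte, ytr, yte = X[:60], X[60:], y[:60], y[60:]

w = np.full(10, 0.5)                       # inclusion probability per variable
for _ in range(5):                         # a few shrinkage iterations
    models = rng.random((500, 10)) < w     # weighted binary sampling of sub-models
    models[~models.any(axis=1), 0] = True  # guard against empty sub-models
    rmse = np.empty(500)
    for i, m in enumerate(models):
        beta, *_ = np.linalg.lstsq(Xtr[:, m], ytr, rcond=None)
        rmse[i] = np.sqrt(np.mean((Xte[:, m] @ beta - yte) ** 2))
    best = models[np.argsort(rmse)[:50]]   # top 10% of sub-models
    w = best.mean(axis=0)                  # new weights = selection frequency

selected = np.flatnonzero(w > 0.5)
```

The two rules from the abstract appear directly: the effective variable space shrinks as weights polarize, and each new weight vector is derived only from the best-performing sub-models of the previous step.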
Sturkenboom, Marieke G. G.; Mulder, Leonie W.; de Jager, Arthur; van Altena, Richard; Aarnoutse, Rob E.; de Lange, Wiel C. M.; Proost, Johannes H.; Kosterink, Jos G. W.; van der Werf, Tjip S.
2015-01-01
Rifampin, together with isoniazid, has been the backbone of the current first-line treatment of tuberculosis (TB). The ratio of the area under the concentration-time curve from 0 to 24 h (AUC0–24) to the MIC is the best predictive pharmacokinetic-pharmacodynamic parameter for determinations of efficacy. The objective of this study was to develop an optimal sampling procedure based on population pharmacokinetics to predict AUC0–24 values. Patients received rifampin orally once daily as part of their anti-TB treatment. A one-compartmental pharmacokinetic population model with first-order absorption and lag time was developed using observed rifampin plasma concentrations from 55 patients. The population pharmacokinetic model was developed using an iterative two-stage Bayesian procedure and was cross-validated. Optimal sampling strategies were calculated using Monte Carlo simulation (n = 1,000). The geometric mean AUC0–24 value was 41.5 (range, 13.5 to 117) mg · h/liter. The median time to maximum concentration of drug in serum (Tmax) was 2.2 h, ranging from 0.4 to 5.7 h. This wide range indicates that obtaining a concentration level at 2 h (C2) would not capture the peak concentration in a large proportion of the population. Optimal sampling using concentrations at 1, 3, and 8 h postdosing was considered clinically suitable with an r2 value of 0.96, a root mean squared error value of 13.2%, and a prediction bias value of −0.4%. This study showed that the rifampin AUC0–24 in TB patients can be predicted with acceptable accuracy and precision using the developed population pharmacokinetic model with optimal sampling at time points 1, 3, and 8 h. PMID:26055359
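The limited-sampling idea can be sketched by simulating a population from a one-compartment oral-absorption model with lag time and regressing the "true" AUC0-24 on the concentrations at 1, 3 and 8 h. The parameter values and between-subject variability below are invented for illustration; they are not the study's population estimates, and ordinary linear regression stands in for the Bayesian procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
t_dense = np.linspace(0.0, 24.0, 2401)
dose = 600.0

def conc(t, CL, V, ka, tlag):
    """One-compartment oral model with first-order absorption and lag time."""
    ke = CL / V
    tt = np.clip(t - tlag, 0.0, None)
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * tt) - np.exp(-ka * tt))

n = 200
CL = 10.0 * np.exp(0.25 * rng.standard_normal(n))   # L/h, assumed variability
V = 50.0 * np.exp(0.20 * rng.standard_normal(n))    # L
ka = 1.5 * np.exp(0.30 * rng.standard_normal(n))    # 1/h
tlag = rng.uniform(0.2, 0.8, n)                     # h

def auc24(p):
    c = conc(t_dense, *p)
    return float(np.sum((c[:-1] + c[1:]) * np.diff(t_dense)) / 2.0)  # trapezoid rule

pop = list(zip(CL, V, ka, tlag))
auc = np.array([auc24(p) for p in pop])
C = np.array([conc(np.array([1.0, 3.0, 8.0]), *p) for p in pop])  # samples at 1, 3, 8 h

A = np.column_stack([np.ones(n), C])        # regression: AUC ~ C1 + C3 + C8
coef, *_ = np.linalg.lstsq(A, auc, rcond=None)
r2 = 1.0 - np.sum((auc - A @ coef) ** 2) / np.sum((auc - auc.mean()) ** 2)
```

Even this crude linear surrogate recovers most of the AUC variability from three well-placed time points, which is the intuition behind preferring a 1/3/8 h scheme over a single C2 sample when Tmax varies widely.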
Optimal sample sizes for Welch's test under various allocation and cost considerations.
Jan, Show-Li; Shieh, Gwowen
2011-12-01
The issue of the sample size necessary to ensure adequate statistical power has been the focus of considerable attention in scientific research. Conventional presentations of sample size determination do not consider budgetary and participant allocation scheme constraints, although there is some discussion in the literature. The introduction of additional allocation and cost concerns complicates study design, although the resulting procedure permits a practical treatment of sample size planning. This article presents exact techniques for optimizing sample size determinations in the context of Welch's (Biometrika, 29, 350-362, 1938) test of the difference between two means under various design and cost considerations. The allocation schemes include cases in which (1) the ratio of group sizes is given and (2) one sample size is specified. The cost implications suggest optimally assigning subjects (1) to attain maximum power for a fixed cost and (2) to meet a designated power level at the least cost. The proposed methods provide useful alternatives to the conventional procedures and can be readily implemented with the developed R and SAS programs that are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
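The "maximum power for fixed cost" variant has a well-known closed-form optimum, n1/n2 = (σ1/σ2)·√(c2/c1), which can be checked numerically. This sketch uses a normal approximation with assumed standard deviations and costs, whereas the paper's techniques are exact.

```python
from statistics import NormalDist

z = NormalDist()

def power(n1, n2, sigma1, sigma2, delta, alpha=0.05):
    """Normal-approximation power of the two-sample test for mean difference delta."""
    se = (sigma1 ** 2 / n1 + sigma2 ** 2 / n2) ** 0.5
    return 1.0 - z.cdf(z.inv_cdf(1.0 - alpha / 2.0) - delta / se)

sigma1, sigma2 = 4.0, 2.0        # group standard deviations (assumed known here)
c1, c2 = 2.0, 1.0                # per-subject costs
budget, delta = 400.0, 1.5       # total budget and detectable mean difference

ratio = (sigma1 / sigma2) * (c2 / c1) ** 0.5     # optimal n1/n2
n2 = budget / (c1 * ratio + c2)
n1 = ratio * n2
p_opt = power(n1, n2, sigma1, sigma2, delta)

# Compare with equal allocation under the same budget.
n_eq = budget / (c1 + c2)
p_eq = power(n_eq, n_eq, sigma1, sigma2, delta)
```

When variances or costs differ between groups, the optimal (unequal) allocation strictly dominates the equal split at the same total cost, which is the practical payoff the abstract describes.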
Spectral gap optimization of order parameters for sampling complex molecular systems
Tiwary, Pratyush; Berne, B. J.
2016-01-01
In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of the CVs and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2014-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
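The sampling-probability estimate lends itself to a Monte Carlo sketch: the needle lands at a Gaussian offset from its target, and a "hit" is any landing point inside the tumor. This idealizes the tumor as a sphere and splits the 3.5 mm RMS error evenly over three axes; irregular shapes and registration errors from the study are simplified away.

```python
import math
import random

rng = random.Random(0)
RMS_MM = 3.5
SIGMA = RMS_MM / math.sqrt(3.0)          # per-axis sigma so E[|e|^2] = RMS^2

def hit_probability(radius_mm, trials=200_000):
    """Monte Carlo estimate of P(single biopsy core lands inside a spherical tumor)."""
    hits = 0
    for _ in range(trials):
        e = (rng.gauss(0.0, SIGMA), rng.gauss(0.0, SIGMA), rng.gauss(0.0, SIGMA))
        if e[0] ** 2 + e[1] ** 2 + e[2] ** 2 <= radius_mm ** 2:
            hits += 1
    return hits / trials

p_small, p_large = hit_probability(2.0), hit_probability(8.0)
```

Small lesions fall well short of the 95% threshold with a single core at this error level, which is why the abstract concludes that most tumors need more than one targeted core.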
JR Bontha; GR Golcar; N Hannigan
2000-08-29
The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries, and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft-ID, 15-ft-tall dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loadings; minimize/optimize air usage by changing the sequencing of the Pulsed Jet mixers or by altering cycle times; and demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.
Beliaeff, B.; Claisse, D.; Smith, P.J.
1995-12-31
In the French Monitoring Network, trace element and organic contaminant concentrations in biota have been measured for 15 years on a quarterly basis at over 80 sites scattered along the French coastline. A reduction in the sampling effort may be needed as a result of budget restrictions. A constant budget, however, would allow the advancement of certain research and development projects, such as the feasibility of new chemical analyses. The basic problem confronting the optimization of the program's sampling design is finding the optimal numbers of sites in a given non-heterogeneous area and of sampling events within a year at each site. First, the authors determine a site-specific cost function integrating analysis, personnel, and computer costs. Then, within-year and between-site variance components are estimated from the results of a linear model which includes a seasonal component. These two steps provide a cost-precision optimum for each contaminant. An example is given using the data from the 4 sites of the Loire estuary. Over all sites, significant 'U'-shaped trends are estimated for Pb, PCBs, ΣDDT and α-HCH, while PAHs show a significant inverted 'U'-shaped curve. For most chemicals the within-year variance appears to be much higher than the between-site variance. This leads to the conclusion that, for this case, reducing the number of sites by a factor of two is preferable, both economically and in terms of monitoring efficiency, to reducing the sampling frequency by the same factor. Further implications for the French Monitoring Network are discussed.
Optimized sample preparation of endoscopic collected pancreatic fluid for SDS-PAGE analysis.
Paulo, Joao A; Lee, Linda S; Wu, Bechien; Repas, Kathryn; Banks, Peter A; Conwell, Darwin L; Steen, Hanno
2010-07-01
The standardization of methods for human body fluid protein isolation is a critical initial step for proteomic analyses aimed to discover clinically relevant biomarkers. Several caveats have hindered pancreatic fluid proteomics, including the heterogeneity of samples and protein degradation. We aim to optimize sample handling of pancreatic fluid that has been collected using a safe and effective endoscopic collection method (endoscopic pancreatic function test). Using SDS-PAGE protein profiling, we investigate (i) precipitation techniques to maximize protein extraction, (ii) auto-digestion of pancreatic fluid following prolonged exposure to a range of temperatures, (iii) effects of multiple freeze-thaw cycles on protein stability, and (iv) the utility of protease inhibitors. Our experiments revealed that TCA precipitation resulted in the most efficient extraction of protein from pancreatic fluid of the eight methods we investigated. In addition, our data reveal that although auto-digestion of proteins is prevalent at 23 and 37 degrees C, incubation on ice significantly slows such degradation. Similarly, when the sample is maintained on ice, proteolysis is minimal during multiple freeze-thaw cycles. We have also determined the addition of protease inhibitors to be assay-dependent. Our optimized sample preparation strategy can be applied to future proteomic analyses of pancreatic fluid.
Optimized Ar(+)-ion milling procedure for TEM cross-section sample preparation.
Dieterle, Levin; Butz, Benjamin; Müller, Erich
2011-11-01
High-quality samples are indispensable for every reliable transmission electron microscopy (TEM) investigation. In order to predict optimized parameters for the final Ar(+)-ion milling preparation step, topographical changes of symmetrical cross-section samples caused by the sputtering process were modeled by two-dimensional Monte Carlo simulations. Si was used as the model system because of the well-known sputtering yield of Ar(+) ions on Si and the ease of its mechanical preparation. The simulations are based on a modified parameterized description of the sputtering yield of Ar(+) ions on Si summarized from the literature. The formation of a wedge-shaped profile, as commonly observed during double-sector ion milling of cross-section samples, was reproduced by the simulations, independent of the sputtering angle. Moreover, the preparation of wide, plane-parallel sample areas by alternating single-sector ion milling is predicted by the simulations. These findings were validated by a systematic ion-milling study (single-sector vs. double-sector milling at various sputtering angles) using Si cross-section samples as well as two other materials-science examples. The presented systematic single-sector ion-milling procedure is applicable to most Ar(+)-ion mills that allow simultaneous milling from both sides of a TEM sample (top and bottom) in an azimuthally restricted sector perpendicular to the central epoxy line of the cross-sectional TEM sample. The procedure is based on alternating milling of the two halves of the TEM sample instead of double-sector milling of the whole sample. Furthermore, various other practical aspects are addressed, such as the dependence of the topographical quality of the final sample on parameters like epoxy thickness and incident angle.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood-forecasting uncertainty, with a case study of flood-forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII-based sampling approach over LHS: (1) The former is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII-based sampling, and their Pareto-optimal values are better than those of LHS, which means better forecasting accuracy for the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance; parameter uncertainties are also reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), the average relative band-width (RB) and the average deviation amplitude (D). The flood-forecasting uncertainty is also substantially reduced with ɛ-NSGAII-based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
NASA Astrophysics Data System (ADS)
Ren, Danping; Wu, Shanshan; Zhang, Lijing
2016-09-01
In view of the global control and flexible monitoring capabilities of software-defined networking (SDN), we propose a new optical access network architecture dedicated to wavelength-division-multiplexing passive optical network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system time delay and energy consumption.
Analysis of the optimal sampling rate for state estimation in sensor networks with delays.
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo
2017-03-27
When addressing the problem of state estimation in sensor networks, the effects of communications on estimator performance are often neglected. High accuracy requires a high sampling rate, but this leads to higher channel load and longer delays, which in turn worsen estimation performance. This paper studies the problem of determining the optimal sampling rate for state estimation in sensor networks from a theoretical perspective that takes into account traffic generation, a model of network behaviour and the effect of delays. Some theoretical results about Riccati and Lyapunov equations applied to sampled systems are derived, and a solution is obtained for the ideal case of perfect sensor information. This result is also of interest for non-ideal sensors, as in some cases it serves as an upper bound on the optimisation solution.
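A scalar illustration of the Riccati analysis mentioned above: a continuous process x' = a·x + w sampled with period T yields a discrete system with A = exp(aT) and integrated process noise Q(T), and the steady-state prior Kalman error variance then grows with the sampling period. All parameter values are assumed, and the channel-load/delay side of the trade-off is left out of this sketch.

```python
import math

def steady_state_prior_var(T, a=0.5, q=1.0, r=0.1):
    """Steady-state prior variance of a scalar Kalman filter on the sampled system."""
    A = math.exp(a * T)
    Q = q * (math.exp(2.0 * a * T) - 1.0) / (2.0 * a)   # process noise accumulated over T
    p = 1.0
    for _ in range(10_000):                             # fixed-point iteration of the Riccati map
        p_new = A * A * (p - p * p / (p + r)) + Q
        if abs(p_new - p) < 1e-12:
            break
        p = p_new
    return p

p_fast, p_slow = steady_state_prior_var(0.05), steady_state_prior_var(0.5)
```

In the full problem this monotone growth in T is pitted against the delay and load penalties of sampling faster, which is what creates an interior optimum for the sampling rate.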
NASA Astrophysics Data System (ADS)
Oroza, C.; Zheng, Z.; Zhang, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.
2015-12-01
Recent advancements in wireless sensing technologies are enabling real-time application of spatially representative point-scale measurements to model hydrologic processes at the basin scale. A major impediment to the large-scale deployment of these networks is the difficulty of finding representative sensor locations and resilient wireless network topologies in complex terrain. Currently, observatories are structured manually in the field, which provides no metric for the number of sensors required for extrapolation, does not guarantee that point measurements are representative of the basin as a whole, and often produces unreliable wireless networks. We present a methodology that combines LiDAR data, pattern recognition, and stochastic optimization to simultaneously identify representative sampling locations, optimal sensor number, and resilient network topologies prior to field deployment. We compare the results of the algorithm to an existing 55-node wireless snow and soil network at the Southern Sierra Critical Zone Observatory. Existing data show that the algorithm is able to capture a broader range of key attributes affecting snow and soil moisture, defined by a combination of terrain, vegetation and soil attributes, and thus is better suited to basin-wide monitoring. We believe that adopting this structured, analytical approach could improve data quality, increase reliability, and decrease the cost of deployment for future networks.
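A toy version of the placement step can be written with k-means: cluster the cells' attribute vectors and place one sensor at the cell nearest each cluster centroid, so the point sensors span the basin's attribute space. The random features below are stand-ins for the LiDAR-derived terrain, vegetation and soil attributes, and the network-topology optimization is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
features = rng.random((500, 3))   # e.g. normalized elevation, canopy cover, aspect per cell

def kmeans_sites(X, k=10, iters=50):
    """Lloyd's k-means, then snap each centroid to the nearest real grid cell."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.unique(d.argmin(axis=0))    # sensor sites = cells closest to centroids

sites = kmeans_sites(features, k=10)
```

Choosing k here doubles as the "how many sensors" question the abstract raises: the within-cluster spread at each k gives a quantitative stopping criterion instead of a manual field judgment.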
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
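The iterative selection of environmentally dissimilar sites can be sketched with a simple max-min greedy rule in standardized environmental space. This is a stand-in for the paper's MaxEnt-based procedure, not the actual model; the candidate-site matrix and the seed site below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical stand-in for the four environmental layers (temperature,
# precipitation, elevation, vegetation) over 500 candidate AOP sites
sites = rng.random((500, 4))

def select_dissimilar(X, k):
    """Greedy max-min selection: repeatedly add the candidate farthest (in
    standardized environmental space) from all sites selected so far."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    chosen = [0]                                  # arbitrary seed site
    dmin = np.linalg.norm(Z - Z[0], axis=1)       # distance to nearest pick
    while len(chosen) < k:
        nxt = int(np.argmax(dmin))                # most dissimilar remaining
        chosen.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(Z - Z[nxt], axis=1))
    return chosen

picks = select_dissimilar(sites, 8)
```

Each added site maximizes the distance to the nearest already-selected site, so successive picks fill the least-covered parts of the environmental envelope, analogous to the eight-site selection reported above.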
Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil
Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W
2016-01-01
Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.
Buehler, James W; Holtgrave, David R
2007-01-01
-based versus competitive allocation methods are needed to promote the optimal use of public health funds. In the meantime, those who use formula-based strategies to allocate funds should be familiar with the nuances of this approach. PMID:17394645
Stemkens, Bjorn; Tijssen, Rob H.N.; Senneville, Baudouin D. de
2015-03-01
Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling proved least sensitive to undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.
An S/H circuit with parasitics optimized for IF-sampling
NASA Astrophysics Data System (ADS)
Xuqiang, Zheng; Fule, Li; Zhijun, Wang; Weitao, Li; Wen, Jia; Zhihua, Wang; Shigang, Yue
2016-06-01
An IF-sampling S/H is presented, which adopts a flip-around structure, bottom-plate sampling technique and improved input bootstrapped switches. To achieve high sampling linearity over a wide input frequency range, the floating well technique is utilized to optimize the input switches. Besides, techniques of transistor load linearization and layout improvement are proposed to further reduce and linearize the parasitic capacitance. The S/H circuit has been fabricated in a 0.18-μm CMOS process as the front-end of a 14 bit, 250 MS/s pipeline ADC. For a 30 MHz input, the measured SFDR/SNDR of the ADC is 94.7 dB/68.5 dB, and remains over 84.3 dB/65.4 dB for input frequencies up to 400 MHz. The ADC presents excellent dynamic performance at high input frequency, which is mainly attributed to the parasitics-optimized S/H circuit. Project supported by the Shenzhen Project (No. JSGG20150512162029307).
Optimization of sampling pattern and the design of Fourier ptychographic illuminator.
Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan
2015-03-09
Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The approach reported in this paper significantly shortens acquisition time and improves the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
Easton, D.F.; Goldgar, D.E.
1994-09-01
As genes underlying susceptibility to human disease are identified through linkage analysis, it is becoming increasingly clear that genetic heterogeneity is the rule rather than the exception. The focus of the present work is to examine the power and optimal sampling design for localizing a second disease gene when one disease gene has previously been identified. In particular, we examined the case when the unknown locus had lower penetrance, but higher frequency, than the known locus. Three scenarios regarding knowledge about locus 1 were examined: no linkage information (i.e., standard heterogeneity analysis), tight linkage with a known highly polymorphic marker locus, and mutation testing. Exact expected LOD scores (ELODs) were calculated for a number of two-locus genetic models under the three scenarios of heterogeneity for nuclear families containing 2, 3 or 4 affected children, with 0 or 1 affected parents. A cost function based upon the cost of ascertaining and genotyping sufficient samples to achieve an ELOD of 3.0 was used to evaluate the designs. As expected, the power and the optimal pedigree sampling strategy were dependent on the underlying model and the heterogeneity testing status. When the known locus had higher penetrance than the unknown locus, three affected siblings with unaffected parents proved to be optimal for all levels of heterogeneity. In general, mutation testing at the first locus provided substantially more power for detecting the second locus than linkage evidence alone. However, when both loci had relatively low penetrance, mutation testing provided little improvement in power, since most families could be expected to be segregating the high-risk allele at both loci.
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance trade-off in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this trade-off between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to
Optimization of multi-channel neutron focusing guides for extreme sample environments
NASA Astrophysics Data System (ADS)
Di Julio, D. D.; Lelièvre-Berna, E.; Courtois, P.; Andersen, K. H.; Bentley, P. M.
2014-07-01
In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
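Of the population-based algorithms mentioned, particle-swarm optimization is the simplest to sketch. In the toy version below, the expensive Monte-Carlo ray-tracing figure of merit is replaced by a cheap analytic bowl; the swarm size, inertia and acceleration coefficients are assumed values, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def pso(f, dim, n=30, iters=200, lo=-1.0, hi=1.0):
    """Minimal particle-swarm optimizer (inertia + cognitive + social terms)."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])          # personal best values
    g = pbest[np.argmin(pval)].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

# cheap analytic stand-in for the negative ray-traced neutron gain
best_x, best_f = pso(lambda p: float(np.sum((p - 0.3) ** 2)), dim=4)
```

In the guide-design setting, `f` would wrap a Monte-Carlo ray-tracing run over the channel-geometry parameters; the swarm structure is unchanged.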
In-line e-beam inspection with optimized sampling and newly developed ADC
NASA Astrophysics Data System (ADS)
Ikota, Masami; Miura, Akihiro; Fukunishi, Munenori; Hiroi, Takashi; Sugimoto, Aritoshi
2003-07-01
An electron beam inspection is strongly required for HARI to detect contact and via defects that an optical inspection cannot detect. Conventionally, an e-beam inspection system is used as an analytical tool for checking the process margin. Due to its low throughput, it has not been used for in-line QC. Therefore, we optimized the inspection area and developed a new auto defect classification (ADC) to use with e-beam inspection as an in-line inspection tool. A 10% interval scan sampling proved able to estimate defect densities, and inspection could be completed within 1 hour. We specifically adapted the developed ADC for use with e-beam inspection because the voltage contrast images were not sufficiently clear for classification with a conventional ADC based on defect geometry. The new ADC used the off-pattern area of the defect to discriminate particles from other voltage contrast defects with an accuracy greater than 90%. Using sampling optimization and the new ADC, we achieved inspection and auto defect review with a throughput of less than one and a half hours. We implemented the system as a procedure for product defect QC and proved its effectiveness for in-line e-beam inspection.
Mimicry among Unequally Defended Prey Should Be Mutualistic When Predators Sample Optimally.
Aubier, Thomas G; Joron, Mathieu; Sherratt, Thomas N
2017-03-01
Understanding the conditions under which moderately defended prey evolve to resemble better-defended prey and whether this mimicry is parasitic (quasi-Batesian) or mutualistic (Müllerian) is central to our understanding of warning signals. Models of predator learning generally predict quasi-Batesian relationships. However, predators' attack decisions are based not only on learning alone but also on the potential future rewards. We identify the optimal sampling strategy of predators capable of classifying prey into different profitability categories and contrast the implications of these rules for mimicry evolution with a classical Pavlovian model based on conditioning. In both cases, the presence of moderately unprofitable mimics causes an increase in overall consumption. However, in the case of the optimal sampling strategy, this increase in consumption is typically outweighed by the increase in overall density of prey sharing the model appearance (a dilution effect), causing a decrease in mortality. It suggests that if predators forage efficiently to maximize their long-term payoff, genuine quasi-Batesian mimicry should be rare, which may explain the scarcity of evidence for it in nature. Nevertheless, we show that when moderately defended mimics are profitable to attack by hungry predators, then they can be parasitic on their models, just as classical Batesian mimics are.
Ma, Li; Wang, Lin; Tang, Jie; Yang, Zhaoguang
2016-08-01
Statistical experimental designs were employed to optimize the extraction conditions for arsenic species (As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA)) in paddy rice by a simple solvent extraction using water as the extraction reagent. The effects of the variables were estimated by a two-level Plackett-Burman factorial design. A five-level central composite design was subsequently employed to optimize the significant factors. The optimal settings of the significant factors were determined to be 60 min of shaking time and 85 °C extraction temperature, balancing experiment duration against extraction efficiency. The analytical performance, including linearity, method detection limits, relative standard deviation and recovery, was examined; the data exhibited a broad linear range, high sensitivity and good precision. The proposed method was applied to real rice samples. The species As(III), As(V) and DMA were detected in all the rice samples, mostly in the order As(III)>As(V)>DMA.
Optimization of a miniaturized DBD plasma chip for mercury detection in water samples.
Abdul-Majeed, Wameath S; Parada, Jaime H Lozano; Zimmerman, William B
2011-11-01
In this work, an optimization study was conducted to investigate the performance of a custom-designed miniaturized dielectric barrier discharge (DBD) microplasma chip to be utilized as a radiation source for mercury determination in water samples. The experimental work was implemented using experimental design, and the results were assessed by applying statistical techniques. The proposed DBD chip was designed and fabricated in a simple way from a few microscope glass slides aligned together and held by a Perspex chip holder, which proved useful for miniaturization purposes. Argon gas at 75-180 mL/min was used in the experiments as the discharge gas, while AC power in the range 75-175 W at 38 kHz was supplied to the load from a custom-made power source. A UV-visible spectrometer was used, and the spectroscopic parameters were optimized thoroughly and applied in the later analysis. Plasma characteristics were determined theoretically by analysing the recorded spectroscopic data. The estimated electron temperature (T(e) = 0.849 eV) was found to be higher than the excitation temperature (T(exc) = 0.55 eV) and the rotational temperature (T(rot) = 0.064 eV), indicating that non-thermal plasma is generated in the proposed chip. Mercury cold vapour generation experiments were conducted according to the experimental plan by examining four parameters (HCl and SnCl(2) concentrations, argon flow rate, and the applied power) and considering the recorded intensity of the mercury line (253.65 nm) as the objective function. Furthermore, an optimization technique and statistical approaches were applied to investigate the individual and interaction effects of the tested parameters on the system performance. The calculated analytical figures of merit (LOD = 2.8 μg/L and RSD = 3.5%) indicate that the system is sufficiently precise to serve as the basis for a miniaturized portable device for mercury detection in water samples.
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
NASA Astrophysics Data System (ADS)
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
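The preposterior idea, marginalizing a utility measure over yet-unknown data values using bootstrap-filter weights, can be sketched for a toy problem. The prior ensemble, the nonlinear prediction goal, and the two candidate measurement operators below are all invented for illustration and are far simpler than PreDIA's geostatistical setting.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 2000
z = rng.normal(0.0, 1.0, N)      # prior ensemble of the uncertain parameter
pred = z ** 2                    # nonlinear prediction goal g(z)
prior_var = float(pred.var())
sigma = 0.3                      # measurement error std (assumed)

def expected_posterior_var(h, n_data_draws=200):
    """Preposterior analysis in the PreDIA spirit: simulate data values from
    the prior, weight the ensemble with bootstrap-filter likelihood weights,
    and average the resulting posterior prediction variance."""
    vals = []
    for k in rng.integers(0, N, n_data_draws):
        y = h(z[k]) + rng.normal(0.0, sigma)          # hypothetical datum
        w = np.exp(-0.5 * ((y - h(z)) / sigma) ** 2)  # likelihood weights
        w /= w.sum()
        m = np.sum(w * pred)
        vals.append(np.sum(w * (pred - m) ** 2))      # weighted variance
    return float(np.mean(vals))

v_direct = expected_posterior_var(lambda x: x)        # informative design
v_weak = expected_posterior_var(lambda x: 0.1 * x)    # weakly informative
```

Comparing the expected posterior variances of candidate designs before any data are collected is exactly the ranking step of optimal design; here the direct measurement is correctly identified as the more informative option.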
Nassar, Ala F; Wisnewski, Adam V; Raddassi, Khadir
2017-03-01
Analysis of multiplexed assays is highly important for clinical diagnostics and other analytical applications. Mass cytometry enables multi-dimensional, single-cell analysis of cell type and state. In mass cytometry, the rare earth metals used as reporters on antibodies allow determination of marker expression in individual cells. Barcode-based bioassays for CyTOF are able to encode and decode for different experimental conditions or samples within the same experiment, facilitating progress in producing straightforward and consistent results. Herein, an integrated protocol for automated sample preparation for barcoding used in conjunction with mass cytometry for clinical bioanalysis samples is described; we offer results of our work with barcoding protocol optimization. In addition, we present some points to be considered in order to minimize the variability of quantitative mass cytometry measurements. For example, we discuss the importance of having multiple populations during titration of the antibodies and effect of storage and shipping of labelled samples on the stability of staining for purposes of CyTOF analysis. Data quality is not affected when labelled samples are stored either frozen or at 4 °C and used within 10 days; we observed that cell loss is greater if cells are washed with deionized water prior to shipment or are shipped in lower concentration. Once the labelled samples for CyTOF are suspended in deionized water, the analysis should be performed expeditiously, preferably within the first hour. Damage can be minimized if the cells are resuspended in phosphate-buffered saline (PBS) rather than deionized water while waiting for data acquisition.
Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.
Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène
2016-06-01
Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air-to-liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved with scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L(-1) and 10% for 10 mBq L(-1). While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L(-1), a conservative experimental estimate is closer to 5 mBq L(-1), corresponding to 0.14 fg g(-1). The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot easily be measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported.
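The quoted uncertainties scale with counting statistics in the usual Poisson way, which a two-line helper makes explicit. The count values below are invented; only the square-root scaling with counting time is the point.

```python
import math

def relative_uncertainty(signal_counts, background_counts):
    """1-sigma relative uncertainty of a net Poisson counting measurement:
    net = S - B, sigma_net = sqrt(S + B), counts over the same live time."""
    net = signal_counts - background_counts
    return math.sqrt(signal_counts + background_counts) / net

# invented counts: quadrupling the counting time scales all counts by 4
short = relative_uncertainty(1200, 200)
longer = relative_uncertainty(4800, 800)
```

Quadrupling the counting time halves the relative uncertainty, which is why "sufficiently long counting times for signal and background" is the key lever for reaching the 5-10% figures above.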
Severtson, Dustin; Flower, Ken; Nansen, Christian
2016-08-01
The cabbage aphid is a significant pest worldwide in brassica crops, including canola. This pest has shown considerable ability to develop resistance to insecticides, so these should only be applied on a "when and where needed" basis. Thus, optimized sampling plans to accurately assess cabbage aphid densities are critically important to determine the potential need for pesticide applications. In this study, we developed a spatially optimized binomial sequential sampling plan for cabbage aphids in canola fields. Based on five sampled canola fields, sampling plans were developed using 0.1, 0.2, and 0.3 proportions of plants infested as action thresholds. Average sample numbers required to make a decision ranged from 10 to 25 plants. Decreasing acceptable error from 10 to 5% was not considered practically feasible, as it substantially increased the number of samples required to reach a decision. We determined the relationship between the proportions of canola plants infested and cabbage aphid densities per plant, and proposed a spatially optimized sequential sampling plan for cabbage aphids in canola fields, in which spatial features (i.e., edge effects) and optimization of sampling effort (i.e., sequential sampling) are combined. Two forms of stratification were performed to reduce spatial variability caused by edge effects and large field sizes. Spatially optimized sampling, starting at the edge of fields, reduced spatial variability and therefore increased the accuracy of infested plant density estimates. The proposed spatially optimized sampling plan may be used to spatially target insecticide applications, resulting in cost savings, insecticide resistance mitigation, conservation of natural enemies, and reduced environmental impact.
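A binomial sequential sampling plan of this kind is commonly built on Wald's SPRT, whose stop lines can be computed in a few lines. The proportions (p0 = 0.1, p1 = 0.3) echo the action thresholds mentioned above, but the error rates and the SPRT formulation itself are assumptions for illustration, not the authors' exact plan.

```python
import math

def sprt_bounds(p0, p1, alpha=0.1, beta=0.1):
    """Wald SPRT stop lines for a binomial sequential plan: after n plants
    with d infested, decide 'below' if d <= h0 + s*n, 'treat' if d >= h1 + s*n."""
    g = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
    s = math.log((1 - p0) / (1 - p1)) / g
    h0 = math.log(beta / (1 - alpha)) / g      # lower intercept (negative)
    h1 = math.log((1 - beta) / alpha) / g      # upper intercept (positive)
    return s, h0, h1

def classify(infested_seq, p0=0.1, p1=0.3):
    """Walk plant by plant until one of the two stop lines is crossed."""
    s, h0, h1 = sprt_bounds(p0, p1)
    d = 0
    for n, x in enumerate(infested_seq, start=1):
        d += x
        if d <= h0 + s * n:
            return "below", n
        if d >= h1 + s * n:
            return "treat", n
    return "undecided", len(infested_seq)

clean = classify([0] * 30)       # no infested plants found
infested = classify([1] * 10)    # every sampled plant infested
```

Clear-cut fields terminate after only a handful of plants, consistent with the 10-25 plant average sample numbers reported; ambiguous fields near the threshold take the longest.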
Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning
Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron
2015-01-01
Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790
Clague, D; Weisgraber, T; Rockway, J; McBride, K
2006-02-12
The focus of the research effort described here is to develop novel simulation tools to address design and optimization needs in the general class of problems that involve species and fluid (liquid and gas phase) transport through sieving media. This was primarily motivated by the heightened attention on Chem/Bio early detection systems, which, among other needs, require high-efficiency filtration, collection and sample preparation systems. Hence, the goal was to develop the computational analysis tools necessary to optimize these critical operations. This new capability is designed to characterize system efficiencies based on the details of the microstructure and environmental effects. To accomplish this, new lattice Boltzmann simulation capabilities were developed to include detailed microstructure descriptions, the relevant surface forces that mediate species capture and release, and temperature effects for both liquid and gas phase systems. While developing the capability, actual demonstration and model systems (and subsystems) of national and programmatic interest were targeted to demonstrate the capability. As a result, where possible, experimental verification of the computational capability was performed either directly using Digital Particle Image Velocimetry or by comparison with published results.
A Procedure to Determine the Optimal Sensor Positions for Locating AE Sources in Rock Samples
NASA Astrophysics Data System (ADS)
Duca, S.; Occhiena, C.; Sambuelli, L.
2015-03-01
As part of a research project aimed at better understanding frost weathering mechanisms of rocks, laboratory tests have been designed to specifically assess a theoretical model of crack propagation due to the ice segregation process in water-saturated and thermally microcracked cubic samples of Arolla gneiss. As the formation and growth of microcracks during freezing tests on rock material is accompanied by a sudden release of stored elastic energy, the propagation of elastic waves can be detected, at the laboratory scale, by acoustic emission (AE) sensors. The AE receiver array geometry is a sensitive factor influencing source location errors, since it can greatly amplify the effect of small measurement errors. Despite the large literature on AE source location, little attention, to our knowledge, has been paid to the description of the experimental design phase. As a consequence, the criteria for sensor positioning are often not declared and not related to location accuracy. In the present paper, a tool for the identification of the optimal sensor positions on a cubic rock specimen is presented. The optimal receiver configuration is chosen by studying the condition numbers of the kernel matrices, used for inverting the arrival times and finding the source location, obtained for properly selected combinations of sensor and source positions.
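The condition-number criterion can be made concrete for a homogeneous-velocity model, where each kernel row is the (negated) unit vector from the source to a sensor. The cube geometry, source position, and the two candidate arrays below are invented; the point is that a geometrically spread array conditions the arrival-time inversion far better than a clustered one.

```python
import numpy as np

def traveltime_kernel(sensors, source, v=1.0):
    """Jacobian of arrival times w.r.t. the source coordinates for straight
    rays in a homogeneous medium of velocity v: row i is -(unit vector from
    source to sensor i) / v."""
    d = sensors - source
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return -d / (v * r)

source = np.array([0.5, 0.5, 0.5])      # assumed source region: cube centre
# four alternate corners of the unit cube vs. four sensors crowded on one face
spread = np.array([[0, 0, 0], [1, 1, 0], [0, 1, 1], [1, 0, 1]], float)
cluster = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0.1, 0.1, 0]], float)

cond_spread = float(np.linalg.cond(traveltime_kernel(spread, source)))
cond_cluster = float(np.linalg.cond(traveltime_kernel(cluster, source)))
```

The tetrahedral spread yields a condition number near 1 (nearly orthogonal ray directions), while the single-face cluster yields a large one, so small picking errors blow up into large location errors; ranking candidate layouts by this number is the essence of the selection tool described above.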
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
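A minimal sketch of Boolean sampling-matrix optimization: start from a random 0/1 matrix and greedily accept bit flips that lower its mutual coherence, a standard proxy for recovery quality. The paper's data-driven method is more sophisticated; matrix sizes and the flip budget here are assumed values.

```python
import numpy as np

rng = np.random.default_rng(7)

def coherence(Phi):
    """Mutual coherence: largest normalized inner product between distinct
    columns of the sampling matrix (lower tends to mean better recovery)."""
    norms = np.linalg.norm(Phi, axis=0)
    G = np.abs(Phi.T @ Phi) / np.outer(norms, norms)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

m, n = 8, 20                                   # assumed measurement/signal dims
Phi = rng.integers(0, 2, (m, n)).astype(float)
while (Phi.sum(axis=0) == 0).any():            # avoid all-zero columns
    Phi = rng.integers(0, 2, (m, n)).astype(float)

mu0 = coherence(Phi)
mu = mu0
for _ in range(400):                           # greedy single-bit flips
    i, j = int(rng.integers(m)), int(rng.integers(n))
    if Phi[i, j] == 1 and Phi[:, j].sum() == 1:
        continue                               # keep every column non-zero
    Phi[i, j] = 1 - Phi[i, j]
    new = coherence(Phi)
    if new <= mu:
        mu = new                               # keep improving flips
    else:
        Phi[i, j] = 1 - Phi[i, j]              # revert worsening flips
```

Because the optimized matrix stays strictly 0/1, each measurement is a plain sum of selected samples, which is what yields the energy and area savings over real-valued embeddings reported above.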
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter of eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by simulated annealing. Topographic factors for these thirteen sample sets, including slope, plan curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the optimization results, a multiple linear regression model with the topographic factors as independent variables was built, alongside a multilayer perceptron model based on the neural network approach, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration captured soil-landscape relationships accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying soil attribute distribution by drawing on the spatial layout of the road network, historical samples, and digital elevation data, providing an effective means, as well as a theoretical basis, for determining sampling configurations and mapping the spatial distribution of soil organic matter at low cost and with high efficiency.
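The first of the two models above, multiple linear regression of soil organic matter on topographic factors, can be sketched with synthetic data; the coefficients, noise level, and sample sizes below are hypothetical, not the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 13 * 8  # thirteen sample sets, several points each (illustrative sizes)

# Hypothetical standardized topographic factors: slope, plan curvature,
# profile curvature, wetness index, stream power, sediment transport.
X = rng.standard_normal((n, 6))
true_beta = np.array([-0.6, 0.1, -0.2, 0.8, 0.3, 0.2])
som = 2.5 + X @ true_beta + 0.3 * rng.standard_normal(n)  # soil organic matter

# Multiple linear regression via ordinary least squares.
A = np.hstack([np.ones((n, 1)), X])
beta, *_ = np.linalg.lstsq(A, som, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((som - pred) ** 2) / np.sum((som - som.mean()) ** 2)
print(f"intercept {beta[0]:.2f}, R^2 = {r2:.2f}")
```

The multilayer perceptron plays the same role with a nonlinear map in place of `A @ beta`; the comparison in the abstract is between these two fitted predictors.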
Ferretti, James A; Tran, Hiep V; Cosgrove, Elizabeth; Protonentis, John; Loftin, Virginia; Conklin, Carol S; Grant, Robert N
2011-05-01
Currently, Enterococcus densities in marine bathing beach samples are determined using conventional methods that require 24 h to obtain results. Real-time PCR methods are available that can produce results in as little as 3 h. The purpose of this study was to evaluate a more rapid test method for determining bacterial contamination at marine bathing beaches, to better protect human health. The geometric mean of Enterococcus densities using Enterolert® defined substrate testing and membrane filtration ranged from 5.2 to 150 MPN or CFU/100 mL, and corresponding qPCR results ranged from 6.6 to 1785 CCE/100 mL. Regression analysis showed a positive correlation between qPCR and the conventional tests, with an overall correlation (r) of 0.71. qPCR was found to provide accurate and sensitive estimates of Enterococcus densities and has the potential to be used as a rapid test method for the quantification of Enterococcus in marine waters.
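The reported correlation is straightforward to reproduce in form: Pearson's r between paired culture and qPCR densities, conventionally computed on a log10 scale. The paired values below are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical paired beach-water results: MPN/100 mL by culture-based
# Enterolert vs. CCE/100 mL by qPCR (illustrative values only).
culture = np.array([5.2, 12, 31, 48, 75, 110, 150, 9, 22, 60])
qpcr = np.array([6.6, 20, 55, 90, 160, 400, 1785, 15, 40, 130])

# Correlate on a log10 scale, as is conventional for bacterial densities.
r = np.corrcoef(np.log10(culture), np.log10(qpcr))[0, 1]
print(f"r = {r:.2f}")
```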
Li, Mu; Rai, Alex J; DeCastro, G Joel; Zeringer, Emily; Barta, Timothy; Magdaleno, Susan; Setterquist, Robert; Vlassov, Alexander V
2015-10-01
Exosomes are RNA and protein-containing nanovesicles secreted by all cell types and found in abundance in body fluids, including blood, urine and cerebrospinal fluid. These vesicles seem to be a perfect source of biomarkers, as their cargo largely reflects the content of parental cells, and exosomes originating from all organs can be obtained from circulation through minimally invasive or non-invasive means. Here we describe an optimized procedure for exosome isolation and analysis using clinical samples, starting from quick and robust extraction of exosomes with Total exosome isolation reagent, then isolation of RNA followed by qRT-PCR. Effectiveness of this workflow is exemplified by analysis of the miRNA content of exosomes derived from serum samples - obtained from the patients with metastatic prostate cancer, treated prostate cancer patients who have undergone prostatectomy, and control patients without prostate cancer. Three promising exosomal microRNA biomarkers were identified, discriminating these groups: hsa-miR375, hsa-miR21, hsa-miR574.
A simple optimized microwave digestion method for multielement monitoring in mussel samples
NASA Astrophysics Data System (ADS)
Saavedra, Y.; González, A.; Fernández, P.; Blanco, J.
2004-04-01
With the aim of obtaining a common set of decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using sample weight, digestion time and acid addition as factors. The optimal conditions were found to be 0.5 g of freeze-dried, triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. Atomic absorption determinations could be carried out using calibrations with aqueous standards, with matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using the oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, with good results.
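The factorial experiment behind this optimization can be laid out programmatically. Below is a two-level full factorial over the three factors named in the abstract; the level values are illustrative assumptions, not the study's actual grid.

```python
from itertools import product

# Two candidate levels per factor (illustrative values).
factors = {
    "sample_weight_g": (0.25, 0.5),
    "digestion_time_min": (10, 20),
    "nitric_acid_ml": (3, 6),
}

# Full factorial: every combination of levels, 2^3 = 8 runs.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(i, run)
print(f"{len(runs)} runs for a 2^3 full factorial")
```

Each run is digested and measured; main effects and interactions of the three factors on recovery then identify the optimum reported in the abstract (0.5 g, 20 min, 6 ml).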
Gostic, T; Klemenc, S; Stefane, B
2009-05-30
The pyrolysis behaviour of pure cocaine base, as well as the influence of various additives, was studied using conditions relevant to the smoking of illicit cocaine by humans. For this purpose an aerobic pyrolysis device was developed and the experimental conditions were optimized. In the first part of our study, some basic experimental parameters of the pyrolysis were optimized, i.e., the furnace temperature, the sampling start time, the heating period, the sampling time, and the air-flow rate through the system. The second part of the investigation focused on the volatile products formed during the pyrolysis of pure cocaine free base and of mixtures of cocaine base and adulterants. The anaesthetics lidocaine, benzocaine and procaine, the analgesics phenacetin and paracetamol, and the stimulant caffeine were used as adulterants. Under the applied experimental conditions complete volatilization of the samples was achieved, i.e., residuals of the studied compounds were not detected in the pyrolysis cell. Volatilization of the pure cocaine base showed that the cocaine recovery available for inhalation (adsorbed on traps) was approximately 76%. GC-MS and NMR analyses of the smoke condensate revealed the presence of some additional cocaine pyrolytic products, such as anhydroecgonine methyl ester (AEME), benzoic acid (BA) and carbomethoxycycloheptatrienes (CMCHTs). Experiments with different cocaine-adulterant mixtures showed that the addition of the adulterants changed the thermal behaviour of the cocaine. The most significant effect was that of paracetamol. The total recovery of the cocaine (adsorbed on traps and in a glass tube) from the 1:1 cocaine-paracetamol mixture was found to be only 3.0±0.8%, versus 81.4±2.9% for the pure cocaine base. The other adulterants had less extensive effects on the recovery of cocaine, but the pyrolysis of the cocaine-procaine mixture led to the formation of some unique pyrolytic products.
Vollmer, Tanja; Schottstedt, Volkmar; Bux, Juergen; Walther-Wenke, Gabriele; Knabbe, Cornelius; Dreier, Jens
2014-01-01
Background: There is growing concern about the residual risk of bacterial contamination of platelet concentrates in Germany, despite the reduction of the shelf-life of these concentrates and the introduction of bacterial screening. In this study, the applicability of the BactiFlow flow cytometric assay for bacterial screening of platelet concentrates on day 2 or 3 of their shelf-life was assessed in two German blood services. The results were used to evaluate currently implemented or newly discussed screening strategies. Materials and methods: Two thousand and ten apheresis platelet concentrates were tested on day 2 or day 3 after donation using BactiFlow flow cytometry. Reactive samples were confirmed by the BacT/Alert culture system. Results: Twenty-four of the 2,100 platelet concentrates tested were reactive in the first test by BactiFlow. Of these 24 platelet concentrates, 12 were false-positive and the other 12 were initially reactive. None of the microbiological cultures of the initially reactive samples was positive. Parallel examination of 1,026 platelet concentrates by culture revealed three positive platelet concentrates, with bacteria detected only in the anaerobic culture bottle and identified as Staphylococcus species. Two platelet concentrates were confirmed positive for Staphylococcus epidermidis by culture. Retrospective analysis of the growth kinetics of the bacteria indicated that the bacterial titres were most likely below the diagnostic sensitivity of the BactiFlow assay (<300 CFU/mL) and probably had no transfusion relevance. Conclusions: The BactiFlow assay is very convenient for bacterial screening of platelet concentrates, independently of the testing day and the screening strategy. Although the optimal screening strategy could not be defined, this study provides further data to help achieve this goal. PMID:24887230
Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors
Riva-Murray, Karen; Bradley, Paul M.; Journey, Celeste; Brigham, Mark E.; Scudder Eikenberry, Barbara C.; Knightes, Christopher; Button, Daniel T.
2013-01-01
Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hgwater sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hgwater estimates. Models were evaluated for parsimony, using Akaike’s Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg - UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics.
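The core quantity in the abstract above is simple: BAF = Hg_fish / Hg_water, with Hg_water taken as a flow-weighted annual mean. A sketch with hypothetical paired values (the made-up numbers below land near log10 BAF ≈ 6 on the conventional ng/kg over ng/L scale):

```python
import numpy as np

# Hypothetical data: fish Hg (ng/g wet weight) and repeated filtered
# methylmercury water samples (ng/L) with streamflow weights, per site.
hg_fish = np.array([120.0, 300.0, 95.0, 540.0])
flow = np.array([[2.0, 5.0, 3.0],          # per-site streamflow weights
                 [1.0, 1.0, 2.0],
                 [4.0, 2.0, 2.0],
                 [3.0, 3.0, 6.0]])
hg_water = np.array([[0.05, 0.12, 0.08],   # per-site FMeHg samples (ng/L)
                     [0.20, 0.25, 0.22],
                     [0.03, 0.04, 0.05],
                     [0.30, 0.45, 0.40]])

# Flow-weighted mean water concentration per site.
fw_mean = (hg_water * flow).sum(axis=1) / flow.sum(axis=1)

# BAF = fish Hg (converted to ng/kg) / water Hg (ng/L), reported as log10.
log_baf = np.log10(hg_fish * 1e3 / fw_mean)
print(np.round(log_baf, 2))
```

The modeling question in the abstract is which water metric (filtered vs. unfiltered MeHg, annual vs. seasonal mean, high-percentile concentrations) makes this ratio most consistent across sites and species.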
Acharya, Munjal M; Martirosian, Vahan; Christie, Lori-Ann; Riparip, Lara; Strnadel, Jan; Parihar, Vipan K; Limoli, Charles L
2015-01-01
Past preclinical studies have demonstrated the capability of using human stem cell transplantation in the irradiated brain to ameliorate radiation-induced cognitive dysfunction. Intrahippocampal transplantation of human embryonic stem cells and human neural stem cells (hNSCs) was found to functionally restore cognition in rats 1 and 4 months after cranial irradiation. To optimize the potential therapeutic benefits of human stem cell transplantation, we have further defined optimal transplantation windows for maximizing cognitive benefits after irradiation and used induced pluripotent stem cell-derived hNSCs (iPSC-hNSCs) that may eventually help minimize graft rejection in the host brain. For these studies, animals given an acute head-only dose of 10 Gy were grafted with iPSC-hNSCs at 2 days, 2 weeks, or 4 weeks following irradiation. Animals receiving stem cell grafts showed improved hippocampal spatial memory and contextual fear-conditioning performance compared with irradiated sham-surgery controls when analyzed 1 month after transplantation surgery. Importantly, superior performance was evident when stem cell grafting was delayed by 4 weeks following irradiation compared with animals grafted at earlier times. Analysis of the 4-week cohort showed that the surviving grafted cells migrated throughout the CA1 and CA3 subfields of the host hippocampus and differentiated into neuronal (∼39%) and astroglial (∼14%) subtypes. Furthermore, radiation-induced inflammation was significantly attenuated across multiple hippocampal subfields in animals receiving iPSC-hNSCs at 4 weeks after irradiation. These studies expand our prior findings to demonstrate that protracted stem cell grafting provides improved cognitive benefits following irradiation that are associated with reduced neuroinflammation.
2015-01-01
criteria for paraphilia are too inclusive. Suggestions are given to improve the definition of pathological sexual interests, and the crucial difference between SF and sexual interest is underlined. Joyal CC. Defining “normophilic” and “paraphilic” sexual fantasies in a population‐based sample: On the importance of considering subgroups. Sex Med 2015;3:321–330. PMID:26797067
Nie Xiaobo; Liang Jian; Yan Di
2012-12-15
Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient-specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient-specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days, depending on whether the organ variation behaves as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the functional form of the coefficients. Eleven head-and-neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during H&N cancer radiotherapy can be represented using the first 3 to 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
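The PCA step of an organ sample generator like the one described above can be sketched as follows. The synthetic "daily shapes" (a mean vector plus three orthonormal variation modes plus noise) are a stand-in for contoured organ surfaces from daily cone-beam CT, and only the stationary random-variable case is shown (no time-varying LSR):

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_dof = 30, 60    # daily images, flattened organ-surface coordinates

# Synthetic daily organ shapes: mean shape + 3 orthonormal variation modes
# with decreasing amplitudes + small residual noise.
mean_shape = rng.standard_normal(n_dof)
modes = np.linalg.qr(rng.standard_normal((n_dof, 3)))[0]   # orthonormal modes
coefs = rng.standard_normal((n_days, 3)) * np.array([3.0, 1.5, 0.8])
shapes = mean_shape + coefs @ modes.T + 0.1 * rng.standard_normal((n_days, n_dof))

# PCA via SVD of the centered data; keep the leading eigenvectors.
centered = shapes - shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
k = 3
print("variance captured by 3 modes:", round(float(explained[:k].sum()), 3))

# Sample generator: draw coefficients from the per-mode variance estimated
# from the data, then reconstruct a random organ shape.
mode_std = s[:k] / np.sqrt(n_days - 1)
sample = shapes.mean(axis=0) + (rng.standard_normal(k) * mode_std) @ Vt[:k]
print("generated sample dimension:", sample.shape)
```

New daily images extend `shapes`, and repeating the decomposition refines both the eigenvectors and the coefficient distribution, mirroring the recursive PDF update in the abstract.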
D'Hondt, Matthias; Van Dorpe, Sylvia; Mehuys, Els; Deforce, Dieter; DeSpiegeleer, Bart
2010-12-01
A sensitive and selective HPLC method for the assay and degradation of salmon calcitonin, a 32-amino acid peptide drug, formulated at low concentrations (400 ppm m/m) in a bioadhesive nasal powder containing polymers, was developed and validated. The sample preparation step was optimized using Plackett-Burman and Onion experimental designs. The response functions evaluated were calcitonin recovery and analytical stability. The best results were obtained by treating the sample with 0.45% (v/v) trifluoroacetic acid at 60 degrees C for 40 min. These extraction conditions did not yield any observable degradation, while a maximum recovery for salmon calcitonin of 99.6% was obtained. The HPLC-UV/MS methods used a reversed-phase C(18) Vydac Everest column, with a gradient system based on aqueous acid and acetonitrile. UV detection, using trifluoroacetic acid in the mobile phase, was used for the assay of calcitonin and related degradants. Electrospray ionization (ESI) ion trap mass spectrometry, using formic acid in the mobile phase, was implemented for the confirmatory identification of degradation products. Validation results showed that the methodology was fit for the intended use, with accuracy of 97.4+/-4.3% for the assay and detection limits for degradants ranging between 0.5 and 2.4%. Pilot stability tests of the bioadhesive powder under different storage conditions showed a temperature-dependent decrease in salmon calcitonin assay value, with no equivalent increase in degradation products, explained by the chemical interaction between salmon calcitonin and the carbomer polymer.
Bedford, Jennifer L; Barr, Susan I
2005-04-13
BACKGROUND: Few population-based studies of vegetarians have been published. Thus we compared self-reported vegetarians to non-vegetarians in a representative sample of British Columbia (BC) adults, weighted to reflect the BC population. METHODS: Questionnaires, 24-hr recalls and anthropometric measures were completed during in-person interviews with 1817 community-dwelling residents, 19-84 years, recruited using a population-based health registry. Vegetarian status was self-defined. ANOVA with age as a covariate was used to analyze continuous variables, and chi-square was used for categorical variables. Supplement intakes were compared using the Mann-Whitney test. RESULTS: Approximately 6% (n = 106) stated that they were vegetarian, and most did not adhere rigidly to a flesh-free diet. Vegetarians were more likely female (71% vs. 49%), single, of low-income status, and tended to be younger. Female vegetarians had lower BMI than non-vegetarians (23.1 ± 0.7 (mean ± SE) vs. 25.7 ± 0.2 kg/m2), and also had lower waist circumference (75.0 ± 1.5 vs. 79.8 ± 0.5 cm). Male vegetarians and non-vegetarians had similar BMI (25.9 ± 0.8 vs. 26.7 ± 0.2 kg/m2) and waist circumference (92.5 ± 2.3 vs. 91.7 ± 0.4 cm). Female vegetarians were more physically active (69% vs. 42% active ≥ 4 times/wk) while male vegetarians were more likely to use nutritive supplements (71% vs. 51%). Energy intakes were similar, but vegetarians reported higher % energy as carbohydrate (56% vs. 50%), and lower % protein (men only; 13% vs. 17%) or % fat (women only; 27% vs. 33%). Vegetarians had higher fiber, magnesium and potassium intakes. For several other nutrients, differences by vegetarian status differed by gender. The prevalence of inadequate magnesium intake (% below the Estimated Average Requirement) was lower in vegetarians than non-vegetarians (15% vs. 34%). Female vegetarians also had a lower prevalence of inadequate thiamin, folate, vitamin B6 and C intakes. Vegetarians were more
Lundin, Jessica I; Dills, Russell L; Ylitalo, Gina M; Hanson, M Bradley; Emmons, Candice K; Schorr, Gregory S; Ahmad, Jacqui; Hempelmann, Jennifer A; Parsons, Kim M; Wasser, Samuel K
2016-01-01
Biologic sample collection in wild cetacean populations is challenging. Most information on toxicant levels is obtained from blubber biopsy samples; however, sample collection is invasive and strictly regulated under permit, thus limiting sample numbers. Methods are needed to monitor toxicant levels that increase temporal and repeat sampling of individuals for population health and recovery models. The objective of this study was to optimize measuring trace levels (parts per billion) of persistent organic pollutants (POPs), namely polychlorinated-biphenyls (PCBs), polybrominated-diphenyl-ethers (PBDEs), dichlorodiphenyltrichloroethanes (DDTs), and hexachlorocyclobenzene, in killer whale scat (fecal) samples. Archival scat samples, initially collected, lyophilized, and extracted with 70 % ethanol for hormone analyses, were used to analyze POP concentrations. The residual pellet was extracted and analyzed using gas chromatography coupled with mass spectrometry. Method detection limits ranged from 11 to 125 ng/g dry weight. The described method is suitable for p,p'-DDE, PCBs-138, 153, 180, and 187, and PBDEs-47 and 100; other POPs were below the limit of detection. We applied this method to 126 scat samples collected from Southern Resident killer whales. Scat samples from 22 adult whales also had known POP concentrations in blubber and demonstrated significant correlations (p < 0.01) between matrices across target analytes. Overall, the scat toxicant measures matched previously reported patterns from blubber samples of decreased levels in reproductive-age females and a decreased p,p'-DDE/∑PCB ratio in J-pod. Measuring toxicants in scat samples provides an unprecedented opportunity to noninvasively evaluate contaminant levels in wild cetacean populations; these data have the prospect to provide meaningful information for vital management decisions.
Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability
2015-07-01
probability in design optimization problems. The buffered failure probability is more conservative and possesses properties that make it more convenient to compute and optimize. Since a failure
Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensors/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and which are dependent (targets), as well as the other constraints and the cost function. Targeting and optimization are performed using the standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
Crook, D J; Fierke, M K; Mauromoustakos, A; Kinney, D L; Stephen, F M
2007-06-01
In the Ozark Mountains of northern Arkansas and southern Missouri, an oak decline event, coupled with epidemic populations of red oak borer (Enaphalodes rufulus Haldeman), has resulted in extensive red oak (Quercus spp., section Lobatae) mortality. Twenty-four northern red oak trees, Quercus rubra L., infested with red oak borer, were felled in the Ozark National Forest between March 2002 and June 2003. Infested tree boles were cut into 0.5-m sample bolts, and the following red oak borer population variables were measured: current generation galleries, live red oak borer, emergence holes, and previous generation galleries. Population density estimates from sampling plans using varying numbers of samples taken randomly and systematically were compared with total census measurements for the entire infested tree bole. Systematic sampling consistently yielded lower percent root mean square error (%RMSE) than random sampling. Systematic sampling of one half of the tree (every other 0.5-m sample along the tree bole) yielded the lowest values. Estimates from plans systematically sampling one half the tree and systematic proportional sampling using seven or nine samples did not differ significantly from each other and were within 25% RMSE of the "true" mean. Thus, we recommend systematically removing and dissecting seven 0.5-m samples from infested trees as an optimal sampling plan for monitoring red oak borer within-tree population densities. This optimal sampling plan should allow for collection of acceptably accurate within-tree population density data for this native wood-boring insect and reducing labor and costs of dissecting whole trees.
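The comparison driving the recommendation above, %RMSE of random versus systematic bolt sampling against a whole-bole census, can be sketched with a synthetic within-tree density profile. The mid-bole concentration of galleries, the noise level, and the bole length are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bolts = 20                     # 0.5-m bolts along a 10-m infested bole

# Hypothetical within-tree density: galleries concentrated mid-bole
# (smooth Gaussian trend) plus sampling noise, truncated at zero.
z = np.linspace(0, 1, n_bolts)
counts = np.maximum(0, 10 * np.exp(-((z - 0.5) / 0.25) ** 2)
                    + rng.normal(0, 0.5, n_bolts))
true_mean = counts.mean()        # the "census" value for the whole bole

def pct_rmse(estimates):
    return 100 * np.sqrt(np.mean((estimates - true_mean) ** 2)) / true_mean

# Simple random sampling of 10 bolts, repeated many times.
n_trials, n_samp = 2000, 10
srs = np.array([counts[rng.choice(n_bolts, n_samp, replace=False)].mean()
                for _ in range(n_trials)])

# Systematic sampling of half the tree: every other bolt, both start points.
sys_ = np.array([counts[start::2].mean() for start in (0, 1)])

print(f"SRS %RMSE: {pct_rmse(srs):.1f}, systematic %RMSE: {pct_rmse(sys_):.1f}")
```

Because the density varies smoothly along the bole, every-other-bolt systematic samples track the trend and beat random samples of the same size, which is the pattern the study reports.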
Hunt, Brian R.; Ott, Edward
2015-09-15
In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call “expansion entropy,” and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based.
Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek
2016-03-01
Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) by providing more uniform coverage of a much larger network with far fewer samples. Results show that although many fluxes are identified as variable once the objective value is fixed, most of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities for rerouting greater portions of flux may be limited within the metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful tool for validating, and thereby improving, the predictions made by FBA-based tools by describing the optimal flux space associated with those predictions.
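The sampling-then-PCA pipeline described above can be sketched on a toy flux space. A basic Latin hypercube sampler (one point per axis stratum) plays the role of the paper's modified LHS, and a low-dimensional affine "optimal space" stands in for the FBA solution space; sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def latin_hypercube(n, d, rng):
    """Basic LHS on [0, 1]^d: each column is a stratified, jittered
    permutation, so every axis stratum contains exactly one point."""
    return (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n

# Toy "optimal flux space": 20 reactions whose variability is driven by
# two alternative-pathway degrees of freedom plus tiny residual noise.
n_rxn, n_free = 20, 2
basis = rng.standard_normal((n_free, n_rxn))
free = latin_hypercube(500, n_free, rng) - 0.5   # LHS over free directions
fluxes = 10 + free @ basis + 0.01 * rng.standard_normal((500, n_rxn))

# PCA: the leading components recover the few alternative-pathway patterns.
centered = fluxes - fluxes.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = s ** 2 / (s ** 2).sum()
print("variance explained by 2 PCs:", round(float(explained[:2].sum()), 3))
```

Although all 20 "reactions" vary, two principal components explain nearly all of the variability, which is the same reduction the paper reports at genome scale (211 variable reactions collapsing to 7 pathway groups).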
Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...
Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula
2015-01-01
Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2015-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
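The effect of error anisotropy on sampling probability reported above can be reproduced with a Monte Carlo sketch. A spherical tumor and a straight biopsy core along the needle (axial) direction are simplifying assumptions, and the split of the RMS error between directions is illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_probability(r_mm, sigma_xyz, half_core_mm=9.0, n=200_000):
    """Monte Carlo probability that a biopsy core (length 2*half_core along
    the axial z direction) intersects a spherical tumor targeted at its
    center, under anisotropic Gaussian targeting error."""
    err = rng.standard_normal((n, 3)) * np.asarray(sigma_xyz, float)
    rho2 = err[:, 0] ** 2 + err[:, 1] ** 2            # lateral/elevational miss
    margin = np.sqrt(np.maximum(r_mm ** 2 - rho2, 0.0))
    hit = (rho2 <= r_mm ** 2) & (np.abs(err[:, 2]) <= half_core_mm + margin)
    return hit.mean()

r = 4.9  # mm, roughly a 0.5 cm^3 spherical tumor

# Same ~3.5 mm total RMS error, concentrated axially vs. laterally.
p_axial = sample_probability(r, (0.5, 0.5, 3.43))
p_lateral = sample_probability(r, (2.45, 2.45, 0.5))
print(f"axial-dominated: P={p_axial:.3f}, lateral-dominated: P={p_lateral:.3f}")
```

Because the core extends along the needle axis, axial error is largely forgiven, while lateral and elevational errors move the core off the tumor directly, consistent with the abstract's finding that those components dominate the sampling probability.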
Thomson, Amara C; Ramos, Joyce S; Fassett, Robert G; Coombes, Jeff S; Dalleck, Lance C
2015-01-01
This study sought to determine the optimal criteria and sampling interval for detecting a V̇O2 plateau at V̇O2max in patients with metabolic syndrome. Twenty-three participants with criteria-defined metabolic syndrome underwent a maximal graded exercise test. Four sampling intervals and three V̇O2 plateau criteria were analysed to determine the effect of each parameter on the incidence of V̇O2 plateau at V̇O2max. Seventeen tests were classified as maximal based on attainment of at least two of three criteria. There was a significant (p < 0.05) effect of the 15-breath (b) sampling interval on the incidence of V̇O2 plateau at V̇O2max across the ≤50 and ≤80 mL·min⁻¹ conditions. Strength of association was established by Cramer's V statistic (φc): ≤50 mL·min⁻¹ (φc = 0.592, p < 0.05), ≤80 mL·min⁻¹ (φc = 0.383, p < 0.05), ≤150 mL·min⁻¹ (φc = 0.246, p > 0.05). When conducting maximal stress tests on patients with metabolic syndrome, a 15-b sampling interval and a ≤50 mL·min⁻¹ criterion should be used to increase the likelihood of detecting a V̇O2 plateau at V̇O2max.
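The interaction between sampling interval and plateau criterion can be made concrete with a small helper. This is only a sketch: the non-overlapping breath-averaging scheme and the comparison of the final two windows are assumptions, not the authors' exact processing:

```python
def detect_plateau(vo2_breaths, interval=15, threshold=50.0):
    """Check for a VO2 plateau at the end of a graded test: average the
    breath-by-breath VO2 series (mL/min) over consecutive non-overlapping
    windows of `interval` breaths and flag a plateau when the rise between
    the final two windows is at most `threshold` mL/min."""
    n = len(vo2_breaths)
    if n < 2 * interval:
        raise ValueError("need at least two full sampling intervals")
    last = sum(vo2_breaths[n - interval:]) / interval
    prev = sum(vo2_breaths[n - 2 * interval:n - interval]) / interval
    return (last - prev) <= threshold
```

With this formulation, a longer interval smooths breath-to-breath noise, while a tighter threshold (e.g. 50 rather than 150 mL/min) makes the plateau call more conservative, which is the trade-off the study quantifies.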
Pooler, P.S.; Smith, D.R.
2005-01-01
We compared the ability of simple random sampling (SRS) and a variety of systematic sampling (SYS) designs to estimate abundance, quantify spatial clustering, and predict the spatial distribution of freshwater mussels. Sampling simulations were conducted using data obtained from a census of freshwater mussels in a 40 × 33 m section of the Cacapon River near Capon Bridge, West Virginia, and from a simulated spatially random population generated to have the same abundance as the real population. Sampling units of 0.25 m² gave more accurate and precise abundance estimates and generally better spatial predictions than 1-m² sampling units. Systematic sampling with ≥2 random starts was more efficient than SRS. Estimates of abundance based on SYS were more accurate when the distance between sampling units across the stream was less than or equal to the distance between sampling units along the stream. Three measures for quantifying spatial clustering were examined: the Hopkins Statistic, the Clumping Index, and Morisita's Index. Morisita's Index was the most reliable, and the Hopkins Statistic was prone to false rejection of complete spatial randomness. SYS designs with units spaced equally across and up stream provided the most accurate predictions when estimating the spatial distribution by kriging. Our research indicates that SYS designs with sampling units equally spaced both across and along the stream would be appropriate for sampling freshwater mussels even if no information about the true underlying spatial distribution of the population were available to guide the design choice. © 2005 by The North American Benthological Society.
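Morisita's Index, the dispersion measure the study found most reliable, is straightforward to compute from quadrat counts (a generic sketch of the standard formula, not the authors' code):

```python
def morisita_index(counts):
    """Morisita's Index of dispersion for quadrat counts x_1..x_n:
    I = n * sum(x*(x-1)) / (N*(N-1)), where N is the total count.
    Values near 1 suggest spatial randomness, >1 clustering, <1 evenness."""
    n = len(counts)
    N = sum(counts)
    if N < 2:
        raise ValueError("need at least two individuals in total")
    return n * sum(x * (x - 1) for x in counts) / (N * (N - 1))
```

For example, all individuals piled into one quadrat yields the maximum value n, while perfectly even counts give a value below 1.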
Lueck, Sonja C.; Russ, Annika C.; Botzenhardt, Ursula; Schlenk, Richard F.; Zobel, Kerry; Deshayes, Kurt; Vucic, Domagoj; Döhner, Hartmut; Döhner, Konstanze
2016-01-01
Apoptosis is deregulated in most, if not all, cancers, including hematological malignancies. Smac mimetics that antagonize Inhibitor of Apoptosis (IAP) proteins have so far largely been investigated in acute myeloid leukemia (AML) cell lines; however, little is yet known about the therapeutic potential of Smac mimetics in primary AML samples. In this study, we therefore investigated the antileukemic activity of the Smac mimetic BV6 in diagnostic samples of 67 adult AML patients and correlated the response to clinical, cytogenetic and molecular markers and gene expression profiles. Treatment with cytarabine (ara-C) was used as a standard chemotherapeutic agent. Interestingly, about half (51%) of primary AML samples are sensitive to BV6 and 21% are intermediately responsive, while 28% are resistant. Notably, 69% of ara-C-resistant samples show a good to fair response to BV6. Furthermore, combination treatment with ara-C and BV6 exerts additive effects in most samples. Whole-genome gene expression profiling identifies cell death, TNFR1 and NF-κB signaling among the top pathways activated by BV6 in BV6-sensitive, but not in BV6-resistant, cases. Furthermore, sensitivity of primary AML blasts to BV6 correlates with significantly elevated expression levels of TNF and lower levels of XIAP in diagnostic samples, as well as with NPM1 mutation. In a large set of primary AML samples, these data provide novel insights into the factors regulating Smac mimetic response in AML and have important implications for the development of Smac mimetic-based therapies and related diagnostics in AML. PMID:27385100
Zimmerman, Marc J.; Massey, Andrew J.; Campo, Kimberly W.
2005-01-01
During four periods from April 2002 to June 2003, pore-water samples were taken from river sediment within a gaining reach (Mill Pond) of the Sudbury River in Ashland, Massachusetts, with a temporary pushpoint sampler to determine whether this device is an effective tool for measuring small-scale spatial variations in concentrations of volatile organic compounds and selected field parameters (specific conductance and dissolved oxygen concentration). The pore waters sampled were within a subsurface plume of volatile organic compounds extending from the nearby Nyanza Chemical Waste Dump Superfund site to the river. Samples were collected from depths of 10, 30, and 60 centimeters below the sediment surface along two 10-meter-long, parallel transects extending into the river. Twenty-five volatile organic compounds were detected at concentrations ranging from less than 1 microgram per liter to hundreds of micrograms per liter (for example, 1,2-dichlorobenzene, 490 micrograms per liter; cis-1,2-dichloroethene, 290 micrograms per liter). The most frequently detected compounds were either chlorobenzenes or chlorinated ethenes. Many of the compounds were detected only infrequently. Quality-control sampling indicated a low incidence of trace concentrations of contaminants. Additional samples collected with passive-water-diffusion-bag samplers yielded results comparable to those collected with the pushpoint sampler and to samples collected in previous studies at the site. The results demonstrate that the pushpoint sampler can yield distinct samples from sites in close proximity; in this case, sampling sites were 1 meter apart horizontally and 20 or 30 centimeters apart vertically. Moreover, the pushpoint sampler was able to draw pore water when inserted to depths as shallow as 10 centimeters below the sediment surface without entraining surface water. The simplicity of collecting numerous samples in a short time period (routinely, 20 to 30 per day) validates the use of a
Tan, A A; Azman, S N; Abdul Rani, N R; Kua, B C; Sasidharan, S; Kiew, L V; Othman, N; Noordin, R; Chen, Y
2011-12-01
Protein samples vary greatly in type and origin, so the optimal procedure for each sample type must be determined empirically. To obtain a reproducible and complete sample presentation that displays as many proteins as possible on the 2DE gel, it is critical to perform additional sample preparation steps that improve the quality of the final results without selectively losing proteins. To address this, we developed a general method suitable for diverse sample types based on the phenol-chloroform extraction method (represented by TRI reagent). This method yielded good results when used to analyze a human breast cancer cell line (MCF-7), Vibrio cholerae, Cryptocaryon irritans cysts and liver abscess fat tissue, representing a cell line, bacteria, parasite cysts and pus, respectively. For each sample type, protein isolation methods using the TRI-reagent Kit, EasyBlue Kit, PRO-PREP™ Protein Extraction Solution and lysis buffer were methodically compared. The most useful protocol allows the extraction and separation of a wide diversity of protein samples with results that are reproducible among repeated experiments. Our results demonstrated that the modified TRI-reagent Kit gave the highest protein yield as well as the greatest number of total protein spots for all sample types. Distinctive differences in spot patterns were also observed in the 2DE gels obtained with the different extraction methods for each sample type.
NASA Astrophysics Data System (ADS)
Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.
2013-12-01
Underground nuclear tests may be first detected by seismic sensors or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and many of the methods used for surface and subsurface environmental sampling of gases can be used in an OSI scenario, on-site sampling conditions, required sampling volumes and the establishment of background concentrations of noble gases require the development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion (UNE) site located in welded volcanic tuff. A mixture of SF6, 127Xe and 37Ar was metered into 4400 m³ of air as it was injected into the top region of the UNE cavity. These tracers were expected to move toward the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. The effectiveness of the various sampling approaches and the results of tracer gas measurements will be presented.
Technology Transfer Automated Retrieval System (TEKTRAN)
The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
Shaw, Milton Sam; Coe, Joshua D; Sewell, Thomas D
2009-01-01
An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The 'full' system of interest is calculated using density functional theory (DFT) with a 6-31 G* basis set for the configurational energies. The 'reference' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
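The nested scheme can be sketched on a toy one-dimensional problem in which a cheap harmonic "reference" potential proposes moves for a quartic "full" potential. The potentials, parameters, and units here are illustrative stand-ins for the DFT/model-potential pair of the abstract, not the authors' implementation:

```python
import math
import random

def nested_mcmc(u_full, u_ref, beta=1.0, n_outer=4000, n_inner=25,
                step=0.6, seed=1):
    """Toy 1-D nested Markov chain Monte Carlo. Each outer trial move is
    built from n_inner Metropolis steps in the cheap reference potential,
    started from the current state; the outer test accepts with
    min(1, exp(-beta * [dU_full - dU_ref])), which corrects the
    reference-system trajectory to the full-system distribution."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_outer):
        y = x
        for _ in range(n_inner):   # inner walk targets exp(-beta * u_ref)
            y_try = y + rng.uniform(-step, step)
            if math.log(rng.random() + 1e-300) < -beta * (u_ref(y_try) - u_ref(y)):
                y = y_try
        # outer acceptance: full-system correction, evaluated only once
        # per n_inner cheap reference steps
        d = (u_full(y) - u_full(x)) - (u_ref(y) - u_ref(x))
        if math.log(rng.random() + 1e-300) < -beta * d:
            x = y
        samples.append(x)
    return samples

# full system: stiff quartic well; reference: its harmonic approximation
full = lambda x: 0.25 * x ** 4
ref = lambda x: 0.50 * x ** 2
chain = nested_mcmc(full, ref)
```

The payoff mirrors the abstract: the expensive ("full") energy is evaluated once per outer trial move rather than once per elementary step, while the chain still samples the full-system distribution.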
NASA Astrophysics Data System (ADS)
Tang, Gao; Jiang, FanHuag; Li, JunFeng
2015-11-01
Near-Earth asteroids have attracted considerable interest, and developments in low-thrust propulsion technology are making complex deep-space exploration missions possible. A mission that departs from low Earth orbit, uses a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid, and brings a sample back is investigated. By dividing the mission into five segments, the complex mission is solved separately, with different methods used to find optimal trajectories for each segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumed in escaping from the Earth. To avoid the possible numerical difficulties of indirect methods, a direct method that parameterizes the switching times and thrust direction is proposed. To maximize the sample mass, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques, including both direct and indirect methods, are investigated to optimize trajectories for the different segments, and they can be easily extended to other missions and more precise dynamic models.
Cardellicchio, Nicola; Di Leo, Antonella; Giandomenico, Santina; Santoro, Stefania
2006-01-01
Optimization of an acid digestion method for mercury determination in marine biological samples (dolphin liver, fish and mussel tissues) using closed-vessel microwave sample preparation is presented. Five digestion procedures with different acid mixtures were investigated: the best results were obtained when the microwave-assisted digestion was based on sample dissolution with an HNO3-H2SO4-K2Cr2O7 mixture. A comparison between microwave digestion and conventional reflux digestion showed considerable losses of mercury in the open digestion system. The microwave digestion method was tested satisfactorily using two certified reference materials, and the analytical results show good agreement with the certified values. Microwave digestion proved to be a reliable and rapid method for the decomposition of biological samples for mercury determination.
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
Zvolensky, Michael J; Sachs-Ericsson, Natalie; Feldner, Matthew T; Schmidt, Norman B; Bowman, Carrie J
2006-03-30
The present study evaluated a moderational model of neuroticism on the relation between smoking level and panic disorder using data from the National Comorbidity Survey. Participants (n=924) included current regular smokers, as defined by a report of smoking regularly during the past month. Findings indicated that a generalized tendency to experience negative affect (neuroticism) moderated the effects of maximum smoking frequency (i.e., number of cigarettes smoked per day during the period when smoking the most) on lifetime history of panic disorder even after controlling for drug dependence, alcohol dependence, major depression, dysthymia, and gender. These effects were specific to panic disorder, as no such moderational effects were apparent for other anxiety disorders. Results are discussed in relation to refining recent panic-smoking conceptual models and elucidating different pathways to panic-related problems.
Aubier, Thomas G; Sherratt, Thomas N
2015-11-01
The convergent evolution of warning signals in unpalatable species, known as Müllerian mimicry, has been observed in a wide variety of taxonomic groups. This form of mimicry is generally thought to have arisen as a consequence of local frequency-dependent selection imposed by sampling predators. However, despite clear evidence for local selection against rare warning signals, there appears an almost embarrassing amount of polymorphism in natural warning colors, both within and among populations. Because the model of predator cognition widely invoked to explain Müllerian mimicry (Müller's "fixed n(k)" model) is highly simplified and has not been empirically supported, here we explore the dynamical consequences of the optimal strategy for sampling unfamiliar prey. This strategy, based on a classical exploration-exploitation trade-off, not only allows for a variable number of prey sampled, but also accounts for predator neophobia under some conditions. In contrast to Müller's "fixed n(k)" sampling rule, the optimal sampling strategy is capable of generating a variety of dynamical outcomes, including mimicry but also regional and local polymorphism. Moreover, the heterogeneity of predator behavior across space and time that a more nuanced foraging strategy allows can further facilitate the emergence of both local and regional polymorphism in prey warning color.
O'Connell, Steven G; McCartney, Melissa A; Paulik, L Blair; Allan, Sarah E; Tidwell, Lane G; Wilson, Glenn; Anderson, Kim A
2014-10-01
Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be absorbed more efficiently by silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported, along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the best-performing silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties with which to compare polymers (log Kow 0.2-5.3), and are transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring.
OPTIMIZING MINIRHIZOTRON SAMPLE FREQUENCY FOR ESTIMATING FINE ROOT PRODUCTION AND TURNOVER
The most frequent reason for using minirhizotrons in natural ecosystems is the determination of fine root production and turnover. Our objective is to determine the optimum sampling frequency for estimating fine root production and turnover using data from evergreen (Pseudotsuga ...
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been used extensively, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
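The distinction between random and midpoint initial designs, and a minimal OLHS loop, can be sketched as follows. The greedy within-column swap and the maximin distance criterion are one common choice of optimization scheme and space-filling criterion, not necessarily those compared in the study:

```python
import random

def lhs(n, d, rng, midpoint=False):
    """Latin hypercube sample of n points in [0,1]^d: one point per
    stratum in every dimension, placed at the stratum midpoint or at a
    random position within it."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        offs = [0.5 if midpoint else rng.random() for _ in range(n)]
        cols.append([(p + o) / n for p, o in zip(perm, offs)])
    return [list(pt) for pt in zip(*cols)]

def min_dist(pts):
    """Maximin space-filling criterion: the smallest pairwise distance
    (larger is better)."""
    best = float("inf")
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d2 = sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))
            best = min(best, d2)
    return best ** 0.5

def optimize(pts, rng, iters=300):
    """Greedy OLHS: randomly swap the values of one coordinate between two
    points and keep the swap if it does not worsen the maximin criterion.
    Within-column swaps preserve the Latin hypercube structure."""
    pts = [p[:] for p in pts]
    score = min_dist(pts)
    for _ in range(iters):
        i, j = rng.randrange(len(pts)), rng.randrange(len(pts))
        k = rng.randrange(len(pts[0]))
        if i == j:
            continue
        pts[i][k], pts[j][k] = pts[j][k], pts[i][k]
        new = min_dist(pts)
        if new >= score:
            score = new
        else:  # revert a worsening swap
            pts[i][k], pts[j][k] = pts[j][k], pts[i][k]
    return pts, score
```

Feeding `lhs(..., midpoint=True)` versus `midpoint=False` into `optimize` is exactly the initial-design comparison the abstract describes, with the final `score` measuring the space-filling quality of the resulting design.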
Optimization of sample preparation for accurate results in quantitative NMR spectroscopy
NASA Astrophysics Data System (ADS)
Yamazaki, Taichi; Nakamura, Satoe; Saito, Takeshi
2017-04-01
Quantitative nuclear magnetic resonance (qNMR) spectroscopy is highly regarded as a measurement tool because it does not require a reference standard identical to the analyte. Measurement parameters have been discussed in detail, and high-resolution balances have been used for sample preparation. However, high-resolution balances, such as ultra-microbalances, are not general-purpose analytical tools, and many analysts may find them difficult to use, hindering accurate sample preparation for qNMR measurement. In this study, we examined the relationship between the resolution of the balance and the amount of sample weighed during sample preparation. We were able to confirm the accuracy of the assay results for samples weighed on a high-resolution balance, such as an ultra-microbalance. Furthermore, when an appropriate tare and amount of sample were weighed on a given balance, accurate assay results were obtained with another high-resolution balance. Although this is a fundamental result, it offers important evidence that would enhance the versatility of the qNMR method.
Verant, Michelle; Bohuski, Elizabeth A.; Lorch, Jeffrey M.; Blehert, David
2016-01-01
The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America have prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples, and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans depends on effective and standardized methods for extracting nucleic acid from the various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine the quantification capabilities of this assay.
Prévot, V; Tweepenninckx, F; Van Nerom, E; Linden, A; Content, J; Kimpe, A
2007-01-01
Botulism is a rare but serious paralytic illness caused by a nerve toxin produced by the bacterium Clostridium botulinum. The economic, medical and alimentary consequences can be catastrophic in the event of an epizootic. A polymerase chain reaction (PCR)-based assay was developed for the detection of C. botulinum toxigenic strains type C and D in bovine samples. This assay has proved to be less expensive, faster and simpler to use than the mouse bioassay, the current reference method for diagnosis of C. botulinum toxigenic strains. Three pairs of primers were designed: one for global detection of C. botulinum types C and D (primer pair Y), and two strain-specific pairs designed for types C (primer pair VC) and D (primer pair VD). The PCR amplification conditions were optimized and evaluated on 13 bovine and two duck samples that had previously been tested by the mouse bioassay. In order to assess the impact of sample treatment, DNA extracted from crude samples and from three different enrichment broths (TYG, CMM, CMM followed by TYG) was tested. Sensitivity of 100% was observed when samples were enriched for 5 days in CMM followed by 1 day in TYG broth. False-negative results were encountered when C. botulinum was screened for in crude samples. These findings indicate that this PCR assay is a reliable method for the detection of C. botulinum toxigenic strains type C and D in bovine samples, but only after proper enrichment in CMM and TYG broth.
Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system
NASA Astrophysics Data System (ADS)
He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang
2016-08-01
In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.
Dynamically optimized Wang-Landau sampling with adaptive trial moves and modification factors.
Koh, Yang Wei; Lee, Hwee Kuan; Okabe, Yutaka
2013-11-01
The density of states of continuous models is known to span many orders of magnitudes at different energies due to the small volume of phase space near the ground state. Consequently, the traditional Wang-Landau sampling which uses the same trial move for all energies faces difficulties sampling the low-entropic states. We developed an adaptive variant of the Wang-Landau algorithm that very effectively samples the density of states of continuous models across the entire energy range. By extending the acceptance ratio method of Bouzida, Kumar, and Swendsen such that the step size of the trial move and acceptance rate are adapted in an energy-dependent fashion, the random walker efficiently adapts its sampling according to the local phase space structure. The Wang-Landau modification factor is also made energy dependent in accordance with the step size, enhancing the accumulation of the density of states. Numerical simulations show that our proposed method performs much better than the traditional Wang-Landau sampling.
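The idea of coupling an energy-dependent step size to the Wang-Landau walk can be illustrated on a toy one-dimensional model with E(x) = x² on [-2, 2]. The per-bin acceptance-rate nudge below is a crude stand-in for the Bouzida-Kumar-Swendsen update, and the sketch ignores the proposal-asymmetry correction that a production implementation with bin-dependent steps would need:

```python
import math
import random

def wang_landau_adaptive(nbins=10, lnf_final=1e-2, flat=0.7, seed=2):
    """Wang-Landau estimate of ln g(E) for E(x) = x^2 on [-2, 2], with a
    per-energy-bin trial step adapted toward ~50% acceptance. Returns the
    ln g(E) estimate per bin (normalized to the last bin) and the adapted
    per-bin step sizes."""
    energy = lambda x: x * x                       # E ranges over [0, 4]
    ebin = lambda e: min(int(e / 4.0 * nbins), nbins - 1)
    rng = random.Random(seed)
    lng = [0.0] * nbins                            # running ln g(E)
    step = [0.5] * nbins                           # per-bin step sizes
    acc = [[1, 2] for _ in range(nbins)]           # [accepted, tried]
    x, lnf = 0.0, 1.0
    for _ in range(60):                            # capped outer loop
        hist = [0] * nbins
        for _ in range(10_000):
            b = ebin(energy(x))
            y = x + rng.uniform(-step[b], step[b])
            if (-2.0 <= y <= 2.0 and
                    math.log(rng.random() + 1e-300) < lng[b] - lng[ebin(energy(y))]):
                x = y
                acc[b][0] += 1
            acc[b][1] += 1
            rate = acc[b][0] / acc[b][1]           # adapt toward ~50% acceptance
            step[b] = min(1.0, max(0.01, step[b] * (1.05 if rate > 0.5 else 0.95)))
            b = ebin(energy(x))
            lng[b] += lnf                          # accumulate density of states
            hist[b] += 1
        if min(hist) > flat * sum(hist) / nbins:   # histogram flat enough?
            lnf *= 0.5                             # refine modification factor
        if lnf <= lnf_final:
            break
    base = lng[-1]
    return [v - base for v in lng], step
```

For this model g(E) is proportional to E^(-1/2), so the estimated ln g should decrease from the low-energy to the high-energy bins, and the adapted steps end up differing across bins, which is the energy-dependent behavior the abstract advocates.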
Deng, Meihong; Kleinert, Robert; Huang, Hai; He, Qing; Madrahimova, Fotima; Dirsch, Olaf; Dahmen, Uta
2009-01-01
Quantification of liver regeneration is frequently based on determining the 5-bromo-2-deoxyuridine labeling index (BrdU-LI). The quantitative result is influenced by preanalytical, analytical, and postanalytical variables such as the region of interest (ROI). We aimed to present our newly developed and validated automatic computer-based image analysis system (AnalySIS-Macro), and to standardize the selection and sample size of ROIs. Images from BrdU-labeled and immunohistochemically stained liver sections were analyzed conventionally and with the newly developed AnalySIS-Macro and used for validation of the system. Automatic quantification correlated well with the manual counting result (r=0.9976). Validation of our AnalySIS-Macro revealed its high sensitivity (>90%) and specificity. The BrdU-LI ranged from 11% to 57% within the same liver (32.96 ± 11.94%), reflecting the highly variable spatial distribution of hepatocyte proliferation. At least 2000 hepatocytes (10 images at 200× magnification) per lobe were required as sample size for achieving a representative BrdU-LI. Furthermore, the number of pericentral areas should be equal to that of periportal areas. The combination of our AnalySIS-Macro with rules for the selection and size of ROIs represents an accurate, sensitive, specific, and efficient diagnostic tool for the determination of the BrdU-LI and the spatial distribution of proliferating hepatocytes. (J Histochem Cytochem 57:1075–1085, 2009) PMID:19620322
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
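The Fisher-transformation approximation that the study reevaluates can be written down in a few lines. The large-sample variance k/(2(k-1)(n-2)) and the one-sided normal-theory power calculation below are the textbook approximation, shown only as a sketch of the approach the paper critiques, not the paper's exact sample size procedure:

```python
import math
from statistics import NormalDist

def icc_fisher_z(rho, k):
    """Fisher-type transformation of an intraclass correlation for groups
    of size k: Z = 0.5 * ln((1 + (k-1)*rho) / (1 - rho))."""
    return 0.5 * math.log((1 + (k - 1) * rho) / (1 - rho))

def groups_needed(rho0, rho1, k, alpha=0.05, power=0.80):
    """Approximate number of groups n for a one-sided test of
    H0: ICC = rho0 against H1: ICC = rho1 > rho0, assuming the
    large-sample variance Var(Z_hat) ~= k / (2*(k-1)*(n-2))."""
    za = NormalDist().inv_cdf(1 - alpha)
    zb = NormalDist().inv_cdf(power)
    dz = icc_fisher_z(rho1, k) - icc_fisher_z(rho0, k)
    n = 2 + k * (za + zb) ** 2 / (2 * (k - 1) * dz ** 2)
    return math.ceil(n)
```

For example, distinguishing an ICC of 0.8 from a null value of 0.6 with k = 4 raters at 80% power requires on the order of two dozen groups under this approximation; the paper's point is that such normal-theory answers can be unreliable outside limited circumstances, motivating the exact procedures it describes.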
Optimized Design and Analysis of Sparse-Sampling fMRI Experiments
Perrachione, Tyler K.; Ghosh, Satrajit S.
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
Optimizing Sampling Design to Deal with Mist-Net Avoidance in Amazonian Birds and Bats
Marques, João Tiago; Ramos Pereira, Maria J.; Marques, Tiago A.; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M.
2013-01-01
Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although the decline was more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day, it was no longer advantageous to move the nets frequently; in bird surveys this could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this effect. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas. PMID:24058579
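The trade-off described above, moving nets daily versus losing netting time to relocation, can be sketched with a simple expected-captures model (the day-1 capture rate and the geometric decline are illustrative stand-ins for the reported day-to-day declines, not fitted values):

```python
def expected_captures(day1_rate, retention, days, move_nets, move_cost_days=0):
    """Total expected captures over a netting period. Fixed nets decline
    geometrically with net shyness; moved nets reset to the day-1 rate
    but may sacrifice whole netting days to relocation."""
    if move_nets:
        return day1_rate * (days - move_cost_days)
    return sum(day1_rate * retention ** d for d in range(days))

# Illustrative bat-like parameters: 47% drop after day one -> retention 0.53.
fixed = expected_captures(100, 0.53, 4, move_nets=False)
moved = expected_captures(100, 0.53, 4, move_nets=True)
moved_costly = expected_captures(100, 0.53, 4, move_nets=True, move_cost_days=1)
print(round(fixed, 1), moved, moved_costly)
```

Note that this toy model assumes shyness keeps compounding across all four days, so it overstates the benefit of moving nets when relocation costs a day; in the study's data, losing one netting day was enough to cancel the advantage.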
Fillers, W Steven
2004-12-01
Modular approaches to sample management allow staged implementation and progressive expansion of libraries within existing laboratory space. A completely integrated, inert atmosphere system for the storage and processing of a variety of microplate and microtube formats is currently available as an integrated series of individual modules. Liquid handling for reformatting and replication into microplates, plus high-capacity cherry picking, can be performed within the inert environmental envelope to maximize compound integrity. Complete process automation provides on-demand access to samples and improved process control. Expansion of such a system provides a low-risk tactic for implementing a large-scale storage and processing system.
Trojanowski, S.; Ciszek, M.
2009-10-15
In this paper we present an analytical calculation method for determining the sensitivity of a pulsed-field magnetometer working with a first-order gradiometer. Our considerations focus especially on the case of magnetic moment measurements of very small samples. The analytical equations derived in this work allow a quick estimation of the magnetometer's sensitivity and also provide a way to calibrate it using the sample simulation coil method. On the basis of the calculations given in the paper, we designed and constructed a simple homemade magnetometer and performed its sensitivity calibration.
NASA Astrophysics Data System (ADS)
Zawadowicz, M. A.; Del Negro, L. A.
2010-12-01
Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.
Kilambi, Himabindu V.; Manda, Kalyani; Sanivarapu, Hemalatha; Maurya, Vineet K.; Sharma, Rameshwar; Sreelakshmi, Yellamaraju
2016-01-01
An optimized protocol was developed for shotgun proteomics of tomato fruit, which is a recalcitrant tissue due to a high percentage of sugars and secondary metabolites. A number of protein extraction and fractionation techniques were examined for optimal protein extraction from tomato fruits followed by peptide separation on nanoLCMS. Of all evaluated extraction agents, buffer-saturated phenol was the most efficient. In-gel digestion [SDS-PAGE followed by separation on LCMS (GeLCMS)] of the phenol-extracted sample yielded a maximal number of proteins. For in-solution digested samples, fractionation by strong anion exchange chromatography (SAX) also gave similarly high proteome coverage. For shotgun proteomic profiling, optimization of mass spectrometry parameters such as automatic gain control targets (5E+05 for MS, 1E+04 for MS/MS); ion injection times (500 ms for MS, 100 ms for MS/MS); resolution of 30,000; signal threshold of 500; top N-value of 20 and fragmentation by collision-induced dissociation yielded the highest number of proteins. Validation of the above protocol in two tomato cultivars demonstrated its reproducibility, consistency, and robustness with a CV of < 10%. The protocol facilitated the detection of a five-fold higher number of proteins compared to published reports on tomato fruits. The protocol outlined would be useful for high-throughput proteome analysis from tomato fruits and can be applied to other recalcitrant tissues. PMID:27446192
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
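The sample-size constraint the authors face follows from binomial counting statistics: for a synapse class present at proportion p, the standard error of the estimated proportion shrinks only as 1/sqrt(n), so rare classes need very large site counts. A short sketch (the 0.2% figure comes from the abstract; the 50% relative-error target and the doubling search are illustrative choices):

```python
import math

def prop_se(p, n):
    """Standard error of a binomial proportion estimated from n sites."""
    return math.sqrt(p * (1 - p) / n)

def sites_needed(p, rel_err):
    """Sampling sites needed (doubling search) so that the standard error
    is at most rel_err * p, i.e. a given relative precision."""
    n = 1
    while prop_se(p, n) > rel_err * p:
        n *= 2
    return n

# Labeled thalamic synapses at ~0.2% of all synapses in the neuropil:
p = 0.002
rel_se_1000 = prop_se(p, 1000) / p        # relative SE with ~1000 sites
n_req = sites_needed(p, 0.5)              # sites for 50% relative error
print(round(rel_se_1000, 2), n_req)
```

This makes concrete why a sampling scheme tuned for rare events, rather than uniform exhaustive counting, is needed to keep the total electron-microscopy time reasonable.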
Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2011-01-01
Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT
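A heavily simplified, obstacle-free sketch of the FMT* expansion loop, growing a tree of cost-to-arrive values over predrawn samples by always expanding the lowest-cost open node, can be written in a few dozen lines (the sample count, connection radius, and unit-square setup are illustrative; real FMT* adds collision checking and a principled radius schedule):

```python
import math, random

def fmt_sketch(n=200, r=0.3, seed=1):
    """Obstacle-free FMT*-style expansion in the unit square: lazily grow a
    tree of cost-to-arrive values outward from the start over a fixed set
    of randomly drawn samples, always expanding the cheapest open node."""
    rng = random.Random(seed)
    pts = [(0.0, 0.0)] + [(rng.random(), rng.random()) for _ in range(n)] \
          + [(1.0, 1.0)]
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    cost = {0: 0.0}                      # node index -> cost-to-arrive
    open_set, unvisited = {0}, set(range(1, len(pts)))
    while open_set:
        z = min(open_set, key=lambda i: cost[i])
        for x in [i for i in unvisited if dist(pts[i], pts[z]) <= r]:
            # connect x to the open neighbor giving the cheapest arrival
            nbrs = [y for y in open_set if dist(pts[y], pts[x]) <= r]
            y = min(nbrs, key=lambda y: cost[y] + dist(pts[y], pts[x]))
            cost[x] = cost[y] + dist(pts[y], pts[x])
            unvisited.discard(x)
            open_set.add(x)
        open_set.discard(z)
    return cost.get(len(pts) - 1)        # cost-to-arrive at the far corner

c = fmt_sketch()
print(c)
```

With no obstacles the returned cost should sit slightly above the straight-line distance of sqrt(2), which is the sense in which the tree approaches the optimum as the sample count grows.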
Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image
Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.
2014-01-01
It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681
Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.
2008-12-19
The elucidation of critical functional pathways employed by pathogens and hosts during an infectious cycle is both challenging and central to our understanding of infectious diseases. In recent years, mass spectrometry-based proteomics has been used as a powerful tool to identify key pathogenesis-related proteins and pathways. Despite the analytical power of mass spectrometry-based technologies, samples must be appropriately prepared to characterize the functions of interest (e.g. host-response to a pathogen or a pathogen-response to a host). The preparation of these protein samples requires multiple decisions about what aspect of infection is being studied, and it may require the isolation of either host and/or pathogen cellular material.
NASA Technical Reports Server (NTRS)
Hague, D. S.; Merz, A. W.
1976-01-01
Atmospheric sampling has been carried out by flights using an available high-performance supersonic aircraft. Altitude potential of an off-the-shelf F-15 aircraft is examined. It is shown that the standard F-15 has a maximum altitude capability in excess of 100,000 feet for routine flight operation by NASA personnel. This altitude is well in excess of the minimum altitudes which must be achieved for monitoring the possible growth of suspected aerosol contaminants.
Lü, Xiaoshu; Takala, Esa-Pekka; Toppila, Esko; Marjanen, Ykä; Kaila-Kangas, Leena; Lu, Tao
2016-12-01
Exposure to whole-body vibration (WBV) presents an occupational health risk, and several safety standards obligate measuring WBV. The high cost of direct measurements in large epidemiological studies raises the question of the optimal sampling for estimating WBV exposures, given the large variation in exposure levels at real worksites. This paper presents a new approach to addressing this problem. Daily exposure to WBV was recorded for 9-24 days among 48 all-terrain vehicle drivers. Four data-sets based on root-mean-squared recordings were obtained from the measurements. The data were modelled using a semi-variogram with spectrum analysis, and the optimal sampling scheme was derived. The optimal sampling interval was 140 min. The result was verified and validated in terms of its accuracy and statistical power. Recordings of two to three hours are probably needed to get a sufficiently unbiased daily WBV exposure estimate at real worksites. The developed model is general enough to be applicable to other cumulative exposures or biosignals. Practitioner Summary: Exposure to whole-body vibration (WBV) presents an occupational health risk, and safety standards obligate measuring WBV. However, direct measurements can be expensive. This paper presents a new approach to addressing this problem. The developed model is general enough to be applicable to other cumulative exposures or biosignals.
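The semi-variogram machinery used above can be illustrated on a synthetic exposure record: estimate the empirical semivariance at increasing lags and take the lag at which it approaches its sill as the spacing beyond which recordings are nearly independent (the AR(1) signal and the 95% sill threshold are illustrative assumptions, not the study's data or method details):

```python
import random

def semivariogram(series, max_lag):
    """Empirical semivariance gamma(h) = mean((x[t+h] - x[t])^2) / 2."""
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [(series[t + h] - series[t]) ** 2
                 for t in range(len(series) - h)]
        gamma[h] = 0.5 * sum(diffs) / len(diffs)
    return gamma

def practical_range(gamma, sill_frac=0.95):
    """Smallest lag at which semivariance reaches a fraction of its sill;
    beyond this lag, further recordings add nearly independent information."""
    sill = max(gamma.values())
    return min(h for h, g in gamma.items() if g >= sill_frac * sill)

# Synthetic exposure record: AR(1) noise with a correlation time of ~20 steps.
rng = random.Random(0)
x, series = 0.0, []
for _ in range(5000):
    x = 0.95 * x + rng.gauss(0, 1)
    series.append(x)
g = semivariogram(series, 60)
pr = practical_range(g)
print(pr)
```

The rising-then-flattening shape of gamma(h) is what lets a short recording schedule stand in for continuous measurement.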
Optimal sampling efficiency in Monte Carlo simulation with an approximate potential.
Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam
2009-04-28
Building on the work of Iftimie et al. [J. Chem. Phys. 113, 4852 (2000)] and Gelb [J. Chem. Phys. 118, 7747 (2003)], Boltzmann sampling of an approximate potential (the "reference" system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the end points of the chain, the energy is evaluated at a more accurate level (the "full" system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory potentials are discussed.
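The composite-move idea can be sketched for a toy one-particle system in the canonical ensemble: an inner Metropolis chain samples a cheap reference potential, and the whole excursion is then accepted against the full potential with the modified criterion min(1, exp(-beta * (dU_full - dU_ref))). The harmonic/anharmonic pair, step sizes, and chain lengths below are illustrative choices, and the paper's isothermal-isobaric bookkeeping and reference-state tuning are omitted:

```python
import math, random

def composite_mc(n_comp=2000, k_ref=10, beta=1.0, seed=0):
    """Nested Metropolis sketch: an inner chain samples the cheap reference
    potential; the composite move is then accepted against the accurate
    full potential with probability min(1, exp(-beta*(dU_full - dU_ref)))."""
    rng = random.Random(seed)
    u_ref = lambda x: 0.5 * x * x                  # harmonic reference
    u_full = lambda x: 0.5 * x * x + 0.1 * x ** 4  # anharmonic full system
    x, samples, accepted = 0.0, [], 0
    for _ in range(n_comp):
        y = x
        for _ in range(k_ref):                     # reference-only random walk
            y_try = y + rng.uniform(-0.5, 0.5)
            if rng.random() < math.exp(-beta * (u_ref(y_try) - u_ref(y))):
                y = y_try
        d = (u_full(y) - u_full(x)) - (u_ref(y) - u_ref(x))
        if rng.random() < math.exp(-beta * d):     # full energy: endpoints only
            x = y
            accepted += 1
        samples.append(x)
    return samples, accepted / n_comp

samples, acc = composite_mc()
mean_x2 = sum(s * s for s in samples) / len(samples)
print(round(acc, 2), round(mean_x2, 2))
```

Only two full-potential evaluations per composite move are needed, which is the source of the savings when the full potential is an ab initio or density functional calculation.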
Anderson, R.; Christensen, C.; Horowitz, S.
2006-08-01
An optimization method based on the evaluation of a broad range of different combinations of specific energy efficiency and renewable-energy options is used to determine the least-cost pathway to the development of new homes with zero peak cooling demand. The optimization approach conducts a sequential search of a large number of possible option combinations and uses the most cost-effective alternatives to generate a least-cost curve to achieve home-performance levels ranging from a Title 24-compliant home to a home that uses zero net source energy on an annual basis. By evaluating peak cooling load reductions on the least-cost curve, it is then possible to determine the most cost-effective combination of energy efficiency and renewable-energy options that both maximize annual energy savings and minimize peak-cooling demand.
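The sequential-search idea, repeatedly applying whichever remaining option buys the most energy savings per dollar and recording the resulting (cost, energy) points, can be sketched as a greedy least-cost-curve builder (the option names, costs, and savings below are invented for illustration and are not the study's inputs):

```python
def least_cost_curve(options, base_energy):
    """Greedy sequential search: repeatedly apply the remaining option with
    the best savings-to-cost ratio, tracing out a least-cost curve of
    (cumulative cost, remaining annual energy use) points."""
    curve = [(0.0, base_energy)]
    remaining = list(options)
    cost, energy = 0.0, base_energy
    while remaining:
        best = max(remaining, key=lambda o: o["savings"] / o["cost"])
        remaining.remove(best)
        cost += best["cost"]
        energy -= best["savings"]
        curve.append((cost, energy))
    return curve

# Hypothetical option set (names and numbers are illustrative only).
options = [
    {"name": "attic insulation", "cost": 1500, "savings": 900},
    {"name": "high-SEER AC",     "cost": 4000, "savings": 1200},
    {"name": "PV array",         "cost": 9000, "savings": 2600},
    {"name": "low-e windows",    "cost": 3000, "savings": 500},
]
curve = least_cost_curve(options, base_energy=12000)
print(curve)
```

Peak-cooling impact can then be evaluated at each point along the curve to find the combination that both minimizes cost and reaches zero peak cooling demand.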
Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.
2010-01-01
Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998
Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland
NASA Astrophysics Data System (ADS)
Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan
2014-05-01
Soil moisture (SM) exhibits a high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil and the vegetation characteristics. This large variability does not allow reliable estimation of SM in the surface layer from ground point measurements, especially at large spatial scales. Remote sensing measurements allow the spatial distribution of SM in the surface layer of the Earth to be estimated better than point measurements do; however, they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, abundances and distributions of measuring points on the scales of an arable field, a wetland and a commune (areas of 0.01, 1 and 140 km2, respectively), under different SM conditions. Mean values of SM were only weakly sensitive to changes in the number and arrangement of sampling points, whereas parameters describing the dispersion responded more significantly. Spatial analysis showed autocorrelations of the SM whose lengths depended on the number and distribution of points within the adopted grids. Directional analysis revealed a differentiated anisotropy of SM for different grids and numbers of measuring points. It can therefore be concluded that both the number of samples and their layout over the experimental area are reflected in the parameters characterizing the SM distribution. This suggests the need to use at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points. This is due to the standard error and the range of spatial variability, which change little as the number of samples increases above this figure. Gravimetric method
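The diminishing returns past roughly 20 points noted above reflect the 1/sqrt(n) behavior of the standard error of the mean. A sketch on a synthetic patchy SM field (the gradient-plus-noise field, point counts, and trial numbers are illustrative assumptions, not the study's data):

```python
import math, random

def sample_se(field, n, trials=200, seed=0):
    """Average standard error of the mean-SM estimate from n random points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = rng.sample(field, n)
        m = sum(pts) / n
        var = sum((p - m) ** 2 for p in pts) / (n - 1)
        total += math.sqrt(var / n)
    return total / trials

# Synthetic patchy SM field (% vol): linear wetland-to-field gradient + noise.
rng = random.Random(42)
field = [20 + 10 * (i / 999) + rng.gauss(0, 3) for i in range(1000)]
se = {n: sample_se(field, n) for n in (5, 10, 20, 40)}
print({n: round(s, 2) for n, s in se.items()})
```

The drop in standard error from 5 to 10 points is much larger than the drop from 20 to 40, which is the shape of argument behind a minimum sample count.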
Vinks, A A; Mouton, J W; Touw, D J; Heijerman, H G; Danhof, M; Bakker, W
1996-01-01
Postinfusion data obtained from 17 patients with cystic fibrosis participating in two clinical trials were used to develop population models for ceftazidime pharmacokinetics during continuous infusion. Determinant (D)-optimal sampling strategy (OSS) was used to evaluate the benefits of merging four maximally informative sampling times with population modeling. Full and sparse D-optimal sampling data sets were analyzed with the nonparametric expectation maximization (NPEM) algorithm and compared with the model obtained by the traditional standard two-stage approach. Individual pharmacokinetic parameter estimates were calculated by weighted nonlinear least-squares regression and by maximum a posteriori probability Bayesian estimator. Individual parameter estimates obtained with four D-optimally timed serum samples (OSS4) showed excellent correlation with parameter estimates obtained by using full data sets. The parameters of interest, clearance and volume of distribution, showed excellent agreement (R2 = 0.89 and R2 = 0.86). The ceftazidime population models were described as two-compartment kslope models, relating elimination constants to renal function. The NPEM-OSS4 model was described by the equations kel = 0.06516 + (0.00708 · CLCR) and V1 = 0.1773 +/- 0.0406 liter/kg, where CLCR is creatinine clearance in milliliters per minute per 1.73 m2, V1 is the volume of distribution of the central compartment, and kel is the elimination rate constant. Predictive performance evaluation for 31 patients with data which were not part of the model data sets showed that the NPEM-ALL model performed best, with significantly better precision than that of the standard two-stage model (P < 0.001). Predictions with the NPEM-OSS4 model were as precise as those with the NPEM-ALL model but slightly biased (-2.2 mg/liter; P < 0.01). D-optimal monitoring strategies coupled with population modeling result in useful and cost-effective population models and will be of advantage in clinical
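Under the population model quoted above, a steady-state concentration during continuous infusion follows from Css = R0 / CL with CL = kel · V1. A minimal sketch (the patient covariates and infusion rate are hypothetical, and this is an illustration of the model's arithmetic, not dosing advice):

```python
def ceftazidime_css(clcr, infusion_mg_per_h, weight_kg):
    """Steady-state concentration (mg/liter) during continuous infusion,
    using the NPEM population means from the abstract:
    kel (1/h) = 0.06516 + 0.00708 * CLCR; V1 = 0.1773 liter/kg."""
    kel = 0.06516 + 0.00708 * clcr       # CLCR in ml/min/1.73 m2
    v1 = 0.1773 * weight_kg              # liters
    clearance = kel * v1                 # liters/h
    return infusion_mg_per_h / clearance

# Hypothetical patient: CLCR 100 ml/min/1.73 m2, 70 kg, 200 mg/h infusion.
print(round(ceftazidime_css(100, 200, 70), 1))
```

The kslope structure is visible here: higher creatinine clearance raises kel, hence clearance, hence lowers the steady-state concentration for a fixed infusion rate.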
NASA Astrophysics Data System (ADS)
Sasaki, T.; Yoshida, N.; Takahashi, M.; Tomita, M.
2008-12-01
In order to determine an appropriate incident angle of low-energy (350-eV) oxygen ion beam for achieving the highest sputtering rate without degradation of depth resolution in SIMS analysis, a delta-doped sample was analyzed with incident angles from 0° to 60° without oxygen bleeding. As a result, 45° incidence was found to be the best analytical condition, and it was confirmed that surface roughness did not occur on the sputtered surface at 100-nm depth by using AFM. By applying the optimized incident angle, sputtering rate becomes more than twice as high as that of the normal incident condition.
Csikai, J; Dóczi, R
2009-01-01
The advantages and limitations of epithermal neutrons in the qualification of hydrocarbons via their H contents and C/H atomic ratios have been investigated systematically. The sensitivity of this method and the dimensions of the interrogated regions were determined for various types of hydrogenous samples. Results clearly demonstrate the advantages of direct neutron detection (e.g., by BF(3) counters) as compared to the foil activation method, in addition to the advantage of the harder spectral shape of Pu-Be neutrons compared to that from a (252)Cf source.
Optimization of clamped beam geometry for fracture toughness testing of micron-scale samples
NASA Astrophysics Data System (ADS)
Nagamani Jaya, B.; Bhowmick, Sanjit; Syed Asif, S. A.; Warren, Oden L.; Jayaram, Vikram
2015-06-01
Fracture toughness measurements at the small scale have gained prominence over the years due to the continuing miniaturization of structural systems. Measurements carried out on bulk materials cannot be extrapolated to smaller length scales either due to the complexity of the microstructure or due to the size and geometric effect. Many new geometries have been proposed for fracture property measurements at small-length scales depending on the material behaviour and the type of device used in service. In situ testing provides the necessary environment to observe fracture at these length scales so as to determine the actual failure mechanism in these systems. In this paper, several improvements are incorporated to a previously proposed geometry of bending a doubly clamped beam for fracture toughness measurements. Both monotonic and cyclic loading conditions have been imposed on the beam to study R-curve and fatigue effects. In addition to the advantages that in situ SEM-based testing offers in such tests, FEM has been used as a simulation tool to replace cumbersome and expensive experiments to optimize the geometry. A description of all the improvements made to this specific geometry of clamped beam bending to make a variety of fracture property measurements is given in this paper.
Optimal media for use in air sampling to detect cultivable bacteria and fungi in the pharmacy.
Weissfeld, Alice S; Joseph, Riya Augustin; Le, Theresa V; Trevino, Ernest A; Schaeffer, M Frances; Vance, Paula H
2013-10-01
Current guidelines for air sampling for bacteria and fungi in compounding pharmacies require the use of a medium for each type of organism. U.S. Pharmacopeia (USP) chapter <797> (http://www.pbm.va.gov/linksotherresources/docs/USP797PharmaceuticalCompoundingSterileCompounding.pdf) calls for tryptic soy agar with polysorbate and lecithin (TSApl) for bacteria and malt extract agar (MEA) for fungi. In contrast, the Controlled Environment Testing Association (CETA), the professional organization for individuals who certify hoods and clean rooms, states in its 2012 certification application guide (http://www.cetainternational.org/reference/CAG-009v3.pdf?sid=1267) that a single-plate method is acceptable, implying that it is not always necessary to use an additional medium specifically for fungi. In this study, we reviewed 5.5 years of data from our laboratory to determine the utility of TSApl versus yeast malt extract agar (YMEA) for the isolation of fungi. Our findings, from 2,073 air samples obtained from compounding pharmacies, demonstrated that the YMEA yielded >2.5 times more fungal isolates than TSApl.
Design Of A Sorbent/desorbent Unit For Sample Pre-treatment Optimized For QMB Gas Sensors
Pennazza, G.; Cristina, S.; Santonico, M.; Martinelli, E.; Di Natale, C.; D'Amico, A.; Paolesse, R.
2009-05-23
Sample pre-treatment is a typical procedure in analytical chemistry aimed at improving the performance of analytical systems. In the case of gas sensors, sample pre-treatment systems are devised to overcome sensor limitations in terms of selectivity and sensitivity. For this purpose, systems based on adsorption and desorption processes driven by temperature conditioning have been described. The involvement of large temperature ranges may pose problems when QMB gas sensors are used. In this work, a study of such influences on the overall sensing properties of QMB sensors is presented. The results allowed the design of a pre-treatment unit coupled with a QMB gas sensor array optimized to operate in a suitable temperature range. The performance of the system is illustrated by the partial separation of water vapor in a gas mixture and by a substantial improvement of the signal-to-noise ratio.
Amaro, Rosa; Murillo, Miguel; González, Zurima; Escalona, Andrés; Hernández, Luís
2009-01-01
The treatment of wheat samples was optimized before the determination of phytic acid by high-performance liquid chromatography with refractive index detection. Drying by lyophilization and oven drying were studied; drying by lyophilization gave better results, confirming that this step is critical in preventing significant loss of analyte. In the extraction step, washing of the residue and collection of this water before retention of the phytates in the NH2 Sep-Pak cartridge were important. The retention of phytates in the NH2 Sep-Pak cartridge and elimination of the HCl did not produce significant loss (P = 0.05) in the phytic acid content of the sample. Recoveries of phytic acid averaged 91%, which is a substantial improvement with respect to values reported by others using this methodology.
Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M
2017-03-08
Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0-2 points/question. A combinations algorithm was developed to assess street segments' representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10-47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172-475 (Mean = 352.3 ± 63.6). Walk Scores® ranged 0-91 (Mean = 46.7 ± 26.3). Street segment combinations' correlation coefficients ranged 0.75-1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating the impact of specific built environment features on health behaviors and outcomes.
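The correlation step reported above can be sketched with a from-scratch Spearman computation; the quality and Walk Score values below are hypothetical illustrations, not the study's data:

```python
def ranks(xs):
    """Average ranks (tied values share the mean of their positions)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tied run
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical street-segment quality scores vs. Walk Score values
quality = [29, 35, 18, 42, 31, 24]
walk = [44, 61, 20, 85, 50, 38]
rho = spearman(quality, walk)
```

The rank step makes the statistic robust to monotone but non-linear relationships, which suits ordinal audit scores.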
Comelli, Raúl N; Seluy, Lisandro G; Isla, Miguel A
2016-01-25
In bioethanol production processes, the media composition has an impact on product concentration, yields and the overall process economics. The main purpose of this research was to develop a low-cost mineral-based supplement for successful alcoholic fermentation in an attempt to provide an economically feasible alternative to produce bioethanol from novel sources, for example, sugary industrial wastewaters. Statistical experimental designs were used to select essential nutrients for yeast fermentation, and their optimal concentrations were estimated by Response Surface Methodology. Fermentations were performed on synthetic media inoculated with 2.0 g L(-1) of yeast, and the evolution of biomass, sugar, ethanol, CO2 and glycerol was monitored over time. A mix of salts [10.6 g L(-1) (NH4)2HPO4; 6.4 g L(-1) MgSO4·7H2O and 7.5 mg L(-1) ZnSO4·7H2O] was found to be optimal. It led to the complete fermentation of the sugars in less than 12 h with an average ethanol yield of 0.42 g ethanol/g sugar. A general C-balance indicated that no carbonaceous compounds other than biomass, ethanol, CO2 or glycerol were produced in significant amounts in the fermentation process. Similar results were obtained when soft drink wastewaters were tested to evaluate the potential industrial application of this supplement. The ethanol yields were very close to those obtained when yeast extract was used as the supplement, but the optimized mineral-based medium is six times cheaper, which favorably impacts the process economics and makes this supplement more attractive from an industrial viewpoint.
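The reported yield of 0.42 g ethanol/g sugar can be put in context against the stoichiometric maximum for glucose fermentation; this is a minimal sketch using standard molar masses, not a calculation from the paper:

```python
# Glucose fermentation stoichiometry: C6H12O6 -> 2 C2H5OH + 2 CO2
M_GLUCOSE = 180.16   # g/mol
M_ETHANOL = 46.07    # g/mol

Y_MAX = 2 * M_ETHANOL / M_GLUCOSE   # theoretical maximum, ~0.511 g/g

def yield_efficiency(y_observed):
    """Observed ethanol yield as a fraction of the stoichiometric maximum."""
    return y_observed / Y_MAX

# The reported average yield of 0.42 g ethanol / g sugar:
eff = yield_efficiency(0.42)  # roughly 82% of theoretical
```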
Defining Tiger Parenting in Chinese Americans.
Kim, Su Yeong
2013-09-01
"Tiger" parenting, as described by Amy Chua [2011], has instigated scholarly discourse on this phenomenon and its possible effects on families. Our eight-year longitudinal study, published in the Asian American Journal of Psychology [Kim, Wang, Orozco-Lapray, Shen, & Murtuza, 2013b], demonstrates that tiger parenting is not a common parenting profile in a sample of 444 Chinese American families. Tiger parenting also does not relate to superior academic performance in children. In fact, the best developmental outcomes were found among children of supportive parents. We examine the complexities around defining tiger parenting by reviewing classical literature on parenting styles and scholarship on Asian American parenting, along with Amy Chua's own description of her parenting method, to develop, define, and categorize variability in parenting in a sample of Chinese American families. We also provide evidence that supportive parenting is important for the optimal development of Chinese American adolescents.
Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R
2008-08-01
The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. Application of MAES to this group of contaminants in aquaculture samples, which had not previously been attempted for this type of analyte, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g(-1) (except for BB-15, which was 1.43 ng g(-1)). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of an appropriate certified reference material (CRM), WMF-01.
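Desirability-function optimization, as used above, maps each response onto [0, 1] and combines them by a geometric mean. A minimal sketch of the idea (the responses and limits below are hypothetical, not the paper's settings):

```python
def desirability_larger(y, lo, hi):
    """Linear 'larger-is-better' desirability, clipped to [0, 1]."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def overall_desirability(ds):
    """Geometric mean of individual desirabilities (zero if any is zero)."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical responses for one extraction setting: recovery (%) and a
# normalized extraction-speed score, each mapped onto [0, 1] and combined.
d_recovery = desirability_larger(92.0, 70.0, 100.0)
d_speed = desirability_larger(0.8, 0.0, 1.0)
D = overall_desirability([d_recovery, d_speed])
```

The geometric mean is the usual choice because a single unacceptable response (desirability 0) vetoes the whole setting.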
Chen, DI-WEN
2001-11-21
Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results of the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information
Smiley Evans, Tierra; Barry, Peter A.; Gilardi, Kirsten V.; Goldstein, Tracey; Deere, Jesse D.; Fike, Joseph; Yee, JoAnn; Ssebide, Benard J; Karmacharya, Dibesh; Cranfield, Michael R.; Wolking, David; Smith, Brett; Mazet, Jonna A. K.; Johnson, Christine K.
2015-01-01
Free-ranging nonhuman primates are frequent sources of zoonotic pathogens due to their physiologic similarity and in many tropical regions, close contact with humans. Many high-risk disease transmission interfaces have not been monitored for zoonotic pathogens due to difficulties inherent to invasive sampling of free-ranging wildlife. Non-invasive surveillance of nonhuman primates for pathogens with high potential for spillover into humans is therefore critical for understanding disease ecology of existing zoonotic pathogen burdens and identifying communities where zoonotic diseases are likely to emerge in the future. We developed a non-invasive oral sampling technique using ropes distributed to nonhuman primates to target viruses shed in the oral cavity, which through bite wounds and discarded food, could be transmitted to people. Optimization was performed by testing paired rope and oral swabs from laboratory colony rhesus macaques for rhesus cytomegalovirus (RhCMV) and simian foamy virus (SFV) and implementing the technique with free-ranging terrestrial and arboreal nonhuman primate species in Uganda and Nepal. Both ubiquitous DNA and RNA viruses, RhCMV and SFV, were detected in oral samples collected from ropes distributed to laboratory colony macaques and SFV was detected in free-ranging macaques and olive baboons. Our study describes a technique that can be used for disease surveillance in free-ranging nonhuman primates and, potentially, other wildlife species when invasive sampling techniques may not be feasible. PMID:26046911
Zhumadilov, Kassym; Ivannikov, Alexander; Skvortsov, Valeriy; Stepanenko, Valeriy; Zhumadilov, Zhaxybay; Endo, Satoru; Tanaka, Kenichi; Hoshi, Masaharu
2005-12-01
In order to improve the accuracy of the tooth enamel EPR dosimetry method, EPR spectra recording conditions were optimized. The uncertainty of dose determination was obtained as the mean square deviation of doses, determined with the use of a spectra deconvolution program, from the nominal doses for ten enamel samples irradiated in the range from 0 to 500 mGy. The spectra were recorded at different microwave powers and accumulation times. It was shown that minimal uncertainty is achieved at a microwave power of about 2 mW for the spectrometer used (JEOL JES-FA100). It was found that a limit of the accumulation time exists beyond which uncertainty reduction is ineffective. For a fixed total measurement time, reduced uncertainty is obtained by averaging the experimental doses determined from spectra recorded after intermittent sample shaking and sample tube rotation, rather than from one spectrum recorded at a longer accumulation time. The effect of sample mass on the spectrometer's sensitivity was investigated in order to find out how to make appropriate corrections.
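The uncertainty metric described above (mean square deviation of reconstructed doses from nominal doses) reduces to a root-mean-square calculation; the dose values below are hypothetical, not the study's measurements:

```python
def dose_uncertainty(determined, nominal):
    """Root of the mean square deviation of reconstructed doses from
    the nominal doses, in the same units as the inputs (e.g. mGy)."""
    n = len(determined)
    return (sum((d - t) ** 2 for d, t in zip(determined, nominal)) / n) ** 0.5

# Hypothetical EPR dose reconstructions for samples irradiated at 0-500 mGy
nominal = [0, 100, 200, 300, 400, 500]
determined = [12, 88, 215, 290, 418, 487]
u = dose_uncertainty(determined, nominal)
```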
Sanz, C; Ansorena, D; Bello, J; Cid, C
2001-03-01
Equilibration time and temperature were the factors studied to choose the best conditions for analyzing volatiles in roasted ground Arabica coffee by a static headspace sampling extraction method. Three temperatures of equilibration were studied: 60, 80, and 90 degrees C. A larger quantity of volatile compounds was extracted at 90 degrees C than at 80 or 60 degrees C, although the same qualitative profile was found for each. The extraction of the volatile compounds was studied at seven different equilibration times: 30, 45, 60, 80, 100, 120, and 150 min. The best time of equilibration for headspace analysis of roasted ground Arabica coffee should be selected depending on the chemical class or compound studied. One hundred and twenty-two volatile compounds were identified, including 26 furans, 20 ketones, 20 pyrazines, 9 alcohols, 9 aldehydes, 8 esters, 6 pyrroles, 6 thiophenes, 4 sulfur compounds, 3 benzenic compounds, 2 phenolic compounds, 2 pyridines, 2 thiazoles, 1 oxazole, 1 lactone, 1 alkane, 1 alkene, and 1 acid.
NASA Astrophysics Data System (ADS)
Taniai, G.; Oda, H.; Kurihara, M.; Hashimoto, S.
2010-12-01
Halogenated volatile organic compounds (HVOCs) produced in the marine environment are thought to play a key role in atmospheric reactions, particularly those involved in the global radiation budget and the depletion of tropospheric and stratospheric ozone. To evaluate HVOC concentrations in various natural samples, we developed an automated dynamic headspace extraction method for the determination of 15 HVOCs: chloromethane, bromomethane, bromoethane, iodomethane, iodoethane, bromochloromethane, 1-iodopropane, 2-iodopropane, dibromomethane, bromodichloromethane, chloroiodomethane, chlorodibromomethane, bromoiodomethane, tribromomethane, and diiodomethane. A dynamic headspace system (GERSTEL DHS) was used to purge the gas phase above samples and to trap HVOCs from the purge gas on an adsorbent column. We measured the HVOC concentrations on the adsorbent column with a gas chromatograph (Agilent 6890N)-mass spectrometer (Agilent 5975C). In the dynamic headspace system, a glass tube containing Tenax TA or Tenax GR was used as the adsorbent column for the collection of the 15 HVOCs. The parameters for purge-and-trap extraction, such as purge flow rate (mL/min), purge volume (mL), incubation time (min), and agitator speed (rpm), were optimized. The detection limits of HVOCs in water samples were 1270 pM (chloromethane), 103 pM (bromomethane), 42.1 pM (iodomethane), and 1.4 to 10.2 pM (other HVOCs). The repeatability (relative standard deviation) for the 15 HVOCs was < 9% except for chloromethane (16.2%) and bromomethane (11.0%). On the basis of measurements of various samples, we concluded that this analytical method is useful for the determination of a wide range of HVOCs, with boiling points between -24°C (chloromethane) and 181°C (diiodomethane), in liquid or viscous samples.
Eblé, P L; Orsel, K; van Hemert-Kluitenberg, F; Dekker, A
2015-05-15
We wanted to quantify transmission of FMDV Asia-1 in sheep and to evaluate which samples would be optimal for detection of an FMDV infection in sheep. For this, we used 6 groups of 4 non-vaccinated and 6 groups of 4 vaccinated sheep. In each group 2 sheep were inoculated and contact-exposed to 2 pen-mates. Viral excretion was detected for a long period (>21 days post-inoculation, dpi). Transmission of FMDV occurred in the non-vaccinated groups (R0=1.14) but only in the first week after infection, when virus shedding was highest. In the vaccinated groups no transmission occurred (Rv<1, p=0.013). The viral excretion of the vaccinated sheep and the viral load in their pens were significantly lower than those of the non-vaccinated sheep. FMDV could be detected in plasma samples from 12 of 17 infected non-vaccinated sheep, for an average of 2.1 days, but in none of the 10 infected vaccinated sheep. In contrast, FMDV could readily be isolated from mouth swab samples from both non-vaccinated and vaccinated infected sheep starting at 1-3 dpi, and in 16 of 27 infected sheep up to 21 dpi. Serologically, after 3-4 weeks, all but one of the infected sheep were detected using the NS-ELISA. We conclude that vaccination of a sheep population would likely stop an epidemic of FMDV and that the use of mouth swab samples would be a good alternative (instead of using vesicular lesions or blood samples) for detecting an FMD infection in a sheep population both early and late after infection.
Zhou, Fuqun; Zhang, Aining
2016-01-01
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests’ features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152
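Random Forests' variable importance itself is not reproduced here; as a minimal, model-agnostic stand-in, the idea can be sketched with permutation importance (the mean drop in accuracy when one feature column is shuffled) on toy data — all names and numbers below are hypothetical:

```python
import random

def permutation_importance(model, X, y, feature, trials=30, seed=0):
    """Mean drop in accuracy when one feature column is shuffled:
    a simple stand-in for tree-ensemble variable importance."""
    rng = random.Random(seed)
    base = sum(model(row) == t for row, t in zip(X, y)) / len(y)
    drop = 0.0
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
        acc = sum(model(row) == t for row, t in zip(Xp, y)) / len(y)
        drop += base - acc
    return drop / trials

# Toy example: the "model" only uses feature 0, so shuffling feature 1
# should show (near-)zero importance.
X = [[i / 9.0, (i * 7) % 10 / 10.0] for i in range(10)]
y = [row[0] > 0.5 for row in X]
model = lambda row: row[0] > 0.5
imp0 = permutation_importance(model, X, y, 0)
imp1 = permutation_importance(model, X, y, 1)
```

Ranking features this way, then keeping only the top-ranked subset, is the spirit of the band-selection step described in the abstract.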
Wang, Man-Juing; Tsai, Chih-Hsin; Hsu, Wei-Ya; Liu, Ju-Tsung; Lin, Cheng-Huang
2009-02-01
The optimal separation conditions and online sample concentration for N,N-dimethyltryptamine (DMT) and related compounds, including alpha-methyltryptamine (AMT), 5-methoxy-AMT (5-MeO-AMT), N,N-diethyltryptamine (DET), N,N-dipropyltryptamine (DPT), N,N-dibutyltryptamine (DBT), N,N-diisopropyltryptamine (DiPT), 5-methoxy-DMT (5-MeO-DMT), and 5-methoxy-N,N-DiPT (5-MeO-DiPT), using micellar EKC (MEKC) with UV-absorbance detection are described. The LODs (S/N = 3) for MEKC ranged from 1.0 to 1.8 μg/mL. Use of online sample concentration methods, including sweeping-MEKC and cation-selective exhaustive injection-sweep-MEKC (CSEI-sweep-MEKC), improved the LODs to 2.2-8.0 ng/mL and 1.3-2.7 ng/mL, respectively. In addition, the order of migration of the nine tryptamines was investigated. A urine sample, obtained by spiking urine collected from a human volunteer with DMT, was also successfully examined.
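The S/N = 3 convention used for the LODs above is commonly computed from the blank noise and calibration slope; a minimal sketch with hypothetical numbers (not the paper's calibration):

```python
def limit_of_detection(sd_blank, slope, k=3.0):
    """LOD from the S/N = k convention: k times the blank (noise)
    standard deviation divided by the calibration slope."""
    return k * sd_blank / slope

# Hypothetical calibration: noise sd of 0.5 mAU, slope of 1.2 mAU per ng/mL
lod = limit_of_detection(0.5, 1.2)   # in ng/mL
```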
Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.
2014-01-01
Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e. equivalent to ~ 8 ng of amino sugar carbon. Our results obtained from δ13C analysis of amino sugars in selected marine sediment samples showed that muramic acid had isotopic imprints from indigenous bacterial activities, whereas glucosamine and galactosamine were mainly derived from organic detritus. The analysis of stable carbon isotopic compositions of amino sugars opens a promising window for the investigation of microbial metabolisms in marine sediments and the deep marine biosphere.
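The δ13C values discussed above are conventionally expressed in per mil relative to the VPDB standard; a minimal sketch of that conversion (the VPDB ratio used here is an assumed literature value):

```python
R_VPDB = 0.0111802  # assumed 13C/12C ratio of the VPDB reference standard

def delta13C(r_sample):
    """delta-13C in per mil (‰) relative to VPDB, from a 13C/12C ratio."""
    return (r_sample / R_VPDB - 1.0) * 1000.0
```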
Zumla, Alimuddin; Saeed, Abdulaziz Bin; Alotaibi, Badriah; Yezli, Saber; Dar, Osman; Bieh, Kingsley; Bates, Matthew; Tayeb, Tamara; Mwaba, Peter; Shafi, Shuja; McCloskey, Brian; Petersen, Eskild; Azhar, Esam I
2016-06-01
Tuberculosis (TB) is now the most common infectious cause of death worldwide. In 2014, an estimated 9.6 million people developed active TB. There were an estimated three million people with active TB including 360000 with multidrug-resistant TB (MDR-TB) who were not diagnosed, and such people continue to fuel TB transmission in the community. Accurate data on the actual burden of TB and the transmission risk associated with mass gatherings are scarce and unreliable due to the small numbers studied and methodological issues. Every year, an estimated 10 million pilgrims from 184 countries travel to the Kingdom of Saudi Arabia (KSA) to perform the Hajj and Umrah pilgrimages. A large majority of pilgrims come from high TB burden and MDR-TB endemic areas and thus many may have undiagnosed active TB, sub-clinical TB, and latent TB infection. The Hajj pilgrimage provides unique opportunities for the KSA and the 184 countries from which pilgrims originate, to conduct high quality priority research studies on TB under the remit of the Global Centre for Mass Gatherings Medicine. Research opportunities are discussed, including those related to the definition of the TB burden, transmission risk, and the optimal surveillance, prevention, and control measures at the annual Hajj pilgrimage. The associated data are required to develop international recommendations and guidelines for TB management and control at mass gathering events.
Kim, Tae-Won; Lee, Je-Hwan; Lee, Jung-Hee; Ahn, Jin-Hee; Kang, Yoon-Koo; Lee, Kyoo-Hyung; Yu, Chang-Sik; Kim, Jong-Hoon; Ahn, Seung-Do; Kim, Woo-Kun; Kim, Jin-Cheon; Lee, Jung-Shin
2011-11-15
Purpose: To determine the optimal sequence of postoperative adjuvant chemotherapy and radiotherapy in patients with Stage II or III rectal cancer. Methods and Materials: A total of 308 patients were randomized to early (n = 155) or late (n = 153) radiotherapy (RT). Treatment included eight cycles of chemotherapy, consisting of fluorouracil 375 mg/m²/day and leucovorin 20 mg/m²/day, at 4-week intervals, and pelvic radiotherapy of 45 Gy in 25 fractions. Radiotherapy started on Day 1 of the first chemotherapy cycle in the early RT arm and on Day 1 of the third chemotherapy cycle in the late RT arm. Results: At a median follow-up of 121 months for surviving patients, disease-free survival (DFS) at 10 years was not statistically significantly different between the early and late RT arms (71% vs. 63%; p = 0.162). A total of 36 patients (26.7%) in the early RT arm and 49 (35.3%) in the late RT arm experienced recurrence (p = 0.151). Overall survival did not differ significantly between the two treatment groups. However, in patients who underwent abdominoperineal resection, the DFS rate at 10 years was significantly greater in the early RT arm than in the late RT arm (63% vs. 40%; p = 0.043). Conclusions: After the long-term follow-up duration, this study failed to show a statistically significant DFS advantage for early radiotherapy with concurrent chemotherapy after resection of Stage II and III rectal cancer. Our results, however, suggest that if neoadjuvant chemoradiation is not given before surgery, then early postoperative chemoradiation should be considered for patients requiring an abdominoperineal resection.
Yu, Yuqi; Wang, Jinan; Shao, Qiang; Zhu, Weiliang; Shi, Jiye
2015-03-28
The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the number of temperatures (replicas) on the premise of maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase computational efficiency and thus expand the application of REMD simulation to larger protein systems.
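The replica-exchange step at the heart of REMD uses a Metropolis criterion for swapping configurations between neighboring temperatures. A minimal sketch of that acceptance rule (units and numbers are illustrative assumptions, not from the paper):

```python
import math

KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K) (assumed unit system)

def exchange_probability(E_i, E_j, T_i, T_j):
    """Metropolis acceptance probability for swapping configurations
    between replicas i and j in temperature REMD:
    p = min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    beta_i, beta_j = 1.0 / (KB * T_i), 1.0 / (KB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)
    return min(1.0, math.exp(delta))

# A swap that hands the cold replica the lower energy is always accepted:
p_good = exchange_probability(-95.0, -100.0, 300.0, 320.0)
p_rare = exchange_probability(-100.0, -95.0, 300.0, 320.0)
```

Fewer replicas (the goal of the hybrid-solvent method) work only if neighboring temperatures still overlap enough in energy for this probability to stay usefully large.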
Tisdale, Evgenia; Kennedy, Devin; Xu, Xiaodong; Wilkins, Charles
2014-01-15
The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of small amount of CuBr2) than is the pentafluorostyrene component distribution.
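The molecular weight distributions mentioned above reduce to number- and weight-average calculations over the peak list; a minimal sketch with hypothetical peaks (using intensity as a proxy for the number of chains, an assumption that holds only when desorption/ionization is mass-independent):

```python
def molecular_weight_averages(peaks):
    """peaks: list of (mass, intensity) pairs from a mass spectrum.
    Returns (Mn, Mw, PDI), treating intensity as chain count."""
    s0 = sum(i for _, i in peaks)
    s1 = sum(m * i for m, i in peaks)
    s2 = sum(m * m * i for m, i in peaks)
    mn = s1 / s0          # number average
    mw = s2 / s1          # weight average
    return mn, mw, mw / mn

# Hypothetical two-peak distribution
mn, mw, pdi = molecular_weight_averages([(1000.0, 1.0), (2000.0, 1.0)])
```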
Lee, Jae Hwan; Jia, Chunrong; Kim, Yong Doo; Kim, Hong Hyun; Pham, Tien Thang; Choi, Young Seok; Seo, Young Un; Lee, Ike Woo
2012-01-01
Trimethylsilanol (TMSOH) can damage the surfaces of scanner lenses in the semiconductor industry, and there is a critical need to measure and control airborne TMSOH concentrations. This study develops a thermal desorption (TD)-gas chromatography (GC)-mass spectrometry (MS) method for measuring trace-level TMSOH in occupational indoor air. Laboratory method optimization obtained the best performance when using a dual-bed tube configuration (100 mg of Tenax TA followed by 100 mg of Carboxen 569), n-decane as a solvent, and a TD temperature of 300°C. The optimized method demonstrated high recovery (87%), satisfactory precision (<15% for spiked amounts exceeding 1 ng), good linearity (R2 = 0.9999), a wide dynamic mass range (up to 500 ng), a low method detection limit (2.8 ng m−3 for a 20-L sample), and negligible losses over 3-4 days of storage. The field study showed performance comparable to that in the laboratory and yielded the first measurements of TMSOH, ranging from 1.02 to 27.30 μg/m3, in the semiconductor industry. We suggest future development of real-time monitoring techniques for TMSOH and other siloxanes for better maintenance and control of scanner lenses in semiconductor wafer manufacturing. PMID:22966229
Design and Sampling Plan Optimization for RT-qPCR Experiments in Plants: A Case Study in Blueberry.
Die, Jose V; Roman, Belen; Flores, Fernando; Rowland, Lisa J
2016-01-01
The qPCR assay has become a routine technology in plant biotechnology and agricultural research. Although the technique itself is mature, challenges remain that center on minimizing variability in results and on transparency when reporting the technical data that support the conclusions of a study. A number of aspects of the pre- and post-assay workflow contribute to variability of results. Here, by studying how error is introduced into qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. We found that the stage at which increasing the number of replicates is most beneficial depends on the tissue used. For example, we would recommend more RT replicates when working with leaf tissue, whereas more sampling (RNA extraction) replicates are recommended when working with stems or fruits. The use of more qPCR replicates provides the least benefit, as this is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner and produce more consistent and reproducible data.
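The budget-constrained search for a sampling plan described above can be sketched as follows. This is a minimal illustration, not the authors' script: the nested variance model (sampling, then RT, then qPCR replicates) is the standard one for hierarchical designs, and the variance components, unit costs, and budget below are hypothetical numbers.

```python
import itertools

def plan_variance(ns, nrt, nq, var_s, var_rt, var_q):
    # Variance of the experiment mean for a nested design:
    # ns sampling (RNA extraction) replicates, nrt RT replicates per
    # sample, nq qPCR replicates per RT reaction.
    return var_s / ns + var_rt / (ns * nrt) + var_q / (ns * nrt * nq)

def plan_cost(ns, nrt, nq, c_s, c_rt, c_q):
    # Total cost: each level multiplies the number of units below it.
    return ns * c_s + ns * nrt * c_rt + ns * nrt * nq * c_q

def best_plan(budget, var_s, var_rt, var_q, c_s, c_rt, c_q, max_rep=10):
    # Exhaustively search replicate counts that fit the budget and
    # return the plan minimizing the variance of the mean.
    best = None
    for ns, nrt, nq in itertools.product(range(1, max_rep + 1), repeat=3):
        if plan_cost(ns, nrt, nq, c_s, c_rt, c_q) > budget:
            continue
        v = plan_variance(ns, nrt, nq, var_s, var_rt, var_q)
        if best is None or v < best[0]:
            best = (v, (ns, nrt, nq))
    return best
```

With sampling variance dominating (as the abstract reports for stems and fruits), the search allocates the budget to RNA extraction replicates rather than qPCR replicates.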
Beringer, Paul; Aminimanizani, Amir; Synold, Timothy; Scott, Christy
2002-04-01
High-dose ibuprofen therapy has been shown to slow the deterioration of pulmonary function in children with cystic fibrosis who have mild lung disease. Therapeutic drug monitoring has been recommended to maintain peak concentrations within the range of 50 to 100 mg/L to ensure efficacy. Current methods for dosage individualization are based on dose proportionality using visual inspection of the peak concentration; however, because of interpatient variability in the absorption of the various formulations, this method may result in incorrect assessments of the peak concentration achieved. Maximum a posteriori Bayesian analysis (MAP-B) has proven to be a useful and precise method of individualizing the dose of aminoglycosides but requires a description of the structural model. In this study we performed a parametric population modeling analysis of plasma concentrations of ibuprofen after single 20 to 30 mg/kg doses of tablet or suspension in children with cystic fibrosis. Patients evaluated in this study were part of a previously published single-dose pharmacokinetic study. A one-compartment model with first-order absorption and a lag time best described the data. The pharmacokinetic parameters differed significantly depending on the formulation administered. D-optimal sampling times for the suspension and tablet formulations are 0, 0.25 to 0.5, 1, and 3 to 4 hours, and 0, 0.25 to 0.5, 1 to 1.5, and 5 hours, respectively. MAP-B analysis performed with the four D-optimal sampling times resulted in accurate and precise estimates of the pharmacokinetic parameters when compared with maximum likelihood analysis using the complete plasma concentration data set. Further studies are needed to evaluate the performance of these models and their impact on patient outcomes.
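The structural model selected above, one compartment with first-order absorption and a lag time, can be written as a short function. This is a sketch of the textbook equation with illustrative parameter values, not the fitted population estimates from the study.

```python
import math

def conc(t, dose, ka, ke, vd, tlag=0.0, f=1.0):
    """Plasma concentration for a one-compartment model with
    first-order absorption rate ka, elimination rate ke (ka != ke),
    apparent volume vd, absorption lag tlag, and bioavailability f."""
    if t <= tlag:
        return 0.0  # nothing absorbed before the lag time
    te = t - tlag
    return (f * dose * ka) / (vd * (ka - ke)) * (
        math.exp(-ke * te) - math.exp(-ka * te))
```

With illustrative values (dose 400 mg, ka 2/h, ke 0.3/h, vd 10 L, tlag 0.25 h) the predicted peak falls near t = tlag + ln(ka/ke)/(ka - ke), which is why early sampling times in the 0.25 to 1.5 hour window carry the most information about absorption.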
Cirugeda-Roldán, E M; Cuesta-Frau, D; Miró-Martínez, P; Oltra-Crespo, S; Vigil-Medina, L; Varela-Entrecanales, M
2014-05-01
This paper describes a new method to optimize the computation of the quadratic sample entropy (QSE) metric. The objective is to enhance its segmentation capability between pathological and healthy subjects for short and unevenly sampled biomedical records, like those obtained using ambulatory blood pressure monitoring (ABPM). In ABPM, blood pressure is measured every 20-30 min over 24 h while patients undergo normal daily activities. ABPM is indicated for a number of applications such as white-coat, suspected, borderline, or masked hypertension. Hypertension is a very important clinical issue that can lead to serious health implications, and therefore its identification and characterization is of paramount importance. Nonlinear processing of signals by means of entropy calculation algorithms has been used in many medical applications to distinguish among signal classes. However, most of these methods do not perform well if the records are not long enough and/or not uniformly sampled. That is the case for ABPM records: these signals are extremely short and scattered with outliers or missing/resampled data. This is why ABPM blood pressure signal screening using nonlinear methods remains a largely unexplored field. We propose an additional stage for the computation of QSE that is independent of its parameter r and the input signal length. This enabled us to apply a segmentation process to ABPM records successfully. The experimental dataset consisted of 61 blood pressure records of control and pathological subjects with only 52 samples per time series. The entropy estimation values obtained led to the segmentation of the two groups, while other standard nonlinear methods failed.
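QSE builds on sample entropy, SampEn = -ln(A/B), where B and A count matching template pairs of lengths m and m+1, plus a correction term ln(2r) that makes values comparable across tolerance values r. A brute-force sketch follows; it is not the authors' optimized implementation, and it uses the common convention of n-m templates for both lengths.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Brute-force SampEn: -ln(A/B), where B and A count template pairs
    of length m and m+1 matching within tolerance r (Chebyshev distance)."""
    n = len(x)
    def matches(length):
        templates = [x[i:i + length] for i in range(n - length)]
        return sum(
            max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r
            for i in range(len(templates))
            for j in range(i + 1, len(templates)))
    b, a = matches(m), matches(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)

def quadratic_sample_entropy(x, m=2, r=0.2):
    # QSE = SampEn(m, r) + ln(2r): the added term decouples the
    # estimate from the chosen tolerance r.
    return sample_entropy(x, m, r) + math.log(2 * r)
```

A strictly alternating series scores far lower than white noise of the same length, which is the separation the segmentation relies on.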
Peuchen, Elizabeth H; Sun, Liangliang; Dovichi, Norman J
2016-07-01
Xenopus laevis is an important model organism in developmental biology. While there is a large literature on changes in the organism's transcriptome during development, the study of its proteome is at an embryonic stage. Several papers have been published recently that characterize the proteome of X. laevis eggs and early-stage embryos; however, optimizations of proteomic sample preparation have not been reported. Sample preparation is challenging because a large fraction (~90% by weight) of the egg or early-stage embryo is yolk. We compared three common protein extraction buffer systems, mammalian Cell-PE LB(TM) lysing buffer (NP40), sodium dodecyl sulfate (SDS), and 8 M urea, in terms of protein extraction efficiency and protein identifications. SDS extracts contained the highest concentration of proteins, but these extracts were dominated by yolk proteins. In contrast, NP40 extracts contained only ~30% of the protein concentration of the SDS extracts but excelled at discriminating against yolk proteins, which resulted in more protein and peptide identifications. We then compared digestion methods using both SDS and NP40 extraction with one-dimensional reverse-phase liquid chromatography-tandem mass spectrometry (RPLC-MS/MS). NP40 coupled to a filter-aided sample preparation (FASP) procedure produced nearly twice the number of protein and peptide identifications compared to the alternatives. When NP40-FASP samples were subjected to two-dimensional RPLC-ESI-MS/MS, a total of 5171 proteins and 38,885 peptides were identified from a single stage of embryos (stage 2), increasing the number of protein identifications by 23% in comparison to other traditional protein extraction methods.
NASA Astrophysics Data System (ADS)
Robert, D.; Braud, I.; Cohard, J.; Zin, I.; Vauclin, M.
2010-12-01
Physically based hydrological models involve a large number of parameters and data. Each is associated with uncertainty, because some characteristics are measured indirectly and others vary in space and time. Thus, even when many data are measured in the field or in the laboratory, ignorance and uncertainty about the data persist, and a large degree of freedom remains in modeling. Moreover, the choice of physical parameterization also introduces uncertainties and errors into model behavior and simulation results. To address this problem, sensitivity analyses are useful. They allow the determination of the influence of each parameter on modeling results and the adjustment of an optimal parameter set by minimizing a cost function. However, the larger the number of parameters, the more expensive the computational cost of exploring the whole parameter space. In this context, we carried out an approach that is original in the hydrology domain to perform this sensitivity analysis using a 1D Soil-Vegetation-Atmosphere Transfer model. The chosen method is a global one: it focuses on the output variability due to the input parameter uncertainties. Latin hypercube sampling is adopted to sample the analyzed input parameter space; this method has the advantage of reducing the computational cost. The method is applied using the SiSPAT (Simple Soil Vegetation Atmosphere Transfer) model over a complete year with observations collected in a small catchment in Benin, within the AMMA project. It involves sensitivity to 30 parameters sampled in 40 intervals. The quality of the modeled results is evaluated by calculating several criteria between modeled and observed time series of net radiation, heat fluxes, soil temperatures, and volumetric water contents: the bias, the root mean square error, and the Nash-Sutcliffe efficiency coefficient. To hierarchize the influence of the various input parameters on the results, the study of
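The two ingredients named above, Latin hypercube sampling of the parameter space and the Nash-Sutcliffe efficiency criterion, can be sketched generically; this illustration is unrelated to the SiSPAT code itself, and the bounds used below are arbitrary.

```python
import random

def latin_hypercube(n, bounds, seed=42):
    """Stratified sampling: each of the n equal-width intervals of every
    parameter range receives exactly one sample (Latin hypercube)."""
    rng = random.Random(seed)
    pts = [[0.0] * len(bounds) for _ in range(n)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n))
        rng.shuffle(strata)                 # random pairing across dimensions
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n      # uniform draw inside stratum s
            pts[i][d] = lo + u * (hi - lo)
    return pts

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / variance of observations; 1.0 is a perfect fit,
    0.0 means no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst
```

The stratification is what keeps the computational cost down: n model runs cover every interval of every parameter, instead of the n^d runs a full grid would need.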
Fakanya, Wellington M.; Tothill, Ibtisam E.
2014-01-01
The development of an electrochemical immunosensor for the biomarker C-reactive protein (CRP) is reported in this work. CRP has been used to assess inflammation and is also used in a multi-biomarker system as a predictive biomarker for cardiovascular disease risk. A gold-based working electrode sensor was developed, and the types of electrode printing inks and ink-curing techniques were optimized. The electrodes with the best performance parameters were then employed for the construction of an immunosensor for CRP by immobilizing anti-human CRP antibody on the working electrode surface. A sandwich enzyme-linked immunosorbent assay (ELISA) was then constructed after sample addition by using anti-human CRP antibody labelled with horseradish peroxidase (HRP). The signal was generated by the addition of a mediator/substrate system comprised of 3,3′,5,5′-tetramethylbenzidine dihydrochloride (TMB) and hydrogen peroxide (H2O2). Measurements were conducted using chronoamperometry at −200 mV against an integrated Ag/AgCl reference electrode. A CRP limit of detection (LOD) of 2.2 ng·mL−1 was achieved in spiked serum samples, and performance agreement was obtained with reference to a commercial ELISA kit. The developed CRP immunosensor was able to detect a diagnostically relevant range of the biomarker in serum without the need for signal amplification using nanoparticles, paving the way for future development of a cardiac panel electrochemical point-of-care diagnostic device. PMID:25587427
Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon
2016-10-01
Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania. The disease is prevalent mainly in the Indian subcontinent, East Africa, and Brazil. VL can be diagnosed by PCR amplification of the ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed: in two of them the primers were designed based on regions of the ITS1 gene shared among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real-time LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real-time LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR.
NASA Astrophysics Data System (ADS)
Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.
2014-09-01
Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment, employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum amount of amino sugar required for isotopic analysis was 20 ng, i.e., equivalent to ~8 ng of amino sugar carbon. Compound-specific stable carbon isotopic analysis of amino sugars obtained from marine sediment extracts indicated that glucosamine and galactosamine were mainly derived from organic detritus, whereas muramic acid showed isotopic imprints from indigenous bacterial activities. The δ13C analysis of amino sugars provides a valuable addition to the biomarker-based characterization of microbial metabolism in the deep marine biosphere, which so far has been lipid-oriented and biased towards the detection of archaeal signals.
Amorim, Fábio Alan Carqueija; Costa, Vinicius Câmara; Silva, Erik Galvão P da; Lima, Daniel de Castro; Jesus, Raildo Mota de; Bezerra, Marcos de Almeida
2017-07-15
A slurry sampling procedure has been developed for Fe and Mg determination in cassava starch using flame atomic absorption spectrometry. The optimization step used a univariate methodology for the sample mass (200 mg) and a multivariate methodology, using the Box-Behnken design, for the other variables: solvent (HNO3:HCl), final concentration (1.7 mol L−1), and time (26 min). This procedure allowed the determination of iron and magnesium with detection limits of 1.01 and 3.36 mg kg−1, respectively. Precision, expressed as relative standard deviation (%RSD, n=10), was 5.8% for Fe (17.8 mg kg−1) and 4.1% for Mg (64.5 mg kg−1). Accuracy was confirmed by analysis of a standard reference material for wheat flour (NIST 1567a), with certified concentrations of 14.1±0.5 mg kg−1 for Fe and 40±2.0 mg kg−1 for Mg; the concentrations found using the proposed method were 13.7±0.3 mg kg−1 for Fe and 40.8±1.5 mg kg−1 for Mg. A comparison with concentrations obtained using closed-vessel microwave digestion was also performed. The concentrations obtained varied between 7.85 and 17.8 mg kg−1 for Fe and between 23.7 and 64.5 mg kg−1 for Mg. Its simplicity, speed, and satisfactory analytical characteristics indicate that the proposed analytical procedure is a good alternative for the determination of Fe and Mg in cassava starch samples.
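The figures of merit quoted above, %RSD and the detection limit, follow standard definitions that are easy to compute directly; this is a generic sketch, not the authors' software, and the numbers in the usage example are made up.

```python
import statistics

def rsd_percent(values):
    """Precision as relative standard deviation: 100 * s / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def detection_limit(blank_sd, slope, k=3.0):
    """IUPAC-style detection limit: k times the standard deviation of
    blank measurements divided by the calibration slope (k = 3 for LOD)."""
    return k * blank_sd / slope
```

For example, ten replicate absorbance readings feed `rsd_percent`, while the blank noise and calibration slope feed `detection_limit` in the concentration units of the calibration.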
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim of filling this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of the MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that a short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10^5 in a validation dataset of both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow for MZmine 2 LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
Abdulra'uf, Lukman Bola; Sirhan, Ala Yahya; Tan, Guan Huat
2015-01-01
Sample preparation has been identified as the most important step in analytical chemistry and has been tagged as the bottleneck of analytical methodology. The current trend is aimed at developing cost-effective, miniaturized, simplified, and environmentally friendly sample preparation techniques. The fundamentals and applications of multivariate statistical techniques for the optimization of microextraction sample preparation and chromatographic analysis of pesticide residues are described in this review. The use of Plackett-Burman, Doehlert matrix, and Box-Behnken designs is discussed. As observed in this review, a number of analytical chemists have combined chemometrics and microextraction techniques, which has helped to streamline sample preparation and improve sample throughput.
NASA Astrophysics Data System (ADS)
Guarieiro, Lílian Lefol Nani; Pereira, Pedro Afonso de Paula; Torres, Ednildo Andrade; da Rocha, Gisele Olimpio; de Andrade, Jailson B.
Biodiesel is emerging as a renewable fuel and hence a promising alternative to fossil fuels. Biodiesel can form blends with diesel in any ratio and thus could partially, or even totally, replace diesel fuel in diesel engines, which would bring a number of environmental, economic, and social advantages. Although a number of studies are available on regulated substances, there is a gap in studies of unregulated substances, such as carbonyl compounds (CC), emitted during the combustion of biodiesel, biodiesel-diesel, and/or ethanol-biodiesel-diesel blends. CC are a class of hazardous pollutants known to participate in photochemical smog formation. In this work a comparison was carried out between the two most widely used CC collection methods: C18 cartridges coated with an acid solution of 2,4-dinitrophenylhydrazine (2,4-DNPH) and impinger bottles filled with 2,4-DNPH solution. Sampling optimization was performed using a 2² factorial design. Samples were collected from the exhaust emissions of a diesel engine fueled with biodiesel and operated on a steady-state dynamometer. In the central body of the factorial design, the average of the sum of CC concentrations collected using impingers was 33.2 ppmV, but it was only 6.5 ppmV for C18 cartridges. In addition, the relative standard deviation (RSD) was 4% for impingers and 37% for C18 cartridges. Clearly, the impinger system is able to collect CC more efficiently, and with lower error, than the C18 cartridge system. Furthermore, propionaldehyde was hardly sampled by the C18 system at all. For these reasons, the impinger system was chosen for our study. The optimized sampling conditions applied throughout this study were two serially connected impingers, each containing 10 mL of 2,4-DNPH solution, at a flow rate of 0.2 L min−1 for 5 min. A profile of the C1-C4 vapor-phase carbonyl compound emissions was obtained from the exhaust of pure diesel (B0), pure biodiesel (B100), and biodiesel-diesel mixtures (B2, B5, B10, B20, B50, B
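A 2² factorial design like the one used for the sampling optimization estimates each factor's main effect and their interaction from only four runs. In coded units (-1/+1 levels) the contrasts are simple averages; the response numbers in the test below are illustrative, not the study's data.

```python
def factorial_effects(y):
    """Main effects and interaction from a 2x2 factorial design.
    y maps coded factor levels (a, b), each -1 or +1, to the response."""
    # Main effect of A: average response change when A goes low -> high.
    effect_a = (y[(1, -1)] + y[(1, 1)] - y[(-1, -1)] - y[(-1, 1)]) / 2.0
    # Main effect of B, by symmetry.
    effect_b = (y[(-1, 1)] + y[(1, 1)] - y[(-1, -1)] - y[(1, -1)]) / 2.0
    # Interaction: how much A's effect depends on the level of B.
    interaction = (y[(1, 1)] + y[(-1, -1)] - y[(1, -1)] - y[(-1, 1)]) / 2.0
    return effect_a, effect_b, interaction
```

Center-point replicates, as in the "central body" of the design mentioned above, are then used to estimate pure error and check for curvature.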
Rogeberg, Magnus; Vehus, Tore; Grutle, Lene; Greibrokk, Tyge; Wilson, Steven Ray; Lundanes, Elsa
2013-09-01
The single-run resolving power of current 10 μm i.d. porous-layer open-tubular (PLOT) columns has been optimized. The columns studied had a poly(styrene-co-divinylbenzene) porous layer (~0.75 μm thickness). In contrast to many previous studies that employed complex plumbing or compromised setups, SPE-PLOT-LC-MS was assembled without additional hardware, noncommercial parts, additional valves, or sample splitting. A comprehensive study of various flow rates, gradient times, and column length combinations was undertaken. Maximum resolution for <400 bar was achieved using a 40 nL/min flow rate, a 400 min gradient, and an 8 m long column. We obtained a 2.3-fold increase in peak capacity compared to previous PLOT studies (950 versus the previously obtained 400, using the peak width = 2σ definition). Our system also meets or surpasses peak capacities obtained in recent reports using nano-ultra-performance LC conditions or long silica monolith nanocolumns. Nearly 500 proteins (1958 peptides) could be identified in just one single injection of an extract corresponding to 1000 BxPC3 beta catenin (-/-) cells, and ~1200 and 2500 proteins in extracts of 10,000 and 100,000 cells, respectively, allowing detection of central members and regulators of the Wnt signaling pathway.
NASA Astrophysics Data System (ADS)
Khajeh, Mostafa; Golzary, Ali Reza
2014-10-01
In this work, a zinc oxide nanoparticle-chitosan based solid phase extraction has been developed for the separation and preconcentration of trace amounts of methyl orange from water samples. An artificial neural network-cuckoo optimization algorithm has been employed to develop the model for simulation and optimization of this method. The pH, volume of elution solvent, mass of zinc oxide nanoparticle-chitosan, and flow rates of sample and elution solvent were the input variables, while recovery of methyl orange was the output. The optimum conditions were obtained by the cuckoo optimization algorithm. Under the optimum conditions, a limit of detection of 0.7 μg L−1 was obtained for methyl orange. The developed procedure was then applied to the separation and preconcentration of methyl orange from water samples.
NASA Technical Reports Server (NTRS)
Drusano, George L.
1991-01-01
The optimal sampling theory is evaluated in application to studies of the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of optimal sampling was demonstrated in conjunction with the NONMEM approach (Sheiner et al., 1977), in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both single-dose and multiple-dose settings. The ability to study real patients made it possible to show a bimodal distribution in ciprofloxacin nonrenal clearance.
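D-optimal sampling, as implemented in tools like ADAPT's SAMPLE module, picks measurement times that maximize the determinant of the Fisher information matrix of the model parameters. A toy version for a mono-exponential model follows; this is an illustration of the criterion, not ADAPT code, and the candidate times and parameter values are arbitrary.

```python
import itertools
import math

def fisher_det(times, a, k):
    """Determinant of the 2x2 Fisher information matrix for the model
    C(t) = A * exp(-k * t), parameters (A, k), unit measurement variance."""
    m11 = m12 = m22 = 0.0
    for t in times:
        e = math.exp(-k * t)
        da, dk = e, -a * t * e   # sensitivities dC/dA and dC/dk
        m11 += da * da
        m12 += da * dk
        m22 += dk * dk
    return m11 * m22 - m12 * m12

def d_optimal_times(candidates, n_points, a, k):
    # Exhaustive search over subsets of candidate sampling times.
    return max(itertools.combinations(candidates, n_points),
               key=lambda ts: fisher_det(ts, a, k))
```

For two samples the criterion places one point as early as allowed and the second roughly one elimination time constant later, which mirrors the intuition that early and late points jointly pin down A and k.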
NASA Astrophysics Data System (ADS)
Yan, Hongyong; Yang, Lei; Li, Xiang-Yang
2016-12-01
High-order staggered-grid finite-difference (SFD) schemes have been widely used to improve the accuracy of wave equation modeling. However, the high-order SFD coefficients for the spatial derivatives are usually determined by the Taylor-series expansion (TE) method, which yields high accuracy only at small wavenumbers. Some conventional optimization methods can achieve high accuracy at large wavenumbers but hardly guarantee a small numerical dispersion error at small wavenumbers. In this paper, we develop new optimal explicit SFD (ESFD) and implicit SFD (ISFD) schemes for wave equation modeling. We first derive the optimal ESFD and ISFD coefficients for the first-order spatial derivatives by applying a combination of the TE and a sampling approximation to the dispersion relation, and then analyze their numerical accuracy. Finally, we perform elastic wave modeling with the ESFD and ISFD schemes based on the TE method and on the optimal method, respectively. When an appropriate number of and interval for the sampling points are chosen, the optimal schemes have extremely high accuracy at small wavenumbers and can also guarantee a small numerical dispersion error at large wavenumbers. Numerical accuracy analyses and modeling results demonstrate that the optimal ESFD and ISFD schemes efficiently suppress numerical dispersion and significantly improve modeling accuracy compared to the TE-based ESFD and ISFD schemes.
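The TE coefficients that the optimal schemes start from solve a small linear system: the staggered stencil's Taylor expansion is matched to u'(x) while the higher odd-order terms are cancelled. An exact-arithmetic sketch follows; the wavenumber-sampling optimization described in the paper is not reproduced here.

```python
from fractions import Fraction

def sfd_taylor_coefficients(m):
    """Coefficients c_1..c_m of the 2m-point staggered-grid stencil
    u'(x) ~ (1/h) * sum_j c_j * [u(x+(2j-1)h/2) - u(x-(2j-1)h/2)],
    from the Taylor-expansion conditions
    sum_j c_j * (2j-1)^(2i-1) = 1 if i == 1 else 0, for i = 1..m."""
    a = [[Fraction((2 * j + 1) ** (2 * i + 1)) for j in range(m)]
         for i in range(m)]
    b = [Fraction(int(i == 0)) for i in range(m)]
    # Gauss-Jordan elimination in exact rational arithmetic.
    for i in range(m):
        piv = a[i][i]
        a[i] = [v / piv for v in a[i]]
        b[i] /= piv
        for r in range(m):
            if r != i:
                f = a[r][i]
                a[r] = [vr - f * vi for vr, vi in zip(a[r], a[i])]
                b[r] -= f * b[i]
    return b
```

For m = 2 this recovers the familiar fourth-order staggered pair 9/8 and -1/24; the optimal schemes in the paper then perturb such coefficients to flatten the dispersion error over a sampled wavenumber band.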
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
2001-01-01
A closed-loop optimal neural-network controller technique was developed to optimize rotorcraft aeromechanical behavior. This technique utilizes a neural-network scheme to provide a general nonlinear model of the rotorcraft. A modern constrained optimization method is used to determine and update the constants in the neural-network plant model as well as to determine the optimal control vector. Current data are read, weighted, and added to a sliding data window. When the specified maximum number of data sets allowed in the data window is exceeded, the oldest data set is dropped and the remaining data sets are re-weighted. This procedure provides at least four additional degrees of freedom, in addition to the size and geometry of the neural network itself, with which to optimize the overall operation of the controller. These additional degrees of freedom are: 1. the maximum length of the sliding data window; 2. the frequency of neural-network updates; 3. the weighting of the individual data sets within the sliding window; and 4. the maximum number of optimization iterations used for the neural-network updates.
Mitsouras, Dimitris; Mulkern, Robert V; Rybicki, Frank J
2008-08-01
A recently developed method for exact density compensation of nonuniformly arranged samples relies on the analytically known cross-correlations of the Fourier basis functions corresponding to the traced k-space trajectory. This method produces a linear system whose solution represents compensated samples that normalize the contribution of each independent element of information that can be expressed by the underlying trajectory. Unfortunately, linear-system-based density compensation approaches quickly become computationally demanding as the number of samples (i.e., image resolution) increases. Here, it is shown that when a trajectory is composed of rotationally symmetric interleaves, such as spiral and PROPELLER trajectories, this cross-correlation method leads to a highly simplified system of equations. Specifically, it is shown that the system matrix is circulant block-Toeplitz, so that the linear system is easily block-diagonalized. The method is described and demonstrated for 32-way interleaved spiral trajectories designed for 256 image matrices; samples are compensated noniteratively in a few seconds by solving the small independent block-diagonalized linear systems in parallel. Because the method is exact and considers all the interactions between all acquired samples, up to a 10% reduction in reconstruction error and up to a 30% increase in signal-to-noise ratio are achieved compared to standard density compensation methods.
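The key property exploited is that circulant systems are diagonalized by the discrete Fourier transform, so the solve becomes element-wise in Fourier space. The scalar analogue of the block-diagonalization is sketched below with a naive O(n²) DFT for clarity; the paper's block-Toeplitz case applies the same idea per block, and the example data are made up.

```python
import cmath

def dft(v):
    n = len(v)
    return [sum(v[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(v):
    n = len(v)
    return [sum(v[k] * cmath.exp(2j * cmath.pi * j * k / n)
                for k in range(n)) / n for j in range(n)]

def solve_circulant(first_col, b):
    """Solve C x = b where C is circulant with the given first column:
    the DFT of the first column gives the eigenvalues of C, so the
    solve reduces to an element-wise division in Fourier space."""
    eig = dft(first_col)            # eigenvalues of C
    bh = dft(b)
    xh = [bi / li for bi, li in zip(bh, eig)]
    return [xi.real for xi in idft(xh)]
```

With a fast FFT in place of the naive loops, the cost drops from O(n³) Gaussian elimination to O(n log n), which is exactly why the block-diagonalized systems in the paper solve in seconds.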
ERIC Educational Resources Information Center
Geldhof, G. John; Gestsdottir, Steinunn; Stefansson, Kristjan; Johnson, Sara K.; Bowers, Edmond P.; Lerner, Richard M.
2015-01-01
Intentional self-regulation (ISR) undergoes significant development across the life span. However, our understanding of ISR's development and function remains incomplete, in part because the field's conceptualization and measurement of ISR vary greatly. A key sample case involves how Baltes and colleagues' Selection, Optimization,…
Results from the NIST-EPA Interagency Agreement on Measurements and Standards in Aerosol Carbon: Sampling Regional PM2.5 for the Chemometric Optimization of Thermal-Optical Analysis Study will be presented at the American Association for Aerosol Research (AAAR) 24th Annual Confer...
Defining hypercalciuria in nephrolithiasis
Pak, Charles Y.C.; Sakhaee, Khashayar; Moe, Orson W.; Poindexter, John; Adams-Huet, Beverley
2014-01-01
The classic definition of hypercalciuria, an upper normal limit of 200 mg/day, is based on a constant diet restricted in calcium, sodium, and animal protein; however, random diet data challenge this. Here our retrospective study determined the validity of the classic definition of hypercalciuria by comparing data from 39 publications analyzing urinary calcium excretion on a constant restricted diet and testing whether hypercalciuria could be defined when extraneous dietary influences were controlled. These papers encompassed 300 non-stone-forming patients, 208 patients with absorptive hypercalciuria type I (presumed due to high intestinal calcium absorption), and 234 stone formers without absorptive hypercalciuria; all evaluated on a constant restricted diet. In non-stone formers, the mean urinary calcium was well below 200 mg/day, and the mean for all patients was 127±46 mg/day with an upper limit of 219 mg/day. In absorptive hypercalciuria type I, the mean urinary calcium significantly exceeded 200 mg/day in all studies with a combined mean of 259±55 mg/day. Receiver operating characteristic curve analysis showed the optimal cutoff point for urinary calcium excretion was 172 mg/day on a restricted diet, a value that approximates the traditional limit of 200 mg/day. Thus, on a restricted diet, a clear demarcation was seen between urinary calcium excretion of kidney stone formers with absorptive hypercalciuria type I and normal individuals. When dietary variables are controlled, the classic definition of hypercalciuria of nephrolithiasis appears valid. PMID:21775970
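Receiver operating characteristic analysis of the kind used above typically picks the cutoff maximizing the Youden index J = sensitivity + specificity - 1. A minimal sketch follows, with made-up calcium excretion values (mg/day) rather than the study's data.

```python
def youden_cutoff(cases, controls):
    """Scan candidate thresholds and return the one maximizing
    J = sensitivity + specificity - 1 (value >= threshold calls 'case')."""
    best_t, best_j = None, -1.0
    for t in sorted(set(cases) | set(controls)):
        sens = sum(v >= t for v in cases) / len(cases)
        spec = sum(v < t for v in controls) / len(controls)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

When the two groups separate cleanly, as the abstract reports for the restricted diet, the optimal cutoff achieves J close to 1 and lands between the two distributions.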
Liu Yu; Guo Qiuquan; Nie Hengyong; Lau, W. M.; Yang Jun
2009-12-15
The mechanism of dynamic force modes has been successfully applied to many atomic force microscopy (AFM) applications, such as tapping mode and phase imaging. High-order flexural vibration modes are a recent advancement of AFM dynamic force modes. AFM optical lever detection sensitivity plays a major role in dynamic force modes because it determines the accuracy in mapping surface morphology, distinguishing various tip-surface interactions, and measuring the strength of the tip-surface interactions. In this work, we have analyzed the optimization and calibration of the optical lever detection sensitivity for an AFM cantilever-tip ensemble vibrating in high-order flexural modes and simultaneously experiencing a wide range and variety of tip-sample interactions. It is found that the optimal detection sensitivity depends on the vibration mode, the ratio of the force constant of tip-sample interactions to the cantilever stiffness, as well as the incident laser spot size and its location on the cantilever. It is also found that the optimal detection sensitivity is less dependent on the strength of tip-sample interactions for high-order flexural modes than for the fundamental mode, i.e., tapping mode. When the force constant of tip-sample interactions significantly exceeds the cantilever stiffness, the optimal detection sensitivity occurs only when the laser spot is located at a certain distance from the cantilever-tip end. Thus, in addition to the 'globally optimized detection sensitivity', the 'tip optimized detection sensitivity' is also determined. Finally, we have proposed a calibration method to determine the actual AFM detection sensitivity in high-order flexural vibration modes against the static end-load sensitivity that is traditionally obtained by measuring a force-distance curve on a hard substrate in contact mode.
NASA Astrophysics Data System (ADS)
Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang
2010-05-01
Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for a data acquisition campaign, information needs towards the prediction goal should be satisfied in an efficient and task-specific manner. To find the best among a set of design candidates, an objective function is commonly evaluated that measures the expected impact of data on prediction confidence, prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility with respect to the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of subjective prior assumptions that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) the expected conditional variance and (ii) the expected relative entropy of a given prediction goal. The
Kwak, Minjung; Jung, Sin-Ho
2014-05-30
Phase II clinical trials are often conducted to determine whether a new treatment is sufficiently promising to warrant a major controlled clinical evaluation against a standard therapy. We consider single-arm phase II clinical trials with right-censored survival time responses, where the ordinary one-sample logrank test is commonly used for testing treatment efficacy. For planning such clinical trials, this paper presents two-stage designs that are optimal in the sense that the expected sample size is minimized if the new regimen has low efficacy, subject to type I and type II error constraints. Two-stage designs that minimize the maximal sample size are also determined. Optimal and minimax designs for a range of design parameters are tabulated along with examples.
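The abstract's designs are derived for censored survival responses via the one-sample logrank test, which requires accrual and follow-up assumptions beyond a short sketch. The same optimality criterion (minimize the expected sample size under low efficacy, subject to type I/II error constraints) is easier to illustrate for a binary endpoint, where it reduces to Simon's classical two-stage design:

```python
from math import comb

def pmf(n, p):
    """Binomial probability mass function as a list over k = 0..n."""
    return [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]

def cdf(pmf_list):
    out, s = [], 0.0
    for v in pmf_list:
        s += v
        out.append(s)
    return out

def simon_optimal(p0, p1, alpha, beta, n_max=30):
    """Search two-stage designs (stop for futility if X1 <= r1; reject H0
    at the end if X1 + X2 > r) for the one minimizing E[N | p0]."""
    best = None
    for n in range(10, n_max + 1):
        for n1 in range(5, n):
            n2 = n - n1
            p0_1, p1_1 = pmf(n1, p0), pmf(n1, p1)
            c0_1 = cdf(p0_1)
            c0_2, c1_2 = cdf(pmf(n2, p0)), cdf(pmf(n2, p1))

            def reject_prob(pmf1, cdf2):
                tot = 0.0
                for x1 in range(r1 + 1, n1 + 1):
                    k = r - x1
                    tail = 1.0 if k < 0 else (1 - cdf2[k] if k <= n2 else 0.0)
                    tot += pmf1[x1] * tail
                return tot

            for r1 in range(0, n1):
                for r in range(r1, n):
                    if reject_prob(p0_1, c0_2) > alpha:   # type I constraint
                        continue
                    if reject_prob(p1_1, c1_2) < 1 - beta:  # power constraint
                        continue
                    en0 = n1 + (1 - c0_1[r1]) * n2        # expected N under H0
                    if best is None or en0 < best[0]:
                        best = (en0, n1, r1, n, r)
    return best

en0, n1, r1, n, r = simon_optimal(p0=0.1, p1=0.3, alpha=0.05, beta=0.2)
print(f"stage 1: stop if <= {r1}/{n1} responses; reject H0 if > {r}/{n}; "
      f"E[N|H0] = {en0:.1f}")
```

The minimax variant mentioned in the abstract simply swaps the objective from `en0` to the maximal sample size `n` (breaking ties by `en0`).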
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel A.
2016-11-01
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
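The expected-cost idea can be sketched with a toy solver. Note one simplification: the code below fixes the number of calls in advance and minimizes the Monte Carlo expected total cost, whereas the paper's optimal-stopping formulation stops adaptively once the expected marginal improvement no longer justifies the cost per call:

```python
import random

random.seed(7)

COST_PER_CALL = 0.05  # assumed cost of one solver call, in objective units

def solver_call():
    """Toy stand-in for a randomized solver: with probability 0.2 it finds
    the optimum (objective 0), otherwise returns a mediocre solution."""
    return 0.0 if random.random() < 0.2 else random.uniform(0.5, 1.0)

def expected_total_cost(n_calls, n_trials=2000):
    """Monte Carlo estimate of E[best objective over n_calls] + c * n_calls."""
    total = 0.0
    for _ in range(n_trials):
        best = min(solver_call() for _ in range(n_calls))
        total += best + COST_PER_CALL * n_calls
    return total / n_trials

costs = {r: expected_total_cost(r) for r in range(1, 31)}
r_star = min(costs, key=costs.get)
print(f"best fixed number of calls: {r_star} "
      f"(expected total cost {costs[r_star]:.3f})")
```

Too few calls leave the objective high; too many make the per-call cost dominate. The minimum of this trade-off is the figure of merit the abstract describes.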
Risticevic, Sanja; DeEll, Jennifer R; Pawliszyn, Janusz
2012-08-17
Metabolomics currently represents one of the fastest growing high-throughput molecular analysis platforms, referring to the simultaneous and unbiased analysis of the metabolite pools constituting a particular biological system under investigation. In response to the ever increasing interest in reliable methods capable of obtaining a complete and accurate metabolomic snapshot for subsequent identification, quantification and profiling studies, the purpose of the current investigation is to test the feasibility of solid phase microextraction (SPME) for advanced fingerprinting of volatile and semivolatile metabolites in complex samples. In particular, the current study is focused on the development and optimization of SPME-comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC × GC-ToFMS) methodology for metabolite profiling of apples (Malus × domestica Borkh.). For the first time, GC × GC attributes in terms of molecular structure-retention relationships and utilization of the two-dimensional separation space on an orthogonal GC × GC setup were exploited in the field of SPME method optimization for complex sample analysis. Analytical performance data were assessed in terms of method precision when commercial coatings are employed in spiked metabolite aqueous sample analysis. The optimized method implemented the direct immersion SPME (DI-SPME) extraction mode, and its application to metabolite profiling of apples resulted in the tentative identification of 399 metabolites and a metabolite database far more comprehensive than those obtainable with classical one-dimensional GC approaches. Considering that specific metabolome constituents were reported for the first time in the current study, a valuable approach for future advanced fingerprinting studies in the field of fruit biology is proposed. The current study also intensifies the understanding of SPME
NASA Astrophysics Data System (ADS)
Brum, Daniel M.; Lima, Claudio F.; Robaina, Nicolle F.; Fonseca, Teresa Cristina O.; Cassella, Ricardo J.
2011-05-01
The present paper reports the optimization of Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized with respect to the experimental conditions for emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple-response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as the dependent variable. The three factors in the optimization were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of this solution. At optimum conditions, satisfactory results were obtained with an emulsion formed by mixing 4 mL of sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure, and recovery rates of 88-105%, 94-118% and 95-120% were verified for Cu, Fe and Pb, respectively.
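The multiple-response step, folding the three analyte responses into one global response, might be sketched as below. The normalize-and-average rule and the run data are illustrative assumptions, not the paper's actual procedure or measurements:

```python
def global_response(responses):
    """Combine analyte responses into one objective by normalizing each
    analyte to its maximum over the design and averaging (one common
    multiple-response choice; the original work may weight differently)."""
    maxima = {a: max(r[a] for r in responses) for a in responses[0]}
    return [sum(r[a] / maxima[a] for a in r) / len(r) for r in responses]

# Hypothetical absorbance responses at three Doehlert design points
runs = [
    {"Cu": 0.12, "Fe": 0.30, "Pb": 0.08},
    {"Cu": 0.15, "Fe": 0.25, "Pb": 0.10},
    {"Cu": 0.10, "Fe": 0.33, "Pb": 0.07},
]
scores = global_response(runs)
best_run = scores.index(max(scores))
print(best_run, [round(s, 3) for s in scores])
```

The design point with the highest global response balances sensitivity across all three analytes, which is the point of measuring them in the same emulsion.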
Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea
2014-03-15
To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with the objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere from no efficiency gain to a large one (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method.
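A stratified sketch of the optimal phase-two selection probabilities: sample with probability proportional to the conditional SD of Y given W divided by the square root of the per-unit cost. This Neyman-type rule is a special case used here for illustration (the paper derives more general optimal functions), and the strata and numbers are invented:

```python
import math

def optimal_sampling_probs(strata, budget):
    """Phase-two selection probability for stratum w proportional to
    sd(Y | W = w) / sqrt(cost_w), scaled so the expected phase-two cost
    meets the budget (a Neyman-type allocation; the paper's optimality
    results are more general than this stratified sketch)."""
    u = {w: s["sd"] / math.sqrt(s["cost"]) for w, s in strata.items()}
    scale = budget / sum(s["n"] * s["sd"] * math.sqrt(s["cost"])
                         for s in strata.values())
    return {w: min(1.0, scale * u[w]) for w in strata}

# Hypothetical strata of the auxiliary variable W (sizes, SDs, unit costs)
strata = {
    "low W":  {"n": 500, "sd": 5.0,  "cost": 1.0},
    "high W": {"n": 500, "sd": 20.0, "cost": 1.0},
}
probs = optimal_sampling_probs(strata, budget=250)
print(probs)  # the noisier stratum is sampled four times as often
```

This is the mechanism behind the abstract's observation that greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains.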
NASA Astrophysics Data System (ADS)
Rohatgi, Ajeet; Hilali, Mohamed M.; Nakayashiki, Kenta
2004-04-01
High-quality screen-printed contacts were achieved on a high-sheet-resistance emitter (~100 Ω/sq.) using PV168 Ag paste and rapid co-firing in the belt furnace. The optimized co-firing cycle developed for a 100 Ω/sq. emitter produced 16.1% efficient 4 cm2 planar edge-defined film-fed grown (EFG) ribbon Si cells with a low series-resistance (0.8 Ω cm2), high fill factor of ~0.77, along with very significant bulk lifetime enhancement from 3 to 100 μs. This represents the highest-efficiency screen-printed EFG Si cells with single-layer antireflection (AR) coating. These cells were fabricated using a simple process involving POCl3 diffusion for a high-sheet-resistance emitter, SiNx AR coating and rapid cofiring of Ag grid and Al-doped back-surface field in a conventional belt furnace. The rapid cofiring process also prevented junction shunting while maintaining very effective SiNx-induced hydrogen passivation of defects, resulting in an average bulk lifetime exceeding 100 μs.
Lee, Seunggeun; Emond, Mary J.; Bamshad, Michael J.; Barnes, Kathleen C.; Rieder, Mark J.; Nickerson, Deborah A.; Christiani, David C.; Wurfel, Mark M.; Lin, Xihong
2012-01-01
We propose in this paper a unified approach for testing the association between rare variants and phenotypes in sequencing association studies. This approach maximizes power by adaptively using the data to optimally combine the burden test and the nonburden sequence kernel association test (SKAT). Burden tests are more powerful when most variants in a region are causal and the effects are in the same direction, whereas SKAT is more powerful when a large fraction of the variants in a region are noncausal or the effects of causal variants are in different directions. The proposed unified test maintains the power in both scenarios. We show that the unified test corresponds to the optimal test in an extended family of SKAT tests, which we refer to as SKAT-O. The second goal of this paper is to develop a small-sample adjustment procedure for the proposed methods for the correction of conservative type I error rates of SKAT family tests when the trait of interest is dichotomous and the sample size is small. Both small-sample-adjusted SKAT and the optimal unified test (SKAT-O) are computationally efficient and can easily be applied to genome-wide sequencing association studies. We evaluate the finite sample performance of the proposed methods using extensive simulation studies and illustrate their application using the acute-lung-injury exome-sequencing data of the National Heart, Lung, and Blood Institute Exome Sequencing Project. PMID:22863193
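The rho-grid idea behind SKAT-O can be sketched with a permutation version. The real SKAT-O obtains analytic p-values from mixtures of chi-square distributions; everything below, including the score statistics and the toy genotypes, is an illustrative simplification:

```python
import random

random.seed(3)

def q_stats(genotypes, yc):
    # per-variant score s_j = sum_i G[j][i] * (y_i - ybar)
    scores = [sum(g_i * y_i for g_i, y_i in zip(g, yc)) for g in genotypes]
    return sum(scores) ** 2, sum(s * s for s in scores)  # (burden, SKAT)

def skat_o_perm(genotypes, pheno, rhos=(0, 0.25, 0.5, 0.75, 1), n_perm=300):
    ybar = sum(pheno) / len(pheno)
    yc = [y - ybar for y in pheno]          # center the phenotype
    null, perm = [], list(yc)
    for _ in range(n_perm):
        random.shuffle(perm)
        null.append(q_stats(genotypes, perm))

    def min_p(q):
        # smallest permutation p-value of Q_rho over the rho grid,
        # where Q_rho = rho * Q_burden + (1 - rho) * Q_SKAT
        qb, qs = q
        return min(
            sum(rho * nb + (1 - rho) * ns >= rho * qb + (1 - rho) * qs
                for nb, ns in null) / n_perm
            for rho in rhos)

    # correct for taking the minimum over rho via the same permutations
    p_obs = min_p(q_stats(genotypes, yc))
    return sum(min_p(q) <= p_obs for q in null) / n_perm

# Toy data: 40 subjects, 4 rare variants, the first two enriched in cases
n = 40
pheno = [1] * 20 + [0] * 20
genotypes = [[1 if random.random() < (0.3 if i < 20 else 0.05) else 0
              for i in range(n)] for _ in range(2)]
genotypes += [[1 if random.random() < 0.1 else 0 for _ in range(n)]
              for _ in range(2)]
p = skat_o_perm(genotypes, pheno)
print("permutation SKAT-O p =", p)
```

rho = 1 recovers the burden statistic, rho = 0 recovers SKAT, and taking the minimum over the grid (with a correction for that minimization) is the adaptive combination the abstract describes.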
Lucchinetti, E; Stüssi, E
2004-01-01
Measuring the elasticity constants of biological materials is often subject to important constraints, such as the limited size or the irregular geometry of the samples. In this paper, the identification approach as applied to the specific problem of accurately retrieving the material properties of small bone samples from a measured displacement field is discussed. The identification procedure can be formulated as an optimization problem with the goal of minimizing the difference between computed and measured displacements by searching for an appropriate set of material parameters using dedicated algorithms. Alternatively, the back-calculation of the material properties from displacement maps can be implemented using artificial neural networks. In a practical situation, however, measurement errors strongly affect the identification results, calling for robust optimization approaches in order to accurately retrieve the material properties from error-polluted sample deformation maps. Using a simple model problem, the performances of both classical and neural-network-driven optimization are compared. When performed before the collection of experimental data, this evaluation can be very helpful in pinpointing potential problems with the envisaged experiments, such as the need for a sufficient signal-to-noise ratio, which is particularly important when working with small tissue samples such as specimens cut from rodent bones or single bone trabeculae.
Dorn-In, Samart; Bassitta, Rupert; Schwaiger, Karin; Bauer, Johann; Hölzel, Christina S
2015-06-01
Universal primers targeting the bacterial 16S-rRNA-gene allow quantification of the total bacterial load in variable sample types by qPCR. However, many universal primer pairs also amplify DNA of plants or even of archaea and other eukaryotic cells. With these primers, the total bacterial load might be misestimated whenever samples contain high amounts of non-target DNA. Thus, this study aimed to provide primer pairs that are suitable for quantification and identification of bacterial DNA in samples such as feed, spices and sample material from digesters. For 42 primers, mismatches to the sequences of plant chloroplasts and mitochondria were evaluated. Six primer pairs were further analyzed with regard to whether they anneal to DNA of archaea, animal tissue and fungi. Subsequently, they were tested with sample matrices such as plants, feed, feces, soil and environmental samples. To this purpose, the target DNA in the samples was quantified by qPCR. The PCR products of plant and feed samples were further processed with the Single Strand Conformation Polymorphism method followed by sequence analysis. The sequencing results revealed that primer pair 335F/769R amplified only bacterial DNA in samples such as plants and animal feed, in which the DNA of plants prevailed.
Wójciak-Kosior, Magdalena; Szwerc, Wojciech; Strzemski, Maciej; Wichłacz, Zoltan; Sawicki, Jan; Kocjan, Ryszard; Latalski, Michał; Sowa, Ireneusz
2017-04-01
Trace analysis plays an important role in medicine for the diagnosis of various disorders; however, appropriate sample preparation is required, usually including mineralization. Although graphite furnace atomic absorption spectrometry (GF AAS) allows the investigation of biological samples such as blood, serum and plasma without this step, it is rarely used for direct analysis because the residues of the rich organic matrix inside the furnace are difficult to remove, which may cause spectral/matrix interferences and decrease the lifetime of the graphite tube. In our work, a procedure for the determination of Se, Cr, Mn, Co, Ni, Cd and Pb in whole blood samples with minimum sample pre-treatment was developed using the high-resolution continuum source GF AAS technique. The pyrolysis and atomization temperatures as well as the signal integration time were optimized to obtain the highest intensity and repeatability of the analytical signal. Moreover, owing to a modification of the apparatus, an additional step with minimal argon flow and maximal air flow was added to the graphite furnace temperature program during the pyrolysis stage to increase the oxidative conditions for better matrix removal. The accuracy and precision of the optimized method were verified using the certified reference material (CRM) Seronorm Trace Elements Whole Blood L-1, and the developed method was applied to the trace analysis of blood samples from volunteer patients of the Orthopedics Department.
NASA Astrophysics Data System (ADS)
Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.
2016-03-01
Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal set of input registration parameters is critical to achieving the best registration performance with a given algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The registration error for a given parameter set was computed using the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
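The optimization layer can be sketched independently of the registration itself. Below, a fast-annealing schedule with an adaptive proposal scale (a much-simplified stand-in for the paper's FSA-AMC) minimizes an invented non-convex surrogate for the mTRE; the real objective would run the optical-flow DIR and evaluate landmark errors:

```python
import math
import random

random.seed(11)

def mtre_surrogate(params):
    """Stand-in for the landmark-based mean target registration error:
    a non-convex toy function with its basin near (0.3, -0.2)."""
    x, y = params
    return ((x - 0.3) ** 2 + (y + 0.2) ** 2
            + 0.1 * math.sin(8 * x) * math.sin(8 * y) + 0.1)

def fast_sa(cost, x0, n_iter=4000, t0=1.0):
    """Fast simulated annealing (T_k = T0 / k) with an adaptive Gaussian
    proposal scale; a simplification of the FSA-AMC scheme."""
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    step, accepted = 0.5, 0
    for k in range(1, n_iter + 1):
        t = t0 / k
        cand = [xi + random.gauss(0, step) for xi in x]
        fc = cost(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            accepted += 1
        if fx < fbest:
            best, fbest = list(x), fx
        if k % 200 == 0:  # adapt the step toward a moderate acceptance rate
            rate = accepted / 200
            step *= 1.5 if rate > 0.4 else (0.7 if rate < 0.2 else 1.0)
            accepted = 0
    return best, fbest

best, fbest = fast_sa(mtre_surrogate, [2.0, 2.0])
print(best, fbest)
```

Tracking the best-so-far point (not just the current state) is what makes the annealer usable as a parameter search, since the chain itself keeps fluctuating.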
NASA Technical Reports Server (NTRS)
Emmitt, G. D.; Seze, G.
1991-01-01
Simulated cloud/hole fields as well as Landsat imagery are used in a computer model to evaluate several proposed sampling patterns and shot management schemes for pulsed space-based Doppler lidars. Emphasis is placed on two proposed sampling strategies - one obtained from a conically scanned single telescope and the other from four fixed telescopes that are sequentially used by one laser. The question of whether there are any sampling patterns that maximize the number of resolution areas with vertical soundings to the PBL is addressed.
Parametrically defined differential equations
NASA Astrophysics Data System (ADS)
Polyanin, A. D.; Zhurov, A. I.
2017-01-01
The paper deals with nonlinear ordinary differential equations defined parametrically by two relations. It proposes techniques to reduce such equations, of the first or second order, to standard systems of ordinary differential equations. It obtains the general solution to some classes of nonlinear parametrically defined ODEs dependent on arbitrary functions. It outlines procedures for the numerical solution of the Cauchy problem for parametrically defined differential equations.
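As a generic illustration of this kind of reduction (the paper's techniques cover wider classes, including second-order equations), consider a first-order ODE whose two defining relations involve the parameter t and the slope p = y'(x):

```latex
% Equation defined parametrically by two relations:
%   x = f(t, p), \quad y = g(t, p), \quad \text{where } p = y'(x).
% Differentiating both relations in t and imposing dy = p\,dx:
g_t + g_p \frac{dp}{dt} = p\left(f_t + f_p \frac{dp}{dt}\right)
\quad\Longrightarrow\quad
\frac{dp}{dt} = \frac{p\,f_t - g_t}{g_p - p\,f_p},
```

Integrating this standard ODE for p(t) and substituting back gives the solution in parametric form, x = f(t, p(t)), y = g(t, p(t)), which is the shape of reduction the abstract describes.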
Cardozo, Manuelle C; Cavalcante, Dannuza D; Silva, Daniel L F; Santos, Walter N L Dos; Bezerra, Marcos A
2016-09-01
A method was developed for the determination of total antimony in hair samples from patients undergoing chemotherapy against Leishmaniasis based on the administration of pentavalent antimonial drugs. The method is based on microwave-assisted digestion of the samples in a pressurized system, reduction of Sb5+ to Sb3+ with KI solution (10% w/v) in ascorbic acid (2% w/v), and its subsequent determination by hydride generation atomic fluorescence spectrometry (HG-AFS). The proportions of each component (HCl, HNO3 and water) used in the digestion were studied by applying a constrained mixture design. The optimal proportions found were 50% water, 25% HNO3 and 25% HCl. Variables involved in the generation of antimony hydride were optimized using a Doehlert design, revealing that good sensitivity is obtained when using 2.0% w/v NaBH4 and 4.4 mol L-1 HCl. Under the optimum experimental conditions, the method allows the determination of antimony in hair samples with detection and quantification limits of 1.4 and 4.6 ng g-1, respectively, and precision expressed as relative standard deviation (RSD) of 2.8% (n = 10, at 10.0 mg L-1). The developed method was applied to the analysis of hair samples from patients taking medication against Leishmaniasis.
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sampling times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine the optimal sample size, optimal sampling times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse vs. rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients.
Noss, Ilka; Doekes, Gert; Sander, Ingrid; Heederik, Dick J J; Thorne, Peter S; Wouters, Inge M
2010-08-01
We recently introduced a passive dust sampling method for airborne endotoxin and glucan exposure assessment-the electrostatic dustfall collector (EDC). In this study, we assessed the effects of different storage and extraction procedures on measured endotoxin and glucan levels, using 12 parallel EDC samples from 10 low exposed indoor environments. Additionally, we compared 2- and 4-week sampling with the prospect of reaching higher dust yields. Endotoxin concentrations were highest after extraction with pyrogen-free water (pf water) + Tween. Phosphate-buffered saline (PBS)-Tween yielded significantly (44%) lower levels, and practically no endotoxin was detected after extraction in pf water without Tween. Glucan levels were highest after extraction in PBS-Tween at 120 degrees C, whereas extracts made in NaOH at room temperature or 120 degrees C were completely negative. Direct extraction from the EDC cloth or sequential extraction after a preceding endotoxin extraction yielded comparable glucan levels. Sample storage at different temperatures before extraction did not affect endotoxin and glucan concentrations. Doubling the sampling duration yielded similar endotoxin and only 50% higher glucan levels. In conclusion, of the tested variables, the extraction medium was the predominant factor affecting endotoxin and glucan yields.
Ou, Chunping; St-Hilaire, André; Ouarda, Taha B M J; Conly, F Malcolm; Armstrong, Nicole; Khalil, Bahaa; Proulx-McInnis, Sandra
2012-12-01
The assessment of the adequacy of sampling locations is an important aspect of validating an effective and efficient water quality monitoring network. Two geostatistical approaches (kriging and Moran's I) are presented to assess multiple sampling locations. A flexible and comprehensive framework for the selection of multiple sampling locations for multiple variables was developed by coupling the geostatistical approaches with principal component analysis (PCA) and a fuzzy optimal model (FOM). The FOM was used in the integrated assessment of both multiple principal components and multiple geostatistical approaches. These integrated methods were successfully applied to the assessment of two independent water quality monitoring networks (WQMNs) of Lake Winnipeg, Canada, which included 14 and 30 stations, respectively, from 2006 to 2010.
Zeeman, Matthias J; Werner, Roland A; Eugster, Werner; Siegwolf, Rolf T W; Wehrle, Günther; Mohn, Joachim; Buchmann, Nina
2008-12-01
The application of (13)C/(12)C in ecosystem-scale tracer models for CO(2) in air requires accurate measurements of the mixing ratios and stable isotope ratios of CO(2). To increase measurement reliability and data intercomparability, as well as to shorten analysis times, we have improved an existing field sampling setup with portable air sampling units and developed a laboratory setup for the analysis of the delta(13)C of CO(2) in air by isotope ratio mass spectrometry (IRMS). The changes consist of (a) optimization of sample and standard gas flow paths, (b) additional software configuration, and (c) automation of liquid nitrogen refilling for the cryogenic trap. We achieved a precision better than 0.1 per thousand and an accuracy of 0.11 +/- 0.04 per thousand for the measurement of delta(13)C of CO(2) in air and unattended operation of measurement sequences up to 12 h.
Soylak, Mustafa; Tuzen, Mustafa; Souza, Anderson Santos; das Graças Andrade Korn, Maria; Ferreira, Sérgio Luis Costa
2007-10-22
The present paper describes the development of a microwave-assisted digestion procedure for the determination of zinc, copper and nickel in tea samples by flame atomic absorption spectrometry (FAAS). The optimization step was performed using a full factorial design (2^3) involving the factors: composition of the acid mixture (CMA), microwave power (MP) and radiation time (RT). The experiments of this factorial design were carried out using a certified reference material of tea, GBW 07605, furnished by the National Research Centre for Certified Reference Materials, China, with the metal recoveries considered as the response. The relative standard deviations of the method were below 8% for the three elements. The proposed procedure was used for the determination of copper, zinc and nickel in several samples of tea from Turkey. For the 10 tea samples analyzed, the concentrations found for copper, zinc and nickel varied over 6.4-13.1, 7.0-16.5 and 3.1-5.7 microg g(-1), respectively.
Farooq, Hashim; Courtier-Murias, Denis; Soong, Ronald; Masoom, Hussain; Maas, Werner; Fey, Michael; Kumar, Rajeev; Monette, Martine; Stronks, Henry; Simpson, Myrna J; Simpson, André J
2013-03-01
A method is presented that combines Carr-Purcell-Meiboom-Gill (CPMG) during acquisition with either selective or nonselective excitation to produce a considerable intensity enhancement and a simultaneous loss in chemical shift information. A range of parameters can theoretically be optimized very rapidly on the basis of the signal from the entire sample (hard excitation) or spectral subregion (soft excitation) and should prove useful for biological, environmental, and polymer samples that often exhibit highly dispersed and broad spectral profiles. To demonstrate the concept, we focus on the application of our method to T(1) determination, specifically for the slowest relaxing components in a sample, which ultimately determines the optimal recycle delay in quantitative NMR. The traditional inversion recovery (IR) pulse program is combined with a CPMG sequence during acquisition. The slowest relaxing components are selected with a shaped pulse, and then, low-power CPMG echoes are applied during acquisition with intervals shorter than chemical shift evolution (RCPMG) thus producing a single peak with an SNR commensurate with the sum of the signal integrals in the selected region. A traditional (13)C IR experiment is compared with the selective (13)C IR-RCPMG sequence and yields the same T(1) values for samples of lysozyme and riverine dissolved organic matter within error. For lysozyme, the RCPMG approach is ~70 times faster, and in the case of dissolved organic matter is over 600 times faster. This approach can be adapted for the optimization of a host of parameters where chemical shift information is not necessary, such as cross-polarization/mixing times and pulse lengths.
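The role of T(1) here can be made concrete with a toy inversion-recovery fit. The model M(t) = M0(1 - 2 e^(-t/T1)) assumes perfect inversion, and the synthetic data and plain grid search below are illustrative; the paper's RCPMG detection changes how the signal is read out, not this relaxation model:

```python
import math
import random

random.seed(5)

T1_TRUE, M0 = 2.5, 1.0  # seconds; assumed values for the synthetic data

def ir_signal(t, m0, t1):
    """Inversion-recovery magnetization: M(t) = M0 * (1 - 2 * exp(-t / T1))."""
    return m0 * (1 - 2 * math.exp(-t / t1))

delays = [0.1, 0.3, 0.6, 1.0, 2.0, 4.0, 8.0, 15.0]
data = [ir_signal(t, M0, T1_TRUE) + random.gauss(0, 0.01) for t in delays]

def fit_t1(delays, data):
    """Least-squares grid search for T1, with M0 taken from the longest
    delay (M(t >> T1) -> M0)."""
    m0 = data[-1]
    best_t1, best_sse = None, float("inf")
    for i in range(1, 1001):
        t1 = i * 0.01  # search 0.01 .. 10 s
        sse = sum((ir_signal(t, m0, t1) - d) ** 2 for t, d in zip(delays, data))
        if sse < best_sse:
            best_t1, best_sse = t1, sse
    return best_t1

t1_hat = fit_t1(delays, data)
print(f"fitted T1 = {t1_hat:.2f} s")
```

In quantitative NMR the recycle delay is then typically set to roughly five times the slowest T1, which is why speeding up the T1 measurement itself, as the abstract describes, matters in practice.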
NASA Astrophysics Data System (ADS)
Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert
2015-06-01
Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match the capabilities of current sensors. Hence, agencies need devices capable of higher vehicle passing speeds to enable continuous weight measurements at mainline speeds. Current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experience of operating engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition for high-speed WIM measurements. This study develops the appropriate sample-rate requirements as a function of vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
Sorzano, Carlos Oscar S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar
2015-06-01
Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, can accommodate any dosing regimen, and allows flexible therapeutic strategies.
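The minimax idea can be illustrated on a toy one-compartment IV bolus model, C(t) = (D/V) e^(-kt): draw parameter sets from a population prior, compute the Fisher information of a candidate sampling schedule for each draw, and minimize the worst-case parameter variance. This sketch substitutes a plain random search for the paper's genetic algorithm, and the model, prior, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fim(times, k, V, sigma=0.1, dose=100.0):
    # Fisher information for (k, V) in the one-compartment IV bolus
    # model C(t) = (dose/V) * exp(-k t) with additive Gaussian noise.
    dC_dk = -(dose / V) * times * np.exp(-k * times)
    dC_dV = -(dose / V**2) * np.exp(-k * times)
    J = np.stack([dC_dk, dC_dV], axis=1)          # n_samples x 2 sensitivities
    return J.T @ J / sigma**2

def worst_case_uncertainty(times, priors):
    # minimax objective: over subjects drawn from the prior, the largest
    # parameter variance implied by the inverse Fisher information
    times = np.asarray(times, dtype=float)
    worst = 0.0
    for k, V in priors:
        cov = np.linalg.inv(fim(times, k, V) + 1e-9 * np.eye(2))
        worst = max(worst, cov.diagonal().max())
    return worst

# population prior on (k, V): lognormal, purely illustrative values
priors = list(zip(rng.lognormal(np.log(0.2), 0.2, 50),
                  rng.lognormal(np.log(10.0), 0.2, 50)))

# random search over 3-point schedules in (0.5, 24] h (GA stand-in)
best_t, best_obj = None, np.inf
for _ in range(1000):
    t = np.sort(rng.uniform(0.5, 24.0, 3))
    obj = worst_case_uncertainty(t, priors)
    if obj < best_obj:
        best_t, best_obj = t, obj
```

A real implementation would replace the random search with a genetic algorithm and average the Fisher information over Monte Carlo draws, as the abstract describes.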
Chen, Laiguo; Huang, Yumei; Han, Shuang; Feng, Yongbin; Jiang, Guo; Tang, Caiming; Ye, Zhixiang; Zhan, Wei; Liu, Ming; Zhang, Sukun
2013-01-25
Accurately quantifying short chain chlorinated paraffins (SCCPs) in soil samples with gas chromatography coupled with electron capture negative ionization mass spectrometry (GC-ECNI-MS) is difficult because many other polychlorinated pollutants are present in the sample matrices. These pollutants (e.g., polychlorinated biphenyls (PCBs), organochlorine pesticides (OCPs) and toxaphene) can cause serious interferences during SCCP analysis with GC-MS. Four cleanup columns packed with different adsorbents, including silica gel, Florisil and alumina, were investigated in this study to determine their performance in separating interfering pollutants from SCCPs. The experimental results suggest that the optimum cleanup procedure uses a silica gel column and a multilayer silica gel-Florisil composite column. This procedure completely separated 22 PCB congeners, 23 OCPs and three toxaphene congeners from SCCPs; however, p,p'-DDD, cis-nonachlor and o,p'-DDD were not completely removed, and only 53% of the total toxaphene was removed. The optimized method was successfully applied for removing interfering pollutants from real soil samples. SCCPs in 17 soil samples from different land use areas within a suburban region were analyzed with the established method. The concentrations of SCCPs in these samples were between 7 and 541 ng g(-1) (mean: 84 ng g(-1)). Similar SCCP homologue patterns were observed among the soil samples collected from different land use areas. In addition, lower chlorinated (Cl(6/7)) C(10)- and C(11)- SCCPs were the dominant congeners.
Zuloaga, O; Etxebarria, N; Fernández, L A; Madariaga, J M
2000-08-01
The microwave-assisted extraction (MAE), accelerated solvent extraction (ASE) and Soxhlet extraction of two isomers of hexachlorocyclohexane, alpha-HCH and gamma-HCH, from a polluted landfill soil have been optimized following different experimental designs. In the case of microwave-assisted extraction, the following variables were considered: pressure, extraction time, microwave power, percentage of acetone in the n-hexane mixture and solvent volume. When ASE was studied, the variables were pressure, temperature and extraction time. Finally, the percentage of acetone in the n-hexane mixture and the extraction time were the only variables studied for Soxhlet extraction. The concentrations obtained by the three extraction techniques were, within their experimental uncertainties, in good agreement. This agreement supports the use of both the ASE and MAE techniques in the routine determination of lindane in polluted soils and sediments.
NASA Astrophysics Data System (ADS)
Linnér, Elisabeth Schold; Morén, Max; Smed, Karl-Oskar; Nysjö, Johan; Strand, Robin
In this paper, we present LatticeLibrary, a C++ library for general processing of 2D and 3D images sampled on arbitrary lattices. The current implementation supports the Cartesian Cubic (CC), Body-Centered Cubic (BCC) and Face-Centered Cubic (FCC) lattices, and is designed to facilitate addition of other sampling lattices. We also introduce BccFccRaycaster, a plugin for the existing volume renderer Voreen, making it possible to view CC, BCC and FCC data, using different interpolation methods, with the same application. The plugin supports nearest neighbor and trilinear interpolation at interactive frame rates. These tools will enable further studies of the possible advantages of non-Cartesian lattices in a wide range of research areas.
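The core geometric fact behind BCC support, that the lattice is two interleaved cubic sublattices, can be sketched without the library (LatticeLibrary itself is C++; this Python fragment is an independent illustration, not its API). Finding the nearest BCC site reduces to rounding against each sublattice and keeping the closer candidate.

```python
import numpy as np

def nearest_bcc_site(p):
    """Nearest site of a unit body-centered cubic lattice to point p.

    BCC = simple cubic lattice (integer coordinates) plus a second
    cubic sublattice shifted by (0.5, 0.5, 0.5).
    """
    p = np.asarray(p, dtype=float)
    corner = np.round(p)                  # nearest cube-corner site
    center = np.round(p - 0.5) + 0.5      # nearest body-center site
    if np.sum((p - corner) ** 2) <= np.sum((p - center) ** 2):
        return corner
    return center

# the body center (0.5, 0.5, 0.5) is closer to this point than the origin is
site = nearest_bcc_site([0.4, 0.4, 0.4])
```

Nearest-neighbor interpolation on a BCC-sampled volume, one of the modes the BccFccRaycaster plugin supports, is exactly this lookup followed by reading the stored sample at the returned site.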
Zhuang, Joanna J; Zondervan, Krina; Nyberg, Fredrik; Harbron, Chris; Jawaid, Ansar; Cardon, Lon R; Barratt, Bryan J; Morris, Andrew P
2010-01-01
Genome-wide association (GWA) studies have proved extremely successful in identifying novel genetic loci contributing effects to complex human diseases. In doing so, they have highlighted the fact that many potential loci of modest effect remain undetected, partly due to the need for samples consisting of many thousands of individuals. Large-scale international initiatives, such as the Wellcome Trust Case Control Consortium, the Genetic Association Information Network, and the database of genetic and phenotypic information, aim to facilitate discovery of modest-effect genes by making genome-wide data publicly available, allowing information to be combined for the purpose of pooled analysis. In principle, disease or control samples from these studies could be used to increase the power of any GWA study via judicious use as “genetically matched controls” for other traits. Here, we present the biological motivation for the problem and the theoretical potential for expanding the control group with publicly available disease or reference samples. We demonstrate that a naïve application of this strategy can greatly inflate the false-positive error rate in the presence of population structure. As a remedy, we make use of genome-wide data and model selection techniques to identify “axes” of genetic variation which are associated with disease. These axes are then included as covariates in association analysis to correct for population structure, which can result in increases in power over standard analysis of genetic information from the samples in the original GWA study. Genet. Epidemiol. 34: 319–326, 2010. © 2010 Wiley-Liss, Inc. PMID:20088020
NASA Astrophysics Data System (ADS)
Mao, Zhiyi; Shan, Ruifeng; Wang, Jiajun; Cai, Wensheng; Shao, Xueguang
2014-07-01
Polyphenols in plant samples have been extensively studied because phenolic compounds are ubiquitous in plants and can be used as antioxidants in promoting human health. A method for rapid determination of three phenolic compounds (chlorogenic acid, scopoletin and rutin) in plant samples using near-infrared diffuse reflectance spectroscopy (NIRDRS) is studied in this work. Partial least squares (PLS) regression was used for building the calibration models, and the effects of spectral preprocessing and variable selection on the models were investigated to optimize them. The results show that individual spectral preprocessing and variable selection steps have little or no influence on the models, but combining the techniques can significantly improve them. The combination of continuous wavelet transform (CWT) for removing the variant background, multiplicative scatter correction (MSC) for correcting the scattering effect and randomization test (RT) for selecting the informative variables was found to be the best way to build the optimal models. For validation of the models, the polyphenol contents in an independent sample set were predicted. The correlation coefficients between the predicted values and the contents determined by high performance liquid chromatography (HPLC) analysis are as high as 0.964, 0.948 and 0.934 for chlorogenic acid, scopoletin and rutin, respectively.
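Of the preprocessing steps named here, multiplicative scatter correction is compact enough to sketch: each spectrum is regressed on a reference spectrum (commonly the mean spectrum), and the fitted offset and slope, which model additive and multiplicative scatter, are removed. The synthetic sine-shaped "spectra" below are an illustrative stand-in for NIR data.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on a
    reference (here, the mean spectrum) and remove the fitted offset
    and slope, leaving chemically driven variation."""
    spectra = np.asarray(spectra, dtype=float)
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)   # s ~ slope*ref + offset
        corrected[i] = (s - offset) / slope
    return corrected

# two "spectra" that differ only by scatter (gain + offset) collapse
# onto the same curve after correction
base = np.sin(np.linspace(0, 3, 100))
spectra = np.array([1.2 * base + 0.3, 0.8 * base - 0.1])
out = msc(spectra)
```

In a full pipeline like the one described, MSC would be applied after CWT background removal and before RT variable selection and PLS calibration.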
Latrous El Atrache, Latifa; Ben Sghaier, Rafika; Bejaoui Kefi, Bochra; Haldys, Violette; Dachraoui, Mohamed; Tortajada, Jeanine
2013-12-15
An experimental design was applied for the optimization of extraction process of carbamates pesticides from surface water samples. Solid phase extraction (SPE) of carbamates compounds and their determination by liquid chromatography coupled to electrospray mass spectrometry detector were considered. A two level full factorial design 2(k) was used for selecting the variables which affected the extraction procedure. Eluent and sample volumes were statistically the most significant parameters. These significant variables were optimized using Doehlert matrix. The developed SPE method included 200mg of C-18 sorbent, 143.5 mL of water sample and 5.5 mL of acetonitrile in the elution step. For validation of the technique, accuracy, precision, detection and quantification limits, linearity, sensibility and selectivity were evaluated. Extraction recovery percentages of all the carbamates were above 90% with relative standard deviations (R.S.D.) in the range of 3-11%. The extraction method was selective and the detection and quantification limits were between 0.1 and 0.5 µg L(-1), and 1 and 3 µg L(-1), respectively.
Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Picotte, Joshua J.; Howard, Danny; Smith, Kelcy; Nelson, Kurtis
2016-01-01
Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
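The sampling-strategy experiment, sweeping training-data fractions and rule-count constraints over randomized replications and comparing mean absolute differences, can be sketched with a deliberately simple stand-in for the regression tree: a binned-mean model whose number of bins plays the role of the number of rules. The data, fractions, and rule counts below are illustrative, not the study's Landsat/NDVI setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def binned_mean_model(x_train, y_train, x_test, n_rules):
    """Stand-in for a regression tree: split the predictor range into
    `n_rules` intervals and predict the training mean within each."""
    edges = np.linspace(x_train.min(), x_train.max(), n_rules + 1)
    idx_tr = np.clip(np.searchsorted(edges, x_train, side="right") - 1,
                     0, n_rules - 1)
    idx_te = np.clip(np.searchsorted(edges, x_test, side="right") - 1,
                     0, n_rules - 1)
    means = np.array([y_train[idx_tr == b].mean() if np.any(idx_tr == b)
                      else y_train.mean() for b in range(n_rules)])
    return means[idx_te]

x = rng.uniform(0, 10, 500)
y = np.sin(x) + rng.normal(0, 0.2, 500)      # proxy for an NDVI response

results = {}
for frac in (0.5, 0.8):                      # training-data fraction
    for n_rules in (2, 6, 50):               # "rule number" constraint
        mads = []
        for _ in range(20):                  # randomized replications
            perm = rng.permutation(len(x))
            n_tr = int(frac * len(x))
            tr, te = perm[:n_tr], perm[n_tr:]
            pred = binned_mean_model(x[tr], y[tr], x[te], n_rules)
            mads.append(np.mean(np.abs(pred - y[te])))
        results[(frac, n_rules)] = np.mean(mads)   # testing MAD
```

Averaging the MAD across replications, and inspecting its spread, is what lets the study call one training-fraction/rule-count combination optimal rather than an artifact of a single split.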
Togashi, Kazutaka; Mutaguchi, Kuninori; Komuro, Setsuko; Kataoka, Makoto; Yamazaki, Hiroshi; Yamashita, Shinji
2016-08-01
In current approaches for new drug development, highly sensitive and robust analytical methods for the determination of test compounds in biological samples are essential. These analytical methods should be optimized for every target compound. However, for biological samples that contain multiple compounds as new drug candidates obtained by cassette dosing tests, it would be preferable to develop a single method that allows the determination of all compounds at once. This study aims to establish a systematic approach that enables a selection of the most appropriate pretreatment method for multiple target compounds without the use of their chemical information. We investigated the retention times of 27 known compounds under different mobile phase conditions and determined the required pretreatment of human plasma samples using several solid-phase and liquid-liquid extractions. From the relationship between retention time and recovery in a principal component analysis, appropriate pretreatments were categorized into several types. Based on the category, we have optimized a pretreatment method for the identification of three calcium channel blockers in human plasma. Plasma concentrations of these drugs in a cassette-dose clinical study at microdose level were successfully determined with a lower limit of quantitation of 0.2 pg/mL for diltiazem, 1 pg/mL for nicardipine, and 2 pg/mL for nifedipine.
Sarrut, Morgan; Rouvière, Florent; Heinisch, Sabine
2017-01-23
This study was devoted to the search for conditions leading to highly efficient sub-hour separations of complex peptide samples with the objective of coupling to mass spectrometry. In this context, conditions for one-dimensional reversed phase liquid chromatography (1D-RPLC) were optimized on the basis of a kinetic approach, while conditions for on-line comprehensive two-dimensional liquid chromatography using reversed phase in both dimensions (on-line RPLCxRPLC) were optimized on the basis of a Pareto-optimal approach. Maximizing the peak capacity while minimizing the dilution factor for different analysis times (down to 5 min) were the two objectives under consideration. For gradient times between 5 and 60 min, 15 cm was found to be the best column length in RPLC with sub-2 μm particles under 800 bar as system pressure. In RPLCxRPLC, for less than one hour as first dimension gradient time, the sampling rate was found to be a key parameter in addition to conventional parameters including column dimension, particle size, flow-rate and gradient conditions in both dimensions. It was shown that the optimum sampling rate was as low as one fraction per peak for very short gradient times (i.e. below 10 min). The quality descriptors obtained under optimized RPLCxRPLC conditions were compared to those obtained under optimized RPLC conditions. Our experimental results for peptides, obtained with state-of-the-art instrumentation, showed that RPLCxRPLC could outperform 1D-RPLC for gradient times longer than 5 min. In 60 min, the same peak intensity (same dilution) was observed with both techniques but with a 3-fold lower injected amount in RPLCxRPLC. A significant increase of the signal-to-noise ratio, mainly due to a strong noise reduction, was observed in RPLCxRPLC-MS compared to 1D-RPLC-MS, making RPLCxRPLC-MS a promising technique for peptide identification in complex matrices.
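The Pareto-optimal selection used for the RPLCxRPLC conditions amounts to filtering candidate settings so that none kept is dominated by another that has both higher peak capacity and lower dilution. The candidate values below are invented for illustration, not the study's measurements.

```python
import numpy as np

def pareto_front(points):
    """Indices of Pareto-optimal rows for the two objectives
    (maximize column 0 = peak capacity, minimize column 1 = dilution)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, (cap_i, dil_i) in enumerate(pts):
        dominated = any(
            (cap_j >= cap_i and dil_j <= dil_i) and
            (cap_j > cap_i or dil_j < dil_i)
            for j, (cap_j, dil_j) in enumerate(pts) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# hypothetical RPLCxRPLC settings: (peak capacity, dilution factor)
candidates = [(900, 40), (1200, 60), (800, 35), (1500, 90), (1000, 80)]
front = pareto_front(candidates)   # (1000, 80) is dominated by (1200, 60)
```

Each retained index represents a defensible trade-off; the chemist then picks along the front according to the analysis-time budget, as done in the study for gradient times from 5 to 60 min.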
NASA Astrophysics Data System (ADS)
Metzger, Stefan; Burba, George; Burns, Sean P.; Blanken, Peter D.; Li, Jiahong; Luo, Hongyan; Zulueta, Rommel C.
2016-03-01
Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) are set to provide the ability of unbiased ecological inference across ecoclimatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analyzers are widely employed for eddy covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties and gas sampling systems, and requires correction. Here, we show that components of the gas sampling system can substantially contribute to such high-frequency attenuation, but their effects can be significantly reduced by careful system design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. For different models of rain caps, this frequency falls into the ranges of 2.5-16.5 Hz for CO2 and 2.4-14.3 Hz for H2O; for particulate filters, into 8.3-21.8 Hz for CO2 and 1.4-19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyzer cell. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor-capacitor theory, and NEON's final gas sampling system was developed on this basis.
Vaz, Sharmila; Cordier, Reinie; Boyes, Mark; Parsons, Richard; Joosten, Annette; Ciccarelli, Marina; Falkmer, Marita; Falkmer, Torbjorn
2016-01-01
An important characteristic of a screening tool is its discriminant ability, that is, its accuracy in distinguishing between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of the SDQ at scale, subscale and item levels, with a view to identifying the items that have the most informant discrepancies and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) intraclass correlations between parent and teacher ratings of children’s mental health on the SDQ were fair at the individual child level; b) the SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio, or the likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent, was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted. PMID:26771673
NASA Astrophysics Data System (ADS)
Liu, Junwen; Li, Jun; Ding, Ping; Zhang, Yanlin; Liu, Di; Shen, Chengde; Zhang, Gan
2017-04-01
Radiocarbon (14C) analysis is a unique tool that can be used to directly apportion organic carbon (OC) and elemental carbon (EC) into fossil and non-fossil fractions. In this study, a coupled carbon analyzer and high-vacuum setup was established to collect atmospheric OC and EC. We thoroughly investigated the correlations between 14C levels and mass recoveries of OC and EC using urban PM2.5 samples collected from a city in central China and found that: (1) the 14C signal of the OC fraction collected in the helium phase of the EUSAAR_2 protocol (200 °C for 120 s, 300 °C for 150 s, 450 °C for 180 s, and 650 °C for 180 s) was representative of the entire OC fraction, with a relative error of approximately 6%, and (2) after thermal treatments of 120 s at 200 °C, 150 s at 300 °C, and 180 s at 475 °C in an oxidative atmosphere (10% oxygen, 90% helium) and 180 s at 650 °C in helium, the remaining EC fraction sufficiently represented the 14C level of the entire EC, with a relative error of <10%. The average recovery of the OC and EC fractions for 14C analysis was 64 ± 7% (n = 5) and 87 ± 5% (n = 5), respectively. The fraction of modern carbon in the OC and EC of reference material (RM) 8785 was 0.564 ± 0.013 and 0.238 ± 0.006, respectively. Analysis of 14C levels in four selected PM2.5 samples in Xinxiang, China revealed that the relative contributions of fossil sources to OC and EC in the PM2.5 samples were 50.5 ± 5.8% and 81.4 ± 2.6%, respectively, which are comparable to findings in previous studies conducted in other Chinese cities. We confirmed that most urban EC derives from fossil fuel combustion processes, whereas both fossil and non-fossil sources have comparable and important impacts on OC. Our results suggested that water-soluble organic carbon (WSOC) and its pyrolytic carbon can be completely removed before EC collection via the method employed in this study.
Optimization of a gas sampling system for measuring eddy-covariance fluxes of H2O and CO2
NASA Astrophysics Data System (ADS)
Metzger, S.; Burba, G.; Burns, S. P.; Blanken, P. D.; Li, J.; Luo, H.; Zulueta, R. C.
2015-10-01
Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) will provide the ability of unbiased ecological inference across eco-climatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analysers are widely employed for eddy-covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties, and requires correction. Here, we show that the gas sampling system substantially contributes to high-frequency attenuation, which can be minimized by careful design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. For different models of rain caps, this frequency falls into the ranges of 2.5-16.5 Hz for CO2 and 2.4-14.3 Hz for H2O; for particulate filters, into 8.3-21.8 Hz for CO2 and 1.4-19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyser cell. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor-capacitor theory, and NEON's final gas sampling system was developed on this basis. The design consists of the stainless steel intake tube, a pleated mesh
Ursell, Luke K; Metcalf, Jessica L; Parfrey, Laura Wegener; Knight, Rob
2012-01-01
Rapidly developing sequencing methods and analytical techniques are enhancing our ability to understand the human microbiome, and, indeed, how we define the microbiome and its constituents. In this review we highlight recent research that expands our ability to understand the human microbiome on different spatial and temporal scales, including daily timeseries datasets spanning months. Furthermore, we discuss emerging concepts related to defining operational taxonomic units, diversity indices, core versus transient microbiomes and the possibility of enterotypes. Additional advances in sequencing technology and in our understanding of the microbiome will provide exciting prospects for exploiting the microbiota for personalized medicine. PMID:22861806
Chai, Xutian; Dong, Rui; Liu, Wenxian; Wang, Yanrong; Liu, Zhipeng
2017-03-31
Common vetch (Vicia sativa subsp. sativa L.) is a self-pollinating annual forage legume with worldwide importance. Here, we investigate the optimal number of individuals that may represent the genetic diversity of a single population, using Start Codon Targeted (SCoT) markers. Two cultivated varieties and two wild accessions were evaluated using five SCoT primers, also testing different sampling sizes: 1, 2, 3, 5, 8, 10, 20, 30, 40, 50, and 60 individuals. The results showed that the number of alleles and the Polymorphism Information Content (PIC) were different among the four accessions. Cluster analysis by Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and STRUCTURE placed the 240 individuals into four distinct clusters. The Expected Heterozygosity (HE) and PIC increased along with an increase in sampling size from 1 to 10 plants but did not change significantly when the sample sizes exceeded 10 individuals. At least 90% of the genetic variation in the four germplasms was represented when the sample size was 10. Finally, we concluded that 10 individuals could effectively represent the genetic diversity of one vetch population based on the SCoT markers. This study provides theoretical support for genetic diversity, cultivar identification, evolution, and marker-assisted selection breeding in common vetch.
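The saturation behavior reported here, diversity statistics plateauing once roughly 10 individuals are sampled, can be reproduced on simulated biallelic markers: resample subsets of increasing size from a population and track the mean expected heterozygosity (H_E = 1 - Σ p_i²). The simulated population below is an illustrative stand-in for the SCoT marker data.

```python
import numpy as np

rng = np.random.default_rng(7)

def expected_heterozygosity(genotypes):
    """Mean expected heterozygosity 2p(1-p) across loci for a matrix
    of biallelic genotypes coded 0/1/2 (rows = individuals)."""
    p = genotypes.mean(axis=0) / 2.0          # allele frequency per locus
    return np.mean(2.0 * p * (1.0 - p))

# simulated population: 60 individuals, 100 biallelic marker loci
freqs = rng.uniform(0.1, 0.9, 100)
population = rng.binomial(2, freqs, size=(60, 100)).astype(float)

# the sampling sizes tested in the study
sizes = [1, 2, 3, 5, 8, 10, 20, 30, 40, 50, 60]
mean_he = []
for n in sizes:
    reps = [expected_heterozygosity(
                population[rng.choice(60, n, replace=False)])
            for _ in range(200)]              # resampling replicates
    mean_he.append(np.mean(reps))
```

Under this model the curve rises steeply up to about 10 individuals and is nearly flat afterwards, mirroring the study's conclusion that 10 individuals capture most of a population's diversity.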
Wüst, Thomas; Landau, David P
2012-08-14
Coarse-grained (lattice-) models have a long tradition in aiding efforts to decipher the physical or biological complexity of proteins. Despite the simplicity of these models, however, numerical simulations are often computationally very demanding and the quest for efficient algorithms is as old as the models themselves. Expanding on our previous work [T. Wüst and D. P. Landau, Phys. Rev. Lett. 102, 178101 (2009)], we present a complete picture of a Monte Carlo method based on Wang-Landau sampling in combination with efficient trial moves (pull, bond-rebridging, and pivot moves) which is particularly suited to the study of models such as the hydrophobic-polar (HP) lattice model of protein folding. With this generic and fully blind Monte Carlo procedure, all currently known putative ground states for the most difficult benchmark HP sequences could be found. For most sequences we could also determine the entire energy density of states and, together with suitably designed structural observables, explore the thermodynamics and intricate folding behavior in the virtually inaccessible low-temperature regime. We analyze the differences between random and protein-like heteropolymers for sequence lengths up to 500 residues. Our approach is powerful both in terms of robustness and speed, yet flexible and simple enough for the study of many related problems in protein folding.
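The flat-histogram idea behind Wang-Landau sampling can be shown on a system much simpler than the HP protein model: a tiny 2D Ising lattice. The running estimate of the density of states g(E) biases acceptance by g(E_old)/g(E_new), so the walker visits all energy levels roughly equally; the modification factor is reduced stage by stage. This sketch uses plain single-spin flips and a fixed stage length instead of the authors' pull/bond-rebridging/pivot moves and flatness checks.

```python
import numpy as np

rng = np.random.default_rng(3)

# Wang-Landau estimation of the density of states g(E) for a 4x4
# Ising model with periodic boundaries.
L = 4
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # nearest-neighbor coupling, each bond counted once via periodic rolls
    return -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))

# possible energies are multiples of 4 in [-32, 32] (some are unreachable)
levels = {E: i for i, E in enumerate(range(-2 * N, 2 * N + 1, 4))}
log_g = np.zeros(len(levels))     # running estimate of ln g(E)
hist = np.zeros(len(levels))      # visit histogram
ln_f = 1.0                        # modification factor
E = energy(spins)

while ln_f > 1e-3:
    for _ in range(10000):
        i, j = rng.integers(L), rng.integers(L)
        trial = spins.copy()
        trial[i, j] *= -1
        E_new = energy(trial)
        # accept with probability min(1, g(E_old)/g(E_new))
        if np.log(rng.random()) < log_g[levels[E]] - log_g[levels[E_new]]:
            spins, E = trial, E_new
        log_g[levels[E]] += ln_f  # update at the current energy either way
        hist[levels[E]] += 1
    ln_f /= 2.0                   # refine the estimate each stage
```

Once ln g(E) is known, thermodynamic averages at any temperature follow by reweighting, which is how the authors reach the otherwise inaccessible low-temperature folding regime.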
NASA Astrophysics Data System (ADS)
Wüst, Thomas; Landau, David P.
2012-08-01
Coarse-grained (lattice-) models have a long tradition in aiding efforts to decipher the physical or biological complexity of proteins. Despite the simplicity of these models, however, numerical simulations are often computationally very demanding and the quest for efficient algorithms is as old as the models themselves. Expanding on our previous work [T. Wüst and D. P. Landau, Phys. Rev. Lett. 102, 178101 (2009)], 10.1103/PhysRevLett.102.178101, we present a complete picture of a Monte Carlo method based on Wang-Landau sampling in combination with efficient trial moves (pull, bond-rebridging, and pivot moves) which is particularly suited to the study of models such as the hydrophobic-polar (HP) lattice model of protein folding. With this generic and fully blind Monte Carlo procedure, all currently known putative ground states for the most difficult benchmark HP sequences could be found. For most sequences we could also determine the entire energy density of states and, together with suitably designed structural observables, explore the thermodynamics and intricate folding behavior in the virtually inaccessible low-temperature regime. We analyze the differences between random and protein-like heteropolymers for sequence lengths up to 500 residues. Our approach is powerful both in terms of robustness and speed, yet flexible and simple enough for the study of many related problems in protein folding.
Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter
2016-09-01
A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to develop standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of their low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins and served as quality criteria. A five-way analysis of variance (ANOVA) was applied to the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract: MALDI imaging experiments were planned according to a fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as input for the ANOVA. The parameters with the highest impact on quality were deduced and SOPs recommended.
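Testing several parameters in few runs, as in the eight-experiment DoE sets here, relies on two-level fractional factorial designs: a full 2^k design in a few base factors, with additional factor columns defined as products of base columns (aliasing generators). The generator choice below (D = AB, E = AC) is an illustrative example, not the study's actual design.

```python
import itertools
import numpy as np

def fractional_factorial(k_base, generators):
    """Two-level fractional factorial design matrix: a full 2^k design
    in the base factors, plus extra columns formed as products of the
    base-factor columns given in `generators` (a 2^(k-p) design)."""
    base = np.array(list(itertools.product([-1, 1], repeat=k_base)))
    extra = [np.prod(base[:, cols], axis=1) for cols in generators]
    return np.column_stack([base] + extra)

# 2^(5-2) design: 8 runs covering 5 two-level preparation parameters,
# with generators D = AB and E = AC (columns 0, 1, 2 are A, B, C)
design = fractional_factorial(3, [(0, 1), (0, 2)])
```

The price of the reduced run count is aliasing: each extra column's main effect is confounded with the interaction that generated it, which is why the subsequent ANOVA ranks parameters rather than estimating all interactions.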
Defining Mathematical Giftedness
ERIC Educational Resources Information Center
Parish, Linda
2014-01-01
This theoretical paper outlines the process of defining "mathematical giftedness" for a present study on how primary school teaching shapes the mindsets of children who are mathematically gifted. Mathematical giftedness is not a badge of honour or some special value attributed to a child who has achieved something exceptional.…
ERIC Educational Resources Information Center
Arriola, Sonya; Murphy, Katy
2010-01-01
Undocumented students are a population defined by limitations. Their lack of legal residency and any supporting paperwork (e.g., Social Security number, government issued identification) renders them essentially invisible to the American and state governments. They cannot legally work. In many states, they cannot legally drive. After the age of…
ERIC Educational Resources Information Center
Barth, Patte, Ed.
1994-01-01
This issue of "Basic Education" presents articles that discuss, respectively, defining the language arts, an agenda for English, the benefits of two languages, a new teacher (presently teaching English in a foreign country) looking ahead, and the Shaker Fellowships awarded by the school district in Shaker Heights, Ohio. Articles in the…
ERIC Educational Resources Information Center
Hecht, Eugene
2011-01-01
Though central to any pedagogical development of physics, the concept of mass is still not well understood. Properly defining mass has proven to be far more daunting than contemporary textbooks would have us believe. And yet today the origin of mass is one of the most aggressively pursued areas of research in all of physics. Much of the excitement…
Transition Coordinators: Define Yourselves.
ERIC Educational Resources Information Center
Asselin, Susan B.; Todd-Allen, Mary; deFur, Sharon
1998-01-01
Describes a technique that was used successfully to identify the changing roles and responsibilities of special educators as transition coordinators. The Developing a Curriculum (DACUM) model uses people who are currently working in the occupation to define job responsibilities. The duties of a transition coordinator are identified. (CR)
Defining in Classroom Activities.
ERIC Educational Resources Information Center
Mariotti, Maria Alessandra; Fischbein, Efraim
1997-01-01
Discusses some aspects of the defining process in geometrical context in the reference frame of the theory of "figural concepts." Presents analysis of some examples taken from a teaching experiment at the sixth-grade level. Contains 30 references. (Author/ASK)
SU-C-207-03: Optimization of a Collimator-Based Sparse Sampling Technique for Low-Dose Cone-Beam CT
Lee, T; Cho, S; Kim, I; Han, B
2015-06-15
Purpose: In computed tomography (CT) imaging, radiation dose delivered to the patient is one of the major concerns. Sparse-view CT takes projections at sparser view angles and provides a viable option for reducing dose. However, the fast power switching of an X-ray tube, which is needed for sparse-view sampling, can be challenging in many CT systems. We have earlier proposed a many-view under-sampling (MVUS) technique as an alternative to sparse-view CT. In this study, we investigated the effects of collimator parameters on image quality and aimed to optimize the collimator design. Methods: We used a bench-top circular cone-beam CT system together with a CatPhan600 phantom, and took 1440 projections from a single rotation. A multi-slit collimator made of tungsten was mounted on the X-ray source for beam blocking. For image reconstruction, we used a total-variation minimization (TV) algorithm and modified the backprojection step so that only the data measured through the collimator slits are used in the computation. The number of slits and the reciprocation frequency were varied, and their effects on image quality were investigated. We also analyzed the sampling efficiency, i.e., the sampling density and data incoherence in each case. We tested collimators with 6, 12, and 18 slits, each at reciprocation frequencies of 10, 30, 50 and 70 Hz/ro. Results: Using the CNR and detectability as image quality indices, the image-quality results were consistent with the sampling-efficiency analysis, and the optimum condition was found to be 12 slits at 30 Hz/ro. Conclusion: We conducted an experiment with a moving multi-slit collimator to realize sparse-sampled cone-beam CT. The effects of collimator parameters on image quality were systematically investigated, and the optimum condition was identified.
Hu, Lingzhi; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Traughber, Melanie; Muzic, Raymond F.
2014-01-01
Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2∗ = 1/T2∗, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2∗ of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2∗ of human skull was measured as 0.2–0.3 ms−1 depending on the specific region, which is more than ten times greater than the R2∗ of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in brain. High-quality, bone
Chung, Wu-Hsun; Tzing, Shin-Hwa; Ding, Wang-Hsien
2015-09-11
A solvent-free method for the rapid analysis of six benzophenone-type UV absorbers in water samples is described. The method involves dispersive micro solid-phase extraction (DmSPE) followed by simultaneous silylation and thermal desorption (SSTD) gas chromatography-mass spectrometry (GC-MS) operating in the selected-ion-storage (SIS) mode. A Plackett-Burman design was used for screening, and a central composite design (CCD) was applied to optimize the significant factors. The optimal experimental conditions involved immersing 1.5 mg of the Oasis HLB adsorbent in a 10 mL portion of water sample. After vigorous shaking for 1 min, the adsorbents were transferred to a micro-vial and dried at 122 °C for 3.5 min; after cooling, 2 μL of the BSTFA silylating reagent was added. For SSTD, the injection-port temperature was held at 70 °C for 2.5 min for derivatization, and the temperature was then rapidly increased to 340 °C to allow thermal desorption of the TMS-derivatives into the GC for 5.7 min. The limits of quantitation (LOQs) were determined to be 1.5-5.0 ng/L. Precision, as indicated by relative standard deviations (RSDs), was equal to or less than 11% for both intra- and inter-day analysis. Accuracy, expressed as the mean extraction recovery, was between 87% and 95%. A preliminary analysis of municipal wastewater treatment plant (MWTP) effluent and river water samples revealed that 2-hydroxy-4-methoxybenzophenone (BP-3) was the most common benzophenone-type UV absorber present. Using a standard addition method, the total concentrations of these compounds ranged from 5.1 to 74.8 ng/L.
Ferreirós, N; Iriarte, G; Alonso, R M; Jiménez, R M
2006-05-15
A chemometric approach was applied to the optimization of the extraction and separation of the antihypertensive drug eprosartan from human plasma samples. The MultiSimplex program was used to optimize the HPLC-UV method because of the number of experimental and response variables to be studied. The measured responses were the corrected area, the separation of the eprosartan chromatographic peak from plasma interference peaks, and the retention time of the analyte. The use of an Atlantis dC18, 100 mm × 3.9 mm i.d. chromatographic column with 0.026% trifluoroacetic acid (TFA) in the organic phase and 0.031% TFA in the aqueous phase, an initial mobile-phase composition of 80% aqueous phase, an acetonitrile gradient steepness of 3% during gradient elution, a flow rate of 1.25 mL/min, and a column temperature of 35 ± 0.2 °C allowed the separation of eprosartan, and of irbesartan used as internal standard, from plasma endogenous compounds. In the solid-phase extraction procedure, experimental design was used in order to achieve the maximum recovery percentage. First, the significant variables were chosen by way of a fractional factorial design; then, a central composite design was run to obtain the most adequate values of the significant variables. Thus, the extraction procedure for spiked human plasma samples was carried out using C8 cartridges, phosphate buffer pH 2 as conditioning agent, a drying step of 10 min, a washing step with methanol-phosphate buffer (20:80, v/v), and methanol as eluent. The developed SPE-HPLC-UV method allowed the separation and quantitation of eprosartan from human plasma samples with adequate resolution and a total analysis time of 1 h.
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so "data gaps", significant discontinuities, and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting physically reasonable results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. The proposed technique was found to give the most accurate results with short computation times. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
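The monotonicity-preserving Steffen spline referred to above can be sketched in a few lines. This is a hedged, simplified reimplementation (Steffen 1990), not the authors' C++ code: the boundary slopes are reduced to one-sided secants rather than Steffen's parabolic boundary rule.

```python
import numpy as np

def steffen_slopes(x, y):
    """Node slopes for Steffen's monotonicity-preserving cubic interpolant."""
    h = np.diff(x)                       # interval widths
    s = np.diff(y) / h                   # secant slopes
    d = np.zeros_like(y)
    for i in range(1, len(x) - 1):
        p = (s[i - 1] * h[i] + s[i] * h[i - 1]) / (h[i - 1] + h[i])
        if s[i - 1] * s[i] <= 0:
            d[i] = 0.0                   # local extremum at a node: flat slope
        else:                            # limit slope so no overshoot can occur
            d[i] = (np.sign(s[i - 1]) + np.sign(s[i])) * min(
                abs(s[i - 1]), abs(s[i]), 0.5 * abs(p))
    d[0], d[-1] = s[0], s[-1]            # simplified one-sided boundary slopes
    return d

def steffen_eval(x, y, xq):
    """Evaluate the piecewise cubic Hermite interpolant at points xq."""
    d = steffen_slopes(x, y)
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1        # Hermite basis polynomials
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[i] + h10 * h * d[i] + h01 * y[i + 1] + h11 * h * d[i + 1]
```

The slope limiting is exactly what suppresses the spurious oscillations mentioned in the abstract: between two nodes the interpolant can never leave the interval spanned by the adjacent data values.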
Defining Dynamic Route Structure
NASA Technical Reports Server (NTRS)
Zelinski, Shannon; Jastrzebski, Michael
2011-01-01
This poster describes a method for defining route structure from flight tracks. Dynamically generated route structures could be useful in guiding dynamic airspace configuration and helping controllers retain situational awareness under dynamically changing traffic conditions. Individual merge and diverge intersections between pairs of flights are identified, clustered, and grouped into nodes of a route structure network. Links are placed between nodes to represent major traffic flows. A parametric analysis determined the algorithm input parameters producing route structures of current-day flight plans that are closest to today's airway structure. These parameters are then used to define and analyze the dynamic route structure over the course of a day for current-day flight paths. Route structures are also compared between current-day flight paths and more user-preferred paths such as great circle and weather avoidance routing.
Defining and Diagnosing Sepsis.
Scott, Michael C
2017-02-01
Sepsis is a heterogeneous clinical syndrome that encompasses infections of many different types and severity. Not surprisingly, it has confounded most attempts to apply a single definition, which has also limited the ability to develop a set of reliable diagnostic criteria. It is perhaps best defined as the different clinical syndromes produced by an immune response to infection that causes harm to the body beyond that of the local effects of the infection.
Li, Jincheng; Zhang, Jing; Liu, Yang
2015-08-01
This paper describes a rapid and sensitive method for the determination of eugenol in fish samples, based on solid-phase extraction (SPE) and gas chromatography-tandem mass spectrometry (GC-MS-MS). Samples were extracted with acetonitrile, and then cleanup was performed using C18 solid-phase extraction (SPE). The determination of eugenol was achieved using an electron-ionization source (EI) in multiple-reaction-monitoring (MRM) mode. Under optimized conditions, the average recoveries of eugenol were in the range 94.85-103.61 % and the relative standard deviation (RSD) was lower than 12.0 %. The limit of detection (LOD) was 2.5 μg kg(-1) and the limit of quantification (LOQ) was 5.0 μg kg(-1). This method was applied to an exposure study of eugenol residue in carp muscle tissues. The results revealed that eugenol was nearly totally eliminated within 96 h. Graphical Abstract Flow diagram for sample pretreatment.
Sadiq, Nausheen; Beauchemin, Diane
2014-12-03
Two different approaches were used to improve the capabilities of solid sampling (SS) electrothermal vaporization (ETV) coupled to inductively coupled plasma optical emission spectrometry (ICP-OES) for the direct analysis of powdered rice. Firstly, a cooling step immediately before and after the vaporization step in the ETV temperature program resulted in a much sharper analyte signal peak. Secondly, point-by-point internal standardization with an Ar emission line significantly improved the linearity of calibration curves obtained with an increasing amount of rice flour certified reference material (CRM). Under the optimized conditions, detection limits ranged from 0.01 to 6 ng g⁻¹ in the solid, depending on the element and wavelength selected. The method was validated through the quantitative analysis of corn bran and wheat flour CRMs. Application of the method to the multi-elemental analysis of 4 mg aliquots of real organic long grain rice (white and brown) also gave results for Al, As, Co, Cu, Fe, Mg, Se, Pb and Zn in agreement with those obtained by inductively coupled plasma mass spectrometry following acid digestion of 0.2 g aliquots. As the analysis takes roughly 5 min per sample (2.5 min for grinding, 0.5-1 min for weighing a 4 mg aliquot, and 87 s for the ETV program), this approach shows great promise for fast screening of food samples.
Pascali, Jennifer P; Liotta, Eloisa; Gottardo, Rossella; Bortolotti, Federica; Tagliaro, Franco
2009-04-10
After decades of neglect, bromide has recently been re-introduced in therapy as an effective anti-epileptic drug. The present paper describes the methodological optimization and validation of a method based on capillary zone electrophoresis for the rapid determination of bromide in serum using a high-viscosity buffer and a short capillary (10 cm). The optimized running buffer was composed of 90 mM sodium tetraborate, 10 mM sodium chloride, pH 9.24, and 25% glycerol. The separation was carried out at 25 kV at a temperature of 20 °C. Detection was by direct UV absorption at a 200 nm wavelength. The limit of detection (signal-to-noise ratio = 5) in serum was 0.017 mM. The precision of the method was verified in blank serum samples spiked with bromide, obtaining intra-day and day-to-day tests, relative standard deviation values
Defining functional dyspepsia.
Mearin, Fermín; Calleja, José Luis
2011-12-01
Dyspepsia and functional dyspepsia represent a highly significant public health issue. A good definition of dyspepsia is key for helping us to better approach symptoms, decision making, and therapy indications. During the last few years many attempts were made at establishing a definition of dyspepsia. Results were rarely successful, and clear discrepancies arose on whether symptoms should be associated with digestion, which types of symptoms were to be included, which anatomic location symptoms should have, etc. The Rome III Committee defined dyspepsia as "a symptom or set of symptoms that most physicians consider to originate from the gastroduodenal area", including the following: postprandial heaviness, early satiety, and epigastric pain or burning. Two new entities were defined: a) food-induced dyspeptic symptoms (postprandial distress syndrome); and b) epigastric pain (epigastric pain syndrome). These and other definitions have shown both strengths and weaknesses. At times they have been much too complex, at times much too simple; furthermore, they have commonly erred on the side of being inaccurate and impractical. On the other hand, some (the most recent ones) are difficult to translate into the Spanish language. In a meeting of gastroenterologists with a special interest in digestive functional disorders, the various aspects of the definition of dyspepsia were discussed and put to the vote, and the following conclusions were arrived at: dyspepsia is defined as a set of symptoms, either related or unrelated to food ingestion, localized in the upper half of the abdomen. They include: a) epigastric discomfort (as a category of severity) or pain; b) postprandial heaviness; and c) early satiety. Associated complaints include: nausea, belching, bloating, and epigastric burn (heartburn). All these must be scored according to severity and frequency. Furthermore, psychological factors may be involved in the origin of functional dyspepsia. On the other hand
2015-01-01
Assessment of the periodontium has relied exclusively on a variety of physical measurements (e.g., attachment level, probing depth, bone loss, mobility, recession, degree of inflammation, etc.) in relation to various case definitions of periodontal disease. Periodontal health was often an afterthought and was simply defined as the absence of the signs and symptoms of a periodontal disease. Accordingly, these strict and sometimes disparate definitions of periodontal disease have resulted in an idealistic requirement of a pristine periodontium for periodontal health, which makes us all diseased in one way or another. Furthermore, the consequence of not having a realistic definition of health has resulted in potentially questionable recommendations. The aim of this manuscript was to assess the biological, environmental, sociological, economic, educational and psychological relationships that are germane to constructing a paradigm that defines periodontal health using a modified wellness model. The paradigm includes four cardinal characteristics, i.e., 1) a functional dentition, 2) the painless function of a dentition, 3) the stability of the periodontal attachment apparatus, and 4) the psychological and social well-being of the individual. Finally, strategies and policies that advocate periodontal health were appraised. I'm not sick but I'm not well, and it's a sin to live so well. Flagpole Sitta, Harvey Danger PMID:26390888
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design.
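Sample size comparisons for such mixed marker-positive/marker-negative populations are often checked by simulation. The sketch below, with entirely hypothetical effect sizes and prevalence, estimates the power of a fixed parallel-group design by Monte Carlo; it illustrates the planning problem only and is not the authors' evaluation framework.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_power(n_per_arm, effect_pos, effect_neg, prevalence,
                    n_sim=2000, alpha=0.05):
    """Monte Carlo power of a fixed parallel-group trial whose population
    mixes marker-positive (effect_pos) and marker-negative (effect_neg)
    patients. Outcomes are unit-variance normal; a two-sided z-test
    compares arm means at the given alpha (critical value 1.96)."""
    hits = 0
    for _ in range(n_sim):
        marker = rng.random(n_per_arm) < prevalence       # marker status per patient
        effect = np.where(marker, effect_pos, effect_neg)
        treat = rng.normal(effect, 1.0)                   # treated outcomes
        control = rng.normal(0.0, 1.0, n_per_arm)         # control outcomes
        z = (treat.mean() - control.mean()) / np.sqrt(2.0 / n_per_arm)
        hits += abs(z) > 1.96
    return hits / n_sim
```

Running this across prevalences and subgroup effects shows how quickly the power of a fixed design erodes when the marker-negative effect is small, which is the situation in which the enrichment designs discussed above gain their advantage.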
NASA Astrophysics Data System (ADS)
Rafiee, Mohammad; Barrau, Axel; Bayen, Alexandre M.
2013-06-01
This article investigates the performance of Monte Carlo-based estimation methods for estimation of the flow state in large-scale open channel networks. After constructing a state space model of the flow based on the Saint-Venant equations, we implement the optimal sampling importance resampling filter to perform state estimation in a case in which measurements are available at every time step. Considering a case in which measurements become available intermittently, a random-map implementation of the implicit particle filter is applied to estimate the state trajectory in the interval between the measurements. Finally, some heuristics are proposed, which are shown to improve the estimation results and lower the computational cost. In the first heuristic, considering the case in which measurements are available at every time step, we apply the implicit particle filter over time intervals of a desired size while incorporating all the available measurements over the corresponding time interval. As a second heuristic method, we introduce a maximum a posteriori (MAP) method, which does not require sampling. It will be seen, through implementation, that the MAP method provides more accurate results in the case of our application while having a smaller computational cost. All estimation methods are tested on a network of 19 tidally forced subchannels and 1 reservoir, Clifton Court Forebay, in the Sacramento-San Joaquin Delta in California, and numerical results are presented.
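The sampling importance resampling idea can be illustrated on a toy scalar state-space model. The sketch below is a plain bootstrap SIR filter, not the Saint-Venant flow model or the implicit particle filter of the article; the random-walk dynamics and noise variances are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_filter(obs, n_particles=500, q=0.1, r=0.5):
    """Bootstrap sampling importance resampling filter for the toy model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r)."""
    particles = rng.normal(0.0, 1.0, n_particles)      # initial ensemble
    estimates = []
    for y in obs:
        # propagate particles through the (random-walk) dynamics
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # weight by the Gaussian measurement likelihood, then normalize
        w = np.exp(-0.5 * (y - particles) ** 2 / r)
        w /= w.sum()
        # multinomial resampling: duplicate likely particles, drop unlikely ones
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
        estimates.append(particles.mean())             # posterior-mean estimate
    return np.array(estimates)
```

Because resampling concentrates the ensemble around the observations while the propagation step respects the dynamics, the filtered trajectory is a noticeably better state estimate than the raw measurements themselves.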
Fabrino, Henrique José Ferraz; Silveira, Josianne Nicácio; Neto, Waldomiro Borges; Goes, Alfredo Miranda; Beinner, Mark Anthony; da Silva, José Bento Borba
2011-10-01
A method for the direct determination of manganese (Mn) in human serum by graphite furnace atomic absorption spectrometry (GFAAS) is proposed in this work. The samples were only diluted 1:4 with nitric acid 1% (v/v) and Triton® X-100 0.1% (v/v). The optimization of the instrumental conditions was performed using a multivariate approach. A 2³ factorial design was employed to identify the conditions giving the most intense absorbance signal. The pyrolysis and atomization temperatures and the use of a chemical modifier were evaluated, and only the use of modifier did not have a significant effect on the response. A central composite design (CCD) indicated optimal temperatures of 430 °C and 2568 °C for pyrolysis and atomization, respectively. The method allowed the determination of manganese with a calibration curve from 0.7 to 3.3 μg/L. Recovery studies at three concentration levels (n = 7 for each level) gave results from 98 ± 5% to 102 ± 7%. The detection limit was 0.2 μg/L, the quantification limit was 0.7 μg/L, and the characteristic mass 1.3 ± 0.2 pg. Intra- and interassay studies showed coefficients of variation of 4.7-7.0% (n = 21) and 6-8% (n = 63), respectively. The method was applied to the determination of manganese in 53 samples, with concentrations ranging from 3.9 to 13.7 μg/L.
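The 2³ factorial screening described above amounts to contrasting mean responses at the high and low level of each factor. A minimal sketch with entirely hypothetical absorbance values (the factor names follow the abstract, the numbers do not):

```python
import itertools
import numpy as np

# Full 2^3 factorial: each row codes (pyrolysis T, atomization T, modifier)
# at low (-1) / high (+1) levels; responses are illustrative absorbances.
design = np.array(list(itertools.product([-1, 1], repeat=3)))
response = np.array([0.10, 0.12, 0.15, 0.17, 0.30, 0.33, 0.36, 0.38])  # hypothetical

# Main effect of a factor = mean(response at +1) - mean(response at -1)
effects = {
    name: response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    for j, name in enumerate(["pyrolysis", "atomization", "modifier"])
}
```

With these made-up responses the modifier effect comes out smallest, mimicking the abstract's finding that only the modifier had no significant effect; in practice the effects would be tested against replicate error before being declared significant.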
Gumustas, Mehmet; Sengel-Turk, Ceyda Tuba; Hascicek, Canan; Ozkan, Sibel A
2014-10-01
Fulvestrant is used for the treatment of hormone receptor-positive metastatic breast cancer in postmenopausal women with disease progression following anti-estrogen therapy. Several reversed-phase columns with variable silica materials, diameters, lengths, etc., were tested in the optimization study. A good chromatographic separation was achieved using a Waters X-Terra RP18 column (250 × 4.6 mm i.d., 5 µm) and a mobile phase consisting of a mixture of acetonitrile-water (65:35; v/v) containing phosphoric acid (0.1%). The separation was carried out at 40 °C with detection at 215 nm. The calibration curves were linear over the concentration ranges of 1.0-300 and 1.0-200 µg/mL for standard solutions and biological media, respectively. The proposed method is accurate and reproducible. Forced degradation studies were also performed. This fully validated method allows the direct determination of fulvestrant in dosage form and biological samples. The average recovery of the added fulvestrant in the samples was between 98.22 and 104.03%. The proposed method was also applied to the determination of fulvestrant in polymeric-based nanoparticle systems. No interference from the polymers and other excipients was observed in in vitro drug release studies. Therefore, the incorporation efficiency of fulvestrant-loaded nanoparticles could be determined accurately and specifically.
Feng, Biting; Gan, Zhiwei; Hu, Hongwei; Sun, Hongwen
2014-09-01
The sample pretreatment method for the determination of four typical artificial sweeteners (ASs), including sucralose, saccharin, cyclamate, and acesulfame, in soil by high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) was optimized. Different extraction conditions, including four extractants (methanol, acetonitrile, acetone, deionized water), three ionic strengths of sodium acetate solution (0.001, 0.01, 0.1 mol/L), four pH values (3, 4, 5 and 6) of 0.01 mol/L acetate-sodium acetate solution, four extraction durations (20, 40, 60, 120 min), and the number of extraction cycles (1, 2, 3, 4), were compared. The optimal sample pretreatment method was finally established: the samples were extracted twice with 25 mL of 0.01 mol/L sodium acetate solution (pH 4) for 20 min per cycle. The extracts were combined and then purified and concentrated on CNW Poly-Sery PWAX cartridges with methanol containing 1 mmol/L tris(hydroxymethyl)aminomethane (Tris) and 5% (v/v) ammonium hydroxide as eluent. The analytes were determined by HPLC-MS/MS. Recoveries were obtained from soil spiked with the four artificial sweeteners at 1, 10, and 100 μg/kg (dry weight), separately. The average recoveries of the analytes ranged from 86.5% to 105%. The intra-day and inter-day precisions, expressed as relative standard deviations (RSDs), were in the range of 2.56%-5.94% and 3.99%-6.53%, respectively. Good linearities (r² > 0.995) were observed between 1-100 μg/kg (dry weight) for all the compounds. The limits of detection were 0.01-0.21 μg/kg and the limits of quantification were 0.03-0.70 μg/kg for the analytes. The four artificial sweeteners were determined in soil samples from farmland contaminated by wastewater in Tianjin. This method is rapid, reliable, and suitable for the investigation of artificial sweeteners in soil.
Zabel, Robert; Kullmann, Maximilian; Kalayda, Ganna V; Jaehde, Ulrich; Weber, Günther
2015-02-01
Pt-based anticancer drugs, such as cisplatin, are known to undergo several (bio-)chemical transformation steps after administration. Hydrolysis and adduct formation with small nucleophiles and larger proteins are their most relevant reactions on the way to the final reaction site (DNA), but there are still many open questions regarding the identity and pharmacological relevance of various proposed adducts and intermediates. Furthermore, the role of buffer components or additives, which are inevitably added to samples during any type of analytical measurement, has been frequently neglected in previous studies. Here, we report on adduct formation reactions of the fluorescent cisplatin analogue carboxyfluorescein diacetate platinum (CFDA-Pt) in commonly used buffers and cell culture medium. Our results indicate that chelation reactions with noninnocent buffers (e.g., Tris) and components of the cell culture/cell lysis medium must be taken into account when interpreting results. Adduct formation kinetics was followed up to 60 h at nanomolar concentrations of CFDA-Pt by using CE-LIF. CE-MS enabled the online identification of such unexpected adducts down to the nanomolar concentration range. By using an optimized sample preparation strategy, unwanted adducts can be avoided and several fluorescent adducts of CFDA-Pt are detectable in sensitive and cisplatin-resistant cancer cell lines. By processing samples rapidly after incubation, we could even identify the initial, but transient, Pt species in the cells as deacetylated CFDA-Pt with unaltered complexing environment at Pt. Overall, the proposed procedure enables a very sensitive and accurate analysis of low molecular mass Pt species in cancer cells, involving a fast CE-LIF detection within 5 min.
Rajabi, M; Kamalabadi, M; Jamali, M R; Zolgharnein, J; Asanjarani, N
2013-06-01
A new, rapid, and simple method for the determination of cadmium in water samples was developed using ionic liquid-based dispersive liquid-liquid microextraction (IL-DLLME) coupled to flame atomic absorption spectrometry (FAAS). In the proposed approach, 2-(5-bromo-2-pyridylazo)-5-(diethylamino)phenol was used as a chelating agent, and 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide and acetone were selected as extraction and dispersive solvents, respectively. Sample pH, concentration of chelating agent, amount of ionic liquid (extraction solvent), disperser solvent volume, extraction time, salt effect, and centrifugation speed were selected as the variables of interest in the IL-DLLME process. The significant variables affecting the extraction efficiency were determined using a Plackett-Burman design. Thereafter, the significant variables were optimized using a Box-Behnken design, and a quadratic model between the dependent and the independent variables was built. The optimum experimental conditions obtained from this statistical evaluation were: pH 6.7; concentration of chelating agent, 1.1 × 10⁻³ mol L⁻¹; and ionic liquid, 50.0 mg. Under the optimum conditions, the preconcentration factor obtained was 100. The calibration graph was linear in the range of 0.2-60 µg L⁻¹ with a correlation coefficient of 0.9992. The limit of detection was 0.06 µg L⁻¹, which is lower than that of other reported approaches for the determination of cadmium using FAAS. The relative SD (n = 8) was 2.4%. The proposed method was successfully applied to the determination of trace amounts of cadmium in real water samples with satisfactory results.
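A Box-Behnken design and the full quadratic model it is built to estimate can be sketched as follows; the three coded factors and the synthetic response here are generic placeholders, not the paper's variables.

```python
import itertools
import numpy as np

def box_behnken_3():
    """Three-factor Box-Behnken design: all +/-1 combinations for each pair
    of factors with the remaining factor at its centre (0), plus one centre
    point (12 edge points + 1 centre = 13 runs)."""
    pts = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product([-1, 1], repeat=2):
            p = [0.0, 0.0, 0.0]
            p[i], p[j] = a, b
            pts.append(p)
    pts.append([0.0, 0.0, 0.0])
    return np.array(pts)

def quadratic_fit(X, y):
    """Least-squares fit of the full second-order model
    y = b0 + sum_i bi*xi + sum_{i<j} bij*xi*xj + sum_i bii*xi^2."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(3), 2)]
    cols += [X[:, i] ** 2 for i in range(3)]
    A = np.column_stack(cols)          # 13 x 10 model matrix, full rank
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

The 13 runs suffice to estimate all 10 coefficients of the quadratic model, which is why Box-Behnken designs are popular for response-surface optimization after a Plackett-Burman screen has narrowed the variable set.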
NASA Astrophysics Data System (ADS)
Lewis, Simon; Maslin, Mark
2016-04-01
Time is divided by geologists according to marked shifts in Earth's state. Recent global environmental changes suggest that Earth may have entered a new human-dominated geological epoch, the Anthropocene. Should the Anthropocene - the idea that human activity is a force acting upon the Earth system in ways that mean that Earth will be altered for millions of years - be defined as a geological time-unit at the level of an Epoch? Here we appraise the data to assess such claims, first in terms of changes to the Earth system, with particular focus on very long-lived impacts, as Epochs typically last millions of years. Can Earth really be said to be in transition from one state to another? Secondly, we consider the formal criteria used to define geological time-units and move forward through time, examining whether currently available evidence passes typical geological time-unit evidence thresholds. We suggest that two time periods likely fit the criteria: (1) the aftermath of the interlinking of the Old and New Worlds, which moved species across continents and ocean basins worldwide, a geologically unprecedented and permanent change, which is also the globally synchronous coolest part of the Little Ice Age (in Earth system terms) and the beginning of global trade and a new socio-economic "world system" (in historical terms), marked as a golden spike by a temporary drop in atmospheric CO2, centred on 1610 CE; and (2) the aftermath of the Second World War, when many global environmental changes accelerated and novel long-lived materials were increasingly manufactured, known as the Great Acceleration (in Earth system terms) and the beginning of the Cold War (in historical terms), marked as a golden spike by the peak in radionuclide fallout in 1964. We finish by noting that the Anthropocene debate is politically loaded; transparency in the presentation of evidence is therefore essential if a formal definition of the Anthropocene is to avoid becoming a debate about bias.
Braveman, P; Gruskin, S
2003-01-01
Study objective: To propose a definition of health equity to guide operationalisation and measurement, and to discuss the practical importance of clarity in defining this concept. Design: Conceptual discussion. Setting, Patients/Participants, and Main results: not applicable. Conclusions: For the purposes of measurement and operationalisation, equity in health is the absence of systematic disparities in health (or in the major social determinants of health) between groups with different levels of underlying social advantage/disadvantage—that is, wealth, power, or prestige. Inequities in health systematically put groups of people who are already socially disadvantaged (for example, by virtue of being poor, female, and/or members of a disenfranchised racial, ethnic, or religious group) at further disadvantage with respect to their health; health is essential to wellbeing and to overcoming other effects of social disadvantage. Equity is an ethical principle; it also is consonant with and closely related to human rights principles. The proposed definition of equity supports operationalisation of the right to the highest attainable standard of health as indicated by the health status of the most socially advantaged group. Assessing health equity requires comparing health and its social determinants between more and less advantaged social groups. These comparisons are essential to assess whether national and international policies are leading toward or away from greater social justice in health. PMID:12646539
Wynn, James L.
2016-01-01
Purpose of the review Although infection rates have modestly decreased in the neonatal intensive care unit (NICU) as a result of ongoing quality improvement measures, neonatal sepsis remains a frequent and devastating problem among hospitalized preterm neonates. Despite multiple attempts to address this unmet need, there have been minimal advances in clinical management, outcomes, and accuracy of diagnostic testing options over the last three decades. One strong contributor to a lack of medical progress is a variable case definition of disease. The inability to agree on a precise definition greatly reduces the likelihood of aligning findings from epidemiologists, clinicians, and researchers, which, in turn, severely hinders progress towards improving outcomes. Recent findings Pediatric consensus definitions for sepsis are not accurate in term infants and are not appropriate for preterm infants. In contrast to the defined multi-stage criteria for other devastating diseases encountered in the NICU (e.g., bronchopulmonary dysplasia), there is significant variability in the criteria used by investigators to substantiate the diagnosis of neonatal sepsis. Summary The lack of an accepted consensus definition for neonatal sepsis impedes our efforts towards improved diagnostic and prognostic options as well as accurate outcomes information for this vulnerable population. PMID:26766602
Wang, Lianzhu; Zhou, Yu; Huang, Xiaoyan; Wang, Ruilong; Lin, Zixu; Chen, Yong; Wang, Dengfei; Lin, Dejuan; Xu, Dunming
2013-12-01
The raw extracts of six vegetables (tomato, green bean, shallot, broccoli, ginger and carrot) were analyzed using gas chromatography-mass spectrometry (GC-MS) in full scan mode combined with an NIST library search to confirm the main matrix compounds. The effects of cleanup and the adsorption mechanisms of primary secondary amine (PSA), octadecylsilane (C18) and PSA + C18 on co-extractives were studied by weighing the evaporation residue of extracts before and after cleanup. The suitability of two versions of the QuEChERS method for sample preparation was evaluated for the extraction of 51 carbamate pesticides from the six vegetables. One was the original un-buffered method published in 2003, and the other was AOAC Official Method 2007.01 using acetate buffer. The best cleanup effects were obtained using the combination of C18 and PSA for extracts of the vegetables. The acetate-buffered version was suitable for the determination of all pesticides except dioxacarb; the un-buffered QuEChERS method gave satisfactory results for dioxacarb. Based on these results, the suitable QuEChERS sample preparation method and liquid chromatography-positive electrospray ionization-tandem mass spectrometry under the optimized conditions were applied to determine the 51 carbamate pesticide residues in the six vegetables. The analytes were quantified against matrix-matched standard solutions. The recoveries at the three spiked levels of 10, 20 and 100 microg/kg in the six vegetables ranged from 58.4% to 126%, with relative standard deviations of 3.3%-26%. The limits of quantification (LOQ, S/N ≥ 10) were 0.2-10 microg/kg, except that the LOQs of cartap and thiofanox were 50 microg/kg. The method is highly efficient, sensitive and suitable for monitoring the 51 carbamate pesticide residues in vegetables.
Siqueira, Glécio Machado; Dafonte, Jorge Dafonte; Bueno Lema, Javier; Valcárcel Armesto, Montserrat; Silva, Ênio Farias França e
2014-01-01
This study presents a combined application of an EM38DD for assessing soil apparent electrical conductivity (ECa) and a dual-sensor vertical penetrometer Veris-3000 for measuring soil electrical conductivity (ECveris) and soil resistance to penetration (PR). The measurements were made at a 6 ha field cropped with forage maize under no-tillage after sowing and located in Northwestern Spain. The objective was to use data from ECa for improving the estimation of soil PR. First, data of ECa were used to determine the optimized sampling scheme of the soil PR in 40 points. Then, correlation analysis showed a significant negative relationship between soil PR and ECa, ranging from −0.36 to −0.70 for the studied soil layers. The spatial dependence of soil PR was best described by spherical models in most soil layers. However, below 0.50 m the spatial pattern of soil PR showed pure nugget effect, which could be due to the limited number of PR data used in these layers as the values of this parameter often were above the range measured by our equipment (5.5 MPa). The use of ECa as secondary variable slightly improved the estimation of PR by universal cokriging, when compared with kriging. PMID:25610899
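The negative PR-ECa relationship reported above can be illustrated with a plain Pearson correlation; the paired values below are hypothetical stand-ins, not the study's measurements.

```python
# Illustrative sketch: quantifying a negative PR-ECa relationship with a
# Pearson correlation coefficient. Data values are invented for illustration.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical paired measurements: soil apparent EC (mS/m) and PR (MPa)
eca = [10.2, 12.5, 8.9, 15.1, 11.0, 14.3]
pr  = [3.1, 2.4, 3.6, 1.9, 2.8, 2.1]
r = pearson(eca, pr)   # strongly negative for these made-up values
```

A correlation of this kind is what justifies using ECa as the secondary variable in cokriging.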
Colombier, J. P.; Audouard, E.; Stoian, R.; Combis, P.
2006-12-01
We present results describing the efficiency of energy coupling in laser-irradiated metallic surfaces by ultrashort laser pulses with different intensity envelopes. Subsequently, we discuss probable thermodynamic paths for material ejection under the laser action. Ion and neutral emission from the excited sample is used as a sensitive method to probe the efficiency of energy deposition in the material. With support from numerical simulations of the hydrodynamic advance of the excited matter, consequences of optimized energy coupling relevant for applications in material processing are revealed. Despite the reduced sensitivity to intensity-dependent effects for linear materials, the overall absorption efficiency can be elevated if the proper conditions of density and temperature are met for the expanding material layers. In this respect, short sub-ps single pulse irradiation is compared with picosecond sequences. We show that in particular irradiation regimes, characterized by fluences superior to the material removal threshold, laser energy delivery extending on several picoseconds leads to significant superheating of the superficial layers as compared to femtosecond irradiation and to a swift acceleration of the emitted particles. Subsequently, the lifetime of the post-irradiation liquid layer is diminished, which, in turn, translates into a reduction in droplet ejection. In contrast, short pulse irradiation at moderate fluences generates a higher quantity of removed material that is ejected in a dense mixture of gas and liquid-phase particulates.
Theory of sampling: four critical success factors before analysis.
Wagner, Claas; Esbensen, Kim H
2015-01-01
Food and feed materials characterization, risk assessment, and safety evaluations can only be ensured if QC measures are based on valid analytical data, stemming from representative samples. The Theory of Sampling (TOS) is the only comprehensive theoretical framework that fully defines all requirements to ensure sampling correctness and representativity, and to provide the guiding principles for sampling in practice. TOS also defines the concept of material heterogeneity and its impact on the sampling process, including the effects from all potential sampling errors. TOS's primary task is to eliminate bias-generating errors and to minimize sampling variability. Quantitative measures are provided to characterize material heterogeneity, on which an optimal sampling strategy should be based. Four critical success factors preceding analysis to ensure a representative sampling process are presented here.
Defining the Stimulus - A Memoir
Terrace, Herbert
2010-01-01
The eminent psychophysicist, S. S. Stevens, once remarked that, “the basic problem of psychology was the definition of the stimulus” (Stevens, 1951, p. 46). By expanding the traditional definition of the stimulus, the study of animal learning has metamorphosed into animal cognition. The main impetus for that change was the recognition that it is often necessary to postulate a representation between the traditional S and R of learning theory. Representations allow a subject to re-present a stimulus it learned previously that is currently absent. Thus, in delayed-matching-to-sample, one has to assume that a subject responds to a representation of the sample during test if it responds correctly. Other examples, to name but a few, include concept formation, spatial memory, serial memory, learning a numerical rule, imitation and metacognition. Whereas a representation used to be regarded as a mentalistic phenomenon that was unworthy of scientific inquiry, it can now be operationally defined. To accommodate representations, the traditional discriminative stimulus has to be expanded to allow for the role of representations. The resulting composite can account for a significantly larger portion of the variance of performance measures than the exteroceptive stimulus could by itself. PMID:19969047
NASA Astrophysics Data System (ADS)
Hu, Hao; Lu, Zhenyu; Parks, Jerry M.; Burger, Steven K.; Yang, Weitao
2008-01-01
To accurately determine the reaction path and its energetics for enzymatic and solution-phase reactions, we present a sequential sampling and optimization approach that greatly enhances the efficiency of the ab initio quantum mechanics/molecular mechanics minimum free-energy path (QM/MM-MFEP) method. In the QM/MM-MFEP method, the thermodynamics of a complex reaction system is described by the potential of mean force (PMF) surface of the quantum mechanical (QM) subsystem with a small number of degrees of freedom, somewhat like describing a reaction process in the gas phase. The main computational cost of the QM/MM-MFEP method comes from the statistical sampling of conformations of the molecular mechanical (MM) subsystem required for the calculation of the QM PMF and its gradient. In our new sequential sampling and optimization approach, we aim to reduce the amount of MM sampling while still retaining the accuracy of the results by first carrying out MM phase-space sampling and then optimizing the QM subsystem in the fixed-size ensemble of MM conformations. The resulting QM optimized structures are then used to obtain more accurate sampling of the MM subsystem. This process of sequential MM sampling and QM optimization is iterated until convergence. The use of a fixed-size, finite MM conformational ensemble enables the precise evaluation of the QM potential of mean force and its gradient within the ensemble, thus circumventing the challenges associated with statistical averaging and significantly speeding up the convergence of the optimization process. To further improve the accuracy of the QM/MM-MFEP method, the reaction path potential method developed by Lu and Yang [Z. Lu and W. Yang, J. Chem. Phys. 121, 89 (2004)] is employed to describe the QM/MM electrostatic interactions in an approximate yet accurate way with a computational cost that is comparable to classical MM simulations. The new method was successfully applied to two example reaction processes.
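The sequential MM-sampling/QM-optimization loop can be sketched on a toy 1-D model: a "QM" coordinate q is optimized against the ensemble average of a coupled energy over a fixed set of "MM" conformations, the ensemble is resampled around the new q, and the cycle repeats until convergence. The energy function, sampling model, and closed-form minimizer below are all invented for illustration.

```python
# Toy sketch of sequential sampling and optimization: alternate between
# (a) sampling "MM" conformations around the current "QM" coordinate and
# (b) minimizing the ensemble-averaged energy over that fixed ensemble.
import random

random.seed(0)

def energy(q, s):
    # coupled quadratic "QM/MM" energy (hypothetical)
    return (q - 1.0) ** 2 + 0.5 * (q - s) ** 2

def sample_mm(q, n=200):
    # MM conformations fluctuate around the current QM coordinate
    return [random.gauss(q, 0.3) for _ in range(n)]

def optimize_q(ensemble):
    # minimize mean_s[(q-1)^2 + 0.5*(q-s)^2]; setting the derivative
    # 2(q-1) + (q - s_bar) to zero gives q = (2 + s_bar)/3
    s_bar = sum(ensemble) / len(ensemble)
    return (2.0 + s_bar) / 3.0

q = 0.0
for _ in range(20):
    ensemble = sample_mm(q)        # MM phase-space sampling
    q_new = optimize_q(ensemble)   # QM optimization in the fixed ensemble
    if abs(q_new - q) < 1e-4:      # stop when the coordinate has converged
        break
    q = q_new
```

Because the ensemble is frozen during each optimization, the inner minimization is deterministic, which is the key efficiency idea of the method described above.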
Family Life and Human Development: Sample Units, K-6. Revised.
ERIC Educational Resources Information Center
Prince George's County Public Schools, Upper Marlboro, MD.
Sample unit outlines, designed for kindergarten through grade six, define the content, activities, and assessment tasks appropriate to specific grade levels. The units have been extracted from the Board-approved curriculum, Health Education: The Curricular Approach to Optimal Health. The instructional guidelines for grade one are: describing a…
Code of Federal Regulations, 2013 CFR
2013-01-01
.... See “Grade.” Condition. “Condition” means the degree of soundness of the product which may affect its merchantability and includes, but is not limited to those factors which are subject to change as a result of age... affected by one or more deviations or a sample unit that varies in a specifically defined manner from...
Obtaining representative ground water samples is important for site assessment and
remedial performance monitoring objectives. Issues which must be considered prior to initiating a ground-water monitoring program include defining monitoring goals and objectives, sampling point...
Rabban, Joseph T; Krasik, Ellen; Chen, Lee-May; Powell, Catherine B; Crawford, Beth; Zaloudek, Charles J
2009-12-01
exhaustive multistep level sectioning of all remaining tubal and ovarian blocks from both these women confirmed the original benign diagnosis in 1 woman but in the other woman, the deepest levels of 1 ovarian block revealed a single 1-mm nodule of cancer at the base of an ovarian surface epithelial invagination. This specimen was one of the first RRSO cases in our experience and on review of the original report, this ovary was not dissected into multiple slices along its short axis but was only bivalved along its long axis. We propose that there does not seem to be any diagnostic value in automatically performing multistep deeper level sections of RRSO specimens if the tissue is sectioned appropriately and if the specimen is sliced at intervals that are no more than 3 mm thick. Guidelines for evaluation of RRSO specimens should emphasize the use of an optimal dissection protocol and the importance of thin tissue slice intervals.
Noguerol-Arias, Joan; Rodríguez-Abalde, Angela; Romero-Merino, Eva; Flotats, Xavier
2012-07-03
This paper reports the development of an innovative sample preparation method for the determination of the chemical oxygen demand (COD) in heterogeneous solid or semisolid samples with high suspended solids and COD concentrations, using an optimized closed reflux colorimetric method. The novel method, named solid dilution (SD), is based on a different sample preparation technique: diluting the sample with magnesium sulfate (MgSO(4)) prior to COD determination. This yields a homogeneous solid mixture that is much easier to analyze. In addition, the concentration and ratio of the reagents were optimized to make the closed reflux colorimetric method suitable for complex substrates with COD levels ranging from 5 to 2500 g O(2) kg(-1) TS. The optimized method was tested with potassium hydrogen phthalate (KHP) as a primary solid standard and with different solid or semiliquid substrates, such as pig slaughterhouse waste and sewage sludge, among others. Finally, the optimized method (SD/SM-CRC) was intensively tested against the standard titrimetric method (SM-ORT) using different certified reference materials (CRM). The developed method was found to give higher accuracy, with a relative standard deviation (RSD) of 1.4% vs 10.4% and a bias of 2.8% vs 8.0%, compared with the standard open reflux titrimetric method.
NASA Astrophysics Data System (ADS)
Probst, Roland
In this thesis I show achievements for precision feedback control of objects inside micro-fluidic systems and for magnetically guided ferrofluids. Essentially, this is about doing flow control, but flow control on the microscale, and further even to nanoscale accuracy, to precisely and robustly manipulate micro- and nano-objects (i.e. cells and quantum dots). Target applications include methods to miniaturize the operations of a biological laboratory (lab-on-a-chip), i.e. presenting pathogens to on-chip sensing cells or extracting cells from messy bio-samples such as saliva, urine, or blood; as well as non-biological applications such as deterministically placing quantum dots on photonic crystals to make multi-dot quantum information systems. The particles are steered by creating an electrokinetic fluid flow that carries all the particles from where they are to where they should be at each time step. The control loop comprises sensing, computation, and actuation to steer particles along trajectories. Particle locations are identified in real time by an optical system and transferred to a control algorithm that then determines the electrode voltages necessary to create a flow field to carry all the particles to their next desired locations. The process repeats at the next time instant. I address the following aspects of this technology. First, I explain control and vision algorithms for steering single and multiple particles, and show extensions of these algorithms for steering in three-dimensional (3D) spaces. Then I show algorithms for calculating minimum-power paths for steering multiple particles in actuation-constrained environments. With this microfluidic system I steer biological cells and nanoparticles (quantum dots) to nanometer precision. In the last part of the thesis I develop and experimentally demonstrate two-dimensional (2D) manipulation of a single droplet of ferrofluid by feedback control of 4 external electromagnets, with a view towards enabling
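The sense-compute-actuate loop described in this abstract can be caricatured as a proportional controller: at each step the particle position is sensed, a corrective flow toward the target is computed, and the position is updated. The dynamics and gain here are made up; the real system maps electrode voltages to electrokinetic flow fields.

```python
# Minimal sketch of the sensing/computation/actuation loop for particle
# steering, reduced to proportional control of a 2-D position. All dynamics
# and parameters are hypothetical.
def steer(start, target, gain=0.4, steps=50, tol=1e-3):
    pos = list(start)
    for _ in range(steps):
        # sensing: current position; computation: proportional correction
        flow = [gain * (t - p) for p, t in zip(pos, target)]
        # actuation: the computed flow carries the particle one step
        pos = [p + f for p, f in zip(pos, flow)]
        if max(abs(t - p) for p, t in zip(pos, target)) < tol:
            break
    return pos

final = steer([0.0, 0.0], [1.0, 2.0])
```

Each iteration shrinks the remaining error by a constant factor (here 1 - gain), so the particle converges geometrically to the target in this idealized model.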
Shamsipur, Mojtaba; Mirmohammadi, Mehrosadat
2014-11-01
Dispersive liquid-liquid microextraction (DLLME) coupled with high performance liquid chromatography with ultraviolet detection (HPLC-UV), a fast and inexpensive technique, was applied to the determination of imipramine and trimipramine in urine samples. Response surface methodology (RSM) was used for multivariate optimization of the effects of seven different parameters influencing the extraction efficiency of the proposed method. Under optimized experimental conditions, the enrichment factors and extraction recoveries were 161.7-186.7 and 97-112%, respectively. The linear range and limit of detection for both analytes were found to be 5-100 ng mL(-1) and 0.6 ng mL(-1), respectively. The relative standard deviations for 5 ng mL(-1) of the drugs in urine samples were in the range of 5.1-6.1% (n=5). The developed method was successfully applied to real urine sample analyses.
Lee, John R.
1975-01-01
Optimal fluoridation has been defined as that fluoride exposure which confers maximal cariostasis with minimal toxicity and its values have been previously determined to be 0.5 to 1 mg per day for infants and 1 to 1.5 mg per day for an average child. Total fluoride ingestion and urine excretion were studied in Marin County, California, children in 1973 before municipal water fluoridation. Results showed fluoride exposure to be higher than anticipated and fulfilled previously accepted criteria for optimal fluoridation. Present and future water fluoridation plans need to be reevaluated in light of total environmental fluoride exposure. PMID:1130041
Clarifying and Defining Library Services.
ERIC Educational Resources Information Center
Shubert, Joseph F., Ed.; Josey, E. J., Ed.
1991-01-01
This issue presents articles which, in some way, help to clarify and define library services. It is hoped that this clarification in library service will serve to secure the resources libraries need to serve the people of New York. The following articles are presented: (1) Introduction: "Clarifying and Defining Library Services" (Joseph…
Woodruff, S P; Johnson, T R; Waits, L P
2015-07-01
Knowledge of population demographics is important for species management but can be challenging in low-density, wide-ranging species. Population monitoring of the endangered Sonoran pronghorn (Antilocapra americana sonoriensis) is critical for assessing the success of recovery efforts, and noninvasive DNA sampling (NDS) could be more cost-effective and less intrusive than traditional methods. We evaluated faecal pellet deposition rates and faecal DNA degradation rates to maximize sampling efficiency for DNA-based mark-recapture analyses. Deposition data were collected at five watering holes using sampling intervals of 1-7 days and averaged one pellet pile per pronghorn per day. To evaluate nuclear DNA (nDNA) degradation, 20 faecal samples were exposed to local environmental conditions and sampled at eight time points from one to 124 days. Average amplification success rates for six nDNA microsatellite loci were 81% for samples on day one, 63% by day seven, 2% by day 14 and 0% by day 60. We evaluated the efficiency of different sampling intervals (1-10 days) by estimating the number of successful samples, the success rate of individual identification and the laboratory cost per successful sample. Cost per successful sample increased, and success and efficiency declined, as the sampling interval increased. Results indicate NDS of faecal pellets is a feasible method for individual identification, population estimation and demographic monitoring of Sonoran pronghorn. We recommend collecting samples <7 days old and estimate that a sampling interval of four to seven days in summer conditions (i.e., extreme heat and exposure to UV light) will achieve desired sample sizes for mark-recapture analysis while also maximizing efficiency.
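A simple way to picture the reported decline in amplification success (81% at day 1, 63% at day 7, 2% at day 14) is a log-linear fit, success ≈ exp(a + b·day), over the nonzero points; this is only an illustration of how a usable-sample window might be estimated, not the authors' analysis.

```python
# Rough sketch: fit ln(success) = a + b*day by least squares to the nonzero
# success rates quoted in the abstract, giving a crude decay model.
import math

days    = [1, 7, 14]
success = [0.81, 0.63, 0.02]

x = days
y = [math.log(s) for s in success]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)        # slope: decay rate per day
a = my - b * mx                            # intercept

def predicted(day):
    """Modeled amplification success rate at a given sample age (days)."""
    return math.exp(a + b * day)
```

The fitted slope is negative, consistent with the recommendation to collect samples less than 7 days old.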
Considerations and Challenges in Defining Optimal Iron Utilization in Hemodialysis
Pai, Amy Barton; Chan, Christopher T.; Coyne, Daniel W.; Hung, Adriana M.; Kovesdy, Csaba P.; Fishbane, Steven
2015-01-01
Trials raising concerns about erythropoiesis-stimulating agents, revisions to their labeling, and changes to practice guidelines and dialysis payment systems have provided strong stimuli to decrease erythropoiesis-stimulating agent use and increase intravenous iron administration in recent years. These factors have been associated with a rise in iron utilization, particularly among hemodialysis patients, and an unprecedented increase in serum ferritin concentrations. The mean serum ferritin concentration among United States dialysis patients in 2013 exceeded 800 ng/ml, with 18% of patients exceeding 1200 ng/ml. Although these changes are broad based, the wisdom of these practices is uncertain. Herein, we examine influences on and trends in intravenous iron utilization and assess the clinical trial, epidemiologic, and experimental evidence relevant to its safety and efficacy in the setting of maintenance dialysis. These data suggest a potential for harm from increasing use of parenteral iron in dialysis-dependent patients. In the absence of well powered, randomized clinical trials, available evidence will remain inadequate for making reliable conclusions about the effect of a ubiquitous therapy on mortality or other outcomes of importance to dialysis patients. Nephrology stakeholders have an urgent obligation to initiate well designed investigations of intravenous iron in order to ensure the safety of the dialysis population. PMID:25542967
Considerations and challenges in defining optimal iron utilization in hemodialysis.
Charytan, David M; Pai, Amy Barton; Chan, Christopher T; Coyne, Daniel W; Hung, Adriana M; Kovesdy, Csaba P; Fishbane, Steven
2015-06-01
Trials raising concerns about erythropoiesis-stimulating agents, revisions to their labeling, and changes to practice guidelines and dialysis payment systems have provided strong stimuli to decrease erythropoiesis-stimulating agent use and increase intravenous iron administration in recent years. These factors have been associated with a rise in iron utilization, particularly among hemodialysis patients, and an unprecedented increase in serum ferritin concentrations. The mean serum ferritin concentration among United States dialysis patients in 2013 exceeded 800 ng/ml, with 18% of patients exceeding 1200 ng/ml. Although these changes are broad based, the wisdom of these practices is uncertain. Herein, we examine influences on and trends in intravenous iron utilization and assess the clinical trial, epidemiologic, and experimental evidence relevant to its safety and efficacy in the setting of maintenance dialysis. These data suggest a potential for harm from increasing use of parenteral iron in dialysis-dependent patients. In the absence of well powered, randomized clinical trials, available evidence will remain inadequate for making reliable conclusions about the effect of a ubiquitous therapy on mortality or other outcomes of importance to dialysis patients. Nephrology stakeholders have an urgent obligation to initiate well designed investigations of intravenous iron in order to ensure the safety of the dialysis population.
Clasen, Julie; Mellerup, Anders; Olsen, John Elmerdahl; Angen, Øystein; Folkesson, Anders; Halasa, Tariq; Toft, Nils; Birkegård, Anna Camilla
2016-06-30
The primary objective of this study was to determine the minimum number of individual fecal samples to pool together in order to obtain a representative sample for herd-level quantification of antimicrobial resistance (AMR) genes in a Danish pig herd, using a novel high-throughput qPCR assay. The secondary objective was to assess the agreement between different methods of sample pooling. Quantification of AMR was achieved using a high-throughput qPCR method to quantify the levels of seven AMR genes (ermB, ermF, sulI, sulII, tet(M), tet(O) and tet(W)). A large variation in the levels of AMR genes was found between individual samples. As the number of samples in a pool increased, a decrease in sample variation was observed. It was concluded that the optimal pool size is five samples, as an almost steady state in the variation was observed when pooling this number of samples. Good agreement between the different pooling methods was found, and the least time-consuming method of pooling - transferring feces from each individual sample to a tube with a 10 μl inoculation loop and adding 3.5 ml of PBS, approximating a 10% solution - can therefore be used in future studies.
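The diminishing-returns behaviour of pooling can be mimicked with a tiny simulation: pooled values are means of k individual levels, and their variance shrinks roughly as 1/k, flattening out near k = 5. The log-normal spread below is a made-up stand-in for between-animal variation in gene levels, not the study's data.

```python
# Minimal simulation of pooling: variance of the pooled estimate drops as the
# pool size grows, with diminishing returns. Distribution parameters are
# invented for illustration.
import random
import statistics

random.seed(1)

def pooled_estimates(pool_size, n_pools=500):
    # each pooled value = mean of `pool_size` individual (log-normal) levels
    return [
        statistics.mean(random.lognormvariate(0.0, 0.5) for _ in range(pool_size))
        for _ in range(n_pools)
    ]

variances = {k: statistics.variance(pooled_estimates(k)) for k in (1, 2, 5, 10)}
```

Going from 1 to 5 samples per pool cuts the variance substantially, while going from 5 to 10 gains comparatively little, mirroring the "almost steady state" reported above.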
Technology Transfer Automated Retrieval System (TEKTRAN)
DNA microarrays are promising high-throughput tools for multiple pathogen detection. Currently, the performance and cost of this platform has limited its broad application in identifying microbial contaminants in foods. In this study, an optimized custom DNA microarray with flexibility in design and...
Technology Transfer Automated Retrieval System (TEKTRAN)
A broad-specific and sensitive immunoassay for the detection of sulfonamides was developed by optimizing the conditions of an enzyme-linked immunosorbent assay (ELISA) in regard to different monoclonal antibodies (MAbs), assay format, immunoreagents, and several physicochemical factors (pH, salt, de...
Liau, An-Shu; Liu, Ju-Tsung; Lin, Li-Chan; Chiu, Yu-Chih; Shu, You-Ren; Tsai, Chung-Chen; Lin, Cheng-Huang
2003-06-24
The chiral separation of (+/-)-methamphetamine, (+/-)-methcathinone, (+/-)-ephedrine and (+/-)-pseudoephedrine by means of beta-cyclodextrin-modified capillary electrophoresis is described. The distribution of enantiomers in clandestine tablets and urine samples was identified. Several electrophoretic parameters, such as the concentration of beta-cyclodextrin, temperature, the applied voltage and the amount of organic solvent required for successful separation, were optimized. The method, as described herein, represents a good complementary method to GC-MS for use in forensic and clinical analysis.
The Problem of Defining Intelligence.
ERIC Educational Resources Information Center
Lubar, David
1981-01-01
The major philosophical issues surrounding the concept of intelligence are reviewed with respect to the problems surrounding the process of defining and developing artificial intelligence (AI) in computers. Various current definitions and problems with these definitions are presented. (MP)
Tabibnejad, Mahsa; Alikhani, Mohammad Yousef; Arjomandzadegan, Mohammad; Hashemi, Seyed Hamid; Naseri, Zahra
2016-01-01
Background Brucellosis is a zoonotic disease that is widespread across the world. Objectives The aim of the present study is the evaluation of culture-negative blood samples. Materials and Methods A total of 100 patients with suspected brucellosis and positive serological tests were included in this experimental study. Diagnosis was based on clinical symptoms of the disease together with the detection of a titer equal to or greater than 1:160 (in endemic areas) by the standard tube agglutination method. Blood samples were cultured in a BACTEC 9050 system and subsequently on Brucella agar. At the same time, DNA from all blood samples was extracted with a Qiagen kit (QIAamp Mini Kit). A molecular assay of blood samples was carried out by detection of the eryD transcriptase and bcsp 31 genes in specific double PCR reactions. The specificity of the primers was evaluated with DNA from pure, confirmed Brucella colonies from the blood samples, with DNA from other bacteria, and by ordinary PCR. DNA extraction from the pure colonies was carried out by both the Qiagen kit and Chelex 100 methods, and the two were compared. Results 39 cases (39%) were positive when tested by the BACTEC system, and 61 cases (61%) were negative. 23 culture-positive blood samples were randomly selected for PCR reactions; all showed 491 bp for the eryD gene and 223 bp for the bcsp 31 gene. Interestingly, of 14 culture-negative blood samples, 13 showed positive bands in PCR. The specificity of the PCR method was equal to 100%. DNA extraction from pure cultures was done by both Chelex 100 and the Qiagen kit; these showed the same results for all samples. Conclusions The results show that the presented double PCR method can be used to detect positive cases among culture-negative blood samples. The Chelex 100 method is simpler and safer than the Qiagen kit for DNA extraction. PMID:27330831
NASA Astrophysics Data System (ADS)
Nieminen, Teemu; Lähteenmäki, Pasi; Tan, Zhenbing; Cox, Daniel; Hakonen, Pertti J.
2016-11-01
We present a microwave correlation measurement system based on two low-cost USB-connected software-defined radio dongles modified to operate as coherent receivers by using a common local oscillator. Existing software is used to obtain I/Q samples from both dongles simultaneously at a software-tunable frequency. To achieve low noise, we introduce a simple low-noise solution for cryogenic amplification at 600-900 MHz based on a single discrete HEMT with 21 dB gain and a 7 K noise temperature. In addition, we discuss quantization effects in a digital correlation measurement and the determination of the optimal integration time by applying Allan deviation analysis.
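The Allan-deviation approach to choosing an integration time can be sketched generically; the noise model (white receiver noise plus slow drift) and the non-overlapping block estimator below are textbook assumptions, not the authors' code or data.

```python
import math
import random

def allan_deviation(x, m):
    """Non-overlapping Allan deviation of samples x at averaging length m."""
    n = len(x) // m
    means = [sum(x[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [b - a for a, b in zip(means, means[1:])]
    return math.sqrt(0.5 * sum(d * d for d in diffs) / len(diffs))

random.seed(1)
N = 100_000
# White noise averages down with integration time; the quadratic drift
# term eventually dominates, so the Allan deviation has a minimum.
signal = [random.gauss(0.0, 1.0) + 1e-9 * t * t for t in range(N)]
for m in (1, 10, 100, 1000, 10_000):
    print(m, round(allan_deviation(signal, m), 3))
```

The averaging length at which the printed deviation is smallest marks the optimal integration time: averaging longer stops helping once drift dominates.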
Increased taxon sampling greatly reduces phylogenetic error.
Zwickl, Derrick J; Hillis, David M
2002-08-01
Several authors have argued recently that extensive taxon sampling has a positive and important effect on the accuracy of phylogenetic estimates. However, other authors have argued that there is little benefit of extensive taxon sampling, and so phylogenetic problems can or should be reduced to a few exemplar taxa as a means of reducing the computational complexity of the phylogenetic analysis. In this paper we examined five aspects of study design that may have led to these different perspectives. First, we considered the measurement of phylogenetic error across a wide range of taxon sample sizes, and conclude that the expected error based on randomly selecting trees (which varies by taxon sample size) must be considered in evaluating error in studies of the effects of taxon sampling. Second, we addressed the scope of the phylogenetic problems defined by different samples of taxa, and argue that phylogenetic scope needs to be considered in evaluating the importance of taxon-sampling strategies. Third, we examined the claim that fast and simple tree searches are as effective as more thorough searches at finding near-optimal trees that minimize error. We show that a more complete search of tree space reduces phylogenetic error, especially as the taxon sample size increases. Fourth, we examined the effects of simple versus complex simulation models on taxonomic sampling studies. Although benefits of taxon sampling are apparent for all models, data generated under more complex models of evolution produce higher overall levels of error and show greater positive effects of increased taxon sampling. Fifth, we asked if different phylogenetic optimality criteria show different effects of taxon sampling. Although we found strong differences in effectiveness of different optimality criteria as a function of taxon sample size, increased taxon sampling improved the results from all the common optimality criteria. Nonetheless, the method that showed the lowest overall
Huhn, Carolin; Pütz, Michael; Holthausen, Ivie; Pyell, Ute
2008-01-01
A micellar electrokinetic chromatographic method with in-line UV and (UV)LIF detection was developed for the determination of aromatic constituents, mainly allylbenzenes, in essential oils. Method optimization included tuning the composition of the separation electrolyte, using ACN and urea to reduce retention factors and CaCl2 to widen the migration time window. In addition, it was necessary to optimize the composition of the sample solution, which included the addition of a neutral surfactant at high concentration. With the optimized method, the determination of minor constituents in essential oils was possible despite the presence of a structurally related compound at a molar excess of 1000:1. The use of UV and LIF detection in-line enabled direct comparison of the two detection traces using an electrophoretic-mobility x-axis instead of the normal time-based scale, which simplifies the assignment of signals and enhances repeatability. The method was successfully applied to the determination of minor and major constituents in herbal essential oils, some of them forensically relevant as sources of precursors for synthetic drugs.
ERIC Educational Resources Information Center
Brisco, Nicole D.
2010-01-01
In the author's art class, she found that many of her students in an intro art class have some technical skill, but lack the ability to think conceptually. Her goal was to create an innovative project that combined design, painting, and sculpture into a compact unit that asked students how they define themselves. In the process of answering this…
Defining and Measuring Psychomotor Performance
ERIC Educational Resources Information Center
Autio, Ossi
2007-01-01
Psychomotor performance is fundamental to human existence. It is important in many real world activities and nowadays psychomotor tests are used in several fields of industry, army, and medical sciences in employee selection. This article tries to define psychomotor activity by introducing some psychomotor theories. Furthermore the…
Defining "Folklore" in the Classroom.
ERIC Educational Resources Information Center
Falke, Anne
Folklore, a body of traditional beliefs of a people conveyed orally or by means of custom, is very much alive, involves all people, and is not the study of popular culture. In studying folklore, the principal tasks of the folklorist have been defined as determining definition, classification, source (the folk), origin (who composed folklore),…
NASA Astrophysics Data System (ADS)
Dietze, M. C.; Davidson, C. D.; Desai, A. R.; Feng, X.; Kelly, R.; Kooper, R.; LeBauer, D. S.; Mantooth, J.; McHenry, K.; Serbin, S. P.; Wang, D.
2012-12-01
Ecosystem models are designed to synthesize our current understanding of how ecosystems function and to predict responses to novel conditions, such as climate change. Reducing uncertainties in such models can thus improve both basic scientific understanding and our predictive capacity, but rarely have the models themselves been employed in the design of field campaigns. In the first part of this paper we provide a synthesis of uncertainty analyses conducted using the Predictive Ecosystem Analyzer (PEcAn) ecoinformatics workflow on the Ecosystem Demography model v2 (ED2). This work spans a number of projects synthesizing trait databases and using Bayesian data assimilation techniques to incorporate field data across temperate forests, grasslands, agriculture, short rotation forestry, boreal forests, and tundra. We report on a number of data needs that span a diverse array of biomes, such as the need for better constraint on growth respiration. We also identify other data needs that are biome specific, such as reproductive allocation in tundra, leaf dark respiration in forestry and early-successional trees, and root allocation and turnover in mid- and late-successional trees. Future data collection needs to balance the unequal distribution of past measurements across biomes (temperate biased) and processes (aboveground biased) with the sensitivities of different processes. In the second part we present the development of a power analysis and sampling optimization module for the PEcAn system. This module uses the results of variance decomposition analyses to estimate the further reduction in model predictive uncertainty for different sample sizes of different variables. By assigning a cost to each measurement type, we apply basic economic theory to optimize the reduction in model uncertainty for any total expenditure, or to determine the cost required to reduce uncertainty to a given threshold. Using this system we find that sampling switches among multiple
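The cost-constrained sampling optimization described in the second part can be sketched with the classic Lagrange-multiplier allocation: if a trait's variance contribution falls as v/n with n samples at cost c each, the budget-optimal sample count is proportional to sqrt(v/c). The trait names, variances, and costs below are hypothetical, not PEcAn outputs.

```python
import math

# Hypothetical variance contributions (v) and per-sample costs (c);
# illustrative numbers only.
traits = {"growth_respiration": (4.0, 10.0),
          "leaf_dark_respiration": (2.0, 50.0),
          "root_turnover": (1.0, 200.0)}
budget = 5000.0

# Minimizing sum(v_i / n_i) subject to sum(c_i * n_i) = budget gives
# n_i proportional to sqrt(v_i / c_i) (standard Lagrange-multiplier result).
weights = {k: math.sqrt(v / c) for k, (v, c) in traits.items()}
scale = budget / sum(traits[k][1] * w for k, w in weights.items())
allocation = {k: scale * w for k, w in weights.items()}
for k, n in allocation.items():
    print(k, round(n, 1))
```

Cheap, high-uncertainty measurements get the most samples, which is the qualitative behavior the module exploits when trading off expenditure against predictive uncertainty.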
Mitsika, Elena E; Christophoridis, Christophoros; Fytianos, Konstantinos
2013-11-01
The aims of this study were (a) to evaluate the degradation of acetamiprid with the use of the Fenton reaction, (b) to investigate the effect of different concentrations of H2O2 and Fe(2+), initial pH and various iron salts on the degradation of acetamiprid and (c) to apply response surface methodology for the evaluation of degradation kinetics. The kinetic study revealed a two-stage process, described by pseudo-first- and second-order kinetics. Different H2O2:Fe(2+) molar ratios were examined for their effect on acetamiprid degradation kinetics. A ratio of 3 mg L(-1) Fe(2+) : 40 mg L(-1) H2O2 was found to completely remove acetamiprid in less than 10 min. The degradation rate was faster at lower pH, with the optimal value at pH 2.9, while Mohr's salt degraded acetamiprid faster. A central composite design was selected in order to observe the effects of initial Fe(2+) and H2O2 concentrations on acetamiprid degradation kinetics. A quadratic model fitted the experimental data with satisfactory regression and fit. The most significant effect on the degradation of acetamiprid was induced by ferrous iron concentration, followed by H2O2. Optimization, aiming to minimize the applied ferrous concentration and the process time, proposed a ratio of 7.76 mg L(-1) Fe(II) : 19.78 mg L(-1) H2O2. DOC is reduced much more slowly and requires more than 6 h of processing for 50% degradation. The use of zero-valent iron demonstrated fast kinetic rates, with acetamiprid degradation occurring in 10 min and effective DOC removal.
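A pseudo-first-order fit of the kind used in the kinetic study can be sketched as follows; the decay data are synthetic stand-ins, not the paper's measurements.

```python
import math

# Synthetic decay data (illustrative): times in minutes, concentrations
# in mg/L, roughly following C = C0 * exp(-k * t).
times = [0, 1, 2, 4, 6, 8, 10]
conc = [10.0, 7.4, 5.5, 3.0, 1.7, 0.9, 0.5]

# Pseudo-first-order fit: the slope of ln(C) versus t is -k.
xs, ys = times, [math.log(c) for c in conc]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
k = -sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
half_life = math.log(2) / k
print(f"k = {k:.3f} 1/min, half-life = {half_life:.2f} min")
```

Linearity of ln(C) versus t is the usual diagnostic for the first stage; a systematic curvature would signal the switch to the second-order regime the abstract mentions.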
Kopka, Julieta; Leder, Monika; Jaureguiberry, Stella M; Brem, Gottfried; Boselli, Gabriel O
2011-09-01
Obtaining complete short tandem repeat (STR) profiles from fingerprints containing minimal amounts of DNA can be difficult using standard extraction techniques. The aim of this study was to evaluate a new kit, Fingerprint DNA Finder (FDF Kit), recently launched for the extraction of DNA and STR profiling from fingerprints placed on a special device known as the Self-Adhesive Security Seal Sticker(®) and from other latent fingerprints on forensic evidentiary material such as metallic guns. The DNA extraction system is based on a reversal of the silica principle: all the potential inhibiting substances are retained on the surface of a special adsorbent, while nucleic acids are not bound and remain in solution, dramatically improving DNA recovery. DNA yield was quite variable among the samples tested, yielding complete STR profiles, free of PCR inhibitors and devoid of artifacts, in most cases (>90%). Even samples with DNA amounts below 100 pg could be successfully analyzed.
Bashiry, Moein; Mohammadi, Abdorreza; Hosseini, Hedayat; Kamankesh, Marzieh; Aeenehvand, Saeed; Mohammadi, Zaniar
2016-01-01
A novel method based on microwave-assisted extraction and dispersive liquid-liquid microextraction (MAE-DLLME) followed by high-performance liquid chromatography (HPLC) was developed for the determination of three polyamines in turkey breast meat samples. Response surface methodology (RSM) based on a central composite design (CCD) was used to optimize the effective factors in the DLLME process. The optimum microextraction efficiency was obtained under the optimized conditions. The calibration graphs of the proposed method were linear in the range of 20-200 ng g(-1), with coefficients of determination (R(2)) higher than 0.9914. The relative standard deviations were 6.72-7.30% (n = 7). The limits of detection were in the range of 0.8-1.4 ng g(-1). The recoveries of these compounds from spiked turkey breast meat samples were from 95% to 105%. The increased sensitivity of MAE-DLLME-HPLC-UV was demonstrated. Compared with previous methods, the proposed method is an accurate, rapid and reliable sample-pretreatment method.
Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Coll Lozano, Caterina
2012-08-01
A monitoring program based on an indirect method was conducted to assess the approximation of the olfactory impact in several wastewater treatment plants (in the present work, only one is shown). The method uses H2S passive sampling using Palmes-type diffusion tubes impregnated with silver nitrate and fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while recovery efficiency of the procedure, 93.0 ± 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant.
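The back-calculation of an ambient concentration from a Palmes-type diffusion tube follows Fick's first law, C = m·L/(D·A·t); the diffusion coefficient, tube geometry, and collected mass below are illustrative assumptions, not values from the paper.

```python
# Palmes-tube back-calculation sketch; all numbers are assumed.
D = 0.176e-4        # m^2/s, assumed diffusion coefficient of H2S in air
L = 0.071           # m, typical Palmes tube diffusion path length
A = 7.1e-5          # m^2, tube cross-section (~9.5 mm i.d.)
t = 5 * 24 * 3600   # s, 5-day exposure as in the abstract
m = 1.0e-9          # kg, assumed mass of H2S collected on the sorbent

C = m * L / (D * A * t)   # time-averaged ambient concentration, kg/m^3
C_ugm3 = C * 1e9          # micrograms per cubic metre
print(round(C_ugm3, 1))
```

Because the collected mass scales with exposure time, longer exposures lower the quantification limit, which is why the abstract recommends periods of at least 4 days.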
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian
1988-01-01
The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc. or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple-reduction optimization capability in the future.
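The problem structure described above (minimize weight subject to bounds on calculated responses, over face width, tooth count, and diametral pitch) can be sketched with a crude grid search; the weight and stress formulas below are toy stand-ins, not the NASA analysis codes or COPES/ADS.

```python
# Toy gear sizing: minimize weight subject to a bending-stress bound.
def weight(face_width, n_teeth, pitch):
    """Toy mass model: grows with face width and pitch diameter squared."""
    return face_width * (n_teeth / pitch) ** 2

def bending_stress(face_width, n_teeth, pitch, load=100.0):
    """Toy Lewis-style stress model (illustrative only)."""
    return load * pitch / (face_width * n_teeth)

best = None
for fw in [1.0, 1.5, 2.0, 2.5]:        # face width, in
    for nt in range(18, 41, 2):        # number of teeth
        for dp in [4, 6, 8, 10]:       # diametral pitch, 1/in
            if bending_stress(fw, nt, dp) <= 20.0:   # stress bound, ksi
                w = weight(fw, nt, dp)
                if best is None or w < best[0]:
                    best = (w, fw, nt, dp)
print(best)
```

A real run would replace the toy models with the gear analysis subroutines and the grid search with a gradient-based optimizer, but the competing-requirements structure is the same: the stress bound prevents the optimizer from shrinking the gear indefinitely.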
Defining Our National Cyberspace Boundaries
2010-02-17
invention of the World Wide Web in 1989, the Internet Corporation for Assigned Names and Numbers (ICANN) (the international organization that...anonymity in cyberspace could be accomplished through the issuing of IP addresses as the Internet transitions from IPv4 to IPv6. ICANN should issue...agreement (MOA) between the U.S. Department of Commerce and ICANN. This new MOA should define which blocks of IP addresses will be used for entities
How to define green adjuvants.
Beck, Bert; Steurbaut, Walter; Spanoghe, Pieter
2012-08-01
The concept 'green adjuvants' is difficult to define. This paper formulates an answer based on two approaches. Starting from the Organisation for Economic Cooperation and Development (OECD) definition for green chemistry, production-based and environmental-impact-based definitions for green adjuvants are proposed. According to the production-based approach, adjuvants are defined as green if they are manufactured using renewable raw materials as much as possible while making efficient use of energy, preferably renewable energy. According to the environmental impact approach, adjuvants are defined as green (1) if they have a low human and environmental impact, (2) if they do not increase active ingredient environmental mobility and/or toxicity to humans and non-target organisms, (3) if they do not increase the exposure to these active substances and (4) if they lower the impact of formulated pesticides by enhancing the performance of active ingredients, thus potentially lowering the required dosage of active ingredients. Based on both approaches, a tentative definition for 'green adjuvants' is given, and future research and legislation directions are set out.
Pooralhossini, Jaleh; Ghaedi, Mehrorang; Zanjanchi, Mohammad Ali; Asfaram, Arash
2017-01-01
A sensitive procedure, namely ultrasound-assisted (UA) dispersive nano solid-phase microextraction coupled with spectrophotometry (DNSPME-UV-Vis), was designed for the preconcentration and subsequent determination of gallic acid (GA) in water samples. The composition, morphology, purity and structure of the new sorbent were characterized by field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and energy-dispersive X-ray spectroscopy (EDX). Conventional parameters, viz. pH, amount of sorbent, sonication time and volume of elution solvent, were optimized by response surface methodology (RSM) with a central composite design; the best operational conditions were pH 2.0, 1.5 mg sorbent, 4.0 min sonication and 150 μL ethanol. Under these conditions the method has a linear response over the wide concentration range of 15-6000 ng mL(-1) with a correlation coefficient of 0.9996. Figures of merit such as the LOD (S/N=3) and LOQ (S/N=10), with values of 2.923 and 9.744 ng mL(-1), respectively, and relative recoveries between 95.54 and 100.02% show the applicability and efficiency of this method for real-sample analysis, with RSDs below 6.0%. Finally, the method was used to monitor the analyte in various real samples, including tap, river and mineral waters.
Amole, Carolyn D; Brisebois, Catherine; Essajee, Shaffiq; Koehler, Erin; Levin, Andrew D; Moore, Meredith C; Brown Ripin, David H; Sickler, Joanna J; Singh, Inder R
2011-08-01
Over the last decade, increased funding to support HIV treatment programs has enabled millions of new patients in developing countries to access the medications they need. Today, although demand for antiretrovirals continues to grow, the financial crisis has severely constrained funding, leaving countries with difficult choices on program prioritization. Product optimization is one solution countries can pursue to continue to improve patient care while also uncovering savings that can be used for further scale-up or other health system needs. Program managers can make procurement decisions that actually reduce program costs by considering additional factors beyond World Health Organization guidelines when making procurement decisions. These include in-country product availability, convenience, price, and logistics such as supply chain implications and laboratory testing requirements. Three immediate product selection opportunities in the HIV space include using boosted atazanavir in place of lopinavir for second-line therapy, lamivudine instead of emtricitabine in both first-line and second-line therapy, and tenofovir + lamivudine over abacavir + didanosine in second-line therapy. If these 3 opportunities were broadly implemented in sub-Saharan Africa and India today, approximately $300 million of savings would be realized over the next 5 years, enabling hundreds of thousands of additional patients to be treated. Although the discussion herein is specific to antiretrovirals, the principles of product selection are generalizable to diseases with multiple treatment options and fungible commodity procurement. Identifying and implementing approaches to overcome health system inefficiencies will help sustain and may expand quality care in resource-limited settings.
Yeşiller, Semira Unal; Yalçın, Serife
2013-04-03
Laser-induced breakdown spectrometry hyphenated with an on-line continuous-flow hydride generation sample introduction system (HG-LIBS) has been used for the determination of arsenic, antimony, lead and germanium in aqueous environments. Optimum chemical and instrumental parameters governing chemical hydride generation, laser plasma formation and detection were investigated for each element under argon and nitrogen atmospheres. Arsenic, antimony and germanium showed strong signal enhancement under an argon atmosphere, while lead showed no sensitivity to the ambient gas type. Detection limits of 1.1 mg L(-1), 1.0 mg L(-1), 1.3 mg L(-1) and 0.2 mg L(-1) were obtained for As, Sb, Pb and Ge, respectively. Up to a 77-fold improvement in the detection limit of Pb was obtained, compared to the result obtained from the direct analysis of liquids by LIBS. Applicability of the technique to real water samples was tested through spiking experiments, and recoveries higher than 80% were obtained. Results demonstrate that the HG-LIBS approach is suitable for quantitative analysis of toxic elements and sufficiently fast for real-time continuous monitoring in aqueous environments.
A simple method for defining malaria seasonality
2009-01-01
Background There is currently no standard way of defining malaria seasonality, resulting in a wide range of definitions reported in the literature. Malaria cases show seasonal peaks in most endemic settings, and the choice and timing for optimal malaria control may vary by seasonality. A simple approach is presented to describe the seasonality of malaria, to aid localized policymaking and targeting of interventions. Methods A series of systematic literature reviews were undertaken to identify studies reporting on monthly data for full calendar years on clinical malaria, hospital admission with malaria and entomological inoculation rates (EIR). Sites were defined as having 'marked seasonality' if 75% or more of all episodes occurred in six or fewer months of the year. A 'concentrated period of malaria' was defined as the six consecutive months with the highest cumulative proportion of cases. A sensitivity analysis was performed based on a variety of cut-offs. Results Monthly data for full calendar years on clinical malaria, all hospital admissions with malaria, and entomological inoculation rates were available for 13, 18, and 11 sites respectively. Most sites showed year-round transmission with seasonal peaks for both clinical malaria and hospital admissions with malaria, with a few sites fitting the definition of 'marked seasonality'. For these sites, consistent results were observed when more than one outcome or more than one calendar year was available from the same site. The use of monthly EIR data was found to be of limited value when looking at seasonal variations of malaria transmission, particularly at low and medium intensity levels. Conclusion The proposed definition discriminated well between studies with 'marked seasonality' and those with less seasonality. However, a poor fit was observed in sites with two seasonal peaks. Further work is needed to explore the applicability of this definition on a wide scale, using routine health information system data.
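The two definitions above are simple enough to state in code: 'marked seasonality' uses the share of episodes in the six highest-burden months, and the 'concentrated period' is the best six consecutive months, treating the year as circular. The monthly case counts below are invented for illustration.

```python
def concentrated_period(monthly_cases):
    """Return (start_month_index, share) of the 6 consecutive months with
    the highest cumulative proportion of cases; the year wraps around."""
    total = sum(monthly_cases)
    best_start, best_share = 0, 0.0
    for start in range(12):
        window = sum(monthly_cases[(start + i) % 12] for i in range(6))
        share = window / total
        if share > best_share:
            best_start, best_share = start, share
    return best_start, best_share

def marked_seasonality(monthly_cases, threshold=0.75):
    """75% or more of all episodes in six or fewer months of the year."""
    total = sum(monthly_cases)
    return sum(sorted(monthly_cases, reverse=True)[:6]) / total >= threshold

cases = [5, 4, 6, 30, 80, 120, 90, 40, 10, 6, 5, 4]   # hypothetical site
start, share = concentrated_period(cases)
print(start, round(share, 3), marked_seasonality(cases))
```

A site with two seasonal peaks can fail this test even when transmission is strongly seasonal, which is exactly the poor-fit case the Conclusion notes.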
Metzger, Stefan; Burba, George; Burns, Sean P.; ...
2016-03-31
Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) are set to provide the ability to draw unbiased ecological inferences across ecoclimatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analyzers are widely employed for eddy covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties and gas sampling systems, and requires correction. Here, we show that components of the gas sampling system can substantially contribute to such high-frequency attenuation, but their effects can be significantly reduced by careful system design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system: for different models of rain caps, this frequency falls into the ranges 2.5–16.5 Hz for CO2 and 2.4–14.3 Hz for H2O; for particulate filters, 8.3–21.8 Hz for CO2 and 1.4–19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyzer cell. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor–capacitor theory, and NEON's final gas sampling system was
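The resistor-capacitor analogy used to reconcile the lab and field tests treats the sampling system as a first-order low-pass filter; the 50 % attenuation frequency is then where the amplitude gain falls to 0.5. The time constant below is an assumed value, not a NEON measurement.

```python
import math

def gain(f, tau):
    """Amplitude response of a first-order (RC-style) low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f * tau) ** 2)

def half_attenuation_freq(tau):
    """Frequency where the gain falls to 0.5, i.e. 50 % attenuation:
    solving 1/sqrt(1 + (2*pi*f*tau)^2) = 0.5 gives f = sqrt(3)/(2*pi*tau)."""
    return math.sqrt(3.0) / (2.0 * math.pi * tau)

tau = 0.02   # s, hypothetical sampling-system time constant
f50 = half_attenuation_freq(tau)
print(round(f50, 2), round(gain(f50, tau), 3))
```

With this assumed tau the 50 % point lands near 14 Hz, within the ranges reported for the better-performing rain caps and filters; a larger tau (more attenuating component) pushes the 50 % frequency down.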
Harris, Stephanie A; Satti, Iman; Matsumiya, Magali; Stockdale, Lisa; Chomka, Agnieszka; Tanner, Rachel; O'Shea, Matthew K; Manjaly Thomas, Zita-Rose; Tameris, Michele; Mahomed, Hassan; Scriba, Thomas J; Hanekom, Willem A; Fletcher, Helen A; McShane, Helen
2014-07-01
The first phase IIb safety and efficacy trial of a new tuberculosis vaccine since that for BCG was completed in October 2012. BCG-vaccinated South African infants were randomized to receive modified vaccinia virus Ankara, expressing the Mycobacterium tuberculosis antigen 85A (MVA85A), or placebo. MVA85A did not significantly boost the protective effect of BCG. Cryopreserved samples provide a unique opportunity for investigating the correlates of the risk of tuberculosis disease in this population. Due to the limited amount of sample available from each infant, preliminary work was necessary to determine which assays and conditions give the most useful information. Peripheral blood mononuclear cells (PBMC) were stimulated with antigen 85A (Ag85A) and purified protein derivative from M. tuberculosis in an ex vivo gamma interferon (IFN-γ) enzyme-linked immunosorbent spot assay (ELISpot) and a Ki67 proliferation assay. The effects of a 2-h or overnight rest of thawed PBMC on ELISpot responses and cell populations were determined. Both the ELISpot and Ki67 assays detected differences between the MVA85A and placebo groups, and the results correlated well. The cell numbers and ELISpot responses decreased significantly after an overnight rest, and surface flow cytometry showed a significant loss of CD4(+) and CD8(+) T cells. Of the infants tested, 50% had a positive ELISpot response to a single pool of flu, Epstein-Barr virus (EBV), and cytomegalovirus (CMV) (FEC) peptides. This pilot work has been essential in determining the assays and conditions to be used in the correlate study. Moving forward, PBMC will be rested for 2 h before assay setup. The ELISpot assay, performed in duplicate, will be selected over the Ki67 assay, and further work is needed to evaluate the effect of high FEC responses on vaccine-induced immunity and susceptibility to tuberculosis disease.
Homotopy optimization methods for global optimization.
Dunlavy, Daniel M.; O'Leary, Dianne P. (University of Maryland, College Park, MD)
2005-12-01
We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating the performance of HOM and HOPE.
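The HOM idea described above can be sketched in a few lines. This is a minimal illustration, assuming a linear homotopy H(x, t) = (1 - t)g(x) + t f(x), a convex quadratic template g, and a fixed-step gradient-descent inner solver; none of these specific choices are taken from the paper.

```python
# Schematic sketch of the Homotopy Optimization Method (HOM): minimize
# H(x, t) = (1 - t) * g(x) + t * f(x) for a sequence of homotopy parameter
# values t, warm-starting each solve from the previous minimizer. The template
# g, target f, and gradient-descent inner solver are illustrative assumptions,
# not the authors' implementation.
import numpy as np

def hom(f_grad, g_grad, x0, steps=10, iters=200, lr=0.05):
    """Track minimizers of H(., t) as t goes from 0 (easy g) to 1 (target f)."""
    x = np.asarray(x0, dtype=float)
    for t in np.linspace(0.0, 1.0, steps + 1):
        for _ in range(iters):  # inner local minimization of H(., t)
            grad = (1.0 - t) * g_grad(x) + t * f_grad(x)
            x = x - lr * grad
    return x

# Target: nonconvex f(x) = x^4 - 3x^2 + x; template: g(x) = x^2.
f_grad = lambda x: 4 * x**3 - 6 * x + 1
g_grad = lambda x: 2 * x
x_min = hom(f_grad, g_grad, x0=[2.0])
print(x_min)
```

Warm-starting each inner solve from the previous minimizer is what distinguishes this from minimizing f directly: plain descent on f from x = 2.0 would stop at the shallower local minimum near x = 1.14, whereas the homotopy steers the iterate into the global basin near x = -1.30.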
Defining life: the virus viewpoint.
Forterre, Patrick
2010-04-01
Are viruses alive? Until very recently, the answer to this question was usually negative, and viruses were not considered in discussions on the origin and definition of life. This situation is rapidly changing, following several discoveries that have modified our vision of viruses. It has been recognized that viruses have played (and still play) a major innovative role in the evolution of cellular organisms. New definitions of viruses have been proposed and their position in the universal tree of life is actively discussed. Viruses are no longer confused with their virions, but can be viewed as complex living entities that transform the infected cell into a novel organism, the virus, which produces virions. I suggest here defining life (a historical process) as the mode of existence of ribosome-encoding organisms (cells) and capsid-encoding organisms (viruses) and their ancestors. I propose to define an organism as an ensemble of integrated organs (molecular or cellular) producing individuals evolving through natural selection. The origin of life on our planet would correspond to the establishment of the first organism corresponding to this definition.
Ranjbari, Elias; Hadjmohammadi, Mohammad Reza
2015-07-01
An accurate, rapid and efficient method for the extraction of rhodamine B (RB) and rhodamine 6G (RG), as well as their determination in three different matrices, was developed using magnetic stirring assisted dispersive liquid-liquid microextraction (MSA-DLLME) and HPLC-Vis. 1-Octanol and acetone were selected as the extraction and dispersing solvents, respectively. The variables considered in the optimization process were the volumes of the extraction and disperser solvents, pH of the sample solution, salt effect, temperature, stirring rate and vortex time. A methodology based on a fractional factorial design (2^(7-2)) was carried out to choose the significant variables for the optimization. The significant factors (extraction solvent volume, pH of sample solution, temperature, stirring rate) were then optimized using a central composite design (CCD), and a quadratic model between the dependent and independent variables was built. Under the optimum conditions (extraction solvent volume = 1050 µL, pH = 2, temperature = 35 °C and stirring rate = 1500 rpm), the calibration curves showed high levels of linearity (R(2) = 0.9999) for RB and RG in the ranges of 5-1000 ng mL(-1) and 7.5-1000 ng mL(-1), respectively. The extraction recoveries obtained for 100 ng mL(-1) of RB and RG standard solutions were 100% and 97%, and the preconcentration factors were 48 and 46, respectively. The limits of detection were 1.15 ng mL(-1) for RB and 1.23 ng mL(-1) for RG. Finally, the MSA-DLLME method was successfully applied for preconcentration and trace determination of RB and RG in different matrices of environmental waters, soft drink and cosmetic products.
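The quadratic model built from a central composite design can be illustrated with a small least-squares fit. The two-factor design points and the response values below are invented for demonstration; they are not the paper's extraction-recovery data.

```python
# Illustrative sketch of fitting the quadratic (response-surface) model used in
# central composite designs: y = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj).
# The data are synthetic, generated from a known quadratic so the fit is exact.
import itertools
import numpy as np

def quadratic_design_matrix(X):
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                    # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]               # pure quadratic terms
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

# Face-centred CCD points for two coded factors plus a centre point.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
true = lambda x: 90 + 3*x[0] - 2*x[1] - 4*x[0]**2 - 1.5*x[1]**2 + 0.5*x[0]*x[1]
y = np.array([true(x) for x in X])

coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(coef)  # recovers [90, 3, -2, -4, -1.5, 0.5]
```

The stationary point of the fitted quadratic (here a maximum, since both pure quadratic coefficients are negative) is what a CCD-based optimization reports as the optimum operating condition.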
Asfaram, Arash; Ghaedi, Mehrorang; Dashtian, Kheibar
2017-01-01
Ultrasound-assisted dispersive solid phase microextraction followed by UV-vis spectrophotometry (UA-DSPME-UV-vis) was designed for extraction and preconcentration of nicotinamide (vitamin B3) by an HKUST-1 metal organic framework (MOF) based molecularly imprinted polymer (MIP). This new material was characterized by FTIR and FE-SEM techniques. A preliminary Plackett-Burman design was used for screening; subsequently, a central composite design identified the significant terms and enabled construction of a mathematical equation giving the individual and cooperative contributions of variables such as HKUST-1-MOF-NA-MIP mass, sonication time, temperature, eluent volume, pH and vortex time. The optimum conditions for maximum recovery of the analyte were 2.0 mg HKUST-1-MOF-NA-MIP, 200 μL eluent and 5.0 min sonication time, with the other variables at their center points. The method showed excellent linearity over 10-5000 μg L(-1) with R(2) of 0.99, a limit of detection (LOD) of 1.96 ng mL(-1) and a limit of quantification (LOQ) of 6.53 μg L(-1), demonstrating its successful and accurate applicability for monitoring the analyte, with within- and between-day precision of 0.96-3.38%. The average absolute recoveries of nicotinamide extracted from urine, milk and water samples were 95.85-101.27%.
Johnston, Lisa Grazina; Whitehead, Sara; Simic-Lawson, Milena; Kendall, Carl
2010-06-01
Respondent-driven sampling (RDS) is widely adopted as a method to assess HIV and other sexually transmitted infection prevalence and risk factors among hard-to-reach populations. Failures to properly implement RDS in several settings could potentially have been avoided, had formative research been conducted. However, to date there is no published literature addressing the use of formative research in preparing for RDS studies. This paper uses examples from Banja Luka, Bosnia and Herzegovina; Bangkok, Thailand; Podgorica, Montenegro; and St Vincent's and Grenadine Islands, Eastern Caribbean; among populations of men who have sex with men, female sex workers, and injecting drug users to describe how formative research was used to plan, implement, and predict outcomes of RDS surveys and to provide a template of RDS-specific questions for conducting formative research in preparation for RDS surveys. We outline case studies to illustrate how formative research may help researchers to determine whether RDS methodology is appropriate for a particular population and sociocultural context, and to decide on implementation details that lead to successful study outcomes.
Defining Life: Synthesis and Conclusions
NASA Astrophysics Data System (ADS)
Gayon, Jean
2010-04-01
The first part of the paper offers philosophical landmarks on the general issue of defining life. §1 argues that the recognition of “life” has always been and remains primarily an intuitive process, for the scientist as for the layperson. However, we should not then expect to be able to draw a definition from this original experience, because our cognitive apparatus has not been primarily designed for this. §2 is about definitions in general. Two kinds of definition should be carefully distinguished: lexical definitions (based upon current uses of a word), and stipulative or legislative definitions, which deliberately assign a meaning to a word for the purpose of clarifying scientific or philosophical arguments. The present volume provides examples of these two kinds of definitions. §3 examines three traditional philosophical definitions of life, all of which were elaborated prior to the emergence of biology as a specific scientific discipline: life as animation (Aristotle), life as mechanism, and life as organization (Kant). All three concepts constitute a common heritage that deeply structures a good deal of our cultural intuitions and vocabulary any time we try to think about “life”. The present volume offers examples of these three concepts in contemporary scientific discourse. The second part of the paper proposes a synthesis of the major debates developed in this volume. Three major questions have been discussed. A first issue (§4) is whether we should define life or not, and why. Most authors are skeptical about the possibility of defining life in a strong way, although all admit that criteria are useful in contexts such as exobiology, artificial life and the origins of life. §5 examines the possible kinds of definitions of life presented in the volume. Those authors who have explicitly argued that a definition of life is needed can be classified into two categories. The first category (or standard view) refers to two conditions
Defining Characteristics of Creative Women
ERIC Educational Resources Information Center
Bender, Sarah White; Nibbelink, BradyLeigh; Towner-Thyrum, Elizabeth; Vredenburg, Debra
2013-01-01
This study was an effort to identify correlates of creativity in women. A sample of 447 college students were given the picture completion subtest of the Torrance Test of Creative Thinking, the "How Do You Think Test," the Revised NEO Personality Inventory, the Multidimensional Self-Esteem Inventory, the Family Environment Scale, and the…
Ritchie, Andrew W; Webb, Lauren J
2013-10-03
Continuum electrostatics methods are commonly used to calculate electrostatic potentials in proteins and at protein-protein interfaces to aid many types of biophysical studies. Despite their ubiquity throughout the biophysical literature, these calculations are difficult to test against experimental data to determine their accuracy and validity. To address this, we have calculated the Boltzmann-weighted electrostatic field at the midpoint of a nitrile bond placed at a variety of locations on the surface of the protein RalGDS, both in its monomeric form and when docked to four different constructs of the protein Rap, and compared the computational results to vibrational absorption energy measurements of the nitrile oscillator. This was done by generating a statistical ensemble of protein structures using enhanced molecular dynamics sampling with the Amber03 force field, followed by solving the linear Poisson-Boltzmann equation for each structure using the Applied Poisson-Boltzmann Solver (APBS) software package. Using a two-stage focusing strategy, we examined numerous second-stage box dimensions, grid point densities, and box locations, and compared the numerical result to the result obtained from the sum of the numeric reaction field and the analytic Coulomb field. It was found that the reaction field method yielded higher correlation with experiment for the absolute calculation of fields, while the numeric solutions yielded higher correlation with experiment for the relative field calculations. Finer grid spacing typically improved the calculation, although this effect was less pronounced in the reaction field method. These sorts of calculations were also very sensitive to the box location, particularly for the numeric calculations of absolute fields using a 10^3 Å^3 box.
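The grid-spacing sensitivity examined above can be illustrated on a much simpler model problem. The sketch below is a hedged, 1-D finite-difference solve of the linearized Poisson-Boltzmann (Debye-Hückel) equation, not an APBS calculation; all values and boundary conditions are arbitrary.

```python
# Toy 1-D finite-difference solve of the linearized Poisson-Boltzmann equation
# phi'' = kappa^2 * phi on (0, L) with phi(0) = 1, phi(L) = 0, compared against
# the analytic solution sinh(kappa*(L - x)) / sinh(kappa*L). Illustrates the
# second-order shrinking of grid error with spacing; real APBS runs are 3-D
# with multi-stage focusing.
import numpy as np

def solve_pb_1d(n, L=5.0, kappa=1.0):
    h = L / (n + 1)                       # grid spacing for n interior nodes
    x = np.linspace(h, L - h, n)
    # Tridiagonal system: phi[i-1] - (2 + (kappa*h)^2) phi[i] + phi[i+1] = 0
    A = (np.diag(np.full(n, -2.0 - (kappa * h) ** 2))
         + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    rhs = np.zeros(n)
    rhs[0] = -1.0                         # moves boundary value phi(0)=1 to RHS
    phi = np.linalg.solve(A, rhs)
    exact = np.sinh(kappa * (L - x)) / np.sinh(kappa * L)
    return np.max(np.abs(phi - exact))

coarse, fine = solve_pb_1d(25), solve_pb_1d(200)
print(coarse, fine)
```

Refining the grid by a factor of about eight shrinks the maximum error by roughly the square of that factor, the same qualitative behavior the abstract reports for finer APBS grid spacing.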
Hamiltonians defined by biorthogonal sets
NASA Astrophysics Data System (ADS)
Bagarello, Fabio; Bellomonte, Giorgia
2017-04-01
In some recent papers, studies on biorthogonal Riesz bases have found renewed motivation because of their connection with pseudo-Hermitian quantum mechanics, which deals with physical systems described by Hamiltonians that are not self-adjoint but may still have real point spectra. Also, their eigenvectors may form Riesz, not necessarily orthonormal, bases for the Hilbert space in which the model is defined. Those Riesz bases allow a decomposition of the Hamiltonian, as already discussed in some previous papers. However, in many physical models, one has to deal not with orthonormal bases or with Riesz bases, but just with biorthogonal sets. Here, we consider the more general concept of G-quasi basis, and we show a series of conditions under which a definition of non-self-adjoint Hamiltonian with purely point real spectra is still possible.
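The decomposition alluded to can be written compactly. This is the standard pseudo-Hermitian form with generic symbols, not notation quoted from the paper:

```latex
% Hedged sketch: biorthogonal decomposition of a non-self-adjoint Hamiltonian
% with real point spectrum. The sets {phi_n}, {psi_n} are biorthogonal:
\langle \psi_m , \varphi_n \rangle = \delta_{m,n}, \qquad
H = \sum_{n} E_n \, |\varphi_n\rangle\langle \psi_n| ,
% with right and left eigenvectors
H \varphi_n = E_n \varphi_n , \qquad H^{\dagger} \psi_n = E_n \psi_n ,
\qquad E_n \in \mathbb{R} .
```

When {φ_n} and {ψ_n} are merely biorthogonal sets rather than Riesz bases, the convergence of the sum is exactly the issue the G-quasi basis conditions are meant to address.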
Defining biocultural approaches to conservation.
Gavin, Michael C; McCarter, Joe; Mead, Aroha; Berkes, Fikret; Stepp, John Richard; Peterson, Debora; Tang, Ruifei
2015-03-01
We contend that biocultural approaches to conservation can achieve effective and just conservation outcomes while addressing erosion of both cultural and biological diversity. Here, we propose a set of guidelines for the adoption of biocultural approaches to conservation. First, we draw lessons from work on biocultural diversity and heritage, social-ecological systems theory, integrated conservation and development, co-management, and community-based conservation to define biocultural approaches to conservation. Second, we describe eight principles that characterize such approaches. Third, we discuss reasons for adopting biocultural approaches and challenges. If used well, biocultural approaches to conservation can be a powerful tool for reducing the global loss of both biological and cultural diversity.
Energy Velocity Defined by Brillouin
NASA Astrophysics Data System (ADS)
Hosono, Hiroyuki; Hosono, Toshio
The physical meaning of the energy velocity in lossy Lorentz media is clarified. First, two expressions for the energy velocity, one by Brillouin and the other by Diener, are examined. We show that, while Diener's is disqualified, Brillouin's is acceptable as energy velocity. Secondly, we show that the signal velocity defined by Brillouin and Baerwald is exactly identical with Brillouin's energy velocity. Thirdly, by using a triangle-modulated harmonic wave, we show that the superluminal group velocity becomes apparent only after the arrival of the signal traveling at the subluminal energy velocity. In short, nothing moves at the group velocity, and every frequency component of a signal propagates at its own energy velocity.
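For reference, Brillouin's energy velocity is conventionally defined as the ratio of the time-averaged Poynting flux to the total time-averaged energy density. This is the standard textbook form with generic symbols, not a formula quoted from the paper:

```latex
% Brillouin's energy velocity in a dispersive (Lorentz) medium:
v_E \;=\; \frac{\langle S \rangle}
              {\langle u_{\mathrm{field}} \rangle + \langle u_{\mathrm{medium}} \rangle},
\qquad v_E \le c .
```

Because the oscillator (medium) energy density is non-negative, v_E stays subluminal even in anomalous-dispersion bands where the group velocity exceeds c, which is consistent with the paper's conclusion that nothing moves at the group velocity.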
Miniature EVA Software Defined Radio
NASA Technical Reports Server (NTRS)
Pozhidaev, Aleksey
2012-01-01
As NASA embarks upon developing the Next-Generation Extra Vehicular Activity (EVA) Radio for deep space exploration, the demands on EVA battery life will substantially increase. The number of modes and frequency bands required will continue to grow in order to enable efficient and complex multi-mode operations including communications, navigation, and tracking applications. To support astronaut excursions, communications with soldiers, and first responders at emergency hazards, NASA has developed an innovative, affordable, miniaturized software defined radio that offers unprecedented power-efficient flexibility. This lightweight, programmable, S-band, multi-service, frequency-agile EVA software defined radio (SDR) supports data, telemetry, voice, and both standard and high-definition video. Features include a modular design and an easily scalable architecture, and the EVA SDR allows both stationary and mobile battery-powered handheld operation. Currently, the radio is equipped with an S-band RF section. However, its scalable architecture can accommodate multiple RF sections simultaneously to cover multiple frequency bands. The EVA SDR also supports multiple network protocols. It currently implements a Hybrid Mesh Network based on the 802.11s open standard protocol. The radio targets RF channel data rates up to 20 Mbps and can be equipped with a real-time operating system (RTOS) that can be switched off for power-aware applications. The EVA SDR's modular design permits the same hardware to be implemented at all network nodes. This approach assures the portability of the same software into any radio in the system. It also brings several benefits to the entire system, including reduced system maintenance, system complexity, and development cost.
Asadollahzadeh, Mehdi; Tavakoli, Hamed; Torab-Mostaedi, Meisam; Hosseini, Ghaffar; Hemmati, Alireza
2014-06-01
Dispersive-solidification liquid-liquid microextraction (DSLLME) coupled with electrothermal atomic absorption spectrometry (ETAAS) was developed for preconcentration and determination of inorganic arsenic (III, V) in water samples. At pH = 1, As(III) formed a complex with ammonium pyrrolidine dithiocarbamate (APDC) and was extracted into the fine droplets of 1-dodecanol (extraction solvent), which were dispersed with ethanol (disperser solvent) into the water sample solution. After extraction, the organic phase was separated by centrifugation and was solidified by transferring into an ice bath. The solidified solvent was transferred to a conical vial and melted quickly at room temperature. As(III) was determined in the melted organic phase while As(V) remained in the aqueous layer. Total inorganic As was determined after the reduction of the pentavalent forms of arsenic with sodium thiosulphate and potassium iodide. As(V) was calculated from the difference between the concentration of total inorganic As and As(III). The variables of interest in the DSLLME method (the volumes of extraction and disperser solvents, pH, concentration of APDC as chelating agent, extraction time and salt effect) were optimized with the aid of chemometric approaches. First, in screening experiments, a fractional factorial design (FFD) was used for selecting the variables which significantly affected the extraction procedure. Afterwards, the significant variables were optimized using response surface methodology (RSM) based on a central composite design (CCD). Under the optimum conditions, the proposed method was successfully applied to the determination of inorganic arsenic in different environmental water samples and certified reference material (NIST SRM 1643e).
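The FFD screening step used above can be illustrated by constructing a small fractional factorial design. The sketch builds a half-fraction 2^(4-1) design with generator D = ABC, the classic construction behind larger designs of this kind; the factor names are illustrative, not the paper's coded variables.

```python
# Sketch of fractional-factorial screening: a half-fraction 2^(4-1) design in
# which the fourth factor's levels are generated as the product D = A*B*C of the
# first three. Factor names are invented for illustration.
import itertools

factors = ["solvent_vol", "pH", "APDC_conc", "stir_rate"]
runs = []
for a, b, c in itertools.product([-1, 1], repeat=3):
    runs.append({"solvent_vol": a, "pH": b, "APDC_conc": c,
                 "stir_rate": a * b * c})   # generator D = ABC

print(len(runs))  # 8 runs instead of the 16 of a full 2^4 factorial
```

The design stays balanced (each factor column sums to zero), so main effects remain estimable at half the experimental cost, at the price of confounding them with higher-order interactions.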
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
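The allocation trade-off described above (longer counts detect more birds per point, but allow fewer points within a fixed budget) can be sketched with a toy model. The exponential detection form and every number below are assumptions for illustration, not the authors' model.

```python
# Toy allocation model: with total survey budget T minutes, travel time tau
# between points, and per-minute detection rate r, counting for t minutes at a
# point detects a fraction 1 - exp(-r*t) of birds present and permits
# n = T / (t + tau) points. We grid-search the t maximizing expected total
# detections. All parameter values are invented.
import math

def detections(t, T=600.0, tau=5.0, r=0.3):
    n_points = T / (t + tau)
    frac_detected = 1.0 - math.exp(-r * t)
    return n_points * frac_detected

ts = [0.1 * k for k in range(1, 301)]       # candidate durations 0.1..30 min
t_opt = max(ts, key=detections)
print(round(t_opt, 1))
```

Under these made-up parameters the optimum is an intermediate duration: very short counts waste travel time, very long counts saturate detection. Changing the goal (e.g., minimizing variance of a trend estimator rather than maximizing detections) would change the objective function and hence the optimal time, which is the paper's central point.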
Saeidi, Iman; Barfi, Behruz; Asghari, Alireza; Gharahbagh, Abdorreza Alavi; Barfi, Azadeh; Peyrovi, Moazameh; Afsharzadeh, Maryam; Hojatinasab, Mostafa
2015-10-01
A novel and environmentally friendly ionic-liquid-based hollow-fiber liquid-phase microextraction method combined with a hybrid artificial neural network (ANN)-genetic algorithm (GA) strategy was developed for the speciation of ferrous and ferric ions as model analytes. Different parameters such as type and volume of extraction solvent, amount of chelating agent, volume and pH of sample, ionic strength, stirring rate, and extraction time were investigated. The most influential parameters were first examined using a one-variable-at-a-time design, and the results obtained were used to construct an independent model for each parameter. The models were then applied to obtain the best and smallest set of candidate points as inputs for the ANN process. The maximum extraction efficiencies were achieved after 9 min using 22.0 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate ([C6MIM][PF6]) as the acceptor phase and 10 mL of sample at pH = 7.0 containing 64.0 μg L(-1) of benzohydroxamic acid (BHA) as the complexing agent, after the GA process. Once optimized, the analytical performance of the method was studied in terms of linearity (1.3-316 μg L(-1), R(2) = 0.999), accuracy (recovery = 90.1-92.3%), and precision (relative standard deviation (RSD) < 3.1%). Finally, the method was successfully applied to speciate the iron species in environmental and wastewater samples.
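The GA optimization step of such a hybrid strategy can be sketched as follows. The quadratic "surrogate" below merely stands in for a trained ANN response model, and the GA operators, bounds, and parameter values are all illustrative assumptions, not the authors' settings.

```python
# Minimal genetic-algorithm sketch of surrogate optimization: the GA searches
# the input space of a fitted response model for the settings maximizing the
# predicted extraction efficiency. The quadratic surrogate stands in for the
# trained ANN; pH/volume bounds and GA parameters are invented.
import random

random.seed(1)

def surrogate(x):
    """Stand-in for the ANN-predicted extraction efficiency (peak at pH 7, 22 uL)."""
    ph, vol = x
    return 100.0 - (ph - 7.0) ** 2 - 0.05 * (vol - 22.0) ** 2

def ga(fitness, bounds, pop_size=30, gens=60, mut=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # arithmetic crossover
            child = [min(max(g + random.gauss(0, mut), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga(surrogate, bounds=[(2.0, 12.0), (5.0, 40.0)])
print(best)
```

The appeal of the surrogate approach is that each fitness evaluation is a model prediction rather than a laboratory extraction, so the GA can afford thousands of evaluations while the bench work stays limited to the points used to train the model.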
Es'haghi, Zarrin; Ebrahimi, Mahmoud; Hosseini, Mohammad-Saeid
2011-05-27
A novel design of solid phase microextraction fiber containing carbon-nanotube-reinforced sol-gel, protected by polypropylene hollow fiber (HF-SPME), was developed for preconcentration and determination of BTEX in environmental wastewater and human hair samples. The method was validated, and satisfactory results with high preconcentration factors were obtained. In the present study, an orthogonal array experimental design (OAD) procedure with an OA16 (4^4) matrix was applied to study the effect of four factors influencing the HF-SPME method efficiency: stirring speed, volume of adsorption organic solvent, and extraction and desorption times of the sample solution; the effect of each factor was estimated using individual contributions as response functions in the screening process. Analysis of variance (ANOVA) was employed for estimating the main significant factors and their percentage contributions to extraction. Calibration curves were plotted using ten spiking levels of BTEX in the concentration range of 0.02-30,000 ng/mL, with correlation coefficients (r) of 0.989-0.9991 for the analytes. Under the optimized extraction conditions, the method showed good linearity (0.3-20,000 ng/L), repeatability, low limits of detection (0.49-0.7 ng/L) and excellent preconcentration factors (185-1872). The estimated best conditions were then applied for the analysis of BTEX compounds in real samples.
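The "percentage contribution" analysis used with orthogonal-array designs can be sketched numerically: for each factor, the between-level sum of squares is divided by the total sum of squares. The tiny L4 (2^3) array and response values below are invented for illustration and are much smaller than the paper's OA16 (4^4) matrix.

```python
# ANOVA-style percentage contributions from an orthogonal-array experiment:
# contribution_f = 100 * SS_factor / SS_total, where SS_factor is the
# between-level sum of squares of the responses grouped by that factor's level.
# Array and responses are synthetic.
import numpy as np

oa = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])                 # L4 orthogonal array, 3 two-level factors
y = np.array([82.0, 75.0, 64.0, 59.0])     # invented responses (e.g., recovery %)

ss_total = np.sum((y - y.mean()) ** 2)
contrib = []
for f in range(oa.shape[1]):
    ss_f = sum(len(y[oa[:, f] == lvl]) * (y[oa[:, f] == lvl].mean() - y.mean()) ** 2
               for lvl in np.unique(oa[:, f]))
    contrib.append(100.0 * ss_f / ss_total)

print([round(c, 1) for c in contrib])
```

Because the array is orthogonal and saturated, the factor sums of squares add up exactly to the total, so the contributions sum to 100% and rank the factors directly by influence.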
Luo, Qian; Chen, Xichao; Wei, Zi; Xu, Xiong; Wang, Donghong; Wang, Zijian
2014-10-24
When iodide and natural organic matter are present in raw water, the formation of iodo-trihalomethanes (Iodo-THMs), haloacetonitriles (HANs), and halonitromethanes (HNMs) poses a potential health risk because these species have been reported to be more toxic than their brominated or chlorinated analogs. In this work, a method for the simultaneous analysis of Iodo-THMs, HANs, and HNMs in drinking water samples, with a single cleanup and chromatographic run, was proposed. The DVB/CAR/PDMS fiber was found to be the most suitable for all target compounds, although 75 μm CAR/PDMS was better for chlorinated HANs and 65 μm PDMS/DVB for brominated HNMs. After optimization of the SPME parameters (DVB/CAR/PDMS fiber, extraction time of 30 min at 40°C, addition of 40% w/v of salt, (NH4)2SO4 as a quenching agent, and desorption time of 3 min at 170°C), detection limits ranged from 1 to 50 ng/L for the different analogs, with a linear range of at least two orders of magnitude. Good recoveries (78.6-104.7%) were obtained for spiked samples of a wide range of treated drinking waters, demonstrating that the method is applicable to real drinking water samples. Matrix effects were negligible for treated water samples with total organic carbon concentrations below 2.9 mg/L. A survey of two drinking water treatment plants showed that the highest proportions of Iodo-THMs, HANs, and HNMs occurred in treated water, with concentrations of the 13 detected compounds ranging between the ng/L and μg/L levels.
40 CFR 60.4385 - How are excess emissions and monitoring downtime defined for SO2?
Code of Federal Regulations, 2011 CFR
2011-07-01
... emissions and monitoring downtime are defined as follows: (a) For samples of gaseous fuel and for oil samples obtained using daily sampling, flow proportional sampling, or sampling from the unit's storage... demonstrates compliance with the sulfur limit. (b) If the option to sample each delivery of fuel oil has...
40 CFR 60.4385 - How are excess emissions and monitoring downtime defined for SO2?
Code of Federal Regulations, 2012 CFR
2012-07-01
... emissions and monitoring downtime are defined as follows: (a) For samples of gaseous fuel and for oil samples obtained using daily sampling, flow proportional sampling, or sampling from the unit's storage... demonstrates compliance with the sulfur limit. (b) If the option to sample each delivery of fuel oil has...
Defining meridians: a modern basis of understanding.
Longhurst, John C
2010-06-01
Acupuncture, one of the primary methods of treatment in traditional Oriental medicine, is based on a system of meridians. Along the meridians lie acupuncture points or acupoints, which are stimulated by needling, pressure or heat to resolve a clinical problem. A number of methods have been used to identify meridians and to explain them anatomically. Thus, tendinomuscular structures, primo-vessels (Bonghan ducts), regions of increased temperature and low skin resistance have been suggested to represent meridians or as methods to identify them. However, none of these methods have met the criteria for a meridian, an entity that, when stimulated by acupuncture can result in clinical improvement. More recently, modern physiologists have put forward the "neural hypothesis" stating that the clinical influence of acupuncture is transmitted primarily through stimulation of sensory nerves that provide signals to the brain, which processes this information and then causes clinical changes associated with treatment. Although additional research is warranted to investigate the role of some of the structures identified, it seems clear that the peripheral and central nervous system can now be considered to be the most rational basis for defining meridians. The meridian maps and associated acupoints located along them are best viewed as road maps that can guide practitioners towards applying acupuncture to achieve optimal clinical results.
Optimal Sampling Strategies for Oceanic Applications
2008-09-30
altimetry. If the errors assumed by the Bluelink Ocean Data Assimilation System (BODAS; Oke et al. 2008a) are correct, the fields in the top and bottom rows... Oke, P. R., P. Sakov and E. Schulz, 2008b: A comparison of shelf observation platforms for... The Bluelink Ocean Data Assimilation System (BODAS). Ocean Modelling, 20, 46-70. [published, refereed] Oke, P. R., and P. Sakov, 2008: Representation
Optimal Sector Sampling for Drive Triage
2013-06-01
known files, which we call target data, that could help identify a drive holding evidence such as child pornography or malware. Triage is needed to sift through drives... situations where the user is looking for known data. One example is a law enforcement officer searching for evidence of child pornography from a large num
Optimal Sampling Strategies for Oceanic Applications
2007-09-30
qualitative assessment of the impact of withholding each data type in the Tasman Sea is shown in Figure 1. The circulation in the Tasman Sea is... paths overlayed for each OSE (columns 2-6) in the Tasman Sea for mid-January (top), mid-February (middle) and mid-March (bottom) of 2006 (adapted from... model run, or from an ensemble of model runs; or time series of observations (e.g., gridded sea level or sea surface temperature). An important
Optimal Sampling of a Chemical Hazard Area
2005-03-01
power generation and consumption, development of integrated biological and chemical detection systems, and fusing sensor data with mapping, imagery, and... Encyclopedia of Bioethics, 2545. 20. Wein, L., Craft, D., and E. Kaplan (2003). Emergency Response to an Anthrax Attack. National Academy of Sciences, 100, 4346
Optimal scaling in ductile fracture
NASA Astrophysics Data System (ADS)
Fokoua Djodom, Landry
This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation-theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity
Shaaban, Heba; Górecki, Tadeusz
2012-01-01
Elevated temperature in HPLC is a valuable tool for overcoming the increased column backpressure associated with small (sub-2 μm) packing particles, as it reduces the mobile-phase viscosity. In this study, a fast analytical method based on HPLC-UV was developed using a sub-2 μm column at elevated temperature for the simultaneous determination of nine sulphonamides. Owing to the lower viscosity of the mobile phase, the separation could be achieved in 3 min at 60°C for all analytes. The effects of temperature, organic modifier percentage and flow rate on the retention time were studied. The method developed was used for the determination of selected sulphonamides in surface and wastewater samples. Sample preparation was carried out by solid-phase extraction on Oasis HLB cartridges. The method was validated based on linearity, precision, accuracy, and detection and quantification limits. The recovery ranged from 70.6 to 96%, with standard deviations not higher than 4.7%, except for sulphanilamide. Limits of detection ranged from 1 to 10 μg/L after optimization of all analytical steps. This method offers the highest analytical speed among published HPLC-UV methods for the determination of sulphonamides in water.
Optimality for set-valued optimization in the sense of vector and set criteria.
Kong, Xiangyu; Yu, GuoLin; Liu, Wei
2017-01-01
The vector criterion and the set criterion are two approaches to defining solutions of set-valued optimization problems. In this paper, optimality conditions are established for solutions under both criteria. By using Studniarski derivatives, the necessary and sufficient optimality conditions are derived in the sense of vector and set optimization.
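In commonly used notation, the two criteria can be stated as follows. This is a generic formulation with a set-valued map F : X ⇉ Y and a closed convex pointed ordering cone C ⊆ Y; the paper's exact definitions may differ:

```latex
% Vector criterion: \bar{x} is a solution if some \bar{y} \in F(\bar{x})
% is a minimal point of the image set F(X):
\exists\, \bar{y} \in F(\bar{x}) :\quad
\big( F(X) - \bar{y} \big) \cap \big( -C \setminus \{0\} \big) = \emptyset .
% Set criterion: whole sets are compared via the lower set less relation
A \preceq^{l} B \ :\Longleftrightarrow\ B \subseteq A + C ,
% and \bar{x} is a solution if, for every x,
F(x) \preceq^{l} F(\bar{x}) \ \Longrightarrow\ F(\bar{x}) \preceq^{l} F(x) .
```

The vector criterion judges a point by a single best element of its image, while the set criterion compares the images as wholes, which is why the two can disagree about which points are optimal.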
DEFINED CONTRIBUTION PLANS, DEFINED BENEFIT PLANS, AND THE ACCUMULATION OF RETIREMENT WEALTH.
Poterba, James; Rauh, Joshua; Venti, Steven; Wise, David
2007-11-01
The private pension structure in the United States, once dominated by defined benefit (DB) plans, is currently divided between defined contribution (DC) and DB plans. Wealth accumulation in DC plans depends on the participant's contribution behavior and on financial market returns, while accumulation in DB plans is sensitive to a participant's labor market experience and to plan parameters. This paper simulates the distribution of retirement wealth under representative DB and DC plans. It uses data from the Health and Retirement Study (HRS) to explore how asset returns, earnings histories, and retirement plan characteristics contribute to the variation in retirement wealth outcomes. We simulate DC plan accumulation by randomly assigning individuals a share of wages that they and their employer contribute to the plan. We consider several possible asset allocation strategies, with asset returns drawn from the historical return distribution. Our DB plan simulations draw earnings histories from the HRS, and randomly assign each individual a pension plan drawn from a sample of large private and public defined benefit plans. The simulations yield distributions of both DC and DB wealth at retirement. Average retirement wealth accruals under current DC plans exceed average accruals under private sector DB plans, although DC plans are also more likely to generate very low retirement wealth outcomes. The comparison of current DC plans with more generous public sector DB plans is less definitive, because public sector DB plans are more generous on average than their private sector counterparts.
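The DC half of such a simulation can be sketched in a few lines (illustrative parameters only; the paper draws contribution rates and earnings histories from HRS data and returns from the historical distribution, not from the assumed normal used here):

```python
import random

def simulate_dc_wealth(wage, contrib_rate, years, mean_ret, sd_ret, seed=0):
    """Accumulate a DC balance: each year the balance earns a random market
    return, then receives the combined employee/employer contribution."""
    rng = random.Random(seed)
    balance = 0.0
    for _ in range(years):
        balance *= 1.0 + rng.gauss(mean_ret, sd_ret)
        balance += wage * contrib_rate
    return balance

# distribution of retirement-wealth outcomes across 1000 simulated careers
outcomes = sorted(simulate_dc_wealth(50_000, 0.09, 30, 0.06, 0.17, seed=s)
                  for s in range(1000))
median_wealth = outcomes[len(outcomes) // 2]
```

Repeating the simulation across many seeds yields the kind of wealth distribution the paper compares against its DB-plan simulations, including the low tail that makes DC plans riskier.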
Eutectic superalloys by edge-defined, film-fed growth
NASA Technical Reports Server (NTRS)
Hurley, G. F.
1975-01-01
The feasibility of producing directionally solidified eutectic alloy composites by edge-defined, film-fed growth (EFG) was investigated. The three eutectic alloys studied were gamma + delta, gamma/gamma prime + delta, and a Co-base TaC alloy containing Cr and Ni. Investigations into the compatibility and wettability of these metals with various carbides, borides, nitrides, and oxides disclosed that compounds with the largest (negative) heats of formation were the most stable but the poorest wetting. Nitrides and carbides had suitable stability and low contact angles, but capillary rise was observed only with carbides. Oxides would not give capillary rise but would probably fulfill the other wetting requirements of EFG. Tantalum carbide was selected for most of the experimental portion of the program based on its spontaneous capillary rise and satisfactorily slow rate of degradation in the liquid metals. Samples of all three alloys were grown by EFG, with the major experimental effort restricted to the gamma + delta and gamma/gamma prime + delta alloys. In the standard, uncooled EFG apparatus, the thermal gradient was inferred from the growth speed and was 150 to 200 C/cm. This value may be compared to typical gradients of less than 100 C/cm normally achieved in a standard Bridgman-type apparatus. When a stream of helium was directed against the side of the bar during growth, the gradient improved to about 250 C/cm. In comparison, a theoretical gradient of 700 C/cm should be possible under ideal conditions, without the use of chills. Methods for optimizing the gradient in EFG are discussed, and should allow attainment of close to the theoretical value for a particular configuration.
Aircraft configuration optimization including optimized flight profiles
NASA Technical Reports Server (NTRS)
Mccullers, L. A.
1984-01-01
The Flight Optimization System (FLOPS) is an aircraft configuration optimization program developed for use in the conceptual design of new aircraft and in assessing the impact of advanced technology. The modular makeup of the program is illustrated. It contains modules for preliminary weights estimation, preliminary aerodynamics, detailed mission performance, takeoff and landing, and execution control. An optimization module is used to drive the overall design and to define optimized profiles in the mission performance analysis. Propulsion data, usually received from engine manufacturers, are used in both the mission performance and the takeoff and landing analyses. Although executed as a single in-core program, the modules are stored separately so that the user may select the appropriate modules (e.g., fighter weights versus transport weights) or leave out modules that are not needed.
Saha, Krishanu; Mei, Ying; Reisterer, Colin M; Pyzocha, Neena Kenton; Yang, Jing; Muffat, Julien; Davies, Martyn C; Alexander, Morgan R; Langer, Robert; Anderson, Daniel G; Jaenisch, Rudolf
2011-11-15
The current gold standard for the culture of human pluripotent stem cells requires the use of a feeder layer of cells. Here, we develop a spatially defined culture system based on UV/ozone radiation modification of typical cell culture plastics to define a favorable surface environment for human pluripotent stem cell culture. Chemical and geometrical optimization of the surfaces enables control of early cell aggregation from fully dissociated cells, as predicted from a numerical model of cell migration, and results in significant increases in cell growth of undifferentiated cells. These chemically defined xeno-free substrates generate more than three times the number of cells than feeder-containing substrates per surface area. Further, reprogramming and typical gene-targeting protocols can be readily performed on these engineered surfaces. These substrates provide an attractive cell culture platform for the production of clinically relevant factor-free reprogrammed cells from patient tissue samples and facilitate the definition of standardized scale-up friendly methods for disease modeling and cell therapeutic applications.
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection
Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi
2011-01-01
The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
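The integer-allocation idea can be sketched with a toy exhaustive search (all numbers hypothetical; the performance proxy sum(n_i·d_i²) stands in for the paper's misclassification constraint under its correlated Gaussian model):

```python
from itertools import product

def optimal_allocation(energy, disc, perf_target, max_per_sensor):
    """Exhaustive integer search: choose sample counts n_i per sensor to
    minimize transmission energy sum(n_i * e_i) subject to a performance
    proxy sum(n_i * d_i**2) >= perf_target, where d_i measures how well
    sensor i discriminates between activity levels."""
    best = None
    for n in product(range(max_per_sensor + 1), repeat=len(energy)):
        if sum(ni * di ** 2 for ni, di in zip(n, disc)) < perf_target:
            continue
        cost = sum(ni * ei for ni, ei in zip(n, energy))
        if best is None or cost < best[0]:
            best = (cost, n)
    return best

# two hypothetical sensors: the second discriminates better but costs more
cost, alloc = optimal_allocation(energy=[1.0, 2.5], disc=[0.6, 1.2],
                                 perf_target=10.0, max_per_sensor=30)
```

Even in this toy case the optimum concentrates samples on the better-discriminating sensor, mirroring the paper's finding; the exhaustive search is exactly the expensive step its continuous relaxation avoids.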
Respiratory motion sampling in 4DCT reconstruction for radiotherapy
Chi Yuwei; Liang Jian; Qin Xu; Yan Di
2012-04-15
Purpose: Phase-based and amplitude-based sorting techniques are commonly used in four-dimensional CT (4DCT) reconstruction. However, the effect of these sorting techniques on 4D dose calculation has not been explored. In this study, the authors investigated a candidate 4DCT sorting technique by comparing its 4D dose calculation accuracy with that of phase-based and amplitude-based sorting techniques. Method: An optimization model was formed using the organ motion probability density function (PDF) in the 4D dose convolution. The objective function for optimization was defined as the maximum difference between the expected 4D dose in the organ of interest and the 4D dose calculated using a 4DCT sorted by a candidate sampling method. Sorting samples, as optimization variables, were selected on the respiratory motion PDF assessed during the CT scanning. Breathing curves obtained from patients' 4DCT scans, as well as 3D dose distributions from treatment planning, were used in the study. Given the objective function, a residual error analysis was performed, and k-means clustering was found to be an effective sampling scheme that improves the 4D dose calculation accuracy and is independent of the patient-specific dose distribution. Results: Patient data analysis demonstrated that k-means sampling was superior to the conventional phase-based and amplitude-based sorting and comparable to the optimal sampling results. For phase-based sorting, the residual error in 4D dose calculations may not be reduced to an acceptable accuracy beyond a certain number of phases, while for amplitude-based sorting, k-means sampling, and the optimal sampling, the residual error decreased rapidly as the number of 4DCT phases increased to 6. Conclusion: An innovative phase sorting method (the k-means method) is presented in this study. The method depends only on the tumor motion PDF. It could provide a way to refine phase sorting in 4DCT reconstruction and is effective for
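The clustering step can be illustrated with a minimal 1-D k-means (Lloyd's algorithm) over breathing-amplitude samples (toy sinusoidal breathing trace assumed; the paper clusters samples drawn on the measured respiratory-motion PDF):

```python
import math

def kmeans_1d(samples, k, iters=100):
    """Lloyd's algorithm on scalar amplitude samples: alternate assigning
    each sample to its nearest center and moving centers to cluster means."""
    xs = sorted(samples)
    # initialize centers at evenly spaced quantiles of the data
    centers = [xs[(2 * i + 1) * len(xs) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda j: (x - centers[j]) ** 2)
            clusters[nearest].append(x)
        new = [sum(c) / len(c) if c else centers[j]
               for j, c in enumerate(clusters)]
        if new == centers:   # converged
            break
        centers = new
    return sorted(centers)

# toy breathing amplitude a(t) = 1 - cos(2*pi*t/T), densely sampled over one cycle
amps = [1 - math.cos(2 * math.pi * (0.01 * i) / 4.0) for i in range(400)]
phases = kmeans_1d(amps, k=6)
```

The returned centers play the role of the six representative amplitude bins into which the 4DCT frames would be sorted.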
Multiobjective Collaborative Optimization of Systems of Systems
2005-06-01
...field of economics where the best decision simultaneously optimizes several criteria. An economist, Vilfredo Pareto, in 1906 described the best... represent the Pareto-optimal set, named after Vilfredo Pareto. The Pareto-optimal set also defines a curve, called the Pareto-Optimal Frontier (POF)...
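The Pareto-optimal set this snippet refers to is easy to illustrate; a sketch for minimizing two competing objectives over a finite set of made-up design points:

```python
def pareto_front(points):
    """Return the non-dominated subset, assuming every objective is minimized.
    A point q dominates p if q is no worse in every objective and strictly
    better in at least one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical designs scored on two competing objectives, e.g. (cost, weight)
designs = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5), (9, 1)]
front = pareto_front(designs)
```

The surviving points trace the Pareto-Optimal Frontier: along it, improving one objective necessarily worsens the other.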
Allen, P.V.; Nimberger, M.; Ward, R.L.
1991-12-24
This patent describes a fluid sampling pump for withdrawing pressurized sample fluid from a flow line and for pumping a preselected quantity of sample fluid with each pump driving stroke from the pump to a sample vessel. The sampling pump includes a pump body defining a pump bore therein having a central axis; a piston slideably moveable within the pump bore and having a fluid inlet end and an opposing operator end; a fluid sample inlet port open to sample fluid in the flow line; a fluid sample outlet port for transmitting fluid from the pump bore to the sample vessel; a line pressure port in fluid communication with pressurized sample fluid in the flow line; an inlet valve for selectively controlling sample fluid flow from the flow line through the fluid sample inlet port; an operator unit for periodically reciprocating the piston within the pump bore; and a controller for regulating the stroke of the piston within the pump bore, and thereby the quantity of fluid pumped with each pump driving stroke. It comprises a balanced check valve seat; a balanced check valve seal; a compression member; and a central plunger.
Defining moments in leadership character development.
Bleich, Michael R
2015-06-01
Critical moments in life define one's character and clarify true values. Reflective leadership is espoused as an important practice for transformational leaders. Professional development educators can help surface and explore defining moments, strengthen leadership behavior with defining moments as a catalyst for change, and create safe spaces for leaders to expand their leadership capacity.
Avron, J E; Elgart, A; Graf, G M; Sadun, L
2001-12-03
We study adiabatic quantum pumps on time scales that are short relative to the cycle of the pump. In this regime the pump is characterized by the matrix of energy shift which we introduce as the dual to Wigner's time delay. The energy shift determines the charge transport, the dissipation, the noise, and the entropy production. We prove a general lower bound on dissipation in a quantum channel and define optimal pumps as those that saturate the bound. We give a geometric characterization of optimal pumps and show that they are noiseless and transport integral charge in a cycle. Finally we discuss an example of an optimal pump related to the Hall effect.
Contingency contractor optimization.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Detry, Richard Joseph; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Nanco, Alan Stewart; Nozick, Linda Karen
2013-06-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilian, and contractor personnel that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower-use rules. An additional feature allows the model to account for the variability of the Total Force Mix when there is uncertainty in mission requirements.
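A toy version of such a force-mix optimization (all numbers hypothetical; the actual CCO model is far richer, with multiple requirements, manpower-use rules, and uncertain demand):

```python
def cheapest_mix(requirement, cost, caps):
    """Brute-force search over integer counts of (military, DoD civilian,
    contractor) personnel that meet a single mission requirement at minimum
    total cost while honoring a per-source cap on available personnel."""
    best = None
    for mil in range(caps[0] + 1):
        for civ in range(caps[1] + 1):
            for con in range(caps[2] + 1):
                if mil + civ + con < requirement:
                    continue  # mission requirement not met
                total = mil * cost[0] + civ * cost[1] + con * cost[2]
                if best is None or total < best[0]:
                    best = (total, (mil, civ, con))
    return best

# 8 positions required; civilians are cheapest but capped at 5
best_cost, mix = cheapest_mix(requirement=8, cost=(100, 90, 110),
                              caps=(10, 5, 10))
```

The optimizer exhausts the cheapest source first and fills the remainder from the next-cheapest, which is exactly the trade-off a cost-minimizing mixed-integer model formalizes at scale.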
Contingency contractor optimization.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Detry, Richard Joseph; Nanco, Alan Stewart; Nozick, Linda Karen
2013-10-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilian, and contractor personnel that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower-use rules. An additional feature allows the model to account for the variability of the Total Force Mix when there is uncertainty in mission requirements.
Sampling functions for geophysics
NASA Technical Reports Server (NTRS)
Giacaglia, G. E. O.; Lunquist, C. A.
1972-01-01
A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N + 1)² sampling points on a sphere is given, for which the (N + 1)² spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.
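The one-dimensional analogue the abstract alludes to (sampling functions playing, for sines and cosines, the role the spherical sampling functions play for spherical harmonics) is Whittaker-Shannon interpolation on unit-spaced samples:

```python
import math

def sinc(x):
    """Normalized sinc: the unit-spaced sampling function of information theory.
    It equals 1 at its own node and 0 at every other integer node."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t):
    """Whittaker-Shannon interpolation: a band-limited signal is a linear
    combination of shifted sampling functions weighted by its sample values."""
    return sum(s * sinc(t - n) for n, s in enumerate(samples))

samples = [0.0, 0.8, 0.3, -0.5, 0.1]
# because each sampling function vanishes at all other nodes,
# the interpolant reproduces the samples exactly at the nodes
value_at_node = reconstruct(samples, 2)
```

The spherical construction in the paper generalizes exactly this node property: each spherical sampling function is 1 at its own sampling point and 0 at the other (N + 1)² - 1 points.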
Ramsey, Charles A; Wagner, Claas
2015-01-01
The concept of Sample Quality Criteria (SQC) is the initial step in the scientific approach to representative sampling. It includes the establishment of sampling objectives, the Decision Unit (DU), and confidence. Once fully defined, these criteria serve as input, in addition to material properties, to the Theory of Sampling for developing a representative sampling protocol. The first component of the SQC poses these questions: What is the analyte(s) of concern? What is the concentration level of interest of the analyte(s)? How will inference(s) be made from the analytical data to the DU? The second component of the SQC establishes the DU, i.e., the scale at which decisions are to be made. On a large scale, a DU could be a ship or rail car; examples of small-scale DUs are individual beans, seeds, or kernels. A well-defined DU is critical because it defines the spatial and temporal boundaries of sample collection. SQC are not limited to a single DU; they can also include multiple DUs. The third SQC component, the confidence, establishes the desired probability that a correct inference (decision) can be made. The confidence level should typically correlate with the potential consequences of an incorrect decision (e.g., health or economic). The magnitude of combined errors in the sampling, sample processing, and analytical protocols determines the likelihood of an incorrect decision. Thus, controlling error to a greater extent increases the probability of a correct decision. The required confidence level directly affects the sampling effort and QC measures.
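One concrete way the confidence component drives sampling effort can be shown with the standard detection-probability formula (an illustration assuming independent random increments, not a calculation taken from Ramsey and Wagner):

```python
import math

def samples_needed(prevalence, confidence):
    """Smallest n such that P(at least one positive among n random
    increments) reaches the desired confidence, when each increment is
    positive independently with probability `prevalence`:
    1 - (1 - prevalence)**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

# 95% confidence of catching a contaminant present in 1% of increments
n_95 = samples_needed(0.01, 0.95)
```

Raising the required confidence, or lowering the prevalence of interest, increases n sharply, which is the sampling-effort trade-off the third SQC component makes explicit.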
Hierarchical Bayesian Modeling, Estimation, and Sampling for Multigroup Shape Analysis
Yu, Yen-Yun; Fletcher, P. Thomas; Awate, Suyash P.
2016-01-01
This paper proposes a novel method for the analysis of anatomical shapes present in biomedical image data. Motivated by the natural organization of population data into multiple groups, this paper presents a novel hierarchical generative statistical model on shapes. The proposed method represents shapes using pointsets and defines a joint distribution on the population’s (i) shape variables and (ii) object-boundary data. The proposed method solves for optimal (i) point locations, (ii) correspondences, and (iii) model-parameter values as a single optimization problem. The optimization uses expectation maximization relying on a novel Markov-chain Monte-Carlo algorithm for sampling in Kendall shape space. Results on clinical brain images demonstrate advantages over the state of the art. PMID:25320776
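The MCMC ingredient can be illustrated with a generic random-walk Metropolis sampler in one dimension (only the shared skeleton; the paper's sampler operates in Kendall shape space, which requires specialized proposal moves):

```python
import math
import random

def metropolis(logp, x0, steps, scale, seed=0):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, p(candidate)/p(current)), otherwise stay put."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    chain = []
    for _ in range(steps):
        cand = x + rng.gauss(0.0, scale)
        lp_cand = logp(cand)
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

# target: a standard normal log-density
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, 1.0, seed=1)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

In the paper, samples like these feed the E-step of expectation maximization, replacing an intractable expectation over shape variables with a Monte Carlo average.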
A Mars Sample Return Sample Handling System
NASA Technical Reports Server (NTRS)
Wilson, David; Stroker, Carol
2013-01-01
We present a sample handling system, a subsystem of the proposed Dragon landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of rock cores, subsurface drilled rock and ice cuttings, pebble-sized rocks, and soil scoops. The sample collection, storage, retrieval, and packaging assumptions and concepts in this study are applicable to NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on past life, climate change, water history, age dating, understanding Mars interior evolution [3], and human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluid-altered rocks, unaltered igneous rocks, regolith, and atmosphere samples. Samples could include drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities and will require, for Earth-based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups, each holding a sample from a specific location. We considered two sample cup sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and (2) a larger cup for 100 mm rock cores [4] and pebble-sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping the samples frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference-fitted, heat-activated memory
Optimality Functions and Lopsided Convergence
2015-03-16
Problems involving functions defined in terms of integrals or optimization problems (as the maximization in Example 3), functions defined on infinite... optimization methods in finite time. The key technical challenge associated with the above scheme is to establish (weak) consistency. In the next... Theorem 4.3. In view of this result, it is clear that (weak) consistency will be ensured by epi-convergence of the approximating objective functions and
Oscillator metrology with software defined radio.
Sherman, Jeff A; Jördens, Robert
2016-05-01
Analog electrical elements such as mixers, filters, transfer oscillators, isolating buffers, dividers, and even transmission lines contribute technical noise and unwanted environmental coupling in time and frequency measurements. Software defined radio (SDR) techniques replace many of these analog components with digital signal processing (DSP) on rapidly sampled signals. We demonstrate that, generically, commercially available multi-channel SDRs are capable of time and frequency metrology, outperforming purpose-built devices by as much as an order of magnitude. For example, for signals at 10 MHz and 6 GHz, we observe SDR time deviation noise floors of about 20 fs and 1 fs, respectively, in under 10 ms of averaging. Examining the other complex signal component, we find a relative amplitude measurement instability of 3 × 10⁻⁷ at 5 MHz. We discuss the scalability of an SDR-based system for simultaneous measurement of many clocks. SDR's frequency agility allows for comparison of oscillators at widely different frequencies. We demonstrate a novel and extreme example with optical clock frequencies differing by many terahertz: using a femtosecond-laser frequency comb and SDR, we show femtosecond-level time comparisons of ultra-stable lasers with zero measurement dead-time.
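The core DSP step that replaces an analog mixer and filter, digital downconversion of a sampled tone followed by coherent averaging, can be sketched as follows (toy tone and sample rate assumed, far below a real SDR's):

```python
import cmath
import math

def measure_phase(samples, f, fs):
    """Digital downconversion: multiply the sampled signal by a numerical
    local oscillator at frequency f, then coherently average. The average
    acts as a lowpass filter, leaving a phasor whose angle is the
    oscillator's phase relative to the local oscillator."""
    acc = sum(s * cmath.exp(-2j * math.pi * f * n / fs)
              for n, s in enumerate(samples))
    return cmath.phase(acc)

# toy tone: 100 Hz sampled at 1 kHz with a 0.7 rad phase offset
fs, f, phi = 1000.0, 100.0, 0.7
samples = [math.cos(2 * math.pi * f * n / fs + phi) for n in range(1000)]
est = measure_phase(samples, f, fs)
```

Tracking this phase over time between two channels is, in essence, how an SDR measures time deviation between oscillators without the analog components listed above.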
NASA Astrophysics Data System (ADS)
Gorbatenko, A. A.; Revina, E. I.
2015-10-01
The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references.
Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, Brian
2013-01-01
The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
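A minimal PSO of the kind described, sketched on a convex test function (standard inertia, cognitive, and social weights assumed; a surface-navigation version would instead score candidate waypoint sequences with traversal time and obstacle penalties):

```python
import random

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer: each velocity blends inertia, a
    pull toward the particle's own best position, and a pull toward the
    swarm's best position."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# sphere function: minimum at the origin
best, best_val = pso(lambda p: sum(x * x for x in p), dim=2)
```

Because the swarm needs only function evaluations, the same loop can be re-tasked mid-run against a changed cost surface, which is what makes PSO attractive for the non-stationary terrain costs described above.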
Pemberton, Bradley E.; May, Christopher P.; Rossabi, Joseph; Riha, Brian D.; Nichols, Ralph L.
1999-01-01
A sampling port is provided which has threaded ends for incorporating the port into a length of subsurface pipe. The port defines an internal receptacle which is in communication with subsurface fluids through a series of fine filtering slits. The receptacle is in further communication, through a bore, with a fitting carrying a length of tubing through which samples are transported to the surface. Each port further defines an additional bore through which tubing, cables, or similar components of adjacent ports may pass.
Pemberton, Bradley E.; May, Christopher P.; Rossabi, Joseph; Riha, Brian D.; Nichols, Ralph L.
1998-07-07
A sampling port is provided which has threaded ends for incorporating the port into a length of subsurface pipe. The port defines an internal receptacle which is in communication with subsurface fluids through a series of fine filtering slits. The receptacle is in further communication, through a bore, with a fitting carrying a length of tubing through which samples are transported to the surface. Each port further defines an additional bore through which tubing, cables, or similar components of adjacent ports may pass.
Lunar Sample Quarantine & Sample Curation
NASA Technical Reports Server (NTRS)
Allton, Judith H.
2000-01-01
The main goal of this presentation is to discuss some of the responsibilities of the lunar sample quarantine project: flying the mission safely and on schedule, protecting the Earth from biohazards, and preserving the scientific integrity of the samples.
Multidisciplinary Optimization Methods for Preliminary Design
NASA Technical Reports Server (NTRS)
Korte, J. J.; Weston, R. P.; Zang, T. A.
1997-01-01
An overview of multidisciplinary optimization (MDO) methodology and two applications of this methodology to the preliminary design phase are presented. These applications are being undertaken to improve, develop, validate and demonstrate MDO methods. Each is presented to illustrate different aspects of this methodology. The first application is an MDO preliminary design problem for defining the geometry and structure of an aerospike nozzle of a linear aerospike rocket engine. The second application demonstrates the use of the Framework for Interdisciplinary Design Optimization (FIDO), which is a computational environment system, by solving a preliminary design problem for a High-Speed Civil Transport (HSCT). The two sample problems illustrate the advantages to performing preliminary design with an MDO process.
Optimal Inputs for System Identification.
1995-09-01
The derivation of the power spectral density of the optimal input for system identification is addressed in this research. Optimality is defined in... identification potential of general System Identification algorithms, a new and efficient System Identification algorithm that employs Iterated Weighted Least...
Adolph, Karen E.; Robinson, Scott R.
2011-01-01
Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of the enterprise. This article discusses how to sample development in order to accurately discern the shape of developmental change. The ideal solution is daunting: to summarize behavior over 24-hour intervals and collect daily samples over the critical periods of change. We discuss the magnitude of errors due to undersampling, and the risks associated with oversampling. When daily sampling is not feasible, we offer suggestions for sampling methods that can provide preliminary reference points and provisional sketches of the general shape of a developmental trajectory. Denser sampling then can be applied strategically during periods of enhanced variability, inflections in the rate of developmental change, or in relation to key events or processes that may affect the course of change. Despite the challenges of dense repeated sampling, researchers must take seriously the problem of sampling on a developmental time scale if we are to know the true shape of developmental change. PMID:22140355
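A toy illustration of the undersampling risk the authors describe (hypothetical trajectory and numbers; a brief transient in a developmental measure is invisible at 30-day sampling intervals but obvious in daily samples):

```python
def trajectory(day):
    """Hypothetical skill score: slow linear growth plus a brief transient
    spike on days 31-34, e.g. a short-lived strategy during a transition."""
    return day / 100.0 + (1.0 if 31 <= day <= 34 else 0.0)

daily = [trajectory(d) for d in range(0, 101)]        # daily sampling
monthly = [trajectory(d) for d in range(0, 101, 30)]  # 30-day intervals

peak_daily = max(daily)      # the dense design sees the transient
peak_monthly = max(monthly)  # the sparse design misses it entirely
```

The sparse design not only misses the peak but would support a spuriously smooth, monotonic picture of change, which is exactly the validity threat the article raises about widely spaced sampling intervals.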
Taking Stock of Unrealistic Optimism.
Shepperd, James A; Klein, William M P; Waters, Erika A; Weinstein, Neil D
2013-07-01
Researchers have used terms such as unrealistic optimism and optimistic bias to refer to concepts that are similar but not synonymous. Drawing from three decades of research, we critically discuss how researchers define unrealistic optimism and we identify four types that reflect different measurement approaches: unrealistic absolute optimism at the individual and group level and unrealistic comparative optimism at the individual and group level. In addition, we discuss methodological criticisms leveled against research on unrealistic optimism and note that the criticisms are primarily relevant to only one type-the group form of unrealistic comparative optimism. We further clarify how the criticisms are not nearly as problematic even for unrealistic comparative optimism as they might seem. Finally, we note boundary conditions on the different types of unrealistic optimism and reflect on five broad questions that deserve further attention.