Science.gov

Sample records for adaptive sampling approach

  1. A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification

    SciTech Connect

    Li, Weixuan; Zhang, Dongxiao; Lin, Guang

    2015-02-25

    A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solution of history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could take a complex form, such as a multimodal one, which violates the Gaussian assumption required by most commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may incur unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple yet flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues, in particular the multimodal posterior of the history matching problem.
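    The core loop described above can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: it assumes a cheap toy forward model, a flat prior with a Gaussian likelihood, scikit-learn's Gaussian process regressor as the surrogate, and a two-component Gaussian mixture fitted to importance-weighted resamples as the proposal; the refinement rule (drawing a few new design points from the mixture each iteration) is likewise an illustrative assumption.

```python
# Sketch of surrogate-based adaptive sampling for a Bayesian inverse problem.
# Assumptions (not from the paper): a cheap toy forward model, a flat prior,
# a Gaussian likelihood, scikit-learn's GP regressor as the surrogate, and a
# Gaussian mixture fitted to importance-weighted resamples as the proposal.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def forward(theta):
    # Toy nonlinear "reservoir model"; the square makes the posterior bimodal.
    return theta ** 2

d_obs, sigma = 4.0, 0.5                           # observation and noise level

def log_post(theta, d_pred):
    # Flat prior on [-5, 5] plus a Gaussian likelihood.
    log_prior = np.where(np.abs(theta) <= 5.0, 0.0, -np.inf)
    return log_prior - 0.5 * ((d_pred - d_obs) / sigma) ** 2

# Initial design: a handful of expensive forward-model evaluations.
X = rng.uniform(-5, 5, size=12)
Y = forward(X)

for it in range(5):                               # refine / re-sample loop
    gp = GaussianProcessRegressor().fit(X[:, None], Y)
    # Importance sampling on the surrogate, then a Gaussian-mixture proposal
    # fitted to the resampled particles.
    cand = rng.uniform(-5, 5, size=4000)
    w = np.exp(log_post(cand, gp.predict(cand[:, None])))
    w /= w.sum()
    particles = rng.choice(cand, size=1000, p=w)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(particles[:, None])
    # Refine the surrogate where the posterior mass lives: draw a few new
    # design points from the mixture and run the true model there.
    X_new = gmm.sample(4)[0].ravel()
    X = np.concatenate([X, X_new])
    Y = np.concatenate([Y, forward(X_new)])

print("posterior modes near:", np.round(np.sort(gmm.means_.ravel()), 2))
```

    With this toy model the posterior is bimodal (theta ≈ ±2), which is exactly the regime the Gaussian-mixture proposal is intended to handle.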

  2. Digital adaptive sampling.

    NASA Technical Reports Server (NTRS)

    Breazeale, G. J.; Jones, L. E.

    1971-01-01

    Discussion of digital adaptive sampling, which is consistently better than fixed sampling in noise-free cases. Adaptive sampling is shown to be feasible and, it is considered, should be studied further. It should be noted that adaptive sampling is a class of variable rate sampling in which the variability depends on system signals. Digital rather than analog laws should be studied, because cases can arise in which the analog signals are not even available. An extremely important problem is implementation.

  3. Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach

    PubMed Central

    Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei

    2016-01-01

    Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need to release series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
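    A minimal sketch of the distance-based release idea is given below, assuming a toy stream of histogram snapshots, the standard Laplace mechanism, and a crude multiplicative feedback rule standing in for DSAT's controller; the privacy-budget accounting for the distance test itself is omitted, so this illustrates the control flow rather than the published DSFT/DSAT algorithms.

```python
# Sketch of distance-based adaptive release of differentially private
# histograms over a stream of dataset snapshots. The feedback rule and the
# budget accounting below are illustrative assumptions, not DSFT/DSAT as
# published.
import numpy as np

rng = np.random.default_rng(1)
eps_per_release = 0.1            # privacy budget spent per released histogram
threshold = 5.0                  # initial L1-distance threshold (DSFT-style)
target_rate = 0.3                # desired release rate used by the feedback rule

def laplace_hist(counts, eps):
    # Laplace mechanism with sensitivity 1 per bin.
    return counts + rng.laplace(scale=1.0 / eps, size=counts.shape)

snapshots = [rng.poisson(lam=20, size=10) for _ in range(50)]   # toy stream
last_basis = snapshots[0]
released = [laplace_hist(last_basis.astype(float), eps_per_release)]

for t, snap in enumerate(snapshots[1:], start=1):
    if np.abs(snap - last_basis).sum() > threshold:   # snapshot changed enough
        released.append(laplace_hist(snap.astype(float), eps_per_release))
        last_basis = snap
    # DSAT-style feedback: raise the threshold when releasing too often,
    # lower it when releasing too rarely.
    rate = len(released) / (t + 1)
    threshold *= 1.1 if rate > target_rate else 0.9

print(f"released {len(released)} of {len(snapshots)} snapshots")
```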

  4. Adaptive Sampling Proxy Application

    2012-10-22

    ASPA is an implementation of an adaptive sampling algorithm [1-3], which is used to reduce the computational expense of computer simulations that couple disparate physical scales. The purpose of ASPA is to encapsulate the algorithms required for adaptive sampling independently from any specific application, so that alternative algorithms and programming models for exascale computers can be investigated more easily.

  5. Adaptive sampling for noisy problems

    SciTech Connect

    Cantu-Paz, E

    2004-03-26

    The usual approach to deal with noise present in many real-world optimization problems is to take an arbitrary number of samples of the objective function and use the sample average as an estimate of the true objective value. The number of samples is typically chosen arbitrarily and remains constant for the entire optimization process. This paper studies an adaptive sampling technique that varies the number of samples based on the uncertainty of deciding between two individuals. Experiments demonstrate the effect of adaptive sampling on the final solution quality reached by a genetic algorithm and the computational cost required to find the solution. The results suggest that the adaptive technique can effectively eliminate the need to set the sample size a priori, but in many cases it requires high computational costs.
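    One way to realize such an adaptive sample size is a sequential comparison that stops as soon as the difference between two sample means is statistically clear. The sketch below uses a generic confidence-interval stopping rule on a toy noisy fitness function; the specific rule, noise model, and thresholds are assumptions for illustration, not necessarily those studied in the paper.

```python
# Sketch: adaptively choose how many noisy fitness samples to take when
# comparing two individuals, stopping once the difference in sample means is
# statistically clear. The stopping rule is a generic stand-in.
import numpy as np

rng = np.random.default_rng(2)

def noisy_fitness(true_value, noise=1.0):
    return true_value + rng.normal(scale=noise)

def compare(f_a, f_b, min_n=3, max_n=200, z=1.96):
    a, b = [], []
    for _ in range(max_n):
        a.append(noisy_fitness(f_a))
        b.append(noisy_fitness(f_b))
        n = len(a)
        if n < min_n:
            continue
        diff = np.mean(a) - np.mean(b)
        # Standard error of the difference of the two sample means.
        se = np.sqrt(np.var(a, ddof=1) / n + np.var(b, ddof=1) / n)
        if abs(diff) > z * se:           # confident enough to decide
            return (diff > 0), n
    return (np.mean(a) > np.mean(b)), max_n   # budget exhausted

winner_is_a, samples_used = compare(f_a=1.0, f_b=0.7)
print(winner_is_a, samples_used)
```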

  6. A Predictive Approach to Nonparametric Inference for Adaptive Sequential Sampling of Psychophysical Experiments

    PubMed Central

    Benner, Philipp; Elze, Tobias

    2012-01-01

    We present a predictive account on adaptive sequential sampling of stimulus-response relations in psychophysical experiments. Our discussion applies to experimental situations with ordinal stimuli when there is only weak structural knowledge available such that parametric modeling is no option. By introducing a certain form of partial exchangeability, we successively develop a hierarchical Bayesian model based on a mixture of Pólya urn processes. Suitable utility measures permit us to optimize the overall experimental sampling process. We provide several measures that are either based on simple count statistics or more elaborate information theoretic quantities. The actual computation of information theoretic utilities often turns out to be infeasible. This is not the case with our sampling method, which relies on an efficient algorithm to compute exact solutions of our posterior predictions and utility measures. Finally, we demonstrate the advantages of our framework on a hypothetical sampling problem. PMID:22822269

  7. Adaptive Sampling approach to environmental site characterization at Joliet Army Ammunition Plant: Phase 2 demonstration

    SciTech Connect

    Bujewski, G.E.; Johnson, R.L.

    1996-04-01

    Adaptive sampling programs provide real opportunities to save considerable time and money when characterizing hazardous waste sites. This Strategic Environmental Research and Development Program (SERDP) project demonstrated two decision-support technologies, SitePlanner™ and Plume™, that can facilitate the design and deployment of an adaptive sampling program. A demonstration took place at Joliet Army Ammunition Plant (JAAP), and was unique in that it was tightly coupled with ongoing Army characterization work at the facility, with close scrutiny by both state and federal regulators. The demonstration was conducted in partnership with the Army Environmental Center's (AEC) Installation Restoration Program and AEC's Technology Development Program. AEC supported researchers from Tufts University who demonstrated innovative field analytical techniques for the analysis of TNT and DNT. SitePlanner™ is an object-oriented database specifically designed for site characterization that provides an effective way to compile, integrate, manage and display site characterization data as it is being generated. Plume™ uses a combination of Bayesian analysis and geostatistics to provide technical staff with the ability to quantitatively merge soft and hard information for an estimate of the extent of contamination. Plume™ provides an estimate of contamination extent, measures the uncertainty associated with the estimate, determines the value of additional sampling, and locates additional samples so that their value is maximized.

  8. Adaptive Sampling Designs.

    ERIC Educational Resources Information Center

    Flournoy, Nancy

    Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…

  9. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

    The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates which can result in very poor random sampling especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters, namely—without involving failure detectors—nodes passively monitor local protocol events using them as feedback for a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.

  10. Adaptive Sampling in Hierarchical Simulation

    SciTech Connect

    Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R

    2007-07-09

    We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.

  11. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: (a) Bayesian adaptive sampling for solving the biomass estimation problem; (b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; (c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; (d) a unique U.S. asset for science product validation and verification.

  12. Spatial adaptive sampling in multiscale simulation

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.

    2014-07-01

    In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈50×N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.

  13. The Limits to Adaptation; A Systems Approach

    EPA Science Inventory

    The Limits to Adaptation: A Systems Approach. The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...

  14. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.

  15. Adaptive Sampling for High Throughput Data Using Similarity Measures

    SciTech Connect

    Bulaevskaya, V.; Sales, A. P.

    2015-05-06

    The need for adaptive sampling arises in the context of high throughput data because the rates of data arrival are many orders of magnitude larger than the rates at which they can be analyzed. A very fast decision must therefore be made regarding the value of each incoming observation and its inclusion in the analysis. In this report we discuss one approach to adaptive sampling, based on the new data point’s similarity to the other data points being considered for inclusion. We present preliminary results for one real and one synthetic data set.
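    One plausible reading of such a similarity-based rule is sketched below: an incoming observation is retained only if its distance to everything already retained exceeds a threshold. The Euclidean distance, the fixed threshold, and the toy two-dimensional stream are assumptions; the report's actual similarity measure may differ.

```python
# Sketch of similarity-based adaptive sampling of a fast data stream: keep an
# incoming observation only if it is not too similar to what has already been
# retained. Euclidean distance and a fixed threshold are illustrative
# assumptions, not the report's specific similarity measure.
import numpy as np

rng = np.random.default_rng(3)
threshold = 1.0
kept = []

for _ in range(10_000):                       # simulated high-rate stream
    x = rng.normal(size=2)
    if not kept:
        kept.append(x)
        continue
    d = np.linalg.norm(np.asarray(kept) - x, axis=1).min()
    if d > threshold:                         # novel enough to analyze
        kept.append(x)

print(f"retained {len(kept)} of 10000 observations")
```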

  16. Adaptive importance sampling for network growth models

    PubMed Central

    Holmes, Susan P.

    2016-01-01

    Network Growth Models such as Preferential Attachment and Duplication/Divergence are popular generative models with which to study complex networks in biology, sociology, and computer science. However, analyzing them within the framework of model selection and statistical inference is often complicated and computationally difficult, particularly when comparing models that are not directly related or nested. In practice, ad hoc methods are often used with uncertain results. If possible, the use of standard likelihood-based statistical model selection techniques is desirable. With this in mind, we develop an Adaptive Importance Sampling algorithm for estimating likelihoods of Network Growth Models. We introduce the use of the classic Plackett-Luce model of rankings as a family of importance distributions. Updates to importance distributions are performed iteratively via the Cross-Entropy Method with an additional correction for degeneracy/over-fitting inspired by the Minimum Description Length principle. This correction can be applied to other estimation problems using the Cross-Entropy method for integration/approximate counting, and it provides an interpretation of Adaptive Importance Sampling as iterative model selection. Empirical results for the Preferential Attachment model are given, along with a comparison to an alternative established technique, Annealed Importance Sampling. PMID:27182098

  17. Feature Adaptive Sampling for Scanning Electron Microscopy

    PubMed Central

    Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp

    2016-01-01

    A new method for image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning. PMID:27150131
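    The two-pass idea can be illustrated on a synthetic image, as in the sketch below: a quick low-dose scan everywhere, a gradient-based mask of high-detail pixels, and a second high-dose pass restricted to that mask. The noise model (longer dwell approximated by averaging independent noisy reads), the gradient detector, and the thresholds are assumptions, not the published acquisition pipeline.

```python
# Sketch of the two-pass adaptive scan on a synthetic image: a quick,
# low-dose scan everywhere, then extra dwell time only where the first scan
# shows high detail. Detail detector and dose model are assumptions.
import numpy as np

rng = np.random.default_rng(4)

def acquire(truth, n_reads):
    # One pixel read = truth + Gaussian noise; dwell time ~ number of reads.
    reads = truth[None] + rng.normal(scale=0.2, size=(n_reads,) + truth.shape)
    return reads.mean(axis=0)

# Synthetic specimen: flat background with one sharp bright square.
truth = np.zeros((128, 128))
truth[40:80, 40:80] = 1.0

quick = acquire(truth, n_reads=1)                    # pass 1: low dose
gy, gx = np.gradient(quick)
detail = np.hypot(gx, gy) > 0.35                     # crude high-detail mask

final = quick.copy()
# Pass 2: high dose. For simplicity the sketch re-reads the whole frame and
# keeps only the masked pixels; real hardware would scan the mask only.
final[detail] = acquire(truth, n_reads=16)[detail]

print(f"extra dose spent on {detail.mean():.1%} of pixels")
```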

  18. Understanding the adaptive approach to thermal comfort

    SciTech Connect

    Humphreys, M.A.; Nicol, J.F.

    1998-10-01

    This paper explains the adaptive approach to thermal comfort, and an adaptive model for thermal comfort is presented. The model is an example of a complex adaptive system (Casti 1996) whose equilibria are determined by the restrictions acting upon it. People's adaptive actions are generally effective in securing comfort, which occurs at a wide variety of indoor temperatures. These comfort temperatures depend upon the circumstances in which people live, such as the climate and the heating or cooling regime. The temperatures may be estimated from the mean outdoor temperature and the availability of a heating or cooling plant. The evaluation of the parameters of the adaptive model requires cross-sectional surveys to establish current norms and sequential surveys (with and without intervention) to evaluate the rapidity of people's adaptive actions. Standards for thermal comfort will need revision in the light of the adaptive approach. Implications of the adaptive model for the HVAC industry are noted.

  19. An adaptive signal-processing approach to online adaptive tutoring.

    PubMed

    Bergeron, Bryan; Cline, Andrew

    2011-01-01

    Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.

  20. Acquiring case adaptation knowledge: A hybrid approach

    SciTech Connect

    Leake, D.B.; Kinley, A.; Wilson, D.

    1996-12-31

    The ability of case-based reasoning (CBR) systems to apply cases to novel situations depends on their case adaptation knowledge. However, endowing CBR systems with adequate adaptation knowledge has proven to be a very difficult task. This paper describes a hybrid method for performing case adaptation, using a combination of rule-based and case-based reasoning. It shows how this approach provides a framework for acquiring flexible adaptation knowledge from experiences with autonomous adaptation and suggests its potential as a basis for acquisition of adaptation knowledge from interactive user guidance. It also presents initial experimental results examining the benefits of the approach and comparing the relative contributions of case learning and adaptation learning to reasoning performance.

  1. Phobos Sample Return: Next Approach

    NASA Astrophysics Data System (ADS)

    Zelenyi, Lev; Martynov, Maxim; Zakharov, Alexander; Korablev, Oleg; Ivanov, Alexey; Karabadzak, George

    The Martian moons still remain a mystery after numerous studies by Mars-orbiting spacecraft. Their study covers three major topics related to (1) the Solar system in general (formation and evolution, origin of planetary satellites, origin and evolution of life); (2) small bodies (captured asteroids, remnants of Mars formation, or reaccreted Mars ejecta); and (3) Mars itself (formation and evolution of Mars; Mars ejecta at the satellites). As reviewed by Galimov [2010], most of the above questions require sample return from a Martian moon, while some (e.g., the characterization of organic matter) could also be answered by in situ experiments. There is also the possibility of obtaining a sample of Mars material by sampling Phobos: following Chappaz et al. [2012], a 200-g sample could contain 10⁻⁷ g of Mars surface material launched during the past 1 million years, or 5×10⁻⁵ g of Mars material launched during the past 10 million years, or 5×10¹⁰ individual particles from Mars, quantities suitable for accurate laboratory analyses. The study of Phobos has been of high priority in the Russian planetary research program for many years. The Phobos-88 mission consisted of two spacecraft (Phobos-1, Phobos-2) and aimed at an approach to Phobos to within 50 m and remote studies, as well as the release of small landers (long-lived DAS stations). This mission implemented its program only incompletely, returning information about the Martian environment and atmosphere. The next project, Phobos Sample Return (Phobos-Grunt), initially planned for the early 2000s, was delayed several times owing to budget difficulties; the spacecraft failed to leave NEO in 2011. The recovery of the science goals of this mission and the delivery of samples of Phobos to Earth remain of highest priority for the Russian scientific community. The next Phobos sample return mission, named Boomerang, was postponed following the ExoMars cooperation, but is considered next in the line of planetary exploration, suitable for launch around 2022.

  2. Flight Test Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration s Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  3. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors well, even when the available patient-specific data are sparse and of short span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189

  4. A Predictive Analysis Approach to Adaptive Testing.

    ERIC Educational Resources Information Center

    Kirisci, Levent; Hsu, Tse-Chi

    The predictive analysis approach to adaptive testing originated in the idea of statistical predictive analysis suggested by J. Aitchison and I.R. Dunsmore (1975). The adaptive testing model proposed is based on parameter-free predictive distribution. Aitchison and Dunsmore define statistical prediction analysis as the use of data obtained from an…

  5. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785

  6. Adaptive Sampling of Time Series During Remote Exploration

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
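    A minimal sketch of such a self-throttling sampler is given below: a Gaussian process fitted to the samples collected so far predicts the next value, and the sampling interval shrinks when the new observation is surprising relative to the predictive uncertainty, otherwise relaxing back toward a power-saving maximum. The fixed stationary kernel, the toy signal with a single burst, and the step-adjustment rule are illustrative assumptions (the paper itself emphasizes nonstationary covariances).

```python
# Sketch of self-throttling time-series sampling: a Gaussian process fitted
# to past data predicts the next value; if the new observation is surprising
# relative to the predictive uncertainty the sampling interval shrinks,
# otherwise it grows back toward a power-saving maximum.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def signal(t):
    # Quiet baseline plus one brief "anomalous" burst around t = 60.
    return np.sin(0.1 * t) + 3.0 * np.exp(-0.5 * ((t - 60.0) / 2.0) ** 2)

times, values = [0.0], [signal(0.0)]
dt, dt_min, dt_max = 4.0, 0.5, 8.0

while times[-1] < 100.0:
    gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.05),
                                  optimizer=None)   # fixed toy hyperparameters
    gp.fit(np.asarray(times)[:, None], values)
    t_next = times[-1] + dt
    mean, std = gp.predict(np.array([[t_next]]), return_std=True)
    y = signal(t_next)
    surprise = abs(y - mean[0]) / (std[0] + 1e-9)    # standardized residual
    dt = dt_min if surprise > 2.0 else min(dt * 1.5, dt_max)
    times.append(t_next)
    values.append(y)

print(f"{len(times)} samples; spacing is densest near the burst at t = 60")
```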

  7. Distributed database kriging for adaptive sampling (D²KAS)

    DOE PAGES

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-03-18

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.

  8. Distributed database kriging for adaptive sampling (D²KAS)

    SciTech Connect

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-03-18

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.

  9. Distributed Database Kriging for Adaptive Sampling (D2 KAS)

    NASA Astrophysics Data System (ADS)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-07-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5-25, while retaining high accuracy for various choices of the algorithm parameters.
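    The lookup-or-compute decision at the heart of this scheme can be sketched as below. A simple distance-weighted interpolation stands in for kriging, the nearest stored-point distance stands in for the kriging error estimate, and a plain in-memory list stands in for the distributed Redis database; the toy fine-scale model and tolerance are likewise assumptions.

```python
# Sketch of the lookup-or-compute decision behind kriging-based adaptive
# sampling: interpolate the micro-scale response from nearby stored results
# when the estimated error is small, otherwise run the expensive fine-scale
# model and store the result.
import numpy as np

def fine_scale_model(x):
    # Stand-in for an expensive molecular-dynamics evaluation.
    return np.sin(3.0 * x) + 0.5 * x

def predict(x, pts, vals, length=0.3):
    # Weighted average of stored neighbors plus a crude error proxy.
    d = np.abs(pts - x)
    w = np.exp(-(d / length) ** 2)
    return np.sum(w * vals) / np.sum(w), d.min()

db_x, db_y = [0.0], [fine_scale_model(0.0)]     # seed the "database"
tolerance, evaluations, responses = 0.05, 0, []

for x in np.linspace(0.0, 2.0, 200):            # macro-solver requests
    value, nearest = predict(x, np.asarray(db_x), np.asarray(db_y))
    if nearest > tolerance:                     # too far from stored data
        value = fine_scale_model(x)
        db_x.append(x)
        db_y.append(value)
        evaluations += 1
    responses.append(value)                     # value handed to macro solver

print(f"fine-scale model run for {evaluations} of {len(responses)} requests")
```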

  10. Brain source localization based on fast fully adaptive approach.

    PubMed

    Ravan, Maryam; Reilly, James P

    2012-01-01

    In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization (beamforming) methods often fail when the number of observations is small. This is particularly true when measuring evoked potentials, especially when the number of electrodes is large. Due to the nonstationarity of the EEG/MEG, an adaptive capability is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This paper develops and tests a new multistage adaptive processing for brain source localization that has previously been used in radar statistical signal processing applications with uniform linear antenna arrays. This processing, referred to as the fast fully adaptive (FFA) approach, could significantly reduce the required sample support and computational complexity, while still processing all available DoFs. The performance improvement offered by the FFA approach in comparison to the fully adaptive minimum variance beamforming (MVB) with limited data is demonstrated by bootstrapping simulated data to evaluate the variability of the source location.

  11. Flight Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  12. Chaotic satellite attitude control by adaptive approach

    NASA Astrophysics Data System (ADS)

    Wei, Wei; Wang, Jing; Zuo, Min; Liu, Zaiwen; Du, Junping

    2014-06-01

    In this article, chaos control of satellite attitude motion is considered. Adaptive control based on dynamic compensation is utilised to suppress the chaotic behaviour. Control approaches with three control inputs and with only one control input are proposed. Since the adaptive control employed is based on dynamic compensation, a faithful model of the system is not required. Sinusoidal disturbance and parameter uncertainties are considered to evaluate the robustness of the closed-loop system. Both approaches are confirmed by theoretical and numerical results.

  13. Adaptive Metropolis Sampling with Product Distributions

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Lee, Chiu Fan

    2005-01-01

    The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution π(x). It works by repeatedly sampling a separate proposal distribution T(x, x′) to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t′) : t′ < t} to estimate the product distribution that has the least Kullback-Leibler distance to π. That estimate is the information-theoretically optimal mean-field approximation to π. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
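    A minimal sketch of the adaptive scheme is shown below, with one simplification: instead of minimizing the Kullback-Leibler distance explicitly, the product-form proposal is refit by moment matching (independent per-dimension Gaussian means and standard deviations estimated from the walk history), and the acceptance ratio is that of an independence sampler. The correlated two-dimensional Gaussian target is a toy example.

```python
# Sketch of Metropolis-Hastings with a proposal adapted during the walk: an
# independent (product-form) Gaussian proposal whose per-dimension moments
# are periodically refit to the samples collected so far. Moment matching is
# a simple stand-in for the KL-optimal mean-field approximation.
import numpy as np

rng = np.random.default_rng(7)
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))

def log_target(x):
    return -0.5 * x @ cov_inv @ x

def log_proposal(x, mu, sd):
    # Log density of the product (independent) Gaussian, up to a constant.
    return -0.5 * np.sum(((x - mu) / sd) ** 2 + 2.0 * np.log(sd))

mu, sd = np.zeros(2), np.ones(2)              # initial product proposal
x = np.zeros(2)
chain = [x]

for t in range(1, 5000):
    x_new = mu + sd * rng.normal(size=2)      # independence proposal draw
    log_alpha = (log_target(x_new) + log_proposal(x, mu, sd)
                 - log_target(x) - log_proposal(x_new, mu, sd))
    if np.log(rng.uniform()) < log_alpha:
        x = x_new
    chain.append(x)
    if t % 500 == 0:                          # periodically re-adapt proposal
        hist = np.asarray(chain)
        mu, sd = hist.mean(axis=0), hist.std(axis=0) + 1e-3

samples = np.asarray(chain)
print("sample covariance:\n", np.round(np.cov(samples.T), 2))
```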

  14. A modular approach to adaptive structures.

    PubMed

    Pagitz, Markus; Pagitz, Manuel; Hühne, Christian

    2014-01-01

    A remarkable property of nastic, shape-changing plants is their complete fusion between actuators and structure. This is achieved by combining a large number of cells whose geometry, internal pressures and material properties are optimized for a given set of target shapes and stiffness requirements. An advantage of such a fusion is that the cell walls are prestressed by the cell pressures, which increases the overall structural stiffness and decreases the weight. Inspired by the nastic movement of plants, Pagitz et al (2012 Bioinspir. Biomim. 7) published a novel concept for pressure-actuated cellular structures. This article extends previous work by introducing a modular approach to adaptive structures. An algorithm that breaks down any continuous target shape into a small number of standardized modules is presented. Furthermore it is shown how cytoskeletons within each cell enhance the properties of adaptive modules. An adaptive passenger seat and an aircraft's leading and trailing edges are used to demonstrate the potential of a modular approach. PMID:25289521

  15. Cross-Cultural Adaptation: Current Approaches.

    ERIC Educational Resources Information Center

    Kim, Young Yun, Ed.; Gudykunst, William B., Ed.

    1988-01-01

    Reflecting multidisciplinary and multisocietal approaches, this collection presents 14 theoretical or research-based essays dealing with cross-cultural adaptation of individuals who are born and raised in one culture and find themselves in need of modifying their customary life patterns in a foreign culture. Papers in the collection are:…

  16. Matched filter based iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Nepal, Ramesh; Zhang, Yan Rockee; Li, Zhengzheng; Blake, William

    2016-05-01

    Matched filter sidelobes from diversified LPI waveform design and sensor resolution are two important considerations in radars and active sensors in general. Matched filter sidelobes can potentially mask weaker targets, and low sensor resolution not only causes a high margin of error but also limits sensing in target-rich environments/sectors. Improving these factors depends, in part, on the transmitted waveform and consequently on the pulse compression technique. An adaptive pulse compression algorithm is hence desired that can mitigate the aforementioned limitations. A new Matched Filter based Iterative Adaptive Approach, MF-IAA, has been developed as an extension to the traditional Iterative Adaptive Approach, IAA. MF-IAA takes its input as the Matched Filter output. The motivation here is to facilitate implementation of the Iterative Adaptive Approach without disrupting the processing chain of the traditional Matched Filter. Similar to IAA, MF-IAA is a user-parameter-free, iterative, weighted least squares based spectral identification algorithm. This work focuses on the implementation of MF-IAA. The feasibility of MF-IAA is studied using a realistic airborne radar simulator as well as actual measured airborne radar data. The performance of MF-IAA is measured with different test waveforms and different signal-to-noise ratio (SNR) levels. In addition, range-Doppler super-resolution using MF-IAA is investigated. Sidelobe reduction as well as super-resolution enhancement is validated. The robustness of MF-IAA with respect to different LPI waveforms and SNR levels is also demonstrated.

  17. Adaptive Importance Sampling for Control and Inference

    NASA Astrophysics Data System (ADS)

    Kappen, H. J.; Ruiz, H. C.

    2016-03-01

    Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows feedback controllers to be learned using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.

  18. Irregular and adaptive sampling for automatic geophysic measure systems

    NASA Astrophysics Data System (ADS)

    Avagnina, Davide; Lo Presti, Letizia; Mulassano, Paolo

    2000-07-01

    In this paper a sampling method based on an irregular and adaptive strategy is described. It can be used as an automatic guide for rovers designed to explore terrestrial and planetary environments. Starting from the hypothesis that an exploratory vehicle is equipped with a payload able to acquire measurements of quantities of interest, the method can detect objects of interest from the measured points and realize adaptive sampling, while only coarsely describing the uninteresting background.

  19. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high- order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation has been made in our contributions in the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.

  20. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.

    PubMed

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-08-31

    To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling based approaches can control upload frequency directly, however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC) which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason of high performance in the proposed approach.
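    The inverse-frequency idea can be sketched as below: common sensed values are uploaded with low probability and rare values with high probability, so the uploaded subset is closer to uniform over the value range while respecting an upload budget. The histogram density estimate, the keep-probability formula, and the budget rescaling are illustrative assumptions rather than the published ASIC algorithms.

```python
# Sketch of information-aware adaptive sampling for a body-area sensor:
# frequently seen values are kept with low probability and rare values with
# high probability, subject to an overall upload budget.
import numpy as np

rng = np.random.default_rng(8)

sensed = rng.normal(loc=70, scale=5, size=20_000)     # e.g., heart-rate-like
counts, edges = np.histogram(sensed, bins=30)
density = counts / counts.sum()

# Keep probability inversely proportional to how common a value is, rescaled
# so the expected number of uploads stays within the budget.
bin_idx = np.clip(np.digitize(sensed, edges) - 1, 0, len(counts) - 1)
p_keep = 1.0 / (density[bin_idx] + 1e-9)
budget = 2_000
p_keep *= budget / p_keep.sum()
p_keep = np.clip(p_keep, 0.0, 1.0)

uploaded = sensed[rng.uniform(size=sensed.size) < p_keep]
print(f"uploaded {uploaded.size} of {sensed.size} readings")
```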

  1. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks

    PubMed Central

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-01-01

    To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling based approaches can control upload frequency directly, however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC) which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason of high performance in the proposed approach. PMID:27589758

  2. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.

    PubMed

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-01-01

    To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling based approaches can control upload frequency directly, however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC) which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason of high performance in the proposed approach. PMID:27589758

  3. Adaptive importance sampling of random walks on continuous state spaces

    SciTech Connect

    Baggerly, K.; Cox, D.; Picard, R.

    1998-11-01

    The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.

  4. Adaptive video compressed sampling in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi

    2016-07-01

    In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.

  5. Approaching neuropsychological tasks through adaptive neurorobots

    NASA Astrophysics Data System (ADS)

    Gigliotta, Onofrio; Bartolomeo, Paolo; Miglino, Orazio

    2015-04-01

    Neuropsychological phenomena have mainly been modeled, in the mainstream approach, by attempting to reproduce their neural substrate, whereas sensory-motor contingencies have attracted less attention. In this work, we introduce a simulator based on the evolutionary robotics platform Evorobot* in order to set up in silico neuropsychological tasks. Moreover, in this study we trained artificial embodied neurorobotic agents equipped with a pan/tilt camera, provided with different neural and motor capabilities, to solve a well-known neuropsychological test: the cancellation task, in which an individual is asked to cancel target stimuli surrounded by distractors. Results showed that embodied agents provided with additional motor capabilities (a zooming/attentional actuator) outperformed simple pan/tilt agents, even those equipped with more complex neural controllers, and that the zooming ability is exploited to correctly categorise the presented stimuli. We conclude that, since neural computational power alone cannot explain the (artificial) cognition which emerged throughout the adaptive process, this kind of modelling approach can be fruitful in neuropsychological modelling, where the importance of having a body is often neglected.

  6. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...

  7. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  8. Local Adaptation in European Firs Assessed through Extensive Sampling across Altitudinal Gradients in Southern Europe

    PubMed Central

    Postolache, Dragos; Lascoux, Martin; Drouzas, Andreas D.; Källman, Thomas; Leonarduzzi, Cristina; Liepelt, Sascha; Piotti, Andrea; Popescu, Flaviu; Roschanski, Anna M.; Zhelev, Peter; Fady, Bruno; Vendramin, Giovanni Giuseppe

    2016-01-01

    Background: Local adaptation is a key driver of phenotypic and genetic divergence at loci responsible for adaptive trait variation in forest tree populations. Its experimental assessment requires rigorous sampling strategies such as those involving population pairs replicated across broad spatial scales. Methods: A hierarchical Bayesian model of selection (HBM) that explicitly considers both the replication of the environmental contrast and the hierarchical genetic structure among replicated study sites is introduced. Its power was assessed through simulations and compared to classical ‘within-site’ approaches (FDIST, BAYESCAN) and a simplified, within-site, version of the model introduced here (SBM). Results: HBM demonstrates that hierarchical approaches are very powerful to detect replicated patterns of adaptive divergence with low false-discovery (FDR) and false-non-discovery (FNR) rates compared to the analysis of different sites separately through within-site approaches. The hypothesis of local adaptation to altitude was further addressed by analyzing replicated Abies alba population pairs (low and high elevations) across the species’ southern distribution range, where the effects of climatic selection are expected to be the strongest. For comparison, a single population pair from the closely related species A. cephalonica was also analyzed. The hierarchical model did not detect any pattern of adaptive divergence to altitude replicated in the different study sites. Instead, idiosyncratic patterns of local adaptation among sites were detected by within-site approaches. Conclusion: Hierarchical approaches may miss idiosyncratic patterns of adaptation among sites, and we strongly recommend the use of both hierarchical (multi-site) and classical (within-site) approaches when addressing the question of adaptation across broad spatial scales. PMID:27392065

  9. Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations

    SciTech Connect

    Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

    Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus, exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from a topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
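
    As a rough illustration of the first (global-model) strategy, the sketch below replaces the expensive nuclear simulation with a hypothetical run_simulation function and uses an off-the-shelf classifier as the global surrogate; new samples are placed where the predicted failure probability is closest to 0.5, i.e., near the current limit-surface estimate. This is a generic stand-in, not the Morse-Smale-based machinery described in the record.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def run_simulation(x):
        # Hypothetical stand-in for an expensive simulation: "failure" (1) occurs
        # when a nonlinear response exceeds a threshold.
        return int(x[0] ** 2 + 0.5 * np.sin(5 * x[1]) > 0.6)

    rng = np.random.default_rng(0)
    dim, budget, batch = 2, 200, 10

    X = rng.uniform(0, 1, size=(30, dim))            # initial space-filling design
    y = np.array([run_simulation(x) for x in X])

    surrogate = RandomForestClassifier(n_estimators=200, random_state=0)
    while len(X) < budget:
        surrogate.fit(X, y)
        # Score a candidate pool and keep the points whose predicted failure
        # probability is closest to 0.5, i.e. nearest the estimated limit surface.
        cand = rng.uniform(0, 1, size=(2000, dim))
        p_fail = surrogate.predict_proba(cand)[:, 1]
        pick = cand[np.argsort(np.abs(p_fail - 0.5))[:batch]]
        X = np.vstack([X, pick])
        y = np.concatenate([y, [run_simulation(x) for x in pick]])

    surrogate.fit(X, y)
    grid = rng.uniform(0, 1, size=(50000, dim))
    print("estimated failure probability:", surrogate.predict_proba(grid)[:, 1].mean())
    ```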

  10. Stochastic, Adaptive Sampling of Information by Microvilli in Fly Photoreceptors

    PubMed Central

    Song, Zhuoyi; Postma, Marten; Billings, Stephen A.; Coca, Daniel; Hardie, Roger C.; Juusola, Mikko

    2012-01-01

    Background: In fly photoreceptors, light is focused onto a photosensitive waveguide, the rhabdomere, consisting of tens of thousands of microvilli. Each microvillus is capable of generating elementary responses, quantum bumps, in response to single photons using a stochastically operating phototransduction cascade. Whereas much is known about the cascade reactions, less is known about how the concerted action of the microvilli population encodes light changes into neural information and how the ultrastructure and biochemical machinery of photoreceptors of flies and other insects evolved in relation to the information sampling and processing they perform. Results: We generated biophysically realistic fly photoreceptor models, which accurately simulate the encoding of visual information. By comparing stochastic simulations with single cell recordings from Drosophila photoreceptors, we show how adaptive sampling by 30,000 microvilli captures the temporal structure of natural contrast changes. Following each bump, individual microvilli are rendered briefly (∼100–200 ms) refractory, thereby reducing quantum efficiency with increasing intensity. The refractory period opposes saturation, dynamically and stochastically adjusting availability of microvilli (bump production rate: sample rate), whereas intracellular calcium and voltage adapt bump amplitude and waveform (sample size). These adaptive sampling principles result in robust encoding of natural light changes, which both approximates perceptual contrast constancy and enhances novel events under different light conditions, and predict information processing across a range of species with different visual ecologies. Conclusions: These results clarify why fly photoreceptors are structured the way they are and function as they do, linking sensory information to sensory evolution and revealing benefits of stochasticity for neural information processing. PMID:22704990
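
    A toy stochastic simulation of the sampling principle summarized above, assuming exponentially distributed refractory periods and photons landing uniformly across the microvilli; the phototransduction cascade and the calcium/voltage adaptation of bump size are omitted. It only illustrates how refractoriness lowers quantum efficiency as intensity rises.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_microvilli = 30000
    dt = 0.001                                    # 1 ms time step
    refractory_mean = 0.15                        # ~150 ms mean refractory period
    # One second of dim light followed by one second of bright light (photons/s)
    rates = np.concatenate([np.full(1000, 1e5), np.full(1000, 1e6)])

    refractory_until = np.zeros(n_microvilli)
    bumps_per_step = []
    for step, rate in enumerate(rates):
        t = step * dt
        hits = rng.integers(0, n_microvilli, size=rng.poisson(rate * dt))
        # A microvillus produces a bump only if it is not currently refractory
        absorbed = np.unique(hits[refractory_until[hits] <= t])
        refractory_until[absorbed] = t + rng.exponential(refractory_mean, size=absorbed.size)
        bumps_per_step.append(absorbed.size)

    bumps = np.array(bumps_per_step)
    print("quantum efficiency, dim   :", bumps[:1000].sum() / 1e5)
    print("quantum efficiency, bright:", bumps[1000:].sum() / 1e6)
    ```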

  11. Anomalous human behavior detection: an adaptive approach

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Coen; Halma, Arvid; Schutte, Klamer

    2013-05-01

    Detection of anomalies (outliers or abnormal instances) is an important element in a range of applications such as fault, fraud, and suspicious behavior detection and knowledge discovery. In this article we propose a new method for anomaly detection and tested its ability to detect anomalous behavior in videos from DARPA's Mind's Eye program, containing a variety of human activities. In this semi-unsupervised task, a set of normal instances is provided for training, after which unknown abnormal behavior has to be detected in a test set. The features extracted from the video data have high dimensionality, are sparse and inhomogeneously distributed in the feature space, making it a challenging task. Given these characteristics, a distance-based method is preferred, but choosing a threshold to classify instances as (ab)normal is non-trivial. Our novel approach, the Adaptive Outlier Distance (AOD), is able to detect outliers in these conditions based on local distance ratios. The underlying assumption is that the local maximum distance between labeled examples is a good indicator of the variation in that neighborhood, and therefore a local threshold will result in more robust outlier detection. We compare our method to existing state-of-the-art methods such as the Local Outlier Factor (LOF) and the Local Distance-based Outlier Factor (LDOF). The results of the experiments show that our novel approach improves the quality of anomaly detection.
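
    The abstract does not give the exact AOD formula, so the sketch below only illustrates the general idea of a locally set, distance-ratio-based threshold: a test point is flagged when its distance to the nearest normal training point exceeds the largest distance observed among that point's k nearest normal neighbours. The function and parameter names are illustrative, not the published definition.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def local_ratio_outlier(train, test, k=10, factor=1.0):
        """Flag test points whose distance to the normal training data exceeds
        the local scale of the surrounding neighbourhood."""
        d = cdist(test, train)                     # test-to-train distances
        idx = np.argsort(d, axis=1)[:, :k]         # k nearest normal examples
        flags = []
        for i, nbrs in enumerate(idx):
            local = train[nbrs]
            scale = cdist(local, local).max()      # local maximum pairwise distance
            flags.append(d[i, nbrs[0]] > factor * scale)
        return np.array(flags)

    rng = np.random.default_rng(2)
    normal = rng.normal(0, 1, size=(500, 5))       # training: normal behaviour only
    test = np.vstack([rng.normal(0, 1, size=(20, 5)),
                      rng.normal(6, 1, size=(5, 5))])   # last 5 rows are anomalies
    print(local_ratio_outlier(normal, test))
    ```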

  12. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyzed some of the phonological rules of Russian loanword adaptation in Persian, from the perspective of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of phonological processes in Russian loanword adaptation in Persian. By gathering about 50 current Russian loanwords, we selected some of them for analysis. We…

  13. On efficient two-stage adaptive designs for clinical trials with sample size adjustment.

    PubMed

    Liu, Qing; Li, Gang; Anderson, Keaven M; Lim, Pilar

    2012-01-01

    Group sequential designs are rarely used for clinical trials with substantial overrunning due to fast enrollment or long duration of treatment and follow-up. Traditionally, such trials rely on fixed sample size designs. Recently, various two-stage adaptive designs have been introduced to allow sample size adjustment to increase statistical power or avoid unnecessarily large trials. However, these adaptive designs can be seriously inefficient. To address this infamous problem, we propose a likelihood-based two-stage adaptive design where sample size adjustment is derived from a pseudo group sequential design using cumulative conditional power. We show through numerical examples that this design cannot be improved by group sequential designs. In addition, the approach may uniformly improve any existing two-stage adaptive designs with sample size adjustment. For statistical inference, we provide methods for sequential p-values and confidence intervals, as well as median unbiased and minimum variance unbiased estimates. We show that the claim of inefficiency of adaptive designs by Tsiatis and Mehta (2003) is logically flawed, and thereby provide a strong defense of Cui et al. (1999). PMID:22651105

  14. Combined phylogenetic and genomic approaches for the high-throughput study of microbial habitat adaptation

    PubMed Central

    Zaneveld, Jesse RR.; Parfrey, Laura Wegener; Van Treuren, Will; Lozupone, Catherine; Clemente, Jose C.; Knights, Dan; Stombaugh, Jesse; Kuczynski, Justin; Knight, Rob

    2011-01-01

    High-throughput sequencing technologies provide new opportunities to address longstanding questions about habitat adaptation in microbial organisms. How have microbes managed to adapt to such a wide range of environments, and what genomic features allow for such adaptation? We review recent large-scale studies of habitat adaptation, with emphasis on those that utilize phylogenetic techniques. On the basis of current trends, we summarize methodological challenges faced by investigators, and the tools, techniques, and analytical approaches available to overcome them. Phylogenetic approaches and detailed information about each environmental sample will be critical as the ability to collect genome sequences continues to expand. PMID:21872475

  15. Elucidating Microbial Adaptation Dynamics via Autonomous Exposure and Sampling

    NASA Astrophysics Data System (ADS)

    Grace, J. M.; Verseux, C.; Gentry, D.; Moffet, A.; Thayabaran, R.; Wong, N.; Rothschild, L.

    2013-12-01

    The adaptation of micro-organisms to their environments is a complex process of interaction between the pressures of the environment and of competition. Reducing this multifactorial process to environmental exposure in the laboratory is a common tool for elucidating individual mechanisms of evolution, such as mutation rates[Wielgoss et al., 2013]. Although such studies inform fundamental questions about the way adaptation and even speciation occur, they are often limited by labor-intensive manual techniques[Wassmann et al., 2010]. Current methods for controlled study of microbial adaptation limit the length of time, the depth of collected data, and the breadth of applied environmental conditions. Small idiosyncrasies in manual techniques can have large effects on outcomes; for example, there are significant variations in induced radiation resistances following similar repeated exposure protocols[Alcántara-Díaz et al., 2004; Goldman and Travisano, 2011]. We describe here a project under development to allow rapid cycling of multiple types of microbial environmental exposure. The system allows continuous autonomous monitoring and data collection of both single species and sampled communities, independently and concurrently providing multiple types of controlled environmental pressure (temperature, radiation, chemical presence or absence, and so on) to a microbial community in dynamic response to the ecosystem's current status. When combined with DNA sequencing and extraction, such a controlled environment can cast light on microbial functional development, population dynamics, inter- and intra-species competition, and microbe-environment interaction. The project's goal is to allow rapid, repeatable iteration of studies of both natural and artificial microbial adaptation. As an example, the same system can be used both to increase the pH of a wet soil aliquot over time while periodically sampling it for genetic activity analysis, or to repeatedly expose a culture of

  16. Effect of imperfect detectability on adaptive and conventional sampling: simulated sampling of freshwater mussels in the upper Mississippi River.

    PubMed

    Smith, David R; Gray, Brian R; Newton, Teresa J; Nichols, Doug

    2010-11-01

    Adaptive sampling designs are recommended where, as is typical with freshwater mussels, the outcome of interest is rare and clustered. However, the performance of adaptive designs has not been investigated when outcomes are not only rare and clustered but also imperfectly detected. We address this combination of challenges using data simulated to mimic properties of freshwater mussels from a reach of the upper Mississippi River. Simulations were conducted under a range of sample sizes and detection probabilities. Under perfect detection, efficiency of the adaptive sampling design increased relative to the conventional design as sample size increased and as density decreased. Also, the probability of sampling occupied habitat was four times higher for adaptive than conventional sampling of the lowest density population examined. However, imperfect detection resulted in substantial biases in sample means and variances under both adaptive sampling and conventional designs. The efficiency of adaptive sampling declined with decreasing detectability. Also, the probability of encountering an occupied unit during adaptive sampling, relative to conventional sampling declined with decreasing detectability. Thus, the potential gains in the application of adaptive sampling to rare and clustered populations relative to conventional sampling are reduced when detection is imperfect. The results highlight the need to increase or estimate detection to improve performance of conventional and adaptive sampling designs.
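
    A minimal simulation of the detectability effect discussed above, using a synthetic clustered population and a conventional simple random sample of quadrats; the adaptive design and the variance comparisons from the study are omitted. It shows the negative bias in the estimated density when individuals are detected with probability less than one.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Rare, clustered population: counts of individuals per quadrat
    n_quadrats = 1000
    counts = np.zeros(n_quadrats, dtype=int)
    for c in rng.choice(n_quadrats, size=10, replace=False):
        lo, hi = max(0, c - 2), min(n_quadrats, c + 3)
        counts[lo:hi] += rng.poisson(8, size=hi - lo)
    true_density = counts.mean()

    def srs_estimate(detect_p, n_sample=100):
        """Density estimate from a simple random sample of quadrats when each
        individual in a visited quadrat is detected with probability detect_p."""
        visited = rng.choice(n_quadrats, size=n_sample, replace=False)
        observed = rng.binomial(counts[visited], detect_p)
        return observed.mean()

    for p in (1.0, 0.8, 0.5):
        est = np.mean([srs_estimate(p) for _ in range(2000)])
        print(f"detection={p:.1f}  mean estimate={est:.2f}  true density={true_density:.2f}")
    ```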

  17. Temporally Adaptive Sampling: A Case Study in Rare Species Survey Design with Marbled Salamanders (Ambystoma opacum)

    PubMed Central

    Charney, Noah D.; Kubel, Jacob E.; Eiseman, Charles S.

    2015-01-01

    Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224
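
    The gain reported above comes from concentrating effort in good years. The simulation below captures that logic with a regionally shared, year-specific detection probability; the "adaptive" surveyor uses that signal (standing in for long-term monitoring at known ponds) to pick the best years, while the non-adaptive surveyor picks years arbitrarily. All numbers are illustrative rather than taken from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_years, n_ponds = 20, 200
    year_quality = rng.beta(2, 2, size=n_years)        # regionally correlated detectability
    occupied = rng.random(n_ponds) < 0.2               # unknown populations to be found

    def survey(years, visits_per_year=50):
        found = np.zeros(n_ponds, dtype=bool)
        for y in years:
            ponds = rng.choice(n_ponds, size=visits_per_year, replace=False)
            detected = occupied[ponds] & (rng.random(visits_per_year) < year_quality[y])
            found[ponds[detected]] = True
        return found.sum() / occupied.sum()             # fraction of populations located

    fixed_years = rng.choice(n_years, size=5, replace=False)   # non-adaptive choice
    best_years = np.argsort(year_quality)[-5:]                 # temporally adaptive choice
    print("non-adaptive survey success :", np.mean([survey(fixed_years) for _ in range(500)]))
    print("adaptive-year survey success:", np.mean([survey(best_years) for _ in range(500)]))
    ```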

  18. Temporally adaptive sampling: a case study in rare species survey design with marbled salamanders (Ambystoma opacum).

    PubMed

    Charney, Noah D; Kubel, Jacob E; Eiseman, Charles S

    2015-01-01

    Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224

  19. Temporally adaptive sampling: a case study in rare species survey design with marbled salamanders (Ambystoma opacum).

    PubMed

    Charney, Noah D; Kubel, Jacob E; Eiseman, Charles S

    2015-01-01

    Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds.

  20. Using continuous in-situ measurements to adaptively trigger urban storm water samples

    NASA Astrophysics Data System (ADS)

    Wong, B. P.; Kerkez, B.

    2015-12-01

    Until cost-effective in-situ sensors are available for biological parameters, nutrients and metals, automated samplers will continue to be the primary source of reliable water quality measurements. Given limited sample bottles, however, autosamplers often obscure insights into nutrient sources and biogeochemical processes which would otherwise be captured using a continuous sampling approach. To that end, we evaluate the efficacy of a novel method to measure first-flush nutrient dynamics in flashy, urban watersheds. Our approach reduces the number of samples required to capture water quality dynamics by leveraging an internet-connected sensor node, which is equipped with a suite of continuous in-situ sensors and an automated sampler. To capture both the initial baseflow as well as storm concentrations, a cloud-hosted adaptive algorithm analyzes the high-resolution sensor data along with local weather forecasts to optimize a sampling schedule. The method was tested in a highly developed urban catchment in Ann Arbor, Michigan, and collected samples of nitrate, phosphorus, and suspended solids throughout several storm events. Results indicate that the watershed does not exhibit first-flush dynamics, a behavior that would have been obscured when using a non-adaptive sampling approach.
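
    A schematic of the triggering logic described above, assuming a water-level sensor, a rain-forecast probability, and a fixed bottle budget; the thresholds, the 5-minute time step, and the spacing rule are invented for illustration and are not the deployed algorithm.

    ```python
    import numpy as np

    def should_sample(level, baseline, rain_prob, rise_threshold=0.05, forecast_threshold=0.6):
        """Trigger when a storm is forecast and the stage has risen noticeably
        above baseflow (thresholds are illustrative)."""
        rising = (level - baseline) / max(baseline, 1e-6) > rise_threshold
        return rain_prob > forecast_threshold and rising

    rng = np.random.default_rng(5)
    # Synthetic stage record: baseflow, a storm rise, then recession (5-min steps)
    levels = 0.5 + np.concatenate([rng.normal(0, 0.01, 100),
                                   np.linspace(0, 0.4, 50),
                                   np.linspace(0.4, 0.0, 100)])
    baseline = levels[:100].mean()
    bottles, last_sample = 24, -10
    for t, level in enumerate(levels):
        rain_prob = 0.8 if 90 <= t <= 180 else 0.1      # stand-in for a weather forecast
        if bottles > 0 and t - last_sample >= 6 and should_sample(level, baseline, rain_prob):
            bottles -= 1
            last_sample = t
            print(f"step {t}: sample collected ({bottles} bottles remaining)")
    ```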

  1. POF-Darts: Geometric adaptive sampling for probability of failure

    DOE PAGES

    Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; Romero, Vicente J.; Rushdi, Ahmad A.

    2016-06-18

    We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions not yet covered by spheres will shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from surrogate construction, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.

  2. Passive and active adaptive management: Approaches and an example

    USGS Publications Warehouse

    Williams, B.K.

    2011-01-01

    Adaptive management is a framework for resource conservation that promotes iterative learning-based decision making. Yet there remains considerable confusion about what adaptive management entails, and how to actually make resource decisions adaptively. A key but somewhat ambiguous distinction in adaptive management is between active and passive forms of adaptive decision making. The objective of this paper is to illustrate some approaches to active and passive adaptive management with a simple example involving the drawdown of water impoundments on a wildlife refuge. The approaches are illustrated for the drawdown example, and contrasted in terms of objectives, costs, and potential learning rates. Some key challenges to the actual practice of adaptive management are discussed, and tradeoffs between implementation costs and long-term benefits are highlighted. © 2010 Elsevier Ltd.

  3. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals named adaptive robustness is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combining strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  4. [Spanish adaptation of Hobfoll's Strategic Approach to Coping Scale (SACS)].

    PubMed

    Pedrero Pérez, Eduardo J; Santed Germán, Miguel A; Pérez García, Ana M

    2012-01-01

    The present research adapted the Strategic Approach to Coping Scale (SACS), developed by Hobfoll and colleagues, to the Spanish population. SACS is an instrument derived from Hobfoll's Conservation of Resources Theory, which emphasises the contribution of social factors to coping processes. This instrument assesses coping strategies with 9 subscales, organised in three dimensions: orientation to the problem (active/passive), use of social resources (prosocial/antisocial), and orientation to others involved (direct/indirect). The Spanish version, administered to a non-clinical sample (N = 767), yielded 7 subscales structured in prosocial/antisocial, active/passive and reflexive/intuitive dimensions, with adequate reliability and construct validity. To conclude, the Spanish SACS is a potentially useful and reliable instrument for research and clinical purposes, mainly in areas in which social components need to be explicitly considered.

  5. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
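
    A stripped-down sketch of the first ingredient (the adaptively refitted Gaussian mixture proposal) on a toy bimodal target; the polynomial chaos surrogate and the actual forward model are omitted, and the resample-then-refit update used here is only one simple way to adapt the mixture.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    def log_posterior(x):
        # Toy bimodal stand-in for the posterior; a real application would evaluate
        # a forward model against observations here.
        return np.logaddexp(multivariate_normal.logpdf(x, mean=[-2.0, 0.0]),
                            multivariate_normal.logpdf(x, mean=[2.0, 0.0]))

    rng = np.random.default_rng(6)
    proposal = GaussianMixture(n_components=4, random_state=0)
    samples = rng.normal(0.0, 3.0, size=(500, 2))          # broad initial proposal draws

    for it in range(5):
        if it == 0:
            log_q = multivariate_normal.logpdf(samples, mean=[0.0, 0.0], cov=9.0 * np.eye(2))
        else:
            log_q = proposal.score_samples(samples)
        log_w = log_posterior(samples) - log_q              # importance weights
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Refit the mixture to importance-resampled points, then draw the next batch
        proposal.fit(samples[rng.choice(len(samples), size=500, p=w)])
        samples, _ = proposal.sample(500)

    print("fitted proposal means:\n", proposal.means_)
    ```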

  6. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  7. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  8. Approach for reconstructing anisoplanatic adaptive optics images.

    PubMed

    Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J

    2007-08-20

    Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle theta(0). For field angles larger than theta(0), the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement of the mean squared error between the reconstructed image and the object compared to the deconvolution performed using the on-axis PSF. PMID:17712366

  9. [An adapted relational approach to hospitalised adolescents].

    PubMed

    Naville, Lydie

    2011-01-01

    The treatment of an adolescent hospitalised in paediatrics often poses difficulties. The relational aspect of the nurse's work in this period of development between childhood and adulthood remains delicate in a context of institutionalisation. What is the interaction between the relational approach and the adolescent's experience of hospitalisation in paediatrics?

  10. [An adapted relational approach to hospitalised adolescents].

    PubMed

    Naville, Lydie

    2011-01-01

    The treatment of an adolescent hospitalised in paediatrics often poses difficulties. The relational aspect of the nurse's work in this period of development between childhood and adulthood remains delicate in a context of institutionalisation. What is the interaction between the relational approach and the adolescent's experience of hospitalisation in paediatrics? PMID:21520581

  11. An efficient sampling algorithm with adaptations for Bayesian variable selection.

    PubMed

    Araki, Takamitsu; Ikeda, Kazushi; Akaho, Shotaro

    2015-01-01

    In Bayesian variable selection, indicator model selection (IMS) is a class of well-known sampling algorithms that has been used in various models. The IMS uses pseudo-priors and contains specific methods such as Gibbs variable selection (GVS) and Kuo and Mallick's (KM) method. However, the efficiency of the IMS strongly depends on the parameters of a proposal distribution and the pseudo-priors. Specifically, the GVS determines these parameters based on a pilot run for a full model, and the KM method sets them to those of the priors, which often leads to slow mixing. In this paper, we propose an algorithm that adapts the parameters of the IMS during the run. The parameters obtained on the fly provide an appropriate proposal distribution and pseudo-priors, which improve the mixing of the algorithm. We also prove a convergence theorem for the proposed algorithm, and confirm that the algorithm is more efficient than the conventional algorithms through experiments on Bayesian variable selection.

  12. Staged sacrectomy--an adaptive approach.

    PubMed

    Ramamurthy, Rajaraman; Bose, Jagadish Chandra; Muthusamy, Vimalakannan; Natarajan, Mayilvahanan; Kunjithapatham, Deiveegan

    2009-09-01

    Object: Sacral tumors are commonly diagnosed late and therefore present at an advanced stage. The late presentation makes curative surgery technically demanding. Sacrectomy is fraught with a high local recurrence rate and potential complications: deep infection; substantial blood loss; large bone and soft-tissue defects; bladder, bowel, and sexual dysfunction; spinopelvic nonunion; and gait disturbance. The aim of this study was to analyze the complications and morbidity of sacrectomy and the modifications meant to reduce the morbidity. Methods: This is a retrospective study of the patients who underwent sacrectomy between February 1997 and September 2008 in the Department of Surgical Oncology, Government Royapettah Hospital, Kilpauk Medical College, in Chennai, Tamilnadu, India. Sacrectomy was performed using 1 of the following approaches: posterior approach, abdominolateral approach, or abdominosacral approach, either as sequential or staged operations. The morbidity rate after the sequential and staged abdominosacral approaches was analyzed. Functional assessment was made based on the Enneking functional scoring system. The results were analyzed and survival analysis was done using the Kaplan-Meier method (with SPSS software). Results: Nineteen patients underwent sacrectomy, of which 12 operations were partial, 3 were subtotal, and 4 were total sacrectomy. Histological diagnosis included giant cell tumor, chordoma, chondroblastoma, adenocarcinoma of the rectum, and retroperitoneal sarcoma. The giant cell tumor was the most common tumor in this series, followed by chordoma. The patients' mean age at diagnosis was 32 years. There were 10 male and 9 female patients. Forty-seven percent of patients had bowel and bladder disturbances postoperatively, and 57.89% of patients had wound complications. The median follow-up duration was 24 months (range 2-140 months). The 5-year overall survival rate was 70.4%, and the 5-year disease-free survival rate was 65% (based on the Kaplan

  13. Adapting to the Digital Age: A Narrative Approach

    ERIC Educational Resources Information Center

    Cousins, Sarah; Bissar, Dounia

    2012-01-01

    The article adopts a narrative inquiry approach to foreground informal learning and exposes a collection of stories from tutors about how they adapted comfortably to the digital age. We were concerned that despite substantial evidence that bringing about changes in pedagogic practices can be difficult, there is a gap in convincing approaches to…

  14. Adaptive pulse width control and sampling for low power pulse oximetry.

    PubMed

    Gubbi, Sagar Venkatesh; Amrutur, Bharadwaj

    2015-04-01

    Remote sensing of physiological parameters could be a cost-effective approach to improving health care, and low-power sensors are essential for remote sensing because these sensors are often energy-constrained. This paper presents a power-optimized photoplethysmographic sensor interface to sense arterial oxygen saturation, a technique to dynamically trade off SNR for power during sensor operation, and a simple algorithm to choose when to acquire samples in photoplethysmography. A prototype of the proposed pulse oximeter built using commercial-off-the-shelf (COTS) components is tested on 10 adults. The dynamic adaptation techniques described reduce power consumption considerably compared to our reference implementation, and our approach is competitive with state-of-the-art implementations. The techniques presented in this paper may be applied to low-power sensor interface designs where acquiring samples is expensive in terms of power, as epitomized by pulse oximetry. PMID:25014964

  15. Adaptive millimeter-wave synthetic aperture imaging for compressive sampling of sparse scenes.

    PubMed

    Mrozack, Alex; Heimbeck, Martin; Marks, Daniel L; Richard, Jonathan; Everitt, Henry O; Brady, David J

    2014-06-01

    We apply adaptive sensing techniques to the problem of locating sparse metallic scatterers using high-resolution, frequency modulated continuous wave W-band RADAR. Using a single detector, a frequency stepped source, and a lateral translation stage, inverse synthetic aperture RADAR reconstruction techniques are used to search for one or two wire scatterers within a specified range, while an adaptive algorithm determined successive sampling locations. The two-dimensional location of each scatterer is thereby identified with sub-wavelength accuracy in as few as 1/4 the number of lateral steps required for a simple raster scan. The implications of applying this approach to more complex scattering geometries are explored in light of the various assumptions made.

  16. The AdaptiV Approach to Verification of Adaptive Systems

    SciTech Connect

    Rouff, Christopher; Buskens, Richard; Pullum, Laura L; Cui, Xiaohui; Hinchey, Mike

    2012-01-01

    Adaptive systems are critical for future space and other unmanned and intelligent systems. Verification of these systems is also critical for their use in systems with potential harm to human life or with large financial investments. Due to their nondeterministic nature and extremely large state space, current methods for verification of software systems are not adequate to provide a high level of assurance. The combination of stabilization science, high performance computing simulations, compositional verification and traditional verification techniques, plus operational monitors, provides a complete approach to verification and deployment of adaptive systems that has not been used before. This paper gives an overview of this approach.

  17. A Decentralized Adaptive Approach to Fault Tolerant Flight Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Nikulin, Vladimir; Heimes, Felix; Shormin, Victor

    2000-01-01

    This paper briefly reports some results of our study on the application of a decentralized adaptive control approach to a 6 DOF nonlinear aircraft model. The simulation results showed the potential of using this approach to achieve fault tolerant control. Based on this observation and some analysis, the paper proposes a multiple channel adaptive control scheme that makes use of the functionally redundant actuating and sensing capabilities in the model, and explains how to implement the scheme to tolerate actuator and sensor failures. The conditions, under which the scheme is applicable, are stated in the paper.

  18. Autonomous spatially adaptive sampling in experiments based on curvature, statistical error and sample spacing with applications in LDA measurements

    NASA Astrophysics Data System (ADS)

    Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.

    2015-06-01

    Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest or insufficient sample density at important features, or both. A new adaptive sampling technique is presented directing sample collection in proportion to local information content, capturing adequately the short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined uncertainty level overall while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.
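
    A one-dimensional caricature of the idea, assuming a generic noisy point measurement in place of an LDA sample: each interval is scored by its width times a local curvature estimate (a crude space-filling-plus-curvature criterion), and the highest-scoring interval is bisected. The statistical-uncertainty term of the published method is left out.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def measure(x):
        # Stand-in for a noisy point measurement (e.g. a mean-velocity estimate)
        return np.exp(-50.0 * (x - 0.3) ** 2) + 0.3 * x + rng.normal(0, 0.01)

    x = list(np.linspace(0.0, 1.0, 5))            # coarse initial grid
    y = [measure(v) for v in x]

    for _ in range(25):
        order = np.argsort(x)
        xs, ys = np.array(x)[order], np.array(y)[order]
        curv = np.abs(np.diff(ys, 2))                             # curvature proxy
        curv = np.concatenate([[curv[0]], curv, [curv[-1]]])      # pad to len(xs)
        gaps = np.diff(xs)
        score = gaps * 0.5 * (curv[:-1] + curv[1:])               # width x local curvature
        i = int(np.argmax(score))
        new_x = 0.5 * (xs[i] + xs[i + 1])                         # bisect the chosen interval
        x.append(new_x)
        y.append(measure(new_x))

    print(np.round(np.sort(x), 3))   # points cluster around the sharp feature at x = 0.3
    ```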

  19. A Monte Carlo Approach to the Design, Assembly, and Evaluation of Multistage Adaptive Tests

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2008-01-01

    This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…

  20. A new approach to adaptive control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An approach in which the manipulator inverse is used as a feedforward controller is employed in the adaptive control of manipulators in order to achieve trajectory tracking by the joint angles. The desired trajectory is applied as an input to the feedforward controller, and the controller output is used as the driving torque for the manipulator. An adaptive algorithm obtained from MRAC theory is used to update the controller gains to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal enhance closed-loop stability and achieve faster adaptation. Simulation results demonstrate the effectiveness of the proposed control scheme for different reference trajectories, and despite large variations in the payload.

  1. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.

    PubMed

    Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A

    2013-02-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  2. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach

    PubMed Central

    Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.

    2014-01-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  3. Searching for adaptive traits in genetic resources - phenology based approach

    NASA Astrophysics Data System (ADS)

    Bari, Abdallah

    2015-04-01

    Phenology is an important plant trait not only for assessing and forecasting food production but also for searching in genebanks for adaptive traits. Among the phenological parameters we have been considering to search for such adaptive and rare traits are the onset (sowing period) and the seasonality (growing period). Currently, an application is being developed as part of the focused identification of germplasm strategy (FIGS) approach to use climatic data in order to identify crop growing seasons and characterize them in terms of onset and duration. These approximations of growing period characteristics can then be used to estimate flowering and maturity dates for dryland crops, such as wheat, barley, faba bean, lentils and chickpea, and assess, among others, phenology-related traits such as days to heading [dhe] and grain filling period [gfp]. The approach followed here is based on first calculating long-term average daily temperatures by fitting a curve to the monthly data over days from the beginning of the year. Prior to the identification of these phenological stages, the onset is first extracted from onset integer raster GIS layers developed based on a model of the growing period that considers both moisture and temperature limitations. The paper presents some examples of real applications of the approach to search for rare and adaptive traits.

  4. Approach for environmental baseline water sampling

    USGS Publications Warehouse

    Smith, K.S.

    2011-01-01

    Samples collected during the exploration phase of mining represent baseline conditions at the site. As such, they can be very important in forecasting potential environmental impacts should mining proceed, and can become measurements against which future changes are compared. Constituents in stream water draining mined and mineralized areas tend to be geochemically, spatially, and temporally variable, which presents challenges in collecting both exploration and baseline water-quality samples. Because short-term (daily) variations can complicate long-term trends, it is important to consider recent findings concerning geochemical variability of stream-water constituents at short-term timescales in designing sampling plans. Also, adequate water-quality information is key to forecasting potential ecological impacts from mining. Therefore, it is useful to collect baseline water samples adequate for geochemical and toxicological modeling. This requires complete chemical analyses of dissolved constituents that include major and minor chemical elements as well as physicochemical properties (including pH, specific conductance, dissolved oxygen) and dissolved organic carbon. Applying chemical-equilibrium and appropriate toxicological models to water-quality information leads to an understanding of the speciation, transport, sequestration, bioavailability, and aquatic toxicity of potential contaminants. Insights gained from geochemical and toxicological modeling of water-quality data can be used to design appropriate mitigation and to support economic planning for future mining activities.

  5. Motion-adapted pulse sequences for oriented sample (OS) solid-state NMR of biopolymers.

    PubMed

    Lu, George J; Opella, Stanley J

    2013-08-28

    One of the main applications of solid-state NMR is to study the structure and dynamics of biopolymers, such as membrane proteins, under physiological conditions where the polypeptides undergo global motions as they do in biological membranes. The effects of NMR radiofrequency irradiations on nuclear spins are strongly influenced by these motions. For example, we previously showed that the MSHOT-Pi4 pulse sequence yields spectra with resonance line widths about half of those observed using the conventional pulse sequence when applied to membrane proteins undergoing rapid uniaxial rotational diffusion in phospholipid bilayers. In contrast, the line widths were not changed in microcrystalline samples where the molecules did not undergo global motions. Here, we demonstrate experimentally and describe analytically how some Hamiltonian terms are susceptible to sample motions, and it is their removal through the critical π/2 Z-rotational symmetry that confers the "motion adapted" property to the MSHOT-Pi4 pulse sequence. This leads to the design of separated local field pulse sequence "Motion-adapted SAMPI4" and is generalized to an approach for the design of decoupling sequences whose performance is superior in the presence of molecular motions. It works by cancelling the spin interaction by explicitly averaging the reduced Wigner matrix to zero, rather than utilizing the 2π nutation to average spin interactions. This approach is applicable to both stationary and magic angle spinning solid-state NMR experiments.

  6. A special adapted retractor for the mini-sternotomy approach.

    PubMed

    Massetti, M; Babatasi, G; Bhoyroo, S; Le Page, O; Khayat, A

    1999-07-01

    Minimally invasive cardiac operations are now possible through different approaches. To provide the best exposure and sufficient space to manipulate the heart, a specially adapted thoracic retractor has been developed for the ministernotomy approach. It is universally adjustable and provides excellent and consistent exposure, especially below the incision edges. The retractor has the further advantage of a very low profile on the surgeon's side and at the cephalic and caudal extremes of the operative field, which permits the greatest possible access through a limited incision. We have successfully used this retractor in more than 180 patients. A less invasive median sternotomy through a 6-9-cm incision has been our original approach.

  7. An information theoretic approach of designing sparse kernel adaptive filters.

    PubMed

    Liu, Weifeng; Park, Il; Principe, José C

    2009-12-01

    This paper discusses an information theoretic approach of designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains which is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme, which can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short term chaotic time-series prediction, and long term time-series forecasting examples are presented. PMID:19923047

  8. An information theoretic approach of designing sparse kernel adaptive filters.

    PubMed

    Liu, Weifeng; Park, Il; Principe, José C

    2009-12-01

    This paper discusses an information theoretic approach of designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains which is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme, which can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short term chaotic time-series prediction, and long term time-series forecasting examples are presented.

  9. Application of adaptive cluster sampling to low-density populations of freshwater mussels

    USGS Publications Warehouse

    Smith, D.R.; Villella, R.F.; Lemarie, D.P.

    2003-01-01

    Freshwater mussels appear to be promising candidates for adaptive cluster sampling because they are benthic macroinvertebrates that cluster spatially and are frequently found at low densities. We applied adaptive cluster sampling to estimate density of freshwater mussels at 24 sites along the Cacapon River, WV, where a preliminary timed search indicated that mussels were present at low density. Adaptive cluster sampling increased yield of individual mussels and detection of uncommon species; however, it did not improve precision of density estimates. Because finding uncommon species, collecting individuals of those species, and estimating their densities are important conservation activities, additional research is warranted on application of adaptive cluster sampling to freshwater mussels. However, at this time we do not recommend routine application of adaptive cluster sampling to freshwater mussel populations. The ultimate, and currently unanswered, question is how to tell when adaptive cluster sampling should be used, i.e., when is a population sufficiently rare and clustered for adaptive cluster sampling to be efficient and practical? A cost-effective procedure needs to be developed to identify biological populations for which adaptive cluster sampling is appropriate.
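
    For readers unfamiliar with how density is estimated under adaptive cluster sampling, the sketch below simulates a clustered population along a transect and applies the modified Hansen-Hurwitz estimator (the mean, over the initial random sample, of the network means). It is a generic textbook construction, not the Cacapon River analysis itself; the cluster sizes and sample sizes are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    N = 400                                           # quadrats along a transect
    counts = np.zeros(N, dtype=int)
    for c in rng.choice(N, size=8, replace=False):    # rare, clustered population
        lo, hi = max(0, c - 2), min(N, c + 3)
        counts[lo:hi] += rng.poisson(6, size=hi - lo)

    # Networks: maximal runs of adjacent quadrats meeting the condition count > 0;
    # quadrats with count 0 form singleton networks.
    labels = np.empty(N, dtype=int)
    lab = i = 0
    while i < N:
        j = i
        if counts[i] > 0:
            while j < N and counts[j] > 0:
                labels[j] = lab
                j += 1
        else:
            labels[i] = lab
            j = i + 1
        lab += 1
        i = j

    net_mean = {l: counts[labels == l].mean() for l in np.unique(labels)}

    def acs_estimate(n1=40):
        """Modified Hansen-Hurwitz estimator: average the network mean of every
        quadrat in the initial simple random sample."""
        init = rng.choice(N, size=n1, replace=False)
        return np.mean([net_mean[labels[q]] for q in init])

    print("true density          :", counts.mean())
    print("mean of ACS estimates :", np.mean([acs_estimate() for _ in range(2000)]))
    ```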

  10. Novel Approaches to Adaptive Angular Approximations in Computational Transport

    SciTech Connect

    Marvin L. Adams; Igor Carron; Paul Nelson

    2006-06-04

    The particle-transport equation is notoriously difficult to discretize accurately, largely because the solution can be discontinuous in every variable. At any given spatial position and energy E, for example, the transport solution can be discontinuous at an arbitrary number of arbitrary locations in the direction domain. Even if the solution is continuous, it is often devoid of smoothness. This makes the direction variable extremely difficult to discretize accurately. We have attacked this problem with adaptive discretizations in the angle variables, using two distinctly different approaches. The first approach used wavelet function expansions directly and exploited their ability to capture sharp local variations. The second used discrete ordinates with a spatially varying quadrature set that adapts to the local solution. The first approach is very different from that in today’s transport codes, while the second could conceivably be implemented in such codes. Both approaches succeed in reducing angular discretization error to any desired level. The work described and results presented in this report add significantly to the understanding of angular discretization in transport problems and demonstrate that it is possible to solve this important long-standing problem in deterministic transport. Our results show that our adaptive discrete-ordinates (ADO) approach successfully: 1) Reduces angular discretization error to user-selected “tolerance” levels in a variety of difficult test problems; 2) Achieves a given error with significantly fewer unknowns than non-adaptive discrete-ordinates methods; 3) Can be implemented within standard discrete-ordinates solution techniques, and thus could generate a significant impact on the field in a relatively short time. Our results show that our adaptive wavelet approach: 1) Successfully reduces the angular discretization error to arbitrarily small levels in a variety of difficult test problems, even when using the

  11. Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids

    SciTech Connect

    Stoyanov, Miroslav K

    2013-09-01

    We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grids points and functions that are induced from a one dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.
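
    To make locally adaptive hierarchical refinement concrete, here is a one-dimensional sketch using hat functions and hierarchical surpluses: a node's children are added only where the surplus exceeds a tolerance, so points accumulate near the sharp feature. The multidimensional, direction-selective refinement studied in the report is not reproduced; the test function and tolerance are illustrative.

    ```python
    import numpy as np

    def hat(x, center, level):
        """One-dimensional hierarchical hat function with support width 2**(1 - level)."""
        h = 2.0 ** (-level)
        return np.maximum(0.0, 1.0 - np.abs(x - center) / h)

    def f(x):
        return np.exp(-100.0 * (x - 0.3) ** 2)          # sharp local feature

    def interpolate(x, nodes):
        return sum(s * hat(x, c, l) for (c, l, s) in nodes)

    tol = 1e-3
    nodes = [(0.5, 1, f(0.5))]                 # (center, level, hierarchical surplus)
    active = [(0.5, 1)]
    while active:
        c, l = active.pop()
        for child in (c - 2.0 ** -(l + 1), c + 2.0 ** -(l + 1)):
            surplus = f(child) - interpolate(child, nodes)
            if abs(surplus) > tol and l < 12:
                nodes.append((child, l + 1, surplus))    # refine only where needed
                active.append((child, l + 1))

    xs = np.linspace(0.0, 1.0, 1001)
    err = max(abs(f(x) - interpolate(x, nodes)) for x in xs)
    print(len(nodes), "nodes, max interpolation error", err)
    ```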

  12. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    SciTech Connect

    Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
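
The core proposal mechanism can be illustrated with a stripped-down differential-evolution Metropolis step in a randomized subspace. This is only a sketch in the spirit of DREAM: it omits the adaptive crossover tuning, outlier handling and convergence diagnostics of the full algorithm, and the bimodal target is a toy stand-in for a real posterior.

```python
import numpy as np

def log_post(x):
    """Toy bimodal target (a stand-in for a real, multimodal posterior)."""
    return np.logaddexp(-0.5 * np.sum((x - 3.0) ** 2), -0.5 * np.sum((x + 3.0) ** 2))

def de_mc_step(chains, log_p, rng, cr=0.9):
    """One generation of a differential-evolution Metropolis update.

    Each chain jumps along the difference of two randomly chosen other
    chains, restricted to a random subspace drawn with crossover
    probability `cr`, and the move is accepted with a Metropolis rule.
    """
    n, d = chains.shape
    gamma = 2.38 / np.sqrt(2 * d)
    new = chains.copy()
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        mask = rng.random(d) < cr                      # randomized subspace
        if not mask.any():
            mask[rng.integers(d)] = True
        prop = chains[i].copy()
        jump = gamma * (chains[r1] - chains[r2]) + 1e-6 * rng.standard_normal(d)
        prop[mask] += jump[mask]
        if np.log(rng.random()) < log_p(prop) - log_p(chains[i]):
            new[i] = prop                              # Metropolis acceptance
    return new

rng = np.random.default_rng(1)
chains = rng.standard_normal((8, 2)) * 5.0
for _ in range(2000):
    chains = de_mc_step(chains, log_post, rng)
print(chains.mean(axis=0))
```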

  13. Free-space fluorescence tomography with adaptive sampling based on anatomical information from microCT

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofeng; Badea, Cristian T.; Hood, Greg; Wetzel, Arthur W.; Stiles, Joel R.; Johnson, G. Allan

    2010-02-01

    Image reconstruction is one of the main challenges for fluorescence tomography. For in vivo experiments on small animals, in particular, the inhomogeneous optical properties and irregular surface of the animal make free-space image reconstruction challenging because of the difficulties in accurately modeling the forward problem and the finite dynamic range of the photodetector. These two factors are fundamentally limited by the currently available forward models and photonic technologies. Nonetheless, both limitations can be significantly eased using a signal processing approach. We have recently constructed a free-space panoramic fluorescence diffuse optical tomography system to take advantage of co-registered microCT data acquired from the same animal. In this article, we present a data processing strategy that adaptively selects the optical sampling points in the raw 2-D fluorescent CCD images. Specifically, the general sampling area and sampling density are initially specified to create a set of potential sampling points sufficient to cover the region of interest. Based on 3-D anatomical information from the microCT and the fluorescent CCD images, data points are excluded from the set when they are located in an area where either the forward model is known to be problematic (e.g., large wrinkles on the skin) or where the signal is unreliable (e.g., saturated or low signal-to-noise ratio). Parallel Monte Carlo software was implemented to compute the sensitivity function for image reconstruction. Animal experiments were conducted on a mouse cadaver with an artificial fluorescent inclusion. Compared to our previous results using a finite element method, the newly developed parallel Monte Carlo software and the adaptive sampling strategy produced favorable reconstruction results.
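
A small sketch of the data-exclusion step described above: candidate sampling points are dropped when they fall on saturated pixels or on pixels whose signal-to-noise ratio is below a floor. The thresholds and noise model here are hypothetical; the actual strategy also excludes regions where the forward model is known to be problematic (e.g., skin wrinkles), which would require the co-registered anatomical data and is not shown.

```python
import numpy as np

def filter_sampling_points(ccd, candidates, sat_level=0.95, snr_floor=5.0, noise_sigma=0.01):
    """Keep only candidate CCD points with reliable signal.

    ccd: 2-D fluorescence image normalized to [0, 1] (assumed).
    candidates: iterable of (row, col) detector positions.
    """
    kept = []
    for r, c in candidates:
        value = ccd[r, c]
        if value >= sat_level:                 # saturated pixel -> unreliable
            continue
        if value / noise_sigma < snr_floor:    # low signal-to-noise ratio
            continue
        kept.append((r, c))
    return kept
```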

  14. An Approach to V&V of Embedded Adaptive Systems

    NASA Technical Reports Server (NTRS)

    Liu, Yan; Yerramalla, Sampath; Fuller, Edgar; Cukic, Bojan; Gururajan, Srikaruth

    2004-01-01

    Rigorous Verification and Validation (V&V) techniques are essential for high assurance systems. Lately, the performance of some of these systems has been enhanced by embedded adaptive components in order to cope with environmental changes. Although the ability to adapt is appealing, it actually poses a problem in terms of V&V. Since uncertainties induced by environmental changes have a significant impact on system behavior, the applicability of conventional V&V techniques is limited. In safety-critical applications such as flight control systems, the mechanisms of change must be observed, diagnosed, accommodated and well understood prior to deployment. In this paper, we propose a non-conventional V&V approach suitable for online adaptive systems. We apply our approach to an intelligent flight control system that employs a particular type of Neural Networks (NN) as the adaptive learning paradigm. The presented methodology consists of a novelty detection technique and online stability monitoring tools. The novelty detection technique is based on Support Vector Data Description that detects novel (abnormal) data patterns. The online stability monitoring tools, based on Lyapunov's stability theory, detect unstable learning behavior in neural networks. Case studies based on a high fidelity simulator of NASA's Intelligent Flight Control System demonstrate a successful application of the presented V&V methodology.

  15. Adaptation in flower form: a comparative evodevo approach.

    PubMed

    Specht, Chelsea D; Howarth, Dianella G

    2015-04-01

    Evolutionary developmental biology (evodevo) attempts to explain how the process of organismal development evolves, utilizing a comparative approach to investigate changes in developmental pathways and processes that occur during the evolution of a given lineage. Evolutionary genetics uses a population approach to understand how organismal changes in form or function are linked to underlying genetics, focusing on changes in gene and genotype frequencies within populations and the fixation of genotypic variation into traits that define species or evoke speciation events. Microevolutionary processes, including mutation, genetic drift, natural selection and gene flow, can provide the foundation for macroevolutionary patterns observed as morphological evolution and adaptation. The temporal element linking microevolutionary processes to macroevolutionary patterns is development: an organism's genotype is converted to phenotype by ontogenetic processes. Because selection acts upon the phenotype, the connection between evolutionary genetics and developmental evolution becomes essential to understanding adaptive evolution in organismal form and function. Here, we discuss how developmental genetic studies focused on key developmental processes could be linked within a comparative framework to study the developmental genetics of adaptive evolution, providing examples from research on two key processes of plant evodevo - floral symmetry and organ fusion - and their role in the adaptation of floral form. PMID:25470511

  16. The adaptive, cut-cell Cartesian approach (warts and all)

    NASA Astrophysics Data System (ADS)

    Powell, Kenneth G.

    1995-10-01

    Solution-adaptive methods based on cutting bodies out of Cartesian grids are gaining popularity now that the ways of circumventing the accuracy problems associated with small cut cells have been developed. Researchers are applying Cartesian-based schemes to a broad class of problems now, and, although there is still development work to be done, it is becoming clearer which problems are best suited to the approach (and which are not). The purpose of this paper is to give a candid assessment, based on applying Cartesian schemes to a variety of problems, of the strengths and weaknesses of the approach as it is currently implemented.

  17. An Approach for Prioritizing Agile Practices for Adaptation

    NASA Astrophysics Data System (ADS)

    Mikulenas, Gytenis; Kapocius, Kestutis

    Agile software development approaches offer a strong alternative to the traditional plan-driven methodologies, which have not been able to guarantee the success of software projects. However, the move toward Agile is often hampered by the wealth of alternative practices that are accompanied by numerous success or failure stories. Clearly, formal methods for choosing the most suitable practices are lacking. In this chapter, we present an overview of this problem and propose an approach for prioritization of available practices in accordance with the particular circumstances. The proposal combines ideas from the Analytic Hierarchy Process (AHP) decision-making technique, cost-value analysis, and the Rule-Description-Practice (RDP) technique. The assumption that such an approach could facilitate the Agile adaptation process was supported by a case study illustrating the process of choosing the most suitable Agile practices within a real-life project.

  18. Variable sampling approach to mitigate instability in networked control systems with delays.

    PubMed

    López-Echeverría, Daniel; Magaña, Mario E

    2012-01-01

    This paper analyzes a new alternative approach to compensate for the effects of time delays on a dynamic networked control system (NCS). The approach is based on the use of time-delay-predicted values as the sampling times of the NCS. We use a one-step-ahead prediction algorithm based on an adaptive time delay neural network. The application of pole placement and linear quadratic regulator methods to compute the feedback gains taking into account the estimated time delays is investigated.
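
For the feedback-gain part of such a scheme, the discrete-time LQR gain for a given discretization can be computed as below. This is only a sketch of the gain computation: in the paper's approach the discretized matrices would themselves depend on the predicted network delay used as the sampling period, a coupling (and the neural-network delay predictor) this example does not model; the double-integrator plant is a placeholder.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr_gain(A, B, Q, R):
    """Discrete-time LQR gain K for the control law u = -K x."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# toy double integrator discretized with a (delay-dependent) sampling period h
h = 0.05
A = np.array([[1.0, h], [0.0, 1.0]])
B = np.array([[0.5 * h**2], [h]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(K)
```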

  19. Efficient estimation of abundance for patchily distributed populations via two-phase, adaptive sampling.

    USGS Publications Warehouse

    Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.

    2008-01-01

    Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of

  20. Using archaeogenomic and computational approaches to unravel the history of local adaptation in crops

    PubMed Central

    Allaby, Robin G.; Gutaker, Rafal; Clarke, Andrew C.; Pearson, Neil; Ware, Roselyn; Palmer, Sarah A.; Kitchen, James L.; Smith, Oliver

    2015-01-01

    Our understanding of the evolution of domestication has changed radically in the past 10 years, from a relatively simplistic rapid origin scenario to a protracted complex process in which plants adapted to the human environment. The adaptation of plants continued as the human environment changed with the expansion of agriculture from its centres of origin. Using archaeogenomics and computational models, we can observe genome evolution directly and understand how plants adapted to the human environment and the regional conditions to which agriculture expanded. We have applied various archaeogenomics approaches as exemplars to study local adaptation of barley to drought resistance at Qasr Ibrim, Egypt. We show the utility of DNA capture, ancient RNA, methylation patterns and DNA from charred remains of archaeobotanical samples from low latitudes where preservation conditions restrict ancient DNA research to within a Holocene timescale. The genomic level of analyses that is now possible, and the complexity of the evolutionary process of local adaptation means that plant studies are set to move to the genome level, and account for the interaction of genes under selection in systems-level approaches. This way we can understand how plants adapted during the expansion of agriculture across many latitudes with rapidity. PMID:25487329

  1. A ``Limited First Sample'' Approach to Mars Sample Return — Lessons from the Apollo Program

    NASA Astrophysics Data System (ADS)

    Eppler, D. B.; Draper, D.; Gruener, J.

    2012-06-01

    Complex, multi-opportunity Mars sample return approaches have failed to be selected as a new start twice since 1985. We advocate adopting a simpler strategy of "grab-and-go" for the initial sample return, similar to the approach taken on Apollo 11.

  2. Sample Size Reassessment and Hypothesis Testing in Adaptive Survival Trials

    PubMed Central

    Magirr, Dominic; Jaki, Thomas; Koenig, Franz; Posch, Martin

    2016-01-01

    Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies. PMID:26863139

  3. Sample Size Reassessment and Hypothesis Testing in Adaptive Survival Trials.

    PubMed

    Magirr, Dominic; Jaki, Thomas; Koenig, Franz; Posch, Martin

    2016-01-01

    Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies. PMID:26863139

  4. Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Thompson, David R.; Hsiang, Kian

    2013-01-01

    This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function from each potential sensing action. These candidate sensing actions were then cross-checked against operations costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
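
The combination of expected information gain with operational cost can be illustrated by a simple greedy plan builder. In the actual system the gains would come from the probabilistic model via approximate inversion and the costs from the operations model; the candidate names, gains, costs and budget below are invented, and greedy ratio selection is only one possible planning heuristic.

```python
def plan_observations(candidates, budget):
    """Greedily pick sensing actions by information gain per unit cost.

    candidates: list of (name, expected_info_gain, operational_cost).
    Returns the selected names and the total cost spent.
    """
    plan, spent = [], 0.0
    for name, gain, cost in sorted(candidates, key=lambda t: t[1] / t[2], reverse=True):
        if spent + cost <= budget:
            plan.append(name)
            spent += cost
    return plan, spent

candidates = [("swath_A", 2.1, 1.0), ("swath_B", 0.4, 0.5), ("swath_C", 1.5, 2.0)]
print(plan_observations(candidates, budget=2.5))
```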

  5. Variable Neural Adaptive Robust Control: A Switched System Approach

    SciTech Connect

    Lian, Jianming; Hu, Jianghai; Zak, Stanislaw H.

    2015-05-01

    Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multi-input multi-output uncertain systems. The controllers incorporate a variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. The variable-structure RBF network solves the problem of structure determination associated with fixed-structure RBF networks. It can determine the network structure on-line dynamically by adding or removing radial basis functions according to the tracking performance. The structure variation is taken into account in the stability analysis of the closed-loop system using a switched system approach with the aid of the piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations.

  6. Motion-adapted pulse sequences for oriented sample (OS) solid-state NMR of biopolymers

    PubMed Central

    Lu, George J.; Opella, Stanley J.

    2013-01-01

    One of the main applications of solid-state NMR is to study the structure and dynamics of biopolymers, such as membrane proteins, under physiological conditions where the polypeptides undergo global motions as they do in biological membranes. The effects of NMR radiofrequency irradiations on nuclear spins are strongly influenced by these motions. For example, we previously showed that the MSHOT-Pi4 pulse sequence yields spectra with resonance line widths about half of those observed using the conventional pulse sequence when applied to membrane proteins undergoing rapid uniaxial rotational diffusion in phospholipid bilayers. In contrast, the line widths were not changed in microcrystalline samples where the molecules did not undergo global motions. Here, we demonstrate experimentally and describe analytically how some Hamiltonian terms are susceptible to sample motions, and it is their removal through the critical π/2 Z-rotational symmetry that confers the “motion adapted” property to the MSHOT-Pi4 pulse sequence. This leads to the design of separated local field pulse sequence “Motion-adapted SAMPI4” and is generalized to an approach for the design of decoupling sequences whose performance is superior in the presence of molecular motions. It works by cancelling the spin interaction by explicitly averaging the reduced Wigner matrix to zero, rather than utilizing the 2π nutation to average spin interactions. This approach is applicable to both stationary and magic angle spinning solid-state NMR experiments. PMID:24006989

  7. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.

  8. Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology

    NASA Technical Reports Server (NTRS)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal

  9. Parallel, grid-adaptive approaches for relativistic hydro and magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Keppens, R.; Meliani, Z.; van Marle, A. J.; Delmont, P.; Vlasis, A.; van der Holst, B.

    2012-02-01

    Relativistic hydro and magnetohydrodynamics provide continuum fluid descriptions for gas and plasma dynamics throughout the visible universe. We present an overview of state-of-the-art modeling in special relativistic regimes, targeting strong shock-dominated flows with speeds approaching the speed of light. Significant progress in its numerical modeling emerged in the last two decades, and we highlight specifically the need for grid-adaptive, shock-capturing treatments found in several contemporary codes in active use and development. Our discussion highlights one such code, MPI-AMRVAC (Message-Passing Interface-Adaptive Mesh Refinement Versatile Advection Code), but includes generic strategies for allowing massively parallel, block-tree adaptive simulations in any dimensionality. We provide implementation details reflecting the underlying data structures as used in MPI-AMRVAC. Parallelization strategies and scaling efficiencies are discussed for representative applications, along with guidelines for data formats suitable for parallel I/O. Refinement strategies available in MPI-AMRVAC are presented, which cover error estimators in use in many modern AMR frameworks. A test suite for relativistic hydro and magnetohydrodynamics is provided, chosen to cover all aspects encountered in high-resolution, shock-governed astrophysical applications. This test suite provides ample examples highlighting the advantages of AMR in relativistic flow problems.

  10. Block-adaptive quantum mechanics: an adaptive divide-and-conquer approach to interactive quantum chemistry.

    PubMed

    Bosson, Maël; Grudinin, Sergei; Redon, Stephane

    2013-03-01

    We present a novel Block-Adaptive Quantum Mechanics (BAQM) approach to interactive quantum chemistry. Although quantum chemistry models are known to be computationally demanding, we achieve interactive rates by focusing computational resources on the most active parts of the system. BAQM is based on a divide-and-conquer technique and constrains some nucleus positions and some electronic degrees of freedom on the fly to simplify the simulation. As a result, each time step may be performed significantly faster, which in turn may accelerate attraction to the neighboring local minima. By applying our approach to the non-self-consistent Atom Superposition and Electron Delocalization Molecular Orbital theory, we demonstrate interactive rates and efficient virtual prototyping for systems containing more than a thousand atoms on a standard desktop computer.

  11. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
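
For reference, the item response function of the three-parameter logistic (3PL) model used to pre-calibrate the bank is shown below; the 1.7 scaling constant is a common convention rather than a requirement, and the parameter values in the example are arbitrary.

```python
import numpy as np

def p_correct_3pl(theta, a, b, c, D=1.7):
    """Probability of a correct response under the 3PL IRT model.

    theta: ability, a: discrimination, b: difficulty, c: pseudo-guessing.
    """
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

print(p_correct_3pl(theta=0.0, a=1.2, b=-0.5, c=0.2))
```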

  12. A robust adaptive sampling method for faster acquisition of MR images.

    PubMed

    Vellagoundar, Jaganathan; Machireddy, Ramasubba Reddy

    2015-06-01

    A robust adaptive k-space sampling method is proposed for faster acquisition and reconstruction of MR images. In this method, undersampling patterns are generated based on the magnitude profile of fully acquired 2-D k-space data. Images are reconstructed using a compressive sampling reconstruction algorithm. Simulation experiments are done to assess the performance of the proposed method under various signal-to-noise ratio (SNR) levels. The performance of the method is better than that of the non-adaptive variable density sampling method when k-space SNR is greater than 10 dB. The method is implemented on fully acquired multi-slice raw k-space data and quality assurance phantom data. Data reduction of up to 60% is achieved in the multi-slice imaging data and 75% is achieved in the phantom imaging data. The results show that reconstruction accuracy is improved over non-adaptive or conventional variable density sampling methods. The proposed sampling method is signal dependent and the estimation of sampling locations is robust to noise. As a result, it eliminates the need for a mathematical model and parameter tuning to compute k-space sampling patterns, as required in non-adaptive sampling methods.
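
A rough sketch of magnitude-driven pattern generation in the spirit described above: sampling locations are drawn with probability proportional to the magnitude of a fully acquired reference k-space, so densely populated (typically low-frequency) regions are sampled preferentially. The phantom, seed and retained fraction are placeholders, and the compressive reconstruction step is not shown.

```python
import numpy as np

def adaptive_kspace_mask(kspace, keep_fraction=0.4, seed=0):
    """Undersampling mask whose density follows the k-space magnitude profile."""
    prob = np.abs(kspace).ravel()
    prob /= prob.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(kspace.size, size=int(keep_fraction * kspace.size),
                     replace=False, p=prob)
    mask = np.zeros(kspace.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(kspace.shape)

image = np.zeros((64, 64)); image[24:40, 24:40] = 1.0       # toy phantom
kspace = np.fft.fftshift(np.fft.fft2(image))
mask = adaptive_kspace_mask(kspace, keep_fraction=0.4)
print("sampled fraction:", mask.mean())
```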

  13. Iterative Monte Carlo with bead-adapted sampling for complex-time correlation functions.

    PubMed

    Jadhao, Vikram; Makri, Nancy

    2010-03-14

    In a recent communication [V. Jadhao and N. Makri, J. Chem. Phys. 129, 161102 (2008)], we introduced an iterative Monte Carlo (IMC) path integral methodology for calculating complex-time correlation functions. This method constitutes a stepwise evaluation of the path integral on a grid selected by a Monte Carlo procedure, circumventing the exponential growth of statistical error with increasing propagation time, while realizing the advantageous scaling of importance sampling in the grid selection and integral evaluation. In the present paper, we present an improved formulation of IMC, which is based on a bead-adapted sampling procedure; thus leading to grid point distributions that closely resemble the absolute value of the integrand at each iteration. We show that the statistical error of IMC does not grow upon repeated iteration, in sharp contrast to the performance of the conventional path integral approach which leads to exponential increase in statistical uncertainty. Numerical results on systems with up to 13 degrees of freedom and propagation up to 30 times the "thermal" time ℏβ/2 illustrate these features.

  14. Iterative Monte Carlo with bead-adapted sampling for complex-time correlation functions

    NASA Astrophysics Data System (ADS)

    Jadhao, Vikram; Makri, Nancy

    2010-03-01

    In a recent communication [V. Jadhao and N. Makri, J. Chem. Phys. 129, 161102 (2008)], we introduced an iterative Monte Carlo (IMC) path integral methodology for calculating complex-time correlation functions. This method constitutes a stepwise evaluation of the path integral on a grid selected by a Monte Carlo procedure, circumventing the exponential growth of statistical error with increasing propagation time, while realizing the advantageous scaling of importance sampling in the grid selection and integral evaluation. In the present paper, we present an improved formulation of IMC, which is based on a bead-adapted sampling procedure; thus leading to grid point distributions that closely resemble the absolute value of the integrand at each iteration. We show that the statistical error of IMC does not grow upon repeated iteration, in sharp contrast to the performance of the conventional path integral approach which leads to exponential increase in statistical uncertainty. Numerical results on systems with up to 13 degrees of freedom and propagation up to 30 times the "thermal" time ℏβ /2 illustrate these features.

  15. The Formative Method for Adapting Psychotherapy (FMAP): A community-based developmental approach to culturally adapting therapy

    PubMed Central

    Hwang, Wei-Chin

    2010-01-01

    How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed. The majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications. These phases include: (a) generating knowledge and collaborating with stakeholders, (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising the culturally adapted intervention, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but the method can also be readily applied to modify therapy for other ethnic groups. PMID:20625458

  16. Estimating the abundance of clustered animal population by using adaptive cluster sampling and negative binomial distribution

    NASA Astrophysics Data System (ADS)

    Bo, Yizhou; Shifa, Naima

    2013-09-01

    An estimator for finding the abundance of a rare, clustered and mobile population has been introduced. This model is based on adaptive cluster sampling (ACS) to identify the location of the population and a negative binomial distribution to estimate the total at each site. To identify the location of the population, we consider both sampling with replacement (WR) and sampling without replacement (WOR). Some mathematical properties of the model are also developed.

  17. Extreme Sea Levels and Approaches to Adaptation in Germany

    NASA Astrophysics Data System (ADS)

    Weisse, R.; Kappenberg, J.; Sothmann, J.

    2014-12-01

    Germany's coastal areas are exposed to extra-tropical storms and related marine hazards such as wind waves and storm surges. About 50% of the coast is below 5 m NN and considerable parts are protected by an almost continuous dike line. Rising mean and extreme sea levels pose a substantial threat. In this presentation we briefly review the present situation. Storm-related sea level changes are characterized by pronounced inter-annual and decadal variability but do not show a long-term trend over the last century. Mean sea level has increased over roughly the past 100-150 years at a rate comparable to global mean sea level rise. As a consequence, extreme sea levels have increased in the area as increasing mean sea level shifts the baseline for storm surges and wind waves towards higher values. Different approaches for adaptation are investigated in a number of ongoing research projects. Some case studies for potential adaptation and challenges are presented. Examples range from detailed analyses of retreat and accommodation strategies to multi-purpose strategies such as concepts for sustainable development of tidal estuaries.

  18. Salt stress adaptation of Bacillus subtilis: a physiological proteomics approach.

    PubMed

    Höper, Dirk; Bernhardt, Jörg; Hecker, Michael

    2006-03-01

    The adaptation to osmotic stress is crucial for growth and survival of Bacillus subtilis in its natural ecosystem. Dual channel imaging and warping of 2-D protein gels were used to visualize global changes in the protein synthesis pattern of cells in response to osmotic stress (6% NaCl). Many vegetative enzymes were repressed in response to salt stress and derepressed after resumption of growth. The enzymes catalyzing the metabolic steps from glucose to 2-oxoglutarate, however, were almost constantly synthesized during salt stress despite the growth arrest. This indicates an enhanced need for the proline precursor glutamate. The synthesis of enzymes involved in sulfate assimilation and in the formation of Fe-S clusters was also induced, suggesting an enhanced need for the formation or repair of Fe-S clusters in response to salt stress. One of the most obvious changes in the protein synthesis profile can be followed by the very strong induction of the SigB regulon. Furthermore, members of the SigW regulon and of the PerR regulon, indicating oxidative stress after salt challenge, were also induced. This proteomic approach provides an overview of cell adaptation to an osmotic upshift in B. subtilis visualizing the most dramatic changes in the protein synthesis pattern.

  19. ANALYSIS OF RADIAL VELOCITY DATA BY A NOVEL ADAPTIVE APPROACH

    SciTech Connect

    Babu, P.; Stoica, P.; Li, J.; Chen, Z.; Ge, J.

    2010-02-15

    In this paper, we introduce an estimation technique for analyzing radial velocity data commonly encountered in extrasolar planet detection. We discuss the Keplerian model for radial velocity data measurements and introduce a technique named the iterative adaptive approach (IAA) to estimate the three-dimensional spectrum (power versus eccentricity, orbital period and periastron passage time) of the radial velocity data. We then discuss different ways to regularize the IAA algorithm in the presence of noise and measurement errors. We also discuss briefly the computational aspects of the method and introduce a computationally efficient version of IAA. Finally, we establish the significance of the spectral peaks by using a relaxation maximum likelihood algorithm and a generalized likelihood ratio test. Numerical experiments are carried out on both simulated and real life data sets to evaluate the performance of our method. The real life data sets discussed are radial velocity measurements of the stars HD 63454, HD 208487, and GJ 876.

  20. Adaptive Neuro-fuzzy approach in friction identification

    NASA Astrophysics Data System (ADS)

    Zaiyad Muda @ Ismail, Muhammad

    2016-05-01

    Friction is known to affect the performance of motion control systems, especially in terms of accuracy. Therefore, a number of techniques or methods have been explored and implemented to alleviate the effects of friction. In this project, an Artificial Intelligence (AI) approach is used to model the friction, which is then used to compensate for it. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is chosen among several other AI methods because of its reliability and its capability to handle complex computations. ANFIS is a hybrid AI paradigm that combines the best features of neural networks and fuzzy logic. It is effective for nonlinear system identification and compensation and is therefore used in this project.

  1. A sampling approach for protein backbone fragment conformations.

    PubMed

    Yu, J Y; Zhang, W

    2013-01-01

    In protein structure prediction, backbone fragment bias information can narrow down the conformational space of the whole polypeptide chain significantly. Unlike existing methods that use fragments as building blocks, the paper presents a probabilistic sampling approach for protein backbone torsion angles by modelling the angular correlation of (phi, psi) with a directional statistics distribution. Given a protein sequence and secondary structure information, this method samples backbone fragment conformations by using a backtrack sampling algorithm for the hidden Markov model with multiple inputs and a single output. The proposed approach is applied to a fragment library, and some well-known structural motifs are sampled very well on the optimal path. Computational results show that the method can help to obtain native-like backbone fragment conformations. PMID:23777175

  2. Adapting to Uncertainty: Comparing Methodological Approaches to Climate Adaptation and Mitigation Policy

    NASA Astrophysics Data System (ADS)

    Huda, J.; Kauneckis, D. L.

    2013-12-01

    Climate change adaptation represents a number of unique policy-making challenges. Foremost among these is dealing with the range of future climate impacts to a wide scope of inter-related natural systems, their interaction with social and economic systems, and uncertainty resulting from the variety of downscaled climate model scenarios and climate science projections. These cascades of uncertainty have led to a number of new approaches as well as a reexamination of traditional methods for evaluating risk and uncertainty in policy-making. Policy makers are required to make decisions and formulate policy irrespective of the level of uncertainty involved and while a debate continues regarding the level of scientific certainty required in order to make a decision, incremental change in the climate policy continues at multiple governance levels. This project conducts a comparative analysis of the range of methodological approaches that are evolving to address uncertainty in climate change policy. It defines 'methodologies' to include a variety of quantitative and qualitative approaches involving both top-down and bottom-up policy processes that attempt to enable policymakers to synthesize climate information into the policy process. The analysis examines methodological approaches to decision-making in climate policy based on criteria such as sources of policy choice information, sectors to which the methodology has been applied, sources from which climate projections were derived, quantitative and qualitative methods used to deal with uncertainty, and the benefits and limitations of each. A typology is developed to better categorize the variety of approaches and methods, examine the scope of policy activities they are best suited for, and highlight areas for future research and development.

  3. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, compressed sensing (CS) technique, total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion computation of infrared and visible images. Compared with wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale, multi-direction, and translation invariance. As is known, a fuzzy set is characterized by its membership function (MF), while the commonly known Gaussian fuzzy membership degree can be introduced to establish an adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information in a certain sampling rate, and the sparse signal can be recovered by solving a convex problem employing gradient descent based iterative algorithm(s). In the proposed fusion process, the pre-enhanced infrared image and the visible image are decomposed into low-frequency subbands and high-frequency subbands, respectively, via the NSCT method as a first step. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by employing the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation methods, such as the standard deviation, Shannon entropy, root-mean-square error, mutual information and edge-based similarity index.

  4. A new spectral variable selection pattern using competitive adaptive reweighted sampling combined with successive projections algorithm.

    PubMed

    Tang, Guo; Huang, Yue; Tian, Kuangda; Song, Xiangzhong; Yan, Hong; Hu, Jing; Xiong, Yanmei; Min, Shungeng

    2014-10-01

    The competitive adaptive reweighted sampling-successive projections algorithm (CARS-SPA) method was proposed as a novel variable selection approach for multivariate calibration. CARS was first used to select informative variables, and SPA was then used to refine them to a subset with minimal redundancy. The proposed method was applied to near-infrared (NIR) reflectance data of nicotine in tobacco lamina and NIR transmission data of the active ingredient in a pesticide formulation. As a result, fewer but more informative variables were selected by CARS-SPA than by direct CARS. For the pesticide formulation, a multiple linear regression (MLR) model using variables selected by CARS-SPA provided better prediction than the full-range partial least-squares (PLS) model, the successive projections algorithm (SPA) model, and the uninformative variable elimination-successive projections algorithm (UVE-SPA) model. The variable subsets selected by CARS-SPA included the spectral ranges with sufficient chemical information, whereas uninformative variables were rarely selected.
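
The refinement stage can be illustrated with a compact version of the successive projections algorithm: starting from one wavelength, each new variable is the column with the largest norm after projection onto the orthogonal complement of the previously selected column, which keeps redundancy low. This is a generic SPA sketch on random data, not the authors' implementation, and the CARS pre-selection stage is not shown.

```python
import numpy as np

def spa_select(X, n_vars, start=0):
    """Successive Projections Algorithm for low-collinearity variable selection.

    X: matrix of spectra (samples x wavelengths).
    At each step, every remaining column is projected onto the orthogonal
    complement of the last selected column; the column with the largest
    residual norm is picked next.
    """
    Xp = np.asarray(X, dtype=float).copy()
    selected = [start]
    for _ in range(n_vars - 1):
        v = Xp[:, selected[-1]]
        Xp = Xp - np.outer(v, v @ Xp) / (v @ v)     # project out v
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -np.inf                   # never reselect
        selected.append(int(np.argmax(norms)))
    return selected

X = np.random.default_rng(2).standard_normal((50, 200))    # toy NIR spectra
print(spa_select(X, n_vars=5))
```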

  5. Sample preparation and biomass determination of SRF model mixture using cryogenic milling and the adapted balance method

    SciTech Connect

    Schnöller, Johannes; Aschenbrenner, Philipp; Hahn, Manuel; Fellner, Johann; Rechberger, Helmut

    2014-11-15

    Highlights: • An alternative sample comminution procedure for SRF is tested. • Proof of principle is shown on an SRF model mixture. • The biogenic content of the SRF is analyzed with the adapted balance method. • The novel method combines combustion analysis and a data reconciliation algorithm. • Factors for the variance of the analysis results are statistically quantified. - Abstract: The biogenic fraction of a simple solid recovered fuel (SRF) mixture (80 wt% printer paper/20 wt% high density polyethylene) is analyzed with the in-house developed adapted balance method (aBM). This fairly new approach is a combination of combustion elemental analysis (CHNS) and a data reconciliation algorithm based on successive linearisation for evaluation of the analysis results. This method shows great potential as an alternative way to determine the biomass content in SRF. However, the employed analytical technique (CHNS elemental analysis) restricts the probed sample mass to low amounts in the range of a few hundred milligrams. This requires sample comminution to small grain sizes (<200 μm) to generate representative SRF specimens. This is not easily accomplished for certain material mixtures (e.g. SRF with rubber content) by conventional means of sample size reduction. This paper presents a proof-of-principle investigation of the sample preparation and analysis of an SRF model mixture with the use of cryogenic impact milling (final sample comminution) and the adapted balance method (determination of biomass content). The so-derived sample preparation methodology (cutting mills and cryogenic impact milling) shows better performance in accuracy and precision for the determination of the biomass content than one based solely on cutting mills. The results for the determination of the biogenic fraction are within 1–5% of the data obtained by the reference methods, the selective dissolution method (SDM) and the ¹⁴C method (¹⁴C-M).

  6. Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach

    ERIC Educational Resources Information Center

    Wang, Yuling

    2010-01-01

    Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.

  7. Career Adapt-Abilities Scale in a French-Speaking Swiss Sample: Psychometric Properties and Relationships to Personality and Work Engagement

    ERIC Educational Resources Information Center

    Rossier, Jerome; Zecca, Gregory; Stauffer, Sarah D.; Maggiori, Christian; Dauwalder, Jean-Pierre

    2012-01-01

    The aim of this study was to analyze the psychometric properties of the Career Adapt-Abilities Scale (CAAS) in a French-speaking Swiss sample and its relationship with personality dimensions and work engagement. The heterogeneous sample of 391 participants (mean age = 39.59 years, SD = 12.30) completed the CAAS-International and a short version…

  8. An integrated approach for multi-level sample size determination

    SciTech Connect

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-12-31

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes" involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications, are clearly described, and the process is put in a form that allows systematic generalization.
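
For the attribute (lowest-accuracy) level, the familiar zero-acceptance sampling relationship between sample size and detection probability gives a concrete anchor for how accuracy goals translate into sample sizes. This is the standard textbook approximation shown only as an illustration, not the integrated multi-level method developed in the paper; the numbers in the example are invented.

```python
import math

def attribute_sample_size(population, defectives, non_detection_prob=0.05):
    """Sample size n so that at least one of `defectives` falsified items
    among `population` items is found with probability 1 - non_detection_prob.

    Uses the common approximation n = N * (1 - beta**(1/M)).
    """
    N, M, beta = population, defectives, non_detection_prob
    return math.ceil(N * (1.0 - beta ** (1.0 / M)))

print(attribute_sample_size(population=500, defectives=20))   # about 70 items
```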

  9. A Functional Approach To Uncover the Low-Temperature Adaptation Strategies of the Archaeon Methanosarcina barkeri

    PubMed Central

    McCay, Paul; Fuszard, Matthew; Botting, Catherine H.; Abram, Florence; O'Flaherty, Vincent

    2013-01-01

    Low-temperature anaerobic digestion (LTAD) technology is underpinned by a diverse microbial community. The methanogenic archaea represent a key functional group in these consortia, undertaking CO2 reduction as well as acetate and methylated C1 metabolism with subsequent biogas (40 to 60% CH4 and 30 to 50% CO2) formation. However, the cold adaptation strategies, which allow methanogens to function efficiently in LTAD, remain unclear. Here, a pure-culture proteomic approach was employed to study the functional characteristics of Methanosarcina barkeri (optimum growth temperature, 37°C), which has been detected in LTAD bioreactors. Two experimental approaches were undertaken. The first approach aimed to characterize a low-temperature shock response (LTSR) of M. barkeri DSMZ 800T grown at 37°C with a temperature drop to 15°C, while the second experimental approach aimed to examine the low-temperature adaptation strategies (LTAS) of the same strain when it was grown at 15°C. The latter experiment employed cell viability and growth measurements (optical density at 600 nm [OD600]), which directly compared M. barkeri cells grown at 15°C with those grown at 37°C. During the LTSR experiment, a total of 127 proteins were detected in 37°C and 15°C samples, with 20 proteins differentially expressed with respect to temperature, while in the LTAS experiment 39% of proteins identified were differentially expressed between phases of growth. Functional categories included methanogenesis, cellular information processing, and chaperones. By applying a polyphasic approach (proteomics and growth studies), insights into the low-temperature adaptation capacity of this mesophilically characterized methanogen were obtained which suggest that the metabolically diverse Methanosarcinaceae could be functionally relevant for LTAD systems. PMID:23645201

  10. Region and edge-adaptive sampling and boundary completion for segmentation

    SciTech Connect

    Dillard, Scott E; Prasad, Lakshman; Grazzini, Jacopo A

    2010-01-01

    Edge detection produces a set of points that are likely to lie on discontinuities between objects within an image. We consider faces of the Gabriel graph of these points, a sub-graph of the Delaunay triangulation. Features are extracted by merging these faces using size, shape and color cues. We measure regional properties of faces using a novel shape-dependent sampling method that overcomes undesirable sampling bias of the Delaunay triangles. Instead, sampling is biased so as to smooth regional statistics within the detected object boundaries, and this smoothing adapts to local geometric features of the shape such as curvature, thickness and straightness.
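
The Gabriel sub-graph of the Delaunay triangulation mentioned above can be extracted directly: a Delaunay edge is kept when the disc having that edge as diameter contains no other edge point. Below is a small sketch of that extraction on random placeholder points; the subsequent face merging and shape-adaptive sampling are not shown.

```python
import numpy as np
from scipy.spatial import Delaunay

def gabriel_edges(points):
    """Edges of the Gabriel graph, computed as a sub-graph of the Delaunay triangulation."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for a in range(3):
            i, j = sorted((int(simplex[a]), int(simplex[(a + 1) % 3])))
            edges.add((i, j))
    gabriel = []
    for i, j in edges:
        centre = (points[i] + points[j]) / 2.0
        radius2 = np.sum((points[i] - points[j]) ** 2) / 4.0
        dist2 = np.sum((points - centre) ** 2, axis=1)
        dist2[[i, j]] = np.inf                      # ignore the endpoints themselves
        if np.all(dist2 > radius2):                 # empty diametral disc
            gabriel.append((i, j))
    return gabriel

edge_points = np.random.default_rng(3).random((30, 2))      # stand-in for detected edge pixels
print(len(gabriel_edges(edge_points)), "Gabriel edges")
```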

  11. The Vineland Adaptive Behavior Scale in a sample of normal French Children: a research note.

    PubMed

    Fombonne, E; Achard, S

    1993-09-01

    The Vineland Adaptive Behavior scale (survey form) was used in a sample of 151 normal children under age 18. Standardized mean scores of French children were comparable to those of the American normative sample. From the age of 6 onwards, French children scored consistently lower in the Daily Living Skills domain though the magnitude of this difference remained moderate. While the overall findings support the cross-cultural stability of the psychometric properties of this instrument, attention is drawn to potential problems in the use of the Vineland scales, with special reference to autistic samples.

  12. Novel Approaches for Fungal Transcriptomics from Host Samples

    PubMed Central

    Amorim-Vaz, Sara; Sanglard, Dominique

    2016-01-01

    Candida albicans adaptation to the host requires a profound reprogramming of the fungal transcriptome as compared to in vitro laboratory conditions. A detailed knowledge of the C. albicans transcriptome during the infection process is necessary in order to understand which of the fungal genes are important for host adaptation. Such genes could be thought of as potential targets for antifungal therapy. The acquisition of the C. albicans transcriptome is, however, technically challenging due to the low proportion of fungal RNA in host tissues. Two emerging technologies were used recently to circumvent this problem. One consists of the detection of low abundance fungal RNA using capture and reporter gene probes which is followed by emission and quantification of resulting fluorescent signals (nanoString). The other is based first on the capture of fungal RNA by short biotinylated oligonucleotide baits covering the C. albicans ORFome permitting fungal RNA purification. Next, the enriched fungal RNA is amplified and subjected to RNA sequencing (RNA-seq). Here we detail these two transcriptome approaches and discuss their advantages and limitations and future perspectives in microbial transcriptomics from host material. PMID:26834721

  13. Importance Sampling Approach for the Nonstationary Approximation Error Method

    NASA Astrophysics Data System (ADS)

    Huttunen, J. M. J.; Lehikoinen, A.; Hämäläinen, J.; Kaipio, J. P.

    2010-09-01

    The approximation error approach has been proposed earlier to handle modelling, numerical and computational errors in inverse problems. The idea of the approach is to include the errors in the forward model and compute the approximate statistics of the errors using Monte Carlo sampling. This can be a computationally tedious task, but the key property of the approach is that the approximate statistics can be calculated off-line before the measurement process takes place. In nonstationary problems, however, information is accumulated over time, and the initial uncertainties may turn out to have been exaggerated. In this paper, we propose an importance weighting algorithm with which the approximation error statistics can be updated during the accumulation of measurement information. As a computational example, we study an estimation problem that is related to a convection-diffusion problem in which the velocity field is not accurately specified.
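
The updating step can be sketched as a re-weighting of the off-line Monte Carlo error samples: given log importance weights encoding the information accumulated so far, the mean and covariance of the approximation error are recomputed and fed back into a Gaussian error model. The source of the weights is assumed to be supplied by the nonstationary estimator and is not derived here.

```python
import numpy as np

def reweighted_error_statistics(error_samples, log_weights):
    """Importance-weighted mean and covariance of approximation-error samples.

    error_samples: (n_samples, n_dims) array drawn off-line by Monte Carlo.
    log_weights:   (n_samples,) log importance weights from the filter.
    """
    w = np.exp(log_weights - np.max(log_weights))   # stabilize before exponentiating
    w /= w.sum()
    mean = w @ error_samples
    centred = error_samples - mean
    cov = (centred * w[:, None]).T @ centred
    return mean, cov
```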

  14. A Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Parrinello, Michele

    2015-03-01

    The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling which is based on the addition of an external bias which helps overcoming the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.

  15. Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
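
    For reference, the variational functional described in this abstract takes the following form in the published work, where β is the inverse temperature, F(s) the free energy over the collective variables s, and p(s) a chosen target distribution; at its minimum the bias reproduces the free energy up to an additive constant:

    ```latex
    \Omega[V] \;=\; \frac{1}{\beta}\,
      \ln \frac{\int \mathrm{d}\mathbf{s}\; e^{-\beta\left[F(\mathbf{s}) + V(\mathbf{s})\right]}}
               {\int \mathrm{d}\mathbf{s}\; e^{-\beta F(\mathbf{s})}}
      \;+\; \int \mathrm{d}\mathbf{s}\, p(\mathbf{s})\, V(\mathbf{s}),
    \qquad
    V_{\min}(\mathbf{s}) \;=\; -F(\mathbf{s}) \;-\; \frac{1}{\beta}\,\ln p(\mathbf{s}) \;+\; \text{const}.
    ```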

  16. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    PubMed Central

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely-used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043
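
    The kind of comparison reported here, the overlap between a reconstructed "founding core" surname distribution and the distribution recovered by different sampling rules, can be mimicked with a toy simulation. Everything below (surname frequencies, the surnames standing in for recent gene flow, the sampling rules) is invented for illustration and is not the authors' data or procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical founding-core surname frequencies for one community
    founders = {"Rossi": 30, "Bianchi": 22, "Conti": 15, "Greco": 8, "Ferri": 5}
    p_ref = {k: v / sum(founders.values()) for k, v in founders.items()}

    # Present-day community: founders' lineages plus surnames from recent gene flow
    present = {**founders, "Nowak": 12, "Silva": 9}
    names = list(present)
    p_present = np.array(list(present.values()), dtype=float)
    p_present /= p_present.sum()

    def sampled_distribution(sample):
        """Empirical surname distribution of a sample of individuals."""
        vals, counts = np.unique(sample, return_counts=True)
        return dict(zip(vals, counts / counts.sum()))

    def discrepancy(p, q):
        """Total variation distance between reference and sampled distributions."""
        keys = set(p) | set(q)
        return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

    # Random sampling of 50 individuals vs. "low kinship" sampling
    # (at most one individual per surname).
    random_sample = rng.choice(names, size=50, p=p_present)
    low_kinship_sample = np.unique(random_sample)

    print("random-sampling discrepancy:",
          discrepancy(p_ref, sampled_distribution(random_sample)))
    print("low-kinship discrepancy:",
          discrepancy(p_ref, sampled_distribution(low_kinship_sample)))
    ```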

  19. An adapted tissue microarray for the development of a matrix arrangement of tissue samples.

    PubMed

    Gurgel, Daniel C; Dornelas, Conceição A; Lima-Júnior, Roberto C P; Ribeiro, Ronaldo A; Almeida, Paulo R C

    2012-03-15

    The arrangement of tissue samples in a matrix, known as the tissue microarray (TMA) method, is a well-recognized technique worldwide. It makes it possible to assess the expression of molecular markers on a large scale with high yields in terms of time, cost, and archived material. Some researchers are trying to adapt the technique to expand the research possibilities. This study proposes an adapted, simplified, low-cost instrument for obtaining the samples used in the construction of the TMA. A manual leather puncher, which is very inexpensive, has a long expected life, and eliminates the need for a press machine, is a simple and effective alternative for building tissue microarray blocks.

  20. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light-focusing adaptive imaging system, DAZLE (Discrete Adaptive Zone Light Elements), is envisioned that uses the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high-transmittance apertures. DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture are discussed, including the LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidth, background photon noise, and chromatic aberration.

  1. Mars sample return, updated to a groundbreaking approach

    NASA Technical Reports Server (NTRS)

    Mattingly, R.; Matovsek, S.; Jordan, F.

    2002-01-01

    A Mars Sample Return (MSR) mission is a goal of the Mars Program. Recently, NASA and JPL have been studying the possibility of a Mars Sample Return some time in the next decade of Mars exploration. In 2001, JPL commissioned four industry teams to make a fresh examination of MSR architectures. Six papers on these studies were presented at last year's conference. As new fiscal realities of a cost-capped Mars Exploration Program unfolded, it was evident that these MSR concepts, which included mobility and subsurface sample acquisition, did not fit reasonably within a balanced program. Therefore, at the request of NASA and the science community, JPL asked the four industry teams plus JPL's Team X to explore ways to reduce the cost of a MSR. A NASA-created MSR Science Steering Group (SSG) established a reduced set of requirements for these new studies that built upon the previous year's work. As a result, a new 'Groundbreaking' approach to MSR was established that is well understood based on the studies and independent cost assessments by Aerospace Corporation and SAIC. The Groundbreaking approach appears to be what a contemporary, balanced Mars Exploration Program can afford, has turned out to be justifiable by the MSR Science Steering Group, and has been endorsed by the Mars science community at large. This paper gives a brief overview of the original 2001 study results and discusses the process leading to the new studies, the studies themselves, and the results.

  2. New approaches to nanoparticle sample fabrication for atom probe tomography.

    PubMed

    Felfer, P; Li, T; Eder, K; Galinski, H; Magyar, A P; Bell, D C; Smith, G D W; Kruse, N; Ringer, S P; Cairney, J M

    2015-12-01

    Due to their unique properties, nano-sized materials such as nanoparticles and nanowires are receiving considerable attention. However, little data is available about their chemical makeup at the atomic scale, especially in three dimensions (3D). Atom probe tomography is able to answer many important questions about these materials if the challenge of producing a suitable sample can be overcome. In order to achieve this, the nanomaterial needs to be positioned within the end of a tip and fixed there so the sample possesses sufficient structural integrity for analysis. Here we provide a detailed description of various techniques that have been used to position nanoparticles on substrates for atom probe analysis. In some of the approaches, this is combined with deposition techniques to incorporate the particles into a solid matrix, and focused ion beam processing is then used to fabricate atom probe samples from this composite. Using these approaches, data have been obtained from 10-20 nm core-shell nanoparticles extracted directly from suspension (i.e. with no chemical modification) with a resolution of better than ± 1 nm.

  3. Approach for Using Learner Satisfaction to Evaluate the Learning Adaptation Policy

    ERIC Educational Resources Information Center

    Jeghal, Adil; Oughdir, Lahcen; Tairi, Hamid; Radouane, Abdelhay

    2016-01-01

    The learning adaptation is a very important phase in a learning situation in human learning environments. This paper presents the authors' approach used to evaluate the effectiveness of learning adaptive systems. This approach is based on the analysis of learner satisfaction notices collected by a questionnaire on a learning situation; to analyze…

  4. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses
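
    The core demodulation chain described here, band-pass filtering around a selected centre frequency followed by envelope (amplitude demodulation) analysis of the fault characteristic frequency, can be sketched with a synthetic signal. The Morlet-style filter below is implemented directly in the frequency domain; the centre frequency, bandwidth and test signal are placeholders rather than the optimised values the paper derives from the production functions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 20_000                                     # sampling rate [Hz]
    t = np.arange(0, 1.0, 1.0 / fs)

    # Synthetic vibration: impulses at the fault frequency exciting a 3 kHz resonance, plus noise
    fault_freq, resonance = 107.0, 3_000.0
    impulses = (np.sin(2 * np.pi * fault_freq * t) > 0.999).astype(float)
    ringing = np.exp(-2_000.0 * t[:200]) * np.sin(2 * np.pi * resonance * t[:200])
    signal = np.convolve(impulses, ringing, mode="same") + 0.5 * rng.normal(size=t.size)

    # Morlet-style band-pass filter applied in the frequency domain
    fc, bw = resonance, 800.0                       # centre frequency / bandwidth (assumed)
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    H = np.exp(-0.5 * ((freqs - fc) / (bw / 2.355)) ** 2)   # Gaussian magnitude response
    filtered = np.fft.irfft(np.fft.rfft(signal) * H, n=t.size)

    # Envelope via the analytic signal, then the envelope spectrum
    spec = np.fft.fft(filtered)
    spec[1:t.size // 2] *= 2.0                      # analytic signal: double positive freqs...
    spec[t.size // 2 + 1:] = 0.0                    # ...and zero the negative ones
    envelope = np.abs(np.fft.ifft(spec))
    envelope -= envelope.mean()

    env_spectrum = np.abs(np.fft.rfft(envelope))
    env_freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    band = (env_freqs > 5.0) & (env_freqs < 500.0)
    peak = env_freqs[band][np.argmax(env_spectrum[band])]
    print(f"strongest envelope component in 5-500 Hz: {peak:.1f} Hz "
          f"(simulated fault frequency: {fault_freq} Hz)")
    ```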

  5. Non-adaptive and adaptive hybrid approaches for enhancing water quality management

    NASA Astrophysics Data System (ADS)

    Kalwij, Ineke M.; Peralta, Richard C.

    2008-09-01

    Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates. The parameters are useful starting values because using trial-and-error to identify an ideal combination of control
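
    The flavour of the AGCT hybrid, a genetic algorithm with elitism whose offspring are screened against a tabu list so that near-duplicate strategies are not re-evaluated, can be illustrated generically. The objective function, encoding and parameter values below are placeholders standing in for the groundwater simulation model; this is not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def cost(x):
        """Placeholder objective standing in for the simulation-based pumping cost."""
        return np.sum((x - 0.3) ** 2) + 0.5 * np.abs(np.sin(8 * x)).sum()

    dim, pop_size, n_gen = 6, 30, 60
    tabu, tabu_size, tabu_radius = [], 50, 0.05

    def is_tabu(x):
        return any(np.linalg.norm(x - t) < tabu_radius for t in tabu)

    pop = rng.random((pop_size, dim))
    fitness = np.array([cost(x) for x in pop])

    for gen in range(n_gen):
        elite = pop[np.argmin(fitness)].copy()       # elitism: keep the best strategy
        children = []
        while len(children) < pop_size - 1:
            # Tournament selection of two parents
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fitness[i] < fitness[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fitness[i] < fitness[j] else pop[j]
            mask = rng.random(dim) < 0.5             # uniform crossover
            child = np.where(mask, a, b)
            # Mutation; clip() stands in for "healing" to keep strategies feasible
            child = np.clip(child + rng.normal(0, 0.02, dim), 0, 1)
            if not is_tabu(child):                   # tabu screen: skip near-duplicates
                children.append(child)
                tabu.append(child)
                if len(tabu) > tabu_size:
                    tabu.pop(0)
        pop = np.vstack([elite] + children)
        fitness = np.array([cost(x) for x in pop])

    print("best cost found:", fitness.min())
    ```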

  6. Adaptation and Validation of the Sexual Assertiveness Scale (SAS) in a Sample of Male Drug Users.

    PubMed

    Vallejo-Medina, Pablo; Sierra, Juan Carlos

    2015-04-21

    The aim of the present study was to adapt and validate the Sexual Assertiveness Scale (SAS) in a sample of male drug users. A sample of 326 male drug users and 322 non-clinical males was selected by cluster sampling and convenience sampling, respectively. Results showed that the scale had good psychometric properties and adequate internal consistency reliability (Initiation = .66, Refusal = .74 and STD-P = .79). An evaluation of the invariance showed strong factor equivalence between both samples. A high and moderate effect of Differential Item Functioning was only found in items 1 and 14 (Nagelkerke ΔR² = .076 and .037, respectively). We strongly recommend not using item 1 if the goal is to compare the scores of both groups, otherwise the comparison will be biased. Correlations obtained between the CSFQ-14 and the safe sex ratio and the SAS subscales were significant (CI = 95%) and indicated good concurrent validity. Scores of male drug users were similar to those of non-clinical males. Therefore, the adaptation of the SAS to drug users provides enough guarantees for reliable and valid use in both clinical practice and research, although care should be taken with item 1.

  8. Adaptive k-space sampling design for edge-enhanced DCE-MRI using compressed sensing.

    PubMed

    Raja, Rajikha; Sinha, Neelam

    2014-09-01

    The critical challenge in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is the trade-off between spatial and temporal resolution due to the limited availability of acquisition time. To address this, it is imperative to under-sample k-space and to develop specific reconstruction techniques. The proposed method reconstructs high-quality images from under-sampled dynamic k-space data through two main improvements: (i) the design of an adaptive k-space sampling lattice and (ii) an edge-enhanced reconstruction technique. A high-resolution data set obtained before the start of the dynamic phase is utilized. The sampling pattern is designed to adapt to the k-space energy distribution obtained from the static high-resolution data. For image reconstruction, the well-known compressed sensing-based total variation (TV) minimization constrained reconstruction scheme is utilized, incorporating the gradient information obtained from the static high-resolution data. The proposed method is tested on seven real dynamic time series consisting of 2 breast data sets and 5 abdomen data sets spanning 1196 images in all. With only 10% of the data available, performance improvements are seen across various quality metrics. Average improvements in Universal Image Quality Index and Structural Similarity Index Metric of up to 28% and 24% on breast data and about 17% and 9% on abdomen data, respectively, are obtained for the proposed method compared with the baseline TV reconstruction using a variable-density random sampling pattern.
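
    The first ingredient, a sampling lattice adapted to the k-space energy distribution of the static reference data, can be prototyped along the following lines. The reference image, the 10% budget and the size of the fully sampled low-frequency core are arbitrary choices for illustration, not the acquisition parameters used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Static high-resolution reference image (placeholder phantom)
    n = 128
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    ref = ((x ** 2 + y ** 2) < 0.6).astype(float) \
          + 0.5 * ((np.abs(x) < 0.2) & (np.abs(y) < 0.4))

    # k-space energy distribution of the reference data
    kspace = np.fft.fftshift(np.fft.fft2(ref))
    energy = np.abs(kspace) ** 2
    prob = energy / energy.sum()               # sampling density adapted to the energy

    # Draw an under-sampling mask that keeps ~10% of k-space,
    # always including a fully sampled low-frequency core.
    budget = int(0.10 * n * n)
    flat_idx = rng.choice(n * n, size=budget, replace=False, p=prob.ravel())
    mask = np.zeros(n * n, dtype=bool)
    mask[flat_idx] = True
    mask = mask.reshape(n, n)
    c = n // 2
    mask[c - 4:c + 4, c - 4:c + 4] = True      # low-frequency core

    print(f"sampled fraction of k-space: {mask.mean():.3f}")
    ```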

  9. Importance-Sampling Monte Carlo Approach to Classical Spin Systems

    NASA Astrophysics Data System (ADS)

    Huang, Hsing-Mei

    A new approach for carrying out static Monte Carlo calculations of thermodynamic quantities for classical spin systems is proposed. Combining the ideas of coincidence countings and importance samplings, we formulate a scheme for obtaining Γ(E), the number of states for a fixed energy E, and use Γ(E) to compute thermodynamic properties. Using the Ising model as an example, we demonstrate that our procedure leads to accurate numerical results without excessive use of computer time. We also show that the procedure is easily extended to obtaining magnetic properties of the Ising model.
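
    Once Γ(E) is available, thermodynamic quantities follow directly from sums over energies. The sketch below obtains Γ(E) for a tiny 3×3 Ising model by brute-force enumeration (feasible only because the system is so small; the paper estimates Γ(E) by importance sampling instead) and then computes the mean energy and heat capacity from it.

    ```python
    import numpy as np
    from itertools import product
    from collections import Counter

    # Exact density of states Gamma(E) for a 3x3 Ising model with periodic boundaries
    L = 3
    gamma = Counter()
    for spins in product([-1, 1], repeat=L * L):
        s = np.array(spins).reshape(L, L)
        E = -np.sum(s * np.roll(s, 1, axis=0)) - np.sum(s * np.roll(s, 1, axis=1))
        gamma[int(E)] += 1

    energies = np.array(sorted(gamma))
    g = np.array([gamma[int(E)] for E in energies], dtype=float)

    # Thermodynamic quantities from Gamma(E): partition sum, mean energy, heat capacity
    for T in (1.0, 2.27, 4.0):
        beta = 1.0 / T
        w = g * np.exp(-beta * (energies - energies.min()))   # shifted for stability
        Z = w.sum()
        E_mean = (energies * w).sum() / Z
        E2_mean = (energies ** 2 * w).sum() / Z
        C = (E2_mean - E_mean ** 2) * beta ** 2
        print(f"T={T:4.2f}  <E>/site={E_mean / L**2:+.3f}  C/site={C / L**2:.3f}")
    ```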

  10. An examination of the handheld adapter approach for measuring hand-transmitted vibration exposure

    PubMed Central

    Xu, Xueyan S.; Dong, Ren G.; Welcome, Daniel E.; Warren, Christopher; McDowell, Thomas W.

    2015-01-01

    The use of a handheld adapter equipped with a tri-axial accelerometer is the most convenient and efficient approach for measuring vibration exposure at the hand-tool interface, especially when the adapter is incorporated into a miniature handheld or wrist-strapped dosimeter. To help optimize the adapter approach, the specific aims of this study are to identify and understand the major sources and mechanisms of measurement errors and uncertainties associated with using these adapters, and to explore their improvements. Five representative adapter models were selected and used in the experiment. Five human subjects served as operators in the experiment on a hand-arm vibration test system. The results of this study confirm that many of the handheld adapters can produce substantial overestimations of vibration exposure, and measurement errors can significantly vary with tool, adapter model, mounting position, mounting orientation, and subject. Major problems with this approach include unavoidable influence of the hand dynamic motion on the adapter, unstable attachment, insufficient attachment contact force, and inappropriate adapter structure. However, the results of this study also suggest that measurement errors can be substantially reduced if the design and use of an adapter can be systematically optimized toward minimizing the combined effects of the identified factors. Some potential methods for improving the design and use of the adapters are also proposed and discussed. PMID:26744580

  11. Classification of EEG for Affect Recognition: An Adaptive Approach

    NASA Astrophysics Data System (ADS)

    Alzoubi, Omar; Calvo, Rafael A.; Stevens, Ronald H.

    Research on affective computing is growing rapidly and new applications are being developed more frequently. They use information about the affective/mental states of users to adapt their interfaces or add new functionalities. Face activity, voice, text, physiology, and other information about the user are used as input to affect recognition modules, which are built as classification algorithms. Brain EEG signals have rarely been used to build such classifiers due to the lack of a clear theoretical framework. We present here an evaluation of three different classification techniques, and their adaptive variations, on a 10-class emotion recognition experiment. Our results show that affect recognition from EEG signals might be possible, and that an adaptive algorithm improves the performance of the classification task.

  12. Taking a Broad Approach to Public Health Program Adaptation: Adapting a Family-Based Diabetes Education Program

    ERIC Educational Resources Information Center

    Reinschmidt, Kerstin M.; Teufel-Shone, Nicolette I.; Bradford, Gail; Drummond, Rebecca L.; Torres, Emma; Redondo, Floribella; Elenes, Jo Jean; Sanders, Alicia; Gastelum, Sylvia; Moore-Monroy, Martha; Barajas, Salvador; Fernandez, Lourdes; Alvidrez, Rosy; de Zapien, Jill Guernsey; Staten, Lisa K.

    2010-01-01

    Diabetes health disparities among Hispanic populations have been countered with federally funded health promotion and disease prevention programs. Dissemination has focused on program adaptation to local cultural contexts for greater acceptability and sustainability. Taking a broader approach and drawing on our experience in Mexican American…

  13. An integrated sampling and analysis approach for improved biodiversity monitoring

    USGS Publications Warehouse

    DeWan, Amielle A.; Zipkin, Elise F.

    2010-01-01

    Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species in conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.

  14. The Portuguese adaptation of the Gudjonsson Suggestibility Scale (GSS1) in a sample of inmates.

    PubMed

    Pires, Rute; Silva, Danilo R; Ferreira, Ana Sousa

    2014-01-01

    This paper comprises two studies which address the validity of the Portuguese adaptation of the Gudjonsson Suggestibility Scale, GSS1. In study 1, the means and standard deviations for the suggestibility results of a sample of Portuguese inmates (N=40, Mage=37.5 years, SD=8.1) were compared to those of a sample of Icelandic inmates (Gudjonsson, 1997; Gudjonsson & Sigurdsson, 1996). Portuguese inmates' results were in line with the original results. In study 2, the means and standard deviations for the suggestibility results of the sample of Portuguese inmates were compared to those of a general Portuguese population sample (N=57, Mage=36.1 years, SD=12.7). The forensic sample obtained significantly higher scores in suggestibility measures than the general population sample. ANOVA confirmed that the increased suggestibility in the inmates sample was due to the limited memory capacity of this latter group. Given that the results of both studies 1 and 2 are in keeping with the author's original results (Gudjonsson, 1997), this may be regarded as a confirmation of the validity of the Portuguese GSS1.

  15. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  16. Dissociating Conflict Adaptation from Feature Integration: A Multiple Regression Approach

    ERIC Educational Resources Information Center

    Notebaert, Wim; Verguts, Tom

    2007-01-01

    Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on…

  17. Adaptive E-Learning Environments: Research Dimensions and Technological Approaches

    ERIC Educational Resources Information Center

    Di Bitonto, Pierpaolo; Roselli, Teresa; Rossano, Veronica; Sinatra, Maria

    2013-01-01

    One of the most closely investigated topics in e-learning research has always been the effectiveness of adaptive learning environments. The technological evolutions that have dramatically changed the educational world in the last six decades have allowed ever more advanced and smarter solutions to be proposed. The focus of this paper is to depict…

  18. The Canadian approach to the settlement and adaptation of immigrants.

    PubMed

    1986-01-01

    Canada has been the host to over 400,000 refugees since World War II. The settlement and adaptation process is supported by the federal government and by the majority of provincial governments. Under the national and regional Employment and Immigration Commission (CEIC) settlement organizations, the major programs administered to effect the adaptation of newcomers are: 1) the Adjustment Assistance Program, 2) the Immigrant Settlement and Adaptation Program, 3) the Language/Skill Training Program, and 4) the Employment Services Program. Ontario, the recipient of more than half the newcomers that arrive in Canada each year, pursues active programs in the reception of newcomers through its Welcome House Program, which offers a wide range of reception services to the newcomers. The employment and unemployment experiences of refugees are very much influenced by the prevailing labor market conditions, the refugees' proficiency in the country's official languages, the amount of sympathy evoked by the media reports on the plight of refugees, the availability of people of the same ethnic origin already well settled in the country, and the adaptability of the refugees themselves. The vast majority of refugee groups that came to Canada during the last quarter century seem to have adjusted well economically, despite having had difficulty in entering the occupations they intended to join. It is calculated that an average of $6607 per arrival is needed to cover the CEIC program costs of 1983-1984.

  19. Monte Carlo path sampling approach to modeling aeolian sediment transport

    NASA Astrophysics Data System (ADS)

    Hardin, E. J.; Mitasova, H.; Mitas, L.

    2011-12-01

    but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.
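
    A stripped-down illustration of the particle (path-sampling) representation, sand mass carried by an ensemble of walkers that is binned back onto a grid at every step in the spirit of a particle-in-cell scheme, is given below. The drift-and-diffusion rule is a placeholder and is not the Green function propagator of the actual governing equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Domain and particle ensemble: each particle carries an equal parcel of sand mass
    n_cells, n_particles, n_steps = 200, 20_000, 300
    dx, mass_per_particle = 1.0, 1.0
    x = rng.random(n_particles) * n_cells * dx     # initial particle positions

    def wind_drift(density):
        """Placeholder transport rule: drift weakens where sand has piled up."""
        return 0.4 * (1.0 - 0.5 * density / (density.max() + 1e-12))

    for _ in range(n_steps):
        # Particle-in-cell step: bin particles to an elevation/density field on the grid
        cells = np.clip((x / dx).astype(int), 0, n_cells - 1)
        density = np.bincount(cells, minlength=n_cells) * mass_per_particle / dx

        # Propagate each particle stochastically (drift from the updated field + diffusion)
        drift = wind_drift(density)[cells]
        x = np.mod(x + drift + rng.normal(0.0, 0.2, size=n_particles), n_cells * dx)

    cells = np.clip((x / dx).astype(int), 0, n_cells - 1)
    density = np.bincount(cells, minlength=n_cells) * mass_per_particle / dx
    print("mean / max / min mass per cell:", density.mean(), density.max(), density.min())
    ```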

  20. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    PubMed Central

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-01-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids. PMID:26576666

  2. Organ sample generator for expected treatment dose construction and adaptive inverse planning optimization

    SciTech Connect

    Nie Xiaobo; Liang Jian; Yan Di

    2012-12-15

    Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient-specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient-specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days, depending on whether the organ variation behaves as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the functional form of the coefficients. Eleven head-and-neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during head-and-neck cancer radiotherapy can be represented using the first 3 to 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
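
    The sampling step of such an organ sample generator, new organ instances drawn as the mean geometry plus random combinations of the leading PCA eigenvectors, can be sketched as follows. The daily geometries are simulated, the coefficients are treated as stationary Gaussian random variables, and the time-varying least-squares regression step is omitted for brevity; none of this is the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulated daily organ geometries: n_days observations of n_points surface points (x, y, z)
    n_days, n_points = 30, 500
    base = rng.normal(size=n_points * 3)
    daily = base + 0.1 * rng.normal(size=(n_days, 1)) * rng.normal(size=n_points * 3) \
                 + 0.02 * rng.normal(size=(n_days, n_points * 3))

    # PCA of the daily variation about the mean organ
    mean_shape = daily.mean(axis=0)
    centred = daily - mean_shape
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)
    n_modes = 3                                     # first few modes capture most variation
    modes = Vt[:n_modes]                            # eigenvectors of organ variation
    coeffs = centred @ modes.T                      # per-day coefficients

    # Organ sample generator: draw new coefficients from the fitted (Gaussian) PDF
    # and reconstruct random organ instances
    coeff_mean = coeffs.mean(axis=0)
    coeff_std = coeffs.std(axis=0, ddof=1)
    new_coeffs = coeff_mean + coeff_std * rng.normal(size=(1000, n_modes))
    organ_samples = mean_shape + new_coeffs @ modes

    print("generated organ samples:", organ_samples.shape)
    ```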

  3. An approach to fabrication of large adaptive optics mirrors

    NASA Astrophysics Data System (ADS)

    Schwartz, Eric; Rey, Justin; Blaszak, David; Cavaco, Jeffrey

    2014-07-01

    For more than two decades, Northrop Grumman Xinetics has been the principal supplier of small deformable mirrors that enable adaptive optical (AO) systems for the ground-based astronomical telescope community. With today's drive toward extremely large aperture systems, and the desire of telescope designers to include adaptive optics in the main optical path of the telescope, Xinetics has recognized the need for large active mirrors with the requisite bandwidth and actuator stroke. Presented in this paper is the proposed use of Northrop Grumman Xinetics' large, ultra-lightweight Silicon Carbide substrates with surface-parallel actuation of sufficient spatial density and bandwidth to meet the requirements of tomorrow's AO systems, while reducing complexity and cost.

  4. Assessing confidence in management adaptation approaches for climate-sensitive ecosystems

    NASA Astrophysics Data System (ADS)

    West, J. M.; Julius, S. H.; Weaver, C. P.

    2012-03-01

    A number of options are available for adapting ecosystem management to improve resilience in the face of climatic changes. However, uncertainty exists as to the effectiveness of these options. A report prepared for the US Climate Change Science Program reviewed adaptation options for a range of federally managed systems in the United States. The report included a qualitative uncertainty analysis of conceptual approaches to adaptation derived from the review. The approaches included reducing anthropogenic stressors, protecting key ecosystem features, maintaining representation, replicating, restoring, identifying refugia and relocating organisms. The results showed that the expert teams had the greatest scientific confidence in adaptation options that reduce anthropogenic stresses. Confidence in other approaches was lower because of gaps in understanding of ecosystem function, climate change impacts on ecosystems, and management effectiveness. This letter discusses insights gained from the confidence exercise and proposes strategies for improving future assessments of confidence for management adaptations to climate change.

  5. A regional approach to climate adaptation in the Nile Basin

    NASA Astrophysics Data System (ADS)

    Butts, Michael B.; Buontempo, Carlo; Lørup, Jens K.; Williams, Karina; Mathison, Camilla; Jessen, Oluf Z.; Riegels, Niels D.; Glennie, Paul; McSweeney, Carol; Wilson, Mark; Jones, Richard; Seid, Abdulkarim H.

    2016-10-01

    The Nile Basin is one of the most important shared basins in Africa. Managing and developing the water resources within the basin must not only address different water uses but also the trade-off between developments upstream and water use downstream, often between different countries. Furthermore, decision-makers in the region need to evaluate and implement climate adaptation measures. Previous work has shown that the Nile flows can be highly sensitive to climate change and that there is considerable uncertainty in climate projections in the region with no clear consensus as to the direction of change. Modelling current and future changes in river runoff must address a number of challenges; including the large size of the basin, the relative scarcity of data, and the corresponding dramatic variety of climatic conditions and diversity in hydrological characteristics. In this paper, we present a methodology, to support climate adaptation on a regional scale, for assessing climate change impacts and adaptation potential for floods, droughts and water scarcity within the basin.

  6. The adaptive significance of adult neurogenesis: an integrative approach

    PubMed Central

    Konefal, Sarah; Elliot, Mick; Crespi, Bernard

    2013-01-01

    Adult neurogenesis in mammals is predominantly restricted to two brain regions, the dentate gyrus (DG) of the hippocampus and the olfactory bulb (OB), suggesting that these two brain regions uniquely share functions that mediate its adaptive significance. Benefits of adult neurogenesis across these two regions appear to converge on increased neuronal and structural plasticity that subserves coding of novel, complex, and fine-grained information, usually with contextual components that include spatial positioning. By contrast, costs of adult neurogenesis appear to center on potential for dysregulation resulting in higher risk of brain cancer or psychological dysfunctions, but such costs have yet to be quantified directly. The three main hypotheses for the proximate functions and adaptive significance of adult neurogenesis, pattern separation, memory consolidation, and olfactory spatial, are not mutually exclusive and can be reconciled into a simple general model amenable to targeted experimental and comparative tests. Comparative analysis of brain region sizes across two major social-ecological groups of primates, gregarious (mainly diurnal haplorhines, visually-oriented, and in large social groups) and solitary (mainly nocturnal, territorial, and highly reliant on olfaction, as in most rodents), suggests that solitary species, but not gregarious species, show positive associations of population densities and home range sizes with sizes of both the hippocampus and OB, implicating their functions in social-territorial systems mediated by olfactory cues. Integrated analyses of the adaptive significance of adult neurogenesis will benefit from experimental studies motivated and structured by ecologically and socially relevant selective contexts. PMID:23882188

  7. The Application of the Monte Carlo Approach to Cognitive Diagnostic Computerized Adaptive Testing With Content Constraints

    ERIC Educational Resources Information Center

    Mao, Xiuzhen; Xin, Tao

    2013-01-01

    The Monte Carlo approach which has previously been implemented in traditional computerized adaptive testing (CAT) is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…

  8. Evidence of an Adaptive Level Grading Practice through a Causal Approach.

    ERIC Educational Resources Information Center

    Gallini, Joan

    1982-01-01

    Adaptation level theory applied to grading implies that students select major programs whose grading practices are consistent with their ability. A causal approach using a system of multiple equations was used to investigate this theory. The results lent support to the occurrence of the adaptive grading practice. (Author/CM)

  9. An Adaptive Approach to Managing Knowledge Development in a Project-Based Learning Environment

    ERIC Educational Resources Information Center

    Tilchin, Oleg; Kittany, Mohamed

    2016-01-01

    In this paper we propose an adaptive approach to managing the development of students' knowledge in a comprehensive project-based learning (PBL) environment. Subject study is realized through two-stage PBL, which shapes an adaptive knowledge management (KM) process and promotes the correct balance between personalized and collaborative learning. The…

  10. Adaptive sampling in two-phase designs: a biomarker study for progression in arthritis

    PubMed Central

    McIsaac, Michael A; Cook, Richard J

    2015-01-01

    Response-dependent two-phase designs are used increasingly often in epidemiological studies to ensure sampling strategies offer good statistical efficiency while working within resource constraints. Optimal response-dependent two-phase designs are difficult to implement, however, as they require specification of unknown parameters. We propose adaptive two-phase designs that exploit information from an internal pilot study to approximate the optimal sampling scheme for an analysis based on mean score estimating equations. The frequency properties of estimators arising from this design are assessed through simulation, and they are shown to be similar to those from optimal designs. The design procedure is then illustrated through application to a motivating biomarker study in an ongoing rheumatology research program. Copyright © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:25951124
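
    The general idea, using an internal pilot to learn which phase-one strata are most informative and then allocating the remaining phase-two budget accordingly, can be sketched generically. Neyman-style allocation is used below as a simple stand-in for the mean-score-optimal allocation derived in the paper, and the data-generating model is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Phase 1: cheap variables observed on everyone (response and a surrogate),
    # defining strata for response-dependent phase-2 sampling of an expensive biomarker.
    n = 5000
    response = rng.integers(0, 2, size=n)
    surrogate = rng.integers(0, 2, size=n)
    strata = 2 * response + surrogate                              # four phase-1 strata
    biomarker = rng.normal(loc=0.8 * response, scale=1.0 + surrogate, size=n)

    budget, pilot_size = 600, 200

    # Internal pilot: small balanced sample to estimate within-stratum variability
    pilot_idx = np.concatenate([
        rng.choice(np.flatnonzero(strata == s), size=pilot_size // 4, replace=False)
        for s in range(4)
    ])
    pilot_sd = np.array([biomarker[pilot_idx][strata[pilot_idx] == s].std(ddof=1)
                         for s in range(4)])
    stratum_n = np.bincount(strata, minlength=4)

    # Adaptive allocation of the remaining budget (Neyman-style: n_h proportional to N_h * sd_h)
    weights = stratum_n * pilot_sd
    alloc = np.round((budget - pilot_size) * weights / weights.sum()).astype(int)

    phase2_idx = [pilot_idx]
    for s in range(4):
        remaining = np.setdiff1d(np.flatnonzero(strata == s), pilot_idx)
        phase2_idx.append(rng.choice(remaining, size=min(alloc[s], remaining.size),
                                     replace=False))
    phase2_idx = np.concatenate(phase2_idx)
    print("phase-2 sample size by stratum:", np.bincount(strata[phase2_idx], minlength=4))
    ```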

  11. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-03-28

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA.
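
    The behaviour described, raising or lowering the sensor sampling rate as a function of both the measured signal dynamics and the node's energy state, can be captured in a compact control loop. The thresholds, energy model and adaptation factors below are invented for illustration and are not the published EASA pseudocode.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    f_min, f_max = 1 / 600, 1 / 30           # samples/s (one per 10 min .. one per 30 s)
    battery, capacity = 500.0, 1000.0         # joules (illustrative)
    cost_per_sample = 2.0                     # energy-hungry sensor (e.g. a gas sensor)
    f = f_max / 2
    readings = [20.0]

    for hour in range(48):
        harvested = max(0.0, rng.normal(5.0, 3.0))       # solar/wind income this hour
        battery = min(capacity, battery + harvested)

        n_samples = max(1, int(f * 3600))
        battery = max(0.0, battery - n_samples * cost_per_sample)
        new = readings[-1] + rng.normal(0, 1.0, size=n_samples).cumsum()
        readings.extend(new.tolist())

        # Signal-driven adaptation: sample faster when the measured quantity changes quickly
        variation = np.abs(np.diff(readings[-n_samples - 1:])).mean()
        f = f * 1.5 if variation > 0.8 else f / 1.5

        # Energy-aware correction: scale the rate by the state of charge
        f *= 0.5 + battery / capacity
        f = float(np.clip(f, f_min, f_max))

    print(f"final rate: one sample every {1 / f:.0f} s, battery: {battery:.0f} J")
    ```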

  12. Adaptive sampling dual terahertz comb spectroscopy using dual free-running femtosecond lasers

    PubMed Central

    Yasui, Takeshi; Ichikawa, Ryuji; Hsieh, Yi-Da; Hayashi, Kenta; Cahyadi, Harsono; Hindle, Francis; Sakaguchi, Yoshiyuki; Iwata, Tetsuo; Mizutani, Yasuhiro; Yamamoto, Hirotsugu; Minoshima, Kaoru; Inaba, Hajime

    2015-01-01

    Terahertz (THz) dual comb spectroscopy (DCS) is a promising method for high-accuracy, high-resolution, broadband THz spectroscopy because the mode-resolved THz comb spectrum includes both broadband THz radiation and narrow-line CW-THz radiation characteristics. In addition, all frequency modes of a THz comb can be phase-locked to a microwave frequency standard, providing excellent traceability. However, the need for stabilization of dual femtosecond lasers has often hindered its wide use. To overcome this limitation, here we have demonstrated adaptive-sampling THz-DCS, allowing the use of free-running femtosecond lasers. To correct the fluctuation of the time and frequency scales caused by the laser timing jitter, an adaptive sampling clock is generated by dual THz-comb-referenced spectrum analysers and is used as a timing clock signal in a data acquisition board. The results not only indicated the successful implementation of THz-DCS with free-running lasers but also showed that this configuration outperforms standard THz-DCS with stabilized lasers, due to the slight jitter remaining in the stabilized lasers. PMID:26035687

  13. Making CORBA objects persistent: The object database adapter approach

    SciTech Connect

    Reverbel, F.C.R.

    1997-05-01

    In spite of its remarkable successes in promoting standards for distributed object systems, the Object Management Group (OMG) has not yet settled the issue of object persistence in the Object Request Broker (ORB) environment. The Common Object Request Broker Architecture (CORBA) specification briefly mentions an Object-Oriented Database Adapter that makes objects stored in an object-oriented database accessible through the ORB. This idea is pursued in Appendix B of the ODMG standard, which identifies a number of issues involved in using an Object Database Management System (ODBMS) in a CORBA environment, and proposes an Object Database Adapter (ODA) to realize the integration of the ORB with the ODBMS. This paper discusses the design and implementation of an ODA that integrates an ORB and an ODBMS with C++ bindings. For the author's purposes, an ODBMS is a system with programming interfaces. It may be a pure object-oriented DBMS (an OODBMS), or a combination of a relational DBMS and an object-relational mapper.

  14. Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan

    2012-01-01

    A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for Asteroid Sample Return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in Multibody Simulation Software Testbed for Small-Body Exploration and Sampling, (NPO-47196) NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p.54. This current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. In order to demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations that demonstrate the designed spacecraft meets key requirements. These requirements state that contact force and duration must be sufficient to ensure that enough material from the surface is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact and must be able to recover and ascend to a safe position, and maintain velocity and orientation after the contact.

  15. Multi-species attributes as the condition for adaptive sampling of rare species using two-stage sequential sampling with an auxiliary variable

    USGS Publications Warehouse

    Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.

    2011-01-01

    Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part, because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population are the most important determinants of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS, and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive
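
    A simplified simulation of the TSSAV idea, adding secondary units adaptively but triggering the additions on an auxiliary community variable rather than on detections of the rare species itself, with a hard cap on the final sample size, could look like the following. The spatial model, thresholds and the naive occupancy summary are invented for illustration; the paper's design uses a design-unbiased two-stage estimator rather than a raw sample mean.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # One primary unit divided into secondary units; an auxiliary variable
    # (community richness) is correlated with presence of the rare species.
    n_units = 400
    richness = rng.poisson(3, size=n_units) + 8 * rng.binomial(1, 0.1, size=n_units)
    p_rare = 0.02 + 0.06 * (richness > 8)      # rare species clusters with rich communities
    rare = rng.binomial(1, p_rare)

    def tssav(first_stage=40, aux_threshold=8, max_total=80):
        """Two-stage sequential sampling conditioned on the auxiliary variable."""
        sampled = list(rng.choice(n_units, size=first_stage, replace=False))
        # Second stage: add neighbours of units whose auxiliary value exceeds the threshold,
        # stopping at the upper bound on the final sample size.
        for u in list(sampled):
            if richness[u] > aux_threshold:
                for v in (u - 1, u + 1):
                    if 0 <= v < n_units and v not in sampled and len(sampled) < max_total:
                        sampled.append(v)
        sampled = np.array(sampled)
        # Naive summary only; not the design-unbiased two-stage estimator from the paper
        return rare[sampled].mean(), rare[sampled].sum()

    est_occupancy, n_detected = tssav()
    print(f"TSSAV: naive occupancy {est_occupancy:.3f}, rare-species detections {n_detected}")
    print(f"true occupancy {rare.mean():.3f}")
    ```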

  16. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems.

    PubMed

    Vrabie, Draguna; Lewis, Frank

    2009-04-01

    In this paper we present in a continuous-time framework an online approach to direct adaptive optimal control with infinite horizon cost for nonlinear systems. The algorithm converges online to the optimal control solution without knowledge of the internal system dynamics. Closed-loop dynamic stability is guaranteed throughout. The algorithm is based on a reinforcement learning scheme, namely Policy Iterations, and makes use of neural networks, in an Actor/Critic structure, to parametrically represent the control policy and the performance of the control system. The two neural networks are trained to express the optimal controller and optimal cost function which describes the infinite horizon control performance. Convergence of the algorithm is proven under the realistic assumption that the two neural networks do not provide perfect representations for the nonlinear control and cost functions. The result is a hybrid control structure which involves a continuous-time controller and a supervisory adaptation structure which operates based on data sampled from the plant and from the continuous-time performance dynamics. Such control structure is unlike any standard form of controllers previously seen in the literature. Simulation results, obtained considering two second-order nonlinear systems, are provided.

  17. Identification of novel serum peptide biomarkers for high-altitude adaptation: a comparative approach

    NASA Astrophysics Data System (ADS)

    Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen

    2016-05-01

    We aimed to identify serum biomarkers for screening individuals who could adapt to high-altitude hypoxia at sea level. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, and routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiling between HHI and HHA groups collected at sea level. Routine blood tests revealed that the concentrations of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling showed ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further sequence-identified as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347–356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205–214), and isoform 1 of fibrinogen α chain precursor (FGA 588–624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provided a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes.

  18. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems.

    PubMed

    Vrabie, Draguna; Lewis, Frank

    2009-04-01

    In this paper we present in a continuous-time framework an online approach to direct adaptive optimal control with infinite horizon cost for nonlinear systems. The algorithm converges online to the optimal control solution without knowledge of the internal system dynamics. Closed-loop dynamic stability is guaranteed throughout. The algorithm is based on a reinforcement learning scheme, namely Policy Iterations, and makes use of neural networks, in an Actor/Critic structure, to parametrically represent the control policy and the performance of the control system. The two neural networks are trained to express the optimal controller and optimal cost function which describes the infinite horizon control performance. Convergence of the algorithm is proven under the realistic assumption that the two neural networks do not provide perfect representations for the nonlinear control and cost functions. The result is a hybrid control structure which involves a continuous-time controller and a supervisory adaptation structure which operates based on data sampled from the plant and from the continuous-time performance dynamics. Such a control structure is unlike any standard form of controller previously seen in the literature. Simulation results, obtained considering two second-order nonlinear systems, are provided. PMID:19362449

  19. Identification of novel serum peptide biomarkers for high-altitude adaptation: a comparative approach.

    PubMed

    Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen

    2016-01-01

    We aimed to identify serum biomarkers for screening individuals who could adapt to high-altitude hypoxia at sea level. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, and routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiling between HHI and HHA groups collected at sea level. Routine blood tests revealed that the concentrations of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling showed ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further sequence-identified as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347-356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205-214), and isoform 1 of fibrinogen α chain precursor (FGA 588-624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provided a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes.

  20. [Molecular genetic bases of adaptation processes and approaches to their analysis].

    PubMed

    Salmenkova, E A

    2013-01-01

    Great interest in studying the molecular genetic bases of the adaptation processes is explained by their importance in understanding evolutionary changes, in the development of intraspecific and interspecific genetic diversity, and in the creation of approaches and programs for maintaining and restoring the population. The article examines the sources and conditions for generating adaptive genetic variability and the contribution of neutral and adaptive genetic variability to the population structure of the species; methods for identifying the adaptive genetic variability on the genome level are also described. Considerable attention is paid to the potential of new technologies of genome analysis, including next-generation sequencing and some accompanying methods. In conclusion, the important role of the joint use of genomics and proteomics approaches in understanding the molecular genetic bases of adaptation is emphasized.

  1. Computational prediction of riboswitch tertiary structures including pseudoknots by RAGTOP: a hierarchical graph sampling approach.

    PubMed

    Kim, Namhee; Zahran, Mai; Schlick, Tamar

    2015-01-01

    The modular organization of RNA structure has been exploited in various computational and theoretical approaches to identify RNA tertiary (3D) motifs and assemble RNA structures. Riboswitches exemplify this modularity in terms of both structural and functional adaptability of RNA components. Here, we extend our computational approach based on tree graph sampling to the prediction of riboswitch topologies by defining additional edges to mimick pseudoknots. Starting from a secondary (2D) structure, we construct an initial graph deduced from predicted junction topologies by our data-mining algorithm RNAJAG trained on known RNAs; we sample these graphs in 3D space guided by knowledge-based statistical potentials derived from bending and torsion measures of internal loops as well as radii of gyration for known RNAs. We present graph sampling results for 10 representative riboswitches, 6 of them with pseudoknots, and compare our predictions to solved structures based on global and local RMSD measures. Our results indicate that the helical arrangements in riboswitches can be approximated using our combination of modified 3D tree graph representations for pseudoknots, junction prediction, graph moves, and scoring functions. Future challenges in the field of riboswitch prediction and design are also discussed. PMID:25726463

  2. An Evidence-Based Public Health Approach to Climate Change Adaptation

    PubMed Central

    Eidson, Millicent; Tlumak, Jennifer E.; Raab, Kristin K.; Luber, George

    2014-01-01

    Background: Public health is committed to evidence-based practice, yet there has been minimal discussion of how to apply an evidence-based practice framework to climate change adaptation. Objectives: Our goal was to review the literature on evidence-based public health (EBPH), to determine whether it can be applied to climate change adaptation, and to consider how emphasizing evidence-based practice may influence research and practice decisions related to public health adaptation to climate change. Methods: We conducted a substantive review of EBPH, identified a consensus EBPH framework, and modified it to support an EBPH approach to climate change adaptation. We applied the framework to an example and considered implications for stakeholders. Discussion: A modified EBPH framework can accommodate the wide range of exposures, outcomes, and modes of inquiry associated with climate change adaptation and the variety of settings in which adaptation activities will be pursued. Several factors currently limit application of the framework, including a lack of higher-level evidence of intervention efficacy and a lack of guidelines for reporting climate change health impact projections. To enhance the evidence base, there must be increased attention to designing, evaluating, and reporting adaptation interventions; standardized health impact projection reporting; and increased attention to knowledge translation. This approach has implications for funders, researchers, journal editors, practitioners, and policy makers. Conclusions: The current approach to EBPH can, with modifications, support climate change adaptation activities, but there is little evidence regarding interventions and knowledge translation, and guidelines for projecting health impacts are lacking. Realizing the goal of an evidence-based approach will require systematic, coordinated efforts among various stakeholders. Citation: Hess JJ, Eidson M, Tlumak JE, Raab KK, Luber G. 2014. An evidence-based public

  3. Small sample properties of an adaptive filter with application to low volume statistical process control

    SciTech Connect

    Crowder, S.V.; Eshleman, L.

    1998-08-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper the authors address the issue of low volume statistical process control. They investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. The authors develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, they study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. They show that far fewer data values are needed than is typically recommended for process control applications. They also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.
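
    A minimal sketch of the general idea (parameters of an autocorrelated process estimated recursively from a short series, with one-step-ahead residuals available for monitoring); the AR(1) model, recursive least-squares update, and all numbers are illustrative assumptions rather than the authors' estimator.

    ```python
    import numpy as np

    # Simulate a short, low-volume AR(1) series, then fit its parameters online.
    rng = np.random.default_rng(1)
    phi_true, mu_true = 0.6, 10.0
    x = [mu_true]
    for _ in range(40):
        x.append(mu_true + phi_true * (x[-1] - mu_true) + rng.normal(0, 0.5))
    x = np.array(x)

    theta = np.zeros(2)              # [intercept, AR coefficient], initially unknown
    P = np.eye(2) * 1e3              # large initial parameter uncertainty
    residuals = []
    for t in range(1, len(x)):
        h = np.array([1.0, x[t - 1]])        # regressor for x[t]
        e = x[t] - h @ theta                 # one-step prediction error (monitored)
        residuals.append(e)
        K = P @ h / (1.0 + h @ P @ h)        # recursive least-squares gain
        theta = theta + K * e
        P = P - np.outer(K, h @ P)

    print("estimated [c, phi] =", theta, " implied process mean =", theta[0] / (1 - theta[1]))
    ```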

  4. Small Sample Properties of an Adaptive Filter with Application to Low Volume Statistical Process Control

    SciTech Connect

    CROWDER, STEPHEN V.

    1999-09-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper we address the issue of low volume statistical process control. We investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. We develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, we study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. We show that far fewer data values are needed than is typically recommended for process control applications. We also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.

  5. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    SciTech Connect

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.
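
    A hedged sketch of the standard replica-exchange (parallel-tempering) swap rule that such schemes build on; the paper's global configuration database, Gibbs sampling of stored states, and adaptive temperature update are not reproduced here.

    ```python
    import math, random

    # Standard parallel-tempering acceptance test: a swap of configurations held at
    # inverse temperatures beta_i and beta_j, with current energies E_i and E_j, is
    # accepted with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)]).
    def try_swap(beta_i, E_i, beta_j, E_j, rng=random.random):
        delta = (beta_i - beta_j) * (E_i - E_j)
        return delta >= 0 or rng() < math.exp(delta)

    # Example: neighbouring replicas at T = 1.0 and T = 1.2 (k_B = 1), made-up energies.
    accepted = try_swap(1 / 1.0, -310.2, 1 / 1.2, -305.7)
    print("swap accepted:", accepted)
    ```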

  6. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    DOE PAGES

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.

  7. Adaptively biased molecular dynamics: An umbrella sampling method with a time-dependent potential

    NASA Astrophysics Data System (ADS)

    Babin, Volodymyr; Karpusenka, Vadzim; Moradi, Mahmoud; Roland, Christopher; Sagui, Celeste

    We discuss an adaptively biased molecular dynamics (ABMD) method for the computation of a free energy surface for a set of reaction coordinates. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential. It is characterized by a small number of control parameters and an O(t) numerical cost with simulation time t. The method naturally allows for extensions based on multiple walkers and replica exchange mechanism. The workings of the method are illustrated with a number of examples, including sugar puckering, and free energy landscapes for polymethionine and polyproline peptides, and for a short β-turn peptide. ABMD has been implemented into the latest version (Case et al., AMBER 10; University of California: San Francisco, 2008) of the AMBER software package and is freely available to the simulation community.
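
    A metadynamics-flavoured cartoon (Python) of the evolving-bias idea in one dimension: a grid-based biasing potential is grown along the reaction coordinate so the walker escapes free-energy wells, and the negative of the accumulated bias approximates the free-energy profile. The double-well potential, hill size, and deposition rule are illustrative assumptions and do not reproduce the AMBER ABMD implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    grid = np.linspace(-2.0, 2.0, 81)
    bias = np.zeros_like(grid)                  # evolving biasing potential on a grid

    def dU(x):                                  # force of a double well U = x**4 - 2*x**2
        return 4.0 * x**3 - 4.0 * x

    x, dt, kT, hill = -1.0, 1e-3, 0.3, 5e-4
    for _ in range(50_000):
        i = np.argmin(np.abs(grid - x))
        db = np.gradient(bias, grid)[i]         # local slope of the evolving bias
        x += -(dU(x) + db) * dt + np.sqrt(2 * kT * dt) * rng.normal()
        x = float(np.clip(x, grid[0], grid[-1]))
        bias += hill * np.exp(-0.5 * ((grid - x) / 0.1) ** 2)   # deposit a small Gaussian hill

    free_energy = -(bias - bias.max())          # estimate up to an additive constant
    print("estimated barrier at x = 0:", free_energy[len(grid) // 2] - free_energy.min())
    ```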

  8. A global sampling approach to designing and reengineering RNA secondary structures.

    PubMed

    Levin, Alex; Lis, Mieszko; Ponty, Yann; O'Donnell, Charles W; Devadas, Srinivas; Berger, Bonnie; Waldispühl, Jérôme

    2012-11-01

    The development of algorithms for designing artificial RNA sequences that fold into specific secondary structures has many potential biomedical and synthetic biology applications. To date, this problem remains computationally difficult, and current strategies to address it resort to heuristics and stochastic search techniques. The most popular methods consist of two steps: First a random seed sequence is generated; next, this seed is progressively modified (i.e. mutated) to adopt the desired folding properties. Although computationally inexpensive, this approach raises several questions such as (i) the influence of the seed; and (ii) the efficiency of single-path directed searches that may be affected by energy barriers in the mutational landscape. In this article, we present RNA-ensign, a novel paradigm for RNA design. Instead of taking a progressive adaptive walk driven by local search criteria, we use an efficient global sampling algorithm to examine large regions of the mutational landscape under structural and thermodynamical constraints until a solution is found. When considering the influence of the seeds and the target secondary structures, our results show that, compared to single-path directed searches, our approach is more robust, succeeds more often and generates more thermodynamically stable sequences. An ensemble approach to RNA design is thus well worth pursuing as a complement to existing approaches. RNA-ensign is available at http://csb.cs.mcgill.ca/RNAensign. PMID:22941632

  9. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A new, novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA) is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models---with varying complexity---of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  10. Adaptation to floods in future climate: a practical approach

    NASA Astrophysics Data System (ADS)

    Doroszkiewicz, Joanna; Romanowicz, Renata; Radon, Radoslaw; Hisdal, Hege

    2016-04-01

    In this study some aspects of the application of the 1D hydraulic model are discussed with a focus on its suitability for flood adaptation under future climate conditions. The Biała Tarnowska catchment is used as a case study. A 1D hydraulic model is developed for the evaluation of inundation extent and risk maps in future climatic conditions. We analyse the following flood indices: (i) extent of inundation area; (ii) depth of water on flooded land; (iii) the flood wave duration; (iv) the volume of a flood wave over the threshold value. In this study we derive a model cross-section geometry following the results of primary research based on a 500-year flood inundation extent. We compare two methods of localisation of cross-sections from the point of view of their suitability to the derivation of the most precise inundation outlines. The aim is to specify embankment heights along the river channel that would protect the river valley in the most vulnerable locations under future climatic conditions. We present an experimental design for scenario analysis studies and uncertainty reduction options for future climate projections obtained from the EUROCORDEX project. Acknowledgements: This work was supported by the project CHIHE (Climate Change Impact on Hydrological Extremes), carried out in the Institute of Geophysics Polish Academy of Sciences, funded by Norway Grants (contract No. Pol-Nor/196243/80/2013). The hydro-meteorological observations were provided by the Institute of Meteorology and Water Management (IMGW), Poland.

  11. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
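
    A toy sketch of the energy-aware idea: the node lengthens its sampling interval when stored energy is low and shortens it when harvesting replenishes the buffer. The proportional rule, battery capacity, and per-sample energy cost below are hypothetical stand-ins for the EASA control law, not values from the paper.

    ```python
    # Bounds on sampling frequency, in samples per second (once per minute .. once per hour).
    F_MAX, F_MIN = 1 / 60, 1 / 3600
    E_FULL = 5000.0          # assumed battery capacity, joules
    E_SAMPLE = 2.5           # assumed energy cost per sample, joules

    def next_frequency(stored_energy, harvested_power):
        """Pick a sampling frequency from the current energy budget."""
        soc = max(0.0, min(1.0, stored_energy / E_FULL))
        f = F_MIN + soc * (F_MAX - F_MIN)        # more stored energy -> sample faster
        # never plan to spend more than the power we expect to harvest
        if harvested_power > 0:
            f = min(f, harvested_power / E_SAMPLE)
        return max(F_MIN, f)

    # Example: half-full battery, 20 mW of solar harvesting.
    f = next_frequency(2500.0, 0.020)
    print(f"sample every {1 / f:.0f} s")
    ```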

  12. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors

    PubMed Central

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559

  13. Real-time nutrient monitoring in rivers: adaptive sampling strategies, technological challenges and future directions

    NASA Astrophysics Data System (ADS)

    Blaen, Phillip; Khamis, Kieran; Lloyd, Charlotte; Bradley, Chris

    2016-04-01

    Excessive nutrient concentrations in river waters threaten aquatic ecosystem functioning and can pose substantial risks to human health. Robust monitoring strategies are therefore required to generate reliable estimates of river nutrient loads and to improve understanding of the catchment processes that drive spatiotemporal patterns in nutrient fluxes. Furthermore, these data are vital for prediction of future trends under changing environmental conditions and thus the development of appropriate mitigation measures. In recent years, technological developments have led to an increase in the use of continuous in-situ nutrient analysers, which enable measurements at far higher temporal resolutions than can be achieved with discrete sampling and subsequent laboratory analysis. However, such instruments can be costly to run and difficult to maintain (e.g. due to high power consumption and memory requirements), leading to trade-offs between temporal and spatial monitoring resolutions. Here, we highlight how adaptive monitoring strategies, comprising a mixture of temporal sample frequencies controlled by one or more 'trigger variables' (e.g. river stage, turbidity, or nutrient concentration), can advance our understanding of catchment nutrient dynamics while simultaneously overcoming many of the practical and economic challenges encountered in typical in-situ river nutrient monitoring applications. We present examples of short-term variability in river nutrient dynamics, driven by complex catchment behaviour, which support our case for the development of monitoring systems that can adapt in real-time to rapid environmental changes. In addition, we discuss the advantages and disadvantages of current nutrient monitoring techniques, and suggest new research directions based on emerging technologies and highlight how these might improve: 1) monitoring strategies, and 2) understanding of linkages between catchment processes and river nutrient fluxes.
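
    An illustrative trigger-based rule of the kind described above: the analyser runs at a low baseline frequency and switches to a high-frequency mode whenever a trigger variable (river stage, turbidity, or rate of rise) crosses its threshold. The thresholds and intervals are placeholders, not values from any monitoring programme.

    ```python
    from datetime import timedelta

    BASE = timedelta(hours=1)      # baseline sampling interval
    STORM = timedelta(minutes=5)   # high-frequency interval during events

    def next_interval(stage_m, turbidity_ntu, d_stage_m_per_hr):
        triggered = (
            stage_m > 1.8                    # high river stage
            or turbidity_ntu > 150           # high turbidity
            or d_stage_m_per_hr > 0.2        # rapidly rising water level
        )
        return STORM if triggered else BASE

    print(next_interval(stage_m=2.1, turbidity_ntu=40, d_stage_m_per_hr=0.05))
    ```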

  14. Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil.

    PubMed

    France, Brian; Bell, William; Chang, Emily; Scholten, Trudy

    2015-01-01

    Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation. PMID:26714315
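
    A short worked illustration of why compositing reduces analytical burden, under a deliberately simplified two-stage (Dorfman-style) pooling model with a perfect assay and independent contamination of specimens; this is a generic sketch, not the study's soil-recovery protocol.

    ```python
    # Expected number of laboratory analyses per specimen under two-stage pooled
    # testing: one pooled test always, plus pool_size individual retests whenever
    # the pool tests positive.
    def analyses_per_specimen(prevalence: float, pool_size: int) -> float:
        p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
        return 1.0 / pool_size + p_pool_positive

    # At 2% prevalence, pools of ~5-10 specimens cut the analysis count by roughly 70%.
    for k in (2, 5, 10, 20):
        print(k, round(analyses_per_specimen(0.02, k), 3))
    ```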

  15. Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil.

    PubMed

    France, Brian; Bell, William; Chang, Emily; Scholten, Trudy

    2015-01-01

    Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation.

  16. Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil

    PubMed Central

    France, Brian; Bell, William; Chang, Emily; Scholten, Trudy

    2015-01-01

    Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation. PMID:26714315

  17. Developmental Structuralist Approach to the Classification of Adaptive and Pathologic Personality Organizations: Infancy and Early Childhood.

    ERIC Educational Resources Information Center

    Greenspan, Stanley I.; Lourie, Reginald S.

    This paper applies a developmental structuralist approach to the classification of adaptive and pathologic personality organizations and behavior in infancy and early childhood, and it discusses implications of this approach for preventive intervention. In general, as development proceeds, the structural capacity of the developing infant and child…

  18. Adaptation of a weighted regression approach to evaluate water quality trends in an estuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, a weighted regression approach developed to describe trends in pollutant transport in rivers was adapted to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach allows...

  19. Adaptation of a Weighted Regression Approach to Evaluate Water Quality Trends in an Estuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, we adapted a weighted regression approach to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach, originally developed to resolve pollutant transport trends in rivers...

  20. Adaptive Role Playing Games: An Immersive Approach for Problem Based Learning

    ERIC Educational Resources Information Center

    Sancho, Pilar; Moreno-Ger, Pablo; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar

    2009-01-01

    In this paper we present a general framework, called NUCLEO, for the application of socio-constructive educational approaches in higher education. The underlying pedagogical approach relies on an adaptation model in order to improve group dynamics, as this has been identified as one of the key features in the success of collaborative learning…

  1. A continuum of approaches toward developing culturally focused prevention interventions: from adaptation to grounding.

    PubMed

    Okamoto, Scott K; Kulis, Stephen; Marsiglia, Flavio F; Steiker, Lori K Holleran; Dustman, Patricia

    2014-04-01

    The purpose of this article is to describe a conceptual model of methods used to develop culturally focused interventions. We describe a continuum of approaches ranging from non-adapted/surface-structure adapted programs to culturally grounded programs, and present recent examples of interventions resulting from the application of each of these approaches. The model has implications for categorizing culturally focused prevention efforts more accurately, and for gauging the time, resources, and level of community engagement necessary to develop programs using each of the different methods. The model also has implications for funding decisions related to the development and evaluation of programs, and for planning of participatory research approaches with community members.

  2. An adaptive management approach to controlling suburban deer

    USGS Publications Warehouse

    Nielson, C.K.; Porter, W.F.; Underwood, H.B.

    1997-01-01

    Distance sight-resight sampling has particular relevance to aerial surveys, in which height above ground and aircraft speed make the critical assumption of certain detection on the track-line unrealistic. Recent developments in distance sight-resight theory have left practical issues related to data collection as the major impediment to widespread use of distance sight-resight sampling in aerial surveys. We describe and evaluate a system to automatically log, store, and process data from distance sight-resight aerial surveys. The system has a primary digital system and a secondary audio system. The digital system comprises a sighting 'gun' and small keypad for each observer, a global positioning system (GPS) receiver, and an altimeter interface, all linked to a central laptop computer. The gun is used to record time and angle of declination from the horizon of sighted groups of animals as they pass the aircraft. The keypad is used to record information on species and group size. The altimeter interface records altitude from the aircraft's radar altimeter, and the GPS receiver provides location data at user-definable intervals. We wrote software to import data into a database and convert it into a form appropriate for distance sight-resight analyses. Perpendicular distance of sighted groups of animals from the flight path is calculated from altitude and angle of declination. Time, angle of declination, species, and group size of sightings by independent observers on the same side of the aircraft are used as criteria to classify single and duplicate sightings, allowing testing of the critical distance sampling assumption (g(0)=1) and estimation of g(0) if that assumption fails. An audio system comprising headphones for each observer and a 4-track tape recorder allows recording of data that are difficult to accommodate in the digital system and provides a backup to the digital system. We evaluated the system by conducting experimental surveys and reviewing results
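
    A small sketch of the distance calculation described above (perpendicular distance recovered from radar altitude and the recorded angle of declination below the horizon). The trigonometry is standard, but the helper and its numbers are illustrative and assume the angle is logged as the group passes abeam of the aircraft; this is not the survey system's actual software.

    ```python
    import math

    # Perpendicular distance of a sighted group from the flight path, given the
    # aircraft's height above ground and the sighting "gun" angle of declination
    # measured down from the horizon.
    def perpendicular_distance(altitude_m: float, declination_deg: float) -> float:
        return altitude_m / math.tan(math.radians(declination_deg))

    print(perpendicular_distance(altitude_m=150.0, declination_deg=30.0))  # ~259.8 m
    ```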

  3. Risk assessment of nanomaterials and nanoproducts - adaptation of traditional approaches

    NASA Astrophysics Data System (ADS)

    Jahnel, J.; Fleischer, T.; Seitz, S. B.

    2013-04-01

    Different approaches have been adopted for assessing the potential risks of conventional chemicals and products for human health. In general, the traditional paradigm is a toxicologically driven, chemical-by-chemical approach, focusing on single toxic endpoints. Scope and responsibilities for the development and implementation of a risk assessment concept vary across sectors and areas and depend on the specific regulatory environment and the specific protection goals. Thus, risk assessment implementation is a complex task based not only on science-based knowledge but also on the regulatory context involving different parties and stakeholders. Questions have been raised as to whether standard paradigms for conventional chemicals would be applicable and adequate for new materials, products and applications of nanotechnology. Most scientists and stakeholders assume that current standard methods are in principle applicable to nanomaterials, but specific aspects require further development. The paper presents additional technical improvements, such as the complementary use of the life cycle methodology and the support of risk-based classification systems. Aspects that improve the utility of risk assessment with regard to societal impacts on risk governance are also discussed.

  4. Farms adaptation to changes in flood risk: a management approach

    NASA Astrophysics Data System (ADS)

    Pivot, Jean-Marc; Martin, Philippe

    2002-10-01

    Creating flood expansion areas e.g. for the protection of urban areas from flooding involves a localised increase in risk which may require farmers to be compensated for crop damage or other losses. With this in mind, the paper sets out the approach used to study the problem and gives results obtained from a survey of farms liable to flooding in central France. The approach is based on a study of decisions made by farmers in situations of uncertainty, using the concept of 'model of action'. The results show that damage caused to farming areas by flooding should be considered both at field level and at farm level. The damage caused to the field depends on the flood itself, the fixed characteristics of the field, and the plant species cultivated. However, the losses to the farm taken as a whole can differ considerably from those for the flooded field, due to 'knock-on' effects on farm operations which depend on the internal organization, the availability of production resources, and the farmer's objectives, both for the farm as a whole and for its individual enterprises. Three main strategies regarding possible flood events were identified. Reasons for choosing one of these include the way the farmer perceives the risk and the size of the area liable to flooding. Finally, the formalisation of farm system management in the face of uncertainty, especially due to flooding, enables compensation to be calculated for farmers whose land is affected by the creation of flood expansion areas.

  5. Design of Field Experiments for Adaptive Sampling of the Ocean with Autonomous Vehicles

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Ooi, B. H.; Cho, W.; Dao, M. H.; Tkalich, P.; Patrikalakis, N. M.

    2010-05-01

    Due to the highly non-linear and dynamical nature of oceanic phenomena, the predictive capability of various ocean models depends on the availability of operational data. A practical method to improve the accuracy of the ocean forecast is to use a data assimilation methodology to combine in-situ measured and remotely acquired data with numerical forecast models of the physical environment. Autonomous surface and underwater vehicles with various sensors are economic and efficient tools for exploring and sampling the ocean for data assimilation; however there is an energy limitation to such vehicles, and thus effective resource allocation for adaptive sampling is required to optimize the efficiency of exploration. In this paper, we use physical oceanography forecasts of the coastal zone of Singapore for the design of a set of field experiments to acquire useful data for model calibration and data assimilation. The design process of our experiments relied on the oceanography forecast including the current speed, its gradient, and vorticity in a given region of interest for which permits for field experiments could be obtained and for time intervals that correspond to strong tidal currents. Based on these maps, resources available to our experimental team, including Autonomous Surface Craft (ASC) are allocated so as to capture the oceanic features that result from jets and vortices behind bluff bodies (e.g., islands) in the tidal current. Results are summarized from this resource allocation process and field experiments conducted in January 2009.

  6. Novel Sample-handling Approach for XRD Analysis with Minimal Sample Preparation

    NASA Technical Reports Server (NTRS)

    Sarrazin, P.; Chipera, S.; Bish, D.; Blake, D.; Feldman, S.; Vaniman, D.; Bryson, C.

    2004-01-01

    Sample preparation and sample handling are among the most critical operations associated with X-ray diffraction (XRD) analysis. These operations require attention in a laboratory environment, but they become a major constraint in the deployment of XRD instruments for robotic planetary exploration. We are developing a novel sample handling system that dramatically relaxes the constraints on sample preparation by allowing characterization of coarse-grained material that would normally be impossible to analyze with conventional powder-XRD techniques.

  7. Writing Samples Viewed from Different Perspectives: An Approach to Validity.

    ERIC Educational Resources Information Center

    Carlson, Sybil B.

    The objective description and identification of variables that meaningfully distinguish reasoning skills couched in written discourse were studied, comparing scores obtained from different perspectives on the same writing samples. A total of 406 writing samples on 2 topics by 203 students who had taken the Graduate Record Examinations, mostly…

  8. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    SciTech Connect

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
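
    A simplified, generic illustration of the "all samples negative" question discussed above, assuming random sampling, independent samples, and a binomial detection model; the formula and numbers below are a sketch for intuition, not the report's hotspot or CJR methods.

    ```python
    import math

    # If a fraction f of the decision area were contaminated, a random sample returns
    # a (true) positive with probability f * (1 - FNR).  The number of random samples
    # needed so that at least one positive would be seen with confidence C is then
    # ceil(ln(1 - C) / ln(1 - f * (1 - FNR))).
    def samples_required(confidence: float, frac_contaminated: float, fnr: float) -> int:
        p_detect_one = frac_contaminated * (1.0 - fnr)
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_detect_one))

    print(samples_required(confidence=0.95, frac_contaminated=0.01, fnr=0.0))   # ~299
    print(samples_required(confidence=0.95, frac_contaminated=0.01, fnr=0.10))  # ~332
    ```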

  9. The Application of Adaptive Sampling and Analysis Program (ASAP) Techniques to NORM Sites

    SciTech Connect

    Johnson, Robert; Smith, Karen P.; Quinn, John

    1999-10-29

    The results from the Michigan demonstration establish that this type of approach can be very effective for NORM sites. The advantages include (1) greatly reduced per sample analytical costs; (2) a reduced reliance on soil sampling and ex situ gamma spectroscopy analyses; (3) the ability to combine characterization with remediation activities in one fieldwork cycle; (4) improved documentation; and (5) ultimately better remediation, as measured by greater precision in delineating soils that are not in compliance with requirements from soils that are in compliance. In addition, the demonstration showed that the use of real-time technologies, such as the RadInSoil, can facilitate the implementation of a Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM)-based final status survey program

  10. Land-based approach to evaluate sustainable land management and adaptive capacity of ecosystems/lands

    NASA Astrophysics Data System (ADS)

    Kust, German; Andreeva, Olga

    2015-04-01

    A number of new concepts and paradigms have appeared during the last decades, such as sustainable land management (SLM), climate change (CC) adaptation, environmental services, ecosystem health, and others. All of these initiatives still lack a common scientific platform, although some agreements in terminology were reached, schemes of links and feedback loops created, and some models developed. Nevertheless, in spite of all these scientific achievements, land-related issues are still not in the focus of CC adaptation and mitigation. The latter has not grown much beyond the "greenhouse gases" (GHG) concept, which makes land degradation the "forgotten side of climate change". A possible way to integrate the concepts of climate and desertification/land degradation could be to consider the "GHG" approach as providing a global solution and the "land" approach as providing a local solution that covers other "locally manifesting" issues of global importance (biodiversity conservation, food security, disasters and risks, etc.) and serves as a central concept among those. The SLM concept is a land-based approach, which includes the concepts of both the ecosystem-based approach (EbA) and the community-based approach (CbA). SLM can serve as an integral CC adaptation strategy, being based on the statement "the more healthy and resilient the system is, the less vulnerable and more adaptive it will be to any external changes and forces, including climate". The biggest scientific issue is the methods to evaluate SLM and the results of SLM investments. We suggest using an approach based on the understanding of the balance or equilibrium of the land and nature components as the major sign of a sustainable system. From this point of view it is easier to understand the state of ecosystem stress, the size of the "health", the range of adaptive capacity, the drivers of degradation and the nature of SLM, as well as extended land use and the concept of environmental land management as the improved SLM approach.

  11. Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach

    PubMed Central

    2014-01-01

    Background Cross-cultural adaptation is a necessary process to effectively use existing instruments in other cultural and language settings. The process of cross-culturally adapting existing instruments, including translation, is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice in achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted utilizing a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn’s recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin’s back-translation approach was utilized, followed by a committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to effectively ensure cross-cultural adaptation increases our confidence that the conclusions we make based on our self-report instrument(s) will be stronger. In this way, our aim to achieve strong cross-cultural adaptation of our consolidated instruments was achieved while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population

  12. The Colorado Climate Preparedness Project: A Systematic Approach to Assessing Efforts Supporting State-Level Adaptation

    NASA Astrophysics Data System (ADS)

    Klein, R.; Gordon, E.

    2010-12-01

    Scholars and policy analysts often contend that an effective climate adaptation strategy must entail "mainstreaming," or incorporating responses to possible climate impacts into existing planning and management decision frameworks. Such an approach, however, makes it difficult to assess the degree to which decisionmaking entities are engaging in adaptive activities that may or may not be explicitly framed around a changing climate. For example, a drought management plan may not explicitly address climate change, but the activities and strategies outlined in it may reduce vulnerabilities posed by a variable and changing climate. Consequently, to generate a strategic climate adaptation plan requires identifying the entire suite of activities that are implicitly linked to climate and may affect adaptive capacity within the system. Here we outline a novel, two-pronged approach, leveraging social science methods, to understanding adaptation throughout state government in Colorado. First, we conducted a series of interviews with key actors in state and federal government agencies, non-governmental organizations, universities, and other entities engaged in state issues. The purpose of these interviews was to elicit information about current activities that may affect the state’s adaptive capacity and to identify future climate-related needs across the state. Second, we have developed an interactive database cataloging organizations, products, projects, and people actively engaged in adaptive planning and policymaking that are relevant to the state of Colorado. The database includes a wiki interface, helping create a dynamic component that will enable frequent updating as climate-relevant information emerges. The results of this project are intended to paint a clear picture of sectors and agencies with higher and lower levels of adaptation awareness and to provide a roadmap for the next gubernatorial administration to pursue a more sophisticated climate adaptation agenda

  13. Selecting a Sample for Your Experiment: A Non-Random Stratified Sampling Approach

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2012-01-01

    The purpose of this paper is to develop a more general method for sample recruitment in experiments that is purposive (not random) and that results in a sample that is compositionally similar to the generalization population. This work builds on Tipton et al. (2011) by offering solutions to a larger class of problems than the non-overlapping…

  14. 120nm resolution in thick samples with structured illumination and adaptive optics

    NASA Astrophysics Data System (ADS)

    Thomas, Benjamin; Sloan, Megan; Wolstenholme, Adrian J.; Kner, Peter

    2014-03-01

    Linear Structured Illumination Microscopy (SIM) provides a two-fold increase over the diffraction-limited resolution. SIM produces excellent images with 120nm resolution in tissue culture cells in two and three dimensions. For SIM to work correctly, the point spread function (PSF) and optical transfer function (OTF) must be known, and, ideally, should be unaberrated. When imaging through thick samples, aberrations will be introduced into the optical system which will reduce the peak intensity and increase the width of the PSF. This will lead to reduced resolution and artifacts in SIM images. Adaptive optics can be used to correct the optical wavefront, restoring the PSF to its unaberrated state, and AO has been used in several types of fluorescence microscopy. We demonstrate that AO can be used with SIM to achieve 120nm resolution through 25 μm of tissue by imaging through the full thickness of an adult C. elegans roundworm. The aberrations can be corrected over a 25μm × 45μm field of view with one wavefront correction setting, demonstrating that AO can be used effectively with widefield superresolution techniques.

  15. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to this user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
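
    As a rough illustration of the feedback-recording idea described above, the following Python sketch stores relevance feedback per (profile, reference) pair and generalizes it to new queries through token overlap. The class and method names, the Jaccard-style similarity, and the scoring rule are illustrative assumptions, not the paper's actual relevance-network formulation.

    ```python
    from collections import defaultdict

    class RelevanceNetwork:
        """Toy relevance network: records user feedback per (profile, reference)
        and generalizes it to similar queries via token overlap."""

        def __init__(self):
            # (profile, reference) -> {frozenset of query tokens: accumulated score}
            self.scores = defaultdict(dict)

        def feedback(self, profile, query, reference, relevant=True):
            key = (profile, reference)
            tokens = frozenset(query.lower().split())
            self.scores[key][tokens] = self.scores[key].get(tokens, 0.0) + (1.0 if relevant else -1.0)

        def rank(self, profile, query, references):
            """Rank candidate references for a query by generalized relevance."""
            tokens = frozenset(query.lower().split())

            def score(ref):
                total = 0.0
                for seen, s in self.scores.get((profile, ref), {}).items():
                    overlap = len(tokens & seen) / max(len(tokens | seen), 1)
                    total += overlap * s  # feedback from similar past queries counts more
                return total

            return sorted(references, key=score, reverse=True)

    net = RelevanceNetwork()
    net.feedback("engineer", "thermal shielding specs", "doc-42", relevant=True)
    print(net.rank("engineer", "thermal shielding", ["doc-7", "doc-42"]))
    ```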

  16. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
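
    The granule-generation step can be pictured with a short Python sketch that emits a minimal SPASE-like XML description for one CDF file. The element names and the identifier scheme below are loose assumptions for illustration only; an actual ADAPT run would have to follow the SPASE data model (namespaces, required fields) exactly.

    ```python
    import xml.etree.ElementTree as ET

    def make_granule(resource_id, parent_id, release_date, url):
        """Build a minimal SPASE-like Granule description for one data file.
        Element names follow the SPASE Granule resource type loosely; check the
        real schema before using this for registration."""
        spase = ET.Element("Spase")
        granule = ET.SubElement(spase, "Granule")
        ET.SubElement(granule, "ResourceID").text = resource_id
        ET.SubElement(granule, "ReleaseDate").text = release_date
        ET.SubElement(granule, "ParentID").text = parent_id
        source = ET.SubElement(granule, "Source")
        ET.SubElement(source, "SourceType").text = "Data"
        ET.SubElement(source, "URL").text = url
        return ET.tostring(spase, encoding="unicode")

    # Hypothetical identifiers and URL, just to show the shape of the output.
    print(make_granule(
        "spase://EXAMPLE/Granule/AC_H0_MFI/20160501",
        "spase://EXAMPLE/NumericalData/AC_H0_MFI",
        "2016-05-02T00:00:00Z",
        "https://cdaweb.gsfc.nasa.gov/data/example/ac_h0_mfi_20160501.cdf"))
    ```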

  17. Experimental Approaches to Microarray Analysis of Tumor Samples

    ERIC Educational Resources Information Center

    Furge, Laura Lowe; Winter, Michael B.; Meyers, Jacob I.; Furge, Kyle A.

    2008-01-01

    Comprehensive measurement of gene expression using high-density nucleic acid arrays (i.e. microarrays) has become an important tool for investigating the molecular differences in clinical and research samples. Consequently, inclusion of discussion in biochemistry, molecular biology, or other appropriate courses of microarray technologies has…

  18. An enhanced adaptive management approach for remediation of legacy mercury in the South River.

    PubMed

    Foran, Christy M; Baker, Kelsie M; Grosso, Nancy R; Linkov, Igor

    2015-01-01

    Uncertainties about future conditions and the effects of chosen actions, as well as increasing resource scarcity, have been driving forces in the utilization of adaptive management strategies. However, many applications of adaptive management have been criticized for a number of shortcomings, including a limited ability to learn from actions and a lack of consideration of stakeholder objectives. To address these criticisms, we supplement existing adaptive management approaches with a decision-analytical approach that first informs the initial selection of management alternatives and then allows for periodic re-evaluation or phased implementation of management alternatives based on monitoring information and incorporation of stakeholder values. We describe the application of this enhanced adaptive management (EAM) framework to compare remedial alternatives for mercury in the South River, based on an understanding of the loading and behavior of mercury in the South River near Waynesboro, VA. The outcomes show that the ranking of remedial alternatives is influenced by uncertainty in the mercury loading model, by the relative importance placed on different criteria, and by cost estimates. The process itself demonstrates that a decision model can link project performance criteria, decision-maker preferences, environmental models, and short- and long-term monitoring information with management choices to help shape a remediation approach that provides useful information for adaptive, incremental implementation.
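
    A toy Python sketch of the re-ranking idea follows: remedial alternatives are scored by a weighted sum of criteria while the mercury-loading criterion is perturbed to mimic model uncertainty, so one can see how weights and uncertainty shift the ranking. The alternatives, criteria, weights, and numbers are invented for illustration and are not the study's actual decision model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative remedial alternatives scored on criteria in [0, 1], higher is better.
    alternatives = {
        "bank stabilization": {"Hg reduction": 0.6, "habitat impact": 0.7, "cost": 0.4},
        "sediment removal":   {"Hg reduction": 0.8, "habitat impact": 0.3, "cost": 0.2},
        "monitored recovery": {"Hg reduction": 0.3, "habitat impact": 0.9, "cost": 0.9},
    }
    weights = {"Hg reduction": 0.5, "habitat impact": 0.2, "cost": 0.3}

    def rank(alts, weights, loading_uncertainty=0.1, n=1000):
        """Rank alternatives by how often each wins the weighted score when the
        Hg-reduction criterion is perturbed to mimic loading-model uncertainty."""
        wins = {a: 0 for a in alts}
        for _ in range(n):
            scores = {}
            for a, crit in alts.items():
                noisy = dict(crit)
                noisy["Hg reduction"] += rng.normal(0, loading_uncertainty)
                scores[a] = sum(weights[c] * v for c, v in noisy.items())
            wins[max(scores, key=scores.get)] += 1
        return sorted(wins.items(), key=lambda kv: kv[1], reverse=True)

    print(rank(alternatives, weights))
    ```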

  19. Evaluating adaptive governance approaches to sustainable water management in north-west Thailand.

    PubMed

    Clark, Julian R A; Semmahasak, Chutiwalanch

    2013-04-01

    Adaptive governance is advanced as a potent means of addressing institutional fit of natural resource systems with prevailing modes of political-administrative management. Its advocates also argue that it enhances participatory and learning opportunities for stakeholders over time. Yet an increasing number of studies demonstrate real difficulties in implementing adaptive governance 'solutions'. This paper builds on these debates by examining the introduction of adaptive governance to water management in Chiang Mai province, north-west Thailand. The paper considers, first, the limitations of current water governance modes at the provincial scale, and the rationale for implementation of an adaptive approach. The new approach is then critically examined, with its initial performance and likely future success evaluated by (i) analysis of water stakeholders' opinions of its first year of operation; and (ii) comparison of its governance attributes against recent empirical accounts of implementation difficulty and failure of adaptive governance of natural resource management more generally. The analysis confirms the potentially significant role that the new approach can play in brokering and resolving the underlying differences in stakeholder representation and knowledge construction at the heart of the prevailing water governance modes in north-west Thailand.

  20. Kinetic Boltzmann approach adapted for modeling highly ionized matter created by x-ray irradiation of a solid

    NASA Astrophysics Data System (ADS)

    Ziaja, Beata; Saxena, Vikrant; Son, Sang-Kil; Medvedev, Nikita; Barbrel, Benjamin; Woloncewicz, Bianca; Stransky, Michal

    2016-05-01

    We report on the kinetic Boltzmann approach adapted for simulations of highly ionized matter created from a solid by its x-ray irradiation. X rays can excite inner-shell electrons, which leads to the creation of deeply lying core holes. Their relaxation, especially in heavier elements, can take complicated paths, leading to a large number of active configurations. Their number can be so large that solving the set of respective evolution equations becomes computationally inefficient and another modeling approach should be used instead. To circumvent this complexity, the commonly used continuum models employ a superconfiguration scheme. Here, we propose an alternative approach which still uses "true" atomic configurations but limits their number by restricting the sample relaxation to the predominant relaxation paths. We test its reliability, performing respective calculations for a bulk material consisting of light atoms and comparing the results with a full calculation including all relaxation paths. Prospective application for heavy elements is discussed.

  1. Kinetic Boltzmann approach adapted for modeling highly ionized matter created by x-ray irradiation of a solid.

    PubMed

    Ziaja, Beata; Saxena, Vikrant; Son, Sang-Kil; Medvedev, Nikita; Barbrel, Benjamin; Woloncewicz, Bianca; Stransky, Michal

    2016-05-01

    We report on the kinetic Boltzmann approach adapted for simulations of highly ionized matter created from a solid by its x-ray irradiation. X rays can excite inner-shell electrons, which leads to the creation of deeply lying core holes. Their relaxation, especially in heavier elements, can take complicated paths, leading to a large number of active configurations. Their number can be so large that solving the set of respective evolution equations becomes computationally inefficient and another modeling approach should be used instead. To circumvent this complexity, the commonly used continuum models employ a superconfiguration scheme. Here, we propose an alternative approach which still uses "true" atomic configurations but limits their number by restricting the sample relaxation to the predominant relaxation paths. We test its reliability, performing respective calculations for a bulk material consisting of light atoms and comparing the results with a full calculation including all relaxation paths. Prospective application for heavy elements is discussed. PMID:27300998

  2. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
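
    The thresholding logic can be sketched in a few lines of Python: the distribution of similarity scores for a noise-corrupted reference is characterized (here by brute-force Monte Carlo standing in for the paper's analytical expression) and a low quantile of it serves as the noise-level-dependent detection threshold. The sinusoidal "ECG cycle" and all parameter values are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def cosine_similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def adaptive_threshold(reference, noise_sigma, n_sim=5000, quantile=0.01):
        """Simulate the reference corrupted by white noise of the given level and
        take a low quantile of the similarity scores as the detection threshold.
        (The paper derives this distribution analytically; Monte Carlo stands in.)"""
        sims = []
        for _ in range(n_sim):
            noisy = reference + rng.normal(0, noise_sigma, size=reference.shape)
            sims.append(cosine_similarity(reference, noisy))
        return np.quantile(sims, quantile)

    t = np.linspace(0, 1, 200)
    reference = np.sin(2 * np.pi * t)              # stand-in for a normal cycle shape
    threshold = adaptive_threshold(reference, noise_sigma=0.5)

    observed = reference + rng.normal(0, 0.5, size=t.shape)
    score = cosine_similarity(reference, observed)
    print("anomaly" if score < threshold else "normal", round(score, 3), round(threshold, 3))
    ```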

  3. A Time-Critical Adaptive Approach for Visualizing Natural Scenes on Different Devices

    PubMed Central

    Dong, Tianyang; Liu, Siyuan; Xia, Jiajia; Fan, Jing; Zhang, Ling

    2015-01-01

    To automatically adapt to various hardware and software environments on different devices, this paper presents a time-critical adaptive approach for visualizing natural scenes. In this method, a simplified expression of a tree model is used for different devices. The best rendering scheme is intelligently selected to generate a particular scene by estimating the rendering time of trees based on their visual importance. Therefore, this approach can ensure the reality of natural scenes while maintaining a constant frame rate for their interactive display. To verify its effectiveness and flexibility, this method is applied in different devices, such as a desktop computer, laptop, iPad and smart phone. Applications show that the method proposed in this paper can not only adapt to devices with different computing abilities and system resources very well but can also achieve rather good visual realism and a constant frame rate for natural scenes. PMID:25723177
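
    A greedy Python sketch of the budget-driven selection is shown below: every tree starts at its cheapest level of detail and the remaining frame budget is spent upgrading the most visually important trees first. The per-LOD cost numbers and importance values are assumptions; the paper's actual rendering-time estimation is not reproduced.

    ```python
    def select_render_scheme(trees, frame_budget_ms):
        """Greedy sketch: start every tree at its cheapest level of detail (LOD),
        then upgrade trees in order of visual importance while the budget allows."""
        choice = {name: 0 for name, _, _ in trees}
        spent = sum(costs[0] for _, _, costs in trees)
        for name, importance, costs in sorted(trees, key=lambda t: -t[1]):
            for lod in range(1, len(costs)):
                extra = costs[lod] - costs[choice[name]]
                if spent + extra <= frame_budget_ms:
                    spent += extra
                    choice[name] = lod
        return choice, spent

    # (name, visual importance, [estimated cost in ms for LOD 0..2])
    trees = [("oak-near", 0.9, [0.4, 1.2, 3.0]),
             ("pine-mid", 0.5, [0.3, 0.8, 2.0]),
             ("fir-far",  0.1, [0.2, 0.6, 1.5])]
    print(select_render_scheme(trees, frame_budget_ms=5.0))
    ```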

  4. Unit dose sampling and final product performance: an alternative approach.

    PubMed

    Geoffroy, J M; Leblond, D; Poska, R; Brinker, D; Hsu, A

    2001-08-01

    This article documents a proposed plan for validation testing of content uniformity for final blends and finished solid oral dosage forms (SODFs). The testing logic and statistical justification of the plan are presented. The plan provides good assurance that a passing lot will perform well against the USP tablet content uniformity test. The operating characteristics of the test and the probability of needing to test for blend sampling bias are reported. A case study is presented.

  5. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-01

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
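
    The weighted binary matrix sampling step can be sketched in Python as follows: sub-models are drawn with per-variable inclusion weights, scored, and the weights are replaced by each variable's frequency in the best-performing sub-models, so the variable space shrinks toward informative variables. The least-squares scoring below is a stand-in for the PLS/NIR calibration models used in the paper, and all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def wbms_step(X, y, weights, n_models=500, keep_frac=0.1):
        """One VISSA-style iteration sketch: draw sub-models with a weighted binary
        matrix, score them, and return the new inclusion weights as the frequency
        of each variable among the best sub-models."""
        n_vars = X.shape[1]
        B = (rng.random((n_models, n_vars)) < weights).astype(int)   # binary matrix
        B[B.sum(axis=1) == 0, rng.integers(0, n_vars)] = 1           # avoid empty models
        errors = np.empty(n_models)
        for i, row in enumerate(B):
            idx = np.flatnonzero(row)
            coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
            errors[i] = np.mean((y - X[:, idx] @ coef) ** 2)
        best = np.argsort(errors)[: int(keep_frac * n_models)]
        return B[best].mean(axis=0)

    # Toy data: only the first 3 of 20 variables carry signal.
    X = rng.normal(size=(60, 20))
    y = X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + 0.1 * rng.normal(size=60)
    weights = np.full(20, 0.5)
    for _ in range(5):
        weights = wbms_step(X, y, weights)
    print(np.round(weights, 2))
    ```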

  6. Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures

    SciTech Connect

    Calyam, Prasad

    2014-09-15

    The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.

  7. Complexity Thinking in PE: Game-Centred Approaches, Games as Complex Adaptive Systems, and Ecological Values

    ERIC Educational Resources Information Center

    Storey, Brian; Butler, Joy

    2013-01-01

    Background: This article draws on the literature relating to game-centred approaches (GCAs), such as Teaching Games for Understanding, and dynamical systems views of motor learning to demonstrate a convergence of ideas around games as complex adaptive learning systems. This convergence is organized under the title "complexity thinking"…

  8. An Enhanced Approach to Combine Item Response Theory with Cognitive Diagnosis in Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Zheng, Chanjin; Chang, Hua-Hua

    2014-01-01

    Computerized adaptive testing offers the possibility of gaining information on both the overall ability and cognitive profile in a single assessment administration. Some algorithms aiming for these dual purposes have been proposed, including the shadow test approach, the dual information method (DIM), and the constraint weighted method. The…

  9. Development of an Assistance Environment for Tutors Based on a Co-Adaptive Design Approach

    ERIC Educational Resources Information Center

    Lavoue, Elise; George, Sebastien; Prevot, Patrick

    2012-01-01

    In this article, we present a co-adaptive design approach named TE-Cap (Tutoring Experience Capitalisation) that we applied for the development of an assistance environment for tutors. Since tasks assigned to tutors in educational contexts are not well defined, we are developing an environment which responds to needs which are not precisely…

  10. Three Authentic Curriculum-Integration Approaches to Bird Adaptations That Incorporate Technology and Thinking Skills

    ERIC Educational Resources Information Center

    Rule, Audrey C.; Barrera, Manuel T., III

    2008-01-01

    Integration of subject areas with technology and thinking skills is a way to help teachers cope with today's overloaded curriculum and to help students see the connectedness of different curriculum areas. This study compares three authentic approaches to teaching a science unit on bird adaptations for habitat that integrate thinking skills and…

  11. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    NASA Astrophysics Data System (ADS)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. From the Korean-spoken English speech recognition experiments, it is shown that ASR systems employing the state-tying and triphone-modeling level adaptation methods achieve relative reductions in the average word error rate (WER) of 17.1% and 22.1% for non-native speech, respectively, when compared to a baseline ASR system.

  12. ASICs Approach for the Implementation of a Symmetric Triangular Fuzzy Coprocessor and Its Application to Adaptive Filtering

    NASA Technical Reports Server (NTRS)

    Starks, Scott; Abdel-Hafeez, Saleh; Usevitch, Bryan

    1997-01-01

    This paper discusses the implementation of a fuzzy logic system using an ASICs design approach. The approach is based upon combining the inherent advantages of symmetric triangular membership functions and fuzzy singleton sets to obtain a novel structure for fuzzy logic system application development. The resulting structure utilizes a fuzzy static RAM to store the rule-base and the end-points of the triangular membership functions. This provides advantages over other approaches in which all sampled values of membership functions for all universes must be stored. The fuzzy coprocessor structure implements the fuzzification and defuzzification processes through a two-stage parallel pipeline architecture which is capable of executing complex fuzzy computations in less than 0.55us with an accuracy of more than 95%, thus making it suitable for a wide range of applications. Using the approach presented in this paper, a fuzzy logic rule-base can be directly downloaded via a host processor to an onchip rule-base memory with a size of 64 words. The fuzzy coprocessor's design supports up to 49 rules for seven fuzzy membership functions associated with each of the chip's two input variables. This feature allows designers to create fuzzy logic systems without the need for additional on-board memory. Finally, the paper reports on simulation studies that were conducted for several adaptive filter applications using the least mean squared adaptive algorithm for adjusting the knowledge rule-base.
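
    Since the design hinges on symmetric triangular membership functions (stored only by their end-points) and fuzzy singleton outputs, the corresponding fuzzification-defuzzification arithmetic can be sketched in a few lines of Python. The rule table and values below are invented; the sketch mirrors the arithmetic the coprocessor pipelines in hardware, not its architecture or timing.

    ```python
    def tri_membership(x, center, half_width):
        """Symmetric triangular membership: only the center and half-width
        (equivalently, the end-points) need to be stored."""
        return max(0.0, 1.0 - abs(x - center) / half_width)

    def fuzzy_inference(x1, x2, rules):
        """Toy singleton-consequent inference: each rule is
        ((c1, w1), (c2, w2), singleton_output); defuzzification is the
        firing-strength-weighted average of the output singletons."""
        num = den = 0.0
        for (c1, w1), (c2, w2), out in rules:
            strength = min(tri_membership(x1, c1, w1), tri_membership(x2, c2, w2))
            num += strength * out
            den += strength
        return num / den if den else 0.0

    rules = [((0.0, 1.0), (0.0, 1.0), -1.0),
             ((0.5, 1.0), (0.5, 1.0),  0.0),
             ((1.0, 1.0), (1.0, 1.0),  1.0)]
    print(fuzzy_inference(0.7, 0.6, rules))
    ```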

  13. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures

    PubMed Central

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762
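
    The core sampling move can be illustrated without a folding engine: positions are chosen for mutation with probability proportional to their contribution to the ensemble defect. The Python sketch below hard-codes a toy defect vector where Enzymer would recompute defects with NUPACK after each change, so it shows only the weighting step, not the full design loop.

    ```python
    import random

    def defect_weighted_mutation(sequence, position_defects, alphabet="AUGC"):
        """One defect-weighted sampling step: pick a position with probability
        proportional to its (assumed) ensemble-defect contribution and mutate it
        to a different nucleotide."""
        pos = random.choices(range(len(sequence)), weights=position_defects, k=1)[0]
        new_base = random.choice([b for b in alphabet if b != sequence[pos]])
        return sequence[:pos] + new_base + sequence[pos + 1:], pos

    # Toy sequence and made-up per-position defect values.
    seq = "GGGAAACCCUUU"
    defects = [0.05, 0.05, 0.05, 0.4, 0.4, 0.4, 0.05, 0.05, 0.05, 0.1, 0.1, 0.1]
    print(defect_weighted_mutation(seq, defects))
    ```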

  14. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.

    PubMed

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762

  15. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.

    PubMed

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer.

  16. A sampling and classification item selection approach with content balancing.

    PubMed

    Chen, Pei-Hua

    2015-03-01

    Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM. PMID:24610145

  17. The role of adaptive management as an operational approach for resource management agencies

    USGS Publications Warehouse

    Johnson, B.L.

    1999-01-01

    In making resource management decisions, agencies use a variety of approaches that involve different levels of political concern, historical precedence, data analyses, and evaluation. Traditional decision-making approaches have often failed to achieve objectives for complex problems in large systems, such as the Everglades or the Colorado River. I contend that adaptive management is the best approach available to agencies for addressing this type of complex problem, although its success has been limited thus far. Traditional decision-making approaches have been fairly successful at addressing relatively straightforward problems in small, replicated systems, such as management of trout in small streams or pulp production in forests. However, this success may be jeopardized as more users place increasing demands on these systems. Adaptive management has received little attention from agencies for addressing problems in small-scale systems, but I suggest that it may be a useful approach for creating a holistic view of common problems and developing guidelines that can then be used in simpler, more traditional approaches to management. Although adaptive management may be more expensive to initiate than traditional approaches, it may be less expensive in the long run if it leads to more effective management. The overall goal of adaptive management is not to maintain an optimal condition of the resource, but to develop an optimal management capacity. This is accomplished by maintaining ecological resilience that allows the system to react to inevitable stresses, and generating flexibility in institutions and stakeholders that allows managers to react when conditions change. The result is that, rather than managing for a single, optimal state, we manage within a range of acceptable outcomes while avoiding catastrophes and irreversible negative effects. Copyright © 1999 by The Resilience Alliance.

  18. Some Features of the Sampling Distribution of the Ability Estimate in Computerized Adaptive Testing According to Two Stopping Rules.

    ERIC Educational Resources Information Center

    Blais, Jean-Guy; Raiche, Gilles

    This paper examines some characteristics of the statistics associated with the sampling distribution of the proficiency level estimate when the Rasch model is used. These characteristics allow the judgment of the meaning to be given to the proficiency level estimate obtained in adaptive testing, and as a consequence, they can illustrate the…

  19. The Parent Version of the Preschool Social Skills Rating System: Psychometric Analysis and Adaptation with a German Preschool Sample

    ERIC Educational Resources Information Center

    Hess, Markus; Scheithauer, Herbert; Kleiber, Dieter; Wille, Nora; Erhart, Michael; Ravens-Sieberer, Ulrike

    2014-01-01

    The Social Skills Rating System (SSRS) developed by Gresham and Elliott (1990) is a multirater, norm-referenced instrument measuring social skills and adaptive behavior in preschool children. The aims of the present study were (a) to test the factorial structure of the Parent Form of the SSRS for the first time with a German preschool sample (391…

  20. An adaptive online learning approach for Support Vector Regression: Online-SVR-FID

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Zio, Enrico

    2016-08-01

    Support Vector Regression (SVR) is a popular supervised data-driven approach for building empirical models from available data. Like all data-driven methods, under non-stationary environmental and operational conditions it needs to be provided with adaptive learning capabilities, which might become computationally burdensome with large datasets accumulating dynamically. In this paper, a cost-efficient online adaptive learning approach is proposed for SVR by combining Feature Vector Selection (FVS) and Incremental and Decremental Learning. The proposed approach adaptively modifies the model only when different pattern drifts are detected according to proposed criteria. Two tolerance parameters are introduced in the approach to control the computational complexity, reduce the influence of the intrinsic noise in the data and avoid the overfitting problem of SVR. Comparisons of the prediction results are made with other online learning approaches, e.g. NORMA, SOGA, KRLS and Incremental Learning, on several artificial datasets and a real case study concerning time series prediction based on data recorded on a component of a nuclear power generation system. The performance indicators MSE and MARE computed on the test dataset demonstrate the efficiency of the proposed online learning method.
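
    A much-simplified Python sketch of the "adapt only when a drift is detected" idea follows, using scikit-learn's SVR and a plain error tolerance in place of the paper's FVS-based criteria and incremental/decremental updates; it refits from scratch on a sliding window, so it is illustrative rather than the Online-SVR-FID algorithm itself.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    class DriftGatedSVR:
        """Refit an SVR on a sliding window only when the prediction error on a
        new sample exceeds a tolerance; a crude stand-in for drift detection."""

        def __init__(self, tolerance=0.3, window=200):
            self.tolerance = tolerance
            self.window = window
            self.X, self.y = [], []
            self.model = None

        def update(self, x, y):
            x = np.atleast_1d(x)
            drift = self.model is None or abs(self.model.predict([x])[0] - y) > self.tolerance
            self.X.append(x)
            self.y.append(y)
            self.X, self.y = self.X[-self.window:], self.y[-self.window:]
            if drift and len(self.X) >= 2:        # refit only on detected drift
                self.model = SVR(C=10.0, epsilon=0.05).fit(np.vstack(self.X), self.y)
            return drift

    m = DriftGatedSVR()
    t = np.linspace(0, 20, 400)
    signal = np.sin(t) + (t > 10) * 0.8           # regime change halfway through
    drift_events = sum(m.update([ti], si) for ti, si in zip(t, signal))
    print("drift-triggered updates:", drift_events, "of", len(t))
    ```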

  1. Station-keeping control for a stratospheric airship platform via fuzzy adaptive backstepping approach

    NASA Astrophysics Data System (ADS)

    Yang, Yueneng; Wu, Jie; Zheng, Wei

    2013-04-01

    This paper presents a novel approach for station-keeping control of a stratospheric airship platform in the presence of parametric uncertainty and external disturbance. First, conceptual design of the stratospheric airship platform is introduced, including the target mission, configuration, energy sources, propeller and payload. Second, the dynamics model of the airship platform is presented, and the mathematical model of its horizontal motion is derived. Third, a fuzzy adaptive backstepping control approach is proposed to develop the station-keeping control system for the simplified horizontal motion. The backstepping controller is designed assuming that the airship model is accurately known, and a fuzzy adaptive algorithm is used to approximate the uncertainty of the airship model. The stability of the closed-loop control system is proven via the Lyapunov theorem. Finally, simulation results illustrate the effectiveness and robustness of the proposed control approach.

  2. A Continuum of Approaches Toward Developing Culturally Focused Prevention Interventions: From Adaptation to Grounding

    PubMed Central

    Okamoto, Scott K.; Kulis, Stephen; Marsiglia, Flavio F.; Holleran Steiker, Lori K.; Dustman, Patricia

    2014-01-01

    The purpose of this article is to describe a conceptual model of methods used to develop culturally focused interventions. We describe a continuum of approaches ranging from nonadapted/surface-structure adapted programs to culturally grounded programs, and present recent examples of interventions resulting from the application of each of these approaches. The model has implications for categorizing culturally focused prevention efforts more accurately, and for gauging the time, resources, and level of community engagement necessary to develop programs using each of the different methods. The model also has implications for funding decisions related to the development and evaluation of programs, and for planning of participatory research approaches with community members. PMID:24322970

  3. A comparison of adaptive sampling designs and binary spatial models: A simulation study using a census of Bromus inermis

    USGS Publications Warehouse

    Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa

    2013-01-01

    Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9 % prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.

  4. An Efficient Adaptive Angle-Doppler Compensation Approach for Non-Sidelooking Airborne Radar STAP

    PubMed Central

    Shen, Mingwei; Yu, Jia; Wu, Di; Zhu, Daiyin

    2015-01-01

    In this study, the effects of non-sidelooking airborne radar clutter dispersion on space-time adaptive processing (STAP) is considered, and an efficient adaptive angle-Doppler compensation (EAADC) approach is proposed to improve the clutter suppression performance. In order to reduce the computational complexity, the reduced-dimension sparse reconstruction (RDSR) technique is introduced into the angle-Doppler spectrum estimation to extract the required parameters for compensating the clutter spectral center misalignment. Simulation results to demonstrate the effectiveness of the proposed algorithm are presented. PMID:26053755

  5. An Efficient Adaptive Angle-Doppler Compensation Approach for Non-Sidelooking Airborne Radar STAP.

    PubMed

    Shen, Mingwei; Yu, Jia; Wu, Di; Zhu, Daiyin

    2015-06-04

    In this study, the effects of non-sidelooking airborne radar clutter dispersion on space-time adaptive processing (STAP) is considered, and an efficient adaptive angle-Doppler compensation (EAADC) approach is proposed to improve the clutter suppression performance. In order to reduce the computational complexity, the reduced-dimension sparse reconstruction (RDSR) technique is introduced into the angle-Doppler spectrum estimation to extract the required parameters for compensating the clutter spectral center misalignment. Simulation results to demonstrate the effectiveness of the proposed algorithm are presented.

  6. An Efficient Adaptive Angle-Doppler Compensation Approach for Non-Sidelooking Airborne Radar STAP.

    PubMed

    Shen, Mingwei; Yu, Jia; Wu, Di; Zhu, Daiyin

    2015-01-01

    In this study, the effects of non-sidelooking airborne radar clutter dispersion on space-time adaptive processing (STAP) is considered, and an efficient adaptive angle-Doppler compensation (EAADC) approach is proposed to improve the clutter suppression performance. In order to reduce the computational complexity, the reduced-dimension sparse reconstruction (RDSR) technique is introduced into the angle-Doppler spectrum estimation to extract the required parameters for compensating the clutter spectral center misalignment. Simulation results to demonstrate the effectiveness of the proposed algorithm are presented. PMID:26053755

  7. A new approach for designing self-organizing systems and application to adaptive control

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Zhang, Shi; Lin, Yueqing; Huang, Song

    1993-01-01

    There is tremendous interest in the design of intelligent machines capable of autonomous learning and skillful performance under complex environments. A major task in designing such systems is to make the system plastic and adaptive when presented with new and useful information and stable in response to irrelevant events. A great body of knowledge, based on neuro-physiological concepts, has evolved as a possible solution to this problem. Adaptive resonance theory (ART) is a classical example under this category. The system dynamics of an ART network is described by a set of differential equations with nonlinear functions. An approach for designing self-organizing networks characterized by nonlinear differential equations is proposed.

  8. Improving the sampling efficiency of the Grand Canonical Simulated Quenching approach

    SciTech Connect

    Perez, Danny; Vernon, Louis J.

    2012-04-04

    Most common atomistic simulation techniques, like molecular dynamics or Metropolis Monte Carlo, operate under a constant interatomic Hamiltonian with a fixed number of atoms. Internal (atom positions or velocities) or external (simulation cell size or geometry) variables are then evolved dynamically or stochastically to yield sampling in different ensembles, such as microcanonical (NVE), canonical (NVT), isothermal-isobaric (NPT), etc. Averages are then taken to compute relevant physical properties. At least two limitations of these standard approaches can seriously hamper their application to many important systems: (1) they do not allow for the exchange of particles with a reservoir, and (2) the sampling efficiency is insufficient to allow converged results to be obtained because of the very long intrinsic timescales associated with these quantities. To fix ideas, one might want to identify low (free) energy configurations of grain boundaries (GB). In reality, grain boundaries are in contact with the grains, which act as reservoirs of defects (e.g., vacancies and interstitials). Since the GB can exchange particles with its environment, the most stable configuration cannot provably be found by sampling from NVE or NVT ensembles alone: one needs to allow the number of atoms in the sample to fluctuate. The first limitation can be circumvented by working in the grand canonical ensemble (μVT) or its derivatives (such as the semi-grand-canonical ensemble useful for the study of substitutional alloys). Monte Carlo methods were the first to be adapted to this kind of system, where the number of atoms is allowed to fluctuate. Many of these methods are based on the Widom insertion method [Widom63] where the chemical potential of a given chemical species can be inferred from the potential energy changes upon random insertion of a new particle within the simulation cell. Other techniques, such as the Gibbs ensemble Monte Carlo [Panagiotopoulos87] where exchanges of particles are
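
    The Widom insertion estimate mentioned above, mu_ex = -kT ln⟨exp(-ΔU/kT)⟩, can be sketched in a few lines of Python for a single Lennard-Jones snapshot; in practice the average would run over many configurations and include cutoff/tail corrections, and the positions below are just a random placeholder configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def widom_mu_excess(positions, box, n_insert=2000, eps=1.0, sigma=1.0, kT=1.0):
        """Widom test-particle insertion: mu_ex = -kT * ln< exp(-dU/kT) >, where dU
        is the energy of inserting a ghost particle at a random position. Minimal
        Lennard-Jones version with minimum-image convention, no cutoff."""
        boltz = []
        for _ in range(n_insert):
            trial = rng.random(3) * box
            d = positions - trial
            d -= box * np.round(d / box)                 # minimum image
            r2 = np.sum(d * d, axis=1)
            inv6 = (sigma ** 2 / r2) ** 3
            dU = np.sum(4.0 * eps * (inv6 ** 2 - inv6))
            boltz.append(np.exp(-dU / kT))
        return -kT * np.log(np.mean(boltz))

    box = 10.0
    positions = rng.random((100, 3)) * box               # placeholder dilute LJ snapshot
    print("mu_excess ~", round(widom_mu_excess(positions, box), 3))
    ```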

  9. A Direct Adaptive Control Approach in the Presence of Model Mismatch

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.; Tao, Gang; Khong, Thuan

    2009-01-01

    This paper considers the problem of direct model reference adaptive control when the plant-model matching conditions are violated due to abnormal changes in the plant or incorrect knowledge of the plant's mathematical structure. The approach consists of direct adaptation of state feedback gains for state tracking, and simultaneous estimation of the plant-model mismatch. Because of the mismatch, the plant can no longer track the state of the original reference model, but may be able to track a new reference model that still provides satisfactory performance. The reference model is updated if the estimated plant-model mismatch exceeds a bound that is determined via robust stability and/or performance criteria. The resulting controller is a hybrid direct-indirect adaptive controller that offers asymptotic state tracking in the presence of plant-model mismatch as well as parameter deviations.

  10. Prediction of contact forces of underactuated finger by adaptive neuro fuzzy approach

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Abbasi, Almas; Kiani, Kourosh; Al-Shammari, Eiman Tamah

    2015-12-01

    To obtain an adaptive finger, passive underactuation can be used. The underactuation principle can be used to adapt the shape of the fingers to grasped objects. Underactuated fingers do not require a control algorithm. In this study, a kinetostatic model of the underactuated finger mechanism was analyzed. The underactuation is achieved by adding compliance in every finger joint. Since the contact forces of the finger depend on the contact position between the finger and the object, it is suitable to build a prediction model for the contact forces as a function of the contact positions of the finger and grasped objects. In this study, prediction of the contact forces was established by a soft computing approach. An adaptive neuro-fuzzy inference system (ANFIS) was applied as the soft computing method to perform the prediction of the finger contact forces.

  11. A simple and flexible graphical approach for adaptive group-sequential clinical trials.

    PubMed

    Sugitani, Toshifumi; Bretz, Frank; Maurer, Willi

    2016-01-01

    In this article, we introduce a graphical approach to testing multiple hypotheses in group-sequential clinical trials allowing for midterm design modifications. It is intended for structured study objectives in adaptive clinical trials and extends the graphical group-sequential designs from Maurer and Bretz (Statistics in Biopharmaceutical Research 2013; 5: 311-320) to adaptive trial designs. The resulting test strategies can be visualized graphically and performed iteratively. We illustrate the methodology with two examples from our clinical trial practice. First, we consider a three-armed gold-standard trial with the option to reallocate patients to either the test drug or the active control group, while stopping the recruitment of patients to placebo, after having demonstrated superiority of the test drug over placebo at an interim analysis. Second, we consider a confirmatory two-stage adaptive design with treatment selection at interim.

  12. New approach based on compressive sampling for sample rate enhancement in DASs for low-cost sensing nodes.

    PubMed

    Bonavolontà, Francesco; D'Apuzzo, Massimo; Liccardo, Annalisa; Vadursi, Michele

    2014-10-13

    The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
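
    The reconstruction step can be illustrated with a small Python sketch: the signal is observed at randomly chosen instants of a fine time grid (the high-resolution time-basis) and recovered on the full grid with orthogonal matching pursuit over a sine/cosine dictionary. OMP here stands in for whatever sparse solver the authors use, the grid size, sparsity, and frequencies are arbitrary choices, and recovery only succeeds when the signal is sufficiently sparse for the number of samples taken.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def omp(A, y, n_nonzero):
        """Plain orthogonal matching pursuit: greedily pick dictionary atoms and
        re-solve least squares on the selected support."""
        residual, support = y.copy(), []
        for _ in range(n_nonzero):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    # Sparse-in-frequency test signal observed at random instants of a fine grid.
    N = 512                                   # reconstruction grid (effective rate)
    n_samples = 64                            # samples actually taken by the slow ADC
    t = np.sort(rng.choice(N, n_samples, replace=False))
    f1, f2 = 13, 47
    signal = lambda n: np.sin(2 * np.pi * f1 * n / N) + 0.5 * np.sin(2 * np.pi * f2 * n / N)
    y = signal(t)

    # Dictionary: sine/cosine atoms evaluated at the sampling instants.
    freqs = np.arange(1, N // 2)
    A = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs) / N),
                   np.cos(2 * np.pi * np.outer(t, freqs) / N)])
    x_hat = omp(A, y, n_nonzero=4)

    grid = np.arange(N)
    A_full = np.hstack([np.sin(2 * np.pi * np.outer(grid, freqs) / N),
                        np.cos(2 * np.pi * np.outer(grid, freqs) / N)])
    print("max reconstruction error:", np.max(np.abs(A_full @ x_hat - signal(grid))).round(4))
    ```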

  13. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    PubMed Central

    Bonavolontà, Francesco; D'Apuzzo, Massimo; Liccardo, Annalisa; Vadursi, Michele

    2014-01-01

    The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate. PMID:25313493

  14. A human factors approach to adapted access device prescription and customization.

    PubMed

    August, S; Weiss, P L

    1992-01-01

    Adapted access device prescription and customization is often a lengthy and cumbersome process. To date, few objective procedures are available to assist in the prescription process. Rather, clinician and client rely on a trial-and-error approach that is often severely constrained by the size of their adaptive device collection as well as the extent of clinical expertise. Furthermore, the large number of available options and lack of information delineating the mechanical and physical characteristics of these devices means that therapists must take time away from direct clinical contact to probe each adaptation in detail. There is available in the human factors domain a body of literature that is highly relevant to adapted access. Of particular interest are the studies that have addressed issues related to the suitability of standard and alternative input devices in terms of task productivity (via improvements in input speed, accuracy, and endurance), and their ability to minimize the risk of acute and chronic work-related dysfunction. This paper aims to consider the relevance of human factors research for physically disabled individuals. Three human factors issues--digit travel, digit loading, and device positioning--have been selected as representative of factors important in the configuration of adapted access devices.

  15. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
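
    The isolation logic can be pictured with a minimal Python sketch: each fault hypothesis keeps its own residual vector and adaptive threshold, and a hypothesis survives only if all of its residual components stay below threshold. The residual and threshold numbers below are placeholders; generating them is the job of the nonlinear adaptive estimators described in the paper.

    ```python
    import numpy as np

    def isolate_fault(residuals, thresholds):
        """A fault hypothesis remains plausible only if every component of its
        residual stays below the corresponding (adaptive) threshold."""
        plausible = [name for name, r in residuals.items()
                     if np.all(np.abs(r) < thresholds[name])]
        if len(plausible) == 1:
            return plausible[0]
        return "undecided: " + ", ".join(plausible or ["none"])

    # Placeholder residuals from a bank of three estimators after a sensor fault.
    residuals = {
        "sensor fault":    np.array([0.02, 0.05, 0.01]),   # stays below threshold
        "actuator fault":  np.array([0.40, 0.02, 0.03]),   # first component fires
        "component fault": np.array([0.10, 0.35, 0.02]),
    }
    thresholds = {k: np.array([0.15, 0.15, 0.15]) for k in residuals}
    print(isolate_fault(residuals, thresholds))
    ```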

  16. Application of combined morphological-molecular approaches to the identification of planktonic protists from environmental samples.

    PubMed

    Duff, Robert Joel; Ball, Hope; Lavrentyev, Peter J

    2008-01-01

    The value of molecular databases for unicellular eukaryotic identification and phylogenetic reconstruction is predicated on the availability of sequences and accuracy of taxonomic identifications that accompany those sequences. Biased representation of sequences is due in part to the differing ability to isolate and culture various groups of protists. Techniques that allow for parallel single-cell morphological and molecular identifications have been reported for a few groups of unicellular protists. We have sought to explore how those techniques can be adapted to work across a greater phylogenetic diversity of taxa. Twelve morphologically diverse and abundant members of limnetic microplankton, including ciliates, dinoflagellates, cryptophytes, stramenopiles, and synurophytes, were targeted for analysis. These cells were captured directly from environmental samples, identified, and prepared for sequence analyses using variations of single-cell extraction techniques depending on their size, mobility, and the absence or presence of the cell wall. The application of these techniques yielded a strong congruence between the morphological and molecular identifications of the targeted taxa. Challenges to the single-cell approach in some groups are discussed. The general ability to obtain DNA sequences and morphological descriptions from individual cells should open new avenues to studying either rare or difficult to culture taxa, even directly at the point of collection (e.g. remote locations or shipboard).

  17. Approaches to evaluating climate change impacts on species: a guide to initiating the adaptation planning process.

    PubMed

    Rowland, Erika L; Davison, Jennifer E; Graumlich, Lisa J

    2011-03-01

    Assessing the impact of climate change on species and associated management objectives is a critical initial step for engaging in the adaptation planning process. Multiple approaches are available. While all possess limitations to their application associated with the uncertainties inherent in the data and models that inform their results, conducting and incorporating impact assessments into the adaptation planning process at least provides some basis for making resource management decisions that are becoming inevitable in the face of rapidly changing climate. Here we provide a non-exhaustive review of long-standing (e.g., species distribution models) and newly developed (e.g., vulnerability indices) methods used to anticipate the response to climate change of individual species as a guide for managers grappling with how to begin the climate change adaptation process. We address the limitations (e.g., uncertainties in climate change projections) associated with these methods, and other considerations for matching appropriate assessment approaches with the management questions and goals. Thorough consideration of the objectives, scope, scale, time frame and available resources for a climate impact assessment allows for informed method selection. With many data sets and tools available on-line, the capacity to undertake and/or benefit from existing species impact assessments is accessible to those engaged in resource management. With some understanding of potential impacts, even if limited, adaptation planning begins to move toward the development of management strategies and targeted actions that may help to sustain functioning ecosystems and their associated services into the future.

  18. An implicit and adaptive nonlinear frequency domain approach for periodic viscous flows

    NASA Astrophysics Data System (ADS)

    Mosahebi, A.; Nadarajah, S.

    2014-12-01

    An implicit nonlinear Lower-Upper symmetric Gauss-Seidel (LU-SGS) solver has been extended to the adaptive Nonlinear Frequency Domain method (adaptive NLFD) for periodic viscous flows. The discretized equations are linearized in both the spatial and temporal directions, yielding an innovative segregated approach in which the effects of the neighboring cells are transferred to the right-hand side and updated iteratively. This property of the solver is aligned with the adaptive NLFD concept, in which different cells have different numbers of modes and hence should be treated individually. The segregated analysis of the modal equations avoids assembling and inverting a large left-hand-side matrix when a high number of modes is involved. This is an important characteristic for the flow solver of the adaptive NLFD method, where high modal content may be required in highly unsteady parts of the flow field. The implicit nonlinear LU-SGS solver has proven to be both robust and computationally efficient as the number of modes is increased. The developed solver is thoroughly validated for laminar vortex shedding behind a stationary cylinder, a high-angle-of-attack NACA0012 airfoil, and a plunging NACA0012 airfoil. An order-of-magnitude improvement in computational time is observed with the implicit approach over the classical modified 5-stage Runge-Kutta method.

  19. Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization.

    SciTech Connect

    Romero, Vicente José; Chen, Chun-Hung (George Mason University, Fairfax, VA)

    2006-11-01

    A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks "Is that alternative better or worse than this one?" rather than "How much better or worse is that alternative than this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and it uniquely has mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so it could be useful as a reference standard against which the efficiency and robustness of other methods can be compared, analogous to the role that Monte Carlo simulation plays in uncertainty propagation.
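
    To make the ordinal idea concrete, here is a minimal Python sketch (not the report's algorithm; the quadratic objective, noise level, step size, and vote threshold are all invented for illustration) in which a design step is accepted purely on a better/worse ranking from paired noisy evaluations:

        # Illustrative ordinal step decision: sample both alternatives under noise
        # and keep the candidate only if it wins a majority of paired comparisons.
        import random

        def noisy_objective(x, noise=0.5):
            # Hypothetical objective with additive Gaussian uncertainty.
            return (x - 2.0) ** 2 + random.gauss(0.0, noise)

        def candidate_preferred(x_current, x_candidate, n_pairs=25):
            wins = sum(noisy_objective(x_candidate) < noisy_objective(x_current)
                       for _ in range(n_pairs))
            return wins > n_pairs // 2

        x = 0.0
        for _ in range(300):
            step = random.choice([-0.1, 0.1])
            if candidate_preferred(x, x + step):
                x += step
        print("final design variable:", round(x, 2), "(true optimum 2.0)")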

  20. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-01-01

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. PMID:26985961

  1. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-03-11

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions.

  2. An Adaptive Particle Filtering Approach to Tracking Modes in a Varying Shallow Ocean Environment

    SciTech Connect

    Candy, J V

    2011-03-22

    The shallow ocean environment is ever changing, mostly due to temperature variations in its upper layers (< 100 m) that directly affect sound propagation throughout. The need to develop processors that are capable of tracking these changes implies a stochastic as well as an 'adaptive' design. The stochastic requirement follows directly from the multitude of variations created by uncertain parameters and noise. Some work has been accomplished in this area, but the stochastic nature was constrained to Gaussian uncertainties. It has been clear for a long time that this constraint was not particularly realistic, leading to a Bayesian approach that enables the representation of any uncertainty distribution. Sequential Bayesian techniques enable a class of processors capable of performing in an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean. In this paper adaptive processors providing enhanced signals for acoustic hydrophone measurements on a vertical array, as well as enhanced modal function estimates, are developed. Synthetic data are provided to demonstrate that this approach is viable.
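
    For readers unfamiliar with the underlying machinery, the following is a bare-bones bootstrap (sampling-importance-resampling) particle filter on a toy scalar random-walk state, included only to illustrate the sequential Bayesian idea; the paper's processor tracks modal functions in a shallow-ocean propagation model, and all noise levels below are invented:

        # Toy bootstrap particle filter: scalar random-walk state, noisy observations.
        import numpy as np

        rng = np.random.default_rng(0)
        n_particles, n_steps = 500, 50
        process_sd, obs_sd = 0.1, 0.5
        true_x = 0.0
        particles = rng.normal(0.0, 1.0, n_particles)
        weights = np.full(n_particles, 1.0 / n_particles)

        for _ in range(n_steps):
            true_x += rng.normal(0.0, process_sd)                    # hidden state evolves
            y = true_x + rng.normal(0.0, obs_sd)                     # noisy measurement
            particles += rng.normal(0.0, process_sd, n_particles)    # propagate particles
            weights *= np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
            weights /= weights.sum()
            # resample when the effective sample size collapses
            if 1.0 / np.sum(weights ** 2) < n_particles / 2:
                idx = rng.choice(n_particles, size=n_particles, p=weights)
                particles = particles[idx]
                weights[:] = 1.0 / n_particles
        print("posterior mean:", round(float(particles @ weights), 3),
              "true state:", round(true_x, 3))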

  3. A unique approach to the development of adaptive sensor systems for future spacecraft

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Tietz, J. C.; Sivertson, W. E.; Wilson, R. G.

    1979-01-01

    In the Shuttle era, it should be possible to develop adaptive remote sensor systems that serve specific researcher and user needs more directly while at the same time alleviating the data management problem via intelligent sensor capabilities. The present paper provides a summary of such an approach, wherein specific capabilities have been developed for future global monitoring applications. A detailed description of FILE-I (Feature Identification and Location Experiment) is included along with a summary of future experiments currently under development.

  4. Performance Monitoring and Assessment of Neuro-Adaptive Controllers for Aerospace Applications Using a Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Guenther, Kurt; Hodgkinson, John; Jacklin, Stephen; Richard, Michael; Schumann, Johann; Soares, Fola

    2005-01-01

    Modern exploration missions require modern control systems: control systems that can handle catastrophic changes in the system's behavior, compensate for slow deterioration in sustained operations, and support fast system identification. Adaptive controllers based upon neural networks have these capabilities, but they can only be used safely if proper verification and validation (V&V) can be done. In this paper we present our V&V approach and simulation results within NASA's Intelligent Flight Control Systems (IFCS).

  5. Development of an Abbreviated Form of the Penn Line Orientation Test Using Large Samples and Computerized Adaptive Test Simulation

    PubMed Central

    Moore, Tyler M.; Scott, J. Cobb; Reise, Steven P.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Gur, Raquel E.; Gur, Ruben C.

    2015-01-01

    Visuospatial processing is a commonly assessed neurocognitive domain, with deficits linked to dysfunction in right posterior regions of the brain. With the growth of large-scale clinical research studies there is an increased need for efficient and scalable assessments of neurocognition, including visuospatial processing. The purpose of the current study was to use a novel method that combines item response theory (IRT) and computerized adaptive testing (CAT) approaches to create an abbreviated form of the computerized Penn Line Orientation Test (PLOT). The 24-item PLOT was administered to 8,498 youths (aged 8 to 21) as part of the Philadelphia Neurodevelopmental Cohort study and, by web-based data collection, in an independent sample of 4,593 adults from Great Britain as part of a television documentary. IRT-based CAT simulations were used to select the best PLOT items for an abbreviated form by performing separate simulations in each group and choosing only items that were selected as useful (i.e., high item discrimination and in the appropriate difficulty range) in at least one of the simulations. Fifteen items were chosen for the final, short form of the PLOT, indicating substantial agreement among the models in how they evaluated each item's usefulness. Moreover, this abbreviated version performed comparably to the full version in tests of sensitivity to age and sex effects. This abbreviated version of the PLOT cuts administration time by 50% without detectable loss of information, which points to its feasibility for large-scale clinical and genomic studies. PMID:25822834
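
    A simplified sketch of the item-selection logic behind an IRT/CAT short form (2PL model, Fisher information): items contributing the most information across the ability range are the ones retained. The item parameters below are randomly generated placeholders, not the actual PLOT calibration:

        # 2PL item information; keep the 15 items carrying the most information.
        import numpy as np

        def item_information(theta, a, b):
            """Fisher information of a two-parameter logistic item at ability theta."""
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
            return a ** 2 * p * (1.0 - p)

        rng = np.random.default_rng(1)
        a = rng.uniform(0.5, 2.5, 24)       # hypothetical discrimination parameters
        b = rng.uniform(-2.0, 2.0, 24)      # hypothetical difficulty parameters
        thetas = np.linspace(-3.0, 3.0, 61)
        total_info = np.array([item_information(thetas, a[i], b[i]).sum()
                               for i in range(24)])
        keep = np.argsort(total_info)[-15:]  # retain the 15 most informative items
        print("retained item indices:", sorted(keep.tolist()))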

  6. Psychometric Properties of the Schedule for Nonadaptive and Adaptive Personality in a PTSD Sample

    ERIC Educational Resources Information Center

    Wolf, Erika J.; Harrington, Kelly M.; Miller, Mark W.

    2011-01-01

    This study evaluated the psychometric characteristics of the Schedule for Nonadaptive and Adaptive Personality (SNAP; Clark, 1996) in 280 individuals who screened positive for posttraumatic stress disorder (PTSD). The SNAP validity, trait, temperament, and personality disorder (PD) scales were compared with scales on the Brief Form of the…

  7. Adaptive Critic Neural Network-Based Terminal Area Energy Management and Approach and Landing Guidance

    NASA Technical Reports Server (NTRS)

    Grantham, Katie

    2003-01-01

    Reusable Launch Vehicles (RLVs) have different mission requirements than the Space Shuttle, which is used for benchmark guidance design. Therefore, alternative Terminal Area Energy Management (TAEM) and Approach and Landing (A/L) Guidance schemes can be examined in the interest of cost reduction. A neural network based solution for a finite horizon trajectory optimization problem is presented in this paper. In this approach the optimal trajectory of the vehicle is produced by adaptive critic based neural networks, which were trained off-line to maintain a gradual glideslope.

  8. Reducing False Negative Reads in RFID Data Streams Using an Adaptive Sliding-Window Approach

    PubMed Central

    Massawe, Libe Valentine; Kinyua, Johnson D. M.; Vermaak, Herman

    2012-01-01

    Unreliability of the data streams generated by RFID readers is among the primary factors which limit the widespread adoption of the RFID technology. RFID data cleaning is, therefore, an essential task in the RFID middleware systems in order to reduce reading errors, and to allow these data streams to be used to make a correct interpretation and analysis of the physical world they are representing. In this paper we propose an adaptive sliding-window based approach called WSTD which is capable of efficiently coping with both environmental variation and tag dynamics. Our experimental results demonstrate the efficacy of the proposed approach. PMID:22666027
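
    As a rough illustration of the general idea (this is a toy smoother, not the WSTD algorithm), the sketch below reports a tag as present if it was read at least once within a window whose size adapts to the observed read rate, trading latency for fewer false negatives:

        # Toy window-based smoother: report a tag as present if it was read at least
        # once in the window; grow the window when the read rate drops.
        def smooth_reads(reads, min_w=2, max_w=8):
            window, present = min_w, []
            for t in range(len(reads)):
                recent = reads[max(0, t - window + 1): t + 1]
                read_rate = sum(recent) / len(recent)
                present.append(1 if any(recent) else 0)
                window = min(max_w, window + 1) if read_rate < 0.3 else max(min_w, window - 1)
            return present

        raw = [1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1]   # noisy reads of a tag present throughout
        print("raw:     ", raw)
        print("smoothed:", smooth_reads(raw))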

  9. Adapting hydrological model structure to catchment characteristics: A large-sample experiment

    NASA Astrophysics Data System (ADS)

    Addor, Nans; Clark, Martyn P.; Nijssen, Bart

    2016-04-01

    Current hydrological modeling frameworks do not offer a clear way to systematically investigate the relationship between model complexity and model fidelity. The characterization of this relationship has so far relied on comparisons of different modules within the same model or comparisons of entirely different models. This lack of granularity in the differences between the model constructs makes it difficult to pinpoint model features that contribute to good simulations and means that the number of models or modeling hypotheses evaluated is usually small. Here we use flexible modeling frameworks to comprehensively and systematically compare modeling alternatives across the continuum of model complexity. A key goal is to explore which model structures are most adequate for catchments in different hydroclimatic conditions. Starting from conceptual models based on the Framework for Understanding Structural Errors (FUSE), we progressively increase model complexity by replacing conceptual formulations by physically explicit ones (process complexity) and by refining model spatial resolution (spatial complexity) using the newly developed Structure for Unifying Multiple Modeling Alternatives (SUMMA). To investigate how to best reflect catchment characteristics using model structure, we rely on a recently released data set of 671 catchments in the continuous United States. Instead of running hydrological simulations in every catchment, we use clustering techniques to define catchment clusters, run hydrological simulations for representative members of each cluster, develop hypotheses (e.g., when specific process representations have useful explanatory power) and test these hypotheses using other members of the cluster. We thus refine our catchment clustering based on insights into dominant hydrological processes gained from our modeling approach. With this large-sample experiment, we seek to uncover trade-offs between realism and practicality, and formulate general
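
    The clustering step can be illustrated with a plain k-means over synthetic catchment attributes; the study's actual attribute set, distance measure, and cluster count are not reproduced here:

        # Plain k-means over two synthetic catchment attributes, then pick one
        # representative catchment per cluster for detailed multi-model simulation.
        import numpy as np

        rng = np.random.default_rng(2)
        attrs = rng.random((671, 2))        # e.g. columns: aridity index, snow fraction
        k = 5
        centers = attrs[rng.choice(len(attrs), k, replace=False)]
        for _ in range(20):
            dists = np.linalg.norm(attrs[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            centers = np.array([attrs[labels == j].mean(axis=0) for j in range(k)])
        reps = [int(np.argmin(np.linalg.norm(attrs - c, axis=1))) for c in centers]
        print("representative catchment indices:", reps)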

  10. A decision analysis approach to climate adaptation: comparing multiple pathways for multi-decadal decision making

    NASA Astrophysics Data System (ADS)

    Lin, B. B.; Little, L.

    2013-12-01

    Policy planners around the world are required to consider the implications of adapting to climatic change across spatial contexts and decadal timeframes. However, local-level information for planning is often poorly defined, even though climate adaptation decision-making is made at this scale. This is especially true when considering sea level rise and coastal impacts of climate change. We present a simple approach using sea level rise simulations paired with adaptation scenarios to assess a range of adaptation options available to local councils dealing with issues of beach recession under present and future sea level rise and storm surge. Erosion and beach recession pose a large socioeconomic risk to coastal communities because of the loss of key coastal infrastructure. We examine the well-known adaptation technique of beach nourishment and assess various timings and amounts of beach nourishment at decadal time spans in relation to beach recession impacts. The objective was to identify an adaptation strategy that would allow for a low frequency of management interventions, the maintenance of beach width, and the ability to minimize variation in beach width over the 2010 to 2100 simulation period. One thousand replications of each adaptation option were produced against the 90-year simulation in order to model the ability of each adaptation option to achieve the three key objectives. Three sets of adaptation scenarios were identified. Within each scenario, a number of adaptation options were tested. The three scenarios were: 1) fixed periodic beach replenishment of specific amounts at 20- and 50-year intervals; 2) beach replenishment to the initial beach width based on trigger levels of recession (5 m, 10 m, 20 m); and 3) fixed-period beach replenishment of a variable amount at decadal intervals (every 10, 20, 30, 40, 50 years). For each adaptation option, we show the effectiveness of each beach replenishment scenario in maintaining beach width and consider the implications of more
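
    The trigger-based scenario can be sketched as a small Monte Carlo experiment; the recession rates, initial width, and trigger values below are invented for illustration and are not the study's parameters:

        # Monte Carlo of a trigger-based nourishment option: width recedes by a random
        # amount each year and is restored to the initial width when cumulative
        # recession exceeds the trigger.
        import random

        def simulate(trigger_m, years=90, n_runs=1000, initial_width=50.0):
            interventions, final_widths = [], []
            for _ in range(n_runs):
                width, count = initial_width, 0
                for _ in range(years):
                    width -= max(0.0, random.gauss(0.3, 0.5))   # annual recession (m)
                    if initial_width - width >= trigger_m:
                        width, count = initial_width, count + 1
                interventions.append(count)
                final_widths.append(width)
            return (sum(interventions) / n_runs, sum(final_widths) / n_runs)

        for trig in (5.0, 10.0, 20.0):
            n_int, mean_width = simulate(trig)
            print(f"trigger {trig:>4} m: {n_int:.1f} interventions, mean final width {mean_width:.1f} m")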

  11. Modern control concepts in hydrology. [parameter identification in adaptive stochastic control approach

    NASA Technical Reports Server (NTRS)

    Duong, N.; Winn, C. B.; Johnson, G. R.

    1975-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
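
    As a generic illustration of sequential, adaptive parameter identification (not the Prasad rainfall-runoff model or the estimators used in the paper), the following recursive least-squares sketch estimates a single parameter of a noisy first-order system from streaming data:

        # Scalar recursive least-squares estimate of the parameter a in
        # x[k+1] = a*x[k] + u[k] + noise, updated sequentially as data arrive.
        import numpy as np

        rng = np.random.default_rng(5)
        a_true, a_hat, P = 0.7, 0.0, 1000.0    # true value, estimate, estimate covariance
        x = 1.0
        for _ in range(200):
            u = rng.normal()                                 # known input (e.g. rainfall)
            x_next = a_true * x + u + rng.normal(0.0, 0.1)   # noisy system response
            y, phi = x_next - u, x                           # regression form y = a*phi + noise
            gain = P * phi / (1.0 + phi * P * phi)
            a_hat += gain * (y - phi * a_hat)
            P *= (1.0 - gain * phi)
            x = x_next
        print("estimated parameter:", round(a_hat, 3), "(true value 0.7)")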

  12. Establishing Interpretive Consistency When Mixing Approaches: Role of Sampling Designs in Evaluations

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.

    2013-01-01

    The goal of this chapter is to recommend quality criteria to guide evaluators' selections of sampling designs when mixing approaches. First, we contextualize our discussion of quality criteria and sampling designs by discussing the concept of interpretive consistency and how it impacts sampling decisions. Embedded in this discussion are…

  13. RECRUITING FOR A LONGITUDINAL STUDY OF CHILDREN'S HEALTH USING A HOUSEHOLD-BASED PROBABILITY SAMPLING APPROACH

    EPA Science Inventory

    The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach (visit our website for more information on the NCS: www.nationalchildrensstudy.gov). The full sample is designed to be representative of ...

  14. Adaption of egg and larvae sampling techniques for lake sturgeon and broadcast spawning fishes in a deep river

    USGS Publications Warehouse

    Roseman, Edward F.; Kennedy, Gregory W.; Craig, Jaquelyn; Boase, James; Soper, Karen

    2011-01-01

    In this report we describe how we adapted two techniques for sampling lake sturgeon (Acipenser fulvescens) and other fish early life history stages to meet our research needs in the Detroit River, a deep, flowing Great Lakes connecting channel. First, we developed a buoy-less method for sampling fish eggs and spawning activity using egg mats deployed on the river bottom. The buoy-less method allowed us to fish gear in areas frequented by boaters and recreational anglers, thus eliminating surface obstructions that interfered with recreational and boating activities. The buoy-less method also reduced gear loss due to drift when masses of floating aquatic vegetation would accumulate on buoys and lines, increasing the drag on the gear and pulling it downstream. Second, we adapted a D-frame drift net system formerly employed in shallow streams to assess larval lake sturgeon dispersal for use in the deeper (>8 m) Detroit River using an anchor and buoy system.

  15. A New Approach to Interference Excision in Radio Astronomy: Real-Time Adaptive Cancellation

    NASA Astrophysics Data System (ADS)

    Barnbaum, Cecilia; Bradley, Richard F.

    1998-11-01

    Every year, an increasing amount of radio-frequency (RF) spectrum in the VHF, UHF, and microwave bands is being utilized to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Such services already cause problems for radio astronomy even in very remote observing sites, and the potential for this form of light pollution to grow is alarming. Preventive measures to eliminate interference through FCC legislation and ITU agreements can be effective; however, many times this approach is inadequate and interference excision at the receiver is necessary. Conventional techniques such as RF filters, RF shielding, and postprocessing of data have been only somewhat successful, but none has been sufficient. Adaptive interference cancellation is a real-time approach to interference excision that has not been used before in radio astronomy. We describe here, for the first time, adaptive interference cancellation in the context of radio astronomy instrumentation, and we present initial results for our prototype receiver. In the 1960s, analog adaptive interference cancelers were developed that obtain a high degree of cancellation in problems of radio communications and radar. However, analog systems lack the dynamic range, noise performance, and versatility required by radio astronomy. The concept of digital adaptive interference cancellation was introduced in the mid-1960s as a way to reduce unwanted noise in low-frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartments of automobiles. These audio-frequency applications require bandwidths of only a few tens of kilohertz. Only recently has high-speed digital filter technology made high dynamic range adaptive canceling possible in a bandwidth as large as a few megahertz, finally opening the door to application in radio astronomy. We have
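
    The core digital canceller concept can be sketched with a textbook least-mean-squares (LMS) filter: a reference channel that sees only the interference is adaptively filtered and subtracted from the primary channel. This is a generic illustration with invented signal levels, not the authors' receiver design:

        # Textbook LMS noise canceller: the reference antenna sees only the RFI,
        # and the adaptive filter learns the interference path so it can be
        # subtracted from the primary (astronomy) channel.
        import numpy as np

        rng = np.random.default_rng(3)
        n, taps, mu = 5000, 8, 0.01
        signal = 0.1 * np.sin(2 * np.pi * 0.001 * np.arange(n))     # weak signal of interest
        rfi = rng.normal(0.0, 1.0, n)                               # strong interference
        primary = signal + np.convolve(rfi, [0.8, 0.3, 0.1])[:n]    # causal interference path
        reference = rfi                                             # RFI-only reference channel

        w = np.zeros(taps)
        cleaned = np.zeros(n)
        for i in range(taps - 1, n):
            x = reference[i - taps + 1: i + 1][::-1]    # most recent reference samples first
            e = primary[i] - w @ x                      # error = primary minus RFI estimate
            w += 2 * mu * e * x                         # LMS weight update
            cleaned[i] = e
        print("RFI power before:", round(float(np.var(primary[1000:] - signal[1000:])), 4),
              "after:", round(float(np.var(cleaned[1000:] - signal[1000:])), 4))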

  16. An optimal dynamic inversion-based neuro-adaptive approach for treatment of chronic myelogenous leukemia.

    PubMed

    Padhi, Radhakant; Kothari, Mangal

    2007-09-01

    Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an innovative technique is presented to design an automatic drug administration strategy for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model for cell dynamics is used to design the controller (medication dosage). First, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can treat the nominal model patients (patients who can be described by the mathematical model used here with the nominal parameter values) effectively. However, since the system parameters for a realistic model patient can be different from that of the nominal model patients, simulation studies for such patients indicate that the nominal controller is either inefficient or, worse, ineffective; i.e. the trajectory of the number of cancer cells either shows non-satisfactory transient behavior or it grows in an unstable manner. Hence, to make the drug dosage history more realistic and patient-specific, a model-following neuro-adaptive controller is augmented to the nominal controller. In this adaptive approach, a neural network trained online facilitates a new adaptive controller. The training process of the neural network is based on Lyapunov stability theory, which guarantees both stability of the cancer cell dynamics as well as boundedness of the network weights. From simulation studies, this adaptive control design approach is found to be very effective to treat the CML disease for realistic patients. Sufficient generality is retained in the mathematical developments so that the technique can be applied to other similar nonlinear control design problems as well.

  17. An Integrated Systems Approach to Designing Climate Change Adaptation Policy in Water Resources

    NASA Astrophysics Data System (ADS)

    Ryu, D.; Malano, H. M.; Davidson, B.; George, B.

    2014-12-01

    Climate change projections are characterised by large uncertainties, with rainfall variability being the key challenge in designing adaptation policies. Climate change adaptation in water resources shows all the typical characteristics of 'wicked' problems, typified by cognitive uncertainty as new scientific knowledge becomes available, problem instability, knowledge imperfection, and strategic uncertainty due to institutional changes that inevitably occur over time. Planning that is characterised by such uncertainties and instability requires an approach that can accommodate flexibility and adaptive capacity for decision-making, that is, an ability to take corrective measures in the event that the scenarios and responses initially envisaged evolve into different forms at some future stage. We present an integrated, multidisciplinary and comprehensive framework designed to interface science and decision making and to inform the formulation of water resource management strategies to deal with climate change in the Musi Catchment of Andhra Pradesh, India. At the core of this framework is a dialogue between stakeholders, decision makers and scientists to define a set of plausible responses to an ensemble of climate change scenarios derived from global climate modelling. The modelling framework used to evaluate the resulting combinations of climate scenarios and adaptation responses includes surface and groundwater assessment models (SWAT and MODFLOW) and a water allocation model (REALM) to determine the water security of each adaptation strategy. Three climate scenarios extracted from downscaled climate models were selected for evaluation, together with four agreed responses: changing cropping patterns, increasing watershed development, changing the volume of groundwater extraction, and improving irrigation efficiency. Water security in this context is represented by the combination of the level of water availability and its associated security of supply for three economic activities (agriculture

  18. A problem-oriented approach to understanding adaptation: lessons learnt from Alpine Shire, Victoria Australia.

    NASA Astrophysics Data System (ADS)

    Roman, Carolina

    2010-05-01

    Climate change is gaining attention as a significant strategic issue for localities that rely on their business sectors for economic viability. For businesses in the tourism sector, considerable research effort has sought to characterise the vulnerability to the likely impacts of future climate change through scenarios or 'end-point' approaches (Kelly & Adger, 2000). Whilst useful, there are few demonstrable case studies that complement such work with a 'start-point' approach that seeks to explore contextual vulnerability (O'Brien et al., 2007). This broader approach is inclusive of climate change as a process operating within a biophysical system and allows recognition of the complex interactions that occur in the coupled human-environmental system. A problem-oriented and interdisciplinary approach was employed at Alpine Shire, in northeast Victoria Australia, to explore the concept of contextual vulnerability and adaptability to stressors that include, but are not limited to, climatic change. Using a policy sciences approach, the objective was to identify factors that influence existing vulnerabilities and that might consequently act as barriers to effective adaptation for the Shire's business community involved in the tourism sector. Analyses of results suggest that many threats, including the effects of climate change, compete for the resources, strategy and direction of local tourism management bodies. Further analysis of conditioning factors revealed that many complex and interacting factors define the vulnerability and adaptive capacity of the Shire's tourism sector to the challenges of global change, which collectively have more immediate implications for policy and planning than long-term future climate change scenarios. An approximation of the common interest, i.e. enhancing capacity in business acumen amongst tourism operators, would facilitate adaptability and sustainability through the enhancement of social capital in this business community. Kelly, P

  19. Evaluation of endoscopically obtained duodenal biopsy samples from cats and dogs in an adapter-modified Ussing chamber

    PubMed Central

    DeBiasio, John V.; Suchodolski, Jan S.; Newman, Shelley; Musch, Mark W.; Steiner, Jörg M.

    2014-01-01

    This study was conducted to evaluate an adapter-modified Ussing chamber for assessment of transport physiology in endoscopically obtained duodenal biopsies from healthy cats and dogs, as well as dogs with chronic enteropathies. Seventeen duodenal biopsies from five cats and 51 duodenal biopsies from 13 dogs were obtained. Samples were transferred into an adapter-modified Ussing chamber and sequentially exposed to various absorbagogues and secretagogues. Overall, 78.6% of duodenal samples obtained from cats responded to at least one compound. In duodenal biopsies obtained from dogs, the rate of overall response ranged from 87.5% (healthy individuals; n = 8), to 63.6% (animals exhibiting clinical signs of gastrointestinal disease and a histopathologically unremarkable duodenum; n = 15), and 32.1% (animals exhibiting clinical signs of gastrointestinal disease and moderate to severe histopathological lesions; n = 28). Detailed information regarding the magnitude and duration of the response is provided. The adapter-modified Ussing chamber enables investigation of the absorptive and secretory capacity of endoscopically obtained duodenal biopsies from cats and dogs and has the potential to become a valuable research tool. The response of samples was correlated with histopathological findings. PMID:24378587

  20. Towards a System Level Understanding of Non-Model Organisms Sampled from the Environment: A Network Biology Approach

    PubMed Central

    Williams, Tim D.; Turan, Nil; Diab, Amer M.; Wu, Huifeng; Mackenzie, Carolynn; Bartie, Katie L.; Hrydziuszko, Olga; Lyons, Brett P.; Stentiford, Grant D.; Herbert, John M.; Abraham, Joseph K.; Katsiadaki, Ioanna; Leaver, Michael J.; Taggart, John B.; George, Stephen G.; Viant, Mark R.; Chipman, Kevin J.; Falciani, Francesco

    2011-01-01

    The acquisition and analysis of datasets including multi-level omics and physiology from non-model species, sampled from field populations, is a formidable challenge, which so far has prevented the application of systems biology approaches. If successful, these could contribute enormously to improving our understanding of how populations of living organisms adapt to environmental stressors relating to, for example, pollution and climate. Here we describe the first application of a network inference approach integrating transcriptional, metabolic and phenotypic information representative of wild populations of the European flounder fish, sampled at seven estuarine locations in northern Europe with different degrees and profiles of chemical contaminants. We identified network modules, whose activity was predictive of environmental exposure and represented a link between molecular and morphometric indices. These sub-networks represented both known and candidate novel adverse outcome pathways representative of several aspects of human liver pathophysiology such as liver hyperplasia, fibrosis, and hepatocellular carcinoma. At the molecular level these pathways were linked to TNF alpha, TGF beta, PDGF, AGT and VEGF signalling. More generally, this pioneering study has important implications as it can be applied to model molecular mechanisms of compensatory adaptation to a wide range of scenarios in wild populations. PMID:21901081

  1. Towards a system level understanding of non-model organisms sampled from the environment: a network biology approach.

    PubMed

    Williams, Tim D; Turan, Nil; Diab, Amer M; Wu, Huifeng; Mackenzie, Carolynn; Bartie, Katie L; Hrydziuszko, Olga; Lyons, Brett P; Stentiford, Grant D; Herbert, John M; Abraham, Joseph K; Katsiadaki, Ioanna; Leaver, Michael J; Taggart, John B; George, Stephen G; Viant, Mark R; Chipman, Kevin J; Falciani, Francesco

    2011-08-01

    The acquisition and analysis of datasets including multi-level omics and physiology from non-model species, sampled from field populations, is a formidable challenge, which so far has prevented the application of systems biology approaches. If successful, these could contribute enormously to improving our understanding of how populations of living organisms adapt to environmental stressors relating to, for example, pollution and climate. Here we describe the first application of a network inference approach integrating transcriptional, metabolic and phenotypic information representative of wild populations of the European flounder fish, sampled at seven estuarine locations in northern Europe with different degrees and profiles of chemical contaminants. We identified network modules, whose activity was predictive of environmental exposure and represented a link between molecular and morphometric indices. These sub-networks represented both known and candidate novel adverse outcome pathways representative of several aspects of human liver pathophysiology such as liver hyperplasia, fibrosis, and hepatocellular carcinoma. At the molecular level these pathways were linked to TNF alpha, TGF beta, PDGF, AGT and VEGF signalling. More generally, this pioneering study has important implications as it can be applied to model molecular mechanisms of compensatory adaptation to a wide range of scenarios in wild populations.

  2. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such the solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  3. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such the solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  4. Testing for Adaptation to Climate in Arabidopsis thaliana: A Calibrated Common Garden Approach

    PubMed Central

    Rutter, Matthew T.; Fenster, Charles B.

    2007-01-01

    Background and Aims A recent method used to test for local adaptation is a common garden experiment where analyses are calibrated to the environmental conditions of the garden. In this study the calibrated common garden approach is used to test for patterns of adaptation to climate in accessions of Arabidopsis thaliana. Methods Seedlings from 21 accessions of A. thaliana were planted outdoors in College Park, MD, USA, and development was monitored during the course of a growing season. ANOVA and multiple regression analysis were used to determine if development traits were significant predictors of plant success. Previously published data relating to accessional differences in genetic and physiological characters were also examined. Historical records of climate were used to evaluate whether properties of the site of origin of an accession affected the fitness of plants in a novel environment. Key Results By calibrating the analysis to the climatic conditions of the common garden site, performance differences were detected among the accessions consistent with a pattern of adaptation to latitude and climatic conditions. Relatively higher accession fitness was predicted by a latitude and climatic history similar to that of College Park in April and May during the main growth period of this experiment. The climatic histories of the accessions were better predictors of performance than many of the life-history and growth measures taken during the experiment. Conclusions It is concluded that the calibrated common garden experiment can detect local adaptation and guide subsequent reciprocal transplant experiments. PMID:17293351

  5. The co-adaptive neural network approach to the Euclidean Travelling Salesman Problem.

    PubMed

    Cochrane, E M; Beasley, J E

    2003-12-01

    In this paper we consider the Euclidean Travelling Salesman Problem (ETSP). This is the problem of finding the shortest tour around a number of cities where the cities correspond to points in the Euclidean plane and the distances between cities are given by the usual Euclidean distance metric. We present a review of the literature with respect to neural network (NN) approaches for the ETSP, and the computational results that have been reported. Based upon this review we highlight two areas that are, in our judgement, currently neglected or lacking in the literature. These are: a failure to make significant use of publicly available ETSP test problems in computational work, and a failure to address co-operation between neurons. Drawing upon our literature survey this paper presents a new Self-Organising NN approach, called the Co-Adaptive Net, which involves not just unsupervised learning to train neurons, but also allows neurons to co-operate and compete amongst themselves depending on their situation. Our Co-Adaptive Net algorithm also includes a number of algorithmic mechanisms that, based upon our literature review, we consider to have contributed to the computational success of previous algorithms. Results for 91 publicly available standard ETSPs are presented in this paper. The largest of these problems involves 85,900 cities. This paper presents the most extensive computational evaluation of any NN approach on publicly available ETSP test problems that has been made to date in the literature, and an NN approach that performs better, with respect to solution quality and/or computation time, than other NN approaches given previously in the literature. Drawing upon computational results produced as a result of the DIMACS TSP Challenge, we highlight the fact that none of the current NN approaches for the ETSP can compete with state-of-the-art Operations Research heuristics. We discuss why we consider continuing to study and develop NN approaches for the ETSP to be of value.
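
    For orientation, the sketch below is a bare-bones self-organising ring heuristic for a small random ETSP instance; it illustrates the general class of self-organising NN approaches reviewed in the paper but implements none of the Co-Adaptive Net's cooperation mechanisms, and every parameter is invented:

        # Self-organising ring for a random 30-city instance: neurons on a ring are
        # pulled toward sampled cities; the tour is read off from neuron order.
        import numpy as np

        rng = np.random.default_rng(6)
        cities = rng.random((30, 2))
        n_neurons = 90
        ring = rng.random((n_neurons, 2))
        lr, radius = 0.8, n_neurons / 10.0
        for _ in range(3000):
            city = cities[rng.integers(len(cities))]
            winner = int(np.argmin(np.linalg.norm(ring - city, axis=1)))
            d = np.abs(np.arange(n_neurons) - winner)
            d = np.minimum(d, n_neurons - d)                # distance measured around the ring
            h = np.exp(-(d ** 2) / (2.0 * radius ** 2))     # neighbourhood function
            ring += lr * h[:, None] * (city - ring)
            lr *= 0.999
            radius = max(1.0, radius * 0.999)
        order = np.argsort([int(np.argmin(np.linalg.norm(ring - c, axis=1))) for c in cities])
        tour = cities[order]
        length = float(np.sum(np.linalg.norm(tour - np.roll(tour, -1, axis=0), axis=1)))
        print("tour length for 30 random cities:", round(length, 3))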

  6. An adaptive gating approach for x-ray dose reduction during cardiac interventional procedures

    SciTech Connect

    Abdel-Malek, A.; Yassa, F.; Bloomer, J. )

    1994-03-01

    The increasing number of cardiac interventional procedures has resulted in a tremendous increase in the x-ray dose absorbed by radiologists as well as patients. A new method is presented for x-ray dose reduction which utilizes adaptive tube pulse-rate scheduling in pulsed fluoroscopic systems. In the proposed system, pulse-rate scheduling depends on the heart muscle activity phase determined through continuous guided segmentation of the patient's electrocardiogram (ECG). Displaying images generated at the proposed adaptive nonuniform rate is visually unacceptable; therefore, a frame-filling approach is devised to ensure a 30 frame/sec display rate. The authors adopted two approaches for the frame-filling portion of the system depending on the imaging mode used in the procedure. During cine-mode imaging (high x-ray dose), frame-to-frame pixel motion in the collected images is estimated using a pel-recursive algorithm, followed by motion-based pixel interpolation to estimate the frames necessary to increase the rate to 30 frames/sec. The other frame-filling approach is adopted during fluoro-mode imaging (low x-ray dose), characterized by low signal-to-noise ratio images. This approach consists of simply holding the last collected frame for as many frames as necessary to maintain the real-time display rate.
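
    The fluoro-mode frame-filling step amounts to replaying adaptively acquired frames at a fixed display rate by holding the most recent frame; a minimal sketch (with invented timestamps) follows:

        # Replay adaptively acquired frames at a fixed 30 frame/s by holding the
        # most recently acquired frame (fluoro-mode style frame filling).
        def fill_frames(acquired, display_rate_hz=30.0, duration_s=1.0):
            """acquired: list of (timestamp_s, frame) pairs, sorted by timestamp."""
            out, j = [], 0
            for k in range(int(duration_s * display_rate_hz)):
                t = k / display_rate_hz
                while j + 1 < len(acquired) and acquired[j + 1][0] <= t:
                    j += 1
                out.append(acquired[j][1])      # hold the last acquired frame
            return out

        frames = [(0.00, "F0"), (0.10, "F1"), (0.45, "F2"), (0.80, "F3")]
        print(fill_frames(frames))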

  7. Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1996-01-01

    A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: a linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
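
    The recursive subdivision idea can be illustrated with a toy quadtree that refines cells straddling a circular body; actual cut-cell generation and the flow solution are far more involved and are not shown:

        # Toy quadtree refinement: recursively subdivide cells that straddle the
        # surface of a circular body embedded in the unit square.
        def crosses_body(x0, y0, size, cx=0.5, cy=0.5, r=0.2):
            nx = min(max(cx, x0), x0 + size)        # nearest point of the cell to the centre
            ny = min(max(cy, y0), y0 + size)
            near = ((nx - cx) ** 2 + (ny - cy) ** 2) ** 0.5
            far = max(((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
                      for px in (x0, x0 + size) for py in (y0, y0 + size))
            return near <= r <= far                 # body surface passes through the cell

        def refine(x0, y0, size, depth=0, max_depth=5, leaves=None):
            if leaves is None:
                leaves = []
            if depth < max_depth and crosses_body(x0, y0, size):
                half = size / 2.0
                for dx in (0.0, half):
                    for dy in (0.0, half):
                        refine(x0 + dx, y0 + dy, half, depth + 1, max_depth, leaves)
            else:
                leaves.append((x0, y0, size))
            return leaves

        print(len(refine(0.0, 0.0, 1.0)), "leaf cells after refinement around the body")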

  8. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.

  9. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples

    PubMed Central

    Lyles, Robert H.; Van Domelen, Dane; Mitchell, Emily M.; Schisterman, Enrique F.

    2015-01-01

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934

  10. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples.

    PubMed

    Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F

    2015-11-01

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934

  11. One Adaptive Synchronization Approach for Fractional-Order Chaotic System with Fractional-Order 1 < q < 2

    PubMed Central

    Zhou, Ping; Bai, Rongji

    2014-01-01

    Based on a new stability result of equilibrium point in nonlinear fractional-order systems for fractional-order lying in 1 < q < 2, one adaptive synchronization approach is established. The adaptive synchronization for the fractional-order Lorenz chaotic system with fractional-order 1 < q < 2 is considered. Numerical simulations show the validity and feasibility of the proposed scheme. PMID:25247207

  12. One adaptive synchronization approach for fractional-order chaotic system with fractional-order 1 < q < 2.

    PubMed

    Zhou, Ping; Bai, Rongji

    2014-01-01

    Based on a new stability result of equilibrium point in nonlinear fractional-order systems for fractional-order lying in 1 < q < 2, one adaptive synchronization approach is established. The adaptive synchronization for the fractional-order Lorenz chaotic system with fractional-order 1 < q < 2 is considered. Numerical simulations show the validity and feasibility of the proposed scheme. PMID:25247207

  13. Mobile membrane introduction tandem mass spectrometry for on-the-fly measurements and adaptive sampling of VOCs around oil and gas projects in Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Krogh, E.; Gill, C.; Bell, R.; Davey, N.; Martinsen, M.; Thompson, A.; Simpson, I. J.; Blake, D. R.

    2012-12-01

    The release of hydrocarbons into the environment can have significant environmental and economic consequences. The move of smaller, more portable mass spectrometers into the field can provide spatially and temporally resolved information for rapid detection, adaptive sampling and decision support. We have deployed a mobile platform membrane introduction mass spectrometer (MIMS) for the in-field simultaneous measurement of volatile and semi-volatile organic compounds. In this work, we report instrument and data handling advances that produce geographically referenced data in real time, as well as preliminary data from deployments in which these improvements were combined with high-precision, ultra-trace VOC analysis to adaptively sample air plumes near oil and gas operations in Alberta, Canada. We have modified a commercially available ion-trap mass spectrometer (Griffin ICX 400) with an in-house temperature-controlled capillary hollow-fibre polydimethylsiloxane (PDMS) polymer membrane interface and an in-line permeation tube flow cell for a continuously infused internal standard. The system is powered by 24 VDC for remote operations in a moving vehicle. Software modifications include the ability to run continuous, interlaced tandem mass spectrometry (MS/MS) experiments for multiple contaminants/internal standards. All data are time and location stamped with on-board GPS and meteorological data to facilitate spatial and temporal data mapping. Tandem MS/MS scans were employed to simultaneously monitor ten volatile and semi-volatile analytes, including benzene, toluene, ethylbenzene and xylene (BTEX), reduced sulfur compounds, halogenated organics and naphthalene. Quantification was achieved by calibrating against a continuously infused deuterated internal standard (toluene-d8). Time-referenced MS/MS data were correlated with positional data and processed using Labview and Matlab to produce calibrated, geographical Google Earth data visualizations that enable adaptive sampling protocols
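
    The internal-standard quantification step reduces to scaling each analyte response by the known standard concentration over its measured response, then stamping the result with time and position; the sketch below uses invented counts, response factor, and coordinates:

        # Scale the analyte response by the known internal-standard concentration over
        # its measured response, then stamp the result with time and GPS position.
        def quantify(analyte_counts, standard_counts, standard_conc_ppb, response_factor=1.0):
            return response_factor * standard_conc_ppb * analyte_counts / standard_counts

        record = {
            "time": "2012-06-14T10:32:05Z",            # invented timestamp
            "lat": 53.52, "lon": -113.49,              # invented coordinates
            "benzene_ppb": round(quantify(analyte_counts=420.0, standard_counts=1500.0,
                                          standard_conc_ppb=10.0), 2),
        }
        print(record)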

  14. Approach for Structurally Clearing an Adaptive Compliant Trailing Edge Flap for Flight

    NASA Technical Reports Server (NTRS)

    Miller, Eric J.; Lokos, William A.; Cruz, Josue; Crampton, Glen; Stephens, Craig A.; Kota, Sridhar; Ervin, Gregory; Flick, Pete

    2015-01-01

    The Adaptive Compliant Trailing Edge (ACTE) flap was flown on the NASA Gulfstream GIII test bed at the NASA Armstrong Flight Research Center. This smoothly curving flap replaced the existing Fowler flaps creating a seamless control surface. This compliant structure, developed by FlexSys Inc. in partnership with Air Force Research Laboratory, supported NASA objectives for airframe structural noise reduction, aerodynamic efficiency, and wing weight reduction through gust load alleviation. A thorough structures airworthiness approach was developed to move this project safely to flight.

  15. Stereo matching based on adaptive support-weight approach in RGB vector space.

    PubMed

    Geng, Yingnan; Zhao, Yan; Chen, Hexin

    2012-06-01

    Gradient similarity is a simple, yet powerful, data descriptor which shows robustness in stereo matching. In this paper, an RGB vector space is defined for stereo matching. Based on the adaptive support-weight approach, a matching algorithm, which uses the pixel gradient similarity, color similarity, and proximity in RGB vector space to compute the corresponding support-weights and dissimilarity measurements, is proposed. The experimental results are evaluated on the Middlebury stereo benchmark, showing that our algorithm outperforms other stereo matching algorithms and that incorporating gradient similarity achieves better results in stereo matching. PMID:22695592
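
    The support-weight computation itself (after the adaptive support-weight approach of Yoon and Kweon) combines colour similarity and spatial proximity; the paper's gradient-similarity term in RGB vector space is not reproduced in this sketch, and the parameters are illustrative:

        # Support weights for one window: weight falls off with colour distance from
        # the centre pixel and with spatial distance from the centre pixel.
        import numpy as np

        def support_weights(window, gamma_c=20.0, gamma_p=14.0):
            """window: (h, w, 3) RGB patch; returns weights relative to the centre pixel."""
            h, w, _ = window.shape
            cy, cx = h // 2, w // 2
            colour_dist = np.linalg.norm(window - window[cy, cx], axis=2)
            yy, xx = np.mgrid[0:h, 0:w]
            spatial_dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
            return np.exp(-(colour_dist / gamma_c + spatial_dist / gamma_p))

        patch = np.zeros((7, 7, 3))
        patch[:, :4] = [200.0, 60.0, 60.0]      # region containing the centre pixel
        patch[:, 4:] = [40.0, 90.0, 220.0]      # different-coloured region across an edge
        print(support_weights(patch).round(2))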

  16. An Adaptive Nonlinear Aircraft Maneuvering Envelope Estimation Approach for Online Applications

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Lombaerts, Thomas Jan; Acosta, Diana; Wheeler, Kevin; Kaneshige, John

    2014-01-01

    A nonlinear aircraft model is presented and used to develop an overall unified robust and adaptive approach to passive trim and maneuverability envelope estimation with uncertainty quantification. The concept of time scale separation makes this method suitable for the online characterization of altered safe maneuvering limitations after impairment. The results can be used to provide pilot feedback and/or be combined with flight planning, trajectory generation, and guidance algorithms to help maintain safe aircraft operations in both nominal and off-nominal scenarios.

  17. Evidence-Based Approach to Treating Lateral Epicondylitis Using the Occupational Adaptation Model.

    PubMed

    Bachman, Stephanie

    2016-01-01

    The occupational therapy Centennial Vision reinforces the importance of informing consumers about the benefit of occupational therapy and continuing to advocate for the unique client-centered role of occupational therapy. Occupational therapy practitioners working in hand therapy have traditionally found it difficult to combine the biomechanical foundations of hand therapy with the fundamental client-centered tenets of occupational therapy. Embracing our historical roots will become more important as health care evolves and third-party payers continue to scrutinize the need for the profession of occupational therapy. This article outlines a client-centered approach for hand therapists for the treatment of lateral epicondylitis using the Occupational Adaptation Model.

  18. Developing integrated approaches to climate change adaptation in rural communities of the Peruvian Andes

    NASA Astrophysics Data System (ADS)

    Huggel, Christian

    2010-05-01

    Over centuries, Andean communities have developed strategies to cope with climate variability and extremes, such as cold waves or droughts, which can have severe impacts on their welfare. Nevertheless, the rural population, living at altitudes of 3000 to 4000 m asl or even higher, remains highly vulnerable to external stresses, partly because of the extreme living conditions and partly as a consequence of high poverty. Moreover, recent studies indicate that climatic extreme events have increased in frequency in the past years. A Peruvian-Swiss Climate Change Adaptation Programme in Peru (PACC) is currently undertaking strong efforts to understand the links between climatic conditions and local livelihood assets. The goal is to propose viable strategies for adaptation in collaboration with the local population and governments. The programme considers three main areas of action, i.e. (i) water resource management; (ii) disaster risk reduction; and (iii) food security. The scientific studies carried out within the programme follow a highly transdisciplinary approach, spanning the whole range from the natural to the social sciences. Moreover, the scientific Peruvian-Swiss collaboration is closely connected to people and institutions operating at the implementation and political level. In this contribution we report on first results of the thematic studies, address critical questions, and outline the potential of integrative research for climate change adaptation in mountain regions in the context of a developing country.

  19. Impact of adaptive signal control on major and minor approach delay

    SciTech Connect

    Wolshon, B.; Taylor, W.C.

    1999-01-01

    One of the primary difficulties in completing evaluations of real-time adaptive traffic signal systems has been the lack of effective modeling tools to perform controlled and repeatable analytical experiments. Recently, a method to analyze the delay characteristics of an existing real-time adaptive traffic signal control system has been developed. The procedure has been used to evaluate the performance of the Sydney Coordinated Adaptive Traffic System (SCATS) in South Lyon, Michigan. The objective of this study was to more closely examine the relationship of major and minor street delay under the SCATS and fixed-time forms of signal control. The effect on left-turn delay at critical locations was also compared to the delay change in the through traffic. The comparisons showed that SCATS control resulted in significant delay reductions for left-turn traffic compared to the through movements. The improvement was especially pronounced at the major street approaches. To further address the delay effect of SCATS control in South Lyon, the relationships between volume, delay, and green time for the various traffic movements were also compared. These comparisons showed that SCATS tended to allocate more green time to left-turn traffic when compared to the fixed-time system.

  20. New approach for selecting pectinase producing mutants of Aspergillus niger well adapted to solid state fermentation.

    PubMed

    Antier, P; Minjares, A; Roussos, S; Viniegra-González, G

    1993-01-01

    The aim of this paper is to review and study a new approach for improving strains of Aspergillus niger specially adapted to produce pectinases by Solid State Fermentation (SSF) with materials having low levels of water activity (a(w)), i.e., coffee pulp. Special emphasis is placed on the use of two antimetabolic compounds, 2-deoxy-glucose (DG) and 2,4-dinitro-phenol (DNP), combined with a water depressant (ethylene glycol = EG) in order to put strong selection pressures on UV-treated spores from parental strain C28B25 isolated from a coffee plantation. This strain was found to be DG sensitive. Results suggested the existence of a reciprocal relation between adaptation of isolated strains to SSF or to Submerged Fermentation (SmF) systems. Preliminary physiological analysis of isolated strains showed that at least a few initially DG-resistant mutants could revert to the DG-sensitive phenotype while conserving increased pectinase production. It was also found that the phenotype for DNP resistance could be associated with changes in DG resistance. Finally, low levels of a(w), produced by adding 15% EG to agar plates, were found to be a significant selection factor for strains well adapted to the SSF system.

  1. A Risk-based Model Predictive Control Approach to Adaptive Interventions in Behavioral Health

    PubMed Central

    Zafra-Cabeza, Ascensión; Rivera, Daniel E.; Collins, Linda M.; Ridao, Miguel A.; Camacho, Eduardo F.

    2010-01-01

    This paper examines how control engineering and risk management techniques can be applied in the field of behavioral health through their use in the design and implementation of adaptive behavioral interventions. Adaptive interventions are gaining increasing acceptance as a means to improve prevention and treatment of chronic, relapsing disorders, such as abuse of alcohol, tobacco, and other drugs, mental illness, and obesity. A risk-based Model Predictive Control (MPC) algorithm is developed for a hypothetical intervention inspired by Fast Track, a real-life program whose long-term goal is the prevention of conduct disorders in at-risk children. The MPC-based algorithm decides on the appropriate frequency of counselor home visits, mentoring sessions, and the availability of after-school recreation activities by relying on a model that includes identifiable risks, their costs, and the cost/benefit assessment of mitigating actions. MPC is particularly suited for the problem because of its constraint-handling capabilities, and its ability to scale to interventions involving multiple tailoring variables. By systematically accounting for risks and adapting treatment components over time, an MPC approach as described in this paper can increase intervention effectiveness and adherence while reducing waste, resulting in advantages over conventional fixed treatment. A series of simulations are conducted under varying conditions to demonstrate the effectiveness of the algorithm. PMID:21643450
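
    A toy receding-horizon sketch of the idea is given below: at each decision point the controller enumerates short sequences of intervention intensities, scores them with a simple risk-plus-cost objective, applies the first action, and re-plans. The scalar risk dynamics, cost weights and discrete dose levels are illustrative assumptions only; the paper's MPC formulation handles constraints and multiple tailoring variables.

      # Toy receding-horizon (MPC-style) sketch: pick the intervention intensity that minimizes
      # predicted risk plus delivery cost over a short horizon. All model constants are assumed.
      from itertools import product

      DOSES = [0, 1, 2]          # e.g., none / monthly / weekly home visits
      HORIZON = 3                # look-ahead in months
      ALPHA, BETA = 0.9, 0.15    # assumed risk persistence and per-dose risk reduction
      COST_PER_DOSE = 0.05       # assumed cost weight per unit of intervention

      def predict(risk, dose):
          """Assumed first-order risk dynamics: risk decays and each dose unit lowers it."""
          return max(0.0, ALPHA * risk - BETA * dose)

      def mpc_step(current_risk):
          best_plan, best_obj = None, float("inf")
          for plan in product(DOSES, repeat=HORIZON):
              risk, obj = current_risk, 0.0
              for dose in plan:
                  risk = predict(risk, dose)
                  obj += risk + COST_PER_DOSE * dose
              if obj < best_obj:
                  best_plan, best_obj = plan, obj
          return best_plan[0]    # apply only the first move, then re-plan next month

      risk = 0.8
      for month in range(6):
          dose = mpc_step(risk)
          risk = predict(risk, dose)
          print(f"month {month}: dose={dose}, risk={risk:.2f}")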

  2. A systematic approach to cross-cultural adaptation of survey tools

    PubMed Central

    Costa, Filipa A.; Duggan, Catherine; Bates, Ian

    Background Involving patients in health care is increasingly acknowledged as the best way to empower patients to manage their illness. Whilst the involvement of patients is laudable and widely recognised, how much they want to be involved needs to be ascertained. Research has shown that inappropriate provision of information to patients can increase their anxieties towards illness and alter perceptions of medicines' usefulness, consequently impacting on medicine-taking behaviour. Tools have been validated in the UK to identify information desires, perceived usefulness of medicines and anxiety felt about illness. There is a need to adapt validated tools for use in other settings and countries. This paper is the first of a series describing the processes involved in the adaptation and validation of these tools. Aim To review and adapt the processes established to translate and back-translate scales and tools in practice. Methods The survey tool was translated and back-translated according to published guidelines, subsequently tested in a sample of medical patients and further refined by seeking health care professionals' perceptions and input from lay people. Results The data demonstrate the importance of including various perspectives in this process, through which sequential modifications were made to the original scales. Issues relating to religious beliefs, educational and health literacy differences between countries highlight the relevance of taking cultural values into account. Some led to significant modifications, discussed in this first paper, and tested for validity and reliability in a second paper. PMID:25214927

  3. Automatic Training Sample Selection for a Multi-Evidence Based Crop Classification Approach

    NASA Astrophysics Data System (ADS)

    Chellasamy, M.; Ferre, P. A. Ty; Humlekrog Greve, M.

    2014-09-01

    An approach to use the available agricultural parcel information to automatically select training samples for crop classification is investigated. Previous research addressed the multi-evidence crop classification approach using an ensemble classifier. This first produced confidence measures using three Multi-Layer Perceptron (MLP) neural networks trained separately with spectral, texture and vegetation indices; classification labels were then assigned based on Endorsement Theory. The present study proposes an approach to feed this ensemble classifier with automatically selected training samples. The available vector data representing crop boundaries with corresponding crop codes are used as a source for training samples. These vector data are created by farmers to support subsidy claims and are, therefore, prone to errors such as mislabeling of crop codes and boundary digitization errors. The proposed approach is named ECRA (Ensemble based Cluster Refinement Approach). ECRA first automatically removes mislabeled samples and then selects the refined training samples in an iterative training-reclassification scheme. Mislabel removal is based on the expectation that mislabels in each class will be far from the cluster centroid. However, this must be a soft constraint, especially when working with a hypothesis space that does not contain a good approximation of the target classes. Difficulty in finding a good approximation often exists either due to less informative data or a large hypothesis space. Thus, this approach uses the spectral, texture and indices domains in an ensemble framework to iteratively remove the mislabeled pixels from the crop clusters declared by the farmers. Once the clusters are refined, the selected border samples are used for final learning and the unknown samples are classified using the multi-evidence approach. The study is implemented with WorldView-2 multispectral imagery acquired for a study area containing 10 crop classes. The proposed
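
    The core of the mislabel-removal step can be sketched as follows in one feature domain: samples far from the centroid of their declared class are flagged and dropped before retraining. ECRA does this iteratively across the spectral, texture and indices domains within the ensemble; the z-score cutoff and the single-domain treatment here are simplifying assumptions.

      # Minimal sketch of centroid-based mislabel screening in one feature domain.
      import numpy as np

      def refine_training_samples(X, y, z_cutoff=2.0):
          keep = np.ones(len(y), dtype=bool)
          for label in np.unique(y):
              idx = np.where(y == label)[0]
              centroid = X[idx].mean(axis=0)
              dist = np.linalg.norm(X[idx] - centroid, axis=1)
              z = (dist - dist.mean()) / (dist.std() + 1e-9)
              keep[idx[z > z_cutoff]] = False   # drop samples far from their declared class
          return X[keep], y[keep]

      X = np.random.rand(200, 8)                # e.g., 8 spectral/texture features per pixel
      y = np.random.randint(0, 10, 200)         # farmer-declared crop codes
      X_ref, y_ref = refine_training_samples(X, y)
      print(len(y), "->", len(y_ref), "training samples after refinement")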

  4. Interindividual differences in intraindividual changes in proactivity during organizational entry: a latent growth modeling approach to understanding newcomer adaptation.

    PubMed

    Chan, D; Schmitt, N

    2000-04-01

    Intraindividual change over time is the essence of the change phenomenon hypothesized to occur in the individual newcomer adaptation process. Many important adaptation questions cannot be answered without an adequate conceptualization and assessment of intraindividual change. Using a latent growth modeling approach to data collected from 146 doctoral program newcomers over 4 repeated measurements spaced at 1-month intervals, the authors explicitly modeled interindividual differences in intraindividual changes in newcomer proactivities (information seeking, relationship building) and proximal adaptation outcomes (task mastery, role clarity, social integration) during organizational entry. Results indicated that changes in proactivity may be related to newcomer characteristics and adaptation outcomes in interesting ways that have not been previously examined.

  5. Further results on the L1 analysis of sampled-data systems via kernel approximation approach

    NASA Astrophysics Data System (ADS)

    Kim, Jung Hoon; Hagiwara, Tomomichi

    2016-08-01

    This paper gives two methods for the L1 analysis of sampled-data systems, by which we mean computing the L∞-induced norm of sampled-data systems. This is achieved by developing what we call the kernel approximation approach in the setting of sampled-data systems. We first consider the lifting treatment of sampled-data systems and give an operator-theoretic representation of their input/output relation. We further apply the fast-lifting technique, by which the sampling interval [0, h) is divided into M subintervals of equal width, and provide methods for computing the L∞-induced norm. In contrast to a similar approach developed earlier, called the input approximation approach, we use an idea of kernel approximation, in which the kernel function of an input operator and the hold function of an output operator are approximated by piecewise constant or piecewise linear functions. Furthermore, it is shown that the approximation errors in the piecewise constant approximation or piecewise linear approximation scheme converge to 0 at the rate of 1/M or 1/M², respectively. In comparison with the existing input approximation approach, in which the input function (rather than the kernel function) of the input operator is approximated by piecewise constant or piecewise linear functions, we show that the kernel approximation approach gives improved computation results. More precisely, even though the convergence rates in the kernel approximation approach remain qualitatively the same as those in the input approximation approach, the newly developed approach can lead to quantitatively smaller approximation errors than the input approximation approach, particularly when the piecewise linear approximation scheme is used. Finally, a numerical example is given to demonstrate the effectiveness of the kernel approximation approach with this scheme.
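
    As a compact statement of the quantity involved (our notation, a sketch of the definitions rather than the paper's exact development), the L1 analysis computes the L∞-induced norm of the sampled-data system Σ, and the kernel approximation with M fast-lifting subintervals is stated to converge at the following rates:

      \[
      \|\Sigma\|_{\infty\text{-ind}} \;=\; \sup_{0 \neq w \in L_\infty} \frac{\|\Sigma w\|_{L_\infty}}{\|w\|_{L_\infty}},
      \qquad
      \Bigl|\,\widehat{\|\Sigma\|}_{M} - \|\Sigma\|_{\infty\text{-ind}}\Bigr| \;=\;
      \begin{cases}
      O(1/M), & \text{piecewise constant approximation},\\
      O(1/M^{2}), & \text{piecewise linear approximation},
      \end{cases}
      \]

      where \(\widehat{\|\Sigma\|}_{M}\) denotes the computed estimate obtained with M fast-lifting subintervals.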

  6. Adaptation of Cryo-Sectioning for IEM Labeling of Asymmetric Samples: A Study Using Caenorhabditis elegans.

    PubMed

    Nicolle, Ophélie; Burel, Agnès; Griffiths, Gareth; Michaux, Grégoire; Kolotuev, Irina

    2015-08-01

    Cryo-sectioning procedures, initially developed by Tokuyasu, have been successfully improved for tissues and cultured cells, enabling efficient protein localization on the ultrastructural level. Without a standard procedure applicable to any sample, currently existing protocols must be individually modified for each model organism or asymmetric sample. Here, we describe our method that enables reproducible cryo-sectioning of Caenorhabditis elegans larvae/adults and embryos. We have established a chemical-fixation procedure in which flat embedding considerably simplifies manipulation and lateral orientation of larvae or adults. To bypass the limitations of chemical fixation, we have improved the hybrid cryo-immobilization-rehydration technique and reduced the overall time required to complete this procedure. Using our procedures, precise cryo-sectioning orientation can be combined with good ultrastructural preservation and efficient immuno-electron microscopy protein localization. Also, GFP fluorescence can be efficiently preserved, permitting a direct correlation of the fluorescent signal and its subcellular localization. Although developed for C. elegans samples, our method addresses the challenge of working with small asymmetric samples in general, and thus could be used to improve the efficiency of immuno-electron localization in other model organisms.

  7. Reliability and Validity of the Spanish Adaptation of EOSS, Comparing Normal and Clinical Samples

    ERIC Educational Resources Information Center

    Valero-Aguayo, Luis; Ferro-Garcia, Rafael; Lopez-Bermudez, Miguel Angel; de Huralde, Ma. Angeles Selva-Lopez

    2012-01-01

    The Experiencing of Self Scale (EOSS) was created for the evaluation of Functional Analytic Psychotherapy (Kohlenberg & Tsai, 1991, 2001, 2008) in relation to the concept of the experience of personal self as socially and verbally constructed. This paper presents a reliability and validity study of the EOSS with a Spanish sample (582 participants,…

  8. Where do adaptive shifts occur during invasion A multidisciplinary approach to unravel cold adaptation in a tropical ant species invading the Mediterranean zone

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Although evolution is now recognized as improving the invasive success of populations, where and when key adaptation event(s) occur often remains unclear. Here we used a multidisciplinary approach to disentangle the eco-evolutionary scenario of invasion of a Mediterranean zone (i.e. Israel) by the t...

  9. Reconstruction for distributed video coding: a Markov random field approach with context-adaptive smoothness prior

    NASA Astrophysics Data System (ADS)

    Zhang, Yongsheng; Xiong, Hongkai; He, Zhihai; Yu, Songyu

    2010-07-01

    An important issue in Wyner-Ziv video coding is the reconstruction of Wyner-Ziv frames with decoded bit-planes. So far, there are two major approaches: the Maximum a Posteriori (MAP) reconstruction and the Minimum Mean Square Error (MMSE) reconstruction algorithms. However, these approaches do not exploit smoothness constraints in natural images. In this paper, we model a Wyner-Ziv frame by Markov random fields (MRFs), and produce reconstruction results by finding an MAP estimation of the MRF model. In the MRF model, the energy function consists of two terms: a data term, MSE distortion metric in this paper, measuring the statistical correlation between side-information and the source, and a smoothness term enforcing spatial coherence. In order to better describe the spatial constraints of images, we propose a context-adaptive smoothness term by analyzing the correspondence between the output of Slepian-Wolf decoding and successive frames available at decoders. The significance of the smoothness term varies in accordance with the spatial variation within different regions. To some extent, the proposed approach is an extension to the MAP and MMSE approaches by exploiting the intrinsic smoothness characteristic of natural images. Experimental results demonstrate a considerable performance gain compared with the MAP and MMSE approaches.
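
    A schematic form of the energy being minimized (our notation; the exact data term and the context-adaptive weighting are defined in the paper) is

      \[
      E(\mathbf{x}) \;=\; \sum_{p} \bigl(x_p - \hat{x}_p\bigr)^{2}
      \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} w_{pq}\,\rho\bigl(x_p - x_q\bigr),
      \]

      where \(\hat{x}_p\) is the side-information-based estimate at pixel \(p\) consistent with the decoded bit-planes, \(\mathcal{N}\) is the neighbourhood system, \(w_{pq}\) is the context-adaptive smoothness weight, and the reconstructed frame is the MAP estimate \(\arg\min_{\mathbf{x}} E(\mathbf{x})\).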

  10. Adaptive Filter-bank Approach to Restoration and Spectral Analysis of Gapped Data

    NASA Astrophysics Data System (ADS)

    Stoica, Petre; Larsson, Erik G.; Li, Jian

    2000-10-01

    The main topic of this paper is the nonparametric estimation of complex (both amplitude and phase) spectra from gapped data, as well as the restoration of such data. The focus is on the extension of the APES (amplitude and phase estimation) approach to data sequences with gaps. APES, which is one of the most successful existing nonparametric approaches to the spectral analysis of full data sequences, uses a bank of narrowband adaptive (both frequency and data dependent) filters to estimate the spectrum. A recent interpretation of this approach showed that the filterbank used by APES and the resulting spectrum minimize a least-squares (LS) fitting criterion between the filtered sequence and its spectral decomposition. The extended approach, which is called GAPES for somewhat obvious reasons, capitalizes on the aforementioned interpretation: it minimizes the APES-LS fitting criterion with respect to the missing data as well. This should be a sensible thing to do whenever the full data sequence is stationary, and hence the missing data have the same spectral content as the available data. We use both simulated and real data examples to show that GAPES estimated spectra and interpolated data sequences have excellent accuracy. We also show the performance gain achieved by GAPES over two of the most commonly used approaches for gapped-data spectral analysis, viz., the periodogram and the parametric CLEAN method. This work was partly supported by the Swedish Foundation for Strategic Research.

  11. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics). PMID:24806652
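
    A real-valued sketch of the collaborative (convex combination) structure is given below: two LMS sub-filters of different model order adapt on their own errors while a mixing parameter lambda is adapted by stochastic gradient, and its steady-state value indicates which model fits the data better. This stands in for the quaternion-domain QLMS/WL-QLMS combination with fractional tap length; the step sizes and the real-valued simplification are assumptions.

      # Real-valued stand-in for a convex combination of two adaptive filters.
      import numpy as np

      def sigmoid(a):
          return 1.0 / (1.0 + np.exp(-a))

      rng = np.random.default_rng(0)
      N, L = 5000, 4
      x = rng.standard_normal(N)
      h_true = np.array([0.6, -0.3, 0.2, 0.1])
      d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)

      w_full = np.zeros(L)       # full-order model (stands in for the widely linear filter)
      w_short = np.zeros(2)      # under-modelled alternative (stands in for the strictly linear one)
      mu, mu_a = 0.02, 10.0      # assumed step sizes
      a = 0.0                    # lambda = sigmoid(a) is the convex mixing parameter

      for n in range(L - 1, N):
          u = x[n - L + 1:n + 1][::-1]                  # most recent L samples, newest first
          y1, y2 = w_full @ u, w_short @ u[:2]
          lam = sigmoid(a)
          y = lam * y1 + (1 - lam) * y2
          e, e1, e2 = d[n] - y, d[n] - y1, d[n] - y2
          w_full += mu * e1 * u                         # each sub-filter adapts on its own error
          w_short += mu * e2 * u[:2]
          a += mu_a * e * (y1 - y2) * lam * (1 - lam)   # stochastic-gradient update of the mixer

      print("lambda ->", round(sigmoid(a), 3), "(close to 1: the full-order model fits better)")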

  12. Thriving while engaging in risk? Examining trajectories of adaptive functioning, delinquency, and substance use in a nationally representative sample of U.S. adolescents.

    PubMed

    Warren, Michael T; Wray-Lake, Laura; Rote, Wendy M; Shubert, Jennifer

    2016-02-01

    Recent advances in positive youth development theory and research explicate complex associations between adaptive functioning and risk behavior, acknowledging that high levels of both co-occur in the lives of some adolescents. However, evidence on nuanced overlapping developmental trajectories of adaptive functioning and risk has been limited to 1 sample of youth and a single conceptualization of adaptive functioning. We build on prior work by utilizing a nationally representative sample of U.S. adolescents (N = 1,665) followed from 7th grade until after high school and using a measure of adaptive functioning that was validated in a secondary sample of older adolescents (N = 93). In using dual trajectory growth mixture modeling to investigate links between developmental trajectories of adaptive functioning and delinquency and substance use, respectively, results provided evidence of heterogeneity in the overlap between adaptive functioning and risk trajectories. Males were more likely to be in the highest adaptive functioning group as well as the most at-risk delinquency class. The magnitude of negative associations between adaptive functioning and both risk behaviors decreased at Wave 3, indicating a decoupling of adaptive functioning and risk as youth aged. These findings converge in underscoring the need to generate a cohesive theory that specifies factors that promote adaptive functioning and risk in concert.

  13. Acquiring Peak Samples from Phytoplankton Thin Layers and Intermediate Nepheloid Layers by an Autonomous Underwater Vehicle with Adaptive Triggering

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; McEwen, R.; Ryan, J. P.; Bellingham, J. G.; Harvey, J.; Vrijenhoek, R.

    2010-12-01

    Phytoplankton thin layers (PTLs) affect many fundamental aspects of coastal ocean ecology including primary productivity, development of harmful algal blooms (HABs) and the survival and growth of zooplankton and fish larvae. Intermediate nepheloid layers (INLs) that contain suspended particulate matter transported from the bottom boundary layer of continental shelves and slopes also affect biogeochemistry and ecology of ocean margins. To better understand the impacts of these types of layers, we have developed an adaptive sampling method for an autonomous underwater vehicle (AUV) to detect a layer (adjusting detection parameters in situ), acquire water samples from peaks in the layer, and acquire control samples outside the layer. We have used the method in a number of field experiments with the AUV Dorado, which is equipped with ten water samplers (called "gulpers"). In real time, the algorithm tracks background levels of fluorescence and optical backscatter and the peaks' baseline to ensure that detection is tuned to the ambient conditions. The algorithm cross-checks fluorescence and backscatter signals to differentiate PTLs from INLs. To capture peak water samples with minimal delay, the algorithm exploits the AUV's sawtooth (i.e., yo-yo) trajectory: the vehicle crosses the detected layer twice in one yo-yo cycle. At the first crossing, it detects the layer's peak and saves its signal height. Sampling is triggered at the second crossing when the signal reaches the saved peak height plus meeting additional timing and depth conditions. The algorithm is also capable of triggering gulpers to acquire control samples outside the layer for comparison with ambient water. The sequence of peak and control samples can be set based on need. In recent AUV Dorado missions, the algorithm triggered the gulpers to acquire peak and control samples from INLs and PTLs in Monterey Bay. Zooplankton analysis of some peak samples showed very high concentrations of mussel and barnacle
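
    The two-crossing trigger logic can be sketched as follows: a slow tracker follows the ambient (background) fluorescence, the peak seen while crossing a detected layer is remembered, and a sampler fires on the next crossing when the live signal again reaches that saved peak height. The background tracker, the threshold factor, and the omission of the depth/timing cross-checks and backscatter comparison are simplifying assumptions.

      # Simplified sketch of the two-crossing peak-capture trigger; parameters are illustrative.
      import numpy as np

      class PeakTrigger:
          def __init__(self, bg_alpha=0.01, layer_factor=1.5):
              self.background = None             # slow exponential tracker of ambient fluorescence
              self.bg_alpha = bg_alpha
              self.layer_factor = layer_factor   # layer = signal above background by this factor
              self.in_layer = False
              self.crossing_max = 0.0            # peak seen during the current crossing
              self.saved_peak = None             # peak height saved from the previous crossing

          def update(self, fluorescence):
              """Return True when a sampler should fire on this sample."""
              if self.background is None:
                  self.background = fluorescence
              self.background += self.bg_alpha * (fluorescence - self.background)

              inside = fluorescence > self.layer_factor * self.background
              fire = False
              if inside:
                  if not self.in_layer:
                      self.crossing_max = 0.0                # entering the layer again
                  self.crossing_max = max(self.crossing_max, fluorescence)
                  if self.saved_peak is not None and fluorescence >= self.saved_peak:
                      fire = True                            # second crossing reached the saved peak
                      self.saved_peak = None
              elif self.in_layer:
                  self.saved_peak = self.crossing_max        # exited the layer: save this crossing's peak
              self.in_layer = inside
              return fire

      trig = PeakTrigger()
      signal = 1.0 + 3.0 * np.exp(-0.5 * ((np.arange(400) % 200 - 100) / 15.0) ** 2)  # two yo-yo crossings
      fires = [i for i, v in enumerate(signal) if trig.update(v)]
      print("sampler fired at samples:", fires)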

  14. Difference, adapted physical activity and human development: potential contribution of capabilities approach.

    PubMed

    Silva, Carla Filomena; Howe, P David

    2012-01-01

    This paper is a call to Adapted Physical Activity (APA) professionals to increase the reflexive nature of their practice. Drawing upon Foucault's concept of governmentality (1977), we argue that APA action may work against its own publicized goals of empowerment and self-determination. To highlight these inconsistencies, we draw upon historical and social factors that explain the implicit dangers of practice not following policy. We propose that APA practitioners work according to ethical guidelines, based upon a capabilities approach (Nussbaum, 2006, 2011; Sen, 2009), to counteract possible adverse effects of APA practitioner action. A capabilities approach is conducive to the development of each individual's human potential, by holistically considering the consequences of physical activity (i.e., its biological, cultural, social, and psychological dimensions). To conclude, this paper offers suggestions that may lead to an ethical reflection aligned with the best interest of APA's users.

  15. Adaptive life simulator: A novel approach to modeling the cardiovascular system

    SciTech Connect

    Kangas, L.J.; Keller, P.E.; Hashem, S.

    1995-06-01

    In this paper, an adaptive life simulator (ALS) is introduced. The ALS models a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. These models are developed for use in applications that require simulations of cardiovascular systems, such as medical mannequins, and in medical diagnostic systems. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the actual variables of an individual can subsequently be used for diagnosis. This approach also exploits sensor fusion applied to biomedical sensors. Sensor fusion optimizes the utilization of the sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  16. Enhancement and bias removal of optical coherence tomography images: An iterative approach with adaptive bilateral filtering.

    PubMed

    Sudeep, P V; Issac Niwas, S; Palanisamy, P; Rajan, Jeny; Xiaojun, Yu; Wang, Xianghong; Luo, Yuemei; Liu, Linbo

    2016-04-01

    Optical coherence tomography (OCT) has continually evolved and expanded as one of the most valuable routine tests in ophthalmology. However, noise (speckle) degrades the quality of the acquired OCT images and makes them difficult to analyze. In this paper, an iterative approach based on bilateral filtering is proposed for speckle reduction in multiframe OCT data. A Gamma noise model is assumed for the observed OCT image. First, an adaptive version of the conventional bilateral filter is applied to enhance the multiframe OCT data, and then the bias due to noise is removed from each of the filtered frames. These unbiased filtered frames are then refined using an iterative approach. Finally, these refined frames are averaged to produce the denoised OCT image. Experimental results on phantom images and real OCT retinal images demonstrate the effectiveness of the proposed filter. PMID:26907572
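
    A schematic sketch of the multiframe pipeline is given below: each frame is bilateral-filtered, an assumed noise-floor bias is subtracted, and the unbiased frames are averaged. The adaptive parameter selection and the Gamma-model bias term used in the paper are replaced here by fixed illustrative values.

      # Schematic multiframe denoising sketch; filter parameters and the bias are illustrative.
      import numpy as np

      def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
          H, W = img.shape
          out = np.zeros_like(img)
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
          padded = np.pad(img, radius, mode="reflect")
          for i in range(H):
              for j in range(W):
                  patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                  rangew = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
                  w = spatial * rangew
                  out[i, j] = np.sum(w * patch) / np.sum(w)
          return out

      def denoise_frames(frames, bias=0.02):
          filtered = [np.clip(bilateral_filter(f) - bias, 0, None) for f in frames]
          return np.mean(filtered, axis=0)   # average the unbiased, filtered frames

      frames = [np.clip(np.random.gamma(4.0, 0.05, (32, 32)), 0, 1) for _ in range(4)]
      print(denoise_frames(frames).shape)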

  17. Adaptive MANET multipath routing algorithm based on the simulated annealing approach.

    PubMed

    Kim, Sungwook

    2014-01-01

    A mobile ad hoc network is a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like a temporary topology and the absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed that employs a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in hostile, dynamic, real-world network situations, and is therefore a powerful method for finding an effective solution to the conflicting demands of the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, compared with existing schemes.
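
    A generic simulated-annealing sketch in the spirit of the scheme is shown below: traffic is split over candidate paths, a neighbour move perturbs the split, and worse moves are accepted with probability exp(-d/T) under a geometric cooling schedule. The cost model (hop count, residual energy, load-balance penalty), the cooling schedule and the move are illustrative assumptions, not the paper's formulation.

      # Generic simulated annealing over traffic splits on three candidate paths; toy cost model.
      import math, random

      PATH_HOPS = [3, 4, 6]            # hop counts of three candidate paths
      PATH_ENERGY = [0.9, 0.6, 0.8]    # normalized residual energy along each path

      def cost(split):
          path_cost = sum(s * (h / max(PATH_HOPS) + (1 - e))
                          for s, h, e in zip(split, PATH_HOPS, PATH_ENERGY))
          balance_penalty = 0.5 * sum(s * s for s in split)   # discourage loading a single path
          return path_cost + balance_penalty

      def neighbour(split):
          i, j = random.sample(range(len(split)), 2)
          delta = min(split[i], random.uniform(0, 0.1))
          new = list(split)
          new[i] -= delta
          new[j] += delta
          return new

      def anneal(iters=5000, t0=1.0, cooling=0.999):
          split = [1.0 / len(PATH_HOPS)] * len(PATH_HOPS)
          best, t = split, t0
          for _ in range(iters):
              cand = neighbour(split)
              d = cost(cand) - cost(split)
              if d < 0 or random.random() < math.exp(-d / t):   # accept worse moves with prob e^(-d/T)
                  split = cand
              if cost(split) < cost(best):
                  best = split
              t *= cooling
          return best

      print([round(s, 2) for s in anneal()])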

  18. Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.

    PubMed

    Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A

    2015-10-26

    Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange.
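
    For reference, the thermodynamic-integration estimator that both IT-TI and sesTI evaluate (a generic statement of TI, not this paper's specific simulation protocol) is

      \[
      \Delta A \;=\; \int_{0}^{1} \Bigl\langle \frac{\partial U(\lambda)}{\partial \lambda} \Bigr\rangle_{\lambda}\, d\lambda
      \;\approx\; \sum_{k} w_k \Bigl\langle \frac{\partial U}{\partial \lambda} \Bigr\rangle_{\lambda_k},
      \]

      where the quadrature weights \(w_k\) and the \(\lambda_k\) windows are chosen by the practitioner; in IT-TI the ensemble average at each window is taken over multiple independent trajectories, whereas in sesTI the replicas at each window are coupled through the swarm interaction to help them cross intramolecular barriers.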

  19. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution towards overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  20. Adaptive speed/position control of induction motor based on SPR approach

    NASA Astrophysics Data System (ADS)

    Lee, Hou-Tsan

    2014-11-01

    A sensorless speed/position tracking control scheme for induction motors subject to unknown load torque is proposed via an adaptive strictly positive real (SPR) design approach. A special nonlinear coordinate transform is first provided to reformulate the dynamical model of the induction motor. The information on rotor fluxes can thus be derived from the dynamical model to decide on the proportion of input voltage in the d-q frame under the constraint of the maximum power transfer property of induction motors. Based on the SPR approach, the speed and position control objectives can be achieved. The proposed scheme provides speed/position control of induction motors without knowledge of some mechanical system parameters, such as the motor inertia, the motor damping coefficient, and the unknown payload. The adaptive control technique is therefore incorporated into the field-oriented control scheme to deal with the unknown parameters. A thorough proof is derived to guarantee the stability of the speed and position control systems of induction motors. In addition, numerical simulation and experimental results are provided to validate the effectiveness of the proposed control scheme.

  1. Wavefront sensorless approaches to adaptive optics for in vivo fluorescence imaging of mouse retina

    NASA Astrophysics Data System (ADS)

    Wahl, Daniel J.; Bonora, Stefano; Mata, Oscar S.; Haunerland, Bengt K.; Zawadzki, Robert J.; Sarunic, Marinko V.; Jian, Yifan

    2016-03-01

    Adaptive optics (AO) is necessary to correct aberrations when imaging the mouse eye with high numerical aperture. In order to obtain cellular resolution, we have implemented wavefront sensorless adaptive optics for in vivo fluorescence imaging of mouse retina. Our approach includes a lens-based system and MEMS deformable mirror for aberration correction. The AO system was constructed with a reflectance channel for structural images and fluorescence channel for functional images. The structural imaging was used in real-time for navigation on the retina using landmarks such as blood vessels. We have also implemented a tunable liquid lens to select the retinal layer of interest at which to perform the optimization. At the desired location on the mouse retina, the optimization algorithm used the fluorescence image data to drive a modal hill-climbing algorithm using an intensity or sharpness image quality metric. The optimization requires ~30 seconds to complete a search up to the 20th Zernike mode. In this report, we have demonstrated the AO performance for high-resolution images of the capillaries in a fluorescence angiography. We have also made progress on an approach to AO with pupil segmentation as a possible sensorless technique suitable for small animal retinal imaging. Pupil segmentation AO was implemented on the same ophthalmic system and imaging performance was demonstrated on fluorescent beads with induced aberrations.
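
    The modal hill-climbing loop can be sketched as follows: for each Zernike mode in turn, a few coefficient values are tried, an image-quality metric is evaluated, and the best value is kept before moving to the next mode. Here the instrument is replaced by a synthetic metric that peaks when the applied correction cancels an assumed aberration; the search range, mode count and metric are illustrative stand-ins for the intensity or sharpness metrics used on the fluorescence frames.

      # Modal hill-climbing sketch with a synthetic image-quality metric; all values illustrative.
      import numpy as np

      N_MODES = 20
      TRIAL_AMPLITUDES = np.linspace(-0.5, 0.5, 11)
      rng = np.random.default_rng(1)
      TRUE_ABERRATION = rng.uniform(-0.3, 0.3, N_MODES)   # stands in for the mouse eye + optics

      def image_quality(coeffs):
          """Synthetic sharpness metric: highest when the applied correction cancels the aberration."""
          residual = coeffs + TRUE_ABERRATION              # ideal correction is the negative aberration
          return float(np.exp(-np.sum(residual ** 2)))

      def modal_hill_climb():
          coeffs = np.zeros(N_MODES)
          for mode in range(N_MODES):                      # one pass, mode by mode
              metrics = []
              for amp in TRIAL_AMPLITUDES:
                  trial = coeffs.copy()
                  trial[mode] = amp
                  metrics.append(image_quality(trial))
              coeffs[mode] = TRIAL_AMPLITUDES[int(np.argmax(metrics))]
          return coeffs

      corr = modal_hill_climb()
      print("metric before:", round(image_quality(np.zeros(N_MODES)), 4),
            "after:", round(image_quality(corr), 4))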

  2. Social Daydreaming and Adjustment: An Experience-Sampling Study of Socio-Emotional Adaptation During a Life Transition.

    PubMed

    Poerio, Giulia L; Totterdell, Peter; Emerson, Lisa-Marie; Miles, Eleanor

    2016-01-01

    Estimates suggest that up to half of waking life is spent daydreaming; that is, engaged in thought that is independent of, and unrelated to, one's current task. Emerging research indicates that daydreams are predominately social suggesting that daydreams may serve socio-emotional functions. Here we explore the functional role of social daydreaming for socio-emotional adjustment during an important and stressful life transition (the transition to university) using experience-sampling with 103 participants over 28 days. Over time, social daydreams increased in their positive characteristics and positive emotional outcomes; specifically, participants reported that their daydreams made them feel more socially connected and less lonely, and that the content of their daydreams became less fanciful and involved higher quality relationships. These characteristics then predicted less loneliness at the end of the study, which, in turn was associated with greater social adaptation to university. Feelings of connection resulting from social daydreams were also associated with less emotional inertia in participants who reported being less socially adapted to university. Findings indicate that social daydreaming is functional for promoting socio-emotional adjustment to an important life event. We highlight the need to consider the social content of stimulus-independent cognitions, their characteristics, and patterns of change, to specify how social thoughts enable socio-emotional adaptation.

  4. A modified RNA-Seq approach for whole genome sequencing of RNA viruses from faecal and blood samples.

    PubMed

    Batty, Elizabeth M; Wong, T H Nicholas; Trebes, Amy; Argoud, Karène; Attar, Moustafa; Buck, David; Ip, Camilla L C; Golubchik, Tanya; Cule, Madeleine; Bowden, Rory; Manganis, Charis; Klenerman, Paul; Barnes, Eleanor; Walker, A Sarah; Wyllie, David H; Wilson, Daniel J; Dingle, Kate E; Peto, Tim E A; Crook, Derrick W; Piazza, Paolo

    2013-01-01

    To date, very large scale sequencing of many clinically important RNA viruses has been complicated by their high population molecular variation, which creates challenges for polymerase chain reaction and sequencing primer design. Many RNA viruses are also difficult or currently not possible to culture, severely limiting the amount and purity of available starting material. Here, we describe a simple, novel, high-throughput approach to Norovirus and Hepatitis C virus whole genome sequence determination based on RNA shotgun sequencing (also known as RNA-Seq). We demonstrate the effectiveness of this method by sequencing three Norovirus samples from faeces and two Hepatitis C virus samples from blood, on an Illumina MiSeq benchtop sequencer. More than 97% of reference genomes were recovered. Compared with Sanger sequencing, our method had no nucleotide differences in 14,019 nucleotides (nt) for Noroviruses (from a total of 2 Norovirus genomes obtained with Sanger sequencing), and 8 variants in 9,542 nt for Hepatitis C virus (1 variant per 1,193 nt). The three Norovirus samples had 2, 3, and 2 distinct positions called as heterozygous, while the two Hepatitis C virus samples had 117 and 131 positions called as heterozygous. To confirm that our sample and library preparation could be scaled to true high-throughput, we prepared and sequenced an additional 77 Norovirus samples in a single batch on an Illumina HiSeq 2000 sequencer, recovering >90% of the reference genome in all but one sample. No discrepancies were observed across 118,757 nt compared between Sanger and our custom RNA-Seq method in 16 samples. By generating viral genomic sequences that are not biased by primer-specific amplification or enrichment, this method offers the prospect of large-scale, affordable studies of RNA viruses which could be adapted to routine diagnostic laboratory workflows in the near future, with the potential to directly characterize within-host viral diversity.

  5. Using adaptive sampling and triangular meshes for the processing and inversion of potential field data

    NASA Astrophysics Data System (ADS)

    Foks, Nathan Leon

    The interpretation of geophysical data plays an important role in the analysis of potential field data in resource exploration industries. Two categories of interpretation techniques are discussed in this thesis; boundary detection and geophysical inversion. Fault or boundary detection is a method to interpret the locations of subsurface boundaries from measured data, while inversion is a computationally intensive method that provides 3D information about subsurface structure. My research focuses on these two aspects of interpretation techniques. First, I develop a method to aid in the interpretation of faults and boundaries from magnetic data. These processes are traditionally carried out using raster grid and image processing techniques. Instead, I use unstructured meshes of triangular facets that can extract inferred boundaries using mesh edges. Next, to address the computational issues of geophysical inversion, I develop an approach to reduce the number of data in a data set. The approach selects the data points according to a user specified proxy for its signal content. The approach is performed in the data domain and requires no modification to existing inversion codes. This technique adds to the existing suite of compressive inversion algorithms. Finally, I develop an algorithm to invert gravity data for an interfacing surface using an unstructured mesh of triangular facets. A pertinent property of unstructured meshes is their flexibility at representing oblique, or arbitrarily oriented structures. This flexibility makes unstructured meshes an ideal candidate for geometry based interface inversions. The approaches I have developed provide a suite of algorithms geared towards large-scale interpretation of potential field data, by using an unstructured representation of both the data and model parameters.
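
    The data-reduction idea can be sketched as follows: each observation is ranked by a user-chosen proxy for its signal content and only the highest-ranking fraction is passed to the inversion, with no change to the inversion code itself. The particular proxy (deviation from the median of the nearest neighbours), the neighbour count and the retained fraction are illustrative assumptions.

      # Data-domain reduction by a signal-content proxy; proxy and fraction are illustrative.
      import numpy as np
      from scipy.spatial import cKDTree

      def reduce_data(xy, values, keep_fraction=0.3, k=8):
          tree = cKDTree(xy)
          _, idx = tree.query(xy, k=k + 1)              # each point plus its k nearest neighbours
          neighbour_median = np.median(values[idx[:, 1:]], axis=1)
          proxy = np.abs(values - neighbour_median)     # large where the field varies rapidly
          n_keep = max(1, int(keep_fraction * len(values)))
          return np.argsort(proxy)[-n_keep:]            # indices of the retained observations

      rng = np.random.default_rng(2)
      xy = rng.uniform(0, 10, (500, 2))                 # scattered gravity stations
      values = np.exp(-((xy[:, 0] - 5) ** 2 + (xy[:, 1] - 5) ** 2) / 4)   # synthetic anomaly
      keep = reduce_data(xy, values)
      print(len(keep), "of", len(values), "observations retained for inversion")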

  6. Differential sampling for fast frequency acquisition via adaptive extended least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1987-01-01

    This paper presents a differential signal model along with appropriate sampling techniques for least squares estimation of the frequency and frequency derivatives and possibly the phase and amplitude of a sinusoid received in the presence of noise. The proposed algorithm is recursive in measurements and thus the computational requirement increases only linearly with the number of measurements. The dimension of the state vector in the proposed algorithm does not depend upon the number of measurements and is quite small, typically around four. This is an advantage when compared to previous algorithms wherein the dimension of the state vector increases monotonically with the product of the frequency uncertainty and the observation period. Such a computational simplification may possibly result in some loss of optimality. However, by applying the sampling techniques of the paper, such a possible loss in optimality can be made small.

  7. Behavior Change Interventions to Improve the Health of Racial and Ethnic Minority Populations: A Tool Kit of Adaptation Approaches

    PubMed Central

    Davidson, Emma M; Liu, Jing Jing; Bhopal, Raj; White, Martin; Johnson, Mark RD; Netto, Gina; Wabnitz, Cecile; Sheikh, Aziz

    2013-01-01

    Context Adapting behavior change interventions to meet the needs of racial and ethnic minority populations has the potential to enhance their effectiveness in the target populations. But because there is little guidance on how best to undertake these adaptations, work in this field has proceeded without any firm foundations. In this article, we present our Tool Kit of Adaptation Approaches as a framework for policymakers, practitioners, and researchers interested in delivering behavior change interventions to ethnically diverse, underserved populations in the United Kingdom. Methods We undertook a mixed-method program of research on interventions for smoking cessation, increasing physical activity, and promoting healthy eating that had been adapted to improve salience and acceptability for African-, Chinese-, and South Asian–origin minority populations. This program included a systematic review (reported using PRISMA criteria), qualitative interviews, and a realist synthesis of data. Findings We compiled a richly informative data set of 161 publications and twenty-six interviews detailing the adaptation of behavior change interventions and the contexts in which they were undertaken. On the basis of these data, we developed our Tool Kit of Adaptation Approaches, which contains (1) a forty-six-item Typology of Adaptation Approaches; (2) a Pathway to Adaptation, which shows how to use the Typology to create a generic behavior change intervention; and (3) RESET, a decision tool that provides practical guidance on which adaptations to use in different contexts. Conclusions Our Tool Kit of Adaptation Approaches provides the first evidence-derived suite of materials to support the development, design, implementation, and reporting of health behavior change interventions for minority groups. The Tool Kit now needs prospective, empirical evaluation in a range of intervention and population settings. PMID:24320170

  8. Controlling aliased dynamics in motion systems? An identification for sampled-data control approach

    NASA Astrophysics Data System (ADS)

    Oomen, Tom

    2014-07-01

    Sampled-data control systems occasionally exhibit aliased resonance phenomena within the control bandwidth. The aim of this paper is to investigate these aliased dynamics, with application to a high-performance industrial nano-positioning machine. This necessitates a full sampled-data control design approach, since these aliased dynamics endanger both the at-sample performance and the intersample behaviour. The proposed framework comprises both system identification and sampled-data control. In particular, the sampled-data control objective necessitates models that encompass the intersample behaviour, i.e., ideally continuous-time models. Application of the proposed approach to an industrial wafer stage system provides thorough insight and new control design guidelines for controlling aliased dynamics.

  9. Avoidance and activation as keys to depression: adaptation of the Behavioral Activation for Depression Scale in a Spanish sample.

    PubMed

    Barraca, Jorge; Pérez-Alvarez, Marino; Lozano Bleda, José Héctor

    2011-11-01

    In this paper we present the adaptation of the Behavioral Activation for Depression Scale (BADS), developed by Kanter, Mulick, Busch, Berlin, and Martell (2007), in a Spanish sample. The psychometric properties were tested in a sample of 263 participants (124 clinical and 139 non-clinical). The results show that, just as in the original English version, the Spanish BADS is a valid and internally consistent scale. Construct validity was examined by correlation with the BDI-II, AAQ, ATQ, MCQ-30, STAI and EROS. Factor analysis justified the four-dimensions of the original instrument (Activation, Avoidance/Rumination, Work/School Impairment and Social Impairment), although with some differences in the factor loadings of the items. Further considerations about the usefulness of the BADS in the clinical treatment of depressed patients are also suggested.

  10. Integrated approaches to natural resources management in practice: the catalyzing role of National Adaptation Programmes for Action.

    PubMed

    Stucki, Virpi; Smith, Mark

    2011-06-01

    The relationship of forests to water quantity and quality has been debated in recent years. At the same time, the focus on climate change has increased interest in ecosystem restoration as a means of adaptation. Climate change might become one of the key drivers pushing integrated approaches to natural resources management into practice. The National Adaptation Programme of Action (NAPA) is an initiative agreed under the UN Framework Convention on Climate Change. An analysis was carried out to find out how widely ecosystem restoration and integrated approaches have been incorporated into NAPA priority adaptation projects. The data show that the NAPAs can be seen as a potentially important channel for operationalizing various integrated concepts. The key challenge is implementing the NAPA projects. The amount needed to implement the NAPA projects aiming at ecosystem restoration using integrated approaches represents only 0.7% of the money pledged in Copenhagen for climate change adaptation.

  11. Massively parallel sampling of lattice proteins reveals foundations of thermal adaptation

    NASA Astrophysics Data System (ADS)

    Venev, Sergey V.; Zeldovich, Konstantin B.

    2015-08-01

    Evolution of proteins in bacteria and archaea living in different conditions leads to significant correlations between amino acid usage and environmental temperature. The origins of these correlations are poorly understood, and an important question of protein theory, physics-based prediction of the types of amino acids overrepresented in highly thermostable proteins, remains largely unsolved. Here, we extend the random energy model of protein folding by weighting the interaction energies of amino acids by their frequencies in protein sequences and predict the energy gap of proteins designed to fold well at elevated temperatures. To test the model, we present a novel scalable algorithm for simultaneous energy calculation for many sequences in many structures, targeting massively parallel computing architectures such as graphics processing units (GPUs). The energy calculation is performed by multiplying two matrices, one representing the complete set of sequences, and the other describing the contact maps of all structural templates. An implementation of the algorithm for the CUDA platform is available at http://www.github.com/kzeldovich/galeprot and calculates protein folding energies over 250 times faster than a single central processing unit. Analysis of amino acid usage in 64-mer cubic lattice proteins designed to fold well at different temperatures demonstrates an excellent agreement between theoretical and simulated values of the energy gap. The theoretical predictions of temperature trends of amino acid frequencies are significantly correlated with bioinformatics data on 191 bacteria and archaea, and highlight protein folding constraints as a fundamental selection pressure during thermal adaptation in biological evolution.
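
    The "energies as a matrix product" idea can be sketched as follows: every sequence is encoded as a vector of pairwise interaction energies over residue pairs (i, j), every structure as a 0/1 contact-map vector over the same pairs, and all sequence-structure energies follow from one matrix multiplication. The two-letter energy table and tiny sizes below are illustrative; the actual implementation uses 20 amino acids, 64-mer lattice structures and CUDA.

      # Sketch: folding energies for many sequences in many structures via one matrix product.
      import numpy as np
      from itertools import combinations

      L = 8                                          # chain length (toy value)
      PAIRS = list(combinations(range(L), 2))        # all residue pairs (i < j)
      E_TABLE = np.array([[-1.0, -0.2],              # assumed pair energies: HH, HP
                          [-0.2,  0.0]])             #                         PH, PP

      def sequence_row(seq):
          """Vector of pair energies e(a_i, a_j) for one sequence, ordered as PAIRS."""
          return np.array([E_TABLE[seq[i], seq[j]] for i, j in PAIRS])

      def contact_row(contacts):
          """0/1 vector marking which residue pairs are in contact in one structure."""
          cset = {tuple(sorted(c)) for c in contacts}
          return np.array([1.0 if p in cset else 0.0 for p in PAIRS])

      rng = np.random.default_rng(0)
      seqs = rng.integers(0, 2, size=(1000, L))                   # 1000 random two-letter sequences
      structures = [[(0, 3), (2, 5), (4, 7)], [(1, 6), (0, 7)]]   # two toy contact maps

      S = np.array([sequence_row(s) for s in seqs])               # (n_seq, n_pairs)
      C = np.array([contact_row(c) for c in structures])          # (n_struct, n_pairs)
      energies = S @ C.T                                          # (n_seq, n_struct) folding energies
      print(energies.shape, energies.min())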

  12. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  13. Adaption of egg and larvae sampling techniques for lake sturgeon and broadcast spawning fishes in a deep river

    USGS Publications Warehouse

    Roseman, E.F.; Boase, J.; Kennedy, G.; Craig, J.; Soper, K.

    2011-01-01

    In this report we describe how we adapted two techniques for sampling lake sturgeon (Acipenser fulvescens) and other fish early life history stages to meet our research needs in the Detroit River, a deep, flowing Great Lakes connecting channel. First, we developed a buoy-less method for sampling fish eggs and spawning activity using egg mats deployed on the river bottom. The buoy-less method allowed us to fish gear in areas frequented by boaters and recreational anglers, thus eliminating surface obstructions that interfered with recreational and boating activities. The buoy-less method also reduced gear loss due to drift when masses of floating aquatic vegetation would accumulate on buoys and lines, increasing the drag on the gear and pulling it downstream. Second, we adapted a D-frame drift net system formerly employed in shallow streams to assess larval lake sturgeon dispersal for use in the deeper (>8m) Detroit River using an anchor and buoy system. © 2011 Blackwell Verlag, Berlin.

  14. Empirical versus modelling approaches to the estimation of measurement uncertainty caused by primary sampling.

    PubMed

    Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger

    2007-12-01

    Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of 'what is the most appropriate method to estimate the measurement uncertainty?'. Two broad methods for uncertainty estimation are identified, the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all of such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A more simple method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weakness of the two methods of uncertainty estimation. The estimate
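
    A simplified sketch of the empirical (duplicate) method is given below: duplicate samples are taken from a proportion of targets, the combined sampling-plus-analytical standard deviation is estimated from the paired differences, and an expanded uncertainty is reported with a coverage factor of 2. Splitting sampling from analytical variance (via duplicated analyses and ANOVA) is omitted, and the numbers are purely illustrative.

      # Simplified duplicate-method uncertainty estimate; data values are illustrative only.
      import numpy as np

      def duplicate_uncertainty(dup_a, dup_b, coverage=2.0):
          dup_a, dup_b = np.asarray(dup_a, float), np.asarray(dup_b, float)
          d = dup_a - dup_b
          s_meas = np.sqrt(np.sum(d ** 2) / (2 * len(d)))     # sd of measurement from paired duplicates
          mean_conc = np.mean(np.concatenate([dup_a, dup_b]))
          return coverage * s_meas, 100 * coverage * s_meas / mean_conc   # absolute and relative (%) U

      # illustrative duplicate results (e.g., aflatoxin, µg/kg) from 8 sampling targets
      a = [4.1, 7.9, 2.3, 11.0, 5.6, 3.2, 9.4, 6.1]
      b = [3.5, 9.2, 2.9, 12.8, 4.9, 3.0, 10.7, 5.4]
      U_abs, U_rel = duplicate_uncertainty(a, b)
      print(f"expanded uncertainty: {U_abs:.2f} µg/kg ({U_rel:.0f}% of mean)")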

  16. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers.

  17. Assessment of Different Sampling Methods for Measuring and Representing Macular Cone Density Using Flood-Illuminated Adaptive Optics

    PubMed Central

    Feng, Shu; Gale, Michael J.; Fay, Jonathan D.; Faridi, Ambar; Titus, Hope E.; Garg, Anupam K.; Michaels, Keith V.; Erker, Laura R.; Peters, Dawn; Smith, Travis B.; Pennesi, Mark E.

    2015-01-01

    Purpose: To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Methods: Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Results: Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. Conclusions: We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population. PMID:26325414

  18. A User-Driven and Data-Driven Approach for Supporting Teachers in Reflection and Adaptation of Adaptive Tutorials

    ERIC Educational Resources Information Center

    Ben-Naim, Dror; Bain, Michael; Marcus, Nadine

    2009-01-01

    It has been recognized that in order to drive Intelligent Tutoring Systems (ITSs) into mainstream use by the teaching community, it is essential to support teachers through the entire ITS process: Design, Development, Deployment, Reflection and Adaptation. Although research has been done on supporting teachers through design to deployment of ITSs,…

  19. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which indicates better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). Flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
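
    A minimal sketch of the GLUE step that any sampler (LHS or ɛ-NSGAII) would feed into is shown below: behavioral parameter sets are kept by a likelihood threshold and their simulations combined into weighted prediction bounds. The Nash-Sutcliffe informal likelihood, the 0.6 threshold and the synthetic data are illustrative assumptions, not the paper's Xinanjiang setup.

      import numpy as np

      def glue_bounds(sim, obs, threshold=0.6, quantiles=(0.05, 0.95)):
          """sim: (n_sets, n_time) simulated flows; obs: (n_time,) observations."""
          nse = 1.0 - np.sum((sim - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
          behavioral = nse > threshold                     # GLUE behavioral cut-off
          like = nse[behavioral] - threshold               # informal likelihood of kept sets
          weights = like / like.sum()
          lo, hi = [], []
          for t in range(obs.size):                        # weighted quantiles per time step
              order = np.argsort(sim[behavioral, t])
              cdf = np.cumsum(weights[order])
              vals = sim[behavioral, t][order]
              lo.append(np.interp(quantiles[0], cdf, vals))
              hi.append(np.interp(quantiles[1], cdf, vals))
          return behavioral.sum(), np.array(lo), np.array(hi)

      rng = np.random.default_rng(0)
      obs = np.sin(np.linspace(0, 6, 50)) + 2.0            # synthetic "observed" series
      sim = obs + rng.normal(0, 0.3, (200, 50))            # 200 sampled parameter sets
      print(glue_bounds(sim, obs)[0], "behavioral sets")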

  20. Efficient pulse compression for LPI waveforms based on a nonparametric iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Li, Zhengzheng; Nepal, Ramesh; Zhang, Yan; Blake, William

    2015-05-01

    In order to achieve low probability-of-intercept (LPI), radar waveforms are usually long and randomly generated. Due to their randomized nature, matched filter responses (autocorrelations) of those waveforms can have high sidelobes which would mask weaker targets near a strong target, limiting the radar's ability to distinguish close-by targets. To improve resolution and reduce sidelobe contamination, a waveform-independent pulse compression filter is desired. Furthermore, the pulse compression filter needs to be able to adapt to the received signal to achieve optimized performance. As many existing pulse compression techniques require intensive computation, real-time implementation is infeasible. This paper introduces a new adaptive pulse compression technique for LPI waveforms that is based on a nonparametric iterative adaptive approach (IAA). Due to the nonparametric nature, no parameter tuning is required for different waveforms. IAA can achieve super-resolution and sidelobe suppression in both range and Doppler domains. Also it can be extended to directly handle the matched filter (MF) output (called MF-IAA), which further reduces the computational load. The practical impact of LPI waveform operations on IAA and MF-IAA has not been carefully studied in previous work. Herein, typical LPI waveforms such as random phase coding and other non-LPI waveforms are tested with both single-pulse and multi-pulse IAA processing. A realistic airborne radar simulator as well as actual measured radar data are used for the validations. It is validated that, in spite of noticeable differences between test waveforms, the IAA algorithms and their improvements can effectively achieve range-Doppler super-resolution in realistic data.
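
    The core IAA iteration can be sketched compactly; the code below is a generic single-pulse range-profile version with an illustrative random-phase code and hypothetical scatterers, not the authors' implementation or their MF-IAA extension.

      import numpy as np

      def iaa(A, y, n_iter=10):
          """Nonparametric iterative adaptive approach: amplitude estimates per range cell.
          A: (N, K) matrix of delayed copies of the code, y: (N,) received samples."""
          s = (A.conj().T @ y) / np.sum(np.abs(A) ** 2, axis=0)        # matched-filter initialisation
          p = np.abs(s) ** 2
          for _ in range(n_iter):
              R = (A * p) @ A.conj().T                                 # covariance from current powers
              Rinv = np.linalg.inv(R + 1e-9 * np.eye(R.shape[0]))
              num = A.conj().T @ (Rinv @ y)                            # a_k^H R^-1 y for every k
              den = np.einsum('nk,nm,mk->k', A.conj(), Rinv, A)        # a_k^H R^-1 a_k
              s = num / den
              p = np.abs(s) ** 2
          return s

      # toy example: 64-chip random-phase LPI code, two scatterers at delays 20 and 23
      rng = np.random.default_rng(3)
      code = np.exp(1j * 2 * np.pi * rng.random(64))
      N, K = 128, 64
      A = np.zeros((N, K), dtype=complex)
      for k in range(K):
          A[k:k + code.size, k] = code
      y = 1.0 * A[:, 20] + 0.1 * A[:, 23] + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
      print(np.abs(iaa(A, y))[18:26].round(3))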

  1. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  2. 76 FR 65165 - Importation of Plants for Planting; Risk-Based Sampling and Inspection Approach and Propagative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-20

    ... Animal and Plant Health Inspection Service Importation of Plants for Planting; Risk-Based Sampling and...-based sampling approach for the inspection of imported plants for planting. In our previous approach, we... risk posed by the plants for planting. The risk-based sampling and inspection approach will allow us...

  3. Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Huff, Daniel W.

    computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.

  4. An adaptive neural swarm approach for intrusion defense in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Cannady, James

    2011-06-01

    Wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are being increasingly deployed in critical applications due to the flexibility and extensibility of the technology. While these networks possess numerous advantages over traditional wireless systems in dynamic environments they are still vulnerable to many of the same types of host-based and distributed attacks common to those systems. Unfortunately, the limited power and bandwidth available in WSNs and MANETs, combined with the dynamic connectivity that is a defining characteristic of the technology, makes it extremely difficult to utilize traditional intrusion detection techniques. This paper describes an approach to accurately and efficiently detect potentially damaging activity in WSNs and MANETs. It enables the network as a whole to recognize attacks, anomalies, and potential vulnerabilities in a distributive manner that reflects the autonomic processes of biological systems. Each component of the network recognizes activity in its local environment and then contributes to the overall situational awareness of the entire system. The approach utilizes agent-based swarm intelligence to adaptively identify potential data sources on each node and on adjacent nodes throughout the network. The swarm agents then self-organize into modular neural networks that utilize a reinforcement learning algorithm to identify relevant behavior patterns in the data without supervision. Once the modular neural networks have established interconnectivity both locally and with neighboring nodes the analysis of events within the network can be conducted collectively in real-time. The approach has been shown to be extremely effective in identifying distributed network attacks.

  5. Characterization of GM events by insert knowledge adapted re-sequencing approaches.

    PubMed

    Yang, Litao; Wang, Congmao; Holst-Jensen, Arne; Morisset, Dany; Lin, Yongjun; Zhang, Dabing

    2013-01-01

    Detection methods and data from molecular characterization of genetically modified (GM) events are needed by stakeholders such as public risk assessors and regulators. Generally, the molecular characteristics of GM events are incomprehensively revealed by current approaches and biased towards detecting transformation vector derived sequences. GM events are classified based on available knowledge of the sequences of vectors and inserts (insert knowledge). Herein we present three insert knowledge-adapted approaches for characterization of GM events (TT51-1 and T1c-19 rice as examples) based on paired-end re-sequencing, with the advantages of comprehensiveness, accuracy, and automation. The comprehensive molecular characteristics of the two rice events were revealed, with additional unintended insertions identified compared with the results from PCR and Southern blotting. Comprehensive transgene characterization of TT51-1 and T1c-19 is shown to be independent of a priori knowledge of the insert and vector sequences when employing the developed approaches. This also provides an opportunity to identify and characterize unknown GM events. PMID:24088728

  6. Seeking mathematics success for college students: a randomized field trial of an adapted approach

    NASA Astrophysics Data System (ADS)

    Gula, Taras; Hoessler, Carolyn; Maciejewski, Wes

    2015-11-01

    Many students enter the Canadian college system with insufficient mathematical ability and leave the system with little improvement. Those students who enter with poor mathematics ability typically take a developmental mathematics course as their first and possibly only mathematics course. The educational experiences that comprise a developmental mathematics course vary widely and are, too often, ineffective at improving students' ability. This trend is concerning, since low mathematics ability is known to be related to lower rates of success in subsequent courses. To date, little attention has been paid to the selection of an instructional approach to consistently apply across developmental mathematics courses. Prior research suggests that an appropriate instructional method would involve explicit instruction and practising mathematical procedures linked to a mathematical concept. This study reports on a randomized field trial of a developmental mathematics approach at a college in Ontario, Canada. The new approach is an adaptation of the JUMP Math program, an explicit instruction method designed for primary and secondary school curriculae, to the college learning environment. In this study, a subset of courses was assigned to JUMP Math and the remainder was taught in the same style as in the previous years. We found consistent, modest improvement in the JUMP Math sections compared to the non-JUMP sections, after accounting for potential covariates. The findings from this randomized field trial, along with prior research on effective education for developmental mathematics students, suggest that JUMP Math is a promising way to improve college student outcomes.

  7. An efficient and self-adapted approach to the sharpening of color images.

    PubMed

    Kau, Lih-Jen; Lee, Tien-Lin

    2013-01-01

    An efficient approach to the sharpening of color images is proposed in this paper. For this, the image to be sharpened is first transformed to the HSV color model, and then only the channel of Value will be used for the process of sharpening while the other channels are left unchanged. We then apply a proposed edge detector and low-pass filter to the channel of Value to pick out pixels around boundaries. After that, those pixels detected as around edges or boundaries are adjusted so that the boundary can be sharpened, and those nonedge pixels are kept unaltered. The increment or decrement magnitude that is to be added to those edge pixels is determined in an adaptive manner based on global statistics of the image and local statistics of the pixel to be sharpened. With the proposed approach, the discontinuities can be highlighted while most of the original information contained in the image can be retained. Finally, the adjusted channel of Value and that of Hue and Saturation will be integrated to get the sharpened color image. Extensive experiments on natural images will be given in this paper to highlight the effectiveness and efficiency of the proposed approach. PMID:24348136
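
    The general idea (edge-selective adjustment of the Value channel only, with the increment tied to local and global image statistics) can be sketched as below; scikit-image is assumed for the colour conversion and filters, and edge_thresh, max_gain and the Gaussian sigma are illustrative placeholders rather than the paper's parameters.

      import numpy as np
      from skimage import color, filters

      def sharpen_value_channel(rgb, edge_thresh=0.05, max_gain=0.5):
          """Sharpen only the V channel of HSV; leave H and S untouched."""
          hsv = color.rgb2hsv(rgb)
          v = hsv[..., 2]
          smooth = filters.gaussian(v, sigma=1.0)          # low-pass reference
          edges = filters.sobel(v)                         # edge-strength map
          mask = edges > edge_thresh                       # pixels near boundaries only
          # adaptive increment: local contrast scaled by a global statistic of the image
          gain = max_gain * (v - smooth) / (v.std() + 1e-8)
          hsv[..., 2] = np.where(mask, np.clip(v + gain, 0.0, 1.0), v)
          return color.hsv2rgb(hsv)

      img = np.zeros((64, 64, 3))
      img[16:48, 16:48] = [0.2, 0.6, 0.9]                  # synthetic test image with a boundary
      out = sharpen_value_channel(img)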

  8. The adaptive approach for storage assignment by mining data of warehouse management system for distribution centres

    NASA Astrophysics Data System (ADS)

    Ming-Huang Chiang, David; Lin, Chia-Ping; Chen, Mu-Chen

    2011-05-01

    Among distribution centre operations, order picking has been reported to be the most labour-intensive activity. Sophisticated storage assignment policies adopted to reduce the travel distance of order picking have been explored in the literature. Unfortunately, previous research has been devoted to locating entire products from scratch. Instead, this study intends to propose an adaptive approach, a Data Mining-based Storage Assignment approach (DMSA), to find the optimal storage assignment for newly delivered products that need to be put away when there is vacant shelf space in a distribution centre. In the DMSA, a new association index (AIX) is developed to evaluate the fitness between the put away products and the unassigned storage locations by applying association rule mining. With AIX, the storage location assignment problem (SLAP) can be formulated and solved as a binary integer programming. To evaluate the performance of DMSA, a real-world order database of a distribution centre is obtained and used to compare the results from DMSA with a random assignment approach. It turns out that DMSA outperforms random assignment as the number of put away products and the proportion of put away products with high turnover rates increase.

  9. Adaptation policies to increase terrestrial ecosystem resilience. Potential utility of a multicriteria approach

    SciTech Connect

    de Bremond, Ariane; Engle, Nathan L.

    2014-01-30

    Climate change is rapidly undermining the resilience of terrestrial ecosystems and their capacity to continue providing services to the benefit of humanity and nature. Because of the importance of terrestrial ecosystems to human well-being and supporting services, decision makers throughout the world are busy creating policy responses that secure multiple development and conservation objectives, including that of supporting terrestrial ecosystem resilience in the context of climate change. This article aims to advance analyses on climate policy evaluation and planning in the area of terrestrial ecosystem resilience by discussing adaptation policy options within the ecology-economy-social nexus. The paper examines these decisions in the realm of terrestrial ecosystem resilience and evaluates the utility of a set of criteria, indicators, and assessment methods, proposed by a new conceptual multi-criteria framework for pro-development climate policy and planning developed by the United Nations Environment Programme. Potential applications of a multicriteria approach to climate policy vis-à-vis terrestrial ecosystems are then explored through two hypothetical case study examples. The paper closes with a brief discussion of the utility of the multi-criteria approach in the context of other climate policy evaluation approaches, considers lessons learned as a result of efforts to evaluate climate policy in the realm of terrestrial ecosystems, and reiterates the role of ecosystem resilience in creating sound policies and actions that support the integration of climate change and development goals.

  10. Characterization of GM events by insert knowledge adapted re-sequencing approaches

    PubMed Central

    Yang, Litao; Wang, Congmao; Holst-Jensen, Arne; Morisset, Dany; Lin, Yongjun; Zhang, Dabing

    2013-01-01

    Detection methods and data from molecular characterization of genetically modified (GM) events are needed by stakeholders such as public risk assessors and regulators. Generally, the molecular characteristics of GM events are incomprehensively revealed by current approaches and biased towards detecting transformation vector derived sequences. GM events are classified based on available knowledge of the sequences of vectors and inserts (insert knowledge). Herein we present three insert knowledge-adapted approaches for characterization of GM events (TT51-1 and T1c-19 rice as examples) based on paired-end re-sequencing, with the advantages of comprehensiveness, accuracy, and automation. The comprehensive molecular characteristics of the two rice events were revealed, with additional unintended insertions identified compared with the results from PCR and Southern blotting. Comprehensive transgene characterization of TT51-1 and T1c-19 is shown to be independent of a priori knowledge of the insert and vector sequences when employing the developed approaches. This also provides an opportunity to identify and characterize unknown GM events. PMID:24088728

  11. Data-adaptive unfolding of nuclear excitation spectra: a time-series approach

    NASA Astrophysics Data System (ADS)

    Torres Vargas, G.; Fossion, R.; Velázquez, V.; López Vieyra, J. C.

    2014-03-01

    A common problem in the statistical characterization of the excitation spectrum of quantum systems is the adequate separation of global system-dependent properties from the local fluctuations that are universal. In this process, called unfolding, the functional form to describe the global behaviour is often imposed externally on the data and can introduce arbitrarities in the statistical results. In this contribution, we show that a quantum excitation spectrum can readily be interpreted as a time series, before any previous unfolding. An advantage of the time-series approach is that specialized methods such as Singular Spectrum Analysis (SSA) can be used to perform the unfolding procedure in a data-adaptive way. We will show how SSA separates the components that describe the global properties from the components that describe the local fluctuations. The partial variances, associated with the fluctuations, follow a definite power law that distinguishes between soft and rigid excitation spectra. The data-adaptive fluctuation and trend components can be used to reconstruct customary fluctuation measures without ambiguities or artifacts introduced by an arbitrary unfolding, and also define the global level density of the excitation spectrum. The method is applied to nuclear shell-model calculations for 48Ca, using a realistic force and Two-Body Random Ensemble (TBRE) interactions. We show that the statistical results are very robust against a variation in the parameters of the SSA method.
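
    A basic SSA decomposition illustrating the data-adaptive unfolding idea is sketched below: the level sequence is embedded into a trajectory matrix, the leading singular components give the smooth trend (the global level density), and the residual carries the local fluctuations. The window length, the number of trend components and the toy spectrum are assumptions, not the shell-model data of the paper.

      import numpy as np

      def ssa_decompose(x, window, n_trend=2):
          """Basic singular spectrum analysis: return (trend, fluctuations)."""
          n = x.size
          k = n - window + 1
          X = np.column_stack([x[i:i + window] for i in range(k)])    # trajectory matrix
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          Xt = (U[:, :n_trend] * s[:n_trend]) @ Vt[:n_trend]          # rank-n_trend reconstruction
          trend = np.zeros(n)                                          # diagonal averaging back to a series
          counts = np.zeros(n)
          for j in range(k):
              trend[j:j + window] += Xt[:, j]
              counts[j:j + window] += 1
          trend /= counts
          return trend, x - trend                                      # fluctuations ~ delta_n statistic

      # toy spectrum: levels with a slowly increasing mean density
      rng = np.random.default_rng(0)
      levels = np.cumsum(rng.exponential(scale=1.0 / (1.0 + 0.02 * np.arange(500))))
      trend, fluct = ssa_decompose(levels, window=50)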

  12. Thermal genetic adaptation in the water flea Daphnia and its impact: an evolving metacommunity approach.

    PubMed

    De Meester, Luc; Van Doorslaer, Wendy; Geerts, Aurora; Orsini, Luisa; Stoks, Robby

    2011-11-01

    Genetic adaptation to temperature change can impact responses of populations and communities to global warming. Here we integrate previously published results on experimental evolution trials with follow-up experiments involving the water flea Daphnia as a model system. Our research shows (1) the capacity of natural populations of this species to genetically adapt to changes in temperature in a time span of months to years, (2) the context-dependence of these genetic changes, emphasizing the role of ecology and community composition on evolutionary responses to climatic change, and (3) the impact of micro-evolutionary changes on immigration success of preadapted genotypes. Our study involves (1) experimental evolution trials in the absence and presence of the community of competitors, predators, and parasites, (2) life-table and competition experiments to assess the fitness consequences of micro-evolution, and (3) competition experiments with putative immigrant genotypes. We use these observations as building blocks of an evolving metacommunity to understand biological responses to climatic change. This approach integrates both local and regional responses at both the population and community levels. Finally, we provide an outline of current gaps in knowledge and suggest fruitful avenues for future research.

  13. Formation tracker design of multiple mobile robots with wheel perturbations: adaptive output-feedback approach

    NASA Astrophysics Data System (ADS)

    Yoo, Sung Jin

    2016-11-01

    This paper presents a theoretical design approach for output-feedback formation tracking of multiple mobile robots under wheel perturbations. It is assumed that these perturbations are unknown and the linear and angular velocities of the robots are unmeasurable. First, adaptive state observers for estimating unmeasurable velocities of the robots are developed under the robots' kinematics and dynamics including wheel perturbation effects. Then, we derive a virtual-structure-based formation tracker scheme according to the observer dynamic surface design procedure. The main difficulty of the output-feedback control design is to manage the coupling problems between unmeasurable velocities and unknown wheel perturbation effects. These problems are avoided by using the adaptive technique and the function approximation property based on fuzzy logic systems. From the Lyapunov stability analysis, it is shown that point tracking errors of each robot and synchronisation errors for the desired formation converge to an adjustable neighbourhood of the origin, while all signals in the controlled closed-loop system are semiglobally uniformly ultimately bounded.

  14. Supportive Communication to Facilitate Chinese Patients' Adaptation to a Permanent Colostomy: A Qualitative Case Study Approach.

    PubMed

    Tao, Hui; Songwathana, Praneed; Isaramalai, Sang-Arun; Wang, Qingxi

    2016-01-01

    This study, which is a part of action research, aims to explore how supportive communication can impact individuals' adaptation to a permanent colostomy in a Chinese cultural context. Two Chinese rectal cancer patients with complexity and difficulty in living with a permanent colostomy were selected using a qualitative case study approach. The researcher (H.T.) interacted with the participants along their journey from the preoperative period until the third postoperative month after discharge via face-to-face or telephone interviews. Content analysis was applied. Supportive communication was characterized by "communication as a supportive tool," which consisted of 4 elements: respect, description, empathy, and empowerment. The nursing strategies included (1) developing a collaborative relationship with patients and families; (2) understanding patients' concerns and problems; (3) discussing potential solutions; (4) encouraging patients to take action; (5) bringing out emotional expression; (6) normalizing negative emotions; and (7) protecting hope. The findings of this study informed that supportive communication is a valuable tool for nurses to provide informational and emotional support to Chinese patients in order to enhance their adaptation to living with a permanent colostomy. Developing an operational manual to enhance supportive communication for patients with colostomy is suggested. PMID:27684635

  15. Requirements and approaches to adapting laser writers for fabrication of gray-scale masks

    NASA Astrophysics Data System (ADS)

    Korolkov, Victor P.; Shimansky, Ruslan; Poleshchuk, Alexander G.; Cherkashin, Vadim V.; Kharissov, Andrey A.; Denk, Dmitry

    2001-11-01

    Photolithography using gray-scale masks (GSMs) with multilevel transmittance is now one of the promising ways of manufacturing high-efficiency diffractive optical elements and microoptics. Such masks can be most effectively fabricated by laser or electron-beam writers on materials whose transmittance changes under the influence of high-energy beams. The basic requirements for adaptation of existing and newly developed scanning laser writers are formulated. These systems create an image by continuous movement of a writing beam along one coordinate and overlapping of adjacent written tracks along another coordinate. Several problems must be solved in GSM manufacturing: calibration of the influence of the laser beam on a recording material without transferring the gray-scale structure into photoresist; the dependence of the transmittance at the currently exposed pixel on surrounding structures generated before recording of the current track and on the character of the laser beam power modulation; and an essential increase in the volume of computed data in comparison with binary elements. The offered solutions are based on the results of investigations of materials with variable transmittance (LDW-glass, a-Si film) and take into account the specificity of diffractive blazed microstructures. The reduction of data volume for fabrication of multi-level DOEs is effectively performed using the offered vector-gradient data format, which is based on a piecewise-linear approximation of the phase profile. The presented approaches to adaptation of laser writers are realized in software and hardware, and they make it possible to solve the basic problems of manufacturing GSMs.

  16. Analytic approach to co-evolving dynamics in complex networks: dissatisfied adaptive snowdrift game

    NASA Astrophysics Data System (ADS)

    Gräser, Oliver; Xu, Chen; Hui, P. M.

    2011-08-01

    We investigate the formulation of mean-field (MF) approaches for co-evolving dynamic model systems, focusing on the accuracy and validity of different schemes in closing MF equations. Within the context of a recently introduced co-evolutionary snowdrift game in which rational adaptive actions are driven by dissatisfaction in the payoff, we introduce a method to test the validity of closure schemes and analyse the shortcomings of previous schemes. A previous scheme suitable for adaptive epidemic models is shown to be invalid for the model studied here. A binomial-style closure scheme that significantly improves upon the previous schemes is introduced. Fixed-point analysis of the MF equations not only explains the numerically observed transition between a connected state with suppressed cooperation and a highly cooperative disconnected state, but also reveals a previously undetected connected state that exhibits the unusual behaviour of decreasing cooperation as the temptation for uncooperative action drops. We propose a procedure for selecting proper initial conditions to realize the unusual state in numerical simulations. The effects of the mean number of connections that an agent carries are also studied.

  17. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever performs better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
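
    A minimal sketch of two ingredients of such a scheme, assuming linear-in-parameters sub-models: an RLS weight update and a window-error-based selection and combination step. The inverse-error weighting below is a simplification of the paper's constrained least-squares combination, and all names and settings are illustrative.

      import numpy as np

      class RLSModel:
          """One linear sub-model updated by recursive least squares."""
          def __init__(self, dim, lam=0.98, delta=100.0):
              self.w = np.zeros(dim)
              self.P = delta * np.eye(dim)
              self.lam = lam
          def predict(self, x):
              return float(self.w @ x)
          def update(self, x, y):
              Px = self.P @ x
              k = Px / (self.lam + x @ Px)                 # gain vector
              self.w += k * (y - self.w @ x)
              self.P = (self.P - np.outer(k, Px)) / self.lam

      def combine_best(models, window_errors, x, m_best=3):
          """Select the M sub-models with the smallest recent windowed error and
          combine their predictions with weights that sum to one (inverse-error here)."""
          idx = np.argsort(window_errors)[:m_best]
          preds = np.array([models[i].predict(x) for i in idx])
          w = 1.0 / (window_errors[idx] + 1e-8)
          w /= w.sum()
          return float(w @ preds)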

  18. A computer adaptive testing approach for assessing physical functioning in children and adolescents.

    PubMed

    Haley, Stephen M; Ni, Pengsheng; Fragala-Pinkham, Maria A; Skrinar, Alison M; Corzo, Deyanira

    2005-02-01

    The purpose of this article is to demonstrate: (1) the accuracy and (2) the reduction in amount of time and effort in assessing physical functioning (self-care and mobility domains) of children and adolescents using computer-adaptive testing (CAT). A CAT algorithm selects questions directly tailored to the child's ability level, based on previous responses. Using a CAT algorithm, a simulation study was used to determine the number of items necessary to approximate the score of a full-length assessment. We built simulated CAT (5-, 10-, 15-, and 20-item versions) for self-care and mobility domains and tested their accuracy in a normative sample (n=373; 190 males, 183 females; mean age 6y 11mo [SD 4y 2m], range 4mo to 14y 11mo) and a sample of children and adolescents with Pompe disease (n=26; 21 males, 5 females; mean age 6y 1mo [SD 3y 10mo], range 5mo to 14y 10mo). Results indicated that comparable score estimates (based on computer simulations) to the full-length tests can be achieved in a 20-item CAT version for all age ranges and for normative and clinical samples. No more than 13 to 16% of the items in the full-length tests were needed for any one administration. These results support further consideration of using CAT programs for accurate and efficient clinical assessments of physical functioning.
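
    A toy version of the underlying CAT loop, assuming a Rasch (one-parameter logistic) item bank, maximum-information item selection and a grid-based maximum-likelihood ability update; this is not the instrument's actual item bank or algorithm.

      import numpy as np

      def rasch_p(theta, b):
          return 1.0 / (1.0 + np.exp(-(theta - b)))

      def cat_simulation(item_bank, true_theta, n_items=20, rng=None):
          if rng is None:
              rng = np.random.default_rng(0)
          theta, asked, responses = 0.0, [], []
          for _ in range(n_items):
              p_all = rasch_p(theta, item_bank)
              info = p_all * (1 - p_all)                   # Fisher information at current theta
              info[asked] = -np.inf                        # never reuse an item
              j = int(np.argmax(info))                     # maximum-information selection
              asked.append(j)
              responses.append(rng.random() < rasch_p(true_theta, item_bank[j]))
              # grid ML update of theta (robust to all-correct / all-incorrect patterns)
              grid = np.linspace(-4, 4, 161)
              p = rasch_p(grid[:, None], item_bank[asked])
              ll = np.sum(np.where(responses, np.log(p), np.log(1 - p)), axis=1)
              theta = float(grid[np.argmax(ll)])
          return theta, asked

      bank = np.linspace(-3, 3, 100)                       # difficulties of a hypothetical 100-item bank
      print(cat_simulation(bank, true_theta=1.2)[0])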

  19. A unified association analysis approach for family and unrelated samples correcting for stratification.

    PubMed

    Zhu, Xiaofeng; Li, Shengchao; Cooper, Richard S; Elston, Robert C

    2008-02-01

    There are two common designs for association mapping of complex diseases: case-control and family-based designs. A case-control sample is more powerful to detect genetic effects than a family-based sample that contains the same numbers of affected and unaffected persons, although additional markers may be required to control for spurious association. When family and unrelated samples are available, statistical analyses are often performed in the family and unrelated samples separately, conditioning on parental information for the former, thus resulting in reduced power. In this report, we propose a unified approach that can incorporate both family and case-control samples and, provided the additional markers are available, at the same time corrects for population stratification. We apply the principal components of a marker matrix to adjust for the effect of population stratification. This unified approach makes it unnecessary to perform a conditional analysis of the family data and is more powerful than the separate analyses of unrelated and family samples, or a meta-analysis performed by combining the results of the usual separate analyses. This property is demonstrated in both a variety of simulation models and empirical data. The proposed approach can be equally applied to the analysis of both qualitative and quantitative traits.
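
    The stratification adjustment at the heart of this approach can be sketched as follows: the top principal components of a matrix of null markers are removed from both the trait and the test genotype before testing their association. This is a simplified unrelated-sample illustration, not the authors' combined family and case-control statistic.

      import numpy as np
      from scipy import stats

      def pc_adjusted_association(G, y, test_genotype, n_pcs=10):
          """G: (n, m) genotypes at null markers; y: (n,) trait; test_genotype: (n,)."""
          Gs = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)
          U, s, Vt = np.linalg.svd(Gs, full_matrices=False)
          pcs = U[:, :n_pcs]                               # axes of ancestry variation
          proj = np.eye(len(y)) - pcs @ pcs.T              # residual-maker off the PCs
          y_res = proj @ (y - y.mean())
          g_res = proj @ (test_genotype - test_genotype.mean())
          r, p = stats.pearsonr(y_res, g_res)              # association after adjustment
          return r, p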

  20. A margin based approach to determining sample sizes via tolerance bounds.

    SciTech Connect

    Newcomer, Justin T.; Freeland, Katherine Elizabeth

    2013-09-01

    This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
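
    A sketch of the idea for a normally distributed response: compute the exact one-sided normal tolerance factor from the noncentral t distribution and take the smallest n for which the tolerance bound still sits inside the requirement. The requirement, coverage and confidence values below are placeholders.

      import numpy as np
      from scipy import stats

      def tolerance_factor(n, coverage=0.95, confidence=0.90):
          """One-sided normal tolerance factor k: xbar + k*s bounds the upper
          `coverage` quantile of the population with the stated confidence."""
          zp = stats.norm.ppf(coverage)
          return stats.nct.ppf(confidence, df=n - 1, nc=zp * np.sqrt(n)) / np.sqrt(n)

      def smallest_n(est_mean, est_sd, requirement, coverage=0.95, confidence=0.90, n_max=200):
          """Smallest n whose tolerance-bound uncertainty stays below the estimated margin,
          i.e. xbar + k(n)*s remains below the upper requirement."""
          for n in range(3, n_max + 1):
              if est_mean + tolerance_factor(n, coverage, confidence) * est_sd < requirement:
                  return n
          return None

      print(smallest_n(est_mean=10.0, est_sd=1.0, requirement=13.0))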

  1. An adaptive learning approach for 3-D surface reconstruction from point clouds.

    PubMed

    Junior, Agostinho de Medeiros Brito; Neto, Adrião Duarte Dória; de Melo, Jorge Dantas; Goncalves, Luiz Marcos Garcia

    2008-06-01

    In this paper, we propose a multiresolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3-D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map (SOM). Basically, a self-adaptive scheme is used for iteratively moving vertices of an initial simple mesh in the direction of the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied leading to a more detailed surface, in a multiresolution, iterative scheme. Reconstruction was experimented on with several point sets, including different shapes and sizes. Results show generated meshes very close to object final shapes. We include measures of performance and discuss robustness.
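
    A bare-bones SOM-style vertex update (the winning vertex and its topological neighbours pulled toward each sample point with a decaying learning rate) is sketched below; the selective refinement rules and mesh operators of the paper are not reproduced, and all settings are illustrative.

      import numpy as np

      def som_fit(vertices, neighbours, points, n_epochs=50, lr0=0.5, radius0=2):
          """vertices: (V, 3) initial mesh vertices; neighbours: adjacency list per vertex;
          points: (P, 3) unorganized point cloud."""
          rng = np.random.default_rng(0)
          for epoch in range(n_epochs):
              lr = lr0 * (1.0 - epoch / n_epochs)                      # decaying learning rate
              radius = max(1, int(round(radius0 * (1.0 - epoch / n_epochs))))
              for p in points[rng.permutation(len(points))]:
                  winner = int(np.argmin(np.linalg.norm(vertices - p, axis=1)))
                  ring, frontier = {winner: 0}, [winner]               # breadth-first topological ring
                  for d in range(1, radius + 1):
                      frontier = [m for v in frontier for m in neighbours[v] if m not in ring]
                      for m in frontier:
                          ring[m] = d
                  for v, d in ring.items():
                      vertices[v] += lr * np.exp(-d) * (p - vertices[v])
          return vertices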

  2. Event-driven approach of layered multicast to network adaptation in RED-based IP networks

    NASA Astrophysics Data System (ADS)

    Nahm, Kitae; Li, Qing; Kuo, C.-C. J.

    2003-11-01

    In this work, we investigate the congestion control problem for layered video multicast in IP networks of active queue management (AQM) using a simple random early detection (RED) queue model. AQM support from networks improves the visual quality of video streaming but makes network adaptation more difficult for existing layered video multicast protocols that use the event-driven timer-based approach. We perform a simplified analysis on the response of the RED algorithm to burst traffic. The analysis shows that the primary problem lies in the weak correlation between the network feedback and the actual network congestion status when the RED queue is driven by burst traffic. Finally, a design guideline for layered multicast protocols is proposed to overcome this problem.
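
    For reference, the simple RED queue model referred to above can be sketched as follows (exponentially weighted average queue length and a linear early-drop probability between two thresholds); the count-since-last-drop refinement of full RED is omitted and the parameter values are illustrative.

      import random

      class RedQueue:
          """Basic RED drop decision: EWMA of the queue length, linear drop probability."""
          def __init__(self, min_th=5, max_th=15, max_p=0.1, wq=0.002):
              self.min_th, self.max_th, self.max_p, self.wq = min_th, max_th, max_p, wq
              self.avg = 0.0
          def on_arrival(self, queue_len):
              self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
              if self.avg < self.min_th:
                  return False                             # enqueue
              if self.avg >= self.max_th:
                  return True                              # forced drop
              p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
              return random.random() < p                   # probabilistic early drop

      q = RedQueue()
      drops = sum(q.on_arrival(queue_len=12) for _ in range(1000))
      print(drops, "early drops out of 1000 arrivals")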

  3. Spin Adapted versus Broken Symmetry Approaches in the Description of Magnetic Coupling in Heterodinuclear Complexes.

    PubMed

    Costa, Ramon; Valero, Rosendo; Reta Mañeru, Daniel; Moreira, Ibério de P R; Illas, Francesc

    2015-03-10

    The performance of a series of wave function and density functional theory based methods in predicting the magnetic coupling constant of a family of heterodinuclear magnetic complexes has been studied. For the former, the accuracy is similar to other simple cases involving homodinuclear complexes, the main limitation being a sufficient inclusion of dynamical correlation effects. Nevertheless, these series of calculations provide an appropriate benchmark for density functional theory based methods. Here, the usual broken symmetry approach provides a convenient framework to predict the magnetic coupling constants but requires deriving the appropriate mapping. At variance with simple dinuclear complexes, spin projection based techniques cannot recover the corresponding (approximate) spin adapted solution. Present results also show that current implementation of spin flip techniques leads to unphysical results. PMID:26579753

  4. Approach for Structurally Clearing an Adaptive Compliant Trailing Edge Flap for Flight

    NASA Technical Reports Server (NTRS)

    Miller, Eric J.; Lokos, William A.; Cruz, Josue; Crampton, Glen; Stephens, Craig A.; Kota, Sridhar; Ervin, Gregory; Flick, Pete

    2015-01-01

    The Adaptive Compliant Trailing Edge (ACTE) flap was flown on the National Aeronautics and Space Administration (NASA) Gulfstream GIII testbed at the NASA Armstrong Flight Research Center. This smoothly curving flap replaced the existing Fowler flaps, creating a seamless control surface. This compliant structure, developed by FlexSys Inc. in partnership with the Air Force Research Laboratory, supported NASA objectives for airframe structural noise reduction, aerodynamic efficiency, and wing weight reduction through gust load alleviation. A thorough structures airworthiness approach was developed to move this project safely to flight. A combination of industry and NASA standard practice requires various structural analyses, ground testing, and health monitoring techniques for showing an airworthy structure. This paper provides an overview of compliant structures design, the structural ground testing leading up to flight, and the flight envelope expansion and monitoring strategy. Flight data will be presented, and lessons learned along the way will be highlighted.

  5. Improving satellite-retrieved surface radiative fluxes in polar regions using a smart sampling approach

    NASA Astrophysics Data System (ADS)

    Van Tricht, Kristof; Lhermitte, Stef; Gorodetskaya, Irina V.; van Lipzig, Nicole P. M.

    2016-10-01

    The surface energy budget (SEB) of polar regions is key to understanding the polar amplification of global climate change and its worldwide consequences. However, despite a growing network of ground-based automatic weather stations that measure the radiative components of the SEB, extensive areas remain where no ground-based observations are available. Satellite remote sensing has emerged as a potential solution to retrieve components of the SEB over remote areas, with radar and lidar aboard the CloudSat and CALIPSO satellites among the first to enable estimates of surface radiative long-wave (LW) and short-wave (SW) fluxes based on active cloud observations. However, due to the small swath footprints, combined with a return cycle of 16 days, questions arise as to how CloudSat and CALIPSO observations should be optimally sampled in order to retrieve representative fluxes for a given location. Here we present a smart sampling approach to retrieve downwelling surface radiative fluxes from CloudSat and CALIPSO observations for any given land-based point-of-interest (POI) in polar regions. The method comprises a spatial correction that allows the distance between the satellite footprint and the POI to be increased in order to raise the satellite sampling frequency. Sampling frequency is enhanced on average from only two unique satellite overpasses each month for limited-distance sampling < 10 km from the POI, to 35 satellite overpasses for the smart sampling approach. This reduces the root-mean-square errors on monthly mean flux estimates compared to ground-based measurements from 23 to 10 W m-2 (LW) and from 43 to 14 W m-2 (SW). The added value of the smart sampling approach is shown to be largest on finer temporal resolutions, where limited-distance sampling suffers from severely limited sampling frequencies. Finally, the methodology is illustrated for Pine Island Glacier (Antarctica) and the Greenland northern interior. Although few ground-based observations are

  6. Building the framework for climate change adaptation in the urban areas using participatory approach: the Czech Republic experience

    NASA Astrophysics Data System (ADS)

    Emmer, Adam; Hubatová, Marie; Lupač, Miroslav; Pondělíček, Michael; Šafařík, Miroslav; Šilhánková, Vladimíra; Vačkář, David

    2016-04-01

    The Czech Republic has experienced numerous extreme hydrometeorological / climatological events such as floods (significant ones in 1997, 2002, 2010, 2013), droughts (2013, 2015), heat waves (2015) and windstorms (2007) during past decades. These events are generally attributed to the ongoing climate change and have caused loss of life and significant material damage (up to several % of GDP in some years), especially in urban areas. To initiate the adaptation process of urban areas, the main objective was to prepare a framework for creating climate change adaptation strategies of individual cities reflecting the physical-geographical and socioeconomic conditions of the Czech Republic. Three pilot cities (Hradec Králové, Žďár nad Sázavou, Dobruška) were used to optimize the entire procedure. Two sets of participatory seminars were organised in order to involve all key stakeholders (the city council, department of the environment, department of crisis management, hydrometeorological institute, local experts, ...) in the process of creation of the adaptation strategy from its early stage. Lessons learned for the framework related especially to its applicability at the local level, which is largely a matter of the understandability of the concept. Finally, this illustrative and widely applicable framework (the so-called 'road map to adaptation strategy') includes five steps: (i) analysis of existing strategies and plans on national, regional and local levels; (ii) analysing climate-change related hazards and key vulnerabilities; (iii) identification of adaptation needs, evaluation of existing adaptation capacity and formulation of future adaptation priorities; (iv) identification of limits and barriers for the adaptation (economical, environmental, ...); and (v) selection of specific types of adaptation measures reflecting identified adaptation needs and formulated adaptation priorities. Keywords: climate change adaptation (CCA); urban areas; participatory approach

  7. Application of ameliorative and adaptive approaches to revegetation of historic high altitude mining waste

    SciTech Connect

    Bellitto, M.W.; Williams, H.T.; Ward, J.N.

    1999-07-01

    High altitude, historic, gold and silver tailings deposits, which included a more recent cyanide heap leach operation, were decommissioned, detoxified, re-contoured and revegetated. Detoxification of the heap included rinsing with hydrogen peroxide, lime and ferric chloride, followed by evaporation and land application of remaining solution. Grading included the removal of solution ponds, construction of a geosynthetic/clay lined pond, heap removal and site drainage development. Ameliorative and adaptive revegetation methodologies were utilized. Revegetation was complicated by limited access, lack of topsoil, low pH and elevated metals concentrations in the tailings, and a harsh climate. Water quality sampling results for the first year following revegetation indicate that reclamation activities have contributed to a decrease in metals and sediment loading to surface waters downgradient of the site. Procedures, methodologies and results, following the first year of vegetation growth, are provided.

  8. [The short version of Critical Care Family Needs Inventory (CCFNI): adaptation and validation for a spanish sample].

    PubMed

    Gómez-Martínez, S; Arnal, R Ballester; Juliá, B Gil

    2011-01-01

    Relatives play an important role in the disease process of patients admitted to Intensive Care Units (ICU). It is therefore important to know the needs of people close to the patient in order to try to improve their adaption to a situation as difficult as an ICU admission. The aim of this study was the adaptation and validation of the short version of the Critical Care Family Needs Inventory (CCFNI) for a Spanish sample. The inventory was applied to 55 relatives of patients admitted to the ICU of the Hospital General de Castellón. After the removal of three items for different reasons, we performed an Exploratory Factor Analysis with the 11 remaining items to determine the factor structure of the questionnaire. We also made a descriptive analysis of the items, and internal consistency and construct validity were calculated through Cronbach's α and Pearson correlation coefficient respectively. The results of the principal components factor analysis using varimax rotation indicated a four-factor solution. These four factors corresponded to: medical attention to the patient, personal attention to the relatives, communication between the family and the doctor, and perceived improvements in the Unit. The short version of CCFNI showed good internal consistency for both the total scale and factors. The results suggest that the CCFNI is a suitable measure for assessing the different needs presented by the relatives of patients admitted to an Intensive Care Unit, showing adequate psychometric properties.
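
    The internal-consistency statistic mentioned above can be computed directly from the item-score matrix; the sketch below uses simulated scores that merely borrow the study's dimensions (55 respondents, 11 retained items) for illustration.

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, n_items) score matrix."""
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      rng = np.random.default_rng(0)
      latent = rng.normal(size=(55, 1))                        # hypothetical common factor
      scores = latent + rng.normal(scale=0.8, size=(55, 11))   # 11 retained items
      print(round(cronbach_alpha(scores), 2))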

  9. Particle System Based Adaptive Sampling on Spherical Parameter Space to Improve the MDL Method for Construction of Statistical Shape Models

    PubMed Central

    Zhou, Xiangrong; Hirano, Yasushi; Tachibana, Rie; Hara, Takeshi; Kido, Shoji; Fujita, Hiroshi

    2013-01-01

    Minimum description length (MDL) based group-wise registration was a state-of-the-art method to determine the corresponding points of 3D shapes for the construction of statistical shape models (SSMs). However, it suffered from the problem that determined corresponding points did not uniformly spread on original shapes, since corresponding points were obtained by uniformly sampling the aligned shape on the parameterized space of unit sphere. We proposed a particle-system based method to obtain adaptive sampling positions on the unit sphere to resolve this problem. Here, a set of particles was placed on the unit sphere to construct a particle system whose energy was related to the distortions of parameterized meshes. By minimizing this energy, each particle was moved on the unit sphere. When the system became steady, particles were treated as vertices to build a spherical mesh, which was then relaxed to slightly adjust vertices to obtain optimal sampling-positions. We used 47 cases of (left and right) lungs and 50 cases of livers, (left and right) kidneys, and spleens for evaluations. Experiments showed that the proposed method was able to resolve the problem of the original MDL method, and the proposed method performed better in the generalization and specificity tests. PMID:23861721
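
    A toy version of the particle relaxation step is sketched below, using a simple pairwise 1/r^2 repulsion on the unit sphere followed by re-projection; the paper's energy, which depends on the distortion of the parameterized meshes, is not reproduced, and all settings are illustrative.

      import numpy as np

      def relax_on_sphere(n_particles=100, n_steps=200, step=0.01, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.standard_normal((n_particles, 3))
          x /= np.linalg.norm(x, axis=1, keepdims=True)            # start on the unit sphere
          for _ in range(n_steps):
              diff = x[:, None, :] - x[None, :, :]                 # pairwise differences
              dist = np.linalg.norm(diff, axis=2) + np.eye(n_particles)
              force = (diff / dist[..., None] ** 3).sum(axis=1)    # 1/r^2 repulsion (self-term is zero)
              x += step * force
              x /= np.linalg.norm(x, axis=1, keepdims=True)        # project back onto the sphere
          return x                                                  # near-uniform sampling positions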

  10. Bioagent Sample Matching using Elemental Composition Data: an Approach to Validation

    SciTech Connect

    Velsko, S P

    2006-04-21

    Sample matching is a fundamental capability that can have high probative value in a forensic context if proper validation studies are performed. In this report we discuss the potential utility of using the elemental composition of two bioagent samples to decide if they were produced in the same batch, or by the same process. Using guidance from the recent NRC study of bullet lead analysis and other sources, we develop a basic likelihood ratio framework for evaluating the evidentiary weight of elemental analysis data for sample matching. We define an objective metric for comparing two samples, and propose a method for constructing an unbiased population of test samples. We illustrate the basic methodology with some existing data on dry Bacillus thuringiensis preparations, and outline a comprehensive plan for experimental validation of this approach.

  11. Using Past Data to Enhance Small Sample DIF Estimation: A Bayesian Approach

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Dorans, Neil J.; Grant, Mary C.; Blew, Edwin O.

    2009-01-01

    Test administrators often face the challenge of detecting differential item functioning (DIF) with samples of size smaller than that recommended by experts. A Bayesian approach can incorporate, in the form of a prior distribution, existing information on the inference problem at hand, which yields more stable estimation, especially for small…

  12. Stressful Situations at Work and in Private Life among Young Workers: An Event Sampling Approach

    ERIC Educational Resources Information Center

    Grebner, Simone; Elfering, Achim; Semmer, Norbert K.; Kaiser-Probst, Claudia; Schlapbach, Marie-Louise

    2004-01-01

    Most studies on occupational stress concentrate on chronic conditions, whereas research on stressful situations is rather sparse. Using an event-sampling approach, 80 young workers reported stressful events over 7 days (409 work-related and 127 private events). Content analysis showed the newcomers' work experiences to be similar to what is…

  13. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination.
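
    The recipe for logistic regression with a single continuous covariate can be sketched directly from the description above: form two equal groups whose log-odds differ by the slope times twice the covariate SD, choose the centre so the overall event probability is preserved, and apply a standard two-proportion sample-size formula. The alpha, power and example numbers are placeholders, and this is an illustration of the stated recipe rather than the authors' code.

      import numpy as np
      from scipy import stats, optimize

      def equiv_two_sample_n(beta, sd_x, p_overall, alpha=0.05, power=0.8):
          """Approximate total sample size for logistic regression with one continuous
          covariate, via the equivalent two-sample problem."""
          delta = beta * 2.0 * sd_x                                # log-odds difference between the groups
          expit = lambda z: 1.0 / (1.0 + np.exp(-z))
          centre = optimize.brentq(                                # preserve the overall event probability
              lambda c: 0.5 * (expit(c - delta / 2) + expit(c + delta / 2)) - p_overall, -20, 20)
          p1, p2 = expit(centre - delta / 2), expit(centre + delta / 2)
          pbar = 0.5 * (p1 + p2)
          za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
          n_group = ((za * np.sqrt(2 * pbar * (1 - pbar))
                      + zb * np.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
          return int(np.ceil(2 * n_group))                         # total n, split equally

      print(equiv_two_sample_n(beta=0.405, sd_x=1.0, p_overall=0.3))   # odds ratio ~1.5 per SD of x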

  14. High precision Lu and Hf isotope analyses of both spiked and unspiked samples: A new approach

    NASA Astrophysics Data System (ADS)

    Lapen, Thomas J.; Mahlen, Nancy J.; Johnson, Clark M.; Beard, Brian L.

    2004-01-01

    The functional form of instrumentally produced mass fractionation associated with MC-ICP-MS analysis is not accurately known and therefore cannot be fully corrected by traditional approaches of internal normalization using power, linear, or exponential mass-bias laws. We present a method for robust correction of instrumentally produced mass fractionation of both spiked and unspiked samples that can be applied to mass analysis of Hf as well as Nd, Sr, Os, etc. Correction of 176Hf/177Hf for unspiked samples follows a traditional approach of internal normalization using an exponential law, followed by normalization to a standard of known composition, such as JMC-475. For spiked samples, standards are used to characterize a linear instrumental mass-bias coefficient; the mass-bias coefficient is defined by the slope of a tie-line between measured and true values of a standard. This approximation results in identical precision and accuracy of measurements for spiked and unspiked samples (±0.005% 2σ, external reproducibility). The effects of the spike on the 176Hf/177Hf ratio and the calculation of the molar spike-sample ratio are determined by a closed-form solution modified from the double-spike approach used for Fe isotope analysis by TIMS [Johnson and Beard, 1999]. The measured 176Lu/175Lu ratios are corrected by doping the sample with Er and using the 167Er/166Er ratio to externally normalize the 176Lu/175Lu ratio using an exponential law. Finally, spike-sample equilibration is confirmed for our sample dissolution protocol through analysis of varying physical mixtures of 1 Ga garnet and hornblende, where all the data lie on a mixing-line, within error, on a 176Lu/177Hf-176Hf/177Hf diagram. Precision of 176Lu/177Hf ratios is determined to be ±0.2% (2σ) for standards and for physical mixtures of garnet and hornblende.
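
    The exponential-law normalization mentioned above can be written compactly: the mass-bias exponent is obtained from an invariant reference ratio and then applied to the ratio of interest. The snippet below is a generic, hedged sketch of that single step (isotope masses and reference values must be supplied by the user; the spike-stripping and Er-doping corrections described in the abstract are not reproduced).

        import numpy as np

        def exponential_law_correct(r_meas, m_num, m_den,
                                    ref_meas, ref_true, ref_m_num, ref_m_den):
            # exponent from the invariant reference pair (e.g. 179Hf/177Hf)
            beta = np.log(ref_true / ref_meas) / np.log(ref_m_num / ref_m_den)
            # apply the same exponent to the ratio of interest (e.g. 176Hf/177Hf)
            return r_meas * (m_num / m_den) ** beta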

  15. Adaptation of a speciation sampling cartridge for measuring ammonia flux from cattle feedlots using relaxed eddy accumulation

    NASA Astrophysics Data System (ADS)

    Baum, K. A.; Ham, J. M.

    Improved measurements of ammonia losses from cattle feedlots are needed to quantify the national NH3 emissions inventory and evaluate management techniques for reducing emissions. Speciation cartridges composed of glass honeycomb denuders and filter packs were adapted to measure gaseous NH3 and aerosol NH4+ fluxes using relaxed eddy accumulation (REA). Laboratory testing showed that a cartridge equipped with four honeycomb denuders had a total capture capacity of 1800 μg of NH3. In the field, a pair of cartridges was deployed adjacent to a sonic anemometer and an open-path gas analyzer on a mobile tower. High-speed valves were attached to the inlets of the cartridges and controlled by a datalogger so that up- and down-moving eddies were independently sampled based on the direction of the vertical wind speed and a user-defined deadband. Air flowed continuously through the cartridges even when not sampling by means of a recirculating air handling system. Eddy covariance measurements of CO2 and H2O from the sonic anemometer and open-path gas analyzer were used to determine the relaxation factor needed to compute REA-based fluxes. The REA system was field tested at the Beef Research Unit at Kansas State University in the summer and fall of 2007. Daytime NH3 emissions ranged between 68 and 127 μg m-2 s-1; fluxes tended to follow a diurnal pattern correlated with latent heat flux. Daily fluxes of NH3 were between 2.5 and 4.7 g m-2 d-1 and on average represented 38% of fed nitrogen. Aerosol NH4+ fluxes were negligible compared with NH3 emissions. An REA system designed around the high-capacity speciation cartridges can be used to measure NH3 fluxes from cattle feedlots and other strong sources. The system could be adapted to measure fluxes of other gases and aerosols.
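
    For readers unfamiliar with REA, the flux is conventionally computed as F = b * sigma_w * (C_up - C_down), with the coefficient b calibrated from a scalar measured at high rate (here CO2 or H2O by eddy covariance). The sketch below illustrates that calculation for assumed array inputs; it is an illustrative outline, not the authors' processing code.

        import numpy as np

        def rea_flux(w, c_proxy, c_up, c_dn, deadband=0.0):
            """w: vertical wind (m/s); c_proxy: fast-response scalar used to
            calibrate b; c_up, c_dn: mean slow-scalar concentrations collected in
            the updraft and downdraft reservoirs (e.g. NH3 from the cartridges)."""
            wp = w - w.mean()
            cp = c_proxy - c_proxy.mean()
            ec_flux = np.mean(wp * cp)                    # eddy covariance flux of proxy
            up, dn = wp > deadband, wp < -deadband
            b = ec_flux / (wp.std() * (c_proxy[up].mean() - c_proxy[dn].mean()))
            return b * wp.std() * (c_up - c_dn)           # REA flux of the slow scalar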

  16. Evaluation of PCR Approaches for Detection of Bartonella bacilliformis in Blood Samples

    PubMed Central

    Gomes, Cláudia; Martinez-Puchol, Sandra; Pons, Maria J.; Bazán, Jorge; Tinco, Carmen; del Valle, Juana; Ruiz, Joaquim

    2016-01-01

    Background: The lack of an effective diagnostic tool for Carrion’s disease leads to misdiagnosis, wrong treatments and perpetuation of asymptomatic carriers living in endemic areas. Conventional PCR approaches have been reported as a diagnostic technique, but their detection limits are not clear, nor is their usefulness in cases of low bacteriemia. The aim of this study was to evaluate the detection limit of 3 PCR approaches. Methodology/Principal Findings: We determined the detection limit of 3 different PCR approaches: Bartonella-specific 16S rRNA, fla and its genes. We also evaluated the viability of dry blood spots as a sample transport system. Our results show that the 16S rRNA PCR has the lowest detection limit, 5 CFU/μL, and is thus the best diagnostic PCR tool studied. Dry blood spots diminish the sensitivity of the assay. Conclusions/Significance: Of the tested PCRs, the 16S rRNA approach is the best suited for direct blood detection of acute cases of Carrion’s disease. Although its use with samples from dry blood spots eases sample transport from rural areas, a slight decrease in sensitivity was observed. The usefulness of PCR for detecting low-bacteriemic or asymptomatic carriers is doubtful, highlighting the need for new, more sensitive techniques. PMID:26959642

  17. Exceeding the limitations of watershed outlet sampling: Multi-approach analysis of synoptic streamwater chemistry data

    NASA Astrophysics Data System (ADS)

    McGlynn, Brian; Gardner, Kristin; Marshall, Lucy

    2010-05-01

    Understanding of catchment hydrology and biogeochemistry has benefitted tremendously from improvements in temporal and spatial sampling frequency. However, we are just beginning to utilize the power of high-resolution time series from a few locations combined with high spatial density sampling, often referred to as synoptic sampling. Coupling snapshot-in-time data across a landscape or stream network with high temporal frequency sampling at a few locations is not trivial and requires synthesis of methods, approaches, and research design with consideration for the strengths of empirical, statistical, and modeling methodologies. Here we present a multi-approach analysis of hydrologic and streamwater chemistry data across a 200 km2 catchment to demonstrate the insights gained from each successive analysis approach. Our analyses include but are not limited to 1) examining spatial/temporal nutrient speciation and stoichiometry, 2) analyzing dual isotopes of nitrate (15N and 18O) for source apportionment, 3) spatial linear modeling, 4) quantifying seasonality in spatial statistics/patterns, 5) quantifying sub-catchment nutrient loading and retention (budgets), 6) analyzing landscape and land cover/use, and 7) modeling nutrient source apportionment while characterizing and constraining predictive uncertainty in a Bayesian framework. We seek to demonstrate how combining multiple sampling designs with the strengths of multiple analyses can lead to new insights into processes operating across space and time at the catchment scale and help to identify the internal factors controlling behavior observed at the catchment outlet.

  18. Adaptive fuzzy approach to modeling of operational space for autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Musilek, Petr; Gupta, Madan M.

    1998-10-01

    Robots operating in an unstructured environment need a high level of modeling of their operational space in order to plan a suitable path from an initial position to a desired goal. From this perspective, operational space modeling seems crucial to ensuring a sufficient level of autonomy. In order to compile the information from various sources, we propose a fuzzy approach that evaluates each unit region on a grid map with a transition cost value. This value expresses the cost of movement over the unit region: the higher the value, the more expensive the movement through the region in terms of energy, time, danger, etc. The modeling approach proposed in this paper employs fuzzy granulation of information on various terrain features and their combination based on a fuzzy neural network. In order to adapt to changing environmental conditions, and to improve the validity of the constructed cost maps on-line, the system can be endowed with learning abilities. The learning subsystem would change the parameters of the fuzzy neural network-based decision system by reinforcements derived from comparisons of the actual cost of transition with the cost obtained from the model.

  19. An adaptive remaining energy prediction approach for lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Zhang, Chenbin; Chen, Zonghai

    2016-02-01

    With the growing number of electric vehicle (EV) applications, the functions of the battery management system (BMS) become more sophisticated. The accuracy of remaining energy estimation is critical for energy optimization and management in EVs; the state-of-energy (SoE) is therefore defined to indicate the remaining available energy of the batteries. Considering that there are inevitable accumulated errors caused by the current and voltage integration method, an adaptive SoE estimator is first established in this paper. In order to establish a reasonable battery equivalent model, a data-driven model describing the relationship between the open-circuit voltage (OCV) and the SoE is built from experimental data for the LiFePO4 battery. Moreover, the forgetting-factor recursive least-squares (RLS) method is used for parameter identification to obtain accurate model parameters. Finally, in order to analyze the robustness and the accuracy of the proposed approach, different types of dynamic current profiles are applied to the lithium-ion batteries and the performances are calculated and compared. The results indicate that the proposed approach gives robust and accurate SoE estimates under dynamic working conditions.
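
    The forgetting-factor recursive least-squares step mentioned above follows the standard textbook recursion. The sketch below shows a single update for a generic regressor vector; building the regressor (which depends on the chosen battery equivalent-circuit model) is left to the user, and the code is illustrative rather than the authors' implementation.

        import numpy as np

        def ffrls_step(theta, P, phi, y, lam=0.98):
            """One forgetting-factor RLS update.
            theta: parameter estimate, P: covariance, phi: regressor,
            y: measured output, lam: forgetting factor (0 < lam <= 1)."""
            phi = phi.reshape(-1, 1)
            th = theta.reshape(-1, 1)
            K = P @ phi / (lam + (phi.T @ P @ phi).item())   # gain vector
            err = y - (phi.T @ th).item()                    # a priori prediction error
            th = th + K * err
            P = (P - K @ phi.T @ P) / lam
            return th.ravel(), P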

  20. An error function minimization approach for the inverse problem of adaptive mirrors tuning

    NASA Astrophysics Data System (ADS)

    Vannoni, Maurizio; Yang, Fan; Siewert, Frank; Sinn, Harald

    2014-09-01

    Adaptive x-ray optics are increasingly used in synchrotron beamlines, and they will probably be considered for future high-power free-electron laser sources, such as the European XFEL now under construction in Hamburg and similar projects under discussion. These facilities will deliver a high-power x-ray beam, with a high heat load expected on the optics. For this reason, bendable mirrors are required to actively compensate the resulting wavefront distortion. In addition, the mirror can have intrinsic surface defects, such as polishing errors or mounting stresses. In order to correct the mirror surface with the high precision needed to maintain its challenging requirements, the surface is usually characterized with high-accuracy metrology to calculate the actuator pulse functions and to assess its initial shape. After that, singular value decomposition (SVD) is used to find the signals to be applied to the actuators to reach the desired surface deformation or correction. In some cases, however, this approach may not be robust enough for the needed performance. We present here a comparison between the classical SVD method and an error-function minimization based on a root-mean-square calculation. Some examples are provided, using a simulation of the European XFEL mirror design as a case study, and the performances of the algorithms are evaluated in order to reach the ultimate quality in different scenarios. The approach could easily be generalized to other situations as well.
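
    The SVD step described above amounts to applying a (possibly truncated) pseudo-inverse of the measured influence matrix to the desired surface correction; the error-function alternative replaces this with a direct root-mean-square minimization over the actuator signals. The snippet below sketches only the truncated-SVD solve, with illustrative names; it is not the authors' code.

        import numpy as np

        def actuator_signals(influence, target, n_filtered=0):
            """Solve influence @ u ~= target by SVD, optionally discarding the
            n_filtered smallest singular values to stabilize the inversion."""
            U, s, Vt = np.linalg.svd(influence, full_matrices=False)
            s_inv = np.zeros_like(s)
            keep = len(s) - n_filtered
            s_inv[:keep] = 1.0 / s[:keep]
            return Vt.T @ (s_inv * (U.T @ target))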

  1. An adaptive level set approach for incompressible two-phase flows

    SciTech Connect

    Sussman, M.; Almgren, A.S.; Bell, J.B.

    1997-04-01

    In Sussman, Smereka and Osher, a numerical method using the level set approach was formulated for solving incompressible two-phase flow with surface tension. In the level set approach, the interface is represented as the zero level set of a smooth function; this has the effect of replacing the advection of density, which has steep gradients at the interface, with the advection of the level set function, which is smooth. In addition, the interface can merge or break up with no special treatment. The authors maintain the level set function as the signed distance from the interface in order to robustly compute flows with high density ratios and stiff surface tension effects. In this work, they couple the level set scheme to an adaptive projection method for the incompressible Navier-Stokes equations, in order to achieve higher resolution of the interface with a minimum of additional expense. They present two-dimensional axisymmetric and fully three-dimensional results of air bubble and water drop computations.

  2. Experimental Approach for Deep Proteome Measurements from Small-Scale Microbial Biomass Samples.

    SciTech Connect

    Thompson, Melissa R; Chourey, Karuna; Froelich, Jennifer M.; Erickson, Brian K; Verberkmoes, Nathan C; Hettich, Robert {Bob} L

    2008-01-01

    Many methods of microbial proteome characterization require large quantities of cellular biomass (> 1-2 g) for sample preparation and protein identification. Our experimental approach differs from traditional techniques by providing the ability to identify the proteomic state of a microbe from a few milligrams of starting cellular material. The small-scale guanidine-lysis method minimizes sample loss by achieving cellular lysis and protein digestion in a single-tube experiment. For this experimental approach, the freshwater microbe Shewanella oneidensis MR-1 and the purple non-sulfur bacterium Rhodopseudomonas palustris CGA0010 were used as model organisms for technology development and evaluation. A 2-D LC-MS/MS comparison between a standard sonication lysis method and the small-scale guanidine-lysis technique demonstrates that the guanidine-lysis method is more efficient with smaller amounts of cell pellet (i.e. down to 1 mg). The described methodology would enable deep proteome measurements from a few milliliters of confluent bacterial culture. We also report a new protocol for efficient lysis of small amounts of natural biofilm samples for deep proteome measurements, which should greatly enhance the emerging field of microbial community proteomics. This straightforward sample-boiling protocol is complementary to the small-scale guanidine-lysis technique, is amenable to small sample quantities, and requires no special reagents that might complicate the MS measurements.

  3. Technical note: An improved approach to determining background aerosol concentrations with PILS sampling on aircraft

    NASA Astrophysics Data System (ADS)

    Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.

    2016-07-01

    Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight, with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate not only that multiple background filter samples are essential to attain a representative background, but also that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an over-estimation of the backgrounds and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m-3 for sub-micron aerosol nitrate (NO3-), nitrite (NO2-), ammonium (NH4+), sulfate (SO42-), potassium (K+) and calcium (Ca2+), respectively. The difference in backgrounds calculated by assuming a Gaussian distribution versus a log-normal distribution was most extreme for NH4+, resulting in a background that was 1.58× that determined from fitting a log-normal distribution.
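
    The background estimate described above can be reproduced in outline by fitting a log-normal distribution to the filter-blank concentrations and taking its mode (the peak) instead of the arithmetic mean. The sketch below uses scipy for the fit and is illustrative only.

        import numpy as np
        from scipy import stats

        def lognormal_background(blank_concentrations):
            """Background as the mode of a log-normal fit to filter blanks."""
            x = np.asarray(blank_concentrations, dtype=float)
            x = x[x > 0]
            shape, loc, scale = stats.lognorm.fit(x, floc=0)   # location fixed at 0
            mu, sigma = np.log(scale), shape
            return np.exp(mu - sigma ** 2)                     # mode of a log-normal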

  4. Computational Improvements to Quantum Wave Packet ab Initio Molecular Dynamics Using a Potential-Adapted, Time-Dependent Deterministic Sampling Technique.

    PubMed

    Jakowski, Jacek; Sumner, Isaiah; Iyengar, Srinivasan S

    2006-09-01

    In a recent publication, we introduced a computational approach to treat the simultaneous dynamics of electrons and nuclei. The method is based on a synergy between quantum wave packet dynamics and ab initio molecular dynamics. Atom-centered density-matrix propagation or Born-Oppenheimer dynamics can be used to perform ab initio dynamics. In this paper, wave packet dynamics is conducted using a three-dimensional direct product implementation of the distributed approximating functional free-propagator. A fundamental computational difficulty in this approach is that the interaction potential between the two components of the methodology needs to be calculated frequently. Here, we overcome this problem through the use of a time-dependent deterministic sampling measure that predicts, at every step of the dynamics, regions of the potential which are important. The algorithm, when combined with an on-the-fly interpolation scheme, allows us to determine the quantum dynamical interaction potential and gradients at every dynamics step in an extremely efficient manner. Numerical demonstrations of our sampling algorithm are provided through several examples arranged in a cascading level of complexity. Starting from a simple one-dimensional quantum dynamical treatment of the shared proton in [Cl-H-Cl](-) and [CH3-H-Cl](-) along with simultaneous dynamical treatment of the electrons and classical nuclei, through a complete three-dimensional treatment of the shared proton in [Cl-H-Cl](-) as well as treatment of a hydrogen atom undergoing donor-acceptor transitions in the biological enzyme, soybean lipoxygenase-1 (SLO-1), we benchmark the algorithm thoroughly. Apart from computing various error estimates, we also compare vibrational density of states, inclusive of full quantum effects from the shared proton, using a novel unified velocity-velocity, flux-flux autocorrelation function. In all cases, the potential-adapted, time-dependent sampling procedure is seen to improve the

  5. A bioinformatics approach for determining sample identity from different lanes of high-throughput sequencing data.

    PubMed

    Goldfeder, Rachel L; Parker, Stephen C J; Ajay, Subramanian S; Ozel Abaan, Hatice; Margulies, Elliott H

    2011-01-01

    The ability to generate whole genome data is rapidly becoming commoditized. For example, a mammalian sized genome (∼3Gb) can now be sequenced using approximately ten lanes on an Illumina HiSeq 2000. Since lanes from different runs are often combined, verifying that each lane in a genome's build is from the same sample is an important quality control. We sought to address this issue in a post hoc bioinformatic manner, instead of using upstream sample or "barcode" modifications. We rely on the inherent small differences between any two individuals to show that genotype concordance rates can be effectively used to test if any two lanes of HiSeq 2000 data are from the same sample. As proof of principle, we use recent data from three different human samples generated on this platform. We show that the distributions of concordance rates are non-overlapping when comparing lanes from the same sample versus lanes from different samples. Our method proves to be robust even when different numbers of reads are analyzed. Finally, we provide a straightforward method for determining the gender of any given sample. Our results suggest that examining the concordance of detected genotypes from lanes purported to be from the same sample is a relatively simple approach for confirming that combined lanes of data are of the same identity and quality.
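
    The core quality-control check described above reduces to a pairwise genotype concordance rate over sites called in both lanes; a minimal sketch (the data structures are illustrative) is:

        def concordance_rate(genotypes_a, genotypes_b):
            """Fraction of co-called sites with identical genotype calls in two lanes.
            genotypes_*: dict mapping site -> genotype call (e.g. 'A/G');
            sites missing from either lane are ignored."""
            shared = [site for site in genotypes_a if site in genotypes_b]
            if not shared:
                return float('nan')
            same = sum(genotypes_a[s] == genotypes_b[s] for s in shared)
            return same / len(shared)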

  6. A combined bottom-up and top-down approach for assessment of climate change adaptation options

    NASA Astrophysics Data System (ADS)

    Bhave, Ajay Gajanan; Mishra, Ashok; Raghuwanshi, Narendra Singh

    2014-10-01

    The focus of recent scientific research in the water sector has shifted from analysis of climate change impacts to assessment of climate change adaptation options. However, limited attention has been given to integrating bottom-up and top-down methods for assessing adaptation options. The integrated approach used in this study employs hydrological modelling to assess the effect of stakeholder-prioritized adaptation options for the Kangsabati river catchment in India. A series of 14 multi-level stakeholder consultations is used to ascertain locally relevant no-regret adaptation options using Multi-Criteria Analysis (MCA) and scenario analysis methods. A validated Water Evaluation And Planning (WEAP) model is then used to project the effects of three options on future (2021-2050) streamflow: option 1, check dams (CD); option 2, increasing forest cover (IFC); and option 3, combined CD and IFC. High-resolution (˜25 km) climatic projections from four Regional Climate Models (RCMs) and their ensemble, based on the SRES A1B scenario for the mid-21st century, are used to force the WEAP model. Results indicate that although all three adaptation options reduce streamflow in comparison with the scenario without adaptation, their magnitude, temporal pattern and effect on high and low streamflows differ. Options 2 and 3 reduce the streamflow percentage by an order of magnitude more than option 1. These characteristics affect their ability to address key adaptation requirements, and we therefore find that IFC emerges as a hydrologically suitable adaptation option for the study area. Based on the study results we also conclude that such an integrated approach is advantageous and is a valuable tool for locally relevant climate change adaptation policymaking.

  7. Adapting Rational Unified Process (RUP) approach in designing a secure e-Tendering model

    NASA Astrophysics Data System (ADS)

    Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman

    2016-08-01

    e-Tendering is the electronic processing of tender documents via the internet, allowing tenderers to publish, communicate, access, receive and submit all tender-related information and documentation online. This study aims to design an e-Tendering system using the Rational Unified Process (RUP) approach. RUP provides a disciplined approach to assigning tasks and responsibilities within the software development process. RUP has four phases that can assist researchers in adjusting the requirements of projects with different scopes, problems and sizes. RUP is characterized as a use-case driven, architecture-centered, iterative and incremental process model. However, the scope of this study covers only the Inception and Elaboration phases as steps to develop the model, and performs only three of the nine workflows (business modeling, requirements, and analysis and design). RUP has a strong focus on documents, and the activities in the Inception and Elaboration phases mainly concern the creation of diagrams and the writing of textual descriptions. The UML notation and the software program Star UML are used to support the design of e-Tendering. The e-Tendering design based on the RUP approach can contribute to e-Tendering developers and researchers in the e-Tendering domain. In addition, this study shows that RUP is one of the best system development methodologies and can be used as a research methodology in the Software Engineering domain for the secure design of an observed application. This methodology has been tested in various studies in certain domains, such as Simulation-based Decision Support, Security Requirement Engineering, Business Modeling and Secure System Requirement, and so forth. In conclusion, these studies showed that RUP is a good research methodology that can be adapted in any Software Engineering (SE) research domain that requires a few artifacts to be generated, such as use case modeling, misuse case modeling, activity

  8. The Fate of Early Experience Following Developmental Change: Longitudinal Approaches to Individual Adaptation in Childhood.

    ERIC Educational Resources Information Center

    Sroufe, L. Alan; And Others

    1990-01-01

    Examined Bowlby's proposition that early experiences and the adaptations to which they give rise influence later development, even beyond the influence of current circumstances or very recent adaptation. Groups whose adaptations were similar during the preschool years but consistently different earlier were defined and compared. Results supported…

  9. A false sense of security? Can tiered approach be trusted to accurately classify immunogenicity samples?

    PubMed

    Jaki, Thomas; Allacher, Peter; Horling, Frank

    2016-09-01

    Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucially important for monitoring the unwanted immune response. Usually a multi-tiered approach, which initially screens rapidly for positive samples that are subsequently confirmed in a separate assay, is employed for testing patient samples for ADA activity. In this manuscript we evaluate the ability of different methods to classify subjects with screening and competition-based confirmatory assays. We find that, for the overall performance of the multi-stage process, the method used for confirmation is most important, with a t-test performing best when differences are moderate to large. Moreover, we find that when differences between positive and negative samples are not sufficiently large, using a competition-based confirmation step yields poor classification of positive samples. PMID:27262992

  10. A chemodynamic approach for estimating losses of target organic chemicals from water during sample holding time

    USGS Publications Warehouse

    Capel, P.D.; Larson, S.J.

    1995-01-01

    Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimate the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.

  11. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
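
    The two-stage workflow (global search with CMA-ES, then random sampling of the low-misfit region) can be illustrated as below. The sketch assumes the third-party cma package and uses a simple random-walk acceptance rule; both are illustrative choices, not the authors' implementation.

        import numpy as np
        import cma   # assumed available; any global optimizer could play this role

        def sample_equivalence_domain(misfit, x0, sigma0, threshold,
                                      n_samples=1000, step=0.05):
            # 1) locate a low-misfit region of the model space with CMA-ES
            es = cma.CMAEvolutionStrategy(x0, sigma0)
            while not es.stop():
                candidates = es.ask()
                es.tell(candidates, [misfit(x) for x in candidates])
            current = np.array(es.result.xbest)
            # 2) random-walk sampling around it, keeping models below the threshold
            ensemble = []
            while len(ensemble) < n_samples:
                trial = current + step * np.random.randn(current.size)
                if misfit(trial) <= threshold:
                    ensemble.append(trial)
                    current = trial
            return np.array(ensemble)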

  12. Measurement of the Viscoelastic Properties of Damping Materials: Adaptation of the Wave Propagation Method to Test Samples of Short Length

    NASA Astrophysics Data System (ADS)

    LEMERLE, P.

    2002-02-01

    Wave propagation methods allow the viscoelastic damping properties of materials to be deduced from the waveform pattern of a transitory wave: the wave profile is recorded at two travel distances in a thin bar made of the medium studied. In the case of linear viscoelasticity, the characteristics of the material are deduced directly from the transfer function of the two measured pulses. From a theoretical point of view, these methods are of great interest as they bridge a gap between vibratory methods and ultrasonic methods, allowing results to be obtained in a frequency range covering one and a half to two decades in the audiometric range (20 Hz-20 kHz). However, they have not been used much in industrial applications due to the difficulty and cost involved in producing samples in the form of bars. This study shows how this type of method can be adapted to measuring the viscoelastic properties of damping materials using reduced-size, commonly shaped samples such as end-stop buffers.

  13. ADAPTIVE ANNEALED IMPORTANCE SAMPLING FOR MULTIMODAL POSTERIOR EXPLORATION AND MODEL SELECTION WITH APPLICATION TO EXTRASOLAR PLANET DETECTION

    SciTech Connect

    Liu, Bin

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, which translates the problem into finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior as closely as possible. We use the effective sample size (ESS), calculated from the IS draws, to measure the degree of approximation: the larger the ESS, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in adjusting the number of mixture components in the proposal. Brute-force methods simply preset it as a large constant, which increases the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step, to tune this number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
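
    The effective sample size used above as the approximation metric is the standard importance-sampling ESS computed from normalized weights; a minimal sketch (callable names are illustrative):

        import numpy as np

        def effective_sample_size(log_target, log_proposal, draws):
            """ESS = 1 / sum(w_norm^2) for importance weights w = target/proposal."""
            log_w = np.array([log_target(x) - log_proposal(x) for x in draws])
            w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
            w /= w.sum()
            return 1.0 / np.sum(w ** 2)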

  14. Combining in silico and in cerebro approaches for virtual screening and pose prediction in SAMPL4.

    PubMed

    Voet, Arnout R D; Kumar, Ashutosh; Berenger, Francois; Zhang, Kam Y J

    2014-04-01

    The SAMPL challenges provide an ideal opportunity for unbiased evaluation and comparison of different approaches used in computational drug design. During the fourth round of this SAMPL challenge, we participated in the virtual screening and binding pose prediction on inhibitors targeting the HIV-1 integrase enzyme. For virtual screening, we used well known and widely used in silico methods combined with personal in cerebro insights and experience. Regular docking only performed slightly better than random selection, but the performance was significantly improved upon incorporation of additional filters based on pharmacophore queries and electrostatic similarities. The best performance was achieved when logical selection was added. For the pose prediction, we utilized a similar consensus approach that amalgamated the results of the Glide-XP docking with structural knowledge and rescoring. The pose prediction results revealed that docking displayed reasonable performance in predicting the binding poses. However, prediction performance can be improved utilizing scientific experience and rescoring approaches. In both the virtual screening and pose prediction challenges, the top performance was achieved by our approaches. Here we describe the methods and strategies used in our approaches and discuss the rationale of their performances.

  15. The model adaptive space shrinkage (MASS) approach: a new method for simultaneous variable selection and outlier detection based on model population analysis.

    PubMed

    Wen, Ming; Deng, Bai-Chuan; Cao, Dong-Sheng; Yun, Yong-Huan; Yang, Rui-Han; Lu, Hong-Mei; Liang, Yi-Zeng

    2016-10-01

    Variable selection and outlier detection are important processes in chemical modeling. Usually, they affect each other, and the order in which they are performed also strongly affects the modeling results. Currently, many studies perform these processes separately and in different orders. In this study, we examined the interaction between outliers and variables and compared modeling procedures performed with different orders of variable selection and outlier detection. Because the order of outlier detection and variable selection can affect the interpretation of the model, it is difficult to decide which order is preferable when the predictabilities (prediction errors) of the different orders are relatively close. To address this problem, a simultaneous variable selection and outlier detection approach called Model Adaptive Space Shrinkage (MASS) was developed. The proposed approach is based on model population analysis (MPA). Through weighted binary matrix sampling (WBMS) from the model space, a large number of partial least squares (PLS) regression models were built, and the elite part of the models was selected to statistically reassign the weight of each variable and sample. Then, the whole process was repeated until the weights of the variables and samples converged. Finally, MASS adaptively found a high-performance model consisting of the optimized variable subset and sample subset. The combination of these two subsets can be considered the cleaned dataset used for chemical modeling. In the proposed approach, the problem of the order of variable selection and outlier detection is avoided. One near infrared spectroscopy (NIR) dataset and one quantitative structure-activity relationship (QSAR) dataset were used to test this approach. The results demonstrated that MASS is a useful method for data cleaning before building a predictive model. PMID:27435388
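
    As a rough illustration of the weighted binary matrix sampling idea (a toy sketch with arbitrary settings, not the authors' MASS code), one can repeatedly draw variable and sample subsets with adaptive inclusion weights, score PLS models by cross-validation, and reinforce the weights of variables and samples that appear in the elite models:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        def wbms_sketch(X, y, n_models=500, elite_frac=0.1, n_iter=10, n_components=2):
            n, p = X.shape
            w_var, w_samp = np.full(p, 0.5), np.full(n, 0.5)
            for _ in range(n_iter):
                masks, scores = [], []
                for _ in range(n_models):
                    v = np.random.rand(p) < w_var          # variable subset
                    s = np.random.rand(n) < w_samp         # sample subset
                    if v.sum() < n_components or s.sum() < 10:
                        continue
                    score = cross_val_score(PLSRegression(n_components=n_components),
                                            X[np.ix_(s, v)], y[s], cv=5,
                                            scoring='neg_root_mean_squared_error').mean()
                    masks.append((v, s))
                    scores.append(score)
                elite = np.argsort(scores)[-max(1, int(elite_frac * len(scores))):]
                w_var = np.mean([masks[i][0] for i in elite], axis=0)
                w_samp = np.mean([masks[i][1] for i in elite], axis=0)
            # high final weights indicate retained variables / non-outlier samples
            return w_var, w_samp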

  16. Adaptive Function in Preschoolers in Relation to Developmental Delay and Diagnosis of Autism Spectrum Disorders: Insights from a Clinical Sample

    ERIC Educational Resources Information Center

    Milne, Susan L.; McDonald, Jenny L.; Comino, Elizabeth J.

    2013-01-01

    This study aims to explore the relationship between developmental ability, autism and adaptive skills in preschoolers. Adaptive function was assessed in 152 preschoolers with autism, with and without developmental delay, and without autism, with and without developmental delay. Their overall adaptive function, measured by the general adaptive…

  17. Analytical approaches for determination of bromine in sediment core samples by X-ray fluorescence spectrometry.

    PubMed

    Pashkova, Galina V; Aisueva, Tatyana S; Finkelshtein, Alexander L; Ivanov, Egor V; Shchetnikov, Alexander A

    2016-11-01

    Bromine has been recognized as a valuable indicator for paleoclimatic studies. Wavelength dispersive X-ray fluorescence (WDXRF) and total reflection X-ray fluorescence (TXRF) methods were applied to study the bromine distributions in lake sediment cores. The conventional WDXRF technique usually requires a relatively large mass of sediment sample and a set of calibration samples. Some analytical approaches were developed to apply WDXRF to small sediment core samples in the absence of adequate calibration samples with a known Br content. The mass of sample to be analyzed was reduced to 200-300 mg, and the internal standard method with correction using fundamental parameters was developed for Br quantification. A TXRF technique based on the direct analysis of a solid suspension, using 20 mg of sediment sample and the internal standard method, was additionally tested. The accuracy of the WDXRF and TXRF techniques was assessed by comparative analysis of reference materials of sediments, soil and biological samples. In general, good agreement was achieved between the reference values and the measured values. The detection limits of Br were 1 mg/kg and 0.4 mg/kg for WDXRF and TXRF, respectively. The results of the Br determination obtained with the different XRF techniques were comparable to each other and were used for paleoclimatic reconstructions. PMID:27591627
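
    Internal-standard quantification, which both techniques above rely on, reduces to scaling the analyte intensity by that of the spiked standard; a generic sketch (the fundamental-parameter corrections used in the paper are not reproduced here) is:

        def internal_standard_conc(i_analyte, i_std, c_std, s_analyte, s_std):
            """C_analyte = C_std * (I_analyte / I_std) * (S_std / S_analyte),
            where S are relative sensitivities of the two elements."""
            return c_std * (i_analyte / i_std) * (s_std / s_analyte)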

  18. An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians

    NASA Astrophysics Data System (ADS)

    Hughes, Ciaran; Mehta, Dhagash; Wales, David J.

    2014-05-01

    Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems.

  1. An efficient approach for Mars Sample Return using emerging commercial capabilities

    NASA Astrophysics Data System (ADS)

    Gonzales, Andrew A.; Stoker, Carol R.

    2016-06-01

    Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science (Squyres, 2011 [1]). This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation for our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a desirably simpler and lower cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed either with direct techniques or by using parametric mass estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: a SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth orbit the samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle, a fueled Earth Return Vehicle, support equipment, and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using Supersonic Retropropulsion. After the collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the

  2. An Adaptive Management Approach for Summer Water Level Reductions on the Upper Mississippi River System

    USGS Publications Warehouse

    Johnson, B.L.; Barko, J.W.; Clevenstine, R.; Davis, M.; Galat, D.L.; Lubinski, S.J.; Nestler, J.M.

    2010-01-01

    The primary purpose of this report is to provide an adaptive management approach for learning more about summer water level reductions (drawdowns) as a management tool, including where and how drawdowns can be applied most effectively within the Upper Mississippi River System. The report reviews previous drawdowns conducted within the system and provides specific recommendations for learning more about the lesser known effects of drawdowns and how the outcomes can be influenced by different implementation strategies and local conditions. The knowledge gained can be used by managers to determine how best to implement drawdowns in different parts of the UMRS to help achieve management goals. The information and recommendations contained in the report are derived from results of previous drawdown projects, insights from regional disciplinary experts, and the experience of the authors in experimental design, modeling, and monitoring. Modeling is a critical part of adaptive management and can involve conceptual models, simulation models, and empirical models. In this report we present conceptual models that express current understanding regarding functioning of the UMRS as related to drawdowns and highlight interactions among key ecological components of the system. The models were developed within the constraints of drawdown timing, magnitude (depth), and spatial differences in effects (longitudinal and lateral) with attention to ecological processes affected by drawdowns. With input from regional experts we focused on the responses of vegetation, fish, mussels, other invertebrates, and birds. The conceptual models reflect current understanding about relations and interactions among system components, the expected strength of those interactions, potential responses of system components to drawdowns, likelihood of the response occurring, and key uncertainties that limit our ability to make accurate predictions of effects (Table 1, Fig. 4-10). Based on this current

  3. Comparing Stream DOC Fluxes from Sensor- and Sample-Based Approaches

    NASA Astrophysics Data System (ADS)

    Shanley, J. B.; Saraceno, J.; Aulenbach, B. T.; Mast, A.; Clow, D. W.; Hood, K.; Walker, J. F.; Murphy, S. F.; Torres-Sanchez, A.; Aiken, G.; McDowell, W. H.

    2015-12-01

    DOC transport by streamwater is a significant flux that does not consistently show up in ecosystem carbon budgets. In an effort to quantify stream DOC flux, we analyzed three to four years of high-frequency in situ fluorescent dissolved organic matter (FDOM) concentrations and turbidity measured by optical sensors at the five diverse forested and/or alpine headwater sites of the U.S. Geological Survey (USGS) Water, Energy, and Biogeochemical Budgets (WEBB) program. FDOM serves as a proxy for DOC. We also took discrete samples over a range of hydrologic conditions, using both manual weekly and automated event-based sampling. After compensating FDOM for temperature effects and turbidity interference (which was successful even at the high-turbidity Luquillo, PR site), we evaluated the DOC-FDOM relation based on discrete sample DOC analyses matched to corrected FDOM at the time of sampling. FDOM was a moderately robust predictor of DOC, with r2 from 0.60 to more than 0.95 among sites. We then formed continuous DOC time series by two independent approaches: (1) DOC predicted from FDOM; and (2) the composite method, based on modeled DOC from regression on stream discharge, season, air temperature, and time, forcing the model to observations and adjusting modeled concentrations between observations by linearly interpolated model residuals. DOC flux from each approach was then computed directly as concentration times discharge. DOC fluxes based on the sensor approach were consistently greater than those from the sample-based approach. At Loch Vale, CO (2.5 years) and Panola Mountain, GA (1 year), the difference was 5-17%. At Sleepers River, VT (3 years), preliminary differences were greater than 20%. The difference is driven by the highest events, but we are investigating these results further. We will also present comparisons from Luquillo, PR, and Allequash Creek, WI. The higher sensor-based DOC fluxes could result from their accuracy during hysteresis, which is difficult to model.
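
    The composite method summarized above can be outlined as: predict concentration from a regression model, correct it toward the observations with linearly interpolated residuals, and integrate concentration times discharge to obtain the flux. The functions below are an illustrative outline, not the WEBB processing code.

        import numpy as np

        def composite_concentration(t_model, c_model, t_obs, c_obs):
            """Force modeled concentrations to the observations by adding the
            linearly interpolated model residuals between sample times."""
            resid = c_obs - np.interp(t_obs, t_model, c_model)
            return c_model + np.interp(t_model, t_obs, resid)

        def doc_load(conc, discharge, dt_seconds):
            """Total DOC export: sum of concentration * discharge * time step."""
            return np.sum(conc * discharge * dt_seconds)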

  4. An integrated stochastic approach to the assessment of agricultural water demand and adaptation to water scarcity

    NASA Astrophysics Data System (ADS)

    Foster, T.; Butler, A. P.; McIntyre, N.

    2012-12-01

    Increasing water demands from growing populations, coupled with changing water availability due, for example, to climate change, are likely to increase water scarcity. Agriculture will be exposed to risk because reliable water supplies are an important input to crop production. Assessing the efficiency of agricultural adaptation options requires a sound understanding of the relationship between crop growth and water application. However, most water resource planning models quantify agricultural water demand using highly simplified, temporally lumped estimated crop-water production functions (CWPFs). Such CWPFs fail to capture the biophysical complexities of crop-water relations and mischaracterise farmers' ability to respond to water scarcity. Application of these models in policy analyses will be ineffective and may lead to unsustainable water policies. Crop simulation models provide an alternative means of defining the complex nature of the CWPF. Here we develop a daily water-limited crop model for this purpose. The model is based on the approach used in the FAO's AquaCrop model, balancing biophysical and computational complexities. We further develop the model by incorporating improved simulation routines to calculate the distribution of water through the soil profile. Consequently we obtain a more realistic representation of the soil water balance, with concurrent improvements in the prediction of water-limited yield. We introduce a methodology to utilise this model for the generation of stochastic crop-water production functions (SCWPFs). This is achieved by running the model iteratively with both time series of climatic data and variable quantities of irrigation water, employing a realistic rule-based approach to farm irrigation scheduling. This methodology improves the representation of potential crop yields, capturing both the variable effects of water deficits on crop yield and the stochastic nature of the CWPF due to climatic variability. Application to

  5. A new calculation method adapted to the experimental conditions for determining samples γ-activities induced by 14 MeV neutrons

    NASA Astrophysics Data System (ADS)

    Rzama, A.; Erramli, H.; Misdaq, M. A.

    1994-09-01

    Induced gamma-activities of different disk-shaped samples and standards irradiated with 14 MeV neutrons have been determined by using a Monte Carlo calculation method adapted to the experimental conditions. The self-absorption of the multienergetic emitted gamma rays has been taken into account in the final sample activities. The influence of the different activation parameters has been studied. Na, K, Cl and P contents in biological (red beet) samples have been determined.

  6. Stability evaluation and improvement of adaptive optics systems by using the Lyapunov stability approach

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Chen, Tao; Liu, Xin-yue; Lin, Xu-dong; Yang, Xiao-xia; Li, Hong-zhuang

    2016-02-01

    In this research, investigations of the closed-loop control stability of adaptive optics systems are conducted by using the Lyapunov approach. As a direct metric of control stability, the error propagator includes the effects of both the integral gain and the influence matrix and is effective for control-stability evaluation. An experimental 97-element AO system is developed for the control-stability investigation, and the Southwell sensor-actuator configuration rather than the Fried geometry is adopted so as to suppress the potential waffle mode. Because filtering out small singular values of the influence matrix can be used to improve the control stability, the effect of the influence matrix and the effect of the integral gain are considered as a whole by using the error propagator. The control stability of the AO system is then evaluated for varying integral gains and numbers of filtered-out singular values. An analysis of these evaluations leads to the conclusion that the control stability can be improved by filtering out more singular values of the influence matrix when the integral gain is high. In other words, the error propagator is useful for trading off the bandwidth error and the fitting error of AO systems in a control-stability approach. Finally, a performance measurement of the experimental AO system is conducted when the 13 smallest singular values of the influence matrix are filtered out, and the results show that filtering out a small fraction of the singular values has a minor influence on the performance of this AO system.

  7. The role of idiotypic interactions in the adaptive immune system: a belief-propagation approach

    NASA Astrophysics Data System (ADS)

    Bartolucci, Silvia; Mozeika, Alexander; Annibale, Alessia

    2016-08-01

    In this work we use belief-propagation techniques to study the equilibrium behaviour of a minimal model for the immune system comprising interacting T and B clones. We investigate the effect of the so-called idiotypic interactions among complementary B clones on the system’s activation. Our results show that B-B interactions increase the system’s resilience to noise, making clonal activation more stable, while increasing the cross-talk between different clones. We derive analytically the noise level at which a B clone gets activated, in the absence of cross-talk, and find that this increases with the strength of idiotypic interactions and with the number of T cells sending signals to the B clones. We also derive, analytically and numerically, via population dynamics, the critical line where clonal cross-talk arises. Our approach allows us to derive the B clone size distribution, which can be experimentally measured and gives important information about the adaptive immune system response to antigens and vaccination.

  9. A New Controller for PMSM Servo Drive Based on the Sliding Mode Approach with Parameter Adaptation

    NASA Astrophysics Data System (ADS)

    Gjini, Orges; Kaneko, Takayuki; Ohsawa, Hiroshi

    A novel controller based on the Sliding Mode (SM) approach is designed for controlling a permanent magnet synchronous motor (PMSM) in a servo drive. After analyzing the classical SM controller, changes are made in the controller design such that its performance is substantially improved. To improve the controller performance in steady state (zero error positioning), an integral block is added to the controller, resulting in a new controller configuration, which we call the Sliding Mode Integral (SMI) controller. The new controller is tuned based on the results from parameter identification of the motor and the working machine. To cope with model parameter variations, especially unpredictable friction changes, gain scheduling and fuzzy-based adaptive techniques are used in the control algorithm. Experiments and simulations are carried out, and their results show high control performance. The new controller offers very good tracking; it is highly robust, reaches the final position very fast and has a large stall torque. Furthermore, the application of the SM approach reduces the system order by one. For comparison, the new controller's performance is compared with that of a PI controller, and the experimental results clearly demonstrate the superiority of the proposed controller.
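    As a rough illustration of the integral-augmented sliding-surface idea, the sketch below (illustrative parameters, a simplified inertia-plus-friction plant rather than a full PMSM model) simulates one common "sliding mode plus integral" position controller with a smoothed switching term to limit chattering.

```python
# Illustrative sketch, not the authors' SMI controller: sliding-mode position control with
# an integral term in the sliding surface, applied to an inertia + viscous-friction plant.
import numpy as np

J, b = 0.01, 0.05                  # assumed inertia [kg m^2] and friction coefficient
lam, ki, K = 10.0, 20.0, 1.0       # surface slope, integral gain, switching gain (illustrative)
dt, T, theta_ref = 1e-4, 2.0, 1.0  # time step, horizon, step position reference [rad]

theta = omega = e_int = 0.0
for _ in range(int(T / dt)):
    e = theta_ref - theta
    e_dot = -omega                           # reference is constant
    e_int += e * dt
    s = e_dot + lam * e + ki * e_int         # integral-augmented sliding surface
    u = K * np.tanh(s / 0.05)                # smoothed switching law (boundary layer)
    omega += dt * (u - b * omega) / J        # plant: J * domega/dt = u - b * omega
    theta += dt * omega

print(f"final position: {theta:.3f} rad (reference {theta_ref} rad)")
```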

  10. Trickle-down evolution: an approach to getting major evolutionary adaptive changes into textbooks and curricula.

    PubMed

    Padian, Kevin

    2008-08-01

    Although contemporary high school and college textbooks of biology generally cover the principles and data of microevolution (genetic and populational change) and speciation rather well, coverage of what is known of the major changes in evolution (macroevolution), and how the evidence is understood is generally poor to nonexistent. It is critical to improve this because acceptance of evolution by the American public rests on the understanding of how we know what we know about the emergence of major new taxonomic groups, and about their adaptations, behaviors, and ecologies in geologic time. An efficient approach to this problem is to improve the illustrations in college textbooks to show the consilience of different lines of fossil, morphological, and molecular evidence mapped on phylogenies. Such "evograms" will markedly improve traditional illustrations of phylogenies, "menageries," and "companatomies." If "evograms" are installed at the college level, the basic principles and evidence of macroevolution will be more likely taught in K-12, thus providing an essential missing piece in biological education. PMID:21669782

  11. Estimating oxygen consumption from heart rate using adaptive neuro-fuzzy inference system and analytical approaches.

    PubMed

    Kolus, Ahmet; Dubé, Philippe-Antoine; Imbeau, Daniel; Labib, Richard; Dubeau, Denise

    2014-11-01

    New approaches based on an adaptive neuro-fuzzy inference system (ANFIS) and an analytical method were developed to estimate oxygen consumption (VO2) from heart rate (HR) measurements. Thirty-five participants performed Meyer and Flenghi's step-test (eight of whom performed regeneration release work), during which heart rate and oxygen consumption were measured. Two individualized models and a General ANFIS model that does not require individual calibration were developed. Results indicated the superior precision achieved with individualized ANFIS modelling (RMSE = 1.0 and 2.8 ml/kg min in laboratory and field, respectively). The analytical model outperformed the traditional linear calibration and Flex-HR methods with field data. The General ANFIS model's estimates of VO2 were not significantly different from actual field VO2 measurements (RMSE = 3.5 ml/kg min). With its ease of use and low implementation cost, the General ANFIS model shows potential to replace any of the traditional individualized methods for VO2 estimation from HR data collected in the field. PMID:24793823

  12. Comparing catchment sediment fingerprinting procedures using an auto-evaluation approach with virtual sample mixtures.

    PubMed

    Palazón, Leticia; Latorre, Borja; Gaspar, Leticia; Blake, William H; Smith, Hugh G; Navas, Ana

    2015-11-01

    Information on sediment sources in river catchments is required for effective sediment control strategies, to understand sediment, nutrient and pollutant transport, and for developing soil erosion management plans. Sediment fingerprinting procedures are employed to quantify sediment source contributions and have become a widely used tool. As fingerprinting procedures are naturally variable and locally dependent, there are different applications of the procedure. Here, the auto-evaluation of different fingerprinting procedures using virtual sample mixtures is proposed to support the selection of the fingerprinting procedure with the best capacity for source discrimination and apportionment. Surface samples from four land uses from a Central Spanish Pyrenean catchment were used i) as sources to generate the virtual sample mixtures and ii) to characterise the sources for the fingerprinting procedures. The auto-evaluation approach involved comparing fingerprinting procedures based on four optimum composite fingerprints selected by three statistical tests, three source characterisations (mean, median and corrected mean) and two types of objective functions for the mixing model. A total of 24 fingerprinting procedures were assessed by this new approach; they were solved by Monte Carlo simulations and compared using the root mean squared error (RMSE) between known and assessed source ascriptions for the virtual sample mixtures. It was found that the source ascriptions with the highest accuracy were achieved using the corrected mean source characterisations for the composite fingerprints selected by the Kruskal-Wallis H-test and principal components analysis. Based on the RMSE results, high goodness of fit (GOF) values were not always indicative of accurate source apportionment results, and care should be taken when using GOF to assess mixing model performance. The proposed approach to test different fingerprinting procedures using virtual sample mixtures provides an
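    As a schematic illustration of the auto-evaluation idea (entirely synthetic tracer values rather than the Pyrenean data, and a simple constrained least-squares mixing model in place of the study's Monte Carlo procedures), the sketch below builds virtual mixtures with known source proportions, un-mixes them, and scores the procedure by the RMSE between known and estimated proportions.

```python
# Minimal sketch of the virtual-mixture evaluation idea (synthetic numbers, not the study's data).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(42)
n_sources, n_tracers = 4, 8
source_means = rng.uniform(5, 50, size=(n_sources, n_tracers))   # hypothetical tracer means

def make_virtual_mixture(props, noise_sd=0.5):
    return props @ source_means + rng.normal(0, noise_sd, n_tracers)

def unmix(mixture, weight=1e3):
    # Non-negative least squares with an extra heavily weighted row enforcing sum(p) = 1.
    A = np.vstack([source_means.T, weight * np.ones(n_sources)])
    b = np.concatenate([mixture, [weight]])
    p, _ = nnls(A, b)
    return p / p.sum()

true_props = rng.dirichlet(np.ones(n_sources), size=20)           # 20 virtual mixtures
est_props = np.array([unmix(make_virtual_mixture(p)) for p in true_props])
rmse = np.sqrt(np.mean((true_props - est_props) ** 2))
print(f"RMSE between known and estimated source proportions: {rmse:.3f}")
```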

  13. Overcoming the matched-sample bottleneck: an orthogonal approach to integrate omic data

    PubMed Central

    Nguyen, Tin; Diaz, Diana; Tagett, Rebecca; Draghici, Sorin

    2016-01-01

    MicroRNAs (miRNAs) are small non-coding RNA molecules whose primary function is to regulate the expression of gene products via hybridization to mRNA transcripts, resulting in suppression of translation or mRNA degradation. Although miRNAs have been implicated in complex diseases, including cancer, their impact on distinct biological pathways and phenotypes is largely unknown. Current integration approaches require sample-matched miRNA/mRNA datasets, resulting in limited applicability in practice. Since these approaches cannot integrate heterogeneous information available across independent experiments, they neither account for bias inherent in individual studies, nor do they benefit from increased sample size. Here we present a novel framework able to integrate miRNA and mRNA data (vertical data integration) available in independent studies (horizontal meta-analysis) allowing for a comprehensive analysis of the given phenotypes. To demonstrate the utility of our method, we conducted a meta-analysis of pancreatic and colorectal cancer, using 1,471 samples from 15 mRNA and 14 miRNA expression datasets. Our two-dimensional data integration approach greatly increases the power of statistical analysis and correctly identifies pathways known to be implicated in the phenotypes. The proposed framework is sufficiently general to integrate other types of data obtained from high-throughput assays. PMID:27403564

  14. Overcoming the matched-sample bottleneck: an orthogonal approach to integrate omic data.

    PubMed

    Nguyen, Tin; Diaz, Diana; Tagett, Rebecca; Draghici, Sorin

    2016-01-01

    MicroRNAs (miRNAs) are small non-coding RNA molecules whose primary function is to regulate the expression of gene products via hybridization to mRNA transcripts, resulting in suppression of translation or mRNA degradation. Although miRNAs have been implicated in complex diseases, including cancer, their impact on distinct biological pathways and phenotypes is largely unknown. Current integration approaches require sample-matched miRNA/mRNA datasets, resulting in limited applicability in practice. Since these approaches cannot integrate heterogeneous information available across independent experiments, they neither account for bias inherent in individual studies, nor do they benefit from increased sample size. Here we present a novel framework able to integrate miRNA and mRNA data (vertical data integration) available in independent studies (horizontal meta-analysis) allowing for a comprehensive analysis of the given phenotypes. To demonstrate the utility of our method, we conducted a meta-analysis of pancreatic and colorectal cancer, using 1,471 samples from 15 mRNA and 14 miRNA expression datasets. Our two-dimensional data integration approach greatly increases the power of statistical analysis and correctly identifies pathways known to be implicated in the phenotypes. The proposed framework is sufficiently general to integrate other types of data obtained from high-throughput assays. PMID:27403564

  15. Depth Analogy: Data-Driven Approach for Single Image Depth Estimation Using Gradient Samples.

    PubMed

    Choi, Sunghwan; Min, Dongbo; Ham, Bumsub; Kim, Youngjung; Oh, Changjae; Sohn, Kwanghoon

    2015-12-01

    Inferring scene depth from a single monocular image is a highly ill-posed problem in computer vision. This paper presents a new gradient-domain approach, called depth analogy, that makes use of analogy as a means for synthesizing a target depth field, when a collection of RGB-D image pairs is given as training data. Specifically, the proposed method employs a non-parametric learning process that creates an analogous depth field by sampling reliable depth gradients using visual correspondence established on training image pairs. Unlike existing data-driven approaches that directly select depth values from training data, our framework transfers depth gradients as reconstruction cues, which are then integrated by the Poisson reconstruction. The performance of most conventional approaches relies heavily on the training RGB-D data used in the process, and such a dependency severely degenerates the quality of reconstructed depth maps when the desired depth distribution of an input image is quite different from that of the training data, e.g., outdoor versus indoor scenes. Our key observation is that using depth gradients in the reconstruction is less sensitive to scene characteristics, providing better cues for depth recovery. Thus, our gradient-domain approach can support a great variety of training range datasets that involve substantial appearance and geometric variations. The experimental results demonstrate that our (depth) gradient-domain approach outperforms existing data-driven approaches directly working on depth domain, even when only uncorrelated training datasets are available. PMID:26529766
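    The final integration step mentioned above can be illustrated compactly; the sketch below (synthetic depth field, dense least-squares solve in place of a fast Poisson solver) recovers a depth map from a field of "transferred" finite-difference gradients, anchored at one pixel to fix the unknown constant. The analogy-based gradient transfer itself is not reproduced here.

```python
# Minimal sketch of least-squares ("Poisson") integration of a gradient field.
import numpy as np

rng = np.random.default_rng(0)
H, W = 16, 16
true_depth = np.cumsum(rng.normal(size=(H, W)), axis=1)     # synthetic depth field
gx = np.diff(true_depth, axis=1)                            # stand-ins for transferred gradients
gy = np.diff(true_depth, axis=0)

idx = np.arange(H * W).reshape(H, W)
n_eq = H * (W - 1) + (H - 1) * W + 1
A = np.zeros((n_eq, H * W))
b = np.zeros(n_eq)
r = 0
for i in range(H):                       # horizontal finite-difference equations
    for j in range(W - 1):
        A[r, idx[i, j + 1]], A[r, idx[i, j]], b[r] = 1.0, -1.0, gx[i, j]
        r += 1
for i in range(H - 1):                   # vertical finite-difference equations
    for j in range(W):
        A[r, idx[i + 1, j]], A[r, idx[i, j]], b[r] = 1.0, -1.0, gy[i, j]
        r += 1
A[r, idx[0, 0]], b[r] = 1.0, true_depth[0, 0]   # anchor removes the constant ambiguity

depth = np.linalg.lstsq(A, b, rcond=None)[0].reshape(H, W)
print("max reconstruction error:", float(np.abs(depth - true_depth).max()))
```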

  16. An Adaptive Approach to Family Intervention: Linking Engagement in Family-Centered Intervention to Reductions in Adolescent Problem Behavior

    ERIC Educational Resources Information Center

    Connell, Arin M.; Dishion, Thomas J.; Yasui, Miwa; Kavanagh, Kathryn

    2007-01-01

    This study used Complier Average Causal Effect analysis (CACE; see G. Imbens & D. Rubin, 1997) to examine the impact of an adaptive approach to family intervention in the public schools on rates of substance use and antisocial behavior among students ages 11-17. Students were randomly assigned to a family-centered intervention (N = 998) in 6th…

  17. Approach to molecular characterization of partially and completely untyped samples in an Indian rotavirus surveillance program.

    PubMed

    Babji, Sudhir; Arumugam, Rajesh; Sarvanabhavan, Anuradha; Gentsch, Jon R; Kang, Gagandeep

    2014-08-11

    Surveillance networks for rotavirus document the burden of the disease using the proportion of children hospitalized with gastroenteritis positive for rotavirus by enzyme immunoassay. They also describe genotypes of circulating viruses by polymerase chain reaction for the VP7 and VP4 genes, which determine G and P types, respectively. A proportion of samples cannot be genotyped based on initial testing, and laboratories need to assess further testing strategies based on resources and feasibility. To 365 samples obtained from an Indian rotavirus strain surveillance program, we applied an approach for determining the G and P types of antigen-positive samples that had initially failed to type with the standard laboratory protocol. Fifty-eight samples (19%) were negative for the VP6 gene, indicating that the antigen test was likely to have been false positive. Alternative extraction and priming approaches resulted in the identification of G and P types for 264 strains. The identity of one strain was determined by sequencing the first-round amplicons. Thirty-five strains were partially typed and seven strains could not be typed at all. The distribution of G and P types among the strains that had initially failed to type did not differ, except for one strain, from that in strains typed using the standard laboratory protocol.

  18. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach to improving the convergence of Monte Carlo (MC) simulations of molecular systems belonging to complex energetic landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for the proper convergence in phase space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model for melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies can provide interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing new, more efficient MC moves.
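    A toy version of the dynamic-allocation idea is sketched below (a 1-D double-well potential with two move types, not the polyethylene model, and a simple efficiency score based on accepted squared displacement in place of the paper's evolutionary algorithm over parallel runs). In production the adaptation should be frozen, or spread over parallel simulations as in the paper, so that detailed balance is preserved.

```python
# Illustrative sketch: reweight Monte Carlo move frequencies by their measured efficiency.
import numpy as np

rng = np.random.default_rng(0)
beta = 2.0
energy = lambda x: (x**2 - 1.0) ** 2          # double-well potential

step_sizes = np.array([0.1, 2.0])             # "small" and "large" trial moves
weights = np.ones(2)                          # current move frequencies (unnormalised)
efficiency = np.zeros(2)                      # accumulated accepted squared displacement
attempts = np.ones(2)

x = -1.0
for sweep in range(50):
    probs = weights / weights.sum()
    for _ in range(1000):
        m = rng.choice(2, p=probs)
        attempts[m] += 1
        x_new = x + step_sizes[m] * rng.normal()
        de = energy(x_new) - energy(x)
        if de <= 0 or rng.random() < np.exp(-beta * de):   # Metropolis acceptance
            efficiency[m] += (x_new - x) ** 2
            x = x_new
    # Periodically reallocate move frequencies in proportion to measured efficiency.
    weights = 0.05 + efficiency / attempts    # small floor keeps every move alive
print("final move frequencies:", np.round(weights / weights.sum(), 3))
```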

  19. Enhanced contrast separation in scanning electron microscopes via a suspended-thin sample approach.

    PubMed

    Ji, Yuan; Wang, Li; Guo, Zhenxi; Wei, Bin; Zhao, Jie; Wang, Xiaodong; Zhang, Yinqi; Sui, Manling; Han, Xiaodong

    2014-11-01

    A suspended-thin-sample (STS) approach for signal selection and contrast separation is developed in scanning electron microscopes with commonly used primary beam energies and traditional detectors. Topography contrast, electron channeling contrast and composition contrast are separated and largely enhanced from suspended thin samples of several hundred nanometers in thickness, which is less than the escape depth of backscattered electrons. This imaging technique enables the detection of relatively pure secondary electron and elastic backscattered electron signals, while suppressing multiple inelastic scattering effects. The resulting contrast features differ from those of bulk samples, which are largely mixed with inelastic scattering effects. The STS imaging concept and method can be expected to find further applications in distinguishing materials of nanostructures, multilayers, compounds and composites, as well as in SEM-based electron backscatter diffraction, cathodoluminescence, and x-ray microanalysis.

  20. An approach to area sampling and analysis for total isocyanates in workplace air.

    PubMed

    Key-Schwartz, R J; Tucker, S P

    1999-01-01

    An approach to sampling and analysis for total isocyanates (monomer plus any associated oligomers of a given isocyanate) in workplace air has been developed and evaluated. Based on a method developed by the Occupational Health Laboratory, Ontario Ministry of Labour, Ontario, Canada, isocyanates present in air are derivatized with a fluorescent reagent, tryptamine, in an impinger and subsequently analyzed via high-performance liquid chromatography (HPLC) with fluorescence detection. Excitation and emission wavelengths are set at 275 and 320 nm, respectively. A modification to the Ontario method was made in the replacement of the recommended impinger solvents (acetonitrile and 2,2,4-trimethylpentane) with dimethyl sulfoxide (DMSO). DMSO has the advantages of being compatible with reversed-phase HPLC and not evaporating during sampling, as do the more volatile solvents used in the Ontario method. DMSO also may dissolve aerosol particles more efficiently during sampling than relatively nonpolar solvents. Several formulations containing diisocyanate prepolymers have been tested with this method in the laboratory. This method has been issued as National Institute for Occupational Safety and Health (NIOSH) Method 5522 in the first supplement to the fourth edition of the NIOSH Manual of Analytical Methods. This method is recommended for area sampling only due to possible hazards from contact with DMSO solutions containing isocyanate derivatives. The limits of detection are 0.1 microgram/sample for 2,4-toluene diisocyanate, 0.2 microgram/sample for 2,6-toluene diisocyanate, 0.3 microgram/sample for methylene bisphenyl diisocyanate, and 0.2 microgram/sample for 1,6-hexamethylene diisocyanate.

  1. A comprehensive approach to the determination of two benzimidazoles in environmental samples.

    PubMed

    Wagil, Marta; Maszkowska, Joanna; Białk-Bielińska, Anna; Stepnowski, Piotr; Kumirska, Jolanta

    2015-01-01

    Among the various pharmaceuticals regarded as emerging pollutants, benzimidazoles--represented by flubendazole and fenbendazole--are of particular concern because of their large-scale use in veterinary medicine and their health effects on aquatic organisms. For this reason, it is essential to have reliable analytical methods which can be used to simultaneously monitor their appearance in environmental matrices such as water, sediment and tissue samples. To date, however, such methods relating to these three matrices have not been available. In this paper we present a comprehensive approach to the determination of both drugs in the above-mentioned matrices using liquid chromatography-ion trap mass spectrometry (LC-MS/MS). Special attention was paid to the sample preparation step. The optimal extraction methods were further validated by experiments with spiked water, sediment and fish tissue samples. Matrix effects were established. The following absolute recoveries of flubendazole and fenbendazole were achieved: 96.2% and 95.4% from waters, 103.4% and 98.3% from sediments, and 98.3% and 97.6% from fish tissue samples, respectively. Validation of the LC-MS/MS methods enabled flubendazole and fenbendazole to be determined with method detection limits of 1.6 ng L(-1) and 1.7 ng L(-1) in water samples, 0.3 ng g(-1) for both compounds in sediment samples, and 3.3 ng g(-1) and 3.5 ng g(-1) in tissue samples, respectively. The proposed methods were successfully used for analysing selected pharmaceuticals in real samples collected in northern Poland. These are the first data on the environmental concentrations of the target compounds in Poland.

  2. Bioassessment Tools for Stony Corals: Monitoring Approaches and Proposed Sampling Plan for the U.S. Virgin Islands

    EPA Science Inventory

    This document describes three general approaches to the design of a sampling plan for biological monitoring of coral reefs. Status assessment, trend detection and targeted monitoring each require a different approach to site selection and statistical analysis. For status assessm...

  3. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
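    A compact sketch of the first study's simulation logic is given below (synthetic binary code profiles from two latent groups, sample size 50; latent class analysis is omitted and the adjusted Rand index stands in for the article's accuracy measure).

```python
# Minimal sketch: recover latent groups from binary "code" profiles with two clustering methods.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_per_group, n_codes = 25, 12                       # sample of 50, as in the simulations
p_group = np.array([[0.8] * 6 + [0.2] * 6,          # group-specific code probabilities
                    [0.2] * 6 + [0.8] * 6])
labels = np.repeat([0, 1], n_per_group)
X = rng.binomial(1, p_group[labels]).astype(float)  # 50 x 12 binary coding matrix

for name, model in [("k-means", KMeans(n_clusters=2, n_init=10, random_state=0)),
                    ("hierarchical", AgglomerativeClustering(n_clusters=2))]:
    pred = model.fit_predict(X)
    print(f"{name:12s} adjusted Rand index: {adjusted_rand_score(labels, pred):.2f}")
```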

  4. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    PubMed Central

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  5. Protein phosphorylation analysis in archival clinical cancer samples by shotgun and targeted proteomics approaches.

    PubMed

    Gámez-Pozo, Angelo; Sánchez-Navarro, Iker; Calvo, Enrique; Díaz, Esther; Miguel-Martín, María; López, Rocío; Agulló, Teresa; Camafeita, Emilio; Espinosa, Enrique; López, Juan Antonio; Nistal, Manuel; Vara, Juan Ángel Fresno

    2011-08-01

    Protein phosphorylation affects most eukaryotic cellular processes and its deregulation is considered a hallmark of cancer and other diseases. Phosphoproteomics may enable monitoring of altered signaling pathways as a means of stratifying tumors and facilitating the discovery of new drugs. Unfortunately, the development of molecular tests for clinical use is constrained by the limited availability of fresh frozen, clinically annotated samples. Here we report phosphopeptide analysis in human archival formalin-fixed, paraffin-embedded (FFPE) cancer samples based on immobilized metal affinity chromatography followed by liquid chromatography coupled with tandem mass spectrometry and selected reaction monitoring techniques. Our results indicate the equivalence of detectable phosphorylation rates in archival FFPE and fresh frozen tissues. Moreover, we demonstrate the applicability of targeted assays for phosphopeptide analysis in clinical archival FFPE samples, using an experimental workflow suitable for processing and analyzing large sample series. This work paves the way for the application of shotgun and targeted phosphoproteomics approaches in clinically relevant studies using archival clinical samples.

  6. Protein phosphorylation analysis in archival clinical cancer samples by shotgun and targeted proteomics approaches.

    PubMed

    Gámez-Pozo, Angelo; Sánchez-Navarro, Iker; Calvo, Enrique; Díaz, Esther; Miguel-Martín, María; López, Rocío; Agulló, Teresa; Camafeita, Emilio; Espinosa, Enrique; López, Juan Antonio; Nistal, Manuel; Vara, Juan Ángel Fresno

    2011-08-01

    Protein phosphorylation affects most eukaryotic cellular processes and its deregulation is considered a hallmark of cancer and other diseases. Phosphoproteomics may enable monitoring of altered signaling pathways as a means of stratifying tumors and facilitating the discovery of new drugs. Unfortunately, the development of molecular tests for clinical use is constrained by the limited availability of fresh frozen, clinically annotated samples. Here we report phosphopeptide analysis in human archival formalin-fixed, paraffin-embedded (FFPE) cancer samples based on immobilized metal affinity chromatography followed by liquid chromatography coupled with tandem mass spectrometry and selected reaction monitoring techniques. Our results indicate the equivalence of detectable phosphorylation rates in archival FFPE and fresh frozen tissues. Moreover, we demonstrate the applicability of targeted assays for phosphopeptide analysis in clinical archival FFPE samples, using an experimental workflow suitable for processing and analyzing large sample series. This work paves the way for the application of shotgun and targeted phosphoproteomics approaches in clinically relevant studies using archival clinical samples. PMID:21617801

  7. A novel approach to process carbonate samples for radiocarbon measurements with helium carrier gas

    NASA Astrophysics Data System (ADS)

    Wacker, L.; Fülöp, R.-H.; Hajdas, I.; Molnár, M.; Rethemeyer, J.

    2013-01-01

    Most laboratories prepare carbonate samples for radiocarbon analysis by acid decomposition in evacuated glass tubes and subsequent reduction of the evolved CO2 to graphite in self-made reduction manifolds. This process is time consuming and labor intensive. In this work, we have tested a new approach for the preparation of carbonate samples, in which any high-vacuum system is avoided and helium is used as a carrier gas. The liberation of CO2 from carbonates with phosphoric acid is performed in a similar way as is often done in stable isotope ratio mass spectrometry, where CO2 is released with acid in a septum-sealed tube under a helium atmosphere. The formed CO2 is later flushed out in a helium flow by means of a double-walled needle and transferred from the tubes to the zeolite trap of the automated graphitization equipment (AGE). It essentially replaces the elemental analyzer normally used for the combustion of organic samples. The process can be fully automated, from sampling the released CO2 in the septum-sealed tubes with a commercially available auto-sampler to the graphitization with the AGE. The new method yields low sample blanks of about 50,000 years. Results of processed reference materials (IAEA-C2, FIRI-C) are in agreement with their consensus values.

  8. Adaptive patch-based POCS approach for super resolution reconstruction of 4D-CT lung data.

    PubMed

    Wang, Tingting; Cao, Lei; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2015-08-01

    Image enhancement of lung four-dimensional computed tomography (4D-CT) data is highly important because image resolution remains a crucial point in lung cancer radiotherapy. In this paper, we propose a method for lung 4D-CT super resolution (SR) that uses an adaptive-patch-based projection onto convex sets (POCS) approach, in contrast with the global POCS SR algorithm, to recover fine details with fewer artifacts in images. The main contribution of this patch-based approach is that the interfering local structure from other phases can be rejected by employing a similar-patch adaptive selection strategy. The effectiveness of our approach is demonstrated through experiments on simulated images and real lung 4D-CT datasets. A comparison with previously published SR reconstruction methods highlights the favorable characteristics of the proposed method.

  9. Adaptive patch-based POCS approach for super resolution reconstruction of 4D-CT lung data

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Cao, Lei; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2015-08-01

    Image enhancement of lung four-dimensional computed tomography (4D-CT) data is highly important because image resolution remains a crucial point in lung cancer radiotherapy. In this paper, we propose a method for lung 4D-CT super resolution (SR) that uses an adaptive-patch-based projection onto convex sets (POCS) approach, in contrast with the global POCS SR algorithm, to recover fine details with fewer artifacts in images. The main contribution of this patch-based approach is that the interfering local structure from other phases can be rejected by employing a similar-patch adaptive selection strategy. The effectiveness of our approach is demonstrated through experiments on simulated images and real lung 4D-CT datasets. A comparison with previously published SR reconstruction methods highlights the favorable characteristics of the proposed method.

  10. Student Approaches to Learning in Physics--Validity and Exploration Using Adapted SPQ

    ERIC Educational Resources Information Center

    Sharma, Manjula Devi; Stewart, Chris; Wilson, Rachel; Gokalp, Muhammed Sait

    2013-01-01

    The aim of this study was to investigate an adaptation of the Study Processes Questionnaire for the discipline of physics. A total of 2030 first year physics students at an Australian metropolitan university completed the questionnaire over three different year cohorts. The resultant data has been used to explore whether the adaptation of the…

  11. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  12. Development and Climate Change: A Mainstreaming Approach for Assessing Economic, Social, and Environmental Impacts of Adaptation Measures

    NASA Astrophysics Data System (ADS)

    Halsnæs, Kirsten; Trærup, Sara

    2009-05-01

    The paper introduces the so-called climate change mainstreaming approach, where vulnerability and adaptation measures are assessed in the context of general development policy objectives. The approach is based on the application of a limited set of indicators. These indicators are selected as representatives of focal development policy objectives, and a stepwise approach for addressing climate change impacts, development linkages, and the economic, social and environmental dimensions related to vulnerability and adaptation is introduced. Within this context, it is illustrated how development policy indicators can in practice be used to assess climate change impacts and adaptation measures, based on three case studies: a road project in flood-prone areas of Mozambique, rainwater harvesting in the agricultural sector in Tanzania, and malaria protection in Tanzania. The conclusions of the paper confirm that climate risks can be reduced at relatively low costs, but uncertainty still remains about some of the wider development impacts of implementing climate change adaptation measures.

  13. Development and climate change: a mainstreaming approach for assessing economic, social, and environmental impacts of adaptation measures.

    PubMed

    Halsnaes, Kirsten; Traerup, Sara

    2009-05-01

    The paper introduces the so-called climate change mainstreaming approach, where vulnerability and adaptation measures are assessed in the context of general development policy objectives. The approach is based on the application of a limited set of indicators. These indicators are selected as representatives of focal development policy objectives, and a stepwise approach for addressing climate change impacts, development linkages, and the economic, social and environmental dimensions related to vulnerability and adaptation is introduced. Within this context, it is illustrated how development policy indicators can in practice be used to assess climate change impacts and adaptation measures, based on three case studies: a road project in flood-prone areas of Mozambique, rainwater harvesting in the agricultural sector in Tanzania, and malaria protection in Tanzania. The conclusions of the paper confirm that climate risks can be reduced at relatively low costs, but uncertainty still remains about some of the wider development impacts of implementing climate change adaptation measures.

  14. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies.
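    The bias described above is easy to reproduce in a few lines; the simulation below (hypothetical rates, a naive two-parameter occupancy likelihood rather than the false-positive models the authors advocate) shows how a small per-survey false-positive probability inflates the estimated occupancy.

```python
# Illustrative simulation: a naive occupancy model fit to data containing false positives.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom
from scipy.special import expit

rng = np.random.default_rng(1)
n_sites, n_surveys = 200, 5
psi_true, p11, p10 = 0.4, 0.6, 0.03                          # occupancy, detection, false-positive

z = rng.binomial(1, psi_true, n_sites)                        # true occupancy state
p_det = np.where(z == 1, p11, p10)                            # false positives where z = 0
y = rng.binomial(n_surveys, p_det)                            # detections per site

def neg_loglik(theta):
    psi, p = expit(theta)                                     # naive model: no false positives
    site_lik = psi * binom.pmf(y, n_surveys, p) + (1 - psi) * (y == 0)
    return -np.sum(np.log(site_lik + 1e-300))

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
print(f"true occupancy {psi_true:.2f}  naive estimate {psi_hat:.2f} "
      f"(detection estimate {p_hat:.2f})")
```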

  15. A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications

    NASA Technical Reports Server (NTRS)

    Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco

    2015-01-01

    This paper presents a methods for safe spacecraft autonomous maneuvering that leverages robotic motion-planning techniques to spacecraft control. Specifically the scenario we consider is an in-plan rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy Wiltshire Hill frame. The trajectory for the chaser spacecraft is generated in a receding horizon fashion by executing a sampling based robotic motion planning algorithm name Fast Marching Trees (FMT) which efficiently grows a tree of trajectories over a set of probabillistically drawn samples in the state space. To enforce safety the tree is only grown over actively safe samples for which there exists a one-burn collision avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thrusters failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generations process and appears amenable to real time implementation on orbit. Simulation results are presented for a two-fault tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.

  16. A Bayesian approach for the estimation of patient compliance based on the last sampling information.

    PubMed

    Barrière, Olivier; Li, Jun; Nekka, Fahima

    2011-06-01

    Poor adherence to a drug prescription significantly impacts on the efficacy and safety of a planned therapy. The relationship between drug intake and pharmacokinetics (PK) is only partially known. In this work, we focus on the so-called "inverse problem", concerned with the issue of retracing the patient compliance scenario using limited clinical knowledge. Using a reported Pop-PK model of imatinib, and accounting for the variability around its PK parameters, we were able to simulate a whole range of drug concentration values at a specific sampling point for a population of patients with all possible drug compliance profiles. Using a Bayesian decision rule, we developed a methodology for determining the compliance profile most likely to have preceded a given sampled concentration. The adopted approach allows, for the first time, quantitative knowledge to be acquired about the compliance patterns having a causal effect on a given PK. Moreover, using a simulation approach, we were able to evaluate the evolution of the success rate of the retracing process in terms of the time period considered before sampling as well as the model-inherent variability. In conclusion, this work proposes, from a probabilistic viewpoint, a solution to the inverse problem of compliance determination. PMID:21445612
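    A conceptual sketch of the retracing step is given below, using a generic one-compartment oral-absorption model with illustrative parameter values (not the published imatinib Pop-PK model) and a normal approximation of each simulated concentration distribution; the compliance profiles over the last three scheduled doses are enumerated and ranked by their posterior probability given one observed concentration.

```python
# Conceptual sketch of the "inverse" compliance-retracing idea with generic PK values.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
ka, V, CL = 0.5, 250.0, 14.0            # 1/h, L, L/h (illustrative typical values)
dose, tau, t_sample = 400.0, 24.0, 4.0  # mg, dosing interval [h], hours after last scheduled dose

def concentration(taken, ka, V, CL):
    """Plasma concentration at t_sample given which of the last 3 doses were taken."""
    ke = CL / V
    t = np.array([t_sample + 2 * tau, t_sample + tau, t_sample])   # time since each dose
    amounts = dose * np.array(taken) * ka / (V * (ka - ke))
    return np.sum(amounts * (np.exp(-ke * t) - np.exp(-ka * t)))

profiles = list(product([0, 1], repeat=3))            # 8 possible compliance profiles
# Population simulation: log-normal between-subject variability on V and CL.
sims = {p: [concentration(p, ka, V * np.exp(0.2 * rng.normal()),
                          CL * np.exp(0.2 * rng.normal())) for _ in range(2000)]
        for p in profiles}

def retrace(c_obs):
    """Posterior over profiles under a flat prior and a normal approximation per profile."""
    post = []
    for p in profiles:
        mu, sd = np.mean(sims[p]), np.std(sims[p]) + 1e-9
        post.append(np.exp(-0.5 * ((c_obs - mu) / sd) ** 2) / sd)
    post = np.array(post)
    return post / post.sum()

c_obs = concentration((1, 0, 1), ka, V, CL)           # e.g. the middle dose was missed
for p, pr in sorted(zip(profiles, retrace(c_obs)), key=lambda t: -t[1]):
    print(p, f"{pr:.2f}")
```

    Profiles that produce similar concentrations remain partly confounded, which is why the success rate studied in the paper depends on the sampling time and on the model variability.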

  17. Reducing Uncertainty In Ecosystem Structure Inventories From Spaceborne Lidar Using Alternate Spatial Sampling Approaches

    NASA Astrophysics Data System (ADS)

    Lefsky, M. A.; Ramond, T.; Weimer, C. S.

    2010-12-01

    Current and proposed spaceborne lidar sensors sample the land surface using observations along transects in which consecutive observations in the along-track dimension are either contiguous (e.g. VCL, DESDynI, Livex) or spaced (ICESat). These sampling patterns are inefficient because multiple observations are made of a spatially autocorrelated phenomenon (i.e. vegetation patches) while large areas of the landscape are left un-sampled. This results in higher uncertainty in estimates of average ecosystem structure than would be obtained using either random sampling or sampling in regular grids. We compared three sampling scenarios for spaceborne lidar: five transects spaced every 850 m across-track with contiguous 25m footprints along-track, the same number of footprints distributed randomly, and a hybrid approach that retains the central transect of contiguous 25m footprints and distributes the remainder of the footprints into a grid with 178 m spacing. We used simulated ground tracks at four latitudes for a realistic spaceborne lidar mission and calculated the amount of time required to achieve 150 m spacing between transects and the number of near-coincident observations for each scenario. We used four lidar height datasets collected using the Laser Vegetation Imaging Sensor (La Selva, Costa Rica, Sierra Nevada, California, Duke Forest, North Carolina and Harvard Forest, Massachusetts) to calculate the standard error of estimates of landscape height for each scenario. We found that a hybrid sampling approach reduced the amount of time required to reach a transect spacing of 150 m by a factor of three at all four latitudes, and that the number of near-coincident observations was greater by a factor of five at the equator and at least equal throughout the range of latitudes sampled. The standard error of landscape height was between 2 and 2.5 times smaller using either hybrid or random sampling than using transect sampling. As the pulses generated by a spaceborne
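    The core statistical point is easy to demonstrate; the simulation below (a synthetic autocorrelated height field, one transect versus the same number of randomly placed footprints, not the LVIS datasets) shows the larger standard error of the landscape mean obtained with clustered transect sampling.

```python
# Illustrative simulation: transect vs random footprint sampling of an autocorrelated field.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
field = gaussian_filter(rng.normal(size=(400, 400)), sigma=8)   # autocorrelated "heights"
n_footprints, n_reps = 200, 2000

def transect_mean():
    r = rng.integers(0, field.shape[0])                 # one along-track line of footprints
    c0 = rng.integers(0, field.shape[1] - n_footprints)
    return field[r, c0:c0 + n_footprints].mean()

def random_mean():
    r = rng.integers(0, field.shape[0], n_footprints)
    c = rng.integers(0, field.shape[1], n_footprints)
    return field[r, c].mean()

se_transect = np.std([transect_mean() for _ in range(n_reps)])
se_random = np.std([random_mean() for _ in range(n_reps)])
print(f"SE of landscape mean, transect sampling: {se_transect:.4f}")
print(f"SE of landscape mean, random sampling:   {se_random:.4f}")
```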

  18. A neural learning approach for adaptive image restoration using a fuzzy model-based network architecture.

    PubMed

    Wong, H S; Guan, L

    2001-01-01

    We address the problem of adaptive regularization in image restoration by adopting a neural-network learning approach. Instead of explicitly specifying the local regularization parameter values, they are regarded as network weights which are then modified through the supply of appropriate training examples. The desired response of the network is in the form of a gray level value estimate of the current pixel using weighted order statistic (WOS) filter. However, instead of replacing the previous value with this estimate, this is used to modify the network weights, or equivalently, the regularization parameters such that the restored gray level value produced by the network is closer to this desired response. In this way, the single WOS estimation scheme can allow appropriate parameter values to emerge under different noise conditions, rather than requiring their explicit selection in each occasion. In addition, we also consider the separate regularization of edges and textures due to their different noise masking capabilities. This in turn requires discriminating between these two feature types. Due to the inability of conventional local variance measures to distinguish these two high variance features, we propose the new edge-texture characterization (ETC) measure which performs this discrimination based on a scalar value only. This is then incorporated into a fuzzified form of the previous neural network which determines the degree of membership of each high variance pixel in two fuzzy sets, the EDGE and TEXTURE fuzzy sets, from the local ETC value, and then evaluates the appropriate regularization parameter by appropriately combining these two membership function values.

  19. A Local Adaptive Approach for Dense Stereo Matching in Architectural Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2013-02-01

    In recent years, a demand for 3D models of various scales and precisions has been growing for a wide range of applications; among them, cultural heritage recording is a particularly important and challenging field. We outline an automatic 3D reconstruction pipeline, mainly focusing on dense stereo-matching which relies on a hierarchical, local optimization scheme. Our matching framework consists of a combination of robust cost measures, extracted via an intuitive cost aggregation support area and set within a coarse-to-fine strategy. The cost function is formulated by combining three individual costs: a cost computed on an extended census transformation of the images; the absolute difference cost, taking into account information from colour channels; and a cost based on the principal image derivatives. An efficient adaptive method of aggregating matching cost for each pixel is then applied, relying on linearly expanded cross skeleton support regions. Aggregated cost is smoothed via a 3D Gaussian function. Finally, a simple "winner-takes-all" approach extracts the disparity value with minimum cost. This keeps algorithmic complexity and system computational requirements acceptably low for high resolution images (or real-time applications), when compared to complex matching functions of global formulations. The stereo algorithm adopts a hierarchical scheme to accommodate high-resolution images and complex scenes. In a last step, a robust post-processing work-flow is applied to enhance the disparity map and, consequently, the geometric quality of the reconstructed scene. Successful results from our implementation, which combines pre-existing algorithms and novel considerations, are presented and evaluated on the Middlebury platform.
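    A heavily simplified sketch of the matching core is given below (synthetic image pair, absolute-difference and derivative costs only, a box filter standing in for the cross-skeleton aggregation, no census term and no hierarchy); it shows the cost-volume construction and the winner-takes-all disparity selection.

```python
# Simplified sketch of cost-volume stereo matching with winner-takes-all selection.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

rng = np.random.default_rng(0)
H, W, true_disp, max_disp = 60, 80, 4, 8
right = gaussian_filter(rng.normal(size=(H, W)), 2)
left = np.roll(right, true_disp, axis=1)          # left image shifted by the true disparity

def gradient_x(img):
    return np.gradient(img, axis=1)

costs = np.empty((max_disp + 1, H, W))
for d in range(max_disp + 1):
    shifted = np.roll(right, d, axis=1)           # candidate match in the right image
    cost_ad = np.abs(left - shifted)              # absolute difference cost
    cost_grad = np.abs(gradient_x(left) - gradient_x(shifted))   # derivative cost
    raw = 0.5 * cost_ad + 0.5 * cost_grad
    costs[d] = uniform_filter(raw, size=9)        # simplified cost aggregation

disparity = costs.argmin(axis=0)                  # winner-takes-all
valid = disparity[:, max_disp:-max_disp]          # ignore wrap-around columns
print(f"median disparity in valid region: {np.median(valid):.0f} (true {true_disp})")
```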

  20. Diagnosing Intellectual Disability in a Forensic Sample: Gender and Age Effects on the Relationship between Cognitive and Adaptive Functioning

    ERIC Educational Resources Information Center

    Hayes, Susan C.

    2005-01-01

    Background: The relationship between adaptive behaviour and cognitive functioning in offenders with intellectual disabilities is not well researched. This study aims to examine gender and age effects on the relationship between these two areas of functioning. Method: The "Vineland Adaptive Behavior Scales" (VABS) and the "Kaufman Brief…

  1. Developing an Instructional Material Using a Concept Cartoon Adapted to the 5E Model: A Sample of Teaching Erosion

    ERIC Educational Resources Information Center

    Birisci, Salih; Metin, Mustafa

    2010-01-01

    Using different instructional materials adapted within the constructivist learning theory will enhance students' conceptual understanding. From this point of view, an instructional instrument using a concept cartoon adapted to the 5E model has been developed and introduced in this study. The study has some deficiencies in investigating students'…

  2. Micro-TLC Approach for Fast Screening of Environmental Samples Derived from Surface and Sewage Waters.

    PubMed

    Zarzycki, Paweł K; Slączka, Magdalena M; Włodarczyk, Elżbieta; Baran, Michał J

    2013-01-01

    In this work we demonstrated the analytical capability of the micro-planar (micro-TLC) technique, comprising one- and two-dimensional (2D) separation modes, to generate fingerprints of environmental samples originating from sewage and ecosystem waters. We showed that the elaborated separation and detection protocols are complementary to a previously invented HPLC method based on temperature-dependent inclusion chromatography and UV-DAD detection. The presented 1D and 2D micro-TLC chromatograms of SPE (solid-phase extraction) extracts were optimized for fast and low-cost screening of water samples collected from lakes and rivers located in the area of Middle Pomerania in the northern part of Poland. Moreover, we studied highly organic compounds loaded in the treated and untreated sewage waters obtained from the municipal wastewater treatment plant "Jamno" near Koszalin City (Poland). The analyzed environmental samples contained a number of substances characterized by a polarity range from estetrol to progesterone, as well as chlorophyll-related dyes previously isolated and pre-purified by a simple SPE protocol involving C18 cartridges. Optimization of the micro-TLC separation and quantification protocols for such samples is discussed from the practical point of view using simple separation efficiency criteria including total peak number, log(product ΔhR F), signal intensity and peak asymmetry. Outcomes of the presented analytical approach, especially using detection involving direct fluorescence (UV366/Vis) and phosphomolybdic acid (PMA) visualization, are compared with the UV-DAD HPLC-generated data reported previously. Chemometric investigation based on principal components analysis revealed that SPE extracts separated by micro-TLC and detected under fluorescence and PMA visualization modes can be used for robust sample fingerprinting even after long-term storage of the extracts (up to 4 years) at subambient temperature (-20 °C). Such an approach allows characterization of a wide range of sample components that

  3. Integrating evolutionary and functional approaches to infer adaptation at specific loci.

    PubMed

    Storz, Jay F; Wheat, Christopher W

    2010-09-01

    Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation.

  4. Spatiotemporal reconstruction of gaps in multivariate fields using the direct sampling approach

    NASA Astrophysics Data System (ADS)

    Mariethoz, Gregoire; McCabe, Matthew F.; Renard, Philippe

    2012-10-01

    The development of spatially continuous fields from sparse observing networks is an outstanding problem in the environmental and Earth sciences. Here we explore an approach to produce spatially continuous fields from discontinuous data that focuses on reconstructing gaps routinely present in satellite-based Earth observations. To assess the utility of the approach, we use synthetic imagery derived from a regional climate model of southeastern Australia. Orbital tracks, scan geometry influences, and atmospheric artifacts are artificially imposed upon these model simulations to examine the techniques' capacity to reproduce realistic and representative retrievals. The imposed discontinuities are reconstructed using a direct sampling technique and are compared against the original continuous model data: a synthetic simulation experiment. Results indicate that the multipoint geostatistical gap-filling approach produces texturally realistic spatially continuous fields from otherwise discontinuous data sets. Reconstruction results are assessed through comparison of spatial distributions, as well as through visual assessment of fine-scale features. Complex spatial patterns and fine-scale structure can be resolved within the reconstructions, illustrating that the often nonlinear dependencies between variables can be maintained. The stochastic nature of the methodology makes it possible to expand the approach within a Monte Carlo framework in order to estimate the uncertainty related to subsequent reconstructions. From a practical perspective, the reconstruction method is straightforward and requires minimum user intervention for parameter adjustment. As such, it can be automated to systematically process real time remote sensing measurements.
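    A minimal, single-variable version of the direct sampling idea is sketched below (synthetic field, small square gap, simplified neighbourhood and scan parameters, not the authors' multivariate implementation); each missing pixel is simulated by scanning random informed locations and copying the value whose surrounding data pattern best matches the pixel's already-informed neighbourhood.

```python
# Minimal sketch of direct sampling gap-filling on a synthetic autocorrelated field.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
field = gaussian_filter(rng.normal(size=(80, 80)), 5)      # synthetic "satellite" field
sim = field.copy()
sim[30:50, 30:50] = np.nan                                 # artificial gap to reconstruct

offsets = [(di, dj) for di in range(-2, 3) for dj in range(-2, 3) if (di, dj) != (0, 0)]

def pattern(img, i, j):
    """Neighbourhood values around (i, j); NaN outside the image or where data are missing."""
    out = np.full(len(offsets), np.nan)
    for k, (di, dj) in enumerate(offsets):
        ii, jj = i + di, j + dj
        if 0 <= ii < img.shape[0] and 0 <= jj < img.shape[1]:
            out[k] = img[ii, jj]
    return out

def simulate_pixel(target, n_scan=200, threshold=1e-4):
    known = ~np.isnan(target)
    best_val, best_dist = np.nan, np.inf
    for _ in range(n_scan):
        i, j = rng.integers(0, sim.shape[0]), rng.integers(0, sim.shape[1])
        if np.isnan(sim[i, j]):
            continue
        if not known.any():                                # no conditioning data yet
            return sim[i, j]
        cand = pattern(sim, i, j)
        both = known & ~np.isnan(cand)
        if not both.any():
            continue
        d = np.mean((cand[both] - target[both]) ** 2)
        if d < best_dist:
            best_val, best_dist = sim[i, j], d
            if d < threshold:                              # early acceptance, as in direct sampling
                break
    return best_val

gap = np.argwhere(np.isnan(sim))
for i, j in gap[rng.permutation(len(gap))]:                # random simulation path
    sim[i, j] = simulate_pixel(pattern(sim, i, j))

err = np.sqrt(np.nanmean((sim[30:50, 30:50] - field[30:50, 30:50]) ** 2))
print(f"gap reconstructed; RMS difference from the withheld values: {err:.4f}")
```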

  5. A Hierarchical Distance Sampling Approach to Estimating Mortality Rates from Opportunistic Carcass Surveillance Data

    PubMed Central

    Bellan, Steve E.; Gimenez, Olivier; Choquet, Rémi; Getz, Wayne M.

    2012-01-01

    Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of transects during survey design. Because mortality events are rare, however, it is often not possible to obtain precise estimates in this way without infeasible levels of effort. A great deal of wildlife data, including mortality data, is available via road-based surveys. Interpreting these data in a distance sampling framework requires accounting for the non-uniform sampling. Additionally, analyses of opportunistic mortality data must account for the decline in carcass detectability through time. We develop several extensions to distance sampling theory to address these problems. We build mortality estimators in a hierarchical framework that integrates animal movement data, surveillance effort data, and motion-sensor camera trap data, respectively, to relax the uniformity assumption, account for spatiotemporal variation in surveillance effort, and explicitly model carcass detection and disappearance as competing ongoing processes. Analysis of simulated data showed that our estimators were unbiased and that their confidence intervals had good coverage. We also illustrate our approach on opportunistic carcass surveillance data acquired in 2010 during an anthrax outbreak in the plains zebra of Etosha National Park, Namibia. The methods developed here will allow researchers and managers to infer mortality rates from opportunistic surveillance data. PMID:24224079

  6. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
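    The record above is high level; as a minimal pointer to the mathematical object involved, the sketch below simply solves a continuous algebraic Riccati equation for a state-feedback gain on an illustrative second-order plant. The patented architecture makes the Riccati solution parameter dependent and couples it to an adaptive law, which is not reproduced here.

```python
# Minimal, heavily simplified sketch: the standard LQR/Riccati computation only.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # illustrative plant (nominal part only)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)          # Riccati solution
K = np.linalg.solve(R, B.T @ P)               # state-feedback gain u = -K x
print("gain K:", np.round(K, 3))
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(A - B @ K), 3))
```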

  7. Molecular identification of cryptic bumblebee species from degraded samples using PCR-RFLP approach.

    PubMed

    Vesterlund, S-R; Sorvari, J; Vasemägi, A

    2014-01-01

    The worldwide decline and local extinctions of bumblebees have raised a need for fast and accurate tools for species identification. Morphological characters are often not sufficient, and molecular methods have been increasingly used for reliable identification of bumblebee species. Molecular methods often require high-quality DNA, which makes them less suitable for analysis of low-quality or older samples. We modified the PCR-RFLP protocol for an efficient and cost-effective identification of four bumblebee species in the subgenus Bombus s. str. (B. lucorum, B. terrestris, B. magnus and B. cryptarum). We used a short partial mitochondrial COI fragment (446 bp) and three diagnostic restriction enzymes (Hinf I, Hinc II and Hae III) to identify species from degraded DNA material. This approach allowed us to efficiently determine the correct species from all degraded DNA samples, while only a subset of samples, 64.6% (31 of 48), resulted in successful amplification of a longer COI fragment (1064 bp) using the previously described method. This protocol can be applied for conservation and management of bumblebees within this subgenus and is especially useful for fast species identification from degraded samples.

  8. Sample size planning for the coefficient of variation from the accuracy in parameter estimation approach.

    PubMed

    Kelley, Ken

    2007-11-01

    The accuracy in parameter estimation approach to sample size planning is developed for the coefficient of variation, where the goal of the method is to obtain an accurate parameter estimate by achieving a sufficiently narrow confidence interval. The first method allows researchers to plan sample size so that the expected width of the confidence interval for the population coefficient of variation is sufficiently narrow. A modification allows a desired degree of assurance to be incorporated into the method, so that the obtained confidence interval will be sufficiently narrow with some specified probability (e.g., 85% assurance that the 95% confidence interval width will be no wider than ω units). Tables of necessary sample size are provided for a variety of scenarios and may help researchers planning a study in which the coefficient of variation is of interest to choose an appropriate sample size, optionally with some specified assurance of the confidence interval being sufficiently narrow. Freely available computer routines have been developed that allow researchers to easily implement all of the methods discussed in the article.
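    As a rough illustration of the underlying idea (not Kelley's exact AIPE procedure), the sketch below increases the sample size until the expected confidence-interval width for the coefficient of variation falls below a target, using the common large-sample approximation Var(cv_hat) ~ cv^2 (0.5 + cv^2) / n under normality; the planning value, target width, and function name are assumptions for illustration.

      import math
      from scipy import stats

      def n_for_cv_width(cv_planning, omega, conf=0.95, n_start=4, n_max=10**6):
          """Smallest n whose expected (conf)-level CI for the coefficient of variation
          is no wider than `omega`, using Var(cv_hat) ~= cv^2 * (0.5 + cv^2) / n.
          Illustrative approximation only, not the exact AIPE procedure."""
          z = stats.norm.ppf(0.5 + conf / 2.0)
          for n in range(n_start, n_max):
              se = math.sqrt(cv_planning**2 * (0.5 + cv_planning**2) / n)
              if 2 * z * se <= omega:
                  return n
          raise ValueError("no sample size found below n_max")

      # e.g. planning value cv = 0.20, desired full CI width 0.05 at 95% confidence
      print(n_for_cv_width(0.20, 0.05))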

  9. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach

    PubMed Central

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-01-01

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So
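    The SoDDS optimizer itself is not reproduced here; the following generic simulated-annealing skeleton (hypothetical Python) only illustrates how a path cost, a neighborhood move, and a cooling schedule fit together, with a toy waypoint-ordering problem standing in for the real glider-path cost and navigation constraints.

      import math
      import random

      def simulated_annealing(initial_path, cost, perturb, t0=1.0, cooling=0.95,
                              steps_per_temp=50, t_min=1e-3, rng=None):
          """Generic simulated annealing: `cost(path)` scores a candidate path
          (e.g. residual uncertainty of an objective map) and `perturb(path, rng)`
          proposes a neighbor that respects the problem's constraints."""
          rng = random.Random(0) if rng is None else rng
          current, current_cost = initial_path, cost(initial_path)
          best, best_cost = current, current_cost
          t = t0
          while t > t_min:
              for _ in range(steps_per_temp):
                  cand = perturb(current, rng)
                  c = cost(cand)
                  if c < current_cost or rng.random() < math.exp(-(c - current_cost) / t):
                      current, current_cost = cand, c
                      if c < best_cost:
                          best, best_cost = cand, c
              t *= cooling
          return best, best_cost

      # toy usage: order 6 waypoints to minimize total path length (stand-in cost)
      pts = [(0, 0), (3, 1), (1, 4), (5, 5), (2, 2), (4, 0)]
      length = lambda p: sum(math.dist(p[i], p[i + 1]) for i in range(len(p) - 1))
      def swap_two(p, rng):
          i, j = rng.sample(range(len(p)), 2)
          q = list(p); q[i], q[j] = q[j], q[i]
          return q
      print(simulated_annealing(list(pts), length, swap_two)[1])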

  10. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach.

    PubMed

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-01-01

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So

  11. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach.

    PubMed

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-12-26

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So

  12. New Approach for IIR Adaptive Lattice Filter Structure Using Simultaneous Perturbation Algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Jorge Ivan Medina; Nakano, Kazushi; Higuchi, Kohji

    Adaptive infinite impulse response (IIR), or recursive, filters are less attractive mainly because of stability issues and the difficulties associated with their adaptive algorithms. Therefore, in this paper the adaptive IIR lattice filters are studied in order to devise algorithms that preserve the stability of the corresponding direct-form schemes. We analyze the local properties of stationary points and suggest a transformation that achieves this goal, yielding algorithms that can be implemented efficiently. Application to the Steiglitz-McBride (SM) and Simple Hyperstable Adaptive Recursive Filter (SHARF) algorithms is presented. A modified version of Simultaneous Perturbation Stochastic Approximation (SPSA) is also presented to obtain the coefficients in lattice form more efficiently and at lower computational cost and complexity. The results are compared with previous lattice versions of these algorithms, which may fail to preserve the stability of stationary points.
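    The following is a generic SPSA sketch (hypothetical Python, not the authors' modified version): each iteration perturbs all coefficients simultaneously with a random ±1 vector and estimates the gradient from just two loss evaluations; the gain sequences and the toy quadratic loss are illustrative only.

      import numpy as np

      def spsa_minimize(loss, theta0, n_iter=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
          """Simultaneous Perturbation Stochastic Approximation (Spall-type form).
          Each iteration estimates the gradient from only two loss evaluations,
          regardless of the number of coefficients being adapted."""
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float).copy()
          for k in range(1, n_iter + 1):
              ak = a / k**alpha                                   # step-size sequence
              ck = c / k**gamma                                   # perturbation-size sequence
              delta = rng.choice([-1.0, 1.0], size=theta.shape)   # simultaneous Bernoulli perturbation
              g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck * delta)
              theta -= ak * g_hat
          return theta

      # e.g. fit two coefficients by minimizing a toy quadratic loss
      target = np.array([0.5, -0.25])
      print(spsa_minimize(lambda th: np.sum((th - target) ** 2), np.zeros(2)))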

  13. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
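    For background, the matrix measure (logarithmic norm) on which such bounds rest is standardly defined as below; this is textbook material, not the paper's specific delay-margin expression.

      \mu(A) \;=\; \lim_{h \to 0^{+}} \frac{\lVert I + hA \rVert - 1}{h},
      \qquad
      \mu_{2}(A) \;=\; \tfrac{1}{2}\,\lambda_{\max}\!\left(A + A^{\mathsf{T}}\right)

    The second expression is the measure induced by the 2-norm; upper bounds on the admissible time delay then follow from comparison-type inequalities that combine the measure of the nominal system matrix with the norm of the delayed-term matrix.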

  14. Novel extraction approach for liquid samples: stir cake sorptive extraction using monolith.

    PubMed

    Huang, Xiaojia; Chen, Linli; Lin, Fuhua; Yuan, Dongxing

    2011-08-01

    In this study, a new extraction approach for liquid samples, stir cake sorptive extraction using monoliths as the extractive medium, was developed. The preparation procedure of the stir cake is very simple. First, the monolithic cake is synthesized by in situ polymerization; then, the cake is inserted into an original unit (holder), which is constructed from a syringe cartridge and allows the magnetic stirring of the cake during the extraction process. The effects of the monolithic cake dimensions and the unit design on the extraction performance were investigated and optimized in detail. To demonstrate the usability of this new extraction approach, poly(vinylimidazole-divinylbenzene) was prepared and acted as the extractive cake. The analysis of steroid hormones in milk samples by the combination of the stir cake with high-performance liquid chromatography with diode array detection was selected as a paradigm for the practical evaluation of stir cake sorptive extraction. Under the optimized extraction conditions, low detection limits (S/N=3) and quantification limits (S/N=10) of the proposed method for the target analytes were achieved, in the ranges 0.33-0.69 and 1.08-2.28 μg/L, respectively. The method also showed good linearity, repeatability, high feasibility and acceptable recoveries. Because the monolithic cake does not contact the vessel wall during stirring, there is no friction loss of the extractive medium and the stir cake can be used for more than 1000 h.

  15. Estimating variable effective population sizes from multiple genomes: a sequentially markov conditional sampling distribution approach.

    PubMed

    Sheehan, Sara; Harris, Kelley; Song, Yun S

    2013-07-01

    Throughout history, the population size of modern humans has varied considerably due to changes in environment, culture, and technology. More accurate estimates of population size changes, and when they occurred, should provide a clearer picture of human colonization history and help remove confounding effects from natural selection inference. Demography influences the pattern of genetic variation in a population, and thus genomic data of multiple individuals sampled from one or more present-day populations contain valuable information about the past demographic history. Recently, Li and Durbin developed a coalescent-based hidden Markov model, called the pairwise sequentially Markovian coalescent (PSMC), for a pair of chromosomes (or one diploid individual) to estimate past population sizes. This is an efficient, useful approach, but its accuracy in the very recent past is hampered by the fact that, because of the small sample size, only few coalescence events occur in that period. Multiple genomes from the same population contain more information about the recent past, but are also more computationally challenging to study jointly in a coalescent framework. Here, we present a new coalescent-based method that can efficiently infer population size changes from multiple genomes, providing access to a new store of information about the recent past. Our work generalizes the recently developed sequentially Markov conditional sampling distribution framework, which provides an accurate approximation of the probability of observing a newly sampled haplotype given a set of previously sampled haplotypes. Simulation results demonstrate that we can accurately reconstruct the true population histories, with a significant improvement over the PSMC in the recent past. We apply our method, called diCal, to the genomes of multiple human individuals of European and African ancestry to obtain a detailed population size change history during recent times.

  16. Estimating Variable Effective Population Sizes from Multiple Genomes: A Sequentially Markov Conditional Sampling Distribution Approach

    PubMed Central

    Sheehan, Sara; Harris, Kelley; Song, Yun S.

    2013-01-01

    Throughout history, the population size of modern humans has varied considerably due to changes in environment, culture, and technology. More accurate estimates of population size changes, and when they occurred, should provide a clearer picture of human colonization history and help remove confounding effects from natural selection inference. Demography influences the pattern of genetic variation in a population, and thus genomic data of multiple individuals sampled from one or more present-day populations contain valuable information about the past demographic history. Recently, Li and Durbin developed a coalescent-based hidden Markov model, called the pairwise sequentially Markovian coalescent (PSMC), for a pair of chromosomes (or one diploid individual) to estimate past population sizes. This is an efficient, useful approach, but its accuracy in the very recent past is hampered by the fact that, because of the small sample size, only few coalescence events occur in that period. Multiple genomes from the same population contain more information about the recent past, but are also more computationally challenging to study jointly in a coalescent framework. Here, we present a new coalescent-based method that can efficiently infer population size changes from multiple genomes, providing access to a new store of information about the recent past. Our work generalizes the recently developed sequentially Markov conditional sampling distribution framework, which provides an accurate approximation of the probability of observing a newly sampled haplotype given a set of previously sampled haplotypes. Simulation results demonstrate that we can accurately reconstruct the true population histories, with a significant improvement over the PSMC in the recent past. We apply our method, called diCal, to the genomes of multiple human individuals of European and African ancestry to obtain a detailed population size change history during recent times. PMID:23608192

  17. Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems

    SciTech Connect

    Omitaomu, Olufemi A; Ganguly, Auroop R; Jiao, Yu

    2009-01-01

    In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information-theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents in a desktop environment using aggregate data; the second AB model, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, using fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on a test of three leadership theories (legitimacy, coercion, and representative) and two social mobilization theories (social influence and repression). The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.
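    A minimal sketch of the two quantities used for the comparison (hypothetical Python; the contingency table shown is made up purely for illustration):

      import numpy as np

      def entropy(counts):
          """Shannon entropy (bits) of a discrete distribution given raw counts."""
          p = np.asarray(counts, dtype=float)
          p = p[p > 0] / p.sum()
          return float(-(p * np.log2(p)).sum())

      def mutual_information(joint_counts):
          """Mutual information (bits) between two discrete variables from a 2-D
          contingency table of joint counts: I(X;Y) = H(X) + H(Y) - H(X,Y)."""
          j = np.asarray(joint_counts, dtype=float)
          return entropy(j.sum(axis=1)) + entropy(j.sum(axis=0)) - entropy(j.ravel())

      # e.g. agents per behavioral mode in two social classes (hypothetical counts)
      table = np.array([[120, 40, 40],    # class 1: pro-insurgent, neutral, pro-government
                        [30, 100, 70]])   # class 2
      print(entropy(table.sum(axis=0)), mutual_information(table))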

  18. State of the art of environmentally friendly sample preparation approaches for determination of PBDEs and metabolites in environmental and biological samples: A critical review.

    PubMed

    Berton, Paula; Lana, Nerina B; Ríos, Juan M; García-Reyes, Juan F; Altamirano, Jorgelina C

    2016-01-28

    Green chemistry principles for developing methodologies have gained attention in analytical chemistry in recent decades. A growing number of analytical techniques have been proposed for determination of organic persistent pollutants in environmental and biological samples. In this light, the current review aims to present state-of-the-art sample preparation approaches based on green analytical principles proposed for the determination of polybrominated diphenyl ethers (PBDEs) and metabolites (OH-PBDEs and MeO-PBDEs) in environmental and biological samples. Approaches to lower the solvent consumption and accelerate the extraction, such as pressurized liquid extraction, microwave-assisted extraction, and ultrasound-assisted extraction, are discussed in this review. Special attention is paid to miniaturized sample preparation methodologies and strategies proposed to reduce organic solvent consumption. Additionally, extraction techniques based on alternative solvents (surfactants, supercritical fluids, or ionic liquids) are also discussed in this work, even though these are scarcely used for determination of PBDEs. In addition to liquid-based extraction techniques, solid-based analytical techniques are also addressed. The development of greener, faster and simpler sample preparation approaches has increased in recent years (2003-2013). Among green extraction techniques, those based on the liquid phase predominate over those based on the solid phase (71% vs. 29%, respectively). For solid samples, solvent-assisted extraction techniques are preferred for leaching of PBDEs, and liquid-phase microextraction techniques are mostly used for liquid samples. Likewise, green characteristics of the instrumental analysis used after the extraction and clean-up steps are briefly discussed.

  19. Adaptive-optic approach to mitigating aero-optic disturbances for a forced shear layer

    NASA Astrophysics Data System (ADS)

    Nightingale, Alice M.

    Non-uniform, variable-density fields, resulting from compressibility effects in turbulent flows, are the source of aero-optical distortions which cause significant reductions in optical system performance. As a laser beam traverses an optically active medium containing index-of-refraction variations, several optical phenomena occur, including beam wander, image distortion, and beam defocus. When encountering a variation in the index field, light waves refract, causing an otherwise planar wavefront of a laser beam to become aberrated and contributing to the adverse effects mentioned above. Adaptive Optics (AO) is a technique used to correct for such spatially and temporally varying aberrations on an optical beam by applying a conjugate waveform correction prior to the beam's transmission through the flow. Conventional AO systems are bandwidth limited by real-time processing issues and wavefront sensor limitations. Therefore, an alternative to the conventional AO approach has been proposed, developed and evaluated with the goal of overcoming such bandwidth limitations. The alternative AO system, presented throughout this document, consists of two main features: feed-forward flow control and a phase-locked-loop AO control strategy. Initially irregular, unpredictable large-scale structures within a shear layer are regularized using flow control. Subsequently, the resulting optical wavefront, and corresponding optical signal, emerging from the regularized flow becomes more periodic and predictable, effectively reducing the bandwidth necessary to make real-time corrections. A phase-locked-loop controller is then used to perform real-time corrections. Wavefront corrections are estimated based upon the regularized flow, while two small-aperture laser beams provide a non-intrusive means of acquiring amplitude and phase error measurements. The phase-locked-loop controller uses these signals as feedback to synchronize the deformable mirror's waveform to that of the shear

  20. Flow Cell Sampling Technique: A new approach to analyze physical soil and particle surface properties of undisturbed soil samples

    NASA Astrophysics Data System (ADS)

    Krueger, Jiem; Leue, Martin; Heinze, Stefanie; Bachmann, Jörg

    2016-04-01

    Under unsaturated conditions, water flow occurs in the soil mainly as water film flow and depends on moisture content and pore surface properties. Increasing attention is being paid to coatings enclosing soil particles, which may affect wetting properties as well as hydraulic soil functions. Particle coatings are most likely responsible for many adsorption processes and are expected to favor local heterogeneous microstructure with enhanced biological activity. Many of the effects described cannot be detected on the basis of conventional soil column experiments, which were usually made to study soil hydraulic processes or surface-soil solution exchange processes. The general objective of this study was to develop a new field sampling method to unravel heterogeneous flow processes on small scales in an undisturbed soil under controlled lab conditions. This is done by using modified flow cells (Plexiglas). Besides measurements within a flow cell, such as breakthrough curves, the developed technique has several additional advantages over common columns or existing flow chamber/cell designs. The direct modification from the sampling frame to the flow cell provides the advantage of combining several analyses. The new technique enables cutting up to 5 thin undisturbed soil slices (quasi-replicates) down to 10 and/or 5 mm. Relatively large particles, for instance, may limit this sampling method. The large observation area of up to 150 cm² allows the characterization of particle surface properties at high spatial resolution within an undisturbed soil sample. This sampling technique, as shown in our study, makes it possible to link soil wetting, hydraulic, and several particle surface properties to spatial soil heterogeneities. This was shown with tracer experiments, small-scale contact angle measurements and analyses of the spatial distribution of functional groups of soil organic matter via DRIFT mapping.

  1. Semi-Supervised Approach to Phase Identification from Combinatorial Sample Diffraction Patterns

    NASA Astrophysics Data System (ADS)

    Bunn, Jonathan Kenneth; Hu, Jianjun; Hattrick-Simpers, Jason R.

    2016-08-01

    Manual attribution of crystallographic phases from high-throughput x-ray diffraction studies is an arduous task, and represents a rate-limiting step in high-throughput exploration of new materials. Here, we demonstrate a semi-supervised machine learning technique, SS-AutoPhase, which uses a two-step approach to identify automatically phases from diffraction data. First, clustering analysis is used to select a representative subset of samples automatically for human analysis. Second, an AdaBoost classifier uses the labeled samples to identify the presence of the different phases in diffraction data. SS-AutoPhase was used to identify the metallographic phases in 278 diffraction patterns from a FeGaPd composition spread sample. The accuracy of SS-AutoPhase was >82.6% for all phases when 15% of the diffraction patterns were used for training. The SS-AutoPhase predicted phase diagram showed excellent agreement with human expert analysis. Furthermore it was able to determine and identify correctly a previously unreported phase.
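    A rough sketch of the two-step scheme described above (hypothetical Python using scikit-learn; the authors' SS-AutoPhase code, cluster counts, and labelling protocol are not reproduced): cluster the patterns, have a human label one representative per cluster, then let AdaBoost label the rest.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.ensemble import AdaBoostClassifier

      def semi_supervised_phase_id(patterns, n_clusters=20, n_label_per_cluster=1, oracle=None, seed=0):
          """Two-step scheme in the spirit of the abstract: (1) cluster the diffraction
          patterns and ask a human (`oracle`) to label a few representatives per cluster;
          (2) train an AdaBoost classifier on the labelled subset and predict the rest."""
          patterns = np.asarray(patterns, dtype=float)
          km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(patterns)
          rep_idx = []
          for k in range(n_clusters):                       # pattern closest to each cluster centre
              members = np.where(km.labels_ == k)[0]
              d = np.linalg.norm(patterns[members] - km.cluster_centers_[k], axis=1)
              rep_idx.extend(members[np.argsort(d)[:n_label_per_cluster]])
          rep_idx = np.array(sorted(set(rep_idx)))
          y_rep = np.array([oracle(i) for i in rep_idx])    # human-provided labels
          clf = AdaBoostClassifier(n_estimators=200, random_state=seed)
          clf.fit(patterns[rep_idx], y_rep)
          return clf.predict(patterns), rep_idx

      # toy usage with synthetic "patterns"; known labels stand in for the human expert
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (60, 50)), rng.normal(3, 1, (60, 50))])
      truth = np.array([0] * 60 + [1] * 60)
      pred, asked = semi_supervised_phase_id(X, n_clusters=6, oracle=lambda i: truth[i])
      print((pred == truth).mean(), len(asked))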

  2. Assessing skin blood flow dynamics in older adults using a modified sample entropy approach.

    PubMed

    Liao, Fuyuan; Jan, Yih-Kuen

    2014-01-01

    The aging process may result in attenuated microvascular reactivity in response to environmental stimuli, which can be evaluated by analyzing skin blood flow (SBF) signals. Among various methods for analyzing physiological signals, sample entropy (SE) is commonly used to quantify the degree of regularity of time series. However, we found that for temporally correlated data, the SE value depends on the sampling rate. When data are oversampled, SE may give misleading results. To address this problem, we propose to modify the definition of SE by using time-lagged vectors in the calculation of the conditional probability that two sequences of data points which are within a tolerance r for m points remain within the tolerance at the next point. The lag can be chosen as the first minimum of the auto mutual information function. We tested the performance of modified SE using simulated signals and SBF data. The results showed that modified SE is able to quantify the degree of regularity of the signals regardless of sampling rate. Using this approach, we observed a more regular behavior of blood flow oscillations (BFO) during the local heating-induced maximal vasodilation period compared to the baseline in young and older adults, and a more regular behavior of BFO in older adults compared to young adults. These results suggest that modified SE may be useful in the study of SBF dynamics.
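    A minimal sketch of sample entropy computed on time-lagged template vectors, in the spirit of the modification described above (hypothetical Python; parameter values are illustrative and the paper's exact algorithmic details are not reproduced):

      import numpy as np

      def modified_sample_entropy(x, m=2, r=0.2, lag=1):
          """Sample entropy on time-lagged template vectors: vectors are built from
          every `lag`-th point so that oversampled, correlated data do not
          artificially lower the entropy. `r` is expressed as a fraction of the SD."""
          x = np.asarray(x, dtype=float)
          tol = r * x.std()
          def count_matches(mm):
              n_vec = len(x) - mm * lag
              templ = np.array([x[i:i + mm * lag:lag] for i in range(n_vec)])
              count = 0
              for i in range(n_vec - 1):
                  dist = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)   # Chebyshev distance
                  count += int(np.sum(dist <= tol))
              return count
          b = count_matches(m)        # template matches of length m
          a = count_matches(m + 1)    # template matches of length m + 1
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      # e.g. a slow oscillation oversampled at 100 Hz, analyzed with a coarse lag
      t = np.arange(0, 20, 0.01)
      print(modified_sample_entropy(np.sin(2 * np.pi * 0.5 * t), m=2, r=0.2, lag=25))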

  3. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    PubMed

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions. PMID:26098967
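    A minimal sketch of the Beta-Binomial calculation behind such statements (hypothetical Python with an illustrative uniform prior; the paper's exact prior and thresholds are not reproduced):

      from scipy import stats

      def ae_confidence(x, n, threshold, a_prior=1.0, b_prior=1.0):
          """With x of n patients experiencing the AE and a Beta(a, b) prior, the
          posterior for the incidence proportion p is Beta(a + x, b + n - x).
          Returns P(p > threshold) and a central 95% credible interval."""
          post = stats.beta(a_prior + x, b_prior + n - x)
          return post.sf(threshold), post.interval(0.95)

      def min_events_to_exceed(n, threshold, confidence=0.90, a_prior=1.0, b_prior=1.0):
          """Smallest number of AE cases after which P(p > threshold) >= confidence."""
          for x in range(n + 1):
              if stats.beta(a_prior + x, b_prior + n - x).sf(threshold) >= confidence:
                  return x
          return None

      # e.g. 2 events in 25 patients: confidence that p exceeds 5%, and 95% bounds on p
      print(ae_confidence(2, 25, 0.05))
      print(min_events_to_exceed(25, 0.05, 0.90))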

  4. Semi-Supervised Approach to Phase Identification from Combinatorial Sample Diffraction Patterns

    NASA Astrophysics Data System (ADS)

    Bunn, Jonathan Kenneth; Hu, Jianjun; Hattrick-Simpers, Jason R.

    2016-07-01

    Manual attribution of crystallographic phases from high-throughput x-ray diffraction studies is an arduous task, and represents a rate-limiting step in high-throughput exploration of new materials. Here, we demonstrate a semi-supervised machine learning technique, SS-AutoPhase, which uses a two-step approach to identify automatically phases from diffraction data. First, clustering analysis is used to select a representative subset of samples automatically for human analysis. Second, an AdaBoost classifier uses the labeled samples to identify the presence of the different phases in diffraction data. SS-AutoPhase was used to identify the metallographic phases in 278 diffraction patterns from a FeGaPd composition spread sample. The accuracy of SS-AutoPhase was >82.6% for all phases when 15% of the diffraction patterns were used for training. The SS-AutoPhase predicted phase diagram showed excellent agreement with human expert analysis. Furthermore it was able to determine and identify correctly a previously unreported phase.

  5. Developing Coastal Adaptation to Climate Change in the New York City Infrastructure-Shed: Process, Approach, Tools, and Strategies

    NASA Technical Reports Server (NTRS)

    Rosenzweig, Cynthia; Solecki, William D.; Blake, Reginald; Bowman, Malcolm; Faris, Craig; Gornitz, Vivien; Horton, Radley; Jacob, Klaus; LeBlanc, Alice; Leichenko, Robin; Linkin, Megan; Major, David; O'Grady, Megan; Patrick, Lesley; Sussman, Edna; Yohe, Gary; Zimmerman, Rae

    2010-01-01

    While current rates of sea level rise and associated coastal flooding in the New York City region appear to be manageable by stakeholders responsible for communications, energy, transportation, and water infrastructure, projections for sea level rise and associated flooding in the future, especially those associated with rapid ice melt of the Greenland and West Antarctic ice sheets, may be beyond the range of current capacity because an extreme event might cause flooding and inundation beyond the planning and preparedness regimes. This paper describes the comprehensive process, approach, and tools developed by the New York City Panel on Climate Change (NPCC) in conjunction with the region's stakeholders who manage its critical infrastructure, much of which lies near the coast. It presents the adaptation approach and the sea-level rise and storm projections related to coastal risks developed through the stakeholder process. Climate change adaptation planning in New York City is characterized by a multi-jurisdictional stakeholder-scientist process, state-of-the-art scientific projections and mapping, and development of adaptation strategies based on a risk-management approach.

  6. Approaching sign language test construction: adaptation of the German sign language receptive skills test.

    PubMed

    Haug, Tobias

    2011-01-01

    There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired in preschool- and school-aged children (4-8 years old) is urgently needed. Using the British Sign Language Receptive Skills Test, that has been standardized and has sound psychometric properties, as a template for adaptation thus provides a starting point for tests of a sign language that is less documented, such as DGS. This article makes a novel contribution to the field by examining linguistic, cultural, and methodological issues in the process of adapting a test from the source language to the target language. The adapted DGS test has sound psychometric properties and provides the basis for revision prior to standardization. PMID:21208998

  7. Adaptive cluster expansion approach for predicting the structure evolution of graphene oxide

    SciTech Connect

    Li, Xi-Bo; Guo, Pan; Wang, D.; Liu, Li-Min; Zhang, Yongsheng

    2014-12-14

    An adaptive cluster expansion (CE) method is used to explore surface adsorption and growth processes. Unlike the traditional CE method, suitable effective cluster interaction (ECI) parameters are first determined, and the selected fixed number of ECIs is then continually re-optimized to predict the stable configurations as the adatom coverage gradually increases. Compared with the traditional CE method, the efficiency of the adaptive CE method is greatly enhanced. As an application, the adsorption and growth of oxygen atoms on one side of pristine graphene was carefully investigated using this method in combination with first-principles calculations. The calculated results successfully uncover the structural evolution of graphene oxide for different numbers of oxygen adatoms on graphene. The aggregation behavior of the stable configurations is revealed as the oxygen adatom coverage increases. As a targeted method, adaptive CE can also be applied to understand the evolution of other surface adsorption and growth processes.

  8. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    SciTech Connect

    Peron, Stephanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for the flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large scale discrepancies in terms of resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first one generates Adaptive Mesh Refinement (AMR) type grid systems, and the second one generates abutting or minimally overlapping Cartesian grid set. We also introduce an algorithm to control the number of points at each adaptation, that automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture accurately the flow features.

  9. Approaching sign language test construction: adaptation of the German sign language receptive skills test.

    PubMed

    Haug, Tobias

    2011-01-01

    There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired in preschool- and school-aged children (4-8 years old) is urgently needed. Using the British Sign Language Receptive Skills Test, that has been standardized and has sound psychometric properties, as a template for adaptation thus provides a starting point for tests of a sign language that is less documented, such as DGS. This article makes a novel contribution to the field by examining linguistic, cultural, and methodological issues in the process of adapting a test from the source language to the target language. The adapted DGS test has sound psychometric properties and provides the basis for revision prior to standardization.

  10. Evaluation of Online/Offline Image Guidance/Adaptation Approaches for Prostate Cancer Radiation Therapy

    SciTech Connect

    Qin, An; Sun, Ying; Liang, Jian; Yan, Di

    2015-04-01

    Purpose: To evaluate online/offline image-guided/adaptive treatment techniques for prostate cancer radiation therapy with daily cone-beam CT (CBCT) imaging. Methods and Materials: Three treatment techniques were evaluated retrospectively using daily pre- and posttreatment CBCT images on 22 prostate cancer patients. Prostate, seminal vesicles (SV), rectal wall, and bladder were delineated on all CBCT images. For each patient, a pretreatment intensity modulated radiation therapy plan with clinical target volume (CTV) = prostate + SV and planning target volume (PTV) = CTV + 3 mm was created. The 3 treatment techniques were as follows: (1) Daily Correction: The pretreatment intensity modulated radiation therapy plan was delivered after online CBCT imaging and position correction; (2) Online Planning: Daily online inverse plans with 3-mm CTV-to-PTV margin were created using online CBCT images, and delivered; and (3) Hybrid Adaptation: Daily Correction plus an offline adaptive inverse planning performed after the first week of treatment. The adaptive plan was delivered for all remaining 15 fractions. Treatment dose for each technique was constructed using the daily posttreatment CBCT images via deformable image registration. Evaluation was performed using treatment dose distribution in target and critical organs. Results: Treatment equivalent uniform dose (EUD) for the CTV was within [85.6%, 100.8%] of the pretreatment planned target EUD for Daily Correction; [98.7%, 103.0%] for Online Planning; and [99.2%, 103.4%] for Hybrid Adaptation. Eighteen percent of the 22 patients in Daily Correction had a target dose deficiency >5%. For rectal wall, the mean ± SD of the normalized EUD was 102.6% ± 2.7% for Daily Correction, 99.9% ± 2.5% for Online Planning, and 100.6% ± 2.1% for Hybrid Adaptation. The mean ± SD of the normalized bladder EUD was 108.7% ± 8.2% for Daily Correction, 92.7% ± 8.6% for Online Planning, and 89.4% ± 10.8% for Hybrid Adaptation
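    For reference, the generalized equivalent uniform dose commonly used in such evaluations is computed from the voxel doses d_i as below; this is the standard generalized EUD form, and the paper's specific EUD parameters are not stated here.

      \mathrm{EUD} \;=\; \left( \frac{1}{N} \sum_{i=1}^{N} d_i^{\,a} \right)^{1/a}

    Here N is the number of voxels in the structure and the exponent a is organ specific: large positive values emphasize hot spots in serial organs such as the rectal wall, while negative values emphasize cold spots in targets.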

  11. Modeling and control of nonlinear systems using novel fuzzy wavelet networks: The output adaptive control approach

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyyed Hossein; Noroozi, Navid; Safavi, Ali Akbar; Ebadat, Afrooz

    2011-09-01

    This paper proposes an observer based self-structuring robust adaptive fuzzy wave-net (FWN) controller for a class of nonlinear uncertain multi-input multi-output systems. The control signal is comprised of two parts. The first part arises from an adaptive fuzzy wave-net based controller that approximates the system structural uncertainties. The second part comes from a robust H∞ based controller that is used to attenuate the effect of function approximation error and disturbance. Moreover, a new self structuring algorithm is proposed to determine the location of basis functions. Simulation results are provided for a two DOF robot to show the effectiveness of the proposed method.

  12. An adaptive approach to the dynamic allocation of buffer storage. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Crooke, S. C.

    1970-01-01

    Several strategies for the dynamic allocation of buffer storage are simulated and compared. The basic algorithms investigated, using actual statistics observed in the Univac 1108 EXEC 8 System, include the buddy method and the first-fit method. Modifications are made to the basic methods in an effort to improve and to measure allocation performance. A simulation model of an adaptive strategy is developed that permits interchanging the two methods, the buddy and first-fit methods, with some modifications. Using an adaptive strategy, each method may be employed in the statistical environment in which its performance is superior to that of the other.
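    As a toy illustration of one of the two allocation disciplines (hypothetical Python, unrelated to the EXEC 8 implementation), a first-fit allocator keeps a sorted free list, takes the first hole large enough, and coalesces adjacent holes on release; the buddy method would instead round requests up to powers of two and split/merge buddy blocks.

      class FirstFitAllocator:
          """Toy first-fit allocator over one contiguous buffer pool."""

          def __init__(self, pool_size):
              self.free = [(0, pool_size)]          # free holes as (offset, size), sorted by offset
              self.allocated = {}                   # offset -> size

          def alloc(self, size):
              for i, (off, hole) in enumerate(self.free):
                  if hole >= size:                  # first hole large enough wins
                      remainder = hole - size
                      if remainder:
                          self.free[i] = (off + size, remainder)
                      else:
                          del self.free[i]
                      self.allocated[off] = size
                      return off
              return None                           # no hole large enough

          def free_block(self, off):
              size = self.allocated.pop(off)
              self.free.append((off, size))
              self.free.sort()
              merged = []
              for o, s in self.free:                # coalesce adjacent holes
                  if merged and merged[-1][0] + merged[-1][1] == o:
                      merged[-1] = (merged[-1][0], merged[-1][1] + s)
                  else:
                      merged.append((o, s))
              self.free = merged

      # e.g.
      pool = FirstFitAllocator(1024)
      a = pool.alloc(100); b = pool.alloc(200); pool.free_block(a)
      print(pool.free)   # -> [(0, 100), (300, 724)]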

  13. A PCR-based approach to assess genomic DNA contamination in RNA: Application to rat RNA samples.

    PubMed

    Padhi, Bhaja K; Singh, Manjeet; Huang, Nicholas; Pelletier, Guillaume

    2016-02-01

    Genomic DNA (gDNA) contamination of RNA samples can lead to inaccurate measurement of gene expression by reverse transcription quantitative real-time PCR (RT-qPCR). We describe an easily adoptable PCR-based method where gDNA contamination in RNA samples is assessed by comparing the amplification of intronic and exonic sequences from a housekeeping gene. Although this alternative assay was developed for rat RNA samples, it could be easily adapted to other species. As a proof of concept, we assessed the effects of detectable gDNA contamination levels on the expression of a few genes that illustrate the importance of RNA quality in acquiring reliable data.

  14. Determination of avermectins: a QuEChERS approach to the analysis of food samples.

    PubMed

    Rúbies, A; Antkowiak, S; Granados, M; Companyó, R; Centrich, F

    2015-08-15

    We present a simple method for extracting avermectins from meat, based on a QuEChERS approach followed by liquid chromatography (LC) coupled to triple quadrupole (QqQ) tandem mass spectrometry (MS/MS). The compounds considered are ivermectin, abamectin, emamectin, eprinomectin, doramectin and moxidectin. The new method has been fully validated according to the requirements of European Decision 657/2002/CE (EU, 2002). The method is suitable for the analysis of avermectins at concentrations as low as 2.5 μg kg(-1), and allows high sample throughput. In addition, the detection of avermectins by high-resolution mass spectrometry using a quadrupole-Orbitrap (Q-Orbitrap) hybrid instrument has been explored, and the target Selected Ion Monitoring data-dependent MS/MS (t-SIM-dd MS/MS) mode has been found to provide excellent performance for residue determination of target analytes.

  15. Determination of avermectins: a QuEChERS approach to the analysis of food samples.

    PubMed

    Rúbies, A; Antkowiak, S; Granados, M; Companyó, R; Centrich, F

    2015-08-15

    We present a simple method for extracting avermectins from meat, based on a QuEChERS approach followed by liquid chromatography (LC) coupled to triple quadrupole (QqQ) tandem mass spectrometry (MS/MS). The compounds considered are ivermectin, abamectin, emamectin, eprinomectin, doramectin and moxidectin. The new method has been fully validated according to the requirements of European Decision 657/2002/CE (EU, 2002). The method is suitable for the analysis of avermectins at concentrations as low as 2.5 μg kg(-1), and allows high sample throughput. In addition, the detection of avermectins by high-resolution mass spectrometry using a quadrupole-Orbitrap (Q-Orbitrap) hybrid instrument has been explored, and the target Selected Ion Monitoring data-dependent MS/MS (t-SIM-dd MS/MS) mode has been found to provide excellent performance for residue determination of target analytes. PMID:25794721

  16. Direct detection of Theileria annulata in bovine blood samples using standard and isothermal DNA amplification approaches.

    PubMed

    Gomes, Jacinto; Inácio, João

    2015-01-01

    Tropical theileriosis is a tick-borne disease responsible for important health problems in cattle, caused by the hemoprotozoan Theileria annulata. Traditionally, detection of Theileria pathogens in infected animals requires the microscopic examination of stained-blood smears and serological methods. Molecular diagnostic assays have been developed for the detection of Theileria parasites, including PCR-based and reverse line blotting approaches, but these methods usually demand qualified personnel, complex instrumentation, and expensive materials. Loop-mediated isothermal amplification (LAMP) can facilitate the design of molecular assays independent of the use of sophisticated equipment. In this chapter we describe the application of two molecular assays for the direct detection of T. annulata in bovine blood samples, based in real-time PCR and LAMP, both targeting the Tams1-encoding gene of this parasite.

  17. Compact range reflector analysis using the plane wave spectrum approach with an adjustable sampling rate

    NASA Astrophysics Data System (ADS)

    McKay, James P.; Rahmat-Samii, Yahya

    1991-06-01

    An improved method for determining the test zone field of compact range reflectors is presented. The plane wave spectrum (PWS) approach is used to obtain the test zone field from knowledge of the reflector aperture field distribution. The method is particularly well suited to the analysis of reflectors with a linearly serrated rim for reduced edge diffraction. Computation of the PWS of the reflector aperture field is facilitated by a closed-form expression for the Fourier transform of a polygonal window function. Inverse transformation in the test zone region is accomplished using a fast Fourier transform (FFT) algorithm with a properly adjusted sampling rate (which is a function of both the reflector size and the distance from the reflector). The method is validated by comparison with results obtained using surface current and aperture field integration techniques. The performance of several serrated reflectors is evaluated in order to observe the effects of edge diffraction on the test zone fields.
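    A generic plane-wave-spectrum (angular-spectrum) propagation sketch is shown below (hypothetical Python, not the authors' serrated-reflector code); zero padding of the FFT stands in for the adjustable spectral sampling rate mentioned in the abstract, and the result is cropped back to the original window as a simplification.

      import numpy as np

      def pws_propagate(aperture_field, dx, wavelength, z, pad_factor=2):
          """Propagate a sampled 2-D aperture field a distance z with the plane wave
          spectrum method: FFT to the spectral domain, multiply by the plane-wave
          phase factor exp(i*kz*z), and inverse FFT. Evanescent components (kz
          imaginary) decay automatically through the complex square root."""
          n0 = aperture_field.shape[0]
          n = pad_factor * n0                              # zero padding refines spectral sampling
          k = 2 * np.pi / wavelength
          field = np.zeros((n, n), dtype=complex)
          field[:n0, :n0] = aperture_field
          spectrum = np.fft.fft2(field)
          fx = np.fft.fftfreq(n, d=dx)                     # spatial frequencies (cycles per unit length)
          kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx, indexing="ij")
          kz = np.sqrt((k**2 - kx**2 - ky**2).astype(complex))
          return np.fft.ifft2(spectrum * np.exp(1j * kz * z))[:n0, :n0]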

  18. A Novel Quantitative Approach for Eliminating Sample-To-Sample Variation Using a Hue Saturation Value Analysis Program

    PubMed Central

    McMullen, Eri; Figueiredo, Jose Luiz; Aikawa, Masanori; Aikawa, Elena

    2014-01-01

    Objectives: As computing technology and image analysis techniques have advanced, the practice of histology has grown from a purely qualitative method to one that is highly quantified. Current image analysis software is imprecise and prone to wide variation due to common artifacts and histological limitations. In order to minimize the impact of these artifacts, a more robust method for quantitative image analysis is required. Methods and Results: Here we present a novel image analysis software, based on the hue saturation value color space, to be applied to a wide variety of histological stains and tissue types. By using hue, saturation, and value variables instead of the more common red, green, and blue variables, our software offers some distinct advantages over other commercially available programs. We tested the program by analyzing several common histological stains, performed on tissue sections that ranged from 4 µm to 10 µm in thickness, using both a red green blue color space and a hue saturation value color space. Conclusion: We demonstrated that our new software is a simple method for quantitative analysis of histological sections, which is highly robust to variations in section thickness, sectioning artifacts, and stain quality, eliminating sample-to-sample variation. PMID:24595280
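    A minimal sketch of HSV-based stain quantification (hypothetical Python; the hue/saturation/value window and the synthetic image are illustrative and unrelated to the authors' software):

      import numpy as np
      from matplotlib.colors import rgb_to_hsv

      def stained_fraction(rgb_image, hue_range=(0.85, 1.0), sat_min=0.2, val_max=0.9):
          """Fraction of pixels whose hue, saturation, and value fall inside a window,
          a simple HSV-based stain measure. `rgb_image` is an (H, W, 3) array with
          values in [0, 1]; the window shown is only an example."""
          hsv = rgb_to_hsv(rgb_image)
          h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
          mask = (h >= hue_range[0]) & (h <= hue_range[1]) & (s >= sat_min) & (v <= val_max)
          return mask.mean()

      # e.g. synthetic image: left half reddish "stain", right half pale background
      img = np.ones((100, 100, 3))
      img[:, :50] = [0.8, 0.1, 0.2]
      print(stained_fraction(img))   # ~0.5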

  19. A Multiple Objective Test Assembly Approach for Exposure Control Problems in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.; Verschoor, Angela J.; Eggen, Theo J. H. M.

    2010-01-01

    Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems might result in item compromise, or point at a waste of investments. The exposure control problem can be viewed as a test assembly problem with multiple objectives. Information in the test has…

  20. Constructive, Self-Regulated, Situated, and Collaborative Learning: An Approach for the Acquisition of Adaptive Competence

    ERIC Educational Resources Information Center

    de Corte, Erik

    2012-01-01

    In today's learning society, education must focus on fostering adaptive competence (AC) defined as the ability to apply knowledge and skills flexibly in different contexts. In this article, four major types of learning are discussed--constructive, self-regulated, situated, and collaborative--in relation to what students must learn in order to…

  1. Values and Subjective Mental Health in America: A Social Adaptation Approach.

    ERIC Educational Resources Information Center

    Kahle, Lynn R.; And Others

    Although surveys of mental health involve some controversy, a significant relationship between values and mental health appears to exist. To study the adaption of individuals with alternative values to their psychological worlds, over 2,000 adults identified their most important values. Alcohol abuse, drug abuse, dizziness, anxiety, and general…

  2. An adaptive approach to computing the spectrum and mean frequency of Doppler signals.

    PubMed

    Herment, A; Giovannelli, J F

    1995-01-01

    Modern ultrasound Doppler systems are facing the problem of processing increasingly shorter data sets. Spectral analysis of the strongly nonstationary Doppler signal needs to shorten the analysis window while maintaining a low-variance, high-resolution spectrum. Color flow imaging requires estimation of the Doppler mean frequency from even shorter Doppler data sets to obtain both a high frame rate and high spatial resolution. We reconsider these two estimation problems in light of adaptive methods. A regularized parametric method for spectral analysis as well as an adapted mean frequency estimator are developed. The choice of the adaptive criterion is then addressed, and adaptive spectral and mean frequency estimators are developed to minimize the mean square estimation error in the presence of noise. Two suboptimal spectral and mean-frequency estimators are then derived for real-time applications. Finally, using Doppler data recorded in vitro, their performance is compared to that of both the FFT-based periodogram and AR parametric spectral analysis for the spectral estimator, and to both the correlation-angle and Kristoffersen [8] estimators for the mean frequency estimator. PMID:7638930
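    For context, the conventional correlation-angle (lag-one autocorrelation) mean-frequency estimator used as a baseline in the comparison can be sketched as follows (hypothetical Python; the paper's adaptive estimators are not reproduced):

      import numpy as np

      def mean_frequency_correlation_angle(iq, prf):
          """Correlation-angle estimate of the Doppler mean frequency from complex
          IQ samples: the phase of the lag-one autocorrelation, scaled by the PRF."""
          iq = np.asarray(iq, dtype=complex)
          r1 = np.mean(iq[1:] * np.conj(iq[:-1]))     # lag-one autocorrelation
          return prf * np.angle(r1) / (2 * np.pi)

      # e.g. noisy complex exponential at 300 Hz sampled at PRF = 4 kHz
      prf, f0, n = 4000.0, 300.0, 64
      t = np.arange(n) / prf
      rng = np.random.default_rng(0)
      iq = np.exp(2j * np.pi * f0 * t) + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
      print(mean_frequency_correlation_angle(iq, prf))   # ~300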

  3. Difference, Adapted Physical Activity and Human Development: Potential Contribution of Capabilities Approach

    ERIC Educational Resources Information Center

    Silva, Carla Filomena; Howe, P. David

    2012-01-01

    This paper is a call to Adapted Physical Activity (APA) professionals to increase the reflexive nature of their practice. Drawing upon Foucault's concept of governmentality (1977), APA action may work against its own publicized goals of empowerment and self-determination. To highlight these inconsistencies, we will draw upon historical and social…

  4. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    ERIC Educational Resources Information Center

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  5. Peers as Resources for Learning: A Situated Learning Approach to Adapted Physical Activity in Rehabilitation

    ERIC Educational Resources Information Center

    Standal, Oyvind F.; Jespersen, Ejgil

    2008-01-01

    The purpose of this study was to investigate the learning that takes place when people with disabilities interact in a rehabilitation context. Data were generated through in-depth interviews and close observations in a two-and-one-half-week-long rehabilitation program, where the participants learned both wheelchair skills and adapted physical…

  6. Who Needs Contingency Approaches and Guidelines in Order to Adapt Vague Management Ideas?

    ERIC Educational Resources Information Center

    Ortenblad, Anders

    2010-01-01

    The purpose of this conceptual paper is to question the assumption that the general idea of the learning organisation needs to be adapted to the specific context before it can be put into practical use. It is suggested that there are lots of ways to use management ideas, other than implementing them in the practice of organisations. It is further…

  7. Can Approaches to Research in Art and Design Be Beneficially Adapted for Research into Higher Education?

    ERIC Educational Resources Information Center

    Trowler, Paul

    2013-01-01

    This paper examines the research practices in Art and Design that are distinctively different from those common in research into higher education outside those fields. It considers whether and what benefit could be derived from their adaptation by the latter. The paper also examines the factors that are conducive and obstructive to adaptive…

  8. An Approach to Evaluating Adolescent Adaptive Processes: Validity of an Interview-Based Measure.

    ERIC Educational Resources Information Center

    Beardslee, William R.; And Others

    1986-01-01

    An initial exploration of the validity of 15 scales designed to assess adaptive ego processes in adolescence is presented. Diabetic youngsters, psychiatric patients, and high school students with no illness are compared using the scales. Correlations are found between the scales and a separate, conceptually related measure of ego development.…

  9. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function

    PubMed Central

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A.; Lu, Zhong-Lin; Myung, Jay I.

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias. PMID:27105061

  10. Testing Set-Point Theory in a Swiss National Sample: Reaction and Adaptation to Major Life Events.

    PubMed

    Anusic, Ivana; Yap, Stevie C Y; Lucas, Richard E

    2014-12-01

    Set-point theory posits that individuals react to the experience of major life events, but quickly adapt back to pre-event baseline levels of subjective well-being in the years following the event. A large, nationally representative panel study of Swiss households was used to examine set-point theory by investigating the extent of adaptation following the experience of marriage, childbirth, widowhood, unemployment, and disability. Our results demonstrate that major life events are associated with marked change in life satisfaction and, for some events (e.g., marriage, disability), these changes are relatively long lasting even when accounting for normative, age-related change.

  11. Approaching Ultimate Intrinsic SNR in a Uniform Spherical Sample with Finite Arrays of Loop Coils

    PubMed Central

    Vaidya, Manushka V.; Sodickson, Daniel K.; Lattanzi, Riccardo

    2015-01-01

    We investigated to what degree and at what rate the ultimate intrinsic (UI) signal-to-noise ratio (SNR) may be approached using finite radiofrequency detector arrays. We used full-wave electromagnetic field simulations based on dyadic Green’s functions to compare the SNR of arrays of loops surrounding a uniform sphere with the ultimate intrinsic SNR (UISNR), for increasing numbers of elements over a range of magnetic field strengths, voxel positions, sphere sizes, and acceleration factors. We evaluated the effect of coil conductor losses and the performance of a variety of distinct geometrical arrangements such as “helmet” and “open-pole” configurations in multiple imaging planes. Our results indicate that UISNR at the center is rapidly approached with encircling arrays and performance is substantially lower near the surface, where a quadrature detection configuration tailored to voxel position is optimal. Coil noise is negligible at high field, where sample noise dominates. Central SNR for practical array configurations such as the helmet is similar to that of close-packed arrangements. The observed trends can provide physical insights to improve coil design. PMID:26097442
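
    As a rough illustration of the comparison the abstract describes, the sketch below computes the SNR of an optimally combined (matched-filter) loop array at a single voxel, sqrt(sᴴ Ψ⁻¹ s), and expresses it as a fraction of a reference "ultimate" value. The sensitivities, noise covariance, and reference value are random or made-up placeholders, not the dyadic Green's function fields used in the study.

    ```python
    # Sketch of array-vs-ultimate SNR comparison at one voxel. The optimal
    # (matched-filter) combination of N coil signals gives SNR proportional to
    # sqrt(s^H Psi^-1 s); all inputs below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_coils = 32
    s = rng.standard_normal(n_coils) + 1j * rng.standard_normal(n_coils)    # coil sensitivities at the voxel
    A = rng.standard_normal((n_coils, n_coils)) + 1j * rng.standard_normal((n_coils, n_coils))
    psi = A @ A.conj().T + n_coils * np.eye(n_coils)                         # Hermitian positive-definite noise covariance

    snr_array = np.sqrt(np.real(s.conj() @ np.linalg.solve(psi, s)))         # optimal-combination SNR (arbitrary units)
    uisnr = 1.5 * snr_array                                                  # placeholder "ultimate" reference value
    print(f"fraction of ultimate SNR captured: {snr_array / uisnr:.2f}")
    ```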

  12. A novel four-dimensional analytical approach for analysis of complex samples.

    PubMed

    Stephan, Susanne; Jakob, Cornelia; Hippler, Jörg; Schmitz, Oliver J

    2016-05-01

    A two-dimensional LC (2D-LC) method, based on the work of Erni and Frei in 1978, was developed and coupled to an ion mobility-high-resolution mass spectrometer (IM-MS), which enabled the separation of complex samples in four dimensions (2D-LC, ion mobility spectrometry (IMS), and mass spectrometry (MS)). This approach works as a continuous multiheart-cutting LC system: using a long modulation time of 4 min allows the complete transfer of most of the first-dimension peaks to the second-dimension column without fractionation, in contrast to comprehensive two-dimensional liquid chromatography. Hence, each compound delivers only one peak in the second dimension, which simplifies the data handling even when ion mobility spectrometry as a third and mass spectrometry as a fourth dimension are introduced. The analysis of a plant extract from Ginkgo biloba shows the separation power of this four-dimensional separation method, with a calculated total peak capacity of more than 8700. Furthermore, the advantage of ion mobility for characterizing unknown compounds by their collision cross section (CCS) and accurate mass in a non-target approach is shown for different matrices such as plant extracts and coffee. Graphical abstract: Principle of the four-dimensional separation. PMID:27038056
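
    The total peak capacity cited above is, in the usual first approximation, the product of the per-dimension peak capacities (assuming the dimensions are orthogonal). A back-of-the-envelope sketch, with made-up per-dimension values chosen only to land in the same ballpark as the reported figure:

    ```python
    # Peak capacity of a multidimensional separation, estimated as the product of
    # the per-dimension capacities (orthogonality assumed). The numbers below are
    # illustrative placeholders, not values reported in the paper.
    dimensions = {
        "LC, first dimension": 25,
        "LC, second dimension": 50,
        "ion mobility": 7,
    }
    total = 1
    for name, capacity in dimensions.items():
        total *= capacity
    print(f"estimated total peak capacity: {total}")    # 25 * 50 * 7 = 8750
    ```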

  13. Assessment of Sampling Approaches for Remote Sensing Image Classification in the Iranian Playa Margins

    NASA Astrophysics Data System (ADS)

    Kazem Alavipanah, Seyed

    There are several problems in soil salinity studies based upon remotely sensed data: 1) the spectral world is full of ambiguity, so soil reflectance cannot be attributed to a single soil property such as salinity; 2) soil surface conditions, as a function of time and space, are a complex phenomenon; 3) vegetation, with its dynamic biological nature, may create some problems in the study of soil salinity. Given these problems, the first question is how to overcome or minimise them. In this study we hypothesised that different sources of data, a well-established sampling plan and an optimum approach could be useful. In order to choose representative training sites in the Iranian playa margins, to define the spectral and informational classes, and to overcome some problems encountered in the variation within the field, the following steps were taken: 1) Principal Component Analysis (PCA), in order a) to determine the most important variables and b) to identify the most informative components of the Landsat satellite images; 2) consideration and interpretation of photomorphic units (PMU); 3) study of salt accumulation and salt distribution in the soil profile; 4) use of several forms of field data, such as geologic, geomorphologic and soil information; 5) confirmation of field data and land cover types with farmers and the members of the team. These steps led us to suitable approaches with high and acceptable image classification accuracy and image interpretation. KEY WORDS: Photomorphic Unit, Principal Component Analysis, Soil Salinity, Field Work, Remote Sensing
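
    A minimal sketch of the PCA step mentioned in the abstract: stack the multiband pixels into a matrix, diagonalize the band covariance, and rank the components by explained variance to see which ones carry most of the scene information. The synthetic six-band image is a stand-in for the real Landsat data.

    ```python
    # PCA of multiband pixel data: find the components that explain most of the
    # spectral variance. The random 6-band "image" is a placeholder for a real scene.
    import numpy as np

    rng = np.random.default_rng(1)
    pixels = rng.standard_normal((10_000, 6))          # rows = pixels, columns = spectral bands

    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]
    explained = eigvals[order] / eigvals.sum()

    print("variance explained per component:", np.round(explained, 3))
    scores = centered @ eigvecs[:, order[:3]]          # projection onto the top three components
    ```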

  14. Virtual MEG Helmet: Computer Simulation of an Approach to Neuromagnetic Field Sampling.

    PubMed

    Medvedovsky, Mordekhay; Nenonen, Jukka; Koptelova, Alexandra; Butorina, Anna; Paetau, Ritva; Mäkelä, Jyrki P; Ahonen, Antti; Simola, Juha; Gazit, Tomer; Taulu, Samu

    2016-03-01

    Head movements during an MEG recording are commonly considered an obstacle. In this computer simulation study, we introduce an approach, the virtual MEG helmet (VMH), which employs head movements to improve data quality. With a VMH, a denser MEG helmet is constructed by adding new sensors corresponding to different head positions. Based on Shannon's theory of communication, we calculated the total information as a figure of merit for comparing the actual 306-sensor Elekta Neuromag helmet to several types of VMH. As source models, we used simulated randomly distributed source currents (RDSC), simulated auditory evoked fields (AEFs), and simulated somatosensory evoked fields (SEFs). Using the RDSC model with 360 simulated recorded events, the total information (bits/sample) was 989 for the most informative single head position and up to 1272 for the VMH (an addition of 28.6%). Using simulated AEFs, the additional contribution of a VMH was 12.6%, and using simulated SEFs only 1.1%. For distributed and bilateral sources, a VMH can provide a more informative sampling of the neuromagnetic field during the same recording time than measuring the MEG from a single head position. The VMH can, in some situations, improve source localization of neuromagnetic fields related to normal and pathological brain activity. This should be investigated further employing real MEG recordings.
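
    One common way to compute a Shannon-style "total information" figure of merit for a sensor array is half the sum of log2(1 + λ), where the λ are eigenvalues of the signal covariance whitened by the noise covariance. The sketch below uses that convention with random placeholder covariances; the paper's exact formulation may differ.

    ```python
    # Shannon total information (bits/sample) for a sensor array, computed as
    # 0.5 * sum(log2(1 + SNR eigenvalues)). Covariances are random placeholders,
    # not simulated MEG fields, and the convention itself is an assumption.
    import numpy as np

    rng = np.random.default_rng(2)
    n_sensors = 306
    L = rng.standard_normal((n_sensors, 50))
    signal_cov = L @ L.T                                # rank-50 "source" covariance
    # Noise covariance taken as the identity, so whitening is trivial here.
    snr_eigvals = np.clip(np.linalg.eigvalsh(signal_cov), 0, None)

    total_information = 0.5 * np.sum(np.log2(1.0 + snr_eigvals))
    print(f"total information: {total_information:.1f} bits/sample")
    ```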

  15. Identification with mainstream culture and preference for alternative alcohol treatment approaches in a community sample.

    PubMed

    Dillworth, Tiara M; Kaysen, Debra; Montoya, Heidi D; Larimer, Mary E

    2009-03-01

    Although various treatment approaches are available for alcohol problems, less than 25% of individuals with alcohol use disorders obtain treatment. The purpose of this study is to evaluate interest in attending alternative alcohol treatments, such as meditation and acupuncture, compared to Alcoholics Anonymous (AA). A community sample of 156 adult participants concerned about their drinking was recruited through flyers and newspaper advertisements to complete a Web-based survey assessing identification with mainstream culture, sexual identity, and likelihood to attend alternative alcohol treatments. Participants reported a higher likelihood of attending alternative treatments as compared to AA, and lesbian, gay, and bisexual participants (28.2% of the sample) were more likely to attend alternative treatments than heterosexual participants. A series of regression analyses was conducted to test whether the relationship between sexual identity and likelihood to attend alternative treatments was mediated by identification with mainstream culture. Participants who were less strongly identified with mainstream culture, regardless of sexual identity, reported a higher likelihood of attending alternative treatments. These findings highlight that, for certain subgroups of the population, alternative treatments for alcohol misuse are appealing, and suggest the need for future research testing the efficacy of alternative treatments for alcohol problems.
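
    The mediation test described above is commonly run as a short series of regressions (total effect, predictor on mediator, then outcome on predictor plus mediator). The sketch below shows that pattern on synthetic data; the variable names and effect sizes are placeholders, not the authors' analysis.

    ```python
    # Classic regression-based mediation check on synthetic data: x -> y, x -> m,
    # then x + m -> y. Variables are placeholders for the survey measures.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 156
    x = rng.integers(0, 2, n).astype(float)             # predictor (e.g., group membership)
    m = 0.5 * x + rng.standard_normal(n)                 # mediator
    y = 0.7 * m + 0.1 * x + rng.standard_normal(n)       # outcome

    total_effect = sm.OLS(y, sm.add_constant(x)).fit()                               # step 1: x -> y
    x_to_mediator = sm.OLS(m, sm.add_constant(x)).fit()                              # step 2: x -> m
    direct_and_mediated = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()  # step 3: x + m -> y

    print(total_effect.params, x_to_mediator.params, direct_and_mediated.params)
    ```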

  16. Matrix compatible solid phase microextraction coating, a greener approach to sample preparation in vegetable matrices.

    PubMed

    Naccarato, Attilio; Pawliszyn, Janusz

    2016-09-01

    This work proposes the novel PDMS/DVB/PDMS fiber as a greener strategy for direct immersion solid phase microextraction (SPME) analysis of vegetables. SPME is an established sample preparation approach that has not yet been adequately explored for food analysis in direct immersion mode, owing to the limitations of the available commercial coatings. The robustness and endurance of this new coating were investigated by direct immersion extractions in raw blended vegetables without any further sample preparation steps. The PDMS/DVB/PDMS coating exhibited superior features related to the capability of the external PDMS layer to protect the commercial coating, and showed improvements in extraction capability and in the cleanability of the coating surface. Besides contributing to the recognition of the advantages of this new fiber concept before commercialization, the outcomes of this work confirm the improved matrix compatibility of the PDMS-modified fiber and open new prospects for the development of greener, high-throughput analytical methods for food analysis using solid phase microextraction.

  17. A new insert sample approach to paper spray mass spectrometry: a paper substrate with paraffin barriers.

    PubMed

    Colletes, T C; Garcia, P T; Campanha, R B; Abdelnur, P V; Romão, W; Coltro, W K T; Vaz, B G

    2016-03-01

    The analytical performance of paper spray (PS) using a new insert sample approach based on paper with paraffin barriers (PS-PB) is presented. The paraffin barrier is made using a simple, fast and cheap method based on the stamping of paraffin onto a paper surface. Typical operating conditions of paper spray, such as the solvent volume applied to the paper surface and the paper substrate type, are evaluated. A paper substrate with paraffin barriers shows better performance in the analysis of a range of typical analytes when compared to conventional PS-MS using normal paper (PS-NP) and PS-MS using paper with two rounded corners (PS-RC). PS-PB was applied to detect sugars and their inhibitors in sugarcane bagasse liquors from a second-generation ethanol process. Moreover, PS-PB performed well for the quantification of glucose in hydrolysis liquors, with excellent linearity (R² = 0.99) and limits of detection (2.77 mmol L⁻¹) and quantification (9.27 mmol L⁻¹), results that are better than those for PS-NP and PS-RC. PS-PB also compared favourably with the HPLC-UV method for glucose quantification in hydrolysis liquor samples. PMID:26817814
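
    The linearity and detection/quantification limits quoted above are the kind of figures obtained from a calibration curve. A minimal sketch under the common conventions (R² from a least-squares fit, LOD = 3.3σ/slope, LOQ = 10σ/slope), with made-up glucose standards rather than the paper's data:

    ```python
    # Calibration-curve metrics: least-squares fit, R^2, and LOD/LOQ from the
    # residual standard deviation. The standards below are invented values.
    import numpy as np

    conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])            # mmol L-1 (placeholder standards)
    signal = np.array([1.1e4, 2.2e4, 4.3e4, 8.7e4, 1.72e5])   # instrument response (placeholder)

    slope, intercept = np.polyfit(conc, signal, 1)
    pred = slope * conc + intercept
    ss_res = np.sum((signal - pred) ** 2)
    ss_tot = np.sum((signal - signal.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    sigma = np.sqrt(ss_res / (len(conc) - 2))                  # residual standard deviation

    print(f"R^2 = {r2:.4f}")
    print(f"LOD = {3.3 * sigma / slope:.2f} mmol L-1, LOQ = {10 * sigma / slope:.2f} mmol L-1")
    ```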

  18. Sample multiplexing with cysteine-selective approaches: cysDML and cPILOT.

    PubMed

    Gu, Liqing; Evans, Adam R; Robinson, Renã A S

    2015-04-01

    Cysteine-selective proteomics approaches simplify complex protein mixtures and improve the chance of detecting low abundant proteins. It is possible that cysteinyl-peptide/protein enrichment methods could be coupled to isotopic labeling and isobaric tagging methods for quantitative proteomics analyses in as few as two or up to 10 samples, respectively. Here we present two novel cysteine-selective proteomics approaches: cysteine-selective dimethyl labeling (cysDML) and cysteine-selective combined precursor isotopic labeling and isobaric tagging (cPILOT). CysDML is a duplex precursor quantification technique that couples cysteinyl-peptide enrichment with on-resin stable-isotope dimethyl labeling. Cysteine-selective cPILOT is a novel 12-plex workflow based on cysteinyl-peptide enrichment, on-resin stable-isotope dimethyl labeling, and iodoTMT tagging on cysteine residues. To demonstrate the broad applicability of the approaches, we applied cysDML and cPILOT methods to liver tissues from an Alzheimer's disease (AD) mouse model and wild-type (WT) controls. From the cysDML experiments, an average of 850 proteins were identified and 594 were quantified, whereas from the cPILOT experiment, 330 and 151 proteins were identified and quantified, respectively. Overall, 2259 unique total proteins were detected from both cysDML and cPILOT experiments. There is tremendous overlap in the proteins identified and quantified between both experiments, and many proteins have AD/WT fold-change values that are within ~20% error. A total of 65 statistically significant proteins are differentially expressed in the liver proteome of AD mice relative to WT. The performance of cysDML and cPILOT are demonstrated and advantages and limitations of using multiple duplex experiments versus a single 12-plex experiment are highlighted.
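
    The differential-expression step summarized above (per-protein fold changes and a significance test) can be illustrated with a short sketch on synthetic intensities; the group sizes, distributions, and the plain t-test without multiple-testing correction are assumptions for illustration only.

    ```python
    # Per-protein AD vs. WT fold change plus a two-sample t-test, on synthetic
    # intensities standing in for the cysDML/cPILOT quantification results.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n_proteins, n_ad, n_wt = 594, 4, 4
    ad = rng.lognormal(mean=10, sigma=0.3, size=(n_proteins, n_ad))
    wt = rng.lognormal(mean=10, sigma=0.3, size=(n_proteins, n_wt))

    fold_change = ad.mean(axis=1) / wt.mean(axis=1)
    t_stat, p_val = stats.ttest_ind(ad, wt, axis=1)
    print(f"proteins with p < 0.05 (no correction): {np.sum(p_val < 0.05)}")
    ```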

  19. A Markov chain Monte Carlo with Gibbs sampling approach to anisotropic receiver function forward modeling

    NASA Astrophysics Data System (ADS)

    Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.

    2016-10-01

    Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach to the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ~20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe tradeoffs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.
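
    A toy Metropolis-within-Gibbs sampler conveys the flavor of the algorithm described: parameters are visited one at a time, each with its own accept/reject step. The quadratic misfit below replaces the real anisotropic receiver-function forward model so the sketch stays self-contained; it is not the authors' code.

    ```python
    # Metropolis-within-Gibbs on a toy 3-parameter problem. The log-likelihood is
    # a placeholder misfit, not a receiver-function forward model.
    import numpy as np

    rng = np.random.default_rng(5)
    observed = np.array([1.0, -0.5, 0.3])               # stand-in "data"

    def log_likelihood(params):
        return -0.5 * np.sum((params - observed) ** 2)

    n_params, n_iter, step = 3, 5000, 0.3
    current = np.zeros(n_params)
    chain = np.empty((n_iter, n_params))

    for it in range(n_iter):
        for j in range(n_params):                       # Gibbs sweep: update one parameter at a time
            proposal = current.copy()
            proposal[j] += step * rng.standard_normal()
            if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(current):
                current = proposal
        chain[it] = current

    print("posterior means (second half of chain):", chain[n_iter // 2:].mean(axis=0))
    ```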

  20. Sample Multiplexing with Cysteine-Selective Approaches: cysDML and cPILOT

    NASA Astrophysics Data System (ADS)

    Gu, Liqing; Evans, Adam R.; Robinson, Renã A. S.

    2015-04-01

    Cysteine-selective proteomics approaches simplify complex protein mixtures and improve the chance of detecting low abundant proteins. It is possible that cysteinyl-peptide/protein enrichment methods could be coupled to isotopic labeling and isobaric tagging methods for quantitative proteomics analyses in as few as two or up to 10 samples, respectively. Here we present two novel cysteine-selective proteomics approaches: cysteine-selective dimethyl labeling (cysDML) and cysteine-selective combined precursor isotopic labeling and isobaric tagging (cPILOT). CysDML is a duplex precursor quantification technique that couples cysteinyl-peptide enrichment with on-resin stable-isotope dimethyl labeling. Cysteine-selective cPILOT is a novel 12-plex workflow based on cysteinyl-peptide enrichment, on-resin stable-isotope dimethyl labeling, and iodoTMT tagging on cysteine residues. To demonstrate the broad applicability of the approaches, we applied cysDML and cPILOT methods to liver tissues from an Alzheimer's disease (AD) mouse model and wild-type (WT) controls. From the cysDML experiments, an average of 850 proteins were identified and 594 were quantified, whereas from the cPILOT experiment, 330 and 151 proteins were identified and quantified, respectively. Overall, 2259 unique total proteins were detected from both cysDML and cPILOT experiments. There is tremendous overlap in the proteins identified and quantified between both experiments, and many proteins have AD/WT fold-change values that are within ~20% error. A total of 65 statistically significant proteins are differentially expressed in the liver proteome of AD mice relative to WT. The performance of cysDML and cPILOT are demonstrated and advantages and limitations of using multiple duplex experiments versus a single 12-plex experiment are highlighted.