Science.gov

Sample records for adaptive sampling approach

  1. Bayesian approaches for adaptive spatial sampling: an example application.

    SciTech Connect

    Johnson, R. L.; LePoire, D.; Huttenga, A.; Quinn, J.

    2005-05-25

    BAASS (Bayesian Approaches for Adaptive Spatial Sampling) is a set of computational routines developed to support the design and deployment of spatial sampling programs for delineating contamination footprints, such as those that might result from the accidental or intentional environmental release of radionuclides. BAASS presumes the existence of real-time measurement technologies that provide information quickly enough to affect the progress of data collection. This technical memorandum describes the application of BAASS to a simple example, compares the performance of a BAASS-based program with that of a traditional gridded program, and explores the significance of several of the underlying assumptions required by BAASS. These assumptions include the range of spatial autocorrelation present, the value of prior information, the confidence level required for decision making, and "inside-out" versus "outside-in" sampling strategies. In the context of the example, adaptive sampling combined with prior information significantly reduced the number of samples required to delineate the contamination footprint.
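
    The abstract does not give BAASS's internal routines, so the following is only a minimal sketch of the adaptive-spatial-sampling loop it describes: update a spatial belief after each measurement and place the next sample where the footprint boundary is most ambiguous. The ground-truth field, candidate grid, and inverse-distance belief update are hypothetical stand-ins, not the BAASS algorithms.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def true_contamination(x, y):
        """Hypothetical ground truth: a circular contamination footprint."""
        return np.hypot(x - 0.6, y - 0.4) < 0.25

    # Candidate sampling locations on a 20 x 20 grid.
    grid = np.array([(i / 19, j / 19) for i in range(20) for j in range(20)])
    sampled, labels = [], []

    def footprint_probability(points):
        """Crude spatial belief: inverse-distance-weighted average of the
        binary observations so far. Stands in for the Bayesian/geostatistical
        update a real tool would perform."""
        probs = np.full(len(points), 0.5)              # uninformative prior
        if sampled:
            s, z = np.array(sampled), np.array(labels, dtype=float)
            for k, p in enumerate(points):
                w = 1.0 / (np.linalg.norm(s - p, axis=1) + 1e-3) ** 2
                probs[k] = np.sum(w * z) / np.sum(w)
        return probs

    budget = 40
    for _ in range(budget):
        probs = footprint_probability(grid)
        # Adaptive rule: sample where the footprint boundary is most
        # ambiguous (probability closest to 0.5); jitter breaks ties.
        pick = np.argmin(np.abs(probs - 0.5) + 1e-6 * rng.random(len(grid)))
        sampled.append(grid[pick])
        labels.append(bool(true_contamination(*grid[pick])))

    print(f"{sum(labels)} of {budget} adaptive samples fell inside the footprint")
    ```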

  2. Sample entropy-based adaptive wavelet de-noising approach for meteorologic and hydrologic time series

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Singh, Vijay P.; Shang, Xiaosan; Ding, Hao; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi; Wang, Shicheng; Wang, Zhenlong

    2014-07-01

    De-noising meteorologic and hydrologic time series is important to improve the accuracy and reliability of extraction, analysis, simulation, and forecasting. A hybrid approach combining sample entropy and wavelet de-noising, named AWDA-SE (adaptive wavelet de-noising approach using sample entropy), is developed to separate noise from the original series. The AWDA-SE approach adaptively determines the threshold for wavelet analysis. Two kinds of meteorologic and hydrologic data sets are used to illustrate the approach: a synthetic data set and three representative field-measured data sets (the annual rainfall data of Jinan station, and two annual streamflow series from typical stations in China: Yingluoxia station on the Heihe River, which is little affected by human activities, and Lijin station on the Yellow River, which is greatly affected by human activities). The AWDA-SE approach is compared with three conventional de-noising methods: the fixed-form threshold algorithm, the Stein unbiased risk estimation algorithm, and the minimax algorithm. Results show that the AWDA-SE approach effectively separates the signal and noise of the data sets and is found to be better than the conventional methods. Measures of assessment standards show that the developed approach can be employed to investigate noisy and short time series and can also be applied to other areas.
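
    As a rough illustration of the AWDA-SE idea (sample entropy steering the wavelet threshold), here is a hedged sketch using PyWavelets. The entropy-to-threshold mapping (the `scale` heuristic) and the function names are invented for illustration, not the paper's rule; only the overall structure (estimate how noise-like the series is, set the threshold adaptively, soft-threshold the detail coefficients) follows the abstract.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def sample_entropy(x, m=2, r=None):
        """Sample entropy SampEn(m, r) of a 1-D series (Richman & Moorman)."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def count_matches(mm):
            emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            return (np.sum(d <= r) - len(emb)) / 2   # exclude self-matches
        B, A = count_matches(m), count_matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    def entropy_guided_denoise(series, wavelet="db4", level=3):
        """Sketch: scale a universal threshold by how noise-like
        (high-entropy) the series is. The scaling rule is a stand-in
        heuristic, not the AWDA-SE formula."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate
        universal = sigma * np.sqrt(2 * np.log(len(series)))
        scale = min(sample_entropy(series) / np.log(2), 2.0)
        thr = universal * scale
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(series)]

    t = np.linspace(0, 4 * np.pi, 512)
    noisy = np.sin(t) + 0.4 * np.random.default_rng(1).standard_normal(t.size)
    clean = entropy_guided_denoise(noisy)
    ```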

  3. A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification

    SciTech Connect

    Li, Weixuan; Zhang, Dongxiao; Lin, Guang

    2015-02-25

    A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may incur unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. Multimodal posterior of the history matching
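
    The abstract names the three ingredients (Gaussian-process surrogate, Gaussian-mixture proposal, iterative refinement and re-sampling), which is enough for a toy sketch. The 1-D "simulator", the flat prior, and the refinement criterion below are assumptions for illustration; the paper's reservoir models are far higher-dimensional.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)

    def simulator(theta):
        """Stand-in for an expensive reservoir model (bimodal in theta)."""
        return np.sin(3 * theta) + 0.1 * theta ** 2

    obs, noise = 0.8, 0.1

    def log_post(sim_out):
        # Gaussian likelihood, flat prior assumed for this toy.
        return -0.5 * ((sim_out - obs) / noise) ** 2

    X = rng.uniform(-3, 3, 20).reshape(-1, 1)   # initial surrogate design
    y = simulator(X.ravel())
    for outer in range(3):                      # adaptive refinement rounds
        gp = GaussianProcessRegressor().fit(X, y)
        q = GaussianMixture(n_components=2).fit(X)   # flexible GMM proposal
        chain, x = [], 0.0
        lp = log_post(gp.predict([[x]])[0])
        for _ in range(2000):
            xp = q.sample(1)[0][0, 0]
            lpp = log_post(gp.predict([[xp]])[0])
            # Independence MH correction with the GMM proposal density.
            a = lpp - lp + q.score_samples([[x]])[0] - q.score_samples([[xp]])[0]
            if np.log(rng.random()) < a:
                x, lp = xp, lpp
            chain.append(x)
        # Refine: run the true simulator where the surrogate is least certain.
        cand = np.array(chain[::100]).reshape(-1, 1)
        _, std = gp.predict(cand, return_std=True)
        xnew = cand[np.argmax(std)]
        X = np.vstack([X, [xnew]])
        y = np.append(y, simulator(xnew[0]))
    ```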

  4. Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach.

    PubMed

    Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei

    2015-10-01

    Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on "one-time" release of a static dataset and do not adequately address the increasing need to release series of dynamic datasets in real time. A straightforward application of existing histogram methods to each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods.
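
    A minimal sketch of the DSFT-style release logic described above, assuming an L1 distance and a per-release Laplace mechanism. A faithful implementation would also spend privacy budget on the distance comparison itself and track the cumulative budget across releases; both are omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def dsft_release(snapshots, eps_per_release=0.1, threshold=0.05):
        """Fixed-distance-threshold scheme (DSFT-like sketch): publish a
        fresh noisy histogram only when the current snapshot differs enough
        from the last release; otherwise re-publish the old one."""
        released, last = [], None
        for hist in snapshots:
            hist = np.asarray(hist, dtype=float)
            if last is None or np.abs(hist - last).sum() / hist.sum() > threshold:
                # Sensitivity 1 per bin count => Laplace scale 1/epsilon.
                noisy = hist + rng.laplace(0.0, 1.0 / eps_per_release, hist.shape)
                last = np.clip(noisy, 0, None)   # post-process negatives away
            released.append(last)
        return released

    # Ten snapshots of a slowly drifting 8-bin histogram.
    base = np.array([50, 40, 30, 20, 10, 8, 5, 2], dtype=float)
    snaps = [base + t * np.array([1, 0, -1, 0, 1, 0, 0, 0]) for t in range(10)]
    private = dsft_release(snaps)
    ```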

  5. Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach

    PubMed Central

    Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei

    2016-01-01

    Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need to release series of dynamic datasets in real time. A straightforward application of existing histogram methods to each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795

  6. Adaptive sampling for noisy problems

    SciTech Connect

    Cantu-Paz, E

    2004-03-26

    The usual approach to deal with noise present in many real-world optimization problems is to take an arbitrary number of samples of the objective function and use the sample average as an estimate of the true objective value. The number of samples is typically chosen arbitrarily and remains constant for the entire optimization process. This paper studies an adaptive sampling technique that varies the number of samples based on the uncertainty of deciding between two individuals. Experiments demonstrate the effect of adaptive sampling on the final solution quality reached by a genetic algorithm and the computational cost required to find the solution. The results suggest that the adaptive technique can effectively eliminate the need to set the sample size a priori, but in many cases it requires high computational costs.
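
    One common way to realize the uncertainty-driven rule described here is a sequential test: keep re-sampling the two candidates until a statistical test separates their means or a budget is hit. The sketch below uses a Welch t-test as the decision criterion; the paper's exact rule may differ.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    def noisy_fitness(x):
        """Objective observed through additive noise."""
        return -(x - 1.0) ** 2 + rng.normal(0, 0.5)

    def compare_adaptive(a, b, alpha=0.05, n0=3, n_max=50):
        """Sample both candidates until a Welch t-test separates them or
        the budget runs out; return the apparent winner and the cost."""
        fa = [noisy_fitness(a) for _ in range(n0)]
        fb = [noisy_fitness(b) for _ in range(n0)]
        while len(fa) < n_max:
            t, p = stats.ttest_ind(fa, fb, equal_var=False)
            if p < alpha:                    # confident enough to decide
                break
            fa.append(noisy_fitness(a))      # otherwise buy more evidence
            fb.append(noisy_fitness(b))
        winner = a if np.mean(fa) > np.mean(fb) else b
        return winner, len(fa) + len(fb)

    winner, cost = compare_adaptive(0.9, 1.5)
    print(f"winner={winner}, evaluations used={cost}")
    ```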

  7. Adaptive Sampling Designs.

    ERIC Educational Resources Information Center

    Flournoy, Nancy

    Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
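
    The play-the-winner rule quoted above is simple enough to state in a few lines; this sketch assumes Bernoulli responses with success probabilities unknown to the experimenter.

    ```python
    import random

    random.seed(5)
    p_success = {"A": 0.7, "B": 0.5}   # unknown to the experimenter

    def play_the_winner(n_subjects):
        """Stay on the current treatment after a success; switch after a failure."""
        current = random.choice(["A", "B"])    # random start
        assignments = []
        for _ in range(n_subjects):
            assignments.append(current)
            success = random.random() < p_success[current]
            if not success:
                current = "B" if current == "A" else "A"
        return assignments

    alloc = play_the_winner(100)
    print("fraction assigned to A:", alloc.count("A") / len(alloc))
    ```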

  8. Adaptive robust image registration approach based on adequately sampling polar transform and weighted angular projection function

    NASA Astrophysics Data System (ADS)

    Wei, Zhao; Tao, Feng; Jun, Wang

    2013-10-01

    An efficient, robust, and accurate approach is developed for image registration, which is especially suitable for large-scale change and arbitrary rotation. It is named the adequately sampling polar transform and weighted angular projection function (ASPT-WAPF). The proposed ASPT model overcomes the oversampling problem of conventional log-polar transform. Additionally, the WAPF presented as the feature descriptor is robust to the alteration in the fovea area of an image, and reduces the computational cost of the following registration process. The experimental results show two major advantages of the proposed method. First, it can register images with high accuracy even when the scale factor is up to 10 and the rotation angle is arbitrary. However, the maximum scaling estimated by the state-of-the-art algorithms is 6. Second, our algorithm is more robust to the size of the sampling region while not decreasing the accuracy of the registration.

  9. Adaptive Sampling in Hierarchical Simulation

    SciTech Connect

    Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R

    2007-07-09

    We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.

  10. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.

  11. Adaptive sampling in convergent beams.

    PubMed

    Espinosa, Julián; Mas, David; Pérez, Jorge; Illueca, Carlos

    2008-09-01

    Numerical calculation of convergent Fresnel patterns through fast Fourier transform usually requires a large number of samples to fulfill the Nyquist sampling condition around the focus. From polynomial decomposition of the wavefront it is possible to determine which polynomial orders are the main contributors to the number of samples. This information can be used to properly modify the initial wavefront and relax the Nyquist condition thus giving a more efficient numerical algorithm.

  12. Adaptive spatial sampling of contaminated soil

    SciTech Connect

    Cox, L.A. Jr.

    1999-12-01

    Suppose that a residential neighborhood may have been contaminated by a nearby abandoned hazardous waste site. The suspected contamination consists of elevated soil concentrations of chemicals that are also found in the absence of site-related contamination. How should a risk manager decide which residential properties to sample and which ones to clean? This paper introduces an adaptive spatial sampling approach which uses initial observations to guide subsequent search. Unlike some recent model-based spatial data analysis methods, it does not require any specific statistical model for the spatial distribution of hazards, but instead constructs an increasingly accurate nonparametric approximation to it as sampling proceeds. Possible cost-effective sampling and cleanup decision rules are described by decision parameters such as the number of randomly selected locations used to initialize the process, the number of highest-concentration locations searched around, the number of samples taken at each location, a stopping rule, and a remediation action threshold. These decision parameters are optimized by simulating the performance of each decision rule. The simulation is performed using the data collected so far to impute multiple probable values of unknown soil concentration distributions during each simulation run.
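
    A toy version of the decision-rule family the abstract parameterizes (random initialization, sampling around the highest concentrations, a stopping rule, an action threshold). The field model and the fixed parameter values are hypothetical; in the paper these parameters are optimized by simulation with imputed concentration values.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def concentration(p):
        """Hypothetical field: a hotspot near (0.7, 0.3) over noisy background."""
        return 10 * np.exp(-20 * np.sum((p - [0.7, 0.3]) ** 2)) + rng.lognormal(0, 0.3)

    def adaptive_spatial_search(n_init=10, k_top=3, n_around=4,
                                action_level=3.0, budget=60):
        pts = rng.uniform(0, 1, (n_init, 2))           # random initialization
        vals = np.array([concentration(p) for p in pts])
        while len(pts) < budget:
            hot = pts[np.argsort(vals)[-k_top:]]       # hottest locations
            new = np.clip(hot.repeat(n_around, 0) +
                          rng.normal(0, 0.05, (k_top * n_around, 2)), 0, 1)
            pts = np.vstack([pts, new])
            vals = np.append(vals, [concentration(p) for p in new])
            if vals[-k_top * n_around:].max() < action_level:  # stopping rule
                break
        return pts, vals, pts[vals > action_level]     # cleanup candidates

    pts, vals, to_clean = adaptive_spatial_search()
    print(f"sampled {len(pts)} locations; {len(to_clean)} exceed the action level")
    ```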

  13. Adaptive approaches to biosecurity governance.

    PubMed

    Cook, David C; Liu, Shuang; Murphy, Brendan; Lonsdale, W Mark

    2010-09-01

    This article discusses institutional changes that may facilitate an adaptive approach to biosecurity risk management where governance is viewed as a multidisciplinary, interactive experiment acknowledging uncertainty. Using the principles of adaptive governance, evolved from institutional theory, we explore how the concepts of lateral information flows, incentive alignment, and policy experimentation might shape Australia's invasive species defense mechanisms. We suggest design principles for biosecurity policies emphasizing overlapping complementary response capabilities and the sharing of invasive species risks via a polycentric system of governance.

  14. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The Limits to Adaptation: A Systems Approach. The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...

  15. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.

  16. Adaptive Sampling for High Throughput Data Using Similarity Measures

    SciTech Connect

    Bulaevskaya, V.; Sales, A. P.

    2015-05-06

    The need for adaptive sampling arises in the context of high throughput data because the rates of data arrival are many orders of magnitude larger than the rates at which they can be analyzed. A very fast decision must therefore be made regarding the value of each incoming observation and its inclusion in the analysis. In this report we discuss one approach to adaptive sampling, based on the new data point’s similarity to the other data points being considered for inclusion. We present preliminary results for one real and one synthetic data set.
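
    The report leaves the similarity measure unspecified; the sketch below assumes Euclidean distance and a keep-if-novel rule, which is one simple instance of similarity-based inclusion for a fast per-point decision.

    ```python
    import numpy as np

    def similarity_filter(stream, radius=0.5):
        """Keep an incoming point only if it is not too similar to (within
        `radius` of) anything already kept -- one cheap decision per point."""
        kept = []
        for x in stream:
            x = np.asarray(x, dtype=float)
            if not kept or min(np.linalg.norm(x - k) for k in kept) > radius:
                kept.append(x)
        return np.array(kept)

    rng = np.random.default_rng(7)
    stream = rng.normal(0, 1, (2_000, 3))    # synthetic high-rate stream
    sample = similarity_filter(stream)
    print(f"retained {len(sample)} of {len(stream)} points")
    ```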

  17. Adaptive down-sampling video coding

    NASA Astrophysics Data System (ADS)

    Wang, Ren-Jie; Chien, Ming-Chen; Chang, Pao-Chi

    2010-01-01

    Down-sampling coding, which sub-samples the image and encodes the smaller-sized images, is one of the solutions for raising image quality at insufficient bit rates. In this work, we propose an Adaptive Down-Sampling (ADS) coding for H.264/AVC. The overall system distortion can be analyzed as the sum of the down-sampling distortion and the coding distortion. The down-sampling distortion is mainly the loss of the high-frequency components and is highly dependent on the spatial difference. The coding distortion can be derived from classical rate-distortion theory. For a given rate and a video sequence, the optimal down-sampling resolution ratio can be derived by applying optimization theory to minimize the system distortion based on the models of the two distortions. This optimal resolution ratio is used in both the down-sampling and up-sampling processes in the ADS coding scheme. As a result, the rate-distortion performance of ADS coding is always higher than that of fixed-ratio coding or H.264/AVC by 2 to 4 dB at low to medium rates.

  18. Stepwise two-stage sample size adaptation.

    PubMed

    Wan, Hong; Ellenberg, Susan S; Anderson, Keaven M

    2015-01-15

    Several adaptive design methods have been proposed to reestimate sample size using the observed treatment effect after an initial stage of a clinical trial while preserving the overall type I error at the time of the final analysis. One unfortunate property of the algorithms used in some methods is that they can be inverted to reveal the exact treatment effect at the interim analysis. We propose using a step function of the observed treatment difference with an inverted U-shape for sample size reestimation, to lessen the information revealed about the treatment effect. We refer to this as stepwise two-stage sample size adaptation. The method applies calculation techniques used for group sequential designs. We minimize expected sample size among a class of these designs and compare efficiency with the fully optimized two-stage design, the optimal two-stage group sequential design, and designs based on promising conditional power. The trade-off between efficiency and the improved blinding of the interim treatment effect is discussed.

  19. Feature Adaptive Sampling for Scanning Electron Microscopy

    PubMed Central

    Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp

    2016-01-01

    A new method for image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning. PMID:27150131
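
    A numerical sketch of the two-pass scheme: a quick low-dose scan, a detail mask derived from the noisy image, and a high-dose re-scan combined with a low-pass-filtered background. The Poisson dose model and gradient-based detail detector are assumptions; for brevity the "sparse" pass below simulates the full frame and uses only the masked pixels.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    rng = np.random.default_rng(8)

    def acquire(truth, dwell):
        """Toy detector: Poisson shot noise shrinking with pixel-dwell time."""
        return rng.poisson(truth * dwell) / dwell

    truth = np.zeros((128, 128)) + 5.0
    truth[40:90, 40:90] += 20.0                    # a high-detail feature

    # Pass 1: fast, low-dose scan of every pixel.
    quick = acquire(truth, dwell=1.0)

    # Detail mask from the noisy image: strong gradients => re-scan needed.
    smooth = gaussian_filter(quick, 2)
    grad = np.hypot(sobel(smooth, 0), sobel(smooth, 1))
    mask = grad > np.percentile(grad, 80)

    # Pass 2: high-dose re-scan; in practice only flagged pixels are scanned.
    slow = acquire(truth, dwell=10.0)
    final = np.where(mask, slow, smooth)           # low-pass the rest

    for name, img in [("uniform", quick), ("adaptive", final)]:
        print(name, "RMSE:", np.sqrt(np.mean((img - truth) ** 2)).round(3))
    ```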

  20. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Astrophysics Data System (ADS)

    Wheeler, K.; Knuth, K.; Castle, P.

    2005-12-01

    Typical estimates of standing wood derived from remote sensing sources take advantage of aggregate measurements of canopy heights (e.g. LIDAR) and canopy diameters (segmentation of IKONOS imagery) to obtain a wood volume estimate by assuming homogeneous species and a fixed function that returns volume. The validation of such techniques uses manually measured diameter at breast height (DBH) records. Our goal is to improve the accuracy and applicability of biomass estimation methods to heterogeneous forests and transitional areas. We are developing estimates with quantifiable uncertainty using a new form of estimation function, active sampling, and volumetric reconstruction image rendering for species-specific mass truth. Initially we are developing a Bayesian adaptive sampling method for BRDF associated with the MISR Rahman model with respect to categorical biomes. This involves characterizing the probability distributions of the 3 free parameters of the Rahman model for the 6 categories of biomes used by MISR. Subsequently, these distributions can be used to determine the optimal sampling methodology to distinguish biomes during acquisition. We have a remotely controlled semi-autonomous helicopter that has stereo imaging, lidar, differential GPS, and spectrometers covering wavelengths from visible to NIR. We intend to automatically vary the way points of the flight path via the Bayesian adaptive sampling method. The second critical part of this work is in automating the validation of biomass estimates via machine vision techniques. This involves taking 2-D pictures of trees of known species, and then via Bayesian techniques, reconstructing 3-D models of the trees to estimate the distribution moments associated with wood volume. Similar techniques have been developed by the medical imaging community. This then provides probability distributions conditional upon species. The final part of this work is in relating the BRDF actively sampled measurements to species

  1. HASE - The Helsinki adaptive sample preparation line

    NASA Astrophysics Data System (ADS)

    Palonen, V.; Pesonen, A.; Herranen, T.; Tikkanen, P.; Oinonen, M.

    2013-01-01

    We have designed and built an adaptive sample preparation line with separate modules for combustion, molecular sieve handling, CO2 gas cleaning, CO2 storage, and graphitization. The line is also connected to an elemental analyzer. Operation of the vacuum equipment, a flow controller, pressure sensors, ovens, and graphitization reactors is automated with a reliable NI-cRIO real-time system. Stepped combustion can be performed in two ovens at temperatures up to 900 °C. Depending on the application, CuO or O2-flow combustion can be used. A flow controller is used to adjust the O2 flow and pressure during combustion. For environmental samples, a module for molecular sieve regeneration and sample desorption is attached to the line, replacing the combustion module. In the storage module, CO2 samples can be kept behind a gas-tight diaphragm valve and either stored for later graphitization or taken for measurements with separate equipment (AMS gas ion source or a separate mass spectrometer). The graphitization module consists of four automated reactors, capable of graphitizing samples with masses from 3 mg down to 50 μg.

  2. Phobos Sample Return: Next Approach

    NASA Astrophysics Data System (ADS)

    Zelenyi, Lev; Martynov, Maxim; Zakharov, Alexander; Korablev, Oleg; Ivanov, Alexey; Karabadzak, George

    The Martian moons still remain a mystery after numerous studies by Mars-orbiting spacecraft. Their study covers three major topics related to (1) the Solar system in general (formation and evolution, origin of planetary satellites, origin and evolution of life); (2) small bodies (captured asteroid, or remnants of Mars formation, or reaccreted Mars ejecta); (3) Mars (formation and evolution of Mars; Mars ejecta at the satellites). As reviewed by Galimov [2010], most of the above questions require sample return from a Martian moon, while some (e.g. the characterization of the organic matter) could also be answered by in situ experiments. There is the possibility to obtain a sample of Mars material by sampling Phobos: following Chappaz et al. [2012], a 200-g sample could contain 10⁻⁷ g of Mars surface material launched during the past 1 million years, or 5×10⁻⁵ g of Mars material launched during the past 10 million years, or 5×10¹⁰ individual particles from Mars, quantities suitable for accurate laboratory analyses. The studies of Phobos have been of high priority in the Russian program on planetary research for many years. The Phobos-88 mission consisted of two spacecraft (Phobos-1, Phobos-2) and aimed to approach Phobos to within 50 m for remote studies, and also to release small landers (long-living DAS stations). This mission implemented its program incompletely; it returned information about the Martian environment and atmosphere. The next project, Phobos Sample Return (Phobos-Grunt), initially planned in the early 2000s, was delayed several times owing to budget difficulties; the spacecraft failed to leave near-Earth orbit in 2011. The recovery of the science goals of this mission and the delivery of samples of Phobos to Earth remain of highest priority for the Russian scientific community. The next Phobos SR mission, named Boomerang, was postponed following the ExoMars cooperation, but is considered the next in the line of planetary exploration, suitable for launch around 2022. A

  3. Acquiring case adaptation knowledge: A hybrid approach

    SciTech Connect

    Leake, D.B.; Kinley, A.; Wilson, D.

    1996-12-31

    The ability of case-based reasoning (CBR) systems to apply cases to novel situations depends on their case adaptation knowledge. However, endowing CBR systems with adequate adaptation knowledge has proven to be a very difficult task. This paper describes a hybrid method for performing case adaptation, using a combination of rule-based and case-based reasoning. It shows how this approach provides a framework for acquiring flexible adaptation knowledge from experiences with autonomous adaptation and suggests its potential as a basis for acquisition of adaptation knowledge from interactive user guidance. It also presents initial experimental results examining the benefits of the approach and comparing the relative contributions of case learning and adaptation learning to reasoning performance.

  4. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2016-02-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient's condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors well, even when the available patient-specific data are sparse and span only a short time. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both population-based and patient-specific time series prediction models in terms of prediction accuracy.

  5. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient's condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors well, even when the available patient-specific data are sparse and span only a short time. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both population-based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189

  6. Flight Test Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  7. Adaptive sampling for learning gaussian processes using mobile sensor networks.

    PubMed

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme.

  8. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785

  9. Adaptive Sampling of Time Series During Remote Exploration

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
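
    The GP machinery is more than a few lines, but the self-throttling behavior itself can be shown with a stand-in: a persistence forecast whose misfit drives the next sampling interval. This replaces the paper's information-gain criterion with a simple surprise heuristic; the event timing and throttle constants are invented.

    ```python
    import numpy as np

    def signal(t):
        """Quiet baseline with a brief anomalous event between t = 40 and 45."""
        return np.sin(0.05 * t) + (3.0 if 40.0 < t < 45.0 else 0.0)

    # Self-throttling sampler: shrink the next sampling interval when the
    # last observation was surprising, stretch it during quiet periods.
    t, dt = 0.0, 4.0
    times, values = [t], [signal(t)]
    while t < 100.0:
        prediction = values[-1]          # trivial persistence forecast
        t += dt
        y = signal(t)
        surprise = abs(y - prediction)   # crude proxy for information gain
        dt = float(np.clip(4.0 / (1.0 + 5.0 * surprise), 0.25, 8.0))
        times.append(t)
        values.append(y)

    gaps = np.diff(times)
    print(f"{len(times)} samples; spacing ranges {gaps.min():.2f} to {gaps.max():.2f}")
    ```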

  10. Distributed database kriging for adaptive sampling (D²KAS)

    DOE PAGES

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...

    2015-03-18

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
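
    A stand-in sketch of the lookup-predict-fallback cascade: a plain dict replaces Redis, scikit-learn's GP regressor stands in for kriging (of which it is the standard statistical formulation), and a quantized key imitates the locality-aware hashing. Refitting on every cache miss is far cruder than the paper's scheme.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def micro_simulation(x):
        """Stand-in for a costly molecular-dynamics flux evaluation."""
        return np.sin(x[0]) * np.cos(x[1])

    table = {}            # plain dict standing in for the Redis store
    X_hist, y_hist = [], []

    def flux(x, tol=0.05):
        key = tuple(np.round(x, 2))          # quantized, locality-aware key
        if key in table:                     # 1) exact table lookup
            return table[key]
        if len(X_hist) >= 5:                 # 2) kriging-style prediction
            gp = GaussianProcessRegressor().fit(np.array(X_hist), y_hist)
            mu, std = gp.predict([x], return_std=True)
            if std[0] < tol:                 # trust it only if the error
                return float(mu[0])          #    estimate is small enough
        y = micro_simulation(x)              # 3) fall back to the real model
        table[key] = y
        X_hist.append(list(x))
        y_hist.append(y)
        return y

    rng = np.random.default_rng(10)
    queries = rng.uniform(-1, 1, (200, 2))
    _ = [flux(q) for q in queries]
    print(f"ran the micro-simulation {len(table)} times for {len(queries)} queries")
    ```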

  11. Distributed Database Kriging for Adaptive Sampling (D²KAS)

    NASA Astrophysics Data System (ADS)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-07-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5-25, while retaining high accuracy for various choices of the algorithm parameters.

  12. Distributed database kriging for adaptive sampling (D²KAS)

    SciTech Connect

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-03-18

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.

  13. Flight Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  14. Chaotic satellite attitude control by adaptive approach

    NASA Astrophysics Data System (ADS)

    Wei, Wei; Wang, Jing; Zuo, Min; Liu, Zaiwen; Du, Junping

    2014-06-01

    In this article, chaos control of satellite attitude motion is considered. Adaptive control based on dynamic compensation is utilised to suppress the chaotic behaviour. Control approaches with three control inputs and with only one control input are proposed. Since the adaptive control employed is based on dynamic compensation, a faithful model of the system is not necessary. Sinusoidal disturbance and parameter uncertainties are considered to evaluate the robustness of the closed-loop system. Both approaches are confirmed by theoretical and numerical results.

  15. Robust online tracking via adaptive samples selection with saliency detection

    NASA Astrophysics Data System (ADS)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has been shown to be successful at tracking previously unknown objects. However, two important factors lead to the drift problem in online tracking: one is how to select exactly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors that have features similar to the target's. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To deal with the problem of degrading the classifiers with mis-aligned samples, we introduce a saliency detection method into our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as the negative samples, we propose a reasonable selection criterion in which both saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification in an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
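
    Spectral residual saliency (Hou & Zhang style), which the abstract names as its detection backbone, is compact enough to sketch directly: subtract the smoothed log-amplitude spectrum from the actual one, keep the phase, and invert. The test image is synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def spectral_residual_saliency(img):
        """Saliency via image spectral residual analysis: the difference
        between the log-amplitude spectrum and its local average, combined
        with the original phase, highlights statistically unusual regions."""
        f = np.fft.fft2(img)
        log_amp = np.log(np.abs(f) + 1e-8)
        phase = np.angle(f)
        residual = log_amp - uniform_filter(log_amp, size=3)
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return gaussian_filter(sal, 3)       # smooth the saliency map

    rng = np.random.default_rng(11)
    img = rng.normal(0, 0.1, (64, 64))
    img[20:30, 35:45] += 1.0                  # a salient patch
    sal = spectral_residual_saliency(img)
    iy, ix = np.unravel_index(np.argmax(sal), sal.shape)
    print(f"most salient point near ({iy}, {ix})")
    ```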

  16. Adaptive Metropolis Sampling with Product Distributions

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Lee, Chiu Fan

    2005-01-01

    The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution π(x). It works by repeatedly sampling a separate proposal distribution T(x, x′) to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t′), t′ < t} to estimate the product distribution that has the least Kullback-Leibler distance to π. That estimate is the information-theoretically optimal mean-field approximation to π. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
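
    A hedged sketch of the adaptive independence-sampler idea: periodically refit a product (factorized Gaussian) proposal to the walk history by moment matching, which for Gaussians is the KL-optimal member of the mean-field family. Continual adaptation formally breaks detailed balance unless it diminishes over time; this toy ignores that subtlety, and the target and schedule are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    def log_pi(x):
        """Target: a correlated 2-D Gaussian (log density up to a constant)."""
        d = x - np.array([1.0, -1.0])
        prec = np.array([[2.0, -1.0], [-1.0, 2.0]])
        return -0.5 * d @ prec @ d

    def log_q(x, mu, sig):
        """Fully factorized (product) Gaussian proposal density."""
        return float(np.sum(-0.5 * ((x - mu) / sig) ** 2 - np.log(sig)))

    mu, sig = np.zeros(2), np.ones(2)
    x, lp = np.zeros(2), log_pi(np.zeros(2))
    chain = []
    for t in range(5000):
        xp = mu + sig * rng.standard_normal(2)      # independence proposal
        lpp = log_pi(xp)
        a = lpp - lp + log_q(x, mu, sig) - log_q(xp, mu, sig)
        if np.log(rng.random()) < a:
            x, lp = xp, lpp
        chain.append(x)
        if t % 500 == 499:                  # periodically refit the proposal
            h = np.array(chain[len(chain) // 2:])
            mu, sig = h.mean(0), h.std(0) + 0.05
    print("proposal mean after adaptation:", mu.round(2))
    ```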

  17. A modular approach to adaptive structures.

    PubMed

    Pagitz, Markus; Pagitz, Manuel; Hühne, Christian

    2014-10-07

    A remarkable property of nastic, shape-changing plants is their complete fusion between actuators and structure. This is achieved by combining a large number of cells whose geometry, internal pressures and material properties are optimized for a given set of target shapes and stiffness requirements. An advantage of such a fusion is that cell walls are prestressed by cell pressures, which increases the overall structural stiffness and decreases the weight. Inspired by the nastic movement of plants, Pagitz et al (2012 Bioinspir. Biomim. 7) published a novel concept for pressure-actuated cellular structures. This article extends previous work by introducing a modular approach to adaptive structures. An algorithm that breaks down any continuous target shape into a small number of standardized modules is presented. Furthermore, it is shown how cytoskeletons within each cell enhance the properties of adaptive modules. An adaptive passenger seat and an aircraft's leading and trailing edges are used to demonstrate the potential of a modular approach.

  18. Cross-Cultural Adaptation: Current Approaches.

    ERIC Educational Resources Information Center

    Kim, Young Yun, Ed.; Gudykunst, William B., Ed.

    1988-01-01

    Reflecting multidisciplinary and multisocietal approaches, this collection presents 14 theoretical or research-based essays dealing with cross-cultural adaptation of individuals who are born and raised in one culture and find themselves in need of modifying their customary life patterns in a foreign culture. Papers in the collection are:…

  19. Adaptive Importance Sampling for Control and Inference

    NASA Astrophysics Data System (ADS)

    Kappen, H. J.; Ruiz, H. C.

    2016-03-01

    Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.

  20. Matched filter based iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Nepal, Ramesh; Zhang, Yan Rockee; Li, Zhengzheng; Blake, William

    2016-05-01

    Matched Filter sidelobes arising from diversified LPI waveform design, and sensor resolution, are two important considerations in radars and active sensors in general. Matched Filter sidelobes can potentially mask weaker targets, and low sensor resolution not only causes a high margin of error but also limits sensing in target-rich environments/sectors. Improving these factors depends, in part, on the transmitted waveform and consequently on pulse compression techniques. An adaptive pulse compression algorithm is hence desired that can mitigate the aforementioned limitations. A new Matched Filter based Iterative Adaptive Approach, MF-IAA, has been developed as an extension of the traditional Iterative Adaptive Approach (IAA). MF-IAA takes the Matched Filter output as its input. The motivation is to facilitate implementation of the Iterative Adaptive Approach without disrupting the processing chain of the traditional Matched Filter. Like IAA, MF-IAA is a user-parameter-free, iterative, weighted-least-squares-based spectral identification algorithm. This work focuses on the implementation of MF-IAA. The feasibility of MF-IAA is studied using a realistic airborne radar simulator as well as actual measured airborne radar data. The performance of MF-IAA is measured with different test waveforms and different Signal-to-Noise Ratio (SNR) levels. In addition, Range-Doppler super-resolution using MF-IAA is investigated. Sidelobe reduction as well as super-resolution enhancement is validated. The robustness of MF-IAA with respect to different LPI waveforms and SNR levels is also demonstrated.

  1. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.

  2. A dynamic programming approach to adaptive fractionation

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, Jagdish; Craft, David; Bortfeld, Thomas; Tsitsiklis, John N.

    2012-03-01

    We conduct a theoretical study of various solution methods for the adaptive fractionation problem. The two messages of this paper are as follows: (i) dynamic programming (DP) is a useful framework for adaptive radiation therapy, particularly adaptive fractionation, because it allows us to assess how close to optimal different methods are, and (ii) heuristic methods proposed in this paper are near-optimal, and therefore, can be used to evaluate the best possible benefit of using an adaptive fraction size. The essence of adaptive fractionation is to increase the fraction size when the tumor and organ-at-risk (OAR) are far apart (a ‘favorable’ anatomy) and to decrease the fraction size when they are close together. Given that a fixed prescribed dose must be delivered to the tumor over the course of the treatment, such an approach results in a lower cumulative dose to the OAR when compared to that resulting from standard fractionation. We first establish a benchmark by using the DP algorithm to solve the problem exactly. In this case, we characterize the structure of an optimal policy, which provides guidance for our choice of heuristics. We develop two intuitive, numerically near-optimal heuristic policies, which could be used for more complex, high-dimensional problems. Furthermore, one of the heuristics requires only a statistic of the motion probability distribution, making it a reasonable method for use in a realistic setting. Numerically, we find that the amount of decrease in dose to the OAR can vary significantly (5-85%) depending on the amount of motion in the anatomy, the number of fractions and the range of fraction sizes allowed. In general, the decrease in dose to the OAR is more pronounced when: (i) we have a high probability of large tumor-OAR distances, (ii) we use many fractions (as in a hyper-fractionated setting) and (iii) we allow large daily fraction size deviations.
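
    The heuristic's essence (bigger fractions on "favorable" days, under a fixed total prescribed dose) fits in a short sketch. The uniform gap distribution, the linear gap-to-dose rule, and the sparing factor are all invented for illustration; the paper derives its near-optimal policies from dynamic programming.

    ```python
    import numpy as np

    rng = np.random.default_rng(14)

    def oar_sparing(gap):
        """Invented dose-sparing model: a larger tumor-OAR gap means a
        smaller share of the delivered dose reaches the organ-at-risk."""
        return 1.0 - 0.8 * gap

    def fixed_fractionation(n_frac=30, total_dose=60.0):
        d = total_dose / n_frac
        return sum(d * oar_sparing(rng.uniform()) for _ in range(n_frac))

    def adaptive_fractionation(n_frac=30, total_dose=60.0, d_min=1.0, d_max=3.0):
        """Deliver more dose on days with a large observed tumor-OAR gap,
        less on unfavorable days, while exhausting the prescription."""
        remaining, oar_dose = total_dose, 0.0
        for k in range(n_frac):
            gap = rng.uniform()                  # today's observed anatomy
            left = n_frac - k
            d = d_min + gap * (d_max - d_min)    # favorable day => more dose
            # Never commit to a dose the remaining fractions cannot balance.
            d = float(np.clip(d, remaining - (left - 1) * d_max,
                                 remaining - (left - 1) * d_min))
            oar_dose += d * oar_sparing(gap)
            remaining -= d
        return oar_dose

    fixed = np.mean([fixed_fractionation() for _ in range(500)])
    adapt = np.mean([adaptive_fractionation() for _ in range(500)])
    print(f"mean OAR dose: fixed {fixed:.1f} Gy vs adaptive {adapt:.1f} Gy")
    ```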

  3. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.

    PubMed

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-08-31

    To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data is subject to an upper frequency bound. To reduce upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within the upper bound. Some traditional sampling-based approaches can control upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the upload-frequency limitation. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach achieves higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion explains the underlying reason for the high performance of the proposed approach.
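
    The abstract's core idea, giving rare (information-rich) values a higher keep-probability so that the kept values are roughly uniform across intervals, can be sketched with a histogram-based inverse-density rule. The binning, scaling, and budget normalization below are assumptions, not the ASIC algorithms.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    def uniformizing_probabilities(values, bins=10, target_rate=0.2):
        """Give rarer sensed values higher keep-probability so the kept
        values are roughly uniform across intervals, subject to an overall
        upload budget (target_rate)."""
        hist, edges = np.histogram(values, bins=bins)
        density = hist / hist.sum()
        p_bin = np.where(hist > 0, 1.0 / np.maximum(density, 1e-9), 0.0)
        p_bin = p_bin / p_bin.max()                  # scale into [0, 1]
        idx = np.clip(np.digitize(values, edges) - 1, 0, bins - 1)
        p = p_bin[idx]
        return p * (target_rate / p.mean())          # hit the upload budget

    heart_rate = rng.normal(70, 5, 5000)             # mostly 'normal' readings
    heart_rate[::200] = rng.normal(110, 5, 25)       # rare, informative spikes
    p = np.clip(uniformizing_probabilities(heart_rate), 0, 1)
    kept = heart_rate[rng.random(heart_rate.size) < p]
    print(f"kept {kept.size} of {heart_rate.size}; share above 100 bpm rose "
          f"from {(heart_rate > 100).mean():.3f} to {(kept > 100).mean():.3f}")
    ```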

  4. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks

    PubMed Central

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-01-01

    To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data is subject to an upper frequency bound. To reduce upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within the upper bound. Some traditional sampling-based approaches can control upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the upload-frequency limitation. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach achieves higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion explains the underlying reason for the high performance of the proposed approach. PMID:27589758

  5. Airport Characterization for the Adaptation of Surface Congestion Management Approaches

    DTIC Science & Technology

    2013-02-01

    Sandberg, Melanie; Reynolds, Tom; …

  6. Local Adaptation in European Firs Assessed through Extensive Sampling across Altitudinal Gradients in Southern Europe

    PubMed Central

    Postolache, Dragos; Lascoux, Martin; Drouzas, Andreas D.; Källman, Thomas; Leonarduzzi, Cristina; Liepelt, Sascha; Piotti, Andrea; Popescu, Flaviu; Roschanski, Anna M.; Zhelev, Peter; Fady, Bruno; Vendramin, Giovanni Giuseppe

    2016-01-01

    Background Local adaptation is a key driver of phenotypic and genetic divergence at loci responsible for adaptive traits variations in forest tree populations. Its experimental assessment requires rigorous sampling strategies such as those involving population pairs replicated across broad spatial scales. Methods A hierarchical Bayesian model of selection (HBM) that explicitly considers both the replication of the environmental contrast and the hierarchical genetic structure among replicated study sites is introduced. Its power was assessed through simulations and compared to classical ‘within-site’ approaches (FDIST, BAYESCAN) and a simplified, within-site, version of the model introduced here (SBM). Results HBM demonstrates that hierarchical approaches are very powerful to detect replicated patterns of adaptive divergence with low false-discovery (FDR) and false-non-discovery (FNR) rates compared to the analysis of different sites separately through within-site approaches. The hypothesis of local adaptation to altitude was further addressed by analyzing replicated Abies alba population pairs (low and high elevations) across the species’ southern distribution range, where the effects of climatic selection are expected to be the strongest. For comparison, a single population pair from the closely related species A. cephalonica was also analyzed. The hierarchical model did not detect any pattern of adaptive divergence to altitude replicated in the different study sites. Instead, idiosyncratic patterns of local adaptation among sites were detected by within-site approaches. Conclusion Hierarchical approaches may miss idiosyncratic patterns of adaptation among sites, and we strongly recommend the use of both hierarchical (multi-site) and classical (within-site) approaches when addressing the question of adaptation across broad spatial scales. PMID:27392065

  7. Node Based Adaptive Sampling and Advanced AUV Capabilities

    DTIC Science & Technology

    2001-09-30

    ... is to develop and refine node-based adaptive sampling and hovering technology using the FAU Morpheus vehicle as a test platform. The former one is a ... included two days of testing with a “dummy” vehicle followed by two days of testing with the real Morpheus. The initial tests were done with the dummy vehicle because the Morpheus was unavailable for docking experiments at the time. These tests were conducted in order to get a better sense of ...

  8. Node Based Adaptive Sampling and Advanced AUV Capabilities

    DTIC Science & Technology

    2002-09-30

    ... is to develop and refine node-based adaptive sampling and hovering technology using the FAU Morpheus vehicle as a test platform. The former one is a ... “dummy” vehicle followed by two days of testing with the real Morpheus. The initial tests were done with the dummy vehicle because the Morpheus was ... Morpheus when it became available. The dummy vehicle was constructed from empty Morpheus modules with weight placed inside each at a calculated ...

  9. Approaching neuropsychological tasks through adaptive neurorobots

    NASA Astrophysics Data System (ADS)

    Gigliotta, Onofrio; Bartolomeo, Paolo; Miglino, Orazio

    2015-04-01

    Neuropsychological phenomena have mainly been modelled, in the mainstream approach, by attempting to reproduce their neural substrate, whereas sensory-motor contingencies have attracted less attention. In this work, we introduce a simulator based on the evolutionary robotics platform Evorobot* in order to set up in silico neuropsychological tasks. Moreover, in this study we trained artificial embodied neurorobotic agents equipped with a pan/tilt camera, provided with different neural and motor capabilities, to solve a well-known neuropsychological test: the cancellation task, in which an individual is asked to cancel target stimuli surrounded by distractors. Results showed that embodied agents provided with additional motor capabilities (a zooming/attentional actuator) outperformed simple pan/tilt agents, even those equipped with more complex neural controllers, and that the zooming ability is exploited to correctly categorise the presented stimuli. We conclude that, since neural computational power alone cannot explain the (artificial) cognition that emerged throughout the adaptive process, this kind of modelling approach can be fruitful in neuropsychological modelling, where the importance of having a body is often neglected.

  10. Adaptive sampling strategy support for the unlined chromic acid pit, chemical waste landfill, Sandia National Laboratories, Albuquerque, New Mexico

    SciTech Connect

    Johnson, R.L.

    1993-11-01

    Adaptive sampling programs offer substantial savings in time and money when assessing hazardous waste sites. Key to some of these savings is the ability to adapt a sampling program to the real-time data generated by an adaptive sampling program. This paper presents a two-prong approach to supporting adaptive sampling programs: a specialized object-oriented database/geographical information system (SitePlanner™) for data fusion, management, and display and combined Bayesian/geostatistical methods (PLUME) for contamination-extent estimation and sample location selection. This approach is applied in a retrospective study of a subsurface chromium plume at Sandia National Laboratories' chemical waste landfill. Retrospective analyses suggest the potential for characterization cost savings on the order of 60% through a reduction in the number of sampling programs, total number of soil boreholes, and number of samples analyzed from each borehole.

  11. Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations

    SciTech Connect

    Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

    Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
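
    A minimal sketch of the surrogate-guided idea in the abstract above, under assumed details the record does not give: a Gaussian-process surrogate stands in for the global prediction model, and new runs are chosen where the predicted response is near zero (the assumed failure boundary) relative to the predictive uncertainty. The simulator, the scoring rule, and the budget are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulator(x):
    """Stand-in for an expensive code; failure where the response < 0."""
    return np.sin(3 * x[:, 0]) + x[:, 1] - 0.5

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(10, 2))            # small initial design
y = simulator(X)
for _ in range(15):                            # adaptive refinement budget
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(2_000, 2))  # cheap candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    # Prefer candidates whose prediction straddles the limit surface
    # (mu near 0) with high uncertainty: small |mu|/sd is informative.
    x_new = cand[np.argmin(np.abs(mu) / (sd + 1e-9))][None, :]
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new))
mu, _ = gp.predict(X, return_std=True)
print(f"{len(X)} runs, {np.mean(np.abs(mu) < 0.1):.0%} placed near the "
      "estimated limit surface")
```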

  12. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...

  13. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  14. Anomalous human behavior detection: an adaptive approach

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Coen; Halma, Arvid; Schutte, Klamer

    2013-05-01

    Detection of anomalies (outliers or abnormal instances) is an important element in a range of applications such as fault, fraud, and suspicious-behavior detection and knowledge discovery. In this article we propose a new method for anomaly detection and test its ability to detect anomalous behavior in videos from DARPA's Mind's Eye program, containing a variety of human activities. In this semi-unsupervised task, a set of normal instances is provided for training, after which unknown abnormal behavior has to be detected in a test set. The features extracted from the video data have high dimensionality and are sparse and inhomogeneously distributed in the feature space, making this a challenging task. Given these characteristics a distance-based method is preferred, but choosing a threshold to classify instances as (ab)normal is non-trivial. Our novel approach, the Adaptive Outlier Distance (AOD), is able to detect outliers in these conditions based on local distance ratios. The underlying assumption is that the local maximum distance between labeled examples is a good indicator of the variation in that neighborhood, and therefore a local threshold will result in more robust outlier detection. We compare our method to existing state-of-the-art methods such as the Local Outlier Factor (LOF) and the Local Distance-based Outlier Factor (LDOF). The results of the experiments show that our novel approach improves the quality of the anomaly detection.
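
    The following is a rough sketch of local-distance-ratio outlier scoring in the spirit of the AOD assumption quoted above; the exact AOD formula is not given in the record, so the neighbourhood size k and the scoring rule here are assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def adaptive_outlier_scores(train, test, k=5):
    """Score = distance to the nearest training point divided by the
    local scale, taken as the largest distance among that training
    point's own k nearest neighbours; scores well above 1 suggest
    anomalies."""
    d_test = cdist(test, train)
    nearest = d_test.argmin(axis=1)
    d_train = cdist(train, train)
    scores = np.empty(len(test))
    for i, j in enumerate(nearest):
        local_scale = np.sort(d_train[j])[1:k + 1].max()  # skip self (0)
        scores[i] = d_test[i, j] / max(local_scale, 1e-12)
    return scores

rng = np.random.default_rng(2)
normal = rng.normal(0, 1, size=(200, 2))            # labeled normal data
test = np.vstack([rng.normal(0, 1, size=(5, 2)),    # in-distribution
                  [[6.0, 6.0]]])                    # clear outlier
print(np.round(adaptive_outlier_scores(normal, test), 2))
```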

  15. Improving Wang-Landau sampling with adaptive windows.

    PubMed

    Cunha-Netto, A G; Caparica, A A; Tsai, Shan-Ho; Dickman, Ronald; Landau, D P

    2008-11-01

    Wang-Landau sampling (WLS) of large systems requires dividing the energy range into "windows" and joining the results of simulations in each window. The resulting density of states (and associated thermodynamic functions) is shown to suffer from boundary effects in simulations of lattice polymers and the five-state Potts model. Here, we implement WLS using adaptive windows. Instead of defining fixed energy windows (or windows in the energy-magnetization plane for the Potts model), the boundary positions depend on the set of energy values on which the histogram is flat at a given stage of the simulation. Shifting the windows each time the modification factor f is reduced, we eliminate border effects that arise in simulations using fixed windows. Adaptive windows extend significantly the range of system sizes that may be studied reliably using WLS.

  16. Improving Wang-Landau sampling with adaptive windows

    NASA Astrophysics Data System (ADS)

    Cunha-Netto, A. G.; Caparica, A. A.; Tsai, Shan-Ho; Dickman, Ronald; Landau, D. P.

    2008-11-01

    Wang-Landau sampling (WLS) of large systems requires dividing the energy range into “windows” and joining the results of simulations in each window. The resulting density of states (and associated thermodynamic functions) is shown to suffer from boundary effects in simulations of lattice polymers and the five-state Potts model. Here, we implement WLS using adaptive windows. Instead of defining fixed energy windows (or windows in the energy-magnetization plane for the Potts model), the boundary positions depend on the set of energy values on which the histogram is flat at a given stage of the simulation. Shifting the windows each time the modification factor f is reduced, we eliminate border effects that arise in simulations using fixed windows. Adaptive windows extend significantly the range of system sizes that may be studied reliably using WLS.
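
    For readers unfamiliar with the underlying method, here is a minimal single-window Wang-Landau loop (without the adaptive windows contributed by the two records above) for a system whose density of states is known exactly: an n-bit chain where the energy is the number of up-spins. The sweep length and flatness criterion are conventional choices, not taken from the papers.

```python
import numpy as np
from math import comb

def wang_landau(n=12, flatness=0.8, ln_f_final=1e-4, seed=0):
    """Single-window Wang-Landau estimate of ln g(E) for an n-bit
    chain with E = number of up-spins (exact g(E) = C(n, E))."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n)
    E = int(state.sum())
    ln_g = np.zeros(n + 1)
    hist = np.zeros(n + 1)
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(5_000):
            i = rng.integers(n)
            E_new = E + (1 - 2 * state[i])       # a flip changes E by +/-1
            # Accept with probability min(1, g(E) / g(E_new)).
            if np.log(rng.random()) < ln_g[E] - ln_g[E_new]:
                state[i] ^= 1
                E = E_new
            ln_g[E] += ln_f
            hist[E] += 1
        if hist.min() > flatness * hist.mean():  # histogram "flat enough"
            hist[:] = 0
            ln_f /= 2.0                          # reduce modification factor
    return ln_g - ln_g[0]                        # normalise: ln g(0) = 0

est = wang_landau()
exact = np.array([np.log(comb(12, e)) for e in range(13)])
print("max |error| in ln g:", np.abs(est - exact).max().round(2))
```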

  17. The Limits to Adaptation: A Systems Approach

    NASA Astrophysics Data System (ADS)

    Felgenhauer, T. N.

    2013-12-01

    The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed through market prices), and societal preferences (from prices as well as cultural norms). Exceedance of adaptation capacity will require substitution either with other pre-existing policy responses or with new adaptation responses that have yet to be developed and tested. Previous modeling research shows that capacity limited adaptation will play a policy-significant role in future climate change decision-making. The aim of this study is to describe different types of adaptation response and climate damage systems and postulate how these systems might behave when the limits to adaptation are reached. The hypothesis is that this behavior will be governed by the characteristics and level of the adaptation limit, the shape of the damage curve in that specific damage area, and the availability of alternative adaptation responses once the threshold is passed, whether it is more of the old technology, a new response type, or a transformation of the climate damage and response system itself.

  18. Approach-Induced Biases in Human Information Sampling

    PubMed Central

    Hunt, Laurence T.; Rutledge, Robb B.; Malalasekera, W. M. Nishantha; Kennerley, Steven W.; Dolan, Raymond J.

    2016-01-01

    Information sampling is often biased towards seeking evidence that confirms one’s prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans will choose to sample. We collected a large novel dataset from 32,445 human subjects, making over 3 million decisions, who played a gambling task designed to measure the latent causes and extent of information-sampling biases. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled (“positive evidence approach”), the selection of which information to sample (“sampling the favorite”), and the interaction between information sampling and subsequent choices (“rejecting unsampled options”). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action. PMID:27832071

  19. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyze some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of the phonological processes involved in the adaptation of Russian loanwords in Persian. From a collection of about 50 current Russian loanwords, we selected some for analysis. We…

  20. Effect of imperfect detectability on adaptive and conventional sampling: Simulated sampling of freshwater mussels in the upper Mississippi River

    USGS Publications Warehouse

    Smith, D.R.; Gray, B.R.; Newton, T.J.; Nichols, D.

    2010-01-01

    Adaptive sampling designs are recommended where, as is typical with freshwater mussels, the outcome of interest is rare and clustered. However, the performance of adaptive designs has not been investigated when outcomes are not only rare and clustered but also imperfectly detected. We address this combination of challenges using data simulated to mimic properties of freshwater mussels from a reach of the upper Mississippi River. Simulations were conducted under a range of sample sizes and detection probabilities. Under perfect detection, efficiency of the adaptive sampling design increased relative to the conventional design as sample size increased and as density decreased. Also, the probability of sampling occupied habitat was four times higher for adaptive than conventional sampling of the lowest density population examined. However, imperfect detection resulted in substantial biases in sample means and variances under both adaptive sampling and conventional designs. The efficiency of adaptive sampling declined with decreasing detectability. Also, the probability of encountering an occupied unit during adaptive sampling, relative to conventional sampling declined with decreasing detectability. Thus, the potential gains in the application of adaptive sampling to rare and clustered populations relative to conventional sampling are reduced when detection is imperfect. The results highlight the need to increase or estimate detection to improve performance of conventional and adaptive sampling designs.

  1. Effect of imperfect detectability on adaptive and conventional sampling: simulated sampling of freshwater mussels in the upper Mississippi River.

    PubMed

    Smith, David R; Gray, Brian R; Newton, Teresa J; Nichols, Doug

    2010-11-01

    Adaptive sampling designs are recommended where, as is typical with freshwater mussels, the outcome of interest is rare and clustered. However, the performance of adaptive designs has not been investigated when outcomes are not only rare and clustered but also imperfectly detected. We address this combination of challenges using data simulated to mimic properties of freshwater mussels from a reach of the upper Mississippi River. Simulations were conducted under a range of sample sizes and detection probabilities. Under perfect detection, efficiency of the adaptive sampling design increased relative to the conventional design as sample size increased and as density decreased. Also, the probability of sampling occupied habitat was four times higher for adaptive than conventional sampling of the lowest density population examined. However, imperfect detection resulted in substantial biases in sample means and variances under both adaptive sampling and conventional designs. The efficiency of adaptive sampling declined with decreasing detectability. Also, the probability of encountering an occupied unit during adaptive sampling, relative to conventional sampling declined with decreasing detectability. Thus, the potential gains in the application of adaptive sampling to rare and clustered populations relative to conventional sampling are reduced when detection is imperfect. The results highlight the need to increase or estimate detection to improve performance of conventional and adaptive sampling designs.
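
    The biasing mechanism described in these two records is easy to reproduce: thinning counts binomially by a detection probability below one drags the estimated mean density downward. A small simulation sketch follows (the negative-binomial population and the sample sizes are illustrative, and no adaptive design is simulated here).

```python
import numpy as np

rng = np.random.default_rng(3)
n_units = 10_000
# Rare and clustered: roughly two thirds of units empty, a few dense patches.
true_counts = rng.negative_binomial(0.1, 0.02, size=n_units)

for p_detect in (1.0, 0.8, 0.5, 0.2):
    # Each individual is detected independently with probability p_detect.
    observed = rng.binomial(true_counts, p_detect)
    sample = rng.choice(observed, size=500, replace=False)
    print(f"p_detect={p_detect:.1f}  estimated mean={sample.mean():6.2f}  "
          f"true mean={true_counts.mean():.2f}")
```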

  2. Adaptive single replica multiple state transition interface sampling

    NASA Astrophysics Data System (ADS)

    Du, Wei-Na; Bolhuis, Peter G.

    2013-07-01

    The multiple state transition path sampling method allows sampling of rare transitions between many metastable states, but has the drawback that switching between qualitatively different pathways is difficult. Combination with replica exchange transition interface sampling can in principle alleviate this problem, but requires a large number of simultaneous replicas. Here we remove these drawbacks by introducing a single replica sampling algorithm that samples only one interface at a time, while efficiently walking through the entire path space using a Wang-Landau approach or, alternatively, a fixed bias. We illustrate the method on several model systems: a particle diffusing in a simple 2D potential, isomerization in a small Lennard-Jones cluster, and isomerization of the alanine dipeptide in explicit water.

  3. Elucidating Microbial Adaptation Dynamics via Autonomous Exposure and Sampling

    NASA Technical Reports Server (NTRS)

    Grace, Joseph M.; Verseux, Cyprien; Gentry, Diana; Moffet, Amy; Thayabaran, Ramanen; Wong, Nathan; Rothschild, Lynn

    2013-01-01

    The adaptation of micro-organisms to their environments is a complex process of interaction between the pressures of the environment and of competition. Reducing this multifactorial process to environmental exposure in the laboratory is a common tool for elucidating individual mechanisms of evolution, such as mutation rates. Although such studies inform fundamental questions about the way adaptation and even speciation occur, they are often limited by labor-intensive manual techniques. Current methods for controlled study of microbial adaptation limit the length of time, the depth of collected data, and the breadth of applied environmental conditions. Small idiosyncrasies in manual techniques can have large effects on outcomes; for example, there are significant variations in induced radiation resistances following similar repeated exposure protocols. We describe here a project under development to allow rapid cycling of multiple types of microbial environmental exposure. The system allows continuous autonomous monitoring and data collection of both single species and sampled communities, independently and concurrently providing multiple types of controlled environmental pressure (temperature, radiation, chemical presence or absence, and so on) to a microbial community in dynamic response to the ecosystem's current status. When combined with DNA sequencing and extraction, such a controlled environment can cast light on microbial functional development, population dynamics, inter- and intra-species competition, and microbe-environment interaction. The project's goal is to allow rapid, repeatable iteration of studies of both natural and artificial microbial adaptation. As an example, the same system can be used both to increase the pH of a wet soil aliquot over time while periodically sampling it for genetic activity analysis, or to repeatedly expose a culture of bacteria to the presence of a toxic metal, automatically adjusting the level of toxicity based on the

  4. Elucidating Microbial Adaptation Dynamics via Autonomous Exposure and Sampling

    NASA Astrophysics Data System (ADS)

    Grace, J. M.; Verseux, C.; Gentry, D.; Moffet, A.; Thayabaran, R.; Wong, N.; Rothschild, L.

    2013-12-01

    The adaptation of micro-organisms to their environments is a complex process of interaction between the pressures of the environment and of competition. Reducing this multifactorial process to environmental exposure in the laboratory is a common tool for elucidating individual mechanisms of evolution, such as mutation rates[Wielgoss et al., 2013]. Although such studies inform fundamental questions about the way adaptation and even speciation occur, they are often limited by labor-intensive manual techniques[Wassmann et al., 2010]. Current methods for controlled study of microbial adaptation limit the length of time, the depth of collected data, and the breadth of applied environmental conditions. Small idiosyncrasies in manual techniques can have large effects on outcomes; for example, there are significant variations in induced radiation resistances following similar repeated exposure protocols[Alcántara-Díaz et al., 2004; Goldman and Travisano, 2011]. We describe here a project under development to allow rapid cycling of multiple types of microbial environmental exposure. The system allows continuous autonomous monitoring and data collection of both single species and sampled communities, independently and concurrently providing multiple types of controlled environmental pressure (temperature, radiation, chemical presence or absence, and so on) to a microbial community in dynamic response to the ecosystem's current status. When combined with DNA sequencing and extraction, such a controlled environment can cast light on microbial functional development, population dynamics, inter- and intra-species competition, and microbe-environment interaction. The project's goal is to allow rapid, repeatable iteration of studies of both natural and artificial microbial adaptation. As an example, the same system can be used both to increase the pH of a wet soil aliquot over time while periodically sampling it for genetic activity analysis, or to repeatedly expose a culture of

  5. Assessment of the behavioral inhibition system and the behavioral approach system: adaptation and validation of the Sensitivity to Punishment and Sensitivity to Reward Questionnaire (SPSRQ) in a Chilean sample.

    PubMed

    Dufey, Michele; Fernández, Ana María; Mourgues, Catalina

    2011-05-01

    The goal of the present study is to estimate the psychometric properties of the Sensitivity to Punishment and Sensitivity to Reward Questionnaire (SPSRQ; Torrubia, Avila, Moltó, & Caseras, 2001) in a sample of Chilean college students. The main hypothesis is that the instrument would show appropriate levels of reliability and validity, in light of previous validation studies. A pilot study was conducted in order to generate the adapted version of the questionnaire, which was then applied to a sample of students from different undergraduate programs (n = 434). The results show the expected levels of reliability (test-retest and internal consistency). The factor structure does not conform to the expected model, suggesting that the structure of the questionnaire deserves further consideration. External validity is appropriate, as the questionnaire shows the expected correlations with other personality measures. It is concluded that the SPSRQ is adequate for the validation context, and this study contributes to the generalization of the questionnaire, since the results are consistent with the psychometric properties reported in the literature.

  6. Implementation of time-efficient adaptive sampling function design for improved undersampled MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Choi, Jinhyeok; Kim, Hyeonjin

    2016-12-01

    To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions are then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of the undersampled MRI.
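
    A hedged sketch of the design loop described above: an energy map estimated from prescan k-space is blended with a modeled variable-density PDF, several candidate masks are drawn from the blend, and the candidate capturing the most energy is kept. The blend weights, decay constant, and toy phantom are assumptions; for simplicity the demo passes fully sampled k-space where a real prescan would be highly undersampled.

```python
import numpy as np

def design_adaptive_mask(prescan_kspace, ratio=0.25, n_candidates=16, seed=0):
    rng = np.random.default_rng(seed)
    ny, nx = prescan_kspace.shape
    # E-map: normalised energy distribution estimated from the prescan.
    e_map = np.abs(prescan_kspace) ** 2
    e_map /= e_map.sum()
    # Modeled PDF: variable density decaying from the k-space centre.
    yy, xx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx),
                         indexing="ij")
    modeled = 1.0 / (1.0 + 8.0 * (yy ** 2 + xx ** 2))
    modeled /= modeled.sum()
    pdf = (0.5 * e_map + 0.5 * modeled).ravel()    # adaptive PDF (blend)
    n_keep = int(ratio * ny * nx)
    best_mask, best_energy = None, -1.0
    for _ in range(n_candidates):
        idx = rng.choice(ny * nx, size=n_keep, replace=False,
                         p=pdf / pdf.sum())
        mask = np.zeros(ny * nx, dtype=bool)
        mask[idx] = True
        energy = e_map.ravel()[mask].sum()         # energy this mask captures
        if energy > best_energy:
            best_mask, best_energy = mask.reshape(ny, nx), energy
    return best_mask, best_energy

phantom = np.outer(np.hanning(64), np.hanning(64))
kspace = np.fft.fftshift(np.fft.fft2(phantom))     # toy "prescan" k-space
mask, energy = design_adaptive_mask(kspace)
print(f"sampling {mask.mean():.0%} of k-space captures {energy:.1%} of energy")
```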

  7. Sample-adaptive-prediction for HEVC SCC intra coding with ridge estimation from spatially neighboring samples

    NASA Astrophysics Data System (ADS)

    Kang, Je-Won; Ryu, Soo-Kyung

    2017-02-01

    In this paper, a sample-adaptive prediction technique is proposed to yield efficient coding performance in intra coding for screen-content video coding. The sample-based prediction reduces spatial redundancies among neighboring samples. To this end, the proposed technique uses a weighted linear combination of neighboring samples and applies a robust optimization technique, namely ridge estimation, to derive the weights on the decoder side. The ridge estimation uses an L2-norm-based regularization term, and thus the solution is more robust to high-variance samples, such as the sharp edges and high color contrasts exhibited in screen-content videos. The experimental results demonstrate that the proposed technique provides an improved coding gain compared to the HEVC screen content video coding reference software.
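
    The ridge step itself is standard and can be illustrated in a few lines. This toy sketch estimates predictor weights from available neighbouring samples of a piecewise-constant 1-D signal (a stand-in for screen content), not from an actual HEVC decoder's reconstructed pixels; the regularization strength and neighbourhood size are assumptions.

```python
import numpy as np

def ridge_weights(X, y, lam=10.0):
    """Closed-form ridge estimate w = (X'X + lam*I)^(-1) X'y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Toy "screen content" line: flat runs of constant colour, sharp edges.
rng = np.random.default_rng(4)
signal = np.repeat(rng.integers(0, 255, 40), 8).astype(float)

k = 3                              # neighbouring samples per prediction
X = np.column_stack([signal[i:len(signal) - k + i] for i in range(k)])
y = signal[k:]                     # each sample predicted from k predecessors
w = ridge_weights(X, y)
print("estimated weights:", np.round(w, 3))
print("mean |residual|  :", round(float(np.abs(X @ w - y).mean()), 3))
```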

  8. Temporally adaptive sampling: a case study in rare species survey design with marbled salamanders (Ambystoma opacum).

    PubMed

    Charney, Noah D; Kubel, Jacob E; Eiseman, Charles S

    2015-01-01

    Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds.

  9. Temporally Adaptive Sampling: A Case Study in Rare Species Survey Design with Marbled Salamanders (Ambystoma opacum)

    PubMed Central

    Charney, Noah D.; Kubel, Jacob E.; Eiseman, Charles S.

    2015-01-01

    Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224

  10. Using continuous in-situ measurements to adaptively trigger urban storm water samples

    NASA Astrophysics Data System (ADS)

    Wong, B. P.; Kerkez, B.

    2015-12-01

    Until cost-effective in-situ sensors are available for biological parameters, nutrients, and metals, automated samplers will continue to be the primary source of reliable water-quality measurements. Given limited sample bottles, however, autosamplers often obscure insights on nutrient sources and biogeochemical processes that would otherwise be captured using a continuous sampling approach. To that end, we evaluate the efficacy of a novel method to measure first-flush nutrient dynamics in flashy, urban watersheds. Our approach reduces the number of samples required to capture water-quality dynamics by leveraging an internet-connected sensor node, which is equipped with a suite of continuous in-situ sensors and an automated sampler. To capture both the initial baseflow and storm concentrations, a cloud-hosted adaptive algorithm analyzes the high-resolution sensor data along with local weather forecasts to optimize a sampling schedule. The method was tested in a highly developed urban catchment in Ann Arbor, Michigan, and collected samples of nitrate, phosphorus, and suspended solids throughout several storm events. Results indicate that the watershed does not exhibit first-flush dynamics, a behavior that would have been obscured by a non-adaptive sampling approach.
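
    The record does not spell out the trigger logic, so the following is a hypothetical sketch of how stage readings and a rain forecast might be combined to ration a fixed set of bottles; every threshold and parameter here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Sampler:
    bottles_left: int = 24
    baseline_stage: float = 0.30    # metres, assumed dry-weather level

def should_trigger(s, stage, rain_prob_6h, min_rise=0.05):
    """Fire the autosampler once stage rises above baseline; demand a
    larger rise as bottles run low, unless rain is forecast (in which
    case spend bottles more readily to catch the coming storm)."""
    if s.bottles_left == 0:
        return False
    rise = stage - s.baseline_stage
    if rise < min_rise:
        return False                # still baseflow
    scarcity = (24 - s.bottles_left) / 24
    threshold = min_rise * (1 + 3 * scarcity * (1 - rain_prob_6h))
    return rise >= threshold

s = Sampler(bottles_left=6)
for stage, p_rain in [(0.32, 0.9), (0.45, 0.9), (0.45, 0.1)]:
    fire = should_trigger(s, stage, p_rain)
    print(f"stage={stage:.2f} m, P(rain)={p_rain:.1f} -> "
          f"{'SAMPLE' if fire else 'wait'}")
    if fire:
        s.bottles_left -= 1
```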

  11. POF-Darts: Geometric adaptive sampling for probability of failure

    SciTech Connect

    Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; Romero, Vicente J.; Rushdi, Ahmad A.

    2016-06-18

    We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions uncovered with spheres will shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.

  12. POF-Darts: Geometric adaptive sampling for probability of failure

    DOE PAGES

    Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; ...

    2016-06-18

    We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions uncovered with spheres will shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
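
    A heavily simplified sketch of the final estimation step shared by both records above: build a cheap surrogate from a fixed budget of expensive evaluations, then estimate the probability of failure by exhaustive sampling of the surrogate. The disk packing, protection spheres, and Voronoi decomposition that define POF-Darts proper are omitted, and the RBF surrogate and toy limit-state function are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def g(x):
    """Toy limit-state function: failure wherever g(x) < 0."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

rng = np.random.default_rng(5)
# Fixed budget of "expensive" evaluations (POF-Darts would also place
# protection spheres around these to steer where new samples land).
train_x = rng.uniform(-3, 3, size=(60, 2))
surrogate = RBFInterpolator(train_x, g(train_x))
# Exhaustive sampling of the cheap surrogate under the input density.
mc = rng.normal(0.0, 1.0, size=(200_000, 2))
pof_surrogate = float((surrogate(mc) < 0).mean())
pof_direct = float((g(mc) < 0).mean())
print(f"POF via surrogate: {pof_surrogate:.4f}  direct MC: {pof_direct:.4f}")
```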

  13. Structured estimation - Sample size reduction for adaptive pattern classification

    NASA Technical Reports Server (NTRS)

    Morgera, S.; Cooper, D. B.

    1977-01-01

    The Gaussian two-category classification problem with known category mean value vectors and identical but unknown category covariance matrices is considered. The weight vector depends on the unknown common covariance matrix, so the procedure is to estimate the covariance matrix in order to obtain an estimate of the optimum weight vector. The measure of performance for the adapted classifier is the output signal-to-interference noise ratio (SIR). A simple approximation for the expected SIR is gained by using the general sample covariance matrix estimator; this performance is both signal and true covariance matrix independent. An approximation is also found for the expected SIR obtained by using a Toeplitz form covariance matrix estimator; this performance is found to be dependent on both the signal and the true covariance matrix.
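
    The contrast drawn in this abstract, a general sample covariance estimate versus a structured Toeplitz-form estimate, can be sketched directly. Averaging the diagonals of the sample covariance is one simple way to impose the Toeplitz form (the abstract does not specify the estimator, so this is an assumption), and it needs far fewer samples to give a usable estimate.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_covariance(X):
    """Impose the Toeplitz (stationarity) structure by averaging the
    sample covariance along each diagonal."""
    S = np.cov(X, rowvar=False)
    first_row = np.array([np.diag(S, k).mean() for k in range(S.shape[0])])
    return toeplitz(first_row)

rng = np.random.default_rng(6)
dim = 16
true_cov = toeplitz(0.7 ** np.arange(dim))       # stationary AR(1)-like truth
X = rng.normal(size=(24, dim)) @ np.linalg.cholesky(true_cov).T
S = np.cov(X, rowvar=False)                      # general sample estimator
T = toeplitz_covariance(X)                       # structured estimator
print("Frobenius error, sample  :", round(float(np.linalg.norm(S - true_cov)), 2))
print("Frobenius error, Toeplitz:", round(float(np.linalg.norm(T - true_cov)), 2))
```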

  14. Passive and active adaptive management: approaches and an example.

    PubMed

    Williams, Byron K

    2011-05-01

    Adaptive management is a framework for resource conservation that promotes iterative learning-based decision making. Yet there remains considerable confusion about what adaptive management entails, and how to actually make resource decisions adaptively. A key but somewhat ambiguous distinction in adaptive management is between active and passive forms of adaptive decision making. The objective of this paper is to illustrate some approaches to active and passive adaptive management with a simple example involving the drawdown of water impoundments on a wildlife refuge. The approaches are illustrated for the drawdown example, and contrasted in terms of objectives, costs, and potential learning rates. Some key challenges to the actual practice of AM are discussed, and tradeoffs between implementation costs and long-term benefits are highlighted.

  15. Passive and active adaptive management: Approaches and an example

    USGS Publications Warehouse

    Williams, B.K.

    2011-01-01

    Adaptive management is a framework for resource conservation that promotes iterative learning-based decision making. Yet there remains considerable confusion about what adaptive management entails, and how to actually make resource decisions adaptively. A key but somewhat ambiguous distinction in adaptive management is between active and passive forms of adaptive decision making. The objective of this paper is to illustrate some approaches to active and passive adaptive management with a simple example involving the drawdown of water impoundments on a wildlife refuge. The approaches are illustrated for the drawdown example, and contrasted in terms of objectives, costs, and potential learning rates. Some key challenges to the actual practice of AM are discussed, and tradeoffs between implementation costs and long-term benefits are highlighted. © 2010 Elsevier Ltd.

  16. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals, named adaptive robustness, is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  17. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  18. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  19. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
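
    A compact sketch of the first ingredient, the adaptively constructed GM proposal (the PC-expansion surrogate, the second ingredient, is omitted): at each iteration, importance weights are computed against the current proposal, a Gaussian mixture is refit to a weight-resampled population, and fresh samples are drawn from it. The bimodal toy posterior and the resampling-based fit are assumptions, not the authors' exact construction.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def log_posterior(x):
    """Bimodal toy target standing in for an expensive posterior."""
    d1 = ((x - np.array([-2.0, 0.0])) ** 2).sum(axis=1)
    d2 = ((x - np.array([2.0, 0.0])) ** 2).sum(axis=1)
    return np.logaddexp(-0.5 * d1 / 0.3, -0.5 * d2 / 0.3)

rng = np.random.default_rng(7)
samples = rng.uniform(-5, 5, size=(2_000, 2))      # broad initial proposal
log_q = np.full(len(samples), -np.log(10.0 ** 2))  # its uniform log-density
for _ in range(4):
    log_w = log_posterior(samples) - log_q         # importance weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Refit the GM proposal to a weight-resampled population.
    idx = rng.choice(len(samples), size=len(samples), p=w)
    gm = GaussianMixture(n_components=2, random_state=0).fit(samples[idx])
    samples, _ = gm.sample(2_000)                  # draw from the new proposal
    log_q = gm.score_samples(samples)
print("recovered mode locations:\n", np.round(gm.means_, 2))
```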

  20. Responsiveness-to-Intervention: A "Systems" Approach to Instructional Adaptation

    ERIC Educational Resources Information Center

    Fuchs, Douglas; Fuchs, Lynn S.

    2016-01-01

    Classroom research on adaptive teaching indicates few teachers modify instruction for at-risk students in a manner that benefits them. Responsiveness-To-Intervention, with its tiers of increasingly intensive instruction, represents an alternative approach to adaptive instruction that may prove more workable in today's schools.

  1. Concept Based Approach for Adaptive Personalized Course Learning System

    ERIC Educational Resources Information Center

    Salahli, Mehmet Ali; Özdemir, Muzaffer; Yasar, Cumali

    2013-01-01

    One of the most important factors for improving the personalization aspects of learning systems is to enable adaptive properties to them. The aim of the adaptive personalized learning system is to offer the most appropriate learning path and learning materials to learners by taking into account their profiles. In this paper, a new approach to…

  2. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of the overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, the sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate.
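
    The inflation mechanism is easy to demonstrate for a single-arm-versus-control caricature of the problem (the paper's multiarm selection setting is more involved): choose the stage-2 size from the interim z, but refer the pooled statistic to the fixed-design critical value. The specific adaptation rule below is invented to show the effect, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
n1, crit, reps = 50, 1.96, 1_000_000   # stage-1 size, two-sided 5% bound

def naive_type1(n2_rule):
    """Under H0 the stage-2 size depends on the interim z, but the
    pooled z is compared with the fixed-design critical value as if
    no adaptation had happened (the naive analysis)."""
    z1 = rng.normal(size=reps)                       # interim z-statistic
    z2 = rng.normal(size=reps)                       # fresh stage-2 z
    n2 = n2_rule(z1)
    z = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
    return (np.abs(z) > crit).mean()

# Opportunistic rule: a promising interim result buys a small second
# stage (preserving the early lead); an unpromising one is diluted.
adaptive = lambda z1: np.where(np.abs(z1) > 1.0, 10, 400)
fixed = lambda z1: np.full_like(z1, 100)
print("fixed n2   :", naive_type1(fixed))            # close to 0.05
print("adaptive n2:", naive_type1(adaptive))         # visibly above 0.05
```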

  3. Adapting to the Digital Age: A Narrative Approach

    ERIC Educational Resources Information Center

    Cousins, Sarah; Bissar, Dounia

    2012-01-01

    The article adopts a narrative inquiry approach to foreground informal learning and exposes a collection of stories from tutors about how they adapted comfortably to the digital age. We were concerned that despite substantial evidence that bringing about changes in pedagogic practices can be difficult, there is a gap in convincing approaches to…

  4. Sampling of Complex Networks: A Datamining Approach

    NASA Astrophysics Data System (ADS)

    Loecher, Markus; Dohrmann, Jakob; Bauer, Gernot

    2007-03-01

    Efficient and accurate sampling of big complex networks is still an unsolved problem. As the degree distribution is one of the most commonly used attributes to characterize a network, there have been many attempts in recent papers to derive the original degree distribution from the data obtained during a traceroute-like sampling process. This talk describes a strategy for predicting the original degree of a node using the data obtained from a network by traceroute-like sampling, making use of datamining techniques. Only local quantities (the sampled degree k, the redundancy of node detection r, the time of the first discovery of a node t, and the distance to the sampling source d) are used as input for the datamining models. Global properties like the betweenness centrality are ignored. These local quantities are examined theoretically and in simulations to increase their value for the predictions. The accuracy of the models is discussed as a function of the number of sources used in the sampling process and the underlying topology of the network. The purpose of this work is to introduce the techniques of the relatively young field of datamining to the discussion on network sampling.
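
    A sketch of the datamining step under stated assumptions: BFS shortest-path trees stand in for traceroute probes, three of the four local quantities named above are approximated from the sampled view, and a random forest regresses the true degree on them. All feature definitions here are illustrative approximations, not the authors' own.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

G = nx.barabasi_albert_graph(3_000, 3, seed=9)
rng = np.random.default_rng(9)
sources = rng.choice(G.number_of_nodes(), size=8, replace=False)

union = nx.Graph()                     # traceroute-like sampled view
redund = {n: 0 for n in G}             # proxy for node-detection redundancy
dist = {n: np.inf for n in G}          # distance to the nearest source
for s in sources:
    tree = nx.bfs_tree(G, int(s))      # one shortest-path tree per source
    union.add_edges_from(tree.edges())
    for n, d in tree.degree():
        redund[n] += d                 # tree edges touching n, over all trees
    for n, d in nx.single_source_shortest_path_length(G, int(s)).items():
        dist[n] = min(dist[n], d)

nodes = list(G)
X = np.array([[union.degree(n) if n in union else 0, redund[n], dist[n]]
              for n in nodes])
y = np.array([G.degree(n) for n in nodes])     # target: original degree
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2 for original-degree prediction:",
      round(model.score(X_te, y_te), 3))
```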

  5. Adaptive pulse width control and sampling for low power pulse oximetry.

    PubMed

    Gubbi, Sagar Venkatesh; Amrutur, Bharadwaj

    2015-04-01

    Remote sensing of physiological parameters could be a cost-effective approach to improving health care, and low-power sensors are essential for remote sensing because these sensors are often energy-constrained. This paper presents a power-optimized photoplethysmographic sensor interface to sense arterial oxygen saturation, a technique to dynamically trade off SNR for power during sensor operation, and a simple algorithm to choose when to acquire samples in photoplethysmography. A prototype of the proposed pulse oximeter built using commercial-off-the-shelf (COTS) components was tested on 10 adults. The dynamic adaptation techniques described reduce power consumption considerably compared to our reference implementation, and our approach is competitive with state-of-the-art implementations. The techniques presented in this paper may be applied to low-power sensor interface designs where acquiring samples is expensive in terms of power, as epitomized by pulse oximetry.

  6. Adaptive millimeter-wave synthetic aperture imaging for compressive sampling of sparse scenes.

    PubMed

    Mrozack, Alex; Heimbeck, Martin; Marks, Daniel L; Richard, Jonathan; Everitt, Henry O; Brady, David J

    2014-06-02

    We apply adaptive sensing techniques to the problem of locating sparse metallic scatterers using high-resolution, frequency modulated continuous wave W-band RADAR. Using a single detector, a frequency stepped source, and a lateral translation stage, inverse synthetic aperture RADAR reconstruction techniques are used to search for one or two wire scatterers within a specified range, while an adaptive algorithm determined successive sampling locations. The two-dimensional location of each scatterer is thereby identified with sub-wavelength accuracy in as few as 1/4 the number of lateral steps required for a simple raster scan. The implications of applying this approach to more complex scattering geometries are explored in light of the various assumptions made.

  7. Approach for environmental baseline water sampling

    USGS Publications Warehouse

    Smith, K.S.

    2011-01-01

    Samples collected during the exploration phase of mining represent baseline conditions at the site. As such, they can be very important in forecasting potential environmental impacts should mining proceed, and can become measurements against which future changes are compared. Constituents in stream water draining mined and mineralized areas tend to be geochemically, spatially, and temporally variable, which presents challenges in collecting both exploration and baseline water-quality samples. Because short-term (daily) variations can complicate long-term trends, it is important to consider recent findings concerning geochemical variability of stream-water constituents at short-term timescales in designing sampling plans. Also, adequate water-quality information is key to forecasting potential ecological impacts from mining. Therefore, it is useful to collect baseline water samples adequate for geochemical and toxicological modeling. This requires complete chemical analyses of dissolved constituents that include major and minor chemical elements as well as physicochemical properties (including pH, specific conductance, and dissolved oxygen) and dissolved organic carbon. Applying chemical-equilibrium and appropriate toxicological models to water-quality information leads to an understanding of the speciation, transport, sequestration, bioavailability, and aquatic toxicity of potential contaminants. Insights gained from geochemical and toxicological modeling of water-quality data can be used to design appropriate mitigation and for economic planning for future mining activities.

  8. A Decentralized Adaptive Approach to Fault Tolerant Flight Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Nikulin, Vladimir; Heimes, Felix; Shormin, Victor

    2000-01-01

    This paper briefly reports some results of our study on the application of a decentralized adaptive control approach to a 6 DOF nonlinear aircraft model. The simulation results showed the potential of using this approach to achieve fault tolerant control. Based on this observation and some analysis, the paper proposes a multiple channel adaptive control scheme that makes use of the functionally redundant actuating and sensing capabilities in the model, and explains how to implement the scheme to tolerate actuator and sensor failures. The conditions, under which the scheme is applicable, are stated in the paper.

  9. A Monte Carlo Approach to the Design, Assembly, and Evaluation of Multistage Adaptive Tests

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2008-01-01

    This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…
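
    A minimal sketch of the core idea, assuming a toy item pool with hypothetical difficulty and content-area attributes: because every candidate module is proposed with equal probability and accepted purely on feasibility, rejection sampling yields a uniform draw from all modules satisfying the constraints, which is the property the abstract contrasts with integer programming and enumerative heuristics.

```python
import random

random.seed(42)
# Hypothetical item pool: (item_id, difficulty, content_area)
pool = [(i, random.uniform(-2.0, 2.0), random.choice("ABC")) for i in range(500)]

def assemble_module(pool, size=10, b_range=(0.5, 1.5), min_per_area=2, tries=100000):
    """Uniform Monte Carlo assembly by rejection sampling.

    Every size-`size` subset is proposed with equal probability, and
    acceptance depends only on feasibility, so accepted modules are a
    uniform draw from all modules meeting the constraints.
    """
    for _ in range(tries):
        module = random.sample(pool, size)
        mean_b = sum(item[1] for item in module) / size
        counts = {a: sum(1 for item in module if item[2] == a) for a in "ABC"}
        if b_range[0] <= mean_b <= b_range[1] and min(counts.values()) >= min_per_area:
            return module
    raise RuntimeError("no feasible module found within the try budget")

print(sorted(item[0] for item in assemble_module(pool)))
```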

  10. Hardware friendly adaptive support-weight approach for stereo matching

    NASA Astrophysics Data System (ADS)

    Hou, Zuoxun; Han, Pei; Zhang, Hongwei; An, Ran

    2016-10-01

    In this paper, a hardware-friendly adaptive support-weight approach is proposed to simplify the weight calculation of the standard approach: it employs the support region to simplify the similarity calculation and uses a fixed, distance-dependent weight to represent proximity. In addition, a complete stereo matching algorithm and a hardware structure for FPGA implementation compatible with the approach are proposed. The experimental results show that the algorithm produces accurate disparity maps under different illumination conditions and in different scenes, and its average bad-pixel rate is only 6.65% on the standard test images of the Middlebury database, which approximates the performance of the standard adaptive support-weight approach. The proposed hardware structure provides a basis for the design and implementation of a real-time, accurate stereo matching FPGA system.
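
    The record does not give the exact weight formulas, so the following is a rough grayscale sketch of the simplification it describes: a precomputed, fixed distance-weight table (the hardware-friendly part) combined with a per-pixel color-similarity term. Window size and constants are assumed, not taken from the paper.

```python
import numpy as np

GAMMA_C = 10.0                  # color-similarity constant (assumed)
WIN = 7                         # support-window size (assumed)
H = WIN // 2

# Fixed, distance-dependent proximity weights: precomputed once, so the
# per-pixel hardware cost reduces to the color-similarity term only.
ys, xs = np.mgrid[-H:H + 1, -H:H + 1]
W_DIST = np.exp(-np.sqrt(ys**2 + xs**2) / 7.0)

def support_weights(patch):
    """Color weights w.r.t. the window center (grayscale simplification)."""
    dc = np.abs(patch - patch[H, H])
    return np.exp(-dc / GAMMA_C) * W_DIST

def aggregated_cost(left, right, y, x, d):
    """Adaptive support-weight cost for disparity d at pixel (y, x)."""
    pl = left[y - H:y + H + 1, x - H:x + H + 1]
    pr = right[y - H:y + H + 1, x - d - H:x - d + H + 1]
    w = support_weights(pl) * support_weights(pr)
    return np.sum(w * np.abs(pl - pr)) / np.sum(w)

left = np.random.rand(32, 32) * 255.0
right = np.roll(left, -2, axis=1)      # synthetic scene with 2-pixel disparity
costs = [aggregated_cost(left, right, 16, 16, d) for d in range(5)]
print("winner-take-all disparity:", int(np.argmin(costs)))   # expect 2
```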

  11. Motion-adapted pulse sequences for oriented sample (OS) solid-state NMR of biopolymers.

    PubMed

    Lu, George J; Opella, Stanley J

    2013-08-28

    One of the main applications of solid-state NMR is to study the structure and dynamics of biopolymers, such as membrane proteins, under physiological conditions where the polypeptides undergo global motions as they do in biological membranes. The effects of NMR radiofrequency irradiations on nuclear spins are strongly influenced by these motions. For example, we previously showed that the MSHOT-Pi4 pulse sequence yields spectra with resonance line widths about half of those observed using the conventional pulse sequence when applied to membrane proteins undergoing rapid uniaxial rotational diffusion in phospholipid bilayers. In contrast, the line widths were not changed in microcrystalline samples where the molecules did not undergo global motions. Here, we demonstrate experimentally and describe analytically how some Hamiltonian terms are susceptible to sample motions, and it is their removal through the critical π/2 Z-rotational symmetry that confers the "motion adapted" property to the MSHOT-Pi4 pulse sequence. This leads to the design of separated local field pulse sequence "Motion-adapted SAMPI4" and is generalized to an approach for the design of decoupling sequences whose performance is superior in the presence of molecular motions. It works by cancelling the spin interaction by explicitly averaging the reduced Wigner matrix to zero, rather than utilizing the 2π nutation to average spin interactions. This approach is applicable to both stationary and magic angle spinning solid-state NMR experiments.

  12. Searching for adaptive traits in genetic resources - phenology based approach

    NASA Astrophysics Data System (ADS)

    Bari, Abdallah

    2015-04-01

    Phenology is an important plant trait, not only for assessing and forecasting food production but also for searching genebanks for adaptive traits. Among the phenological parameters we have been considering in the search for such adaptive and rare traits are the onset (sowing period) and the seasonality (growing period). An application is currently being developed, as part of the focused identification of germplasm strategy (FIGS) approach, to use climatic data to identify crop growing seasons and characterize them in terms of onset and duration. These approximations of growing-period characteristics can then be used to estimate flowering and maturity dates for dryland crops, such as wheat, barley, faba bean, lentil and chickpea, and to assess, among others, phenology-related traits such as days to heading [dhe] and grain filling period [gfp]. The approach followed here first calculates long-term average daily temperatures by fitting a curve to the monthly data over days from the beginning of the year. Prior to identifying these phenological stages, the onset is extracted from integer raster GIS layers developed from a model of the growing period that considers both moisture and temperature limitations. The paper presents some examples of real applications of the approach to search for rare and adaptive traits.
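
    As a hedged illustration of the curve-fitting step, the sketch below fits a sinusoidal annual cycle to twelve illustrative monthly mean temperatures and then accumulates growing degree days from an assumed onset date to approximate a phenological stage. The temperature values, base temperature, and GDD target are invented for the example, not taken from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative long-term monthly mean temperatures (degC) at mid-month days
months_doy = np.array([15, 46, 74, 105, 135, 166, 196, 227, 258, 288, 319, 349])
t_monthly = np.array([8, 9, 12, 14, 18, 22, 26, 26, 23, 18, 13, 9], float)

def annual_curve(doy, mean, amp, phase):
    return mean + amp * np.sin(2 * np.pi * (doy - phase) / 365.0)

params, _ = curve_fit(annual_curve, months_doy, t_monthly, p0=[15.0, 8.0, 100.0])
daily_t = annual_curve(np.arange(1, 366), *params)   # long-term daily averages

def days_to_stage(onset_doy, gdd_target, t_base=0.0):
    """Days from onset until accumulated growing degree days reach a target."""
    season = np.tile(daily_t, 2)[onset_doy - 1:]     # let the season wrap the year
    gdd = np.cumsum(np.maximum(season - t_base, 0.0))
    return int(np.searchsorted(gdd, gdd_target)) + 1

# E.g., sowing onset at day 320 with an assumed 1200 GDD requirement to heading
print(days_to_stage(onset_doy=320, gdd_target=1200))
```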

  13. An adaptive two-stage sequential design for sampling rare and clustered populations

    USGS Publications Warehouse

    Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.

    2008-01-01

    How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second-stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
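
    A simplified simulation of the design, with an invented rare-and-clustered population: each sampled primary unit receives an initial secondary sample, and extra effort is allocated only when the initial counts cross a threshold. Note that the plain mean-expansion estimator used here is biased under adaptive allocation; the paper's contribution is precisely a design with estimators that handle this correctly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rare, clustered population: 20 primary x 100 secondary units
pop = np.zeros((20, 100), dtype=int)
for j in rng.choice(20, 4, replace=False):            # 4 high-density primaries
    pop[j, rng.choice(100, 30, replace=False)] = rng.poisson(5.0, 30)

def adaptive_two_stage(pop, n1=8, m_init=10, thresh=1, m_extra=20):
    """Survey n1 primaries; add second-stage effort where counts cross a threshold."""
    est_totals = []
    for j in rng.choice(pop.shape[0], n1, replace=False):
        idx = rng.choice(pop.shape[1], m_init, replace=False)
        counts = pop[j, idx]
        if counts.sum() > thresh:                     # the adaptive allocation step
            rest = np.setdiff1d(np.arange(pop.shape[1]), idx)
            extra = rng.choice(rest, m_extra, replace=False)
            counts = np.concatenate([counts, pop[j, extra]])
        est_totals.append(counts.mean() * pop.shape[1])
    return np.mean(est_totals) * pop.shape[0]

# Naive expansion, for illustration only (biased under adaptive stopping)
print(f"estimate: {adaptive_two_stage(pop):.0f}, truth: {pop.sum()}")
```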

  14. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    SciTech Connect

    Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
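
    A compact sketch of the two ingredients named in the abstract, differential-evolution proposals across parallel chains and randomized subspace updates, applied to a toy bimodal target. The crossover probability and jump scaling follow common choices from the DE-MC literature rather than DREAM's exact adaptive scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    """Toy bimodal target: mixture of two well-separated Gaussians."""
    return np.logaddexp(-0.5 * np.sum((x - 5.0)**2), -0.5 * np.sum((x + 5.0)**2))

N, d, steps, CR = 8, 4, 5000, 0.5          # chains, dimension, iterations, crossover
X = rng.normal(0.0, 10.0, (N, d))          # over-dispersed initial population
logp = np.array([log_post(x) for x in X])

for _ in range(steps):
    for i in range(N):
        r1, r2 = rng.choice([j for j in range(N) if j != i], 2, replace=False)
        sub = rng.random(d) < CR           # randomized subspace selection
        if not sub.any():
            sub[rng.integers(d)] = True
        gamma = 2.38 / np.sqrt(2.0 * sub.sum())   # DE jump scaling (ter Braak)
        prop = X[i].copy()
        prop[sub] += gamma * (X[r1, sub] - X[r2, sub]) \
                     + rng.normal(0.0, 1e-6, sub.sum())
        lp = log_post(prop)
        if np.log(rng.random()) < lp - logp[i]:   # Metropolis acceptance
            X[i], logp[i] = prop, lp

# With chain-difference proposals, the population can hop between both modes.
print(np.round(X[:, 0], 1))
```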

  15. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sampling unit sizes shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.

  16. Adaptive Modeling: An Approach for Incorporating Nonlinearity in Regression Analyses.

    PubMed

    Knafl, George J; Barakat, Lamia P; Hanlon, Alexandra L; Hardie, Thomas; Knafl, Kathleen A; Li, Yimei; Deatrick, Janet A

    2017-02-01

    Although regression relationships are commonly treated as linear, this often is not the case. An adaptive approach is described for identifying nonlinear relationships based on power transforms of predictor (or independent) variables and for assessing whether or not relationships are distinctly nonlinear. It is also possible to model adaptively both means and variances of continuous outcome (or dependent) variables and to adaptively power transform positive-valued continuous outcomes, along with their predictors. Example analyses are provided of data from parents in a nursing study on emotional-health-related quality of life for childhood brain tumor survivors as a function of the effort to manage the survivors' condition. These analyses demonstrate that relationships, including moderation relationships, can be distinctly nonlinear, that conclusions about means can be affected by accounting for non-constant variances, and that outcome transformation along with predictor transformation can provide distinct improvements and can resolve skewness problems. © 2017 Wiley Periodicals, Inc.
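
    A minimal illustration of selecting a power transform of a predictor by a fit criterion. The grid of powers and the use of BIC are assumptions for the example, not the article's exact adaptive algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.5, 4.0, 200)
y = 3.0 * x**2 + rng.normal(0.0, 1.0, 200)      # distinctly nonlinear relationship

def bic_for_power(p):
    """OLS of y on x**p; BIC = n*log(RSS/n) + k*log(n)."""
    X = np.column_stack([np.ones_like(x), x**p])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), X.shape[1]
    return n * np.log(resid @ resid / n) + k * np.log(n)

powers = np.arange(0.5, 3.01, 0.25)
best = min(powers, key=bic_for_power)
print(f"selected power transform: x**{best}")    # expect a value near 2
```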

  17. Variable sampling approach to mitigate instability in networked control systems with delays.

    PubMed

    López-Echeverría, Daniel; Magaña, Mario E

    2012-01-01

    This paper analyzes a new alternative approach to compensate for the effects of time delays on a dynamic networked control system (NCS). The approach is based on the use of time-delay-predicted values as the sampling times of the NCS. We use a one-step-ahead prediction algorithm based on an adaptive time delay neural network. The application of pole placement and linear quadratic regulator methods to compute feedback gains that take the estimated time delays into account is also investigated.

  18. Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids

    SciTech Connect

    Stoyanov, Miroslav K

    2013-09-01

    We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grid points and functions that are induced from a one dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.

  19. An Approach to V&V of Embedded Adaptive Systems

    NASA Technical Reports Server (NTRS)

    Liu, Yan; Yerramalla, Sampath; Fuller, Edgar; Cukic, Bojan; Gururajan, Srikaruth

    2004-01-01

    Rigorous Verification and Validation (V&V) techniques are essential for high assurance systems. Lately, the performance of some of these systems is enhanced by embedded adaptive components in order to cope with environmental changes. Although the ability to adapt is appealing, it actually poses a problem in terms of V&V. Since uncertainties induced by environmental changes have a significant impact on system behavior, the applicability of conventional V&V techniques is limited. In safety-critical applications such as flight control systems, the mechanisms of change must be observed, diagnosed, accommodated and well understood prior to deployment. In this paper, we propose a non-conventional V&V approach suitable for online adaptive systems. We apply our approach to an intelligent flight control system that employs a particular type of Neural Networks (NN) as the adaptive learning paradigm. The presented methodology consists of a novelty detection technique and online stability monitoring tools. The novelty detection technique is based on Support Vector Data Description, which detects novel (abnormal) data patterns. The online stability monitoring tools, based on Lyapunov's stability theory, detect unstable learning behavior in neural networks. Case studies based on a high fidelity simulator of NASA's Intelligent Flight Control System demonstrate a successful application of the presented V&V methodology.
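
    As a rough stand-in for the novelty-detection component: with an RBF kernel, the one-class SVM is mathematically equivalent to Support Vector Data Description, so scikit-learn's implementation can illustrate how abnormal input patterns would be flagged. The data, dimensions, and thresholds below are invented.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)

# Nominal patterns, e.g. NN inputs/outputs recorded during stable flight regimes
nominal = rng.normal(0.0, 1.0, (500, 3))

# With an RBF kernel, the one-class SVM is equivalent to Support Vector Data
# Description, so it stands in here for the SVDD novelty detector.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(nominal)

online_batch = np.vstack([rng.normal(0.0, 1.0, (5, 3)),
                          [[6.0, -6.0, 5.0]]])     # one clearly novel pattern
print(detector.predict(online_batch))             # +1 = nominal, -1 = novel
```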

  20. Adaptive Web Sampling - ArcPad Applet Manual

    DTIC Science & Technology

    2011-09-01

    Only table-of-contents fragments of the manual survive in this record; they cover creating and GPS-enabling the GIS sampling design geodatabase and an appendix on geodatabase design. The applet requires ESRI ArcGIS and functions as a navigational GPS data collection and decision support tool to guide a user through the sampling process.

  1. A ``Limited First Sample'' Approach to Mars Sample Return — Lessons from the Apollo Program

    NASA Astrophysics Data System (ADS)

    Eppler, D. B.; Draper, D.; Gruener, J.

    2012-06-01

    Complex, multi-opportunity Mars sample return approaches have twice failed to be selected as a new start since 1985. We advocate adopting a simpler "grab-and-go" strategy for the initial sample return, similar to the approach taken on Apollo 11.

  2. A System Approach to Adaptive Multi-Modal Sensor Designs

    DTIC Science & Technology

    2010-02-01

    Report-documentation fragments only: contact rhody@cis.rit.edu; program managers Dr. Douglas Cochran and Dr. Kitt C. Reinhardt (AFOSR); Department of Computer Science, School of Engineering, Convent Ave & 138th St, New York, NY 10031; grant FA9550-08-1-0199; approved for public release.

  3. Efficient estimation of abundance for patchily distributed populations via two-phase, adaptive sampling.

    USGS Publications Warehouse

    Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.

    2008-01-01

    Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of

  4. The adaptive, cut-cell Cartesian approach (warts and all)

    NASA Astrophysics Data System (ADS)

    Powell, Kenneth G.

    1995-10-01

    Solution-adaptive methods based on cutting bodies out of Cartesian grids are gaining popularity now that the ways of circumventing the accuracy problems associated with small cut cells have been developed. Researchers are applying Cartesian-based schemes to a broad class of problems now, and, although there is still development work to be done, it is becoming clearer which problems are best suited to the approach (and which are not). The purpose of this paper is to give a candid assessment, based on applying Cartesian schemes to a variety of problems, of the strengths and weaknesses of the approach as it is currently implemented.

  5. The adaptive, cut-cell Cartesian approach (warts and all)

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.

    1995-01-01

    Solution-adaptive methods based on cutting bodies out of Cartesian grids are gaining popularity now that the ways of circumventing the accuracy problems associated with small cut cells have been developed. Researchers are applying Cartesian-based schemes to a broad class of problems now, and, although there is still development work to be done, it is becoming clearer which problems are best suited to the approach (and which are not). The purpose of this paper is to give a candid assessment, based on applying Cartesian schemes to a variety of problems, of the strengths and weaknesses of the approach as it is currently implemented.

  6. Using archaeogenomic and computational approaches to unravel the history of local adaptation in crops

    PubMed Central

    Allaby, Robin G.; Gutaker, Rafal; Clarke, Andrew C.; Pearson, Neil; Ware, Roselyn; Palmer, Sarah A.; Kitchen, James L.; Smith, Oliver

    2015-01-01

    Our understanding of the evolution of domestication has changed radically in the past 10 years, from a relatively simplistic rapid origin scenario to a protracted complex process in which plants adapted to the human environment. The adaptation of plants continued as the human environment changed with the expansion of agriculture from its centres of origin. Using archaeogenomics and computational models, we can observe genome evolution directly and understand how plants adapted to the human environment and the regional conditions to which agriculture expanded. We have applied various archaeogenomics approaches as exemplars to study local adaptation of barley to drought resistance at Qasr Ibrim, Egypt. We show the utility of DNA capture, ancient RNA, methylation patterns and DNA from charred remains of archaeobotanical samples from low latitudes where preservation conditions restrict ancient DNA research to within a Holocene timescale. The genomic level of analyses that is now possible, and the complexity of the evolutionary process of local adaptation, mean that plant studies are set to move to the genome level and to account for the interaction of genes under selection in systems-level approaches. In this way we can understand how plants adapted rapidly as agriculture expanded across many latitudes. PMID:25487329

  7. New multigrid approach for three-dimensional unstructured, adaptive grids

    NASA Technical Reports Server (NTRS)

    Parthasarathy, Vijayan; Kallinderis, Y.

    1994-01-01

    A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in reduction in the number of iterations by a factor of 5.

  8. New multigrid approach for three-dimensional unstructured, adaptive grids

    NASA Astrophysics Data System (ADS)

    Parthasarathy, Vijayan; Kallinderis, Y.

    1994-05-01

    A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in reduction in the number of iterations by a factor of 5.

  10. Sample Size Reassessment and Hypothesis Testing in Adaptive Survival Trials

    PubMed Central

    Magirr, Dominic; Jaki, Thomas; Koenig, Franz; Posch, Martin

    2016-01-01

    Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies. PMID:26863139

  11. Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Thompson, David R.; Hsiang, Kian

    2013-01-01

    This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function for each potential sensing action. These candidate sensings were then cross-checked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
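
    A much-simplified greedy version of the idea, assuming independent site variances (the work models covariances) and invented costs: each step selects the observation with the best ratio of expected variance reduction to operational cost, subject to a budget.

```python
import numpy as np

rng = np.random.default_rng(4)

n_sites = 30
var = rng.uniform(0.5, 2.0, n_sites)     # prior variance at each candidate site
noise = 0.2                              # measurement noise variance
cost = rng.uniform(1.0, 3.0, n_sites)    # operational cost of sensing each site
budget = 12.0
plan = []

while True:
    # Expected variance reduction if site i is observed once (scalar Kalman update)
    gain = var - (var * noise) / (var + noise)
    score = gain / cost                  # information gain per unit cost
    score[plan] = -np.inf                # do not revisit
    score[cost > budget] = -np.inf       # respect the remaining budget
    i = int(np.argmax(score))
    if score[i] == -np.inf:
        break
    plan.append(i)
    budget -= cost[i]
    var[i] = (var[i] * noise) / (var[i] + noise)   # posterior variance update

print("sensing plan:", plan)
```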

  12. Adaptive molecular resolution approach in Hamiltonian form: An asymptotic analysis.

    PubMed

    Zhu, Jinglong; Klein, Rupert; Delle Site, Luigi

    2016-10-01

    Adaptive molecular resolution approaches in molecular dynamics are becoming relevant tools for the analysis of molecular liquids characterized by the interplay of different physical scales. The essential difference among these methods is in the way the change of molecular resolution is made in a buffer (transition) region. In particular a central question concerns the possibility of the existence of a global Hamiltonian which, by describing the change of resolution, is at the same time physically consistent, mathematically well defined, and numerically accurate. In this paper we present an asymptotic analysis of the adaptive process complemented by numerical results and show that under certain mathematical conditions a Hamiltonian, which is physically consistent and numerically accurate, may exist. Such conditions show that molecular simulations in the current computational implementation require systems of large size, and thus a Hamiltonian approach such as the one proposed, at this stage, would not be practical from the numerical point of view. However, the Hamiltonian proposed provides the basis for a simplification and generalization of the numerical implementation of adaptive resolution algorithms to other molecular dynamics codes.

  13. Adaptive approach for nonlinear sensitivity analysis of reaction kinetics.

    PubMed

    Horenko, Illia; Lorenz, Sönke; Schütte, Christof; Huisinga, Wilhelm

    2005-07-15

    We present a unified approach for linear and nonlinear sensitivity analysis for models of reaction kinetics that are stated in terms of systems of ordinary differential equations (ODEs). The approach is based on the reformulation of the ODE problem as a density transport problem described by a Fokker-Planck equation. The resulting multidimensional partial differential equation is herein solved by extending the TRAIL algorithm originally introduced by Horenko and Weiser in the context of molecular dynamics (J. Comp. Chem. 2003, 24, 1921), and is discussed in comparison with Monte Carlo techniques. The extended TRAIL approach is fully adaptive and readily allows one to study the influence of nonlinear dynamical effects. We illustrate the scheme in application to an enzyme-substrate model problem for sensitivity analysis with respect to initial concentrations and parameter values.

  14. Variable Neural Adaptive Robust Control: A Switched System Approach

    SciTech Connect

    Lian, Jianming; Hu, Jianghai; Zak, Stanislaw H.

    2015-05-01

    Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multi-input multi-output uncertain systems. The controllers incorporate a variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. The variable-structure RBF network solves the problem of structure determination associated with fixed-structure RBF networks. It can determine the network structure on-line dynamically by adding or removing radial basis functions according to the tracking performance. The structure variation is taken into account in the stability analysis of the closed-loop system using a switched system approach with the aid of a piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations.
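
    An illustrative variable-structure RBF approximator in the spirit described: a basis function is added when the error is large and the input is far from existing centers, and units with negligible weights are pruned. The growth and pruning thresholds are invented, not the paper's conditions.

```python
import numpy as np

class VariableRBF:
    """Self-organizing RBF approximator: grows and prunes centers online."""

    def __init__(self, width=0.5, err_tol=0.1, dist_tol=0.3, w_tol=1e-3, lr=0.2):
        self.c, self.w = np.empty((0, 1)), np.empty(0)
        self.width, self.err_tol, self.dist_tol = width, err_tol, dist_tol
        self.w_tol, self.lr = w_tol, lr

    def phi(self, x):
        return np.exp(-np.sum((self.c - x)**2, axis=1) / self.width**2)

    def predict(self, x):
        return float(self.w @ self.phi(x)) if len(self.w) else 0.0

    def update(self, x, y):
        e = y - self.predict(x)
        far = (len(self.w) == 0 or
               np.min(np.linalg.norm(self.c - x, axis=1)) > self.dist_tol)
        if abs(e) > self.err_tol and far:            # add a basis function
            self.c = np.vstack([self.c, x])
            self.w = np.append(self.w, e)
        elif len(self.w):                            # otherwise adapt weights
            p = self.phi(x)
            self.w += self.lr * e * p / (1e-9 + p @ p)
            keep = np.abs(self.w) > self.w_tol       # prune negligible units
            self.c, self.w = self.c[keep], self.w[keep]

net = VariableRBF()
for x in np.random.uniform(-3, 3, 2000).reshape(-1, 1):
    net.update(x, np.sin(x[0]))
print(len(net.w), net.predict(np.array([1.0])))      # compare to sin(1) ~ 0.84
```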

  15. Lead Tap Sampling Approaches: What Do They Tell You

    EPA Science Inventory

    There is no single, universally applicable sampling approach for lead in drinking water. The appropriate type of sampling is dictated by the question being asked. There is no reason when a customer asks to have their home water tested to see if it's "safe" that they s...

  16. Adaptive free energy sampling in multidimensional collective variable space using boxed molecular dynamics.

    PubMed

    O'Connor, Mike; Paci, Emanuele; McIntosh-Smith, Simon; Glowacki, David R

    2016-12-22

    The past decade has seen the development of a new class of rare event methods in which molecular configuration space is divided into a set of boundaries/interfaces, and then short trajectories are run between boundaries. For all these methods, an important concern is how to generate boundaries. In this paper, we outline an algorithm for adaptively generating boundaries along a free energy surface in multi-dimensional collective variable (CV) space, building on the boxed molecular dynamics (BXD) rare event algorithm. BXD is a simple technique for accelerating the simulation of rare events and free energy sampling which has proven useful for calculating kinetics and free energy profiles in reactive and non-reactive molecular dynamics (MD) simulations across a range of systems, in both NVT and NVE ensembles. Two key developments outlined in this paper make it possible to automate BXD, and to adaptively map free energy and kinetics in complex systems. First, we have generalized BXD to multidimensional CV space. Using strategies from rigid-body dynamics, we have derived a simple and general velocity-reflection procedure that conserves energy for arbitrary collective variable definitions in multiple dimensions, and show that it is straightforward to apply BXD to sampling in multidimensional CV space so long as the Cartesian gradients ∇CV are available. Second, we have modified BXD to undertake on-the-fly statistical analysis during a trajectory, harnessing the information content latent in the dynamics to automatically determine boundary locations. Such automation not only makes BXD considerably easier to use; it also guarantees optimal boundaries, speeding up convergence. We have tested the multidimensional adaptive BXD procedure by calculating the potential of mean force for a chemical reaction recently investigated using both experimental and computational approaches - i.e., F + CD3CN → DF + D2CN in both the gas phase and a strongly coupled explicit CD3CN solvent
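
    The abstract does not print its reflection formula, but one standard way to realize an energy-conserving reflection for an arbitrary CV with available Cartesian gradient is the mass-weighted momentum inversion p' = p - 2 [(∇CV · M⁻¹p) / (∇CV · M⁻¹∇CV)] ∇CV, which reverses the CV velocity while leaving the kinetic energy exactly unchanged. The sketch below implements and checks that identity on random data.

```python
import numpy as np

def bxd_invert(p, masses, grad_cv):
    """Momentum inversion at a BXD boundary in collective-variable space.

    p        : (n, 3) Cartesian momenta
    masses   : (n,)   atomic masses
    grad_cv  : (n, 3) Cartesian gradient of the collective variable

    Reverses the CV velocity while exactly conserving kinetic energy,
    for any CV whose Cartesian gradient is available.
    """
    minv = 1.0 / masses[:, None]
    lam = np.sum(grad_cv * minv * p) / np.sum(grad_cv * minv * grad_cv)
    return p - 2.0 * lam * grad_cv

# Quick check: kinetic energy is unchanged and the velocity along the CV
# gradient changes sign.
rng = np.random.default_rng(5)
m = rng.uniform(1, 16, 10)
p = rng.normal(size=(10, 3))
g = rng.normal(size=(10, 3))
p2 = bxd_invert(p, m, g)
ke = lambda mom: 0.5 * np.sum(mom**2 / m[:, None])
print(np.isclose(ke(p), ke(p2)),
      np.sum(g / m[:, None] * p), np.sum(g / m[:, None] * p2))
```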

  17. Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology

    NASA Technical Reports Server (NTRS)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal

  18. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.

  19. Sample vial inserts: A better approach for sampling heterogeneous slurry samples in the SRS Defense Waste Processing Facility

    SciTech Connect

    Coleman, C.J.; Goode, S.R.

    1996-05-01

    A convenient and effective new approach for analyzing DWPF samples involves the use of inserts with volumes of 1.5--3 ml placed in the neck of 14 ml sample vials. The inserts have rims that conform to the rim of the vials so that they sit straight and stable in the vial. The DWPF tank sampling system fills the pre-weighed insert rather than the entire vial, so the vial functions only as the insert holder. The shielded cell operator then removes the vial cap and decants the insert containing the sample into a plastic bottle, crucible, etc., for analysis. Inert materials such as Teflon, plastic, and zirconium are used for the insert so it is unnecessary to separate the insert from the sample for most analyses. The key technical advantage of using inserts to take DWPF samples, versus filling sample vials, is that they provide a convenient and almost foolproof way of obtaining and handling small volumes of slurry samples in a shielded cell without corrupting the sample. Since the insert allows the entire sample to be analyzed, this approach eliminates the errors inherent in subsampling the heterogeneous slurries that comprise DWPF samples. Slurry samples can then be analyzed with confidence. Analysis times are dramatically reduced by eliminating the drying and vitrification steps normally used to produce a homogeneous solid sample. Direct dissolution and elemental analysis of slurry samples are achieved in 8 hours or less compared with 40 hours for analysis of vitrified slurry samples. Comparison of samples taken in inserts versus full vials indicates that the insert does not significantly affect sample composition.

  20. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  1. Block-adaptive quantum mechanics: an adaptive divide-and-conquer approach to interactive quantum chemistry.

    PubMed

    Bosson, Maël; Grudinin, Sergei; Redon, Stephane

    2013-03-05

    We present a novel Block-Adaptive Quantum Mechanics (BAQM) approach to interactive quantum chemistry. Although quantum chemistry models are known to be computationally demanding, we achieve interactive rates by focusing computational resources on the most active parts of the system. BAQM is based on a divide-and-conquer technique and constrains some nucleus positions and some electronic degrees of freedom on the fly to simplify the simulation. As a result, each time step may be performed significantly faster, which in turn may accelerate attraction to the neighboring local minima. By applying our approach to the non-self-consistent Atom Superposition and Electron Delocalization Molecular Orbital theory, we demonstrate interactive rates and efficient virtual prototyping for systems containing more than a thousand atoms on a standard desktop computer.

  2. Estimating the abundance of clustered animal population by using adaptive cluster sampling and negative binomial distribution

    NASA Astrophysics Data System (ADS)

    Bo, Yizhou; Shifa, Naima

    2013-09-01

    An estimator for finding the abundance of a rare, clustered and mobile population has been introduced. This model is based on adaptive cluster sampling (ACS) to identify the location of the population and negative binomial distribution to estimate the total in each site. To identify the location of the population we consider both sampling with replacement (WR) and sampling without replacement (WOR). Some mathematical properties of the model are also developed.
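
    A toy illustration of the estimation step, assuming the counts from units intercepted by ACS have already been obtained: a method-of-moments negative binomial fit and a simple expansion to unsampled units. This is not the authors' estimator; in particular, the ACS design weights are omitted here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Counts from the units intercepted by adaptive cluster sampling (simulated)
counts = rng.negative_binomial(n=2.0, p=0.4, size=60)

# Method-of-moments negative binomial fit: mean m, variance v, size parameter k
m, v = counts.mean(), counts.var(ddof=1)
k = m**2 / (v - m) if v > m else np.inf   # overdispersion (clustering) parameter
print(f"mean = {m:.2f}, overdispersion k = {k:.2f}")

# Naive expansion to N units of the same habitat class, with a rough SE
N = 400
total_hat = N * m
se_hat = N * np.sqrt(v / len(counts))
print(f"estimated total: {total_hat:.0f} +/- {1.96 * se_hat:.0f}")
```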

  3. SMI adaptive antenna arrays for weak interfering signals. [Sample Matrix Inversion

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.

    1986-01-01

    The performance of adaptive antenna arrays in the presence of weak interfering signals (below thermal noise) is studied. It is shown that a conventional adaptive antenna array sample matrix inversion (SMI) algorithm is unable to suppress such interfering signals. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is redefined such that the effect of thermal noise on the weights of adaptive arrays is reduced. Thus, the weights are dictated by relatively weak signals. It is shown that the modified algorithm provides the desired interference protection.
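
    The record does not spell out how the covariance matrix is redefined, so the sketch below shows conventional SMI next to one plausible reading of the modification: shrinking the known thermal-noise diagonal so the weights are dictated by the weak interference rather than by noise. Array geometry, signal powers, and the 0.8 shrink factor are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n_el, n_snap, sigma2 = 8, 1000, 1.0        # elements, snapshots, noise power

def steer(u):
    """Steering vector of a half-wavelength-spaced linear array, u = sin(theta)."""
    return np.exp(1j * np.pi * np.arange(n_el) * u)

s_des, s_int = steer(0.0), steer(0.5)
amp = 0.3 * np.sqrt(sigma2)                # interferer ~10 dB below thermal noise
sig = amp * (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=(n_el, n_snap))
                               + 1j * rng.normal(size=(n_el, n_snap)))
x = s_int[:, None] * sig + noise

R = x @ x.conj().T / n_snap                # sample covariance matrix
w_smi = np.linalg.solve(R, s_des)          # conventional SMI weights

# Modified covariance: remove most of the (known) thermal-noise diagonal so
# the weights respond to the weak interference instead of the noise floor.
R_mod = R - 0.8 * sigma2 * np.eye(n_el)
w_mod = np.linalg.solve(R_mod, s_des)

rej = lambda w: 20 * np.log10(abs(w.conj() @ s_int) / abs(w.conj() @ s_des))
# The modified weights typically place a deeper null on the weak interferer.
print(f"SMI: {rej(w_smi):.1f} dB   modified: {rej(w_mod):.1f} dB")
```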

  4. Adaptive intelligent systems for pHealth - an architectural approach.

    PubMed

    González, Carolina; Blobel, Bernd; López, Diego M

    2012-01-01

    Health systems around the globe, especially in developing countries, are facing the challenge of delivering effective, safe, and high quality public health and individualized health services independent of time and location, and with a minimum of allocated resources (pHealth). In this context, health promotion and health education services are very important, especially in primary care settings. The objective of this paper is to describe the architecture of an adaptive intelligent system mainly developed to support the education and training of citizens, but also of health professionals. The proposed architecture describes a system consisting of several agents that cooperatively interact to find and process tutoring materials and disseminate them to users (multi-agent system). A prototype is being implemented with medical students from the Medical Faculty at the University of Cauca (Colombia). In the experimental process, the student's learning style - detected with the Bayesian model - is compared against the learning style obtained from a questionnaire (manual approach).

  5. Adaptive Neuro-fuzzy approach in friction identification

    NASA Astrophysics Data System (ADS)

    Zaiyad Muda @ Ismail, Muhammad

    2016-05-01

    Friction is known to affect the performance of motion control systems, especially in terms of accuracy. Therefore, a number of techniques have been explored and implemented to alleviate the effects of friction. In this project, an Artificial Intelligence (AI) approach is used to model the friction, and the model is then used to compensate for it. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is chosen among several other AI methods because of its reliability and its capability to handle complex computations. ANFIS is a hybrid AI paradigm that combines the best features of neural networks and fuzzy logic. This AI method is effective for nonlinear system identification and compensation and is thus used in this project.

  6. An Approach to Automated Fusion System Design and Adaptation

    PubMed Central

    Fritze, Alexander; Mönks, Uwe; Holst, Christoph-Alexander; Lohweg, Volker

    2017-01-01

    Industrial applications are in transition towards modular and flexible architectures that are capable of self-configuration and -optimisation. This is due to the demand of mass customisation and the increasing complexity of industrial systems. The conversion to modular systems is related to challenges in all disciplines. Consequently, diverse tasks such as information processing, extensive networking, or system monitoring using sensor and information fusion systems need to be reconsidered. The focus of this contribution is on distributed sensor and information fusion systems for system monitoring, which must reflect the increasing flexibility of fusion systems. This contribution thus proposes an approach, which relies on a network of self-descriptive intelligent sensor nodes, for the automatic design and update of sensor and information fusion systems. This article encompasses the fusion system configuration and adaptation as well as communication aspects. Manual interaction with the flexibly changing system is reduced to a minimum. PMID:28300762

  7. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion computation of infrared and visible images. Compared with wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale, multi-direction, and translation invariance. As is known, a fuzzy set is characterized by its membership function (MF), and the commonly known Gaussian fuzzy membership degree can be introduced to establish an adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem employing a gradient descent based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands, respectively, via the NSCT method. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by employing the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation methods, such as standard deviation, Shannon entropy, root-mean-square error, mutual information and the edge-based similarity index.

  8. Sample preparation and biomass determination of SRF model mixture using cryogenic milling and the adapted balance method

    SciTech Connect

    Schnöller, Johannes Aschenbrenner, Philipp; Hahn, Manuel; Fellner, Johann; Rechberger, Helmut

    2014-11-15

    Highlights: • An alternative sample comminution procedure for SRF is tested. • Proof of principle is shown on an SRF model mixture. • The biogenic content of the SRF is analyzed with the adapted balance method. • The novel method combines combustion analysis and a data reconciliation algorithm. • Factors for the variance of the analysis results are statistically quantified. - Abstract: The biogenic fraction of a simple solid recovered fuel (SRF) mixture (80 wt% printer paper/20 wt% high density polyethylene) is analyzed with the in-house developed adapted balance method (aBM). This fairly new approach is a combination of combustion elemental analysis (CHNS) and a data reconciliation algorithm based on successive linearisation for evaluation of the analysis results. This method shows great potential as an alternative way to determine the biomass content in SRF. However, the employed analytical technique (CHNS elemental analysis) restricts the probed sample mass to low amounts in the range of a few hundred milligrams. This requires sample comminution to small grain sizes (<200 μm) to generate representative SRF specimens. This is not easily accomplished for certain material mixtures (e.g. SRF with rubber content) by conventional means of sample size reduction. This paper presents a proof of principle investigation of the sample preparation and analysis of an SRF model mixture with the use of cryogenic impact milling (final sample comminution) and the adapted balance method (determination of biomass content). The so derived sample preparation methodology (cutting mills and cryogenic impact milling) shows better performance in accuracy and precision for the determination of the biomass content than one solely based on cutting mills. The results for the determination of the biogenic fraction are within 1–5% of the data obtained by the reference methods, the selective dissolution method (SDM) and the ¹⁴C method (¹⁴C-M).

  9. Adaptive sampling rate control for networked systems based on statistical characteristics of packet disordering.

    PubMed

    Li, Jin-Na; Er, Meng-Joo; Tan, Yen-Kheng; Yu, Hai-Bin; Zeng, Peng

    2015-09-01

    This paper investigates an adaptive sampling rate control scheme for networked control systems (NCSs) subject to packet disordering. The main objectives of the proposed scheme are (a) to avoid heavy packet disordering existing in communication networks and (b) to stabilize NCSs with packet disordering, transmission delay and packet loss. First, a novel sampling rate control algorithm based on statistical characteristics of disordering entropy is proposed; secondly, an augmented closed-loop NCS that consists of a plant, a sampler and a state-feedback controller is transformed into an uncertain and stochastic system, which facilitates the controller design. Then, a sufficient condition for stochastic stability in terms of Linear Matrix Inequalities (LMIs) is given. Moreover, an adaptive tracking controller is designed such that the sampling period tracks a desired sampling period, which represents a significant contribution. Finally, experimental results are given to illustrate the effectiveness and advantages of the proposed scheme.

  10. Kaolin Quality Prediction from Samples: A Bayesian Network Approach

    SciTech Connect

    Rivas, T.; Taboada, J.; Ordonez, C.; Matias, J. M.

    2009-08-13

    We describe the results of an expert system applied to the evaluation of samples of kaolin for industrial use in paper or ceramic manufacture. Different machine learning techniques - classification trees, support vector machines and Bayesian networks - were applied with the aim of evaluating and comparing their interpretability and prediction capacities. The predictive capacity of these models for the samples analyzed was highly satisfactory, both for ceramic quality and paper quality. However, Bayesian networks generally proved to be the most useful technique for our study, as this approach combines good predictive capacity with excellent interpretability of the kaolin quality structure, as it graphically represents relationships between variables and facilitates what-if analyses.

  11. Career Adapt-Abilities Scale in a French-Speaking Swiss Sample: Psychometric Properties and Relationships to Personality and Work Engagement

    ERIC Educational Resources Information Center

    Rossier, Jerome; Zecca, Gregory; Stauffer, Sarah D.; Maggiori, Christian; Dauwalder, Jean-Pierre

    2012-01-01

    The aim of this study was to analyze the psychometric properties of the Career Adapt-Abilities Scale (CAAS) in a French-speaking Swiss sample and its relationship with personality dimensions and work engagement. The heterogeneous sample of 391 participants (M[subscript age] = 39.59, SD = 12.30) completed the CAAS-International and a short version…

  12. Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach

    ERIC Educational Resources Information Center

    Wang, Yuling

    2010-01-01

    Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.

  13. Adaptive spatial filtering for off-axis digital holographic microscopy based on region recognition approach with iterative thresholding

    NASA Astrophysics Data System (ADS)

    He, Xuefei; Nguyen, Chuong Vinh; Pratap, Mrinalini; Zheng, Yujie; Wang, Yi; Nisbet, David R.; Rug, Melanie; Maier, Alexander G.; Lee, Woei Ming

    2016-12-01

    Here we propose a region-recognition approach with iterative thresholding, adaptively tailored to extract the appropriate region or shape of spatial frequency. To validate the method, we tested it with different samples and imaging conditions (different objectives). We demonstrate that our method provides a useful means of rapid imaging of cellular dynamics in microfluidic and cell cultures.
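
    A hedged sketch of iterative thresholding for spatial-frequency region selection in off-axis holography: a Ridler-Calvard-style threshold on the log spectrum, connected-component labelling, and selection of the region containing the +1 carrier order. The synthetic hologram and carrier frequencies are invented; the paper's exact recognition criteria may differ.

```python
import numpy as np
from scipy import ndimage

def iterative_threshold(a, eps=1e-3):
    """Ridler-Calvard style iteration: threshold = midpoint of class means."""
    t = a.mean()
    while True:
        t_new = 0.5 * (a[a > t].mean() + a[a <= t].mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Synthetic off-axis hologram: unit background plus tilted-reference fringes
y, x = np.mgrid[0:256, 0:256]
holo = 1.0 + 0.8 * np.cos(2 * np.pi * (0.15 * x + 0.05 * y))

F = np.fft.fftshift(np.fft.fft2(holo))
mag = np.log1p(np.abs(F))

mask = mag > iterative_threshold(mag)          # adaptive, not a fixed aperture
labels, _ = ndimage.label(mask)

# Keep the connected region containing the strongest peak in the right half
# plane (the +1 carrier order); the DC term sits in the excluded column 128.
half_mag = np.where(labels[:, 129:] > 0, mag[:, 129:], 0.0)
peak = np.unravel_index(np.argmax(half_mag), half_mag.shape)
order_label = labels[:, 129:][peak]
filtered = np.where(labels == order_label, F, 0)   # spectrum passed downstream
print("selected region size:", int(np.sum(labels == order_label)))
```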

  14. Novel Approaches for Fungal Transcriptomics from Host Samples

    PubMed Central

    Amorim-Vaz, Sara; Sanglard, Dominique

    2016-01-01

    Candida albicans adaptation to the host requires a profound reprogramming of the fungal transcriptome as compared to in vitro laboratory conditions. A detailed knowledge of the C. albicans transcriptome during the infection process is necessary in order to understand which of the fungal genes are important for host adaptation. Such genes could be thought of as potential targets for antifungal therapy. The acquisition of the C. albicans transcriptome is, however, technically challenging due to the low proportion of fungal RNA in host tissues. Two emerging technologies were used recently to circumvent this problem. One consists of the detection of low abundance fungal RNA using capture and reporter gene probes which is followed by emission and quantification of resulting fluorescent signals (nanoString). The other is based first on the capture of fungal RNA by short biotinylated oligonucleotide baits covering the C. albicans ORFome permitting fungal RNA purification. Next, the enriched fungal RNA is amplified and subjected to RNA sequencing (RNA-seq). Here we detail these two transcriptome approaches and discuss their advantages and limitations and future perspectives in microbial transcriptomics from host material. PMID:26834721

  15. Novel Approaches for Fungal Transcriptomics from Host Samples.

    PubMed

    Amorim-Vaz, Sara; Sanglard, Dominique

    2015-01-01

    Candida albicans adaptation to the host requires a profound reprogramming of the fungal transcriptome as compared to in vitro laboratory conditions. A detailed knowledge of the C. albicans transcriptome during the infection process is necessary in order to understand which of the fungal genes are important for host adaptation. Such genes could be thought of as potential targets for antifungal therapy. The acquisition of the C. albicans transcriptome is, however, technically challenging due to the low proportion of fungal RNA in host tissues. Two emerging technologies were used recently to circumvent this problem. One consists of the detection of low abundance fungal RNA using capture and reporter gene probes which is followed by emission and quantification of resulting fluorescent signals (nanoString). The other is based first on the capture of fungal RNA by short biotinylated oligonucleotide baits covering the C. albicans ORFome permitting fungal RNA purification. Next, the enriched fungal RNA is amplified and subjected to RNA sequencing (RNA-seq). Here we detail these two transcriptome approaches and discuss their advantages and limitations and future perspectives in microbial transcriptomics from host material.

  16. Instantaneous GNSS attitude determination: A Monte Carlo sampling approach

    NASA Astrophysics Data System (ADS)

    Sun, Xiucong; Han, Chao; Chen, Pei

    2017-04-01

    A novel instantaneous GNSS ambiguity resolution approach which makes use of only single-frequency carrier phase measurements for ultra-short baseline attitude determination is proposed. The Monte Carlo sampling method is employed to obtain the probability density function of ambiguities from a quaternion-based GNSS-attitude model and the LAMBDA method strengthened with a screening mechanism is then utilized to fix the integer values. Experimental results show that 100% success rate could be achieved for ultra-short baselines.

  17. Dynamically optimized Wang-Landau sampling with adaptive trial moves and modification factors.

    PubMed

    Koh, Yang Wei; Lee, Hwee Kuan; Okabe, Yutaka

    2013-11-01

    The density of states of continuous models is known to span many orders of magnitude at different energies due to the small volume of phase space near the ground state. Consequently, the traditional Wang-Landau sampling, which uses the same trial move for all energies, faces difficulties sampling the low-entropic states. We developed an adaptive variant of the Wang-Landau algorithm that very effectively samples the density of states of continuous models across the entire energy range. By extending the acceptance ratio method of Bouzida, Kumar, and Swendsen such that the step size of the trial move and the acceptance rate are adapted in an energy-dependent fashion, the random walker efficiently adapts its sampling according to the local phase space structure. The Wang-Landau modification factor is also made energy-dependent in accordance with the step size, enhancing the accumulation of the density of states. Numerical simulations show that our proposed method performs much better than the traditional Wang-Landau sampling.
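
    A minimal sketch of the idea on a 1D double-well toy model, assuming binned energies and a single global modification factor (the paper additionally makes the modification factor energy-dependent). The per-bin step sizes are adapted toward roughly 50% acceptance in the spirit of the acceptance-ratio method; all parameters are illustrative.

```python
import numpy as np

def wang_landau_adaptive(n_steps=200_000, n_bins=50, seed=1):
    """Toy Wang-Landau run on E(x) = (x^2 - 1)^2 with an energy-dependent
    trial step adapted toward ~50% acceptance (Bouzida-Kumar-Swendsen
    style acceptance-ratio tuning)."""
    rng = np.random.default_rng(seed)
    E = lambda x: (x * x - 1.0) ** 2
    e_max = 5.0
    bin_of = lambda e: min(int(e / e_max * n_bins), n_bins - 1)

    ln_g = np.zeros(n_bins)          # running estimate of ln(density of states)
    hist = np.zeros(n_bins)
    step = np.full(n_bins, 0.5)      # per-energy-bin trial step size
    acc = np.zeros(n_bins)
    tries = np.zeros(n_bins)
    ln_f = 1.0                       # global modification factor
    x = 0.0
    b = bin_of(E(x))

    for k in range(n_steps):
        b_old = b
        tries[b_old] += 1
        x_new = x + rng.uniform(-step[b_old], step[b_old])
        e_new = E(x_new)
        if e_new < e_max and np.log(rng.random()) < ln_g[b_old] - ln_g[bin_of(e_new)]:
            x, b = x_new, bin_of(e_new)
            acc[b_old] += 1
        ln_g[b] += ln_f              # update the currently visited bin
        hist[b] += 1
        if (k + 1) % 5000 == 0:      # adapt step sizes, then check flatness
            rate = np.where(tries > 0, acc / np.maximum(tries, 1.0), 0.5)
            step *= np.clip(rate / 0.5, 0.5, 2.0)
            acc[:] = 0.0
            tries[:] = 0.0
            visited = hist[hist > 0]
            if visited.size > 0 and visited.min() > 0.8 * visited.mean():
                ln_f /= 2.0          # tighten the modification factor
                hist[:] = 0.0
    return ln_g - ln_g.min()

print(np.round(wang_landau_adaptive()[:8], 1))
```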

  18. Region and edge-adaptive sampling and boundary completion for segmentation

    SciTech Connect

    Dillard, Scott E; Prasad, Lakshman; Grazzini, Jacopo A

    2010-01-01

    Edge detection produces a set of points that are likely to lie on discontinuities between objects within an image. We consider faces of the Gabriel graph of these points, a sub-graph of the Delaunay triangulation. Features are extracted by merging these faces using size, shape and color cues. We measure regional properties of faces using a novel shape-dependent sampling method that overcomes undesirable sampling bias of the Delaunay triangles. Instead, sampling is biased so as to smooth regional statistics within the detected object boundaries, and this smoothing adapts to local geometric features of the shape such as curvature, thickness and straightness.
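
    The Gabriel-graph construction itself is compact; here is a sketch assuming 2D points and SciPy's Delaunay triangulation (the paper's face-merging and shape-adaptive sampling stages are not reproduced).

```python
import numpy as np
from scipy.spatial import Delaunay

def gabriel_edges(points):
    """Extract Gabriel graph edges from the Delaunay triangulation.

    A Delaunay edge (p, q) is a Gabriel edge iff the open disc whose
    diameter is the segment pq contains no other input point.
    """
    pts = np.asarray(points, float)
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    gabriel = []
    for a, b in edges:
        mid = 0.5 * (pts[a] + pts[b])
        r2 = np.sum((pts[a] - mid) ** 2)          # squared disc radius
        d2 = np.sum((pts - mid) ** 2, axis=1)
        d2[[a, b]] = np.inf                        # ignore the edge's endpoints
        if np.all(d2 > r2):
            gabriel.append((a, b))
    return gabriel

pts = np.random.default_rng(0).random((30, 2))
print(len(gabriel_edges(pts)), "Gabriel edges")
```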

  19. A Functional Approach To Uncover the Low-Temperature Adaptation Strategies of the Archaeon Methanosarcina barkeri

    PubMed Central

    McCay, Paul; Fuszard, Matthew; Botting, Catherine H.; Abram, Florence; O'Flaherty, Vincent

    2013-01-01

    Low-temperature anaerobic digestion (LTAD) technology is underpinned by a diverse microbial community. The methanogenic archaea represent a key functional group in these consortia, undertaking CO2 reduction as well as acetate and methylated C1 metabolism with subsequent biogas (40 to 60% CH4 and 30 to 50% CO2) formation. However, the cold adaptation strategies, which allow methanogens to function efficiently in LTAD, remain unclear. Here, a pure-culture proteomic approach was employed to study the functional characteristics of Methanosarcina barkeri (optimum growth temperature, 37°C), which has been detected in LTAD bioreactors. Two experimental approaches were undertaken. The first approach aimed to characterize a low-temperature shock response (LTSR) of M. barkeri DSMZ 800T grown at 37°C with a temperature drop to 15°C, while the second experimental approach aimed to examine the low-temperature adaptation strategies (LTAS) of the same strain when it was grown at 15°C. The latter experiment employed cell viability and growth measurements (optical density at 600 nm [OD600]), which directly compared M. barkeri cells grown at 15°C with those grown at 37°C. During the LTSR experiment, a total of 127 proteins were detected in 37°C and 15°C samples, with 20 proteins differentially expressed with respect to temperature, while in the LTAS experiment 39% of proteins identified were differentially expressed between phases of growth. Functional categories included methanogenesis, cellular information processing, and chaperones. By applying a polyphasic approach (proteomics and growth studies), insights into the low-temperature adaptation capacity of this mesophilically characterized methanogen were obtained which suggest that the metabolically diverse Methanosarcinaceae could be functionally relevant for LTAD systems. PMID:23645201

  20. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    PubMed Central

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlap between "reference founding core" distributions and the distributions obtained from sampling the present-day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case, true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043

  1. Mars sample return, updated to a groundbreaking approach

    NASA Technical Reports Server (NTRS)

    Mattingly, R.; Matovsek, S.; Jordan, F.

    2002-01-01

    A Mars Sample Return (MSR) mission is a goal of the Mars Program. Recently, NASA and JPL have been studying the possibility of a Mars Sample Return some time in the next decade of Mars exploration. In 2001, JPL commissioned four industry teams to make a fresh examination of MSR architectures. Six papers on these studies were presented at last year's conference. As new fiscal realities of a cost-capped Mars Exploration Program unfolded, it was evident that these MSR concepts, which included mobility and subsurface sample acquisition, did not fit reasonably within a balanced program. Therefore, at the request of NASA and the science community, JPL asked the four industry teams plus JPL's Team X to explore ways to reduce the cost of an MSR. A NASA-created MSR Science Steering Group (SSG) established a reduced set of requirements for these new studies that built upon the previous year's work. As a result, a new 'Groundbreaking' approach to MSR was established that is well understood based on the studies and independent cost assessments by Aerospace Corporation and SAIC. The Groundbreaking approach appears to be what a contemporary, balanced Mars Exploration Program can afford, has turned out to be justifiable by the MSR Science Steering Group, and has been endorsed by the Mars science community at large. This paper gives a brief overview of the original 2001 study results and discusses the process leading to the new studies, the studies themselves, and the results.

  2. New approaches to nanoparticle sample fabrication for atom probe tomography.

    PubMed

    Felfer, P; Li, T; Eder, K; Galinski, H; Magyar, A P; Bell, D C; Smith, G D W; Kruse, N; Ringer, S P; Cairney, J M

    2015-12-01

    Due to their unique properties, nano-sized materials such as nanoparticles and nanowires are receiving considerable attention. However, little data is available about their chemical makeup at the atomic scale, especially in three dimensions (3D). Atom probe tomography is able to answer many important questions about these materials if the challenge of producing a suitable sample can be overcome. In order to achieve this, the nanomaterial needs to be positioned within the end of a tip and fixed there so the sample possesses sufficient structural integrity for analysis. Here we provide a detailed description of various techniques that have been used to position nanoparticles on substrates for atom probe analysis. In some of the approaches, this is combined with deposition techniques to incorporate the particles into a solid matrix, and focused ion beam processing is then used to fabricate atom probe samples from this composite. Using these approaches, data have been obtained from 10-20 nm core-shell nanoparticles that were extracted directly from suspension (i.e. with no chemical modification) with a resolution of better than ± 1 nm.

  3. Self-adaptive sampling rate data acquisition in JET's correlation reflectometer

    SciTech Connect

    Arcas, G. de; Lopez, J. M.; Ruiz, M.; Barrera, E.; Vega, J.; Fonseca, A.

    2008-10-15

    Data acquisition systems with self-adaptive sampling rate capabilities have been proposed as a solution to reduce the sheer amount of data collected in every discharge of present fusion devices. This paper discusses the design of such a system for its use in the KG8B correlation reflectometer at JET. The system, which is based on the ITMS platform, continuously adapts the sample rate during the acquisition depending on the signal bandwidth. Data are acquired continuously at the expected maximum sample rate and transferred to a memory buffer in the host processor. Thereafter, the rest of the process is performed in software. Data are read from the memory buffer in blocks, and an intelligent decimation algorithm is applied to each block. The decimation algorithm determines the signal bandwidth for each block in order to choose the optimum sample rate for that block, and from there the decimation factor to be used. Memory buffers are used to adapt the throughput of the three main software modules (data acquisition, processing, and storage) following a typical producer-consumer architecture. The system optimizes the amount of data collected while preserving the information content. Design issues are discussed and results of performance evaluation are presented.
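
    A plausible stand-in for the per-block decimation step, assuming the bandwidth is estimated as the frequency below which a fixed fraction of the spectral power lies (the actual JET algorithm is not described in this abstract). A production version would low-pass filter before subsampling; here the input is assumed already band-limited.

```python
import numpy as np

def adaptive_decimate(block, fs_max, power_frac=0.99, factors=(1, 2, 4, 8)):
    """Pick a decimation factor for one acquisition block.

    Estimate the block's occupied bandwidth from its FFT (the frequency
    below which `power_frac` of the spectral power lies), then choose the
    largest decimation factor whose Nyquist rate still covers it.
    """
    spec = np.abs(np.fft.rfft(block - block.mean())) ** 2
    freqs = np.fft.rfftfreq(block.size, d=1.0 / fs_max)
    cum = np.cumsum(spec) / spec.sum()
    bw = freqs[np.searchsorted(cum, power_frac)]
    for d in sorted(factors, reverse=True):
        if fs_max / d / 2.0 > bw:        # Nyquist criterion after decimation
            return block[::d], fs_max / d
    return block, fs_max

# 10 kHz block containing only a 200 Hz tone -> heavy decimation is safe
fs = 10_000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 200 * t)
y, fs_out = adaptive_decimate(x, fs)
print(fs_out, y.size)
```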

  4. Determination of conformational free energies of peptides by multidimensional adaptive umbrella sampling

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Gu, Yan; Liu, Haiyan

    2006-09-01

    We improve the multidimensional adaptive umbrella sampling method for the computation of conformational free energies of biomolecules. The conformational transition between the α-helical and β-hairpin conformational states of an alanine decapeptide is used as an example. Convergence properties of the weighted-histogram-analysis-based adaptive umbrella sampling can be improved by using multiple replicas in each adaptive iteration and by using adaptive updating of the bounds of the umbrella potential. Using positional root-mean-square deviations from structures of the α-helical and β-hairpin reference states as reaction coordinates, we obtained well-converged free energy surfaces of both the in-vacuum and in-solution decapeptide systems. From the free energy surfaces, well-converged relative free energies between the two conformational states can be derived. Advantages and disadvantages of different methods for obtaining conformational free energies as well as implications of our results in studying conformational transitions of proteins and in improving force fields are discussed.

  5. Taking a broad approach to public health program adaptation: adapting a family-based diabetes education program.

    PubMed

    Reinschmidt, Kerstin M; Teufel-Shone, Nicolette I; Bradford, Gail; Drummond, Rebecca L; Torres, Emma; Redondo, Floribella; Elenes, Jo Jean; Sanders, Alicia; Gastelum, Sylvia; Moore-Monroy, Martha; Barajas, Salvador; Fernandez, Lourdes; Alvidrez, Rosy; de Zapien, Jill Guernsey; Staten, Lisa K

    2010-04-01

    Diabetes health disparities among Hispanic populations have been countered with federally funded health promotion and disease prevention programs. Dissemination has focused on program adaptation to local cultural contexts for greater acceptability and sustainability. Taking a broader approach and drawing on our experience in Mexican American communities at the U.S.-Mexico Border, we demonstrate how interventions are adapted at the intersection of multiple cultural contexts: the populations targeted, the community- and university-based entities designing and implementing interventions, and the field team delivering the materials. Program adaptation involves negotiations between representatives of all contexts and is imperative in promoting local ownership and program sustainability.

  6. Mapping the genomic architecture of adaptive traits with interspecific introgressive origin: a coalescent-based approach.

    PubMed

    Hejase, Hussein A; Liu, Kevin J

    2016-01-11

    Recent studies of eukaryotes including human and Neandertal, mice, and butterflies have highlighted the major role that interspecific introgression has played in adaptive trait evolution. A common question arises in each case: what is the genomic architecture of the introgressed traits? One common approach that can be used to address this question is association mapping, which looks for genotypic markers that have significant statistical association with a trait. It is well understood that sample relatedness can be a confounding factor in association mapping studies if not properly accounted for. Introgression and other evolutionary processes (e.g., incomplete lineage sorting) typically introduce variation among local genealogies, which can also differ from global sample structure measured across all genomic loci. In contrast, state-of-the-art association mapping methods assume fixed sample relatedness across the genome, which can lead to spurious inference. We therefore propose a new association mapping method called Coal-Map, which uses coalescent-based models to capture local genealogical variation alongside global sample structure. Using simulated and empirical data reflecting a range of evolutionary scenarios, we compare the performance of Coal-Map against EIGENSTRAT, a leading association mapping method in terms of its popularity, power, and type I error control. Our empirical data makes use of hundreds of mouse genomes for which adaptive interspecific introgression has recently been described. We found that Coal-Map's performance is comparable to or better than EIGENSTRAT's in terms of statistical power and false positive rate. Coal-Map's performance advantage was greatest on model conditions that most closely resembled empirically observed scenarios of adaptive introgression. These conditions had: (1) causal SNPs contained in one or a few introgressed genomic loci and (2) varying rates of gene flow - from high rates to very low rates where incomplete lineage sorting dominates.

  7. Approach for Using Learner Satisfaction to Evaluate the Learning Adaptation Policy

    ERIC Educational Resources Information Center

    Jeghal, Adil; Oughdir, Lahcen; Tairi, Hamid; Radouane, Abdelhay

    2016-01-01

    The learning adaptation is a very important phase in a learning situation in human learning environments. This paper presents the authors' approach used to evaluate the effectiveness of adaptive learning systems. This approach is based on the analysis of learner satisfaction responses collected by a questionnaire on a learning situation; to analyze…

  8. Adaptation and Validation of the Sexual Assertiveness Scale (SAS) in a Sample of Male Drug Users.

    PubMed

    Vallejo-Medina, Pablo; Sierra, Juan Carlos

    2015-04-21

    The aim of the present study was to adapt and validate the Sexual Assertiveness Scale (SAS) in a sample of male drug users. A sample of 326 male drug users and 322 non-clinical males was selected by cluster sampling and convenience sampling, respectively. Results showed that the scale had good psychometric properties and adequate internal consistency reliability (Initiation = .66, Refusal = .74 and STD-P = .79). An evaluation of the invariance showed strong factor equivalence between both samples. A high and moderate effect of Differential Item Functioning was only found in items 1 and 14 (ΔR² Nagelkerke = .076 and .037, respectively). We strongly recommend not using item 1 if the goal is to compare the scores of both groups, otherwise the comparison will be biased. Correlations obtained between the CSFQ-14 and the safe sex ratio and the SAS subscales were significant (CI = 95%) and indicated good concurrent validity. Scores of male drug users were similar to those of non-clinical males. Therefore, the adaptation of the SAS to drug users provides enough guarantees for reliable and valid use in both clinical practice and research, although care should be taken with item 1.
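
    For readers unfamiliar with the reported statistic, one common way to compute a Nagelkerke ΔR² DIF effect is logistic regression of the item response on the total score, with and without group membership as a predictor. The sketch below uses statsmodels and simulated data and is not the authors' exact procedure; all variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def nagelkerke_r2(ll_model, ll_null, n):
    """Nagelkerke pseudo-R^2 from model and null log-likelihoods."""
    r2_cs = 1.0 - np.exp(2.0 * (ll_null - ll_model) / n)
    return r2_cs / (1.0 - np.exp(2.0 * ll_null / n))

def dif_delta_r2(item, total, group):
    """Uniform-DIF screen for one dichotomous item: the Nagelkerke R^2
    gain when group membership is added to a logistic regression of the
    item response on the total test score."""
    n = item.size
    ll_null = sm.Logit(item, np.ones((n, 1))).fit(disp=0).llf
    m1 = sm.Logit(item, sm.add_constant(total)).fit(disp=0)
    m2 = sm.Logit(item, sm.add_constant(np.column_stack([total, group]))).fit(disp=0)
    return nagelkerke_r2(m2.llf, ll_null, n) - nagelkerke_r2(m1.llf, ll_null, n)

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)
ability = rng.normal(size=n)
item = (ability + 0.8 * group + rng.logistic(size=n) > 0).astype(int)  # biased item
total = ability + rng.normal(scale=0.3, size=n)                        # proxy total score
print(round(dif_delta_r2(item, total, group), 3))
```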

  9. Self-organizing adaptive map: autonomous learning of curves and surfaces from point samples.

    PubMed

    Piastra, Marco

    2013-05-01

    Competitive Hebbian Learning (CHL) (Martinetz, 1993) is a simple and elegant method for estimating the topology of a manifold from point samples. The method has been adopted in a number of self-organizing networks described in the literature and has given rise to related studies in the fields of geometry and computational topology. Recent results from these fields have shown that a faithful reconstruction can be obtained using the CHL method only for curves and surfaces. Within these limitations, these findings constitute a basis for defining a CHL-based, growing self-organizing network that produces a faithful reconstruction of an input manifold. The SOAM (Self-Organizing Adaptive Map) algorithm adapts its local structure autonomously in such a way that it can match the features of the manifold being learned. The adaptation process is driven by the defects arising when the network structure is inadequate, which cause a growth in the density of units. Regions of the network undergo a phase transition and change their behavior whenever a simple, local condition of topological regularity is met. The phase transition is eventually completed across the entire structure and the adaptation process terminates. In specific conditions, the structure thus obtained is homeomorphic to the input manifold. During the adaptation process, the network also has the capability to focus on the acquisition of input point samples in critical regions, with a substantial increase in efficiency. The behavior of the network has been assessed experimentally with typical data sets for surface reconstruction, including suboptimal conditions, e.g. with undersampling and noise.
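
    The CHL rule referenced above is simple enough to state in a few lines: for each input sample, connect its two nearest reference units with an edge. The sketch below keeps the reference units fixed and omits SOAM's growth and phase-transition machinery; the circle data set is illustrative.

```python
import numpy as np

def competitive_hebbian_learning(samples, units, n_epochs=5):
    """Competitive Hebbian Learning (Martinetz, 1993): for each input
    sample, add an edge between its two nearest reference units. With a
    dense enough unit set, the resulting graph approximates the topology
    of the sampled manifold."""
    edges = set()
    for _ in range(n_epochs):
        for x in samples:
            d = np.sum((units - x) ** 2, axis=1)
            i, j = np.argsort(d)[:2]
            edges.add((min(i, j), max(i, j)))
    return edges

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 500)                # points on a circle (a 1D manifold)
samples = np.column_stack([np.cos(theta), np.sin(theta)])
units = samples[rng.choice(500, 40, replace=False)]   # fixed reference units
print(len(competitive_hebbian_learning(samples, units)), "edges")
```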

  10. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.

  11. An integrated sampling and analysis approach for improved biodiversity monitoring

    USGS Publications Warehouse

    DeWan, Amielle A.; Zipkin, Elise

    2010-01-01

    Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species in conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.

  12. An Integrated Sampling and Analysis Approach for Improved Biodiversity Monitoring

    NASA Astrophysics Data System (ADS)

    Dewan, Amielle A.; Zipkin, Elise F.

    2010-05-01

    Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species in conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.

  13. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequencies associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with a Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting fault-related impulses.
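
    A simplified sketch of the narrow-band demodulation chain (a Morlet-like Gaussian bandpass followed by an envelope spectrum), without the paper's PF-based parameter selection or spectral subtraction. The simulated fault signal and all parameters are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def morlet_envelope_spectrum(x, fs, fc, bw):
    """Bandpass the signal with a Gaussian (Morlet-like) window in the
    frequency domain around centre frequency fc with bandwidth bw, then
    return the spectrum of its Hilbert envelope. Bearing fault
    characteristic frequencies appear as peaks in this spectrum."""
    n = x.size
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    window = np.exp(-0.5 * ((freqs - fc) / bw) ** 2)
    x_filt = np.fft.irfft(np.fft.rfft(x) * window, n)
    env = np.abs(hilbert(x_filt))
    env -= env.mean()
    return freqs, np.abs(np.fft.rfft(env))

# Simulated bearing signal: 3 kHz resonance amplitude-modulated at 120 Hz
fs, n = 20_000, 2 ** 14
t = np.arange(n) / fs
x = (1 + np.cos(2 * np.pi * 120 * t)) * np.sin(2 * np.pi * 3000 * t)
x += 0.5 * np.random.default_rng(0).normal(size=n)
f, mag = morlet_envelope_spectrum(x, fs, fc=3000, bw=500)
print(f[np.argmax(mag[1:200]) + 1], "Hz (expected ~120)")
```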

  14. Monte Carlo path sampling approach to modeling aeolian sediment transport

    NASA Astrophysics Data System (ADS)

    Hardin, E. J.; Mitasova, H.; Mitas, L.

    2011-12-01

    but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.

  15. An adaptive sampling algorithm for Doppler-shift fluorescence velocimetry in high-speed flows

    NASA Astrophysics Data System (ADS)

    Le Page, Laurent M.; O'Byrne, Sean

    2017-03-01

    We present an approach that improves the efficiency of sampling a given domain to locate the peak of Gaussian line-shapes. The method uses parameter estimates obtained from previous measurements to determine subsequent sampling locations. The method may be applied to determine the location of a spectral peak where the monetary or time cost is too high to allow a less efficient search method, such as sampling at uniformly distributed domain locations, to be used. We demonstrate the algorithm using linear least-squares fitting of log-scaled planar laser-induced fluorescence data combined with Monte-Carlo simulation of measurements, to accurately determine the Doppler-shifted fluorescence peak frequency for each pixel of a fluorescence image. A simulated comparison between this approach and a uniformly spaced sampling approach is carried out using fits both for a single pixel and for a collection of pixels representing the fluorescence images that would be obtained in a hypersonic flow facility. In all cases, the peak locations of Doppler-shifted line-shapes were determined to a similar precision with fewer samples than could be achieved using the more typical uniformly distributed sampling approach.
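
    The linear least-squares step is easy to reproduce: the log of a Gaussian line-shape is a parabola, so the fitted vertex recovers the peak frequency. The sketch below shows that step only; the adaptive placement of subsequent sampling locations around the running estimate is not reproduced, and the numbers are illustrative.

```python
import numpy as np

def gaussian_peak_from_log_fit(f, y):
    """Estimate the peak location of a Gaussian line-shape by fitting a
    parabola to log-scaled intensities with linear least squares: for
    y = A*exp(-(f - f0)^2 / (2 s^2)), ln(y) is quadratic in f, and the
    vertex -b/(2a) of the fitted parabola recovers f0."""
    a, b, c = np.polyfit(f, np.log(y), 2)
    return -b / (2.0 * a)

rng = np.random.default_rng(3)
f0_true, sigma = 1.25, 0.4
f = np.linspace(0.0, 3.0, 12)          # a handful of sampling locations
y = np.exp(-(f - f0_true) ** 2 / (2 * sigma ** 2))
y *= 1 + 0.02 * rng.normal(size=f.size)  # small multiplicative noise
print(round(gaussian_peak_from_log_fit(f, y), 3))  # ~1.25
```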

  16. The Portuguese adaptation of the Gudjonsson Suggestibility Scale (GSS1) in a sample of inmates.

    PubMed

    Pires, Rute; Silva, Danilo R; Ferreira, Ana Sousa

    2014-01-01

    This paper comprises two studies which address the validity of the Portuguese adaptation of the Gudjonsson Suggestibility Scale, GSS1. In study 1, the means and standard deviations for the suggestibility results of a sample of Portuguese inmates (N=40, mean age = 37.5 years, SD = 8.1) were compared to those of a sample of Icelandic inmates (Gudjonsson, 1997; Gudjonsson & Sigurdsson, 1996). Portuguese inmates' results were in line with the original results. In study 2, the means and standard deviations for the suggestibility results of the sample of Portuguese inmates were compared to those of a general Portuguese population sample (N=57, mean age = 36.1 years, SD = 12.7). The forensic sample obtained significantly higher scores in suggestibility measures than the general population sample. ANOVA confirmed that the increased suggestibility in the inmates sample was due to the limited memory capacity of this latter group. Given that the results of both studies 1 and 2 are in keeping with the author's original results (Gudjonsson, 1997), this may be regarded as a confirmation of the validity of the Portuguese GSS1.

  17. Influence of wave-front sampling in adaptive optics retinal imaging

    PubMed Central

    Laslandes, Marie; Salas, Matthias; Hitzenberger, Christoph K.; Pircher, Michael

    2017-01-01

    Retinal adaptive optics (AO) instruments have used a wide range of wave-front sampling densities relative to the number of corrector elements. We developed a model in order to characterize the link between the number of actuators, the number of wave-front sampling points, and AO correction performance. Based on available data from aberration measurements in the human eye, 1000 wave-fronts were generated for the simulations. The AO correction performance in the presence of these representative aberrations was simulated for different deformable mirror and Shack-Hartmann wave-front sensor combinations. Predictions of the model were experimentally tested through in vivo measurements in 10 eyes, including retinal imaging with an AO scanning laser ophthalmoscope. According to our study, a ratio of 2 between wave-front sampling points and actuator elements is sufficient to achieve high-resolution in vivo images of photoreceptors. PMID:28271004

  18. Taking a Broad Approach to Public Health Program Adaptation: Adapting a Family-Based Diabetes Education Program

    ERIC Educational Resources Information Center

    Reinschmidt, Kerstin M.; Teufel-Shone, Nicolette I.; Bradford, Gail; Drummond, Rebecca L.; Torres, Emma; Redondo, Floribella; Elenes, Jo Jean; Sanders, Alicia; Gastelum, Sylvia; Moore-Monroy, Martha; Barajas, Salvador; Fernandez, Lourdes; Alvidrez, Rosy; de Zapien, Jill Guernsey; Staten, Lisa K.

    2010-01-01

    Diabetes health disparities among Hispanic populations have been countered with federally funded health promotion and disease prevention programs. Dissemination has focused on program adaptation to local cultural contexts for greater acceptability and sustainability. Taking a broader approach and drawing on our experience in Mexican American…

  19. Adaptive decision making in a dynamic environment: a test of a sequential sampling model of relative judgment.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew

    2013-09-01

    Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses.
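
    A minimal sequential sampling (random-walk-to-threshold) simulation illustrating the criterion adjustment discussed above: raising the criterion trades response time for accuracy. The parameters are arbitrary, and this is not the authors' fitted model.

```python
import numpy as np

def sequential_sampling_trial(drift, criterion, noise=1.0, dt=0.01,
                              max_steps=10_000, rng=None):
    """One trial of a relative-judgment sequential sampling model: noisy
    evidence accumulates until it crosses +criterion or -criterion.
    Returns (decision, response time)."""
    rng = rng or np.random.default_rng()
    x = 0.0
    for step in range(1, max_steps + 1):
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        if abs(x) >= criterion:
            return x > 0, step * dt
    return None, max_steps * dt

rng = np.random.default_rng(4)
for criterion in (0.5, 1.5):      # low vs high response caution
    trials = [sequential_sampling_trial(0.8, criterion, rng=rng)
              for _ in range(2000)]
    acc = np.mean([d for d, _ in trials])   # drift > 0, so True is correct
    rt = np.mean([t for _, t in trials])
    print(f"criterion={criterion}: accuracy={acc:.2f}, mean RT={rt:.2f}s")
```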

  20. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    PubMed Central

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-01-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids. PMID:26576666

  1. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    NASA Astrophysics Data System (ADS)

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-11-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids.

  2. The Canadian approach to the settlement and adaptation of immigrants.

    PubMed

    1986-01-01

    Canada has been the host to over 400,000 refugees since World War II. The settlement and adaptation process is supported by the federal government and by the majority of provincial governments. Under the national and regional Employment and Immigration Commission (CEIC) settlement organizations, the major programs administered to effect the adaptation of newcomers are: 1) the Adjustment Assistance Program, 2) the Immigrant Settlement and Adaptation Program, 3) the Language/Skill Training Program, and 4) the Employment Services Program. Ontario, the recipient of more than half the newcomers that arrive in Canada each year, pursues active programs in the reception of newcomers through its Welcome House Program, which offers a wide range of reception services to the newcomers. The employment and unemployment experiences of refugees are very much influenced by the prevailing labor market conditions, the refugees' proficiency in the country's official languages, the amount of sympathy evoked by the media reports on the plight of refugees, the availability of people of the same ethnic origin already well settled in the country, and the adaptability of the refugees themselves. The vast majority of refugee groups that came to Canada during the last quarter century seem to have adjusted well economically, despite having had difficulty in entering the occupations they intended to join. It is calculated that an average of $6607 per arrival is needed to cover the CEIC program costs of 1983-1984.

  3. The Detroit Approach to Adapted Physical Education and Recreation.

    ERIC Educational Resources Information Center

    Elkins, Bruce; Czapski, Stephen

    The report describes Detroit's Adaptive Physical Education Consortium Project in Michigan. Among the main objectives of the project are to coordinate all physical education and recreation services to the handicapped in the Detroit area; to facilitate the mainstreaming of capable handicapped individuals into existing "regular" physical…

  4. Enhancing Adaptive Filtering Approaches for Land Data Assimilation Systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent work has presented the initial application of adaptive filtering techniques to land surface data assimilation systems. Such techniques are motivated by our current lack of knowledge concerning the structure of large-scale error in either land surface modeling output or remotely-sensed estima...

  5. Adaptive E-Learning Environments: Research Dimensions and Technological Approaches

    ERIC Educational Resources Information Center

    Di Bitonto, Pierpaolo; Roselli, Teresa; Rossano, Veronica; Sinatra, Maria

    2013-01-01

    One of the most closely investigated topics in e-learning research has always been the effectiveness of adaptive learning environments. The technological evolutions that have dramatically changed the educational world in the last six decades have allowed ever more advanced and smarter solutions to be proposed. The focus of this paper is to depict…

  6. An Intelligent Tutoring System Approach to Adaptive Instructional Systems

    DTIC Science & Technology

    2005-09-01

    …prototypes (scripts) (Schank, 1977). Many software systems have been developed that employ these representations. Many instructional theories and … specific performance or skill-acquisition. However, there are many theories about the number and type of these general abilities. Researchers … Computer Generated Forces and Behavioral Representations. 7.1.2 Role of MAMID in an Adaptive Instructional System. Motivation: Currently, the student…

  7. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  8. Organ sample generator for expected treatment dose construction and adaptive inverse planning optimization

    SciTech Connect

    Nie Xiaobo; Liang Jian; Yan Di

    2012-12-15

    Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient-specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient-specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the function form of the coefficients. Eleven head-and-neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during H&N cancer radiotherapy can be represented using the first 3-4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for head-and-neck cancer patients…
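
    A stripped-down sketch of the OSG construction, assuming stationary organ variation (so plain PCA coefficients stand in for the paper's time-varying LSR) and toy shape vectors in place of contours from daily cone-beam CT images.

```python
import numpy as np

def build_osg(daily_shapes, n_modes=3):
    """Fit a simple organ-sample generator: PCA of daily organ shape
    vectors, keeping the leading eigenvectors, with new samples drawn
    from a Gaussian over the retained PCA coefficients."""
    mean = daily_shapes.mean(axis=0)
    X = daily_shapes - mean
    # SVD-based PCA: rows of Vt are eigenvectors of the covariance matrix
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    coeffs = X @ Vt[:n_modes].T
    return mean, Vt[:n_modes], coeffs.std(axis=0)

def sample_organs(mean, modes, coeff_std, n_samples, rng):
    """Draw random organ shapes from the estimated variation PDF."""
    z = rng.normal(size=(n_samples, modes.shape[0])) * coeff_std
    return mean + z @ modes

rng = np.random.default_rng(5)
# 30 daily observations of an organ contour flattened to 60 coordinates
days = rng.normal(size=(30, 60))
mean, modes, coeff_std = build_osg(days)
print(sample_organs(mean, modes, coeff_std, 5, rng).shape)
```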

  9. Low-power metabolic equivalents estimation algorithm using adaptive acceleration sampling.

    PubMed

    Tsukahara, Mio; Nakanishi, Motofumi; Izumi, Shintaro; Nakai, Yozaburo; Kawaguchi, Hiroshi; Yoshimoto, Masahiko

    2016-08-01

    This paper describes a proposed low-power metabolic equivalent estimation algorithm that can calculate the value of metabolic equivalents (METs) from triaxial acceleration at an adaptively changeable sampling rate. The algorithm switches among four sampling rates (32, 16, 8, and 4 Hz), with the switching mode decided from the synthetic acceleration. Applying the proposed algorithm to acceleration measured over one day, we achieved a low root-mean-squared error (RMSE) in the calculated METs, with a current consumption of 41.5% of the value at 32 Hz and 75.4% of the value at 16 Hz.
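
    A guess at what such a rate-switching rule might look like: pick the next window's sampling rate from the variability of the synthetic (vector-magnitude) acceleration in the last window. The thresholds and the mapping below are illustrative, not taken from the paper.

```python
import numpy as np

RATES_HZ = (4, 8, 16, 32)

def choose_rate(accel_xyz, thresholds=(0.05, 0.2, 0.6)):
    """Pick the accelerometer sampling rate for the next window from the
    variability of the synthetic (vector-magnitude) acceleration in the
    last window: quiet periods are sampled slowly, active ones quickly.
    The thresholds (in g) are illustrative assumptions."""
    synthetic = np.linalg.norm(accel_xyz, axis=1)
    activity = synthetic.std()
    for rate, th in zip(RATES_HZ[:-1], thresholds):
        if activity < th:
            return rate
    return RATES_HZ[-1]

rng = np.random.default_rng(6)
resting = 1.0 + 0.01 * rng.normal(size=(64, 3))   # ~1 g, little motion
walking = 1.0 + 0.8 * rng.normal(size=(64, 3))    # vigorous motion
print(choose_rate(resting), choose_rate(walking))  # low rate vs high rate
```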

  10. Liquid Water from First Principles: Validation of Different Sampling Approaches

    SciTech Connect

    Mundy, C J; Kuo, W; Siepmann, J; McGrath, M J; Vondevondele, J; Sprik, M; Hutter, J; Parrinello, M; Mohamed, F; Krack, M; Chen, B; Klein, M

    2004-05-20

    A series of first principles molecular dynamics and Monte Carlo simulations were carried out for liquid water to assess the validity and reproducibility of different sampling approaches. These simulations include Car-Parrinello molecular dynamics simulations using the program CPMD with different values of the fictitious electron mass in the microcanonical and canonical ensembles, Born-Oppenheimer molecular dynamics using the programs CPMD and CP2K in the microcanonical ensemble, and Metropolis Monte Carlo using CP2K in the canonical ensemble. With the exception of one simulation for 128 water molecules, all other simulations were carried out for systems consisting of 64 molecules. It is found that the structural and thermodynamic properties of these simulations are in excellent agreement with each other as long as adiabatic sampling is maintained in the Car-Parrinello molecular dynamics simulations, either by choosing a sufficiently small fictitious mass in the microcanonical ensemble or by Nosé-Hoover thermostats in the canonical ensemble. Using the Becke-Lee-Yang-Parr exchange and correlation energy functionals and norm-conserving Troullier-Martins or Goedecker-Teter-Hutter pseudopotentials, simulations at a fixed density of 1.0 g/cm³ and a temperature close to 315 K yield a height of the first peak in the oxygen-oxygen radial distribution function of about 3.0, a classical constant-volume heat capacity of about 70 J K⁻¹ mol⁻¹, and a self-diffusion constant of about 0.1 Å²/ps.

  11. Image classification with densely sampled image windows and generalized adaptive multiple kernel learning.

    PubMed

    Yan, Shengye; Xu, Xinxing; Xu, Dong; Lin, Stephen; Li, Xuelong

    2015-03-01

    We present a framework for image classification that extends beyond the window sampling of fixed spatial pyramids and is supported by a new learning algorithm. Based on the observation that fixed spatial pyramids sample a rather limited subset of the possible image windows, we propose a method that accounts for a comprehensive set of windows densely sampled over location, size, and aspect ratio. A concise high-level image feature is derived to effectively deal with this large set of windows, and this higher level of abstraction offers both efficient handling of the dense samples and reduced sensitivity to misalignment. In addition to dense window sampling, we introduce generalized adaptive ℓp-norm multiple kernel learning (GA-MKL) to learn a robust classifier based on multiple base kernels constructed from the new image features and multiple sets of prelearned classifiers from other classes. With GA-MKL, multiple levels of image features are effectively fused, and information is shared among different classifiers. Extensive evaluation on benchmark datasets for object recognition (Caltech256 and Caltech101) and scene recognition (15Scenes) demonstrate that the proposed method outperforms the state-of-the-art under a broad range of settings.

  12. Dissociating conflict adaptation from feature integration: a multiple regression approach.

    PubMed

    Notebaert, Wim; Verguts, Tom

    2007-10-01

    Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on feature repetition and/or integration effects (e.g., B. Hommel, R. W. Proctor, & K.-P. Vu, 2004; U. Mayr, E. Awh, & P. Laurey, 2003). Previous attempts to dissociate feature integration from conflict adaptation focused on a particular subset of the data in which feature transitions were held constant (J. G. Kerns et al., 2004) or in which congruency transitions were held constant (C. Akcay & E. Hazeltine, in press), but this has a number of disadvantages. In this article, the authors present a multiple regression solution for this problem and discuss its possibilities and pitfalls.
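
    The regression solution can be sketched directly: enter current congruency, previous congruency, their interaction (the conflict-adaptation term), and feature repetition as simultaneous predictors of RT, rather than discarding trials to hold transitions constant. The simulated data and effect sizes below are arbitrary.

```python
import numpy as np

def dissociate_conflict_adaptation(congruent, feature_repeat, rt):
    """Regress trial RTs simultaneously on current congruency, previous
    congruency, their interaction (conflict adaptation), and feature
    repetition, so the competing accounts are estimated jointly."""
    cur = congruent[1:].astype(float)
    prev = congruent[:-1].astype(float)
    adaptation = cur * prev                      # congruency-sequence term
    feat = feature_repeat[1:].astype(float)
    X = np.column_stack([np.ones(cur.size), cur, prev, adaptation, feat])
    beta, *_ = np.linalg.lstsq(X, rt[1:], rcond=None)
    return dict(zip(["intercept", "congruency", "prev_congruency",
                     "adaptation", "feature_repetition"], beta))

rng = np.random.default_rng(7)
n = 5000
congruent = rng.integers(0, 2, n)
feature_repeat = rng.integers(0, 2, n)
rt = 500 - 30 * congruent[1:]                    # congruency effect
rt = rt - 10 * congruent[1:] * congruent[:-1]    # adaptation effect
rt = rt - 15 * feature_repeat[1:] + rng.normal(0, 20, n - 1)
rt = np.concatenate([[500.0], rt])               # dummy first trial
print(dissociate_conflict_adaptation(congruent, feature_repeat, rt))
```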

  13. A Kalman filter approach to adaptive estimation of multispectral signatures

    NASA Technical Reports Server (NTRS)

    Crane, R. B.

    1973-01-01

    The signatures of remote sensing data from agricultural crops exhibit significant non-stationarity, so that the performance of fixed-parameter classifiers degrades with time and distance from the initial training data. A class of adaptive, decision-directed classifiers is being developed, based on Kalman filter theory. Limited results to date on two data sets indicate approximately a 25 to 40% reduction in rates of misclassification.
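
    A minimal sketch of tracking a drifting class-mean signature with a Kalman filter, assuming a random-walk state model and Gaussian measurement noise; the dimensions and noise levels are illustrative, not from the report.

```python
import numpy as np

def kalman_update(mean, P, z, R, Q):
    """One Kalman step for a drifting class-mean signature.
    State model:  mean_k = mean_{k-1} + w,  w ~ N(0, Q)  (slow drift)
    Measurement:  z = mean_k + v,           v ~ N(0, R)"""
    P_pred = P + Q                                  # predict covariance
    K = P_pred @ np.linalg.inv(P_pred + R)          # Kalman gain
    mean_new = mean + K @ (z - mean)                # correct the estimate
    P_new = (np.eye(len(mean)) - K) @ P_pred
    return mean_new, P_new

rng = np.random.default_rng(8)
d = 4                                    # spectral bands
true = np.zeros(d)
mean, P = np.zeros(d), np.eye(d)
Q, R = 0.01 * np.eye(d), 0.5 * np.eye(d)
for t in range(200):
    true += 0.02                         # signature drifts over time/distance
    z = true + rng.multivariate_normal(np.zeros(d), R)
    mean, P = kalman_update(mean, P, z, R, Q)
print(np.round(true - mean, 2))          # tracking error stays bounded
```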

  14. A regional approach to climate adaptation in the Nile Basin

    NASA Astrophysics Data System (ADS)

    Butts, Michael B.; Buontempo, Carlo; Lørup, Jens K.; Williams, Karina; Mathison, Camilla; Jessen, Oluf Z.; Riegels, Niels D.; Glennie, Paul; McSweeney, Carol; Wilson, Mark; Jones, Richard; Seid, Abdulkarim H.

    2016-10-01

    The Nile Basin is one of the most important shared basins in Africa. Managing and developing the water resources within the basin must not only address different water uses but also the trade-off between developments upstream and water use downstream, often between different countries. Furthermore, decision-makers in the region need to evaluate and implement climate adaptation measures. Previous work has shown that the Nile flows can be highly sensitive to climate change and that there is considerable uncertainty in climate projections in the region, with no clear consensus as to the direction of change. Modelling current and future changes in river runoff must address a number of challenges, including the large size of the basin, the relative scarcity of data, and the correspondingly dramatic variety of climatic conditions and diversity in hydrological characteristics. In this paper, we present a methodology, to support climate adaptation on a regional scale, for assessing climate change impacts and adaptation potential for floods, droughts and water scarcity within the basin.

  15. The adaptive significance of adult neurogenesis: an integrative approach

    PubMed Central

    Konefal, Sarah; Elliot, Mick; Crespi, Bernard

    2013-01-01

    Adult neurogenesis in mammals is predominantly restricted to two brain regions, the dentate gyrus (DG) of the hippocampus and the olfactory bulb (OB), suggesting that these two brain regions uniquely share functions that mediate its adaptive significance. Benefits of adult neurogenesis across these two regions appear to converge on increased neuronal and structural plasticity that subserves coding of novel, complex, and fine-grained information, usually with contextual components that include spatial positioning. By contrast, costs of adult neurogenesis appear to center on potential for dysregulation resulting in higher risk of brain cancer or psychological dysfunctions, but such costs have yet to be quantified directly. The three main hypotheses for the proximate functions and adaptive significance of adult neurogenesis, pattern separation, memory consolidation, and olfactory spatial memory, are not mutually exclusive and can be reconciled into a simple general model amenable to targeted experimental and comparative tests. Comparative analysis of brain region sizes across two major social-ecological groups of primates, gregarious (mainly diurnal haplorhines, visually-oriented, and in large social groups) and solitary (mainly nocturnal, territorial, and highly reliant on olfaction, as in most rodents) suggest that solitary species, but not gregarious species, show positive associations of population densities and home range sizes with sizes of both the hippocampus and OB, implicating their functions in social-territorial systems mediated by olfactory cues. Integrated analyses of the adaptive significance of adult neurogenesis will benefit from experimental studies motivated and structured by ecologically and socially relevant selective contexts. PMID:23882188

  16. The Application of the Monte Carlo Approach to Cognitive Diagnostic Computerized Adaptive Testing With Content Constraints

    ERIC Educational Resources Information Center

    Mao, Xiuzhen; Xin, Tao

    2013-01-01

    The Monte Carlo approach which has previously been implemented in traditional computerized adaptive testing (CAT) is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…

  17. An Adaptive Approach to Managing Knowledge Development in a Project-Based Learning Environment

    ERIC Educational Resources Information Center

    Tilchin, Oleg; Kittany, Mohamed

    2016-01-01

    In this paper we propose an adaptive approach to managing the development of students' knowledge in the comprehensive project-based learning (PBL) environment. Subject study is realized by two-stage PBL. It shapes adaptive knowledge management (KM) process and promotes the correct balance between personalized and collaborative learning. The…

  18. Machine Learning Approaches to Rare Events Sampling and Estimation

    NASA Astrophysics Data System (ADS)

    Elsheikh, A. H.

    2014-12-01

    Given the severe impacts of rare events, we try to quantitatively answer the following two questions: How can we estimate the probability of a rare event? And what are the factors affecting these probabilities? We utilize machine learning classification methods to define the failure boundary (in the stochastic space) corresponding to a specific threshold of a rare event. The training samples for the classification algorithm are obtained using multilevel splitting and Monte Carlo (MC) simulations. Once the training of the classifier is performed, a full MC simulation can be performed efficiently using the classifier as a reduced order model replacing the full physics simulator. We apply the proposed method on a standard benchmark for CO2 leakage through an abandoned well. In this idealized test case, CO2 is injected into a deep aquifer and then spreads within the aquifer and, upon reaching an abandoned well, rises to a shallower aquifer. In the current study, we evaluate the probability of leakage of a pre-defined amount of the injected CO2 given a heavy-tailed distribution of the leaky well permeability. We show that machine learning based approaches significantly outperform direct MC and multi-level splitting methods in terms of efficiency and precision. The proposed algorithm's efficiency and reliability enabled us to perform a sensitivity analysis to the different modeling assumptions, including the different prior distributions on the probability of CO2 leakage.
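
    A schematic version of the surrogate idea, assuming the training samples come from plain Monte Carlo rather than multilevel splitting, and using a random forest in place of whatever classifier the authors chose; the "leakage" simulator below is a toy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rare_event_probability(simulator, sampler, threshold,
                           n_train=2000, n_mc=200_000, seed=0):
    """Train a classifier on expensive simulator runs, then estimate the
    event probability with a cheap full Monte Carlo pass through the
    classifier used as a reduced-order model."""
    rng = np.random.default_rng(seed)
    X = sampler(rng, n_train)
    y = simulator(X) > threshold                  # expensive labels
    clf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    X_mc = sampler(rng, n_mc)                     # cheap surrogate evaluations
    return clf.predict(X_mc).mean()

# Toy "leakage" model: response is a heavy-tailed function of log-permeability
def simulator(X):
    return np.exp(X[:, 0]) * (1.0 + 0.1 * X[:, 1])

def sampler(rng, n):
    return rng.normal(size=(n, 2))

p_hat = rare_event_probability(simulator, sampler, threshold=5.0)
print(f"estimated failure probability ~ {p_hat:.3f}")
```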

  19. Adaptive sampling dual terahertz comb spectroscopy using dual free-running femtosecond lasers

    PubMed Central

    Yasui, Takeshi; Ichikawa, Ryuji; Hsieh, Yi-Da; Hayashi, Kenta; Cahyadi, Harsono; Hindle, Francis; Sakaguchi, Yoshiyuki; Iwata, Tetsuo; Mizutani, Yasuhiro; Yamamoto, Hirotsugu; Minoshima, Kaoru; Inaba, Hajime

    2015-01-01

    Terahertz (THz) dual comb spectroscopy (DCS) is a promising method for high-accuracy, high-resolution, broadband THz spectroscopy because the mode-resolved THz comb spectrum includes both broadband THz radiation and narrow-line CW-THz radiation characteristics. In addition, all frequency modes of a THz comb can be phase-locked to a microwave frequency standard, providing excellent traceability. However, the need for stabilization of dual femtosecond lasers has often hindered its wide use. To overcome this limitation, here we have demonstrated adaptive-sampling THz-DCS, allowing the use of free-running femtosecond lasers. To correct the fluctuation of the time and frequency scales caused by the laser timing jitter, an adaptive sampling clock is generated by dual THz-comb-referenced spectrum analysers and is used for a timing clock signal in a data acquisition board. The results not only indicated the successful implementation of THz-DCS with free-running lasers but also showed that this configuration outperforms standard THz-DCS with stabilized lasers due to the slight jitter remaining in the stabilized lasers. PMID:26035687

  20. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-03-28

    Wireless sensor nodes have a limited power budget, yet they are often expected to remain functional in the field for extended periods of time once deployed. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime and achieving self-sustainability. This paper proposes an energy-aware Adaptive Sampling Algorithm (ASA) for WSN with power-hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power-hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy-aware ASA (EASA), in terms of energy durability, are carried out using in-field measured harvested energy (from both wind and solar sources) and power-hungry sensors (an ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy-aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA, in conjunction with energy harvesting capability, can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA.
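
    The adaptation rule at the heart of such an algorithm can be illustrated with a small sketch: the node stretches its sampling interval as its energy state worsens. The thresholds, the linear scaling, and the one-hour harvest horizon below are invented for illustration and are not the published EASA rules.

    ```python
    def next_sampling_interval(battery_j, harvest_w, base_interval_s=60,
                               min_interval_s=30, max_interval_s=600,
                               full_j=3000.0):
        """Return the next sampling interval (s) from the node's energy state.

        battery_j : remaining battery energy (J)
        harvest_w : current harvested power (W), e.g. from solar or wind
        """
        # Effective energy: stored energy plus one hour of expected harvest.
        effective_j = battery_j + harvest_w * 3600.0
        level = min(effective_j / full_j, 1.0)   # 0 (empty) .. 1 (full)
        # Sample fast when energy is plentiful, slow when it is scarce.
        interval = base_interval_s / max(level, 1e-3)
        return max(min_interval_s, min(interval, max_interval_s))

    # Example: a healthy battery with some solar input samples every minute;
    # a depleted one backs off towards the 10-minute cap.
    print(next_sampling_interval(2900.0, 0.05))   # ~60 s
    print(next_sampling_interval(300.0, 0.0))     # 600 s
    ```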

  1. Computational prediction of riboswitch tertiary structures including pseudoknots by RAGTOP: a hierarchical graph sampling approach.

    PubMed

    Kim, Namhee; Zahran, Mai; Schlick, Tamar

    2015-01-01

    The modular organization of RNA structure has been exploited in various computational and theoretical approaches to identify RNA tertiary (3D) motifs and assemble RNA structures. Riboswitches exemplify this modularity in terms of both structural and functional adaptability of RNA components. Here, we extend our computational approach based on tree graph sampling to the prediction of riboswitch topologies by defining additional edges to mimic pseudoknots. Starting from a secondary (2D) structure, we construct an initial graph deduced from predicted junction topologies by our data-mining algorithm RNAJAG trained on known RNAs; we sample these graphs in 3D space guided by knowledge-based statistical potentials derived from bending and torsion measures of internal loops as well as radii of gyration for known RNAs. We present graph sampling results for 10 representative riboswitches, 6 of them with pseudoknots, and compare our predictions to solved structures based on global and local RMSD measures. Our results indicate that the helical arrangements in riboswitches can be approximated using our combination of modified 3D tree graph representations for pseudoknots, junction prediction, graph moves, and scoring functions. Future challenges in the field of riboswitch prediction and design are also discussed.

  2. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
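
    The precision effect reported here (many small segments beat a few large ones at equal total sampled area) hinges on spatial autocorrelation, and can be reproduced with a toy simulation. A minimal sketch under stated assumptions: the synthetic "wheat field" and the segment sizes are invented, and contiguous pixel runs stand in for image segments; this is not the LANDSAT data or the study's sampling plans.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic spatially autocorrelated field: wheat proportion varies by region.
    n_pixels, block = 100_000, 500
    p_field = np.repeat(rng.beta(2, 5, n_pixels // block), block)
    wheat = rng.random(n_pixels) < p_field

    def estimate_sd(segment_pixels, total_sampled=4000, reps=2000):
        """SD of the wheat-proportion estimate when a fixed total sample is
        split into contiguous segments of the given size."""
        n_segments = total_sampled // segment_pixels
        estimates = np.empty(reps)
        for r in range(reps):
            starts = rng.integers(0, n_pixels - segment_pixels, n_segments)
            idx = (starts[:, None] + np.arange(segment_pixels)).ravel()
            estimates[r] = wheat[idx].mean()
        return estimates.std()

    # Smaller segments (and more of them) give a more precise estimate.
    for seg in (2000, 500, 100, 20):
        print(f"segment size {seg:4d}: SD of estimate = {estimate_sd(seg):.4f}")
    ```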

  3. Multi-species attributes as the condition for adaptive sampling of rare species using two-stage sequential sampling with an auxiliary variable

    USGS Publications Warehouse

    Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.

    2011-01-01

    Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations that approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures of interest are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS, and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive

  4. Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan

    2012-01-01

    A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for asteroid sample return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in Multibody Simulation Software Testbed for Small-Body Exploration and Sampling, (NPO-47196) NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p. 54. This current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. To demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations showing that the designed spacecraft meets key requirements. These requirements state that the contact force and duration must be sufficient to ensure that enough material from the surface is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact, recover, ascend to a safe position, and maintain velocity and orientation after the contact.

  5. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm, comparing performance with other methods based on two models of varying complexity of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  6. Making CORBA objects persistent: The object database adapter approach

    SciTech Connect

    Reverbel, F.C.R.

    1997-05-01

    In spite of its remarkable successes in promoting standards for distributed object systems, the Object Management Group (OMG) has not yet settled the issue of object persistence in the Object Request Broker (ORB) environment. The Common Object Request Broker Architecture (CORBA) specification briefly mentions an Object-Oriented Database Adapter that makes objects stored in an object-oriented database accessible through the ORB. This idea is pursued in Appendix B of the ODMG standard, which identifies a number of issues involved in using an Object Database Management System (ODBMS) in a CORBA environment and proposes an Object Database Adapter (ODA) to realize the integration of the ORB with the ODBMS. This paper discusses the design and implementation of an ODA that integrates an ORB and an ODBMS with C++ bindings. For the author's purposes, an ODBMS is a system with programming interfaces. It may be a pure object-oriented DBMS (an OODBMS), or a combination of a relational DBMS and an object-relational mapper.

  7. Identification of novel serum peptide biomarkers for high-altitude adaptation: a comparative approach

    NASA Astrophysics Data System (ADS)

    Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen

    2016-05-01

    We aimed to identify serum biomarkers for screening individuals at sea level who could adapt to high-altitude hypoxia. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, and routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiles between the HHI and HHA groups collected at sea level. Routine blood tests revealed that the concentrations of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling showed ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further sequence-identified as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347–356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205–214), and isoform 1 of the fibrinogen α chain precursor (FGA 588–624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provides a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes.

  8. Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil.

    PubMed

    France, Brian; Bell, William; Chang, Emily; Scholten, Trudy

    2015-01-01

    Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation.
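
    The analysis-count saving from pooling can be illustrated with the classic two-stage (Dorfman-style) scheme: analyze composites of k samples, and only re-analyze the members of a composite individually when it tests positive. The prevalence and pool sizes below are assumptions for illustration; the study itself addresses spore recovery and dilution, not this arithmetic.

    ```python
    def expected_analyses(n_samples, pool_size, prevalence):
        """Expected number of analyses under two-stage composite testing,
        assuming independent contamination and a perfectly sensitive assay."""
        n_pools = n_samples / pool_size
        p_pool_positive = 1 - (1 - prevalence) ** pool_size
        # One analysis per pool, plus individual retests for positive pools.
        return n_pools * (1 + p_pool_positive * pool_size)

    # At 2% prevalence, pooling cuts the workload severalfold.
    n = 1000
    for k in (2, 5, 10, 20):
        print(f"pool size {k:2d}: {expected_analyses(n, k, 0.02):6.1f} analyses "
              f"(vs {n} individual)")
    ```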

  9. Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil

    PubMed Central

    France, Brian; Bell, William; Chang, Emily; Scholten, Trudy

    2015-01-01

    Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation. PMID:26714315

  10. Adaptive sample size modification in clinical trials: start small then ask for more?

    PubMed

    Jennison, Christopher; Turnbull, Bruce W

    2015-12-20

    We consider sample size re-estimation in a clinical trial, in particular when there is a significant delay before the measurement of patient response. Mehta and Pocock have proposed methods in which sample size is increased when interim results fall in a 'promising zone' where it is deemed worthwhile to increase conditional power by adding more subjects. Our analysis reveals potential pitfalls in applying this approach. Mehta and Pocock use results of Chen, DeMets and Lan to identify when the sample size can be increased while a conventional level α significance test applied at the end of the trial still does not inflate the type I error rate: we have found that the greatest gains in power per additional observation are liable to lie outside the region defined by this method. Mehta and Pocock increase sample size to achieve a particular conditional power, calculated under the current estimate of treatment effect: this leads to high increases in sample size for a small range of interim outcomes, whereas we have found it more efficient to make moderate increases in sample size over a wider range of cases. If the aforementioned pitfalls are avoided, we believe the broad framework proposed by Mehta and Pocock is valuable for clinical trial design. Working in this framework, we propose sample size rules that apply explicitly the principle of adding observations when they are most beneficial. The resulting trial designs are closely related to efficient group sequential tests for a delayed response proposed by Hampson and Jennison.
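
    The quantity driving 'promising zone' designs is conditional power: the probability of a significant final result given the interim z-statistic. The sketch below uses the standard Brownian-motion formulation evaluated under the current trend; it implements only this generic calculation, not the authors' proposed rules or the Mehta-Pocock zone boundaries, and the numbers are illustrative.

    ```python
    from math import sqrt
    from scipy.stats import norm

    def conditional_power(z1, n1, n_total, alpha=0.025):
        """Conditional power at the interim, under the current trend.

        z1      : interim z-statistic based on n1 subjects per arm
        n_total : total subjects per arm planned at the final analysis
        """
        t = n1 / n_total                      # information fraction
        z_alpha = norm.ppf(1 - alpha)
        theta = z1 / sqrt(t)                  # drift estimated from interim data
        # Z(1) | Z(t)=z1 ~ N(z1*sqrt(t) + theta*(1-t), 1-t)
        num = z1 * sqrt(t) + theta * (1 - t) - z_alpha
        return norm.cdf(num / sqrt(1 - t))

    # A mildly promising interim result: conditional power under the current
    # trend, before and after increasing the total sample size.
    print(conditional_power(z1=1.2, n1=100, n_total=200))   # planned n
    print(conditional_power(z1=1.2, n1=100, n_total=400))   # doubled n
    ```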

  11. An Evidence-Based Public Health Approach to Climate Change Adaptation

    PubMed Central

    Eidson, Millicent; Tlumak, Jennifer E.; Raab, Kristin K.; Luber, George

    2014-01-01

    Background: Public health is committed to evidence-based practice, yet there has been minimal discussion of how to apply an evidence-based practice framework to climate change adaptation. Objectives: Our goal was to review the literature on evidence-based public health (EBPH), to determine whether it can be applied to climate change adaptation, and to consider how emphasizing evidence-based practice may influence research and practice decisions related to public health adaptation to climate change. Methods: We conducted a substantive review of EBPH, identified a consensus EBPH framework, and modified it to support an EBPH approach to climate change adaptation. We applied the framework to an example and considered implications for stakeholders. Discussion: A modified EBPH framework can accommodate the wide range of exposures, outcomes, and modes of inquiry associated with climate change adaptation and the variety of settings in which adaptation activities will be pursued. Several factors currently limit application of the framework, including a lack of higher-level evidence of intervention efficacy and a lack of guidelines for reporting climate change health impact projections. To enhance the evidence base, there must be increased attention to designing, evaluating, and reporting adaptation interventions; standardized health impact projection reporting; and increased attention to knowledge translation. This approach has implications for funders, researchers, journal editors, practitioners, and policy makers. Conclusions: The current approach to EBPH can, with modifications, support climate change adaptation activities, but there is little evidence regarding interventions and knowledge translation, and guidelines for projecting health impacts are lacking. Realizing the goal of an evidence-based approach will require systematic, coordinated efforts among various stakeholders. Citation: Hess JJ, Eidson M, Tlumak JE, Raab KK, Luber G. 2014. An evidence-based public

  12. A Bayesian adaptive blinded sample size adjustment method for risk differences.

    PubMed

    Hartley, Andrew Montgomery

    2015-01-01

    Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than do the comparators, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods.

  13. An adaptive deep learning approach for PPG-based identification.

    PubMed

    Jindal, V; Birjandtalab, J; Pouyan, M Baran; Nourani, M

    2016-08-01

    Wearable biosensors have become increasingly popular in healthcare due to their capabilities for low-cost and long-term biosignal monitoring. This paper presents a novel two-stage technique to offer biometric identification using these biosensors through Deep Belief Networks and Restricted Boltzmann Machines. Our identification approach improves robustness in current monitoring procedures within clinical, e-health and fitness environments using photoplethysmography (PPG) signals through deep learning classification models. The approach is tested on the TROIKA dataset using 10-fold cross-validation and achieved an accuracy of 96.1%.

  14. Recruiting hard-to-reach United States population sub-groups via adaptations of snowball sampling strategy

    PubMed Central

    Sadler, Georgia Robins; Lee, Hau-Chen; Seung-Hwan Lim, Rod; Fullerton, Judith

    2011-01-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089

  15. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    DOE PAGES

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.
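
    Independent of the configuration-database mechanism described above, the backbone of any replica-exchange simulation is the Metropolis criterion for swapping neighbouring temperatures. A minimal sketch with a toy one-dimensional energy function (the global database and adaptive temperature updates from the paper are not implemented here):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def energy(x):
        return 0.5 * x**2              # toy harmonic "system"

    betas = 1.0 / np.array([0.5, 1.0, 2.0, 4.0])   # inverse temperatures
    states = rng.normal(size=betas.size)           # one configuration per replica

    def mc_sweep(states, betas, step=0.5):
        """One Metropolis move per replica at its own temperature."""
        for i in range(states.size):
            prop = states[i] + rng.normal(scale=step)
            if rng.random() < np.exp(-betas[i] * (energy(prop) - energy(states[i]))):
                states[i] = prop

    def attempt_swaps(states, betas):
        """Exchange neighbouring replicas with the standard acceptance rule:
        accept with min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
        for i in range(states.size - 1):
            delta = (betas[i] - betas[i + 1]) * (energy(states[i]) - energy(states[i + 1]))
            if rng.random() < np.exp(min(0.0, delta)):
                states[i], states[i + 1] = states[i + 1], states[i]

    for sweep in range(1000):
        mc_sweep(states, betas)
        attempt_swaps(states, betas)
    ```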

  16. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    SciTech Connect

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.

  17. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors

    PubMed Central

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, yet they are often expected to remain functional in the field for extended periods of time once deployed. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime and achieving self-sustainability. This paper proposes an energy-aware Adaptive Sampling Algorithm (ASA) for WSN with power-hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power-hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy-aware ASA (EASA), in terms of energy durability, are carried out using in-field measured harvested energy (from both wind and solar sources) and power-hungry sensors (an ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy-aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA, in conjunction with energy harvesting capability, can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559

  18. Real-time nutrient monitoring in rivers: adaptive sampling strategies, technological challenges and future directions

    NASA Astrophysics Data System (ADS)

    Blaen, Phillip; Khamis, Kieran; Lloyd, Charlotte; Bradley, Chris

    2016-04-01

    Excessive nutrient concentrations in river waters threaten aquatic ecosystem functioning and can pose substantial risks to human health. Robust monitoring strategies are therefore required to generate reliable estimates of river nutrient loads and to improve understanding of the catchment processes that drive spatiotemporal patterns in nutrient fluxes. Furthermore, these data are vital for prediction of future trends under changing environmental conditions and thus the development of appropriate mitigation measures. In recent years, technological developments have led to an increase in the use of continuous in-situ nutrient analysers, which enable measurements at far higher temporal resolutions than can be achieved with discrete sampling and subsequent laboratory analysis. However, such instruments can be costly to run and difficult to maintain (e.g. due to high power consumption and memory requirements), leading to trade-offs between temporal and spatial monitoring resolutions. Here, we highlight how adaptive monitoring strategies, comprising a mixture of temporal sampling frequencies controlled by one or more 'trigger variables' (e.g. river stage, turbidity, or nutrient concentration), can advance our understanding of catchment nutrient dynamics while simultaneously overcoming many of the practical and economic challenges encountered in typical in-situ river nutrient monitoring applications. We present examples of short-term variability in river nutrient dynamics, driven by complex catchment behaviour, which support our case for the development of monitoring systems that can adapt in real-time to rapid environmental changes. In addition, we discuss the advantages and disadvantages of current nutrient monitoring techniques and suggest new research directions based on emerging technologies, highlighting how these might improve: 1) monitoring strategies, and 2) understanding of linkages between catchment processes and river nutrient fluxes.
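
    A trigger-variable scheme of the kind described can be expressed as a simple rule: sample at a base interval, and switch to a high-frequency mode while the trigger indicates an event. The sketch below uses turbidity with invented thresholds; the hysteresis (separate rise and fall thresholds) is a common design choice to stop the logger toggling when the trigger hovers near a single threshold, not a detail taken from the abstract.

    ```python
    def sampling_interval_min(turbidity_ntu, in_event,
                              rise_ntu=50.0, fall_ntu=30.0,
                              base_min=60, event_min=5):
        """Return (interval_minutes, in_event) for a turbidity-triggered scheme."""
        if in_event:
            in_event = turbidity_ntu > fall_ntu   # stay in event mode until well below
        else:
            in_event = turbidity_ntu > rise_ntu   # enter event mode on a clear rise
        return (event_min if in_event else base_min), in_event

    # A storm passes: readings rise then fall; the logger speeds up, then relaxes.
    in_event = False
    for ntu in (12, 20, 65, 140, 80, 40, 25, 15):
        interval, in_event = sampling_interval_min(ntu, in_event)
        print(f"turbidity {ntu:4.0f} NTU -> sample every {interval:3d} min")
    ```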

  19. Adaptive sampling of CT data for myocardial blood flow estimation from dose-reduced dynamic CT

    NASA Astrophysics Data System (ADS)

    Modgil, Dimple; Bindschadler, Michael D.; Alessio, Adam M.; La Rivière, Patrick J.

    2015-03-01

    Quantification of myocardial blood flow (MBF) can aid in the diagnosis and treatment of coronary artery disease (CAD). However, there are no widely accepted clinical methods for estimating MBF. Dynamic CT holds the promise of providing a quick and easy method to measure MBF quantitatively; however, the need for repeated scans has raised concerns about the potential for high radiation dose. In our previous work, we explored techniques to reduce the patient dose by either uniformly reducing the tube current or uniformly reducing the number of temporal frames in the dynamic CT sequence. These dose reduction techniques result in very noisy data, which can give rise to large errors in MBF estimation. In this work, we investigate whether nonuniformly varying the tube current or sampling intervals can yield more accurate MBF estimates. Specifically, we try to minimize the dose and obtain the most accurate MBF estimate by addressing the following questions: when in the time attenuation curve (TAC) should the CT data be collected, and at what tube current(s)? We hypothesize that increasing the sampling rate and/or tube current during the time frames when the myocardial CT number is most sensitive to the flow rate, while reducing them elsewhere, can achieve better estimation accuracy for the same dose. We perform simulations of contrast agent kinetics and CT acquisitions to evaluate the relative MBF estimation performance of several clinically viable adaptive acquisition methods. We found that adaptive temporal and tube current sequences can be performed that impart an effective dose of about 5 mSv and allow for reductions in MBF estimation RMSE on the order of 11% compared to uniform acquisition sequences with comparable or higher radiation doses.

  20. Novel Sample-handling Approach for XRD Analysis with Minimal Sample Preparation

    NASA Technical Reports Server (NTRS)

    Sarrazin, P.; Chipera, S.; Bish, D.; Blake, D.; Feldman, S.; Vaniman, D.; Bryson, C.

    2004-01-01

    Sample preparation and sample handling are among the most critical operations associated with X-ray diffraction (XRD) analysis. These operations require attention in a laboratory environment, but they become a major constraint in the deployment of XRD instruments for robotic planetary exploration. We are developing a novel sample handling system that dramatically relaxes the constraints on sample preparation by allowing characterization of coarse-grained material that would normally be impossible to analyze with conventional powder-XRD techniques.

  1. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    SciTech Connect

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, for two common statistically based environmental sampling approaches, the report includes the methods and formulas for calculating:
    • the number of samples required to achieve a specified confidence in characterization and clearance decisions
    • the confidence in making characterization and clearance decisions for a specified number of samples
    In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling, to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
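
    One standard calculation of this type (not necessarily the exact formulas of the report's hotspot or CJR methods) determines how many all-negative random samples are needed to claim, with a given confidence, that no more than a given fraction of the decision area is contaminated, with the FNR discounting each sample's detection probability. The inputs below are illustrative.

    ```python
    from math import ceil, log

    def samples_for_clearance(confidence, contaminated_fraction, fnr=0.0):
        """Random samples needed so that, if at least `contaminated_fraction`
        of the decision area were contaminated, an all-negative result would
        occur with probability at most 1 - confidence.

        fnr : per-sample false negative rate (0 = perfect assay).
        """
        p_detect = contaminated_fraction * (1.0 - fnr)
        return ceil(log(1.0 - confidence) / log(1.0 - p_detect))

    # 95% confidence that no more than 1% of the area is contaminated:
    print(samples_for_clearance(0.95, 0.01))            # 299 with FNR = 0
    print(samples_for_clearance(0.95, 0.01, fnr=0.10))  # 332 with 10% FNR
    ```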

  2. Writing Samples Viewed from Different Perspectives: An Approach to Validity.

    ERIC Educational Resources Information Center

    Carlson, Sybil B.

    The objective description and identification of variables that meaningfully distinguish reasoning skills couched in written discourse were studied by comparing scores obtained from different perspectives on the same writing samples. A total of 406 writing samples on 2 topics by 203 students who had taken the Graduate Record Examinations, mostly…

  3. Adaptation of a weighted regression approach to evaluate water quality trends in anestuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, a weighted regression approach developed to describe trends in pollutant transport in rivers was adapted to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach allows...

  4. Adaptation of a Weighted Regression Approach to Evaluate Water Quality Trends in an Estuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, we adapted a weighted regression approach to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach, originally developed to resolve pollutant transport trends in rivers...
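
    The notices above do not give formulas, but the weighted-regression family they refer to (in the spirit of Weighted Regressions on Time, Discharge, and Season) can be sketched: each observation is weighted by its closeness in time, season, and flow to the target condition, and a local regression is fit with those weights. Everything below (regressors, tricube kernel, half-widths, synthetic data) is an assumed illustration, not the adapted estuary method.

    ```python
    import numpy as np

    def tricube(d, h):
        """Tricube kernel: weight 1 at distance 0, falling to 0 at half-width h."""
        u = np.minimum(np.abs(d) / h, 1.0)
        return (1 - u**3) ** 3

    def weighted_trend_estimate(t, season, logq, y, t0, season0, logq0,
                                h_t=7.0, h_s=0.5, h_q=2.0):
        """Locally weighted regression of a water-quality response on time,
        season, and flow, evaluated at one target condition."""
        d_season = np.minimum(np.abs(season - season0), 1 - np.abs(season - season0))
        w = tricube(t - t0, h_t) * tricube(d_season, h_s) * tricube(logq - logq0, h_q)
        X = np.column_stack([np.ones_like(t), t, np.sin(2 * np.pi * season),
                             np.cos(2 * np.pi * season), logq])
        Xw = X * w[:, None]                         # weighted normal equations
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        x0 = np.array([1.0, t0, np.sin(2 * np.pi * season0),
                       np.cos(2 * np.pi * season0), logq0])
        return x0 @ beta

    # Usage with synthetic data: a decade of monthly samples.
    rng = np.random.default_rng(0)
    t = 2000 + np.arange(120) / 12.0
    season = t % 1.0
    logq = rng.normal(size=120)
    y = (0.05 * (t - 2000) + 0.3 * np.sin(2 * np.pi * season)
         + 0.2 * logq + rng.normal(0, 0.1, 120))
    print(weighted_trend_estimate(t, season, logq, y, 2005.0, 0.5, 0.0))
    ```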

  5. Adaptive Role Playing Games: An Immersive Approach for Problem Based Learning

    ERIC Educational Resources Information Center

    Sancho, Pilar; Moreno-Ger, Pablo; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar

    2009-01-01

    In this paper we present a general framework, called NUCLEO, for the application of socio-constructive educational approaches in higher education. The underlying pedagogical approach relies on an adaptation model in order to improve group dynamics, as this has been identified as one of the key features in the success of collaborative learning…

  6. A Hybrid Approach for Supporting Adaptivity in E-Learning Environments

    ERIC Educational Resources Information Center

    Al-Omari, Mohammad; Carter, Jenny; Chiclana, Francisco

    2016-01-01

    Purpose: The purpose of this paper is to identify a framework to support adaptivity in e-learning environments. The framework reflects a novel hybrid approach incorporating the concept of the event-condition-action (ECA) model and intelligent agents. Moreover, a system prototype is developed reflecting the hybrid approach to supporting adaptivity…

  7. Developmental Structuralist Approach to the Classification of Adaptive and Pathologic Personality Organizations: Infancy and Early Childhood.

    ERIC Educational Resources Information Center

    Greenspan, Stanley I.; Lourie, Reginald S.

    This paper applies a developmental structuralist approach to the classification of adaptive and pathologic personality organizations and behavior in infancy and early childhood, and it discusses implications of this approach for preventive intervention. In general, as development proceeds, the structural capacity of the developing infant and child…

  8. Adaptation to floods in future climate: a practical approach

    NASA Astrophysics Data System (ADS)

    Doroszkiewicz, Joanna; Romanowicz, Renata; Radon, Radoslaw; Hisdal, Hege

    2016-04-01

    In this study some aspects of the application of a 1D hydraulic model are discussed, with a focus on its suitability for flood adaptation under future climate conditions. The Biała Tarnowska catchment is used as a case study. A 1D hydraulic model is developed for the evaluation of inundation extent and risk maps under future climatic conditions. We analyse the following flood indices: (i) extent of the inundation area; (ii) depth of water on flooded land; (iii) flood wave duration; and (iv) flood wave volume over a threshold value. We derive the model cross-section geometry following the results of primary research based on a 500-year flood inundation extent, and we compare two methods of locating cross-sections in terms of their suitability for deriving the most precise inundation outlines. The aim is to specify embankment heights along the river channel that would protect the river valley in the most vulnerable locations under future climatic conditions. We present an experimental design for scenario analysis studies and uncertainty reduction options for future climate projections obtained from the EURO-CORDEX project. Acknowledgements: This work was supported by the project CHIHE (Climate Change Impact on Hydrological Extremes), carried out in the Institute of Geophysics, Polish Academy of Sciences, funded by Norway Grants (contract No. Pol-Nor/196243/80/2013). The hydro-meteorological observations were provided by the Institute of Meteorology and Water Management (IMGW), Poland.

  9. A Gradient Optimization Approach to Adaptive Multi-Robot Control

    DTIC Science & Technology

    2009-09-01

    The approach performs optimization through the evolution of a dynamical system. Some existing approaches do not fit under the framework proposed in this chapter. Because parameters are coupled among robots, the evolution of all the robots' parameters must be considered together as a concatenated parameter vector. The method relies on synchronous evolution of the equations, exact Voronoi cells computed from the exact positions of all Voronoi neighbors, and exact integrals over…

  10. An adaptive management approach to controlling suburban deer

    USGS Publications Warehouse

    Nielson, C.K.; Porter, W.F.; Underwood, H.B.

    1997-01-01

    Distance sight-resight sampling has particular relevance to aerial surveys, in which height above ground and aircraft speed make the critical assumption of certain detection on the track-line unrealistic. Recent developments in distance sight-resight theory have left practical issues related to data collection as the major impediment to widespread use of distance sight-resight sampling in aerial surveys. We describe and evaluate a system to automatically log, store, and process data from distance sight-resight aerial surveys. The system has a primary digital system and a secondary audio system. The digital system comprises a sighting 'gun' and small keypad for each observer, a global positioning system (GPS) receiver, and an altimeter interface, all linked to a central laptop computer. The gun is used to record time and angle of declination from the horizon of sighted groups of animals as they pass the aircraft. The keypad is used to record information on species and group size. The altimeter interface records altitude from the aircraft's radar altimeter, and the GPS receiver provides location data at user-definable intervals. We wrote software to import data into a database and convert it into a form appropriate for distance sight-resight analyses. Perpendicular distance of sighted groups of animals from the flight path is calculated from altitude and angle of declination. Time, angle of declination, species, and group size of sightings by independent observers on the same side of the aircraft are used as criteria to classify single and duplicate sightings, allowing testing of the critical distance sampling assumption (g(0)=1) and estimation of g(0) if that assumption fails. An audio system comprising headphones for each observer and a 4-track tape recorder allows recording of data that are difficult to accommodate in the digital system and provides a backup to the digital system. We evaluated the system by conducting experimental surveys and reviewing results
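
    The geometry mentioned (perpendicular distance computed from altitude and angle of declination) is a one-line calculation: with the aircraft at height h and a group sighted at declination θ below the horizon as it passes abeam, the horizontal distance from the track-line is h / tan(θ). A minimal sketch; the function name, units, and the abeam-sighting assumption are illustrative, not details of the described system.

    ```python
    from math import radians, tan

    def perpendicular_distance_m(altitude_m, declination_deg):
        """Horizontal distance from the track-line to a sighted group,
        from radar altitude and the sighting gun's declination angle,
        assuming the sighting is recorded as the group passes abeam."""
        if not 0 < declination_deg <= 90:
            raise ValueError("declination must be in (0, 90] degrees")
        return altitude_m / tan(radians(declination_deg))

    # A group sighted 20 degrees below the horizon from 150 m altitude:
    print(f"{perpendicular_distance_m(150.0, 20.0):.0f} m")  # ~412 m
    ```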

  11. Farms adaptation to changes in flood risk: a management approach

    NASA Astrophysics Data System (ADS)

    Pivot, Jean-Marc; Martin, Philippe

    2002-10-01

    Creating flood expansion areas, e.g. for the protection of urban areas from flooding, involves a localised increase in risk which may require farmers to be compensated for crop damage or other losses. With this in mind, the paper sets out the approach used to study the problem and gives results obtained from a survey of farms liable to flooding in central France. The approach is based on a study of decisions made by farmers in situations of uncertainty, using the concept of a 'model of action'. The results show that damage caused to farming areas by flooding should be considered both at field level and at farm level. The damage caused to a field depends on the flood itself, the fixed characteristics of the field, and the plant species cultivated. However, the losses to the farm taken as a whole can differ considerably from those for the flooded field, due to 'knock-on' effects on farm operations which depend on the internal organization, the availability of production resources, and the farmer's objectives, both for the farm as a whole and for its individual enterprises. Three main strategies regarding possible flood events were identified. Reasons for choosing one of these include the way the farmer perceives the risk and the size of the area liable to flooding. Finally, the formalisation of farm system management in the face of uncertainty, especially due to flooding, enables compensation to be calculated for farmers whose land is affected by the creation of flood expansion areas.

  12. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  13. An Empirical Comparison of an Expert Systems Approach and an IRT Approach to Computer-Based Adaptive Mastery Testing.

    ERIC Educational Resources Information Center

    Luk, HingKwan

    This study examined whether an expert system approach involving intelligent selection of items (EXSPRT-I) is as efficient as item response theory (IRT) based three-parameter adaptive mastery testing (AMT) when there are enough subjects to estimate the three IRT item parameters for all items in the test and when subjects in the item parameter…

  14. Adaptive Thouless-Anderson-Palmer approach to inverse Ising problems with quenched random fields

    NASA Astrophysics Data System (ADS)

    Huang, Haiping; Kabashima, Yoshiyuki

    2013-06-01

    The adaptive Thouless-Anderson-Palmer equation is derived for inverse Ising problems in the presence of quenched random fields. We test the proposed scheme on Sherrington-Kirkpatrick, Hopfield, and random orthogonal models and find that the adaptive Thouless-Anderson-Palmer approach allows accurate inference of quenched random fields whose distribution can be either Gaussian or bimodal. In particular, another competitive method for inferring external fields, namely, the naive mean field method with diagonal weights, is compared and discussed.
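
    For contrast with the adaptive TAP scheme, the naive mean-field baseline mentioned at the end of the abstract has a compact closed form: couplings from the inverse correlation matrix, and fields from the magnetizations. The sketch below is the standard naive mean-field inversion, not the authors' adaptive TAP equations, and the placeholder data are random ±1 spins rather than samples from a real model.

    ```python
    import numpy as np

    def naive_mean_field_inverse_ising(samples):
        """Infer couplings J and fields h from +/-1 spin samples via naive
        mean-field inversion:
            J = -inv(C) with zeroed diagonal,
            h_i = atanh(m_i) - sum_j J_ij m_j.
        """
        m = samples.mean(axis=0)                   # magnetizations
        C = np.cov(samples, rowvar=False)          # connected correlations
        J = -np.linalg.inv(C)
        np.fill_diagonal(J, 0.0)                   # no self-couplings
        h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m
        return J, h

    # Usage: samples is an (n_samples, n_spins) array of +/-1 entries.
    rng = np.random.default_rng(0)
    samples = rng.choice([-1, 1], size=(10_000, 5))   # placeholder data
    J, h = naive_mean_field_inverse_ising(samples)
    ```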

  15. The Application of Adaptive Sampling and Analysis Program (ASAP) Techniques to NORM Sites

    SciTech Connect

    Johnson, Robert; Smith, Karen P.; Quinn, John

    1999-10-29

    The results from the Michigan demonstration establish that this type of approach can be very effective for NORM sites. The advantages include (1) greatly reduced per-sample analytical costs; (2) a reduced reliance on soil sampling and ex situ gamma spectroscopy analyses; (3) the ability to combine characterization with remediation activities in one fieldwork cycle; (4) improved documentation; and (5) ultimately better remediation, as measured by greater precision in delineating soils that are not in compliance with requirements from soils that are in compliance. In addition, the demonstration showed that the use of real-time technologies, such as the RadInSoil, can facilitate the implementation of a Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM)-based final status survey program.

  16. The diversity of gendered adaptation strategies to climate change of Indian farmers: A feminist intersectional approach.

    PubMed

    Ravera, Federica; Martín-López, Berta; Pascual, Unai; Drucker, Adam

    2016-12-01

    This paper examines climate change adaptation and gender issues through an application of a feminist intersectional approach. This approach permits the identification of diverse adaptation responses arising from the existence of multiple and fragmented dimensions of identity (including gender) that intersect with power relations to shape situation-specific interactions between farmers and ecosystems. Based on results from contrasting research cases in Bihar and Uttarakhand, India, this paper demonstrates, inter alia, that there are geographically determined gendered preferences and adoption strategies regarding adaptation options and that these are influenced by the socio-ecological context and institutional dynamics. Intersecting identities, such as caste, wealth, age and gender, influence decisions and reveal power dynamics and negotiation within the household and the community, as well as barriers to adaptation among groups. Overall, the findings suggest that a feminist intersectional approach does appear to be useful and worth further exploration in the context of climate change adaptation. In particular, future research could benefit from more emphasis on a nuanced analysis of the intra-gender differences that shape adaptive capacity to climate change.

  17. Unsupervised learning approach to adaptive differential pulse code modulation.

    PubMed

    Griswold, N C; Sayood, K

    1982-04-01

    This research is concerned with investigating the problem of data compression utilizing an unsupervised estimation algorithm. This extends previous work utilizing a hybrid source coder which combines an orthogonal transformation with differential pulse code modulation (DPCM). The data compression is achieved in the DPCM loop, and it is the quantizer of this scheme which is approached from an unsupervised learning procedure. The distribution defining the quantizer is represented as a set of separable Laplacian mixture densities for two-dimensional images. The condition of identifiability is shown for the Laplacian case, and decision-directed estimates of both the active distribution parameters and the mixing parameters are discussed in view of a Bayesian structure. The decision-directed estimators, although not optimum, provide a realizable structure for estimating the parameters which define a distribution that has become active. These parameters are then used to scale the optimum (in the mean square error sense) Laplacian quantizer. The decision criterion is modified to prevent convergence to a single distribution, which in effect is the default condition for a variance estimator. The technique was applied to a test image, and the resulting data demonstrate improvement over other techniques using fixed bit assignments and ideal channel conditions.
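
    The DPCM loop with an adaptive quantizer can be sketched as follows: a first-order predictor, a uniform quantizer whose step is rescaled by a decision-directed running estimate of the prediction-error spread. This is a generic illustration under assumed parameters; the paper's quantizer is instead matched to estimated Laplacian mixture densities.

    ```python
    import numpy as np

    def adaptive_dpcm_encode(x, a=0.95, step0=1.0, forget=0.95):
        """Encode a 1-D signal with first-order DPCM and an adaptive quantizer.

        The quantizer step tracks a decision-directed estimate of the
        prediction-error magnitude (a stand-in for the paper's
        Laplacian-mixture parameter estimation).
        """
        codes, recon, scale = [], 0.0, step0
        for sample in x:
            pred = a * recon                      # first-order prediction
            err = sample - pred
            q = int(np.round(err / scale))        # quantize the residual
            codes.append(q)
            recon = pred + q * scale              # decoder-tracked reconstruction
            # Decision-directed update: adapt the step from the *quantized*
            # residual, which the decoder can mirror exactly.
            scale = forget * scale + (1 - forget) * max(abs(q * scale), 1e-3)
        return codes

    signal = np.sin(np.linspace(0, 8 * np.pi, 400)) * 10
    codes = adaptive_dpcm_encode(signal)
    ```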

  18. Experimental Approaches to Microarray Analysis of Tumor Samples

    ERIC Educational Resources Information Center

    Furge, Laura Lowe; Winter, Michael B.; Meyers, Jacob I.; Furge, Kyle A.

    2008-01-01

    Comprehensive measurement of gene expression using high-density nucleic acid arrays (i.e. microarrays) has become an important tool for investigating the molecular differences in clinical and research samples. Consequently, inclusion of discussion of microarray technologies in biochemistry, molecular biology, or other appropriate courses has…

  19. Simplification of iron speciation in wine samples: a spectrophotometric approach.

    PubMed

    López-López, José A; Albendín, Gemma; Arufe, María I; Mánuel-Vez, Manuel P

    2015-05-13

    A simple direct spectrophotometric method was developed for the analysis of Fe(II) and total Fe in wine samples. This method is based on the formation of an Fe(II) complex with 2,2'-dipyridylketone picolinoylhydrazone (DPKPH), which shows a maximum green-blue absorption (λ = 700 nm) at pH 4.9. Operating conditions for the batch procedure were investigated, including reagent concentration, buffer solutions, and wavelength. The tolerance limits of foreign ions and the sample matrix have also been evaluated. Limits of detection and quantification were 0.005 and 0.017 mg L(-1) of Fe(II), respectively, allowing its determination in real wine samples. Finally, the proposed method was used in the analysis of white, rose, and red wines. Results were compared with the reference method of Commission Regulation (EEC) No. 2676/90 of September 1990, which lays down European Community methods for the analysis of wines, confirming the reliability of the proposed method for Fe analysis in wine samples.

  20. Land-based approach to evaluate sustainable land management and adaptive capacity of ecosystems/lands

    NASA Astrophysics Data System (ADS)

    Kust, German; Andreeva, Olga

    2015-04-01

    A number of new concepts and paradigms have appeared during recent decades, such as sustainable land management (SLM), climate change (CC) adaptation, environmental services, ecosystem health, and others. These initiatives do not yet share a common scientific platform, although some agreement on terminology has been reached, schemes of links and feedback loops have been created, and some models have been developed. Nevertheless, in spite of these scientific achievements, land-related issues are still not in the focus of CC adaptation and mitigation, which has not grown much beyond the "greenhouse gases" (GHG) concept and leaves land degradation as the "forgotten side of climate change". A possible way to integrate the concepts of climate and desertification/land degradation is to treat the "GHG" approach as providing the global solution and the "land" approach as providing the local solution covering other locally manifested issues of global importance (biodiversity conservation, food security, disasters and risks, etc.), with the latter serving as a central concept among them. The SLM concept is a land-based approach which includes both the ecosystem-based approach (EbA) and the community-based approach (CbA). SLM can serve as an integral CC adaptation strategy, being based on the statement that "the more healthy and resilient the system is, the less vulnerable and more adaptive it will be to any external changes and forces, including climate". The biggest scientific issue is how to evaluate SLM and the results of SLM investments. We suggest an approach based on understanding the balance, or equilibrium, of the land and nature components as the major sign of a sustainable system. From this point of view it is easier to understand the state of ecosystem stress, the extent of "health", the range of adaptive capacity, the drivers of degradation and the nature of SLM, as well as extended land use and the concept of environmental land management as an improved SLM approach

  1. [An ecogenetic approach to studying the adaptation and human health].

    PubMed

    Novoradovskiĭ, A G; Agapova, R K; Spitsyn, V A; Ispolatov, A D; Shenin, V A

    1992-04-01

    Genetic markers--blood groups ABO, RH, MN; serum proteins HP, PI, TF, C3; erythrocyte enzymes ACP1, ESD, AK1, PGM1, GLO1, PGD, PGP; and others: PTC tasting, ear wax type and color vision--were studied in two aboriginal Buryatian populations of the Baikal Lake region, in Chitinskaya and Irkutskaya Provinces. The two samples were further divided into subgroups according to their health status ("healthy", "indefinite" and "sick") by means of a special regression procedure. The "healthy" subgroup of the Chitinskaya Province population is characterized by a higher frequency of PTC tasters: 0.871 vs. 0.757 in the "sick" part (χ² = 5.36, p < 0.05); a higher frequency of the phenotype PI M1M1: 0.734 in "healthy" vs. 0.547 in "sick" (χ² = 8.89, p < 0.01); a lower frequency of the PI M1M2 phenotype: 0.148 and 0.299, respectively (χ² = 7.49, p < 0.01); and frequencies of the phenotype TF C2C2 of 0.015 and 0.076 (χ² = 5.48, p < 0.05). In the Irkutskaya Province population, differences between the "healthy" and "sick" subgroups were discovered for blood group AB: "healthy" 0.046 and "sick" 0.175 (χ² = 11.28, p < 0.01), and for GC (1F-2): 0.214 and 0.116 (χ² = 4.45, p < 0.05). Other differences between "healthy" and "sick" in both populations are not significant. Some trends concerning heterozygosity at the GC, PGM and TF loci were discovered. The results are considered from the viewpoint of higher fitness of some genetic traits in the populations studied.

  2. The Colorado Climate Preparedness Project: A Systematic Approach to Assessing Efforts Supporting State-Level Adaptation

    NASA Astrophysics Data System (ADS)

    Klein, R.; Gordon, E.

    2010-12-01

    Scholars and policy analysts often contend that an effective climate adaptation strategy must entail "mainstreaming," or incorporating responses to possible climate impacts into existing planning and management decision frameworks. Such an approach, however, makes it difficult to assess the degree to which decision-making entities are engaging in adaptive activities that may or may not be explicitly framed around a changing climate. For example, a drought management plan may not explicitly address climate change, but the activities and strategies outlined in it may reduce vulnerabilities posed by a variable and changing climate. Consequently, generating a strategic climate adaptation plan requires identifying the entire suite of activities that are implicitly linked to climate and may affect adaptive capacity within the system. Here we outline a novel, two-pronged approach, leveraging social science methods, to understanding adaptation throughout state government in Colorado. First, we conducted a series of interviews with key actors in state and federal government agencies, non-governmental organizations, universities, and other entities engaged in state issues. The purpose of these interviews was to elicit information about current activities that may affect the state's adaptive capacity and to identify future climate-related needs across the state. Second, we have developed an interactive database cataloging organizations, products, projects, and people actively engaged in adaptive planning and policymaking relevant to the state of Colorado. The database includes a wiki interface, a dynamic component that enables frequent updating as climate-relevant information emerges. The results of this project are intended to paint a clear picture of sectors and agencies with higher and lower levels of adaptation awareness and to provide a roadmap for the next gubernatorial administration to pursue a more sophisticated climate adaptation agenda.

  3. Foetal blood sampling. Practical approach to management of foetal distress.

    PubMed

    Coltart, T M; Trickey, N R; Beard, R W

    1969-02-08

    The practical application of foetal blood sampling in the routine management of patients in labour has been reviewed in a six-month survey, during which time 1,668 patients were delivered at Queen Charlotte's Hospital. Foetal acidaemia (pH 7.25 or less) occurred in 45 of the 295 patients who showed clinical signs of foetal distress. Foetal tachycardia was the presenting sign in 33 of these 45 patients, underlining the importance of this physical sign. Foetal acidaemia in association with clinical foetal distress occurred twice as often in patients who had complications of pregnancy and who were therefore regarded as obstetrically "at risk" as it did in patients who were obstetrically "normal". No cases of acidaemia were detected in any of the foetal blood samples performed routinely on "at-risk" patients in the absence of clinical foetal distress.

  4. An effective plasma membrane proteomics approach for small tissue samples

    PubMed Central

    Smolders, Katrien; Lombaert, Nathalie; Valkenborg, Dirk; Baggerman, Geert; Arckens, Lutgarde

    2015-01-01

    Advancing the quest for new drug targets demands the development of innovative plasma membrane proteome research strategies applicable to small, functionally defined tissue samples. Biotinylation of acute tissue slices and streptavidin pull-down followed by shotgun proteomics allowed the selective extraction and identification of >1,600 proteins of which >60% are associated with the plasma membrane, including (G-protein coupled) receptors, ion channels and transporters, and this from mm³-scale tissue. PMID:26047021

  5. High Volume Air Sampling for Viral Aerosols: A Comparative Approach

    DTIC Science & Technology

    2010-03-01

    low, with the cotton swabbing only recovering 27.7 percent of the BA on the surface (Rose, Jensen, Peterson, Banerjee, & Arduino, 2004). A follow-on...BA were present on the surface (Hodges, Rose, Peterson, Noble-Wang, & Arduino, 2006). These lower sensitivities at low concentrations could be a...monitored during each sample collection period. Ambient pressure data was obtained hourly for Edmonton, AB from the Canadian Weather Service

  6. French Adaptation of the Narcissistic Personality Inventory in a Belgian French-Speaking Sample.

    PubMed

    Braun, Stéphanie; Kempenaers, Chantal; Linkowski, Paul; Loas, Gwenolé

    2016-01-01

    The Narcissistic Personality Inventory (NPI) is the most widely used self-report scale to assess the construct of narcissism, especially in its grandiosity expression. Over the years, several factor models have been proposed in order to improve the understanding of the multidimensional aspect of this construct. The available data are heterogeneous, suggesting from one to at least seven factors. In this study, we propose a French adaptation of the NPI submitted to a sample of Belgian French-speaking students (n = 942). We performed a principal component analysis on a tetrachoric correlation matrix to explore its factor structure. Unlike previous studies, our study shows that a first factor explains the largest part of the variance. Internal consistency is excellent and we reproduced the sex differences reported when using the original scale. Correlations with social desirability are taken into account in the interpretation of our results. Altogether, the results of this study support a unidimensional structure for the NPI using the total score as a self-report measure of the Narcissistic Personality Disorder in its grandiose form. Future studies, including confirmatory factor analysis and gender invariance measurement, are also discussed.

  7. French Adaptation of the Narcissistic Personality Inventory in a Belgian French-Speaking Sample

    PubMed Central

    Braun, Stéphanie; Kempenaers, Chantal; Linkowski, Paul; Loas, Gwenolé

    2016-01-01

    The Narcissistic Personality Inventory (NPI) is the most widely used self-report scale to assess the construct of narcissism, especially in its grandiosity expression. Over the years, several factor models have been proposed in order to improve the understanding of the multidimensional aspect of this construct. The available data are heterogeneous, suggesting from one to at least seven factors. In this study, we propose a French adaptation of the NPI submitted to a sample of Belgian French-speaking students (n = 942). We performed a principal component analysis on a tetrachoric correlation matrix to explore its factor structure. Unlike previous studies, our study shows that a first factor explains the largest part of the variance. Internal consistency is excellent and we reproduced the sex differences reported when using the original scale. Correlations with social desirability are taken into account in the interpretation of our results. Altogether, the results of this study support a unidimensional structure for the NPI using the total score as a self-report measure of the Narcissistic Personality Disorder in its grandiose form. Future studies, including confirmatory factor analysis and gender invariance measurement, are also discussed. PMID:28066299

  8. Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures

    SciTech Connect

    Calyam, Prasad

    2014-09-15

    The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.

  9. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
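
    To make the weighted binary matrix sampling idea above concrete, the sketch below implements one generic MPA-style iteration in Python: draw random binary sub-models with per-variable inclusion weights, score each sub-model by cross-validated RMSE, and re-weight variables by their frequency among the best-performing sub-models. The function name, the plain linear model, and all parameter values are illustrative assumptions, not the authors' released MATLAB code (see the SourceForge link above).

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        def wbms_select(X, y, n_models=500, top_frac=0.1, n_iter=20, seed=None):
            rng = np.random.default_rng(seed)
            n_vars = X.shape[1]
            weights = np.full(n_vars, 0.5)          # initial inclusion probabilities
            for _ in range(n_iter):
                # Weighted binary matrix: one random sub-model per row.
                B = rng.random((n_models, n_vars)) < weights
                scores = np.full(n_models, np.inf)
                for i, mask in enumerate(B):
                    if not mask.any():
                        continue
                    cv = cross_val_score(LinearRegression(), X[:, mask], y, cv=5,
                                         scoring="neg_root_mean_squared_error")
                    scores[i] = -cv.mean()          # RMSECV; smaller is better
                # Shrink the variable space: new weights are the inclusion
                # frequencies of each variable among the best sub-models.
                best = B[np.argsort(scores)[: max(1, int(top_frac * n_models))]]
                weights = best.mean(axis=0)
            return np.flatnonzero(weights > 0.5)    # variables retained at the end

    Under this scheme a variable that keeps dropping out of the top fraction sees its weight decay toward zero, so the sampled variable space shrinks at each step, mirroring the two rules highlighted in the abstract.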

  10. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.

  11. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  12. Evaluating adaptive governance approaches to sustainable water management in north-west Thailand.

    PubMed

    Clark, Julian R A; Semmahasak, Chutiwalanch

    2013-04-01

    Adaptive governance is advanced as a potent means of addressing institutional fit of natural resource systems with prevailing modes of political-administrative management. Its advocates also argue that it enhances participatory and learning opportunities for stakeholders over time. Yet an increasing number of studies demonstrate real difficulties in implementing adaptive governance 'solutions'. This paper builds on these debates by examining the introduction of adaptive governance to water management in Chiang Mai province, north-west Thailand. The paper considers, first, the limitations of current water governance modes at the provincial scale, and the rationale for implementation of an adaptive approach. The new approach is then critically examined, with its initial performance and likely future success evaluated by (i) analysis of water stakeholders' opinions of its first year of operation; and (ii) comparison of its governance attributes against recent empirical accounts of implementation difficulty and failure of adaptive governance of natural resource management more generally. The analysis confirms the potentially significant role that the new approach can play in brokering and resolving the underlying differences in stakeholder representation and knowledge construction at the heart of the prevailing water governance modes in north-west Thailand.

  13. An Enhanced Adaptive Management Approach for Remediation of Legacy Mercury in the South River

    PubMed Central

    Foran, Christy M.; Baker, Kelsie M.; Grosso, Nancy R.; Linkov, Igor

    2015-01-01

    Uncertainties about future conditions and the effects of chosen actions, as well as increasing resource scarcity, have been driving forces in the utilization of adaptive management strategies. However, many applications of adaptive management have been criticized for a number of shortcomings, including a limited ability to learn from actions and a lack of consideration of stakeholder objectives. To address these criticisms, we supplement existing adaptive management approaches with a decision-analytical approach that first informs the initial selection of management alternatives and then allows for periodic re-evaluation or phased implementation of management alternatives based on monitoring information and incorporation of stakeholder values. We describe the application of this enhanced adaptive management (EAM) framework to compare remedial alternatives for mercury in the South River, based on an understanding of the loading and behavior of mercury in the South River near Waynesboro, VA. The outcomes show that the ranking of remedial alternatives is influenced by uncertainty in the mercury loading model, by the relative importance placed on different criteria, and by cost estimates. The process itself demonstrates that a decision model can link project performance criteria, decision-maker preferences, environmental models, and short- and long-term monitoring information with management choices to help shape a remediation approach that provides useful information for adaptive, incremental implementation. PMID:25665032

  14. A flexible low-complexity device adaptation approach for data presentation

    NASA Astrophysics Data System (ADS)

    Rosenbaum, René; Gimenez, Alfredo; Schumann, Heidrun; Hamann, Bernd

    2011-01-01

    Visual data presentations require adaptation for appropriate display on a viewing device that is limited in resources such as computing power, screen estate, and/or bandwidth. Due to the complexity of suitable adaptation, the few proposed solutions available are either too resource-intensive or inflexible to be applied broadly. Effective use and acceptance of data visualization on constrained viewing devices require adaptation approaches that are tailored to the requirements of the user and the capabilities of the viewing device. We propose a predictive device adaptation approach that takes advantage of progressive data refinement. The approach relies on hierarchical data structures that are created once and used multiple times. By incrementally reconstructing the visual presentation on the client with increasing levels of detail and resource utilization, we can determine when to truncate the refinement of detail so as to use the resources of the device to their full capacities. To determine when to finish the refinement for a particular device, we introduce a profile-based strategy which also considers user preferences. We discuss the whole adaptation process from the storage of the data into a scalable structure to the presentation on the respective viewing device. This particular implementation is shown for two common data visualization methods, and empirical results we obtained from our experiments are presented and discussed.

  15. An enhanced adaptive management approach for remediation of legacy mercury in the South River.

    PubMed

    Foran, Christy M; Baker, Kelsie M; Grosso, Nancy R; Linkov, Igor

    2015-01-01

    Uncertainties about future conditions and the effects of chosen actions, as well as increasing resource scarcity, have been driving forces in the utilization of adaptive management strategies. However, many applications of adaptive management have been criticized for a number of shortcomings, including a limited ability to learn from actions and a lack of consideration of stakeholder objectives. To address these criticisms, we supplement existing adaptive management approaches with a decision-analytical approach that first informs the initial selection of management alternatives and then allows for periodic re-evaluation or phased implementation of management alternatives based on monitoring information and incorporation of stakeholder values. We describe the application of this enhanced adaptive management (EAM) framework to compare remedial alternatives for mercury in the South River, based on an understanding of the loading and behavior of mercury in the South River near Waynesboro, VA. The outcomes show that the ranking of remedial alternatives is influenced by uncertainty in the mercury loading model, by the relative importance placed on different criteria, and by cost estimates. The process itself demonstrates that a decision model can link project performance criteria, decision-maker preferences, environmental models, and short- and long-term monitoring information with management choices to help shape a remediation approach that provides useful information for adaptive, incremental implementation.

  16. A boosting approach for adapting the sparsity of risk prediction signatures based on different molecular levels.

    PubMed

    Sariyar, Murat; Schumacher, Martin; Binder, Harald

    2014-06-01

    Risk prediction models can link high-dimensional molecular measurements, such as DNA methylation, to clinical endpoints. For biological interpretation, often a sparse fit is desirable. Different molecular aggregation levels, such as considering DNA methylation at the CpG, gene, or chromosome level, might demand different degrees of sparsity. Hence, model building and estimation techniques should be able to adapt their sparsity according to the setting. Additionally, underestimation of coefficients, which is a typical problem of sparse techniques, should also be addressed. We propose a comprehensive approach, based on a boosting technique that allows a flexible adaptation of model sparsity and addresses these problems in an integrative way. The main motivation is to have an automatic sparsity adaptation. In a simulation study, we show that this approach reduces underestimation in sparse settings and selects more adequate model sizes than the corresponding non-adaptive boosting technique in non-sparse settings. Using different aggregation levels of DNA methylation data from a study in kidney carcinoma patients, we illustrate how automatically selected values of the sparsity tuning parameter can reflect the underlying structure of the data. In addition to that, prediction performance and variable selection stability is compared to the non-adaptive boosting approach.
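
    As background for the sparsity discussion above, the following is a minimal sketch of plain, non-adaptive componentwise L2 boosting, the family of techniques the paper extends; the authors' automatic sparsity adaptation is not reproduced here, and the fixed step size nu and all names are illustrative assumptions.

        import numpy as np

        def componentwise_l2_boost(X, y, steps=200, nu=0.1):
            """Plain componentwise L2 boosting; model sparsity shrinks as `steps` grows."""
            n, p = X.shape
            Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardized covariates
            beta = np.zeros(p)
            resid = y - y.mean()                        # start from the intercept fit
            for _ in range(steps):
                b = Xc.T @ resid / n                    # per-covariate LS coefficients
                j = int(np.argmax(np.abs(b)))           # component reducing loss most
                beta[j] += nu * b[j]                    # shrunken update of one entry
                resid -= nu * b[j] * Xc[:, j]           # refit on the new residuals
            return beta                                 # many coefficients stay zero

    Because each step updates only one coefficient, the number of boosting steps acts as the sparsity tuning parameter that the paper's approach selects automatically for each molecular aggregation level.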

  17. A fragment-based approach to the SAMPL3 Challenge

    NASA Astrophysics Data System (ADS)

    Kulp, John L.; Blumenthal, Seth N.; Wang, Qiang; Bryan, Richard L.; Guarnieri, Frank

    2012-05-01

    The success of molecular fragment-based design depends critically on the ability to make predictions of binding poses and of affinity ranking for compounds assembled by linking fragments. The SAMPL3 Challenge provides a unique opportunity to evaluate the performance of a state-of-the-art fragment-based design methodology with respect to these requirements. In this article, we present results derived from linking fragments to predict affinity and pose in the SAMPL3 Challenge. The goal is to demonstrate how incorporating different aspects of modeling protein-ligand interactions impact the accuracy of the predictions, including protein dielectric models, charged versus neutral ligands, ΔΔG solvation energies, and induced conformational stress. The core method is based on annealing of chemical potential in a Grand Canonical Monte Carlo (GC/MC) simulation. By imposing an initially very high chemical potential and then automatically running a sequence of simulations at successively decreasing chemical potentials, the GC/MC simulation efficiently discovers statistical distributions of bound fragment locations and orientations not found reliably without the annealing. This method accounts for configurational entropy, the role of bound water molecules, and results in a prediction of all the locations on the protein that have any affinity for the fragment. Disregarding any of these factors in affinity-rank prediction leads to significantly worse correlation with experimentally-determined free energies of binding. We relate three important conclusions from this challenge as applied to GC/MC: (1) modeling neutral ligands—regardless of the charged state in the active site—produced better affinity ranking than using charged ligands, although, in both cases, the poses were almost exactly overlaid; (2) simulating explicit water molecules in the GC/MC gave better affinity and pose predictions; and (3) applying a ΔΔG solvation correction further improved the affinity ranking.

  18. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
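
    A quick way to appreciate the result described above is to simulate it: for a fixed noise level, the distribution of the cosine similarity between a reference shape and its noise-corrupted copies can be estimated by Monte Carlo, and an adaptive rejection threshold read off as a low quantile. This sketch only illustrates the mechanism; the paper's contribution is the closed-form distribution that removes the need for such simulation. The toy reference waveform and all parameter values are assumptions.

        import numpy as np

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def adaptive_threshold(reference, sigma, alpha=0.01, n_trials=10000, seed=None):
            """Similarity threshold with false-rejection probability ~alpha."""
            rng = np.random.default_rng(seed)
            sims = np.array([
                cosine(reference, reference + rng.normal(0.0, sigma, reference.size))
                for _ in range(n_trials)
            ])
            return np.quantile(sims, alpha)   # reject shapes scoring below this

        # Example: an ECG-like reference cycle under strong additive white noise.
        t = np.linspace(0.0, 1.0, 200)
        ref = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
        threshold = adaptive_threshold(ref, sigma=0.8)

    Because the score distribution depends only on the noise level, the same threshold table can be reused for any analyzed shape, which is what makes the thresholding adaptive rather than shape-specific.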

  19. Acoustic sleepiness detection: framework and validation of a speech-adapted pattern recognition approach.

    PubMed

    Krajewski, Jarek; Batliner, Anton; Golz, Martin

    2009-08-01

    This article describes a general framework for detecting sleepiness states on the basis of prosody, articulation, and speech-quality-related speech characteristics. The advantages of this automatic real-time approach are that obtaining speech data is nonobtrusive and free from sensor application and calibration efforts. Different types of acoustic features derived from speech, speaker, and emotion recognition were employed (frame-level-based speech features). Combining these features with high-level contour descriptors, which capture the temporal information of frame-level descriptor contours, results in 45,088 features per speech sample. In general, the measurement process follows the speech-adapted steps of pattern recognition: (1) recording speech, (2) preprocessing, (3) feature computation (using perceptual and signal-processing-related features such as, e.g., fundamental frequency, intensity, pause patterns, formants, and cepstral coefficients), (4) dimensionality reduction, (5) classification, and (6) evaluation. After a correlation-filter-based feature subset selection employed on the feature space in order to find the most relevant features, different classification models were trained. The best model, namely the support-vector machine, achieved 86.1% classification accuracy in predicting sleepiness in a sleep deprivation study (two-class problem, N=12; 01.00-08.00 a.m.).
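
    Steps (4) through (6) of the measurement chain described above map naturally onto a standard pattern-recognition pipeline. The sketch below, using scikit-learn, chains feature scaling, a simple filter-type feature selection stage, and a support-vector classifier, evaluated with cross-validation. The random feature matrix merely stands in for the 45,088 acoustic features per speech sample, and every parameter choice is an illustrative assumption rather than the authors' configuration.

        import numpy as np
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 500))    # stand-in for per-sample speech features
        y = rng.integers(0, 2, size=120)   # sleepy vs. alert labels

        clf = Pipeline([
            ("scale", StandardScaler()),
            ("filter", SelectKBest(f_classif, k=50)),  # correlation-type filter
            ("svm", SVC(kernel="rbf", C=1.0)),
        ])
        # On random stand-in data this scores at chance; on real acoustic
        # features the same pipeline structure applies unchanged.
        print(cross_val_score(clf, X, y, cv=5).mean())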

  20. Neural Network Aided Adaptive Extended Kalman Filtering Approach for DGPS Positioning

    NASA Astrophysics Data System (ADS)

    Jwo, Dah-Jing; Huang, Hung-Chih

    2004-09-01

    The extended Kalman filter, when employed in the GPS receiver as the navigation state estimator, provides optimal solutions if the noise statistics for the measurement and system are completely known. In practice, the noise varies with time, which results in performance degradation. The covariance matching method is a conventional adaptive approach for estimation of noise covariance matrices. The technique attempts to make the actual filter residuals consistent with their theoretical covariance. However, this innovation-based adaptive estimation shows very noisy results if the window size is small. To resolve the problem, a multilayered neural network is trained to identify the measurement noise covariance matrix, in which the back-propagation algorithm is employed to iteratively adjust the link weights using the steepest descent technique. Numerical simulations show that based on the proposed approach the adaptation performance is substantially enhanced and the positioning accuracy is substantially improved.
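
    For reference, the conventional innovation-based covariance matching that the paper improves on can be sketched in a few lines: keep a sliding window of filter innovations, take their sample covariance, and subtract the part explained by the predicted state covariance to estimate the measurement noise covariance R. As the abstract notes, small windows make this estimate noisy, which is what motivates the neural-network alternative. This is a scalar-measurement sketch; the class name and window length are illustrative assumptions.

        import numpy as np
        from collections import deque

        class CovarianceMatcher:
            """Windowed innovation-based estimate of the measurement noise covariance R."""
            def __init__(self, window=30):
                self.innovations = deque(maxlen=window)

            def update(self, innovation, H, P):
                # innovation = z - H @ x_predicted (scalar measurement residual);
                # P is the predicted state covariance of the Kalman filter.
                self.innovations.append(innovation)
                C_v = float(np.mean(np.square(self.innovations)))  # sample covariance
                R_hat = C_v - (H @ P @ H.T).item()   # matching: C_v ≈ H P Hᵀ + R
                return max(R_hat, 1e-9)              # keep the estimate positive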

  1. A multi-band environment-adaptive approach to noise suppression for cochlear implants.

    PubMed

    Saki, Fatemeh; Mirzahasanloo, Taher; Kehtarnavaz, Nasser

    2014-01-01

    This paper presents an improved environment-adaptive noise suppression solution for the cochlear implants speech processing pipeline. This improvement is achieved by using a multi-band data-driven approach in place of a previously developed single-band data-driven approach. Seven commonly encountered noisy environments of street, car, restaurant, mall, bus, pub and train are considered to quantify the improvement. The results obtained indicate about 10% improvement in speech quality measures.

  2. Development of an Assistance Environment for Tutors Based on a Co-Adaptive Design Approach

    ERIC Educational Resources Information Center

    Lavoue, Elise; George, Sebastien; Prevot, Patrick

    2012-01-01

    In this article, we present a co-adaptive design approach named TE-Cap (Tutoring Experience Capitalisation) that we applied for the development of an assistance environment for tutors. Since tasks assigned to tutors in educational contexts are not well defined, we are developing an environment which responds to needs which are not precisely…

  3. Complexity Thinking in PE: Game-Centred Approaches, Games as Complex Adaptive Systems, and Ecological Values

    ERIC Educational Resources Information Center

    Storey, Brian; Butler, Joy

    2013-01-01

    Background: This article draws on the literature relating to game-centred approaches (GCAs), such as Teaching Games for Understanding, and dynamical systems views of motor learning to demonstrate a convergence of ideas around games as complex adaptive learning systems. This convergence is organized under the title "complexity thinking"…

  4. Adapting Evidence-Based Mental Health Treatments in Community Settings: Preliminary Results from a Partnership Approach

    ERIC Educational Resources Information Center

    Southam-Gerow, Michael A.; Hourigan, Shannon E.; Allin, Robert B., Jr.

    2009-01-01

    This article describes the application of a university-community partnership model to the problem of adapting evidence-based treatment approaches in a community mental health setting. Background on partnership research is presented, with consideration of methodological and practical issues related to this kind of research. Then, a rationale for…

  5. An Enhanced Approach to Combine Item Response Theory with Cognitive Diagnosis in Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Zheng, Chanjin; Chang, Hua-Hua

    2014-01-01

    Computerized adaptive testing offers the possibility of gaining information on both the overall ability and cognitive profile in a single assessment administration. Some algorithms aiming for these dual purposes have been proposed, including the shadow test approach, the dual information method (DIM), and the constraint weighted method. The…

  6. Three Authentic Curriculum-Integration Approaches to Bird Adaptations That Incorporate Technology and Thinking Skills

    ERIC Educational Resources Information Center

    Rule, Audrey C.; Barrera, Manuel T., III

    2008-01-01

    Integration of subject areas with technology and thinking skills is a way to help teachers cope with today's overloaded curriculum and to help students see the connectedness of different curriculum areas. This study compares three authentic approaches to teaching a science unit on bird adaptations for habitat that integrate thinking skills and…

  7. EXSPRT: An Expert Systems Approach to Computer-Based Adaptive Testing.

    ERIC Educational Resources Information Center

    Frick, Theodore W.; And Others

    Expert systems can be used to aid decision making. A computerized adaptive test (CAT) is one kind of expert system, although it is not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. EXSPRT-R uses random selection of test items,…

  8. Project Adapt: A Developmental Approach to Psycho-Motor Transfer. A Guide to Movement and Learning.

    ERIC Educational Resources Information Center

    Steele, Wah-Leeta

    Described is Project ADAPT (A Developmental Approach to Psychomotor Transfer), a validated program used with 808 primary grade children, some with learning difficulties, over a 3-year period to enhance academic readiness and self esteem through psychomotor training. An introductory project summary explains program objectives, the needs assessment…

  9. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    NASA Astrophysics Data System (ADS)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which the acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. From the Korean-spoken English speech recognition experiments, it is shown that ASR systems employing the state-tying and triphone-modeling level adaptation methods can relatively reduce the average word error rates (WERs) by 17.1% and 22.1% for non-native speech, respectively, when compared to a baseline ASR system.

  10. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures

    PubMed Central

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762

  11. Improving the sampling efficiency of the Grand Canonical Simulated Quenching approach

    SciTech Connect

    Perez, Danny; Vernon, Louis J.

    2012-04-04

    Most common atomistic simulation techniques, like molecular dynamics or Metropolis Monte Carlo, operate under a constant interatomic Hamiltonian with a fixed number of atoms. Internal (atom positions or velocities) or external (simulation cell size or geometry) variables are then evolved dynamically or stochastically to yield sampling in different ensembles, such as microcanonical (NVE), canonical (NVT), isothermal-isobaric (NPT), etc. Averages are then taken to compute relevant physical properties. At least two limitations of these standard approaches can seriously hamper their application to many important systems: (1) they do not allow for the exchange of particles with a reservoir, and (2) the sampling efficiency is insufficient to obtain converged results because of the very long intrinsic timescales associated with these quantities. To fix ideas, one might want to identify low (free) energy configurations of grain boundaries (GB). In reality, grain boundaries are in contact with the grains, which act as reservoirs of defects (e.g., vacancies and interstitials). Since the GB can exchange particles with its environment, the most stable configuration cannot provably be found by sampling from NVE or NVT ensembles alone: one needs to allow the number of atoms in the sample to fluctuate. The first limitation can be circumvented by working in the grand canonical ensemble (μVT) or its derivatives (such as the semi-grand-canonical ensemble useful for the study of substitutional alloys). Monte Carlo methods have been the first to adapt to this kind of system where the number of atoms is allowed to fluctuate. Many of these methods are based on the Widom insertion method [Widom63] where the chemical potential of a given chemical species can be inferred from the potential energy changes upon random insertion of a new particle within the simulation cell. Other techniques, such as the Gibbs ensemble Monte Carlo [Panagiotopoulos87] where exchanges of particles are
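
    The Widom insertion idea referenced above is compact enough to sketch directly: for each stored configuration, repeatedly insert a ghost particle at a random position, accumulate the Boltzmann factor of the resulting energy change, and read off the excess chemical potential as mu_ex = -kT ln <exp(-dU/kT)>. The Lennard-Jones pair potential, the cubic periodic box, and all names below are illustrative assumptions, not the project's code.

        import numpy as np

        def lj(r2, eps=1.0, sig=1.0):
            """Lennard-Jones pair energy as a function of squared distance."""
            s6 = (sig * sig / r2) ** 3
            return 4.0 * eps * (s6 * s6 - s6)

        def widom_mu_excess(configs, box, kT=1.0, n_insert=1000, seed=None):
            """configs: iterable of (N, 3) particle-position arrays; cubic box of side `box`."""
            rng = np.random.default_rng(seed)
            boltzmann = []
            for positions in configs:
                for _ in range(n_insert):
                    ghost = rng.random(3) * box          # random trial insertion
                    d = positions - ghost
                    d -= box * np.round(d / box)         # minimum-image convention
                    dU = float(np.sum(lj(np.sum(d * d, axis=1))))
                    boltzmann.append(np.exp(-dU / kT))
            return -kT * np.log(np.mean(boltzmann))      # mu_ex = -kT ln <e^(-dU/kT)>

    Insertions that overlap an existing particle contribute a vanishing Boltzmann factor, so the average is dominated by cavities, which is exactly why the method degrades at high density and motivates the more elaborate exchange schemes mentioned above.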

  12. ASICs Approach for the Implementation of a Symmetric Triangular Fuzzy Coprocessor and Its Application to Adaptive Filtering

    NASA Technical Reports Server (NTRS)

    Starks, Scott; Abdel-Hafeez, Saleh; Usevitch, Bryan

    1997-01-01

    This paper discusses the implementation of a fuzzy logic system using an ASICs design approach. The approach is based upon combining the inherent advantages of symmetric triangular membership functions and fuzzy singleton sets to obtain a novel structure for fuzzy logic system application development. The resulting structure utilizes a fuzzy static RAM to store the rule-base and the end-points of the triangular membership functions. This provides advantages over other approaches in which all sampled values of membership functions for all universes must be stored. The fuzzy coprocessor structure implements the fuzzification and defuzzification processes through a two-stage parallel pipeline architecture which is capable of executing complex fuzzy computations in less than 0.55 µs with an accuracy of more than 95%, thus making it suitable for a wide range of applications. Using the approach presented in this paper, a fuzzy logic rule-base can be directly downloaded via a host processor to an on-chip rule-base memory with a size of 64 words. The fuzzy coprocessor's design supports up to 49 rules for seven fuzzy membership functions associated with each of the chip's two input variables. This feature allows designers to create fuzzy logic systems without the need for additional on-board memory. Finally, the paper reports on simulation studies that were conducted for several adaptive filter applications using the least mean squared adaptive algorithm for adjusting the knowledge rule-base.
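
    A software analogue of the inference scheme described above is short: symmetric triangular membership functions are fully specified by a centre and a half-width (matching the chip's compact storage of end-points), consequents are fuzzy singletons, and defuzzification reduces to a weighted average. Seven membership functions per input give the same 7 x 7 = 49-rule capacity mentioned in the abstract; the rule table below is a toy assumption, not the chip's 64-word rule base.

        import numpy as np

        def tri(x, centre, half_width):
            """Degree of membership in a symmetric triangular fuzzy set."""
            return max(0.0, 1.0 - abs(x - centre) / half_width)

        centres = np.linspace(-3.0, 3.0, 7)     # seven sets per input variable
        HALF_WIDTH = 1.0
        singletons = np.linspace(-1.0, 1.0, 7)  # one singleton consequent per set

        def infer(x1, x2):
            """Fuzzify both inputs, fire all 49 rules, defuzzify by weighted average."""
            num = den = 0.0
            for i, c1 in enumerate(centres):
                for j, c2 in enumerate(centres):
                    w = tri(x1, c1, HALF_WIDTH) * tri(x2, c2, HALF_WIDTH)
                    out = 0.5 * (singletons[i] + singletons[j])   # toy rule table
                    num += w * out
                    den += w
            return num / den if den > 0.0 else 0.0

    Storing only centres and half-widths instead of sampled membership values is the same economy the ASIC exploits with its fuzzy static RAM.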

  13. Adaptive geostatistical sampling enables efficient identification of malaria hotspots in repeated cross-sectional surveys in rural Malawi

    PubMed Central

    Chipeta, Michael G.; McCann, Robert S.; Phiri, Kamija S.; van Vugt, Michèle; Takken, Willem; Diggle, Peter; Terlouw, Anja D.

    2017-01-01

    Introduction In the context of malaria elimination, interventions will need to target high burden areas to further reduce transmission. Current tools to monitor and report disease burden lack the capacity to continuously detect fine-scale spatial and temporal variations of disease distribution exhibited by malaria. These tools use random sampling techniques that are inefficient for capturing underlying heterogeneity, while health facility data in resource-limited settings are inaccurate. Continuous community surveys of malaria burden provide real-time results of local spatio-temporal variation. Adaptive geostatistical design (AGD) improves prediction of the outcome of interest compared to current random sampling techniques. We present findings of continuous malaria prevalence surveys using an adaptive sampling design. Methods We conducted repeated cross-sectional surveys guided by an adaptive sampling design to monitor the prevalence of malaria parasitaemia and anaemia in children below five years old in the communities living around Majete Wildlife Reserve in Chikwawa district, Southern Malawi. AGD sampling uses previously collected data to sample new locations of high prediction variance, or where prediction exceeds a set threshold. We fitted a geostatistical model to predict malaria prevalence in the area. Findings We conducted five rounds of sampling, and tested 876 children aged 6–59 months from 1377 households over a 12-month period. Malaria prevalence prediction maps showed spatial heterogeneity and presence of hotspots—where predicted malaria prevalence was above 30%; predictors of malaria included age, socio-economic status and ownership of insecticide-treated mosquito nets. Conclusions Continuous malaria prevalence surveys using adaptive sampling increased malaria prevalence prediction accuracy. Results from the surveys were readily available after data collection. The tool can assist local managers to target malaria control interventions in areas with the
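
    The adaptive step of an AGD loop like the one described above can be sketched with any geostatistical surrogate: fit a spatial model to the data collected so far, predict over candidate household locations, and send field teams to the sites with the highest prediction variance. The Gaussian-process surrogate, toy prevalence field, and batch size below are illustrative stand-ins for the paper's geostatistical model, not a reproduction of it.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def next_locations(sampled_xy, observed, candidate_xy, batch=20):
            """Pick the next survey sites where prediction uncertainty is largest."""
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
            gp.fit(sampled_xy, observed)
            _, sd = gp.predict(candidate_xy, return_std=True)
            return candidate_xy[np.argsort(sd)[-batch:]]   # highest-variance sites

        # One adaptive round on a unit-square study area (toy prevalence field):
        rng = np.random.default_rng(0)
        sampled = rng.random((50, 2))                      # locations already visited
        prevalence = np.sin(3 * sampled[:, 0]) * np.cos(3 * sampled[:, 1])
        candidates = rng.random((2000, 2))                 # unsampled households
        new_sites = next_locations(sampled, prevalence, candidates)

    Repeating this round after each survey concentrates sampling effort where the prevalence map is least certain, which is how the design detects hotspots more efficiently than fixed random sampling.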

  14. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  15. The Parent Version of the Preschool Social Skills Rating System: Psychometric Analysis and Adaptation with a German Preschool Sample

    ERIC Educational Resources Information Center

    Hess, Markus; Scheithauer, Herbert; Kleiber, Dieter; Wille, Nora; Erhart, Michael; Ravens-Sieberer, Ulrike

    2014-01-01

    The Social Skills Rating System (SSRS) developed by Gresham and Elliott (1990) is a multirater, norm-referenced instrument measuring social skills and adaptive behavior in preschool children. The aims of the present study were (a) to test the factorial structure of the Parent Form of the SSRS for the first time with a German preschool sample (391…

  16. Some Features of the Sampling Distribution of the Ability Estimate in Computerized Adaptive Testing According to Two Stopping Rules.

    ERIC Educational Resources Information Center

    Blais, Jean-Guy; Raiche, Gilles

    This paper examines some characteristics of the statistics associated with the sampling distribution of the proficiency level estimate when the Rasch model is used. These characteristics allow the judgment of the meaning to be given to the proficiency level estimate obtained in adaptive testing, and as a consequence, they can illustrate the…

  17. The role of adaptive management as an operational approach for resource management agencies

    USGS Publications Warehouse

    Johnson, B.L.

    1999-01-01

    In making resource management decisions, agencies use a variety of approaches that involve different levels of political concern, historical precedence, data analyses, and evaluation. Traditional decision-making approaches have often failed to achieve objectives for complex problems in large systems, such as the Everglades or the Colorado River. I contend that adaptive management is the best approach available to agencies for addressing this type of complex problem, although its success has been limited thus far. Traditional decision-making approaches have been fairly successful at addressing relatively straightforward problems in small, replicated systems, such as management of trout in small streams or pulp production in forests. However, this success may be jeopardized as more users place increasing demands on these systems. Adaptive management has received little attention from agencies for addressing problems in small-scale systems, but I suggest that it may be a useful approach for creating a holistic view of common problems and developing guidelines that can then be used in simpler, more traditional approaches to management. Although adaptive management may be more expensive to initiate than traditional approaches, it may be less expensive in the long run if it leads to more effective management. The overall goal of adaptive management is not to maintain an optimal condition of the resource, but to develop an optimal management capacity. This is accomplished by maintaining ecological resilience that allows the system to react to inevitable stresses, and generating flexibility in institutions and stakeholders that allows managers to react when conditions change. The result is that, rather than managing for a single, optimal state, we manage within a range of acceptable outcomes while avoiding catastrophes and irreversible negative effects. Copyright © 1999 by The Resilience Alliance.

  18. Universal digital high-resolution melt: a novel approach to broad-based profiling of heterogeneous biological samples.

    PubMed

    Fraley, Stephanie I; Hardick, Justin; Masek, Billie J; Athamanolap, Pornpat; Rothman, Richard E; Gaydos, Charlotte A; Carroll, Karen C; Wakefield, Teresa; Wang, Tza-Huei; Yang, Samuel

    2013-10-01

    Comprehensive profiling of nucleic acids in genetically heterogeneous samples is important for clinical and basic research applications. Universal digital high-resolution melt (U-dHRM) is a new approach to broad-based PCR diagnostics and profiling technologies that can overcome issues of poor sensitivity due to contaminating nucleic acids and poor specificity due to primer or probe hybridization inaccuracies for single nucleotide variations. The U-dHRM approach uses broad-based primers or ligated adapter sequences to universally amplify all nucleic acid molecules in a heterogeneous sample, which have been partitioned, as in digital PCR. Extensive assay optimization enables direct sequence identification by algorithm-based matching of melt curve shape and Tm to a database of known sequence-specific melt curves. We show that single-molecule detection and single nucleotide sensitivity are possible. The feasibility and utility of U-dHRM is demonstrated through detection of bacteria associated with polymicrobial blood infection and microRNAs (miRNAs) associated with host response to infection. U-dHRM using broad-based 16S rRNA gene primers demonstrates universal single cell detection of bacterial pathogens, even in the presence of larger amounts of contaminating bacteria; U-dHRM using universally adapted Lethal-7 miRNAs in a heterogeneous mixture showcases the single copy sensitivity and single nucleotide specificity of this approach.
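
    The algorithm-based matching step described above can be pictured as a nearest-neighbour search over a curve database: normalise the observed melt curve, estimate Tm from the steepest fluorescence drop, and combine a shape distance with a Tm penalty. This sketch is a hypothetical simplification of the authors' classifier; the distance measure, the weighting, and all names are assumptions.

        import numpy as np

        def classify_melt_curve(curve, temps, database, tm_weight=0.1):
            """database maps organism name -> (reference curve, reference Tm)."""
            norm = (curve - curve.mean()) / curve.std()
            # Tm: temperature of the steepest fluorescence drop along the curve.
            tm = temps[np.argmin(np.gradient(curve, temps))]
            best_name, best_dist = None, np.inf
            for name, (ref_curve, ref_tm) in database.items():
                ref_norm = (ref_curve - ref_curve.mean()) / ref_curve.std()
                dist = np.mean((norm - ref_norm) ** 2) + tm_weight * abs(tm - ref_tm)
                if dist < best_dist:
                    best_name, best_dist = name, dist
            return best_name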

  19. A comparison of adaptive sampling designs and binary spatial models: A simulation study using a census of Bromus inermis

    USGS Publications Warehouse

    Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa

    2013-01-01

    Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9% prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.

  20. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    PubMed Central

    Bonavolontà, Francesco; D'Apuzzo, Massimo; Liccardo, Annalisa; Vadursi, Michele

    2014-01-01

    The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate. PMID:25313493
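
    The acquisition idea above can be sketched end to end: sample a signal at a few random instants on a fine time grid (the role of the high-resolution time-basis), then reconstruct it from those samples by exploiting sparsity in a transform dictionary. The grid size, sparsity level, DCT dictionary, and the orthogonal matching pursuit solver are illustrative choices, not the authors' firmware.

        import numpy as np
        from scipy.fft import idct
        from sklearn.linear_model import OrthogonalMatchingPursuit

        N = 1024                                      # fine grid from the time-basis
        rng = np.random.default_rng(1)
        Psi = idct(np.eye(N), axis=0, norm="ortho")   # DCT synthesis dictionary
        s = np.zeros(N)
        s[[17, 43, 127]] = [1.0, 0.6, 0.3]            # sparse spectrum -> test signal
        x = Psi @ s                                   # waveform on the full fine grid

        m = 120                                       # samples the slow ADC can afford
        keep = np.sort(rng.choice(N, size=m, replace=False))   # random instants
        A = Psi[keep, :]                              # sensing matrix: observed rows

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False)
        omp.fit(A, x[keep])
        x_rec = Psi @ omp.coef_                       # signal recovered on full grid
        print(float(np.max(np.abs(x_rec - x))))       # near-zero reconstruction error

    With an exactly sparse spectrum and 120 of 1024 grid instants observed, the recovery is essentially exact, which is the mechanism that lets the effective sample rate exceed the ADC's native rate.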

  1. Hyperspectral Remote Sensing of the Coastal Ocean: Adaptive Sampling and Forecasting of In situ Optical Properties

    DTIC Science & Technology

    2002-09-30

    integrated observation system that is being coupled to a data assimilative hydrodynamic bio-optical ecosystem model. The system was used adaptively to develop hyperspectral remote sensing techniques in optically complex nearshore coastal waters.

  2. An adaptive online learning approach for Support Vector Regression: Online-SVR-FID

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Zio, Enrico

    2016-08-01

    Support Vector Regression (SVR) is a popular supervised data-driven approach for building empirical models from available data. Like all data-driven methods, under non-stationary environmental and operational conditions it needs to be provided with adaptive learning capabilities, which might become computationally burdensome with large datasets cumulating dynamically. In this paper, a cost-efficient online adaptive learning approach is proposed for SVR by combining Feature Vector Selection (FVS) and Incremental and Decremental Learning. The proposed approach adaptively modifies the model only when different pattern drifts are detected according to proposed criteria. Two tolerance parameters are introduced in the approach to control the computational complexity, reduce the influence of the intrinsic noise in the data and avoid the overfitting problem of SVR. Comparisons of the prediction results are made with other online learning approaches, e.g., NORMA, SOGA, KRLS and Incremental Learning, on several artificial datasets and a real case study concerning time series prediction based on data recorded on a component of a nuclear power generation system. The performance indicators MSE and MARE computed on the test dataset demonstrate the efficiency of the proposed online learning method.

  3. Station-keeping control for a stratospheric airship platform via fuzzy adaptive backstepping approach

    NASA Astrophysics Data System (ADS)

    Yang, Yueneng; Wu, Jie; Zheng, Wei

    2013-04-01

    This paper presents a novel approach for station-keeping control of a stratospheric airship platform in the presence of parametric uncertainty and external disturbance. First, conceptual design of the stratospheric airship platform is introduced, including the target mission, configuration, energy sources, propeller and payload. Second, the dynamics model of the airship platform is presented, and the mathematical model of its horizontal motion is derived. Third, a fuzzy adaptive backstepping control approach is proposed to develop the station-keeping control system for the simplified horizontal motion. The backstepping controller is designed assuming that the airship model is accurately known, and a fuzzy adaptive algorithm is used to approximate the uncertainty of the airship model. The stability of the closed-loop control system is proven via the Lyapunov theorem. Finally, simulation results illustrate the effectiveness and robustness of the proposed control approach.

  4. Adapting Evidence-based Mental Health Treatments in Community Settings: Preliminary Results from a Partnership Approach

    PubMed Central

    Southam-Gerow, Michael A.; Hourigan, Shannon E.; Allin, Robert B.

    2009-01-01

    This paper describes the application of a university-community partnership model to the problem of adapting evidence-based treatment approaches in a community mental health setting. Background on partnership research is presented, with consideration of methodological and practical issues related to this kind of research. Then, a rationale for using partnerships as a basis for conducting mental health treatment research is presented. Finally, an ongoing partnership research project concerned with the adaptation of evidence-based mental health treatments for childhood internalizing problems in community settings is presented, with preliminary results of the ongoing effort discussed. PMID:18697917

  5. A New Multi-Agent Approach to Adaptive E-Education

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Cheng, Peng

    Improving the degree of customer satisfaction is important in e-Education. This paper describes a new approach to adaptive e-Education that takes into account the full spectrum of Web service techniques and activities. It presents a multi-agent architecture based on artificial psychology techniques, which makes the e-Education process both adaptable and dynamic, and hence up-to-date. Knowledge base techniques are used to support the e-Education process, and artificial psychology techniques are used to deal with user psychology, making the e-Education system more effective and satisfying.

  6. A new approach for designing self-organizing systems and application to adaptive control

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Zhang, Shi; Lin, Yueqing; Huang, Song

    1993-01-01

    There is tremendous interest in the design of intelligent machines capable of autonomous learning and skillful performance under complex environments. A major task in designing such systems is to make the system plastic and adaptive when presented with new and useful information and stable in response to irrelevant events. A great body of knowledge, based on neuro-physiological concepts, has evolved as a possible solution to this problem. Adaptive resonance theory (ART) is a classical example under this category. The system dynamics of an ART network is described by a set of differential equations with nonlinear functions. An approach for designing self-organizing networks characterized by nonlinear differential equations is proposed.

  7. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology for the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not initially be represented by the analytic reference model. To overcome the interference among subsystems and to simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on optimal analog control and the prediction-based digital redesign technique for the sampled-data large-scale coupled system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupling property but also possesses good tracking performance in both the transient and the steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC.
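
    The ILC component admits a compact illustration. Below is a minimal sketch of a P-type ILC update on a toy scalar plant; the fixed learning gain `L` is an assumption (the paper searches for a good gain with evolutionary programming), and the decentralized digital-redesign tracker is not reproduced.

    ```python
    import numpy as np

    def run_trial(u, a=0.9, b=0.5):
        """Simulate a scalar discrete-time plant x[t+1] = a x[t] + b u[t]."""
        x = np.zeros(len(u) + 1)
        for t in range(len(u)):
            x[t + 1] = a * x[t] + b * u[t]
        return x[1:]                              # outputs at t = 1..N

    N = 50
    ref = np.sin(np.linspace(0, 2 * np.pi, N))    # desired trajectory
    u = np.zeros(N)
    L = 0.8                                       # learning gain (assumed)
    for k in range(30):                           # repeated trials
        e = ref - run_trial(u)
        u = u + L * e                             # ILC update: u_{k+1} = u_k + L e_k
    print("final tracking RMS error:", np.sqrt(np.mean(e ** 2)))
    ```

    Convergence here follows from |1 - L b| < 1; with b = 0.5 and L = 0.8 the per-trial error contracts by a factor of 0.6.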

  8. Adaptation and psychometric properties of the student career construction inventory for a Portuguese sample: formative and reflective constructs.

    PubMed

    Rocha, Magda; Guimarães, Maria Isabel

    2012-12-01

    The adaptation of the student career construction inventory was carried out with a Portuguese sample of 356 first-year economics, management, psychology, nursing, nutrition sciences, bio-engineering, and biosciences students (244 women, 112 men; M age = 19.4, SD = 4.4) in the Catholic University of Portugal, Porto. Confirmatory factorial analysis supported the prior structure of the reflective models, with acceptable fit indexes. Internal consistency coefficients for the scales were poor to acceptable (.51 to .89). The formative nature of career adaptability was supported in a complex model identified by structural relations for which the fit indexes were weak but acceptable for a preliminary study.

  9. Simulation model based approach for long exposure atmospheric point spread function reconstruction for laser guide star multiconjugate adaptive optics.

    PubMed

    Gilles, Luc; Correia, Carlos; Véran, Jean-Pierre; Wang, Lianqi; Ellerbroek, Brent

    2012-11-01

    This paper discusses an innovative simulation model based approach for long exposure atmospheric point spread function (PSF) reconstruction in the context of laser guide star (LGS) multiconjugate adaptive optics (MCAO). The approach is inspired from the classical scheme developed by Véran et al. [J. Opt. Soc. Am. A 14, 3057 (1997)] and Flicker et al. [Astron. Astrophys. 400, 1199 (2003)] and reconstructs the long exposure optical transfer function (OTF), i.e., the Fourier transformed PSF, as a product of separate long-exposure tip/tilt removed and tip/tilt OTFs, each estimated by postprocessing system and simulation telemetry data. Sample enclosed energy results assessing reconstruction accuracy are presented for the Thirty Meter Telescope LGS MCAO system currently under design and show that percent-level absolute and differential photometry over a 30 arcsec diameter field of view is achievable provided the simulation model faithfully represents the real system.

  10. A Direct Adaptive Control Approach in the Presence of Model Mismatch

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.; Tao, Gang; Khong, Thuan

    2009-01-01

    This paper considers the problem of direct model reference adaptive control when the plant-model matching conditions are violated due to abnormal changes in the plant or incorrect knowledge of the plant's mathematical structure. The approach consists of direct adaptation of state feedback gains for state tracking, and simultaneous estimation of the plant-model mismatch. Because of the mismatch, the plant can no longer track the state of the original reference model, but may be able to track a new reference model that still provides satisfactory performance. The reference model is updated if the estimated plant-model mismatch exceeds a bound that is determined via robust stability and/or performance criteria. The resulting controller is a hybrid direct-indirect adaptive controller that offers asymptotic state tracking in the presence of plant-model mismatch as well as parameter deviations.
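
    The direct-adaptation part of the scheme can be illustrated with a textbook scalar MRAC loop. The sketch below shows only state-feedback gain adaptation toward a fixed reference model; the paper's mismatch estimation and reference-model updating are omitted, and all numerical values are illustrative.

    ```python
    import numpy as np

    dt, T = 1e-3, 10.0
    a, b = 2.0, 1.0            # "unknown" unstable plant: xdot = a x + b u
    am, bm = -4.0, 4.0         # stable reference model: xmdot = am xm + bm r
    kx, kr = 0.0, 0.0          # adaptive feedback/feedforward gains
    gx, gr = 10.0, 10.0        # adaptation gains (assumed)
    x = xm = e = 0.0
    for k in range(int(T / dt)):
        r = 1.0 if (k * dt) % 4 < 2 else -1.0     # square-wave reference
        u = kx * x + kr * r
        e = x - xm                                 # state tracking error
        # Lyapunov-based adaptation laws, sign(b) > 0 assumed known
        kx -= gx * e * x * dt
        kr -= gr * e * r * dt
        x += (a * x + b * u) * dt                  # plant (Euler integration)
        xm += (am * xm + bm * r) * dt              # reference model
    print("final tracking error:", e)
    ```

    With perfect matching the gains would converge near kx* = (am - a)/b = -6 and kr* = bm/b = 4; the paper's contribution concerns precisely the case where no such matching gains exist.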

  11. A simple and flexible graphical approach for adaptive group-sequential clinical trials.

    PubMed

    Sugitani, Toshifumi; Bretz, Frank; Maurer, Willi

    2016-01-01

    In this article, we introduce a graphical approach to testing multiple hypotheses in group-sequential clinical trials allowing for midterm design modifications. It is intended for structured study objectives in adaptive clinical trials and extends the graphical group-sequential designs from Maurer and Bretz (Statistics in Biopharmaceutical Research 2013; 5: 311-320) to adaptive trial designs. The resulting test strategies can be visualized graphically and performed iteratively. We illustrate the methodology with two examples from our clinical trial practice. First, we consider a three-armed gold-standard trial with the option to reallocate patients to either the test drug or the active control group, while stopping the recruitment of patients to placebo, after having demonstrated superiority of the test drug over placebo at an interim analysis. Second, we consider a confirmatory two-stage adaptive design with treatment selection at interim.

  12. Synchronization of Coupled Reaction-Diffusion Neural Networks With Directed Topology via an Adaptive Approach.

    PubMed

    Zhang, Hao; Sheng, Yin; Zeng, Zhigang

    2017-03-15

    This paper investigates the synchronization issue of coupled reaction-diffusion neural networks with directed topology via an adaptive approach. Due to the complexity of the network structure and the presence of space variables, it is difficult to design proper adaptive strategies on coupling weights to accomplish the synchronous goal. Under the assumptions of two kinds of special network structures, that is, a directed spanning path and a directed spanning tree, some novel edge-based adaptive laws, which fully utilize the local information of node dynamics, are designed for the coupling weights to reach synchronization. By constructing an appropriate energy function and utilizing some analytical techniques, several sufficient conditions are given. Finally, some simulation examples are given to verify the effectiveness of the obtained theoretical results.

  13. Teacher and Student-Focused Approaches: Influence of Learning Approach and Self-Efficacy in a Psychology Postgraduate Sample

    ERIC Educational Resources Information Center

    Kaye, Linda K.; Brewer, Gayle

    2013-01-01

    The current study examined approaches to teaching in a postgraduate psychology sample. This included considering teaching-focused (information transfer) and student-focused (conceptual changes in understanding) approaches to teaching. Postgraduate teachers of psychology (N = 113) completed a questionnaire measuring their use of a teacher- or…

  14. RECRUITING FOR A LONGITUDINAL STUDY OF CHILDREN'S HEALTH USING A HOUSEHOLD-BASED PROBABILITY SAMPLING APPROACH

    EPA Science Inventory

    The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach (visit our website for more information on the NCS: www.nationalchildrensstudy.gov). The full sample is designed to be representative of ...

  15. Establishing Interpretive Consistency When Mixing Approaches: Role of Sampling Designs in Evaluations

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.

    2013-01-01

    The goal of this chapter is to recommend quality criteria to guide evaluators' selections of sampling designs when mixing approaches. First, we contextualize our discussion of quality criteria and sampling designs by discussing the concept of interpretive consistency and how it impacts sampling decisions. Embedded in this discussion are…

  16. An implicit and adaptive nonlinear frequency domain approach for periodic viscous flows

    NASA Astrophysics Data System (ADS)

    Mosahebi, A.; Nadarajah, S.

    2014-12-01

    An implicit nonlinear Lower-Upper symmetric Gauss-Seidel (LU-SGS) solver has been extended to the adaptive Nonlinear Frequency Domain method (adaptive NLFD) for periodic viscous flows. The discretized equations are linearized in both the spatial and temporal directions, yielding an innovative segregated approach, where the effects of the neighboring cells are transferred to the right-hand side and updated iteratively. This property of the solver is aligned with the adaptive NLFD concept, in which different cells have different numbers of modes and hence should be treated individually. The segregated analysis of the modal equations avoids assembling and inverting a large left-hand-side matrix when a high number of modes is involved. This is an important characteristic for a flow solver selected for the adaptive NLFD method, where high modal content may be required in highly unsteady parts of the flow field. The implicit nonlinear LU-SGS solver has been demonstrated to be both robust and computationally efficient as the number of modes is increased. The developed solver is thoroughly validated for laminar vortex shedding behind a stationary cylinder, a high-angle-of-attack NACA0012 airfoil, and a plunging NACA0012 airfoil. An order-of-magnitude improvement in computational time is observed with the developed implicit approach over the classical modified five-stage Runge-Kutta method.

  17. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
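
    The isolation logic described here reduces to a simple test over a bank of residuals. The following toy sketch shows that test only; the residual values, thresholds, and fault hypotheses are stand-ins for the paper's nonlinear adaptive estimators and adaptive threshold designs.

    ```python
    import numpy as np

    def isolate(residuals, thresholds):
        """Return indices of fault hypotheses NOT excluded by the residual test.

        residuals[i], thresholds[i]: arrays produced by the i-th adaptive
        estimator. Hypothesis i survives only if every residual component
        stays below its threshold; all others are excluded.
        """
        return [i for i, (r, th) in enumerate(zip(residuals, thresholds))
                if np.all(np.abs(r) < th)]

    # toy example: three hypotheses (sensor, actuator, component fault)
    residuals = [np.array([0.2, 0.1]),    # consistent with hypothesis 0
                 np.array([1.5, 0.3]),    # exceeds threshold -> excluded
                 np.array([0.4, 2.2])]    # exceeds threshold -> excluded
    thresholds = [np.array([0.5, 0.5])] * 3
    print("consistent fault hypotheses:", isolate(residuals, thresholds))  # [0]
    ```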

  18. Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization.

    SciTech Connect

    Romero, Vicente JosÔe; Chen, Chun-Hung (George Mason University, Fairfax, VA)

    2006-11-01

    A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks ''Is that alternative better or worse than this one?'' -not ''HOW MUCH better or worse is that alternative to this one?'' The answer to the latter question requires precise characterization of the uncertainty--with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and uniquely has mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so could be useful as a reference standard against which the efficiency and robustness of other methods can be compared--analogous to the role that Monte Carlo simulation plays in uncertainty propagation.

  19. An Adaptive Particle Filtering Approach to Tracking Modes in a Varying Shallow Ocean Environment

    SciTech Connect

    Candy, J V

    2011-03-22

    The shallow ocean environment is ever changing, mostly due to temperature variations in its upper layers (< 100 m), which directly affect sound propagation throughout. The need to develop processors that are capable of tracking these changes implies a stochastic as well as an 'adaptive' design. The stochastic requirement follows directly from the multitude of variations created by uncertain parameters and noise. Some work has been accomplished in this area, but the stochastic nature was constrained to Gaussian uncertainties. It has long been clear that this constraint is not particularly realistic, leading to a Bayesian approach that enables the representation of any uncertainty distribution. Sequential Bayesian techniques enable a class of processors capable of performing in an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean. In this paper, adaptive processors providing enhanced signals for acoustic hydrophone measurements on a vertical array, as well as enhanced modal function estimates, are developed. Synthetic data are provided to demonstrate that this approach is viable.
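
    As background, the sequential Bayesian machinery referred to here is typically realized with a particle filter. A minimal bootstrap (sequential importance resampling) filter for a toy scalar state is sketched below; the paper's processors instead track modal functions in a shallow-ocean propagation model, so every model parameter here is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 500, 60                     # particles, time steps
    x_true = 0.0
    particles = rng.normal(0, 1, N)
    weights = np.full(N, 1.0 / N)
    for t in range(T):
        x_true = 0.95 * x_true + rng.normal(0, 0.3)           # true state
        z = x_true + rng.normal(0, 0.5)                       # noisy measurement
        particles = 0.95 * particles + rng.normal(0, 0.3, N)  # propagate
        weights *= np.exp(-0.5 * ((z - particles) / 0.5) ** 2)  # likelihood
        weights /= weights.sum()
        if 1.0 / np.sum(weights ** 2) < N / 2:   # resample when degenerate
            idx = rng.choice(N, N, p=weights)
            particles, weights = particles[idx], np.full(N, 1.0 / N)
        estimate = np.sum(weights * particles)
    print("true:", round(x_true, 3), "estimate:", round(estimate, 3))
    ```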

  20. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-03-11

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions.

  1. A Three-Step Approach with Adaptive Additive Magnitude Selection for the Sharpening of Images

    PubMed Central

    Lee, Tien-Lin

    2014-01-01

    Aiming to find the additive magnitude automatically and adaptively, we propose a three-step, model-based approach for the sharpening of images in this paper. In the first pass, a Grey prediction model is applied to find a global maximal additive magnitude so that oversharpening of the images can be avoided. During the second pass, edge pixels are picked out with our previously proposed edge detection mechanism. In this pass, a low-pass filter is also applied so that isolated pixels are not regarded as lying near an edge. In the final pass, the pixels detected as near an edge are adjusted adaptively based on local statistics, and the nonedge pixels are kept unaltered. Extensive experiments on natural as well as medical images, with subjective and objective evaluations, demonstrate the usefulness of the proposed approach. PMID:25309951
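
    A rough sketch of the three-pass structure, under stated assumptions: a fixed `max_mag` replaces the Grey-model choice of the maximal additive magnitude, and a Sobel gradient stands in for the authors' edge detection mechanism.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def sharpen(img, max_mag=0.8, edge_thresh=0.1):
        img = img.astype(float)
        blurred = gaussian_filter(img, sigma=1.0)      # low-pass filtering
        detail = img - blurred                         # high-frequency component
        grad = np.hypot(sobel(blurred, 0), sobel(blurred, 1))
        edges = grad > edge_thresh * grad.max()        # pass 2: edge pixels only
        boost = np.clip(detail, -max_mag, max_mag)     # pass 1: cap the magnitude
        out = img.copy()
        out[edges] += boost[edges]                     # pass 3: adjust edge pixels
        return out                                     # nonedge pixels unaltered

    print(sharpen(np.tile(np.linspace(0, 1, 8), (8, 1))).shape)
    ```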

  2. Towards a System Level Understanding of Non-Model Organisms Sampled from the Environment: A Network Biology Approach

    PubMed Central

    Williams, Tim D.; Turan, Nil; Diab, Amer M.; Wu, Huifeng; Mackenzie, Carolynn; Bartie, Katie L.; Hrydziuszko, Olga; Lyons, Brett P.; Stentiford, Grant D.; Herbert, John M.; Abraham, Joseph K.; Katsiadaki, Ioanna; Leaver, Michael J.; Taggart, John B.; George, Stephen G.; Viant, Mark R.; Chipman, Kevin J.; Falciani, Francesco

    2011-01-01

    The acquisition and analysis of datasets including multi-level omics and physiology from non-model species, sampled from field populations, is a formidable challenge, which so far has prevented the application of systems biology approaches. If successful, these could contribute enormously to improving our understanding of how populations of living organisms adapt to environmental stressors relating to, for example, pollution and climate. Here we describe the first application of a network inference approach integrating transcriptional, metabolic and phenotypic information representative of wild populations of the European flounder fish, sampled at seven estuarine locations in northern Europe with different degrees and profiles of chemical contaminants. We identified network modules, whose activity was predictive of environmental exposure and represented a link between molecular and morphometric indices. These sub-networks represented both known and candidate novel adverse outcome pathways representative of several aspects of human liver pathophysiology such as liver hyperplasia, fibrosis, and hepatocellular carcinoma. At the molecular level these pathways were linked to TNF alpha, TGF beta, PDGF, AGT and VEGF signalling. More generally, this pioneering study has important implications as it can be applied to model molecular mechanisms of compensatory adaptation to a wide range of scenarios in wild populations. PMID:21901081

  4. A new approach to importance sampling for the simulation of false alarms [in radar systems]

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1987-01-01

    In this paper a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution, the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell-averaging system, by combining this technique with group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 compared to the previously known importance sampling approach.
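
    The idea behind importance sampling for low false alarm rates is easy to demonstrate: draw from a proposal concentrated in the rare-event region and reweight by the likelihood ratio so the estimator stays unbiased. A toy Gaussian example (not the paper's modified estimator) is shown below.

    ```python
    import numpy as np

    # Estimate P(X > t) for X ~ N(0,1), roughly 1e-6 at t = 4.75, by sampling
    # from a proposal N(t, 1) shifted onto the threshold.
    rng = np.random.default_rng(1)
    t, n = 4.75, 10_000
    y = rng.normal(t, 1.0, n)                               # proposal draws
    w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - t) ** 2)   # N(0,1)/N(t,1) ratio
    p_hat = np.mean((y > t) * w)                            # unbiased estimate
    print(f"importance-sampling estimate: {p_hat:.3e}")     # ~1.0e-6
    ```

    A naive Monte Carlo estimate at this rate would need on the order of 10^8 runs to see even a handful of exceedances, which is the convergence gap importance sampling closes.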

  5. Psychometric Properties of the Schedule for Nonadaptive and Adaptive Personality in a PTSD Sample

    ERIC Educational Resources Information Center

    Wolf, Erika J.; Harrington, Kelly M.; Miller, Mark W.

    2011-01-01

    This study evaluated the psychometric characteristics of the Schedule for Nonadaptive and Adaptive Personality (SNAP; Clark, 1996) in 280 individuals who screened positive for posttraumatic stress disorder (PTSD). The SNAP validity, trait, temperament, and personality disorder (PD) scales were compared with scales on the Brief Form of the…

  6. An Investigation of Adaptive Signal Processing Approaches to Active Combustion Control

    DTIC Science & Technology

    2001-06-01

    stabilizing control using an adaptive feedback architecture. As discussed by Annaswamy et al. (1998), previous researchers have not been able to ... accurately represents the dynamics of the limit cycling system and can ultimately be used for stabilizing control. System Identification: The approach to ... achieve stabilizing control. The first is easily identifiable as a feedback loop instability (see Equation 4), whereas the second is less well-defined as a

  7. Performance Monitoring and Assessment of Neuro-Adaptive Controllers for Aerospace Applications Using a Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Guenther, Kurt; Hodgkinson, John; Jacklin, Stephen; Richard, Michael; Schumann, Johann; Soares, Fola

    2005-01-01

    Modern exploration missions require modern control systems -- control systems that can handle catastrophic changes in the system's behavior, compensate for slow deterioration in sustained operations, and support fast system identification. Adaptive controllers based upon neural networks have these capabilities, but they can only be used safely if proper verification and validation (V&V) can be done. In this paper we present our V&V approach and simulation results within NASA's Intelligent Flight Control Systems (IFCS).

  8. Adaptation of ATI-R Scale to Turkish Samples: Validity and Reliability Analyses

    ERIC Educational Resources Information Center

    Tezci, Erdogan

    2017-01-01

    Teachers' teaching approaches have become an important issue in the search of quality in education and teaching because of their effect on students' learning. Improvements in teachers' knowledge and awareness of their own teaching approaches enable them to adopt teaching process in accordance with their students' learning styles. The Approaches to…

  9. Forging tool shape optimization using pseudo inverse approach and adaptive incremental approach

    NASA Astrophysics Data System (ADS)

    Halouani, A.; Meng, F. J.; Li, Y. M.; Labergère, C.; Abbès, B.; Lafon, P.; Guo, Y. Q.

    2013-05-01

    This paper presents a simplified finite element method called "Pseudo Inverse Approach" (PIA) for tool shape design and optimization in multi-step cold forging processes. The approach is based on the knowledge of the final part shape. Some intermediate configurations are introduced and corrected by using a free surface method to consider the deformation paths without contact treatment. A robust direct algorithm of plasticity is implemented by using the equivalent stress notion and tensile curve. Numerical tests have shown that the PIA is very fast compared to the incremental approach. The PIA is used in an optimization procedure to automatically design the shapes of the preform tools. Our objective is to find the optimal preforms which minimize the equivalent plastic strain and punch force. The preform shapes are defined by B-Spline curves. A simulated annealing algorithm is adopted for the optimization procedure. The forging results obtained by the PIA are compared to those obtained by the incremental approach to show the efficiency and accuracy of the PIA.

  10. Adaption of egg and larvae sampling techniques for lake sturgeon and broadcast spawning fishes in a deep river

    USGS Publications Warehouse

    Roseman, Edward F.; Kennedy, Gregory W.; Craig, Jaquelyn; Boase, James; Soper, Karen

    2011-01-01

    In this report we describe how we adapted two techniques for sampling lake sturgeon (Acipenser fulvescens) and other fish early life history stages to meet our research needs in the Detroit River, a deep, flowing Great Lakes connecting channel. First, we developed a buoy-less method for sampling fish eggs and spawning activity using egg mats deployed on the river bottom. The buoy-less method allowed us to fish gear in areas frequented by boaters and recreational anglers, thus eliminating surface obstructions that interfered with recreational and boating activities. The buoy-less method also reduced gear loss due to drift when masses of floating aquatic vegetation would accumulate on buoys and lines, increasing the drag on the gear and pulling it downstream. Second, we adapted a D-frame drift net system formerly employed in shallow streams to assess larval lake sturgeon dispersal for use in the deeper (>8 m) Detroit River using an anchor and buoy system.

  11. An adaptive locally linear embedding manifold learning approach for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.

    2015-05-01

    Algorithms for spectral analysis commonly use parametric or linear models of the data. Research has shown, however, that hyperspectral data -- particularly in materially cluttered scenes -- are not always well-modeled by statistical or linear methods. Here, we propose an approach to hyperspectral target detection that is based on a graph theory model of the data and a manifold learning transformation. An adaptive nearest neighbor (ANN) graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation. The artificial target manifold helps to guide the separation of the target data from the background data in the new, transformed manifold coordinates. Then, target detection is performed in the manifold space using Spectral Angle Mapper. This methodology is an improvement over previous iterations of this approach due to the incorporation of ANN, the artificial target manifold, and the choice of detector in the transformed space. We implement our approach in a spatially local way: the image is delineated into square tiles, and the detection maps are normalized across the entire image. Target detection results will be shown using laboratory-measured and scene-derived target data from the SHARE 2012 collect.
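
    A rough sketch of such a pipeline, with simplifications: a fixed neighbor count instead of the adaptive nearest-neighbor graph, a single appended target spectrum instead of an artificially induced target manifold, and synthetic data in place of the SHARE 2012 imagery.

    ```python
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    rng = np.random.default_rng(0)
    background = rng.normal(0, 1, (200, 50))   # 200 pixels x 50 bands (toy)
    target = rng.normal(3, 1, 50)              # assumed target signature
    X = np.vstack([background, target])

    # LLE transformation of the joint background + target data
    emb = LocallyLinearEmbedding(n_neighbors=10, n_components=3).fit_transform(X)
    t_emb, px = emb[-1], emb[:-1]

    def sam(a, b):
        """Spectral Angle Mapper: angle between two vectors (radians)."""
        c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(c, -1.0, 1.0))

    # detection in the manifold coordinates: small angle = target-like
    angles = np.array([sam(p, t_emb) for p in px])
    print("most target-like pixel:", angles.argmin(), "angle:", angles.min())
    ```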

  12. Adaptive Critic Neural Network-Based Terminal Area Energy Management and Approach and Landing Guidance

    NASA Technical Reports Server (NTRS)

    Grantham, Katie

    2003-01-01

    Reusable Launch Vehicles (RLVs) have different mission requirements than the Space Shuttle, which is used for benchmark guidance design. Therefore, alternative Terminal Area Energy Management (TAEM) and Approach and Landing (A/L) Guidance schemes can be examined in the interest of cost reduction. A neural network based solution for a finite horizon trajectory optimization problem is presented in this paper. In this approach the optimal trajectory of the vehicle is produced by adaptive critic based neural networks, which were trained off-line to maintain a gradual glideslope.

  13. Reducing False Negative Reads in RFID Data Streams Using an Adaptive Sliding-Window Approach

    PubMed Central

    Massawe, Libe Valentine; Kinyua, Johnson D. M.; Vermaak, Herman

    2012-01-01

    Unreliability of the data streams generated by RFID readers is among the primary factors which limit the widespread adoption of the RFID technology. RFID data cleaning is, therefore, an essential task in the RFID middleware systems in order to reduce reading errors, and to allow these data streams to be used to make a correct interpretation and analysis of the physical world they are representing. In this paper we propose an adaptive sliding-window based approach called WSTD which is capable of efficiently coping with both environmental variation and tag dynamics. Our experimental results demonstrate the efficacy of the proposed approach. PMID:22666027
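
    The trade-off an adaptive window manages can be sketched in a few lines: grow the window when reads are sparse (to suppress false negatives) and shrink it when reads are dense (to track tag movement and avoid false positives). This is only a conceptual illustration with made-up parameters, not the WSTD update rules.

    ```python
    import numpy as np

    def clean(read_times, horizon, w_min=1.0, w_max=8.0, step=0.5):
        """Smooth a raw RFID read stream into per-epoch presence decisions."""
        reads = np.asarray(read_times)
        w, presence = w_min, []
        for t in np.arange(0.0, horizon, 1.0):            # one epoch per unit time
            n = np.sum((reads > t - w) & (reads <= t))    # reads in current window
            presence.append(bool(n > 0))                  # present if any read seen
            # adapt: widen on sparse reads, narrow on dense reads
            w = min(w + step, w_max) if n <= 1 else max(w - step, w_min)
        return presence

    reads = [0.5, 1.2, 4.8, 5.1, 9.7]          # sparse, bursty raw read stream
    print(clean(reads, horizon=12))
    ```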

  14. Adaptive quasi-Newton algorithm for source extraction via CCA approach.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian; Feng, Da-Zheng

    2014-04-01

    This paper addresses the problem of adaptive source extraction via the canonical correlation analysis (CCA) approach. Based on Liu's analysis of the CCA approach, we propose a new criterion for source extraction, which is proved to be equivalent to the CCA criterion. Then, a fast and efficient online algorithm using quasi-Newton iteration is developed. The stability of the algorithm is also analyzed using Lyapunov's method, which shows that the proposed algorithm asymptotically converges to the global minimum of the criterion. Simulation results are presented to support our theoretical analysis and demonstrate the merits of the proposed algorithm in terms of convergence speed and success rate of source extraction.

  15. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples.

    PubMed

    Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F

    2015-11-18

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost-effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error.

  16. A New Approach to Interference Excision in Radio Astronomy: Real-Time Adaptive Cancellation

    NASA Astrophysics Data System (ADS)

    Barnbaum, Cecilia; Bradley, Richard F.

    1998-11-01

    Every year, an increasing amount of radio-frequency (RF) spectrum in the VHF, UHF, and microwave bands is being utilized to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Such services already cause problems for radio astronomy even at very remote observing sites, and the potential for this form of light pollution to grow is alarming. Preventive measures to eliminate interference through FCC legislation and ITU agreements can be effective; however, many times this approach is inadequate and interference excision at the receiver is necessary. Conventional techniques such as RF filters, RF shielding, and postprocessing of data have been only somewhat successful, but none has been sufficient. Adaptive interference cancellation is a real-time approach to interference excision that has not been used before in radio astronomy. We describe here, for the first time, adaptive interference cancellation in the context of radio astronomy instrumentation, and we present initial results for our prototype receiver. In the 1960s, analog adaptive interference cancelers were developed that achieve a high degree of cancellation in problems of radio communications and radar. However, analog systems lack the dynamic range, noise performance, and versatility required by radio astronomy. The concept of digital adaptive interference cancellation was introduced in the mid-1960s as a way to reduce unwanted noise in low-frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartments of automobiles. These audio-frequency applications require bandwidths of only a few tens of kilohertz. Only recently has high-speed digital filter technology made high dynamic range adaptive canceling possible in a bandwidth as large as a few megahertz, finally opening the door to application in radio astronomy. We have
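
    The digital adaptive canceling concept is the classic LMS noise-cancellation loop: a reference channel observes the interference alone, and an adaptive filter learns to predict and subtract its coupled version from the primary channel. A minimal sketch with illustrative signals, filter length, and step size follows.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, taps, mu = 20_000, 16, 0.005
    signal = 0.1 * rng.normal(size=n)                   # weak "astronomy" signal
    rfi_src = np.sin(0.2 * np.arange(n)) * (1 + 0.3 * rng.normal(size=n))
    primary = signal + np.convolve(rfi_src, [0.8, 0.3, 0.1], "same")  # coupled RFI
    reference = rfi_src                                 # RFI-only reference channel

    w = np.zeros(taps)
    out = np.zeros(n)
    for i in range(taps - 1, n):
        x = reference[i - taps + 1:i + 1][::-1]        # filter input vector
        y = w @ x                                      # interference estimate
        out[i] = primary[i] - y                        # cleaned output
        w += 2 * mu * out[i] * x                       # LMS weight update
    print("power before/after cancellation:",
          np.var(primary[taps:]), np.var(out[taps:]))
    ```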

  17. An Integrated Systems Approach to Designing Climate Change Adaptation Policy in Water Resources

    NASA Astrophysics Data System (ADS)

    Ryu, D.; Malano, H. M.; Davidson, B.; George, B.

    2014-12-01

    Climate change projections are characterised by large uncertainties with rainfall variability being the key challenge in designing adaptation policies. Climate change adaptation in water resources shows all the typical characteristics of 'wicked' problems typified by cognitive uncertainty as new scientific knowledge becomes available, problem instability, knowledge imperfection and strategic uncertainty due to institutional changes that inevitably occur over time. Planning that is characterised by uncertainties and instability requires an approach that can accommodate flexibility and adaptive capacity for decision-making. An ability to take corrective measures in the event that scenarios and responses envisaged initially derive into forms at some future stage. We present an integrated-multidisciplinary and comprehensive framework designed to interface and inform science and decision making in the formulation of water resource management strategies to deal with climate change in the Musi Catchment of Andhra Pradesh, India. At the core of this framework is a dialogue between stakeholders, decision makers and scientists to define a set of plausible responses to an ensemble of climate change scenarios derived from global climate modelling. The modelling framework used to evaluate the resulting combination of climate scenarios and adaptation responses includes the surface and groundwater assessment models (SWAT & MODFLOW) and the water allocation modelling (REALM) to determine the water security of each adaptation strategy. Three climate scenarios extracted from downscaled climate models were selected for evaluation together with four agreed responses—changing cropping patterns, increasing watershed development, changing the volume of groundwater extraction and improving irrigation efficiency. Water security in this context is represented by the combination of level of water availability and its associated security of supply for three economic activities (agriculture

  18. A problem-oriented approach to understanding adaptation: lessons learnt from Alpine Shire, Victoria Australia.

    NASA Astrophysics Data System (ADS)

    Roman, Carolina

    2010-05-01

    Climate change is gaining attention as a significant strategic issue for localities that rely on their business sectors for economic viability. For businesses in the tourism sector, considerable research effort has sought to characterise the vulnerability to the likely impacts of future climate change through scenarios or 'end-point' approaches (Kelly & Adger, 2000). Whilst useful, there are few demonstrable case studies that complement such work with a 'start-point' approach that seeks to explore contextual vulnerability (O'Brien et al., 2007). This broader approach is inclusive of climate change as a process operating within a biophysical system and allows recognition of the complex interactions that occur in the coupled human-environmental system. A problem-oriented and interdisciplinary approach was employed at Alpine Shire, in northeast Victoria, Australia, to explore the concept of contextual vulnerability and adaptability to stressors that include, but are not limited to, climatic change. Using a policy sciences approach, the objective was to identify factors that influence existing vulnerabilities and that might consequently act as barriers to effective adaptation for the Shire's business community involved in the tourism sector. Analyses of results suggest that many threats, including the effects of climate change, compete for the resources, strategy and direction of local tourism management bodies. Further analysis of conditioning factors revealed that many complex and interacting factors define the vulnerability and adaptive capacity of the Shire's tourism sector to the challenges of global change, which collectively have more immediate implications for policy and planning than long-term future climate change scenarios. An approximation of the common interest, i.e. enhancing capacity in business acumen amongst tourism operators, would facilitate adaptability and sustainability through the enhancement of social capital in this business community. Kelly, P

  19. Hyperspectral Remote Sensing of the Coastal Ocean: Adaptive Sampling and Forecasting of In situ Optical Properties

    DTIC Science & Technology

    2003-09-30

    We are developing an integrated rapid environmental assessment capability that will be used to feed an ocean nowcast/forecast system. The goal is to develop a capacity for predicting the dynamics in inherent optical properties in coastal waters. This is being accomplished by developing an integrated observation system that is being coupled to a data assimilative hydrodynamic bio-optical ecosystem model. The system was used adaptively to calibrate hyperspectral remote sensing sensors in optically complex nearshore coastal waters.

  20. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one.
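
    For concreteness, here is a toy two-stage example of the weighted (inverse-normal) statistic and the dual test, with assumed stage-wise z-values. The key point is that the weights are fixed from the planned stage sizes, so they are not recomputed if the second-stage sample size is changed at interim; this is what preserves the type-I error rate.

    ```python
    import numpy as np
    from scipy.stats import norm

    z1, z2 = 1.2, 2.1           # stage-wise z-statistics (toy values)
    n1, n2 = 100, 150           # planned stage sizes, used to fix the weights
    w1, w2 = np.sqrt(n1 / (n1 + n2)), np.sqrt(n2 / (n1 + n2))

    z_weighted = w1 * z1 + w2 * z2          # inverse-normal combination
    # pooled (likelihood-ratio-type) statistic from the actual stage sizes;
    # it coincides with z_weighted here only because actual = planned sizes
    z_pooled = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)

    crit = norm.ppf(0.975)                  # unadjusted two-sided 5% critical value
    print("weighted test rejects:", z_weighted > crit)
    print("dual test rejects:", (z_weighted > crit) and (z_pooled > crit))
    ```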

  1. An effective approach for obtaining optimal sampling windows for population pharmacokinetic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2009-01-01

    This paper describes an effective approach for optimizing sampling windows for population pharmacokinetic experiments. Sampling windows have been proposed for population pharmacokinetic experiments conducted in late-phase drug development programs, where patients are enrolled in many centers and in out-patient clinic settings. Collection of samples in this uncontrolled environment at fixed times may be problematic and can result in uninformative data. A sampling windows approach is more practicable, as it provides the opportunity to control when samples are collected by allowing some flexibility while still providing satisfactory parameter estimation. This approach uses D-optimality to specify time intervals around fixed D-optimal time points that result in a specified level of efficiency. The sampling windows have different lengths and achieve two objectives: the joint sampling windows design attains a high specified efficiency level and also reflects the sensitivities of the plasma concentration-time profile to the parameters. It is shown that optimal sampling windows obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in terms of when samples are collected.

  2. An algorithmic approach to adaptive state filtering using recurrent neural networks.

    PubMed

    Parlos, A G; Menon, S K; Atiya, A

    2001-01-01

    Practical algorithms are presented for adaptive state filtering in nonlinear dynamic systems when the state equations are unknown. The state equations are constructively approximated using neural networks. The algorithms presented are based on the two-step prediction-update approach of the Kalman filter. The proposed algorithms make minimal assumptions regarding the underlying nonlinear dynamics and their noise statistics. Non-adaptive and adaptive state filtering algorithms are presented, with both off-line and online learning stages. The algorithms are implemented using feedforward and recurrent neural networks, and comparisons are presented. Furthermore, extended Kalman filters (EKFs) are developed and compared to the proposed filter algorithms. For one of the case studies, the EKF converges but results in higher state estimation errors than the equivalent neural filters. For another, more complex case study with unknown system dynamics and noise statistics, the developed EKFs do not converge. The off-line trained neural state filters converge quite rapidly and exhibit acceptable performance. Online training further enhances the estimation accuracy of the developed adaptive filters, effectively decoupling the eventual filter accuracy from the accuracy of the process model.
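
    The two-step structure can be written down compactly. In the sketch below, `f_net` is a placeholder for the trained neural dynamics model and a fixed scalar gain replaces the filter's computed gain; the offline/online training stages are omitted entirely.

    ```python
    import numpy as np

    def f_net(x):
        """Placeholder for the trained feedforward/recurrent dynamics model."""
        return 0.9 * x + 0.1 * np.tanh(x)

    def neural_filter(measurements, gain=0.6, x0=0.0):
        x, estimates = x0, []
        for z in measurements:
            x_pred = f_net(x)                 # step 1: predict via learned model
            x = x_pred + gain * (z - x_pred)  # step 2: correct with innovation
            estimates.append(x)
        return estimates

    z = 1.0 + 0.1 * np.random.default_rng(3).normal(size=20)
    print("final state estimate:", neural_filter(z)[-1])
    ```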

  3. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with reproducing kernel structures adapted to the metrics of that solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is illustrated via numerical simulations.

  4. Ebola Virus Altered Innate and Adaptive Immune Response Signalling Pathways: Implications for Novel Therapeutic Approaches.

    PubMed

    Kumar, Anoop

    2016-01-01

    Ebola virus (EBOV) attracts attention for its impressive lethality, which stems from the poor immune response and the high inflammatory reaction it provokes in patients. It causes a severe hemorrhagic fever with case fatality rates of up to 90%. The mechanism underlying this lethal outcome is poorly understood. In 2014, a major outbreak of Ebola virus spread among several African countries, including Sierra Leone and Guinea. Although infections occur frequently only in Central Africa, the virus has the potential to spread globally. Presently, no vaccine or treatment is available to counteract Ebola virus infections, owing to a poor understanding of its interaction with the immune system. Accumulating evidence indicates that the virus actively alters both innate and adaptive immune responses and triggers harmful inflammatory responses. Some reports in the literature have shown that the alteration of immune signaling pathways could be due to the ability of EBOV to interfere with dendritic cells (DCs), which link innate and adaptive immune responses. Other reports have demonstrated that the EBOV VP35 protein acts as an interferon antagonist. How the Ebola virus alters the innate and adaptive immune response signaling pathways thus remains an open question for researchers to explore. In this review, I summarize the mechanisms by which Ebola virus alters innate and adaptive immune response signaling pathways, which will be helpful for designing effective drugs or vaccines against this lethal infection. Potential targets, current treatments and novel therapeutic approaches are also discussed.

  5. An evaluation of temporally adaptive transformation approaches for solving Richards' equation

    NASA Astrophysics Data System (ADS)

    Williams, Glenn A.; Miller, Cass T.

    Developing robust and efficient numerical solution methods for Richards' equation (RE) continues to be a challenge for certain problems. We consider such a problem here: infiltration into unsaturated porous media initially at static conditions for uniform and non-uniform pore size media. For ponded boundary conditions, a sharp infiltration front results, which propagates through the media. We evaluate the resultant solution method for robustness and efficiency using combinations of variable transformation and adaptive time-stepping methods. Transformation methods introduce a change of variable that results in a smoother solution, which is more amenable to efficient numerical solution. We use adaptive time-stepping methods to adjust the time-step size, and in some cases the order of the solution method, to meet a constraint on nonlinear solution convergence properties or a solution error criterion. Results for three test problems showed that adaptive time-stepping methods provided robust solutions; in most cases transforming the dependent variable led to more efficient solutions than untransformed approaches, especially as the pore-size uniformity increased; and the higher-order adaptive time integration method was robust and the most efficient method evaluated.
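
    A common form of such adaptive time-stepping keys the step size to the nonlinear solver's convergence behavior. The sketch below applies this heuristic to a stiff scalar ODE with implicit Euler; the RE solver applies the same control to a spatially discretized PDE, optionally in a transformed variable, and all thresholds here are illustrative.

    ```python
    import numpy as np

    def newton_step(x_old, dt, f, dfdx, tol=1e-10, itmax=12):
        """One implicit-Euler step solved by Newton; returns (x, iters, ok)."""
        x = x_old
        for it in range(1, itmax + 1):
            r = x - x_old - dt * f(x)              # implicit Euler residual
            if abs(r) < tol:
                return x, it, True
            x -= r / (1.0 - dt * dfdx(x))          # Newton update
        return x, itmax, False

    f = lambda x: -50.0 * (x - np.cos(x))          # stiff scalar test problem
    dfdx = lambda x: -50.0 * (1 + np.sin(x))
    t, x, dt = 0.0, 0.0, 1e-3
    while t < 1.0:
        x_new, iters, ok = newton_step(x, dt, f, dfdx)
        if ok and iters <= 4:
            t, x, dt = t + dt, x_new, min(dt * 1.5, 0.1)  # easy step: grow dt
        elif ok:
            t, x = t + dt, x_new                          # accept, keep dt
        else:
            dt *= 0.5                                     # failure: retry smaller
    print("x(1) ~", x)   # approaches the fixed point of x = cos(x), ~0.739
    ```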

  6. Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1996-01-01

    A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: A linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures is briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.

  7. Rethinking growth and decay kinetics in activated sludge - towards a new adaptive kinetics approach.

    PubMed

    Friedrich, Michael; Jimenez, Jose; Pruden, Amy; Miller, Jennifer H; Metch, Jacob; Takács, Imre

    2017-02-01

    Growth kinetics in activated sludge modelling (ASM) are typically assumed to be the result of intrinsic growth and decay properties and thus process parameters are deemed to be constant. The activity change in a microbial population is expressed in terms of variance of the active biomass fraction and not actual shifts in bacterial cellular activities. This approach is limited, in that it does not recognise the reality that active biomass is highly physiologically adaptive. Here, a strong correlation between maximum specific growth rate (μmax) and decay rate (be) of ordinary heterotrophic organisms was revealed in both low solids retention times (SRT) and high SRT activated sludge systems. This relationship is indicative of physiological adaptation either for growth (high μmax and be) or survival optimization (low μmax and be). Further, the nitrifier decay process was investigated using molecular techniques to measure decay rates of ammonia oxidizing bacteria and nitrite oxidizing bacteria over a range of temperatures. This approach revealed decay rates 10-12% lower than values previously accepted and used in ASM. These findings highlight potential benefits of incorporating physiological adaptation of heterotrophic and nitrifying populations in future ASM.

  8. Overcoming the Curse of Dimension: Methods Based on Sparse Representation and Adaptive Sampling

    DTIC Science & Technology

    2011-02-28

    carried out mainly by him, together with our joint post-doc Haijun Yu. Please refer to his report for the progress made in this direction. ... "...multiscale modeling using sparse representation", Comm. Comp. Phys., 4(5), pp. 1025-1033 (2008). [3] X. Zhou, W. Ren and W. E, "Adaptive minimum action method for the study of rare events", J. Chem. Phys., 128, 10, 2008. [4] X. Wan, X. Zhou and W. E, "Noise-induced transitions in the Kuramoto-Sivashinsky equation", preprint, submitted.

  9. A Generalized Approach to the Two Sample Problem: The Quantile Approach.

    DTIC Science & Technology

    1981-04-01

    Tests for the Two Sample Problem and Their Power," I, II, III, Indagationes Math., 14, 453-458; 15, 303-310; 15, 80. Wald, A. and Wolfowitz, J. (1940) ... where 0 < p < q < 1, or use an inner product based on the censored observations. Other directions to go include the Wald and Wolfowitz (1940) runs

  10. Shape adaptation of long bone structures using a contour based approach.

    PubMed

    Roberts, M D; Hart, R T

    2005-06-01

    In this work, an approach for mechanically driven shape adaptation of long bone structures is presented which utilizes contour descriptions to track morphological changes at different bone cross sections. A script-based procedure is used to iteratively generate a solid geometry and finite element (FE) model from these contours, perform a stress analysis, and then update the contour shapes from the stress results according to a prescribed remodeling rule. Because a remeshing operation is performed at each timestep, the method is able to effectively simulate large changes in geometry. Several examples of shape adaptation of idealized and geometrically accurate long-bone structures are presented using a variety of remodeling signals and parameters.
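
    The contour-update step can be illustrated schematically: each contour node moves along its normal at a rate set by the difference between a local mechanical stimulus and a reference value. In this sketch the FE stress analysis is replaced by a dummy stimulus function, the normals are approximated radially for a star-shaped contour, and the remodeling rate `B` and reference `S_ref` are illustrative.

    ```python
    import numpy as np

    def stimulus(pts):
        """Stand-in for the per-node stimulus extracted from the FE solution."""
        return 1.0 + 0.2 * np.sin(3 * np.arctan2(pts[:, 1], pts[:, 0]))

    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    pts = np.c_[np.cos(theta), np.sin(theta)]       # initial circular contour
    B, S_ref, dt = 0.5, 1.0, 0.1
    for step in range(20):                          # remodeling timesteps
        # radial approximation of the outward contour normals
        normals = pts / np.linalg.norm(pts, axis=1, keepdims=True)
        # remodeling rule: move nodes where stimulus deviates from reference
        pts = pts + dt * B * (stimulus(pts) - S_ref)[:, None] * normals
    r = np.linalg.norm(pts, axis=1)
    print("radius range after remodeling:", r.min(), r.max())
    ```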

  11. Systematic analysis of the kalimantacin assembly line NRPS module using an adapted targeted mutagenesis approach.

    PubMed

    Uytterhoeven, Birgit; Appermans, Kenny; Song, Lijiang; Masschelein, Joleen; Lathouwers, Thomas; Michiels, Chris W; Lavigne, Rob

    2016-04-01

    Kalimantacin is an antimicrobial compound with strong antistaphylococcal activity that is produced by a hybrid trans-acyltransferase polyketide synthase/nonribosomal peptide synthetase system in Pseudomonas fluorescens BCCM_ID9359. Here we present a systematic analysis of the substrate specificity of the glycine-incorporating adenylation domain from the kalimantacin biosynthetic assembly line by a targeted mutagenesis approach. The specificity-conferring code was adapted for use in Pseudomonas, and mutated adenylation domain active site sequences were introduced in the kalimantacin gene cluster using a newly adapted ligation-independent cloning method. Antimicrobial activity screens and LC-MS analyses revealed that the production of the kalimantacin analogues in the mutated strains was abolished. These results support the idea that further insight into the specificity of downstream domains in nonribosomal peptide synthetases and polyketide synthases is required to efficiently engineer these strains in vivo.

  12. An adaptive singular spectrum analysis approach to murmur detection from heart sounds.

    PubMed

    Sanei, Saeid; Ghodsi, Mansoureh; Hassani, Hossein

    2011-04-01

    Murmur is the result of various heart abnormalities. A new robust approach for separation of murmur from heart sound is suggested in this article. Singular spectrum analysis (SSA) has been adapted to the changes in the statistical properties of the data and effectively used for detection of murmur from single-channel heart sound (HS) signals. Incorporating carefully selected a priori information within the SSA reconstruction process results in an accurate separation of normal HS from the murmur segment. Another contribution of this work is the automatic selection of the correct subspace of the desired signal component. In addition, the subspace size can be identified iteratively. A number of HS signals with murmur have been processed using the proposed adaptive SSA (ASSA) technique and the results have been quantified both objectively and subjectively.
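
    Basic SSA, on which the adaptive variant builds, is compact enough to sketch: embed the series into a Hankel trajectory matrix, decompose by SVD, and reconstruct from a chosen set of eigentriples by diagonal averaging. The paper's contribution, automatic and iterative subspace selection, is replaced here by a fixed components argument:

    ```python
    import numpy as np

    def ssa_reconstruct(x, L, components):
        """Basic SSA reconstruction from the eigentriples listed in 'components'."""
        N = len(x)
        K = N - L + 1
        X = np.column_stack([x[i:i+L] for i in range(K)])   # trajectory (Hankel) matrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xr = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
        # Diagonal averaging (Hankelization) back to a 1-D series.
        y = np.zeros(N); counts = np.zeros(N)
        for i in range(L):
            for j in range(K):
                y[i + j] += Xr[i, j]; counts[i + j] += 1
        return y / counts

    t = np.linspace(0, 1, 400)
    hs = np.sin(2*np.pi*25*t) + 0.3*np.random.randn(400)    # toy heart-sound-like series
    dominant = ssa_reconstruct(hs, L=60, components=[0, 1]) # leading subspace of the signal
    ```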

  13. Adaptive eLearning modules for cytopathology education: A review and approach.

    PubMed

    Samulski, T Danielle; La, Teresa; Wu, Roseann I

    2016-11-01

    Clinical training imposes time and resource constraints on educators and learners, making it difficult to provide and absorb meaningful instruction. Additionally, innovative and personalized education has become an expectation of adult learners. Fortunately, the development of web-based educational tools provides a possible solution to these challenges. Within this review, we introduce the utility of adaptive eLearning platforms in pathology education. In addition to a review of the current literature, we provide the reader with a suggested approach for module creation, as well as a critical assessment of an available platform, based on our experience in creating adaptive eLearning modules for teaching basic concepts in gynecologic cytopathology.

  14. Mobile membrane introduction tandem mass spectrometry for on-the-fly measurements and adaptive sampling of VOCs around oil and gas projects in Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Krogh, E.; Gill, C.; Bell, R.; Davey, N.; Martinsen, M.; Thompson, A.; Simpson, I. J.; Blake, D. R.

    2012-12-01

    The release of hydrocarbons into the environment can have significant environmental and economic consequences. The migration of smaller, more portable mass spectrometers to the field can provide spatially and temporally resolved information for rapid detection, adaptive sampling and decision support. We have deployed a mobile-platform membrane introduction mass spectrometer (MIMS) for the in-field simultaneous measurement of volatile and semi-volatile organic compounds. In this work, we report instrument and data handling advances that produce geographically referenced data in real time, and preliminary data where these improvements have been combined with high-precision ultra-trace VOC analysis to adaptively sample air plumes near oil and gas operations in Alberta, Canada. We have modified a commercially available ion-trap mass spectrometer (Griffin ICX 400) with an in-house temperature-controlled capillary hollow fibre polydimethylsiloxane (PDMS) polymer membrane interface and an in-line permeation tube flow cell for a continuously infused internal standard. The system is powered by 24 VDC for remote operation in a moving vehicle. Software modifications include the ability to run continuous, interlaced tandem mass spectrometry (MS/MS) experiments for multiple contaminants/internal standards. All data are time- and location-stamped with on-board GPS and meteorological data to facilitate spatial and temporal data mapping. Tandem MS/MS scans were employed to simultaneously monitor ten volatile and semi-volatile analytes, including benzene, toluene, ethylbenzene and xylene (BTEX), reduced sulfur compounds, halogenated organics and naphthalene. Quantification was achieved by calibrating against a continuously infused deuterated internal standard (toluene-d8). Time-referenced MS/MS data were correlated with positional data and processed using Labview and Matlab to produce calibrated, geographical Google Earth data-visualizations that enable adaptive sampling protocols.
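
    The internal-standard quantification step reduces to a signal ratio. A hypothetical sketch (the field names and unit response factor are illustrative, and membrane-response corrections are omitted):

    ```python
    # Hypothetical internal-standard quantification with GPS stamping; a real
    # MIMS pipeline would also apply interface and calibration corrections.
    def quantify(analyte_signal, istd_signal, istd_conc, response_factor=1.0):
        """Concentration from the analyte/internal-standard signal ratio."""
        return response_factor * (analyte_signal / istd_signal) * istd_conc

    record = {
        "time": "2012-08-14T10:32:05Z",                 # instrument clock
        "lat": 53.52, "lon": -113.51,                   # on-board GPS fix
        "benzene": quantify(1.8e4, 2.4e4, istd_conc=5.0),  # vs. toluene-d8 ISTD
    }
    ```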

  15. Automatic Training Sample Selection for a Multi-Evidence Based Crop Classification Approach

    NASA Astrophysics Data System (ADS)

    Chellasamy, M.; Ferre, P. A. Ty; Humlekrog Greve, M.

    2014-09-01

    An approach to use the available agricultural parcel information to automatically select training samples for crop classification is investigated. Previous research addressed the multi-evidence crop classification approach using an ensemble classifier. This first produced confidence measures using three Multi-Layer Perceptron (MLP) neural networks trained separately with spectral, texture and vegetation indices; classification labels were then assigned based on Endorsement Theory. The present study proposes an approach to feed this ensemble classifier with automatically selected training samples. The available vector data representing crop boundaries with corresponding crop codes are used as a source of training samples. These vector data are created by farmers to support subsidy claims and are, therefore, prone to errors such as mislabeling of crop codes and boundary digitization errors. The proposed approach is named ECRA (Ensemble based Cluster Refinement Approach). ECRA first automatically removes mislabeled samples and then selects the refined training samples in an iterative training-reclassification scheme. Mislabel removal is based on the expectation that mislabels in each class will be far from the cluster centroid. However, this must be a soft constraint, especially when working with a hypothesis space that does not contain a good approximation of the target classes. Difficulty in finding a good approximation often exists either due to less informative data or a large hypothesis space. This approach therefore uses the spectral, texture and indices domains in an ensemble framework to iteratively remove the mislabeled pixels from the crop clusters declared by the farmers. Once the clusters are refined, the selected border samples are used for final learning and the unknown samples are classified using the multi-evidence approach. The study is implemented with WorldView-2 multispectral imagery acquired for a study area containing 10 crop classes. The proposed
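
    The centroid-distance mislabel filter at the heart of such refinement can be sketched as follows; ECRA applies a softer, iterative, multi-domain version of this hard cutoff:

    ```python
    import numpy as np

    # Per-class centroid-distance filter: drop the samples farthest from their
    # declared class centre, on the expectation that mislabels lie far out.
    def remove_mislabels(X, y, keep_fraction=0.9):
        keep = np.zeros(len(y), dtype=bool)
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            centroid = X[idx].mean(axis=0)
            d = np.linalg.norm(X[idx] - centroid, axis=1)
            cutoff = np.quantile(d, keep_fraction)   # retain the nearest 90%
            keep[idx[d <= cutoff]] = True
        return X[keep], y[keep]
    ```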

  16. Random Transect with Adaptive Clustering Sampling Design - ArcPad Applet Manual

    DTIC Science & Technology

    2011-09-01

    Developed for ArcPad®, a mobile geographic information system (GIS) software for field applications from ESRI® of Redlands, CA. ArcPad is designed to...occurrence maps (Rew et al. 2005) to guide future surveying and management efforts. The RTAC combines features of the two NIS sampling designs described

  17. Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.

    PubMed

    Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A

    2015-10-26

    Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange.
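
    Independent of the sampling engine, the TI step itself is a one-dimensional quadrature, ΔG = ∫₀¹ ⟨∂U/∂λ⟩ dλ. A minimal sketch, with fabricated window averages standing in for sesMD or IT-TI output:

    ```python
    import numpy as np

    # Thermodynamic integration over the coupling parameter lambda; the
    # per-window averages below are illustrative numbers, not real data.
    lam = np.linspace(0.0, 1.0, 11)                  # lambda windows
    dU_dlam = np.array([12.1, 10.8, 9.2, 7.5, 5.9,   # <dU/dlambda> per window
                        4.4, 3.0, 1.9, 1.1, 0.6, 0.4])
    delta_G = np.trapz(dU_dlam, lam)                 # trapezoidal quadrature
    print(f"dG = {delta_G:.2f} kcal/mol")
    ```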

  18. Self-Learning Adaptive Umbrella Sampling Method for the Determination of Free Energy Landscapes in Multiple Dimensions.

    PubMed

    Wojtas-Niziurski, Wojciech; Meng, Yilin; Roux, Benoit; Bernèche, Simon

    2013-04-09

    The potential of mean force describing conformational changes of biomolecules is a central quantity that determines the function of biomolecular systems. Calculating an energy landscape of a process that depends on three or more reaction coordinates might require substantial computational power, making some multidimensional calculations practically impossible. Here, we present an efficient automated umbrella sampling strategy for calculating multidimensional potentials of mean force. The method progressively learns by itself, through a feedback mechanism, which regions of a multidimensional space are worth exploring and automatically generates a set of umbrella sampling windows that is adapted to the system. The self-learning adaptive umbrella sampling method is first explained with illustrative examples based on simplified reduced model systems, and then applied to two non-trivial situations: the conformational equilibrium of the pentapeptide Met-enkephalin in solution and ion permeation in the KcsA potassium channel. With this method, it is demonstrated that a significantly smaller number of umbrella windows needs to be employed to characterize the free energy landscape over the most relevant regions without any loss in accuracy.
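
    The feedback loop can be caricatured as a frontier search over window centres: new umbrella windows are added only where the running free-energy estimate stays below a cutoff, so irrelevant regions are never sampled. In the sketch below, neighbors and estimate_fes are hypothetical hooks into the window grid and a WHAM-style analysis:

    ```python
    # Frontier-growing window placement in the spirit of self-learning adaptive
    # umbrella sampling; the two callbacks are hypothetical stand-ins.
    def grow_windows(seed, neighbors, estimate_fes, cutoff, max_rounds=50):
        """seed: initial window centre; neighbors(c): adjacent candidate centres;
        estimate_fes(w): current free-energy estimate at candidate centre w."""
        windows, frontier = {seed}, [seed]
        for _ in range(max_rounds):
            new = [w for c in frontier for w in neighbors(c)
                   if w not in windows and estimate_fes(w) < cutoff]
            if not new:
                break
            windows.update(new)     # sample the new windows, update the estimate
            frontier = new
        return windows
    ```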

  19. Approach for Structurally Clearing an Adaptive Compliant Trailing Edge Flap for Flight

    NASA Technical Reports Server (NTRS)

    Miller, Eric J.; Lokos, William A.; Cruz, Josue; Crampton, Glen; Stephens, Craig A.; Kota, Sridhar; Ervin, Gregory; Flick, Pete

    2015-01-01

    The Adaptive Compliant Trailing Edge (ACTE) flap was flown on the NASA Gulfstream GIII test bed at the NASA Armstrong Flight Research Center. This smoothly curving flap replaced the existing Fowler flaps, creating a seamless control surface. This compliant structure, developed by FlexSys Inc. in partnership with the Air Force Research Laboratory, supported NASA objectives for airframe structural noise reduction, aerodynamic efficiency, and wing weight reduction through gust load alleviation. A thorough structural airworthiness approach was developed to move this project safely to flight.

  20. An Adaptive Nonlinear Aircraft Maneuvering Envelope Estimation Approach for Online Applications

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Lombaerts, Thomas Jan; Acosta, Diana; Wheeler, Kevin; Kaneshige, John

    2014-01-01

    A nonlinear aircraft model is presented and used to develop an overall unified robust and adaptive approach to passive trim and maneuverability envelope estimation with uncertainty quantification. The concept of time scale separation makes this method suitable for the online characterization of altered safe maneuvering limitations after impairment. The results can be used to provide pilot feedback and/or be combined with flight planning, trajectory generation, and guidance algorithms to help maintain safe aircraft operations in both nominal and off-nominal scenarios.

  1. Stable Direct Adaptive Control of Linear Infinite-dimensional Systems Using a Command Generator Tracker Approach

    NASA Technical Reports Server (NTRS)

    Balas, M. J.; Kaufman, H.; Wen, J.

    1985-01-01

    A command generator tracker approach to model-following control of linear distributed parameter systems (DPS) whose dynamics are described on infinite-dimensional Hilbert spaces is presented. This method generates finite-dimensional controllers capable of exponentially stable tracking of the reference trajectories when certain ideal trajectories are known to exist for the open-loop DPS; we present conditions for the existence of these ideal trajectories. An adaptive version of this type of controller is also presented and shown to achieve (in some cases, asymptotically) stable finite-dimensional control of the infinite-dimensional DPS.

  2. Controlling aliased dynamics in motion systems? An identification for sampled-data control approach

    NASA Astrophysics Data System (ADS)

    Oomen, Tom

    2014-07-01

    Sampled-data control systems occasionally exhibit aliased resonance phenomena within the control bandwidth. The aim of this paper is to investigate the role of these aliased dynamics, with application to a high-performance industrial nano-positioning machine. This necessitates a full sampled-data control design approach, since these aliased dynamics endanger both the at-sample performance and the intersample behaviour. The proposed framework comprises both system identification and sampled-data control. In particular, the sampled-data control objective necessitates models that encompass the intersample behaviour, i.e., ideally continuous-time models. Application of the proposed approach to an industrial wafer stage system provides thorough insight and new control design guidelines for controlling aliased dynamics.
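
    The bookkeeping behind an aliased resonance is a one-liner: a tone above the Nyquist frequency reappears folded back into the baseband. A quick check of where a resonance lands, with illustrative numbers:

    ```python
    # Apparent (folded) frequency of a tone f [Hz] sampled at rate fs [Hz].
    def aliased_frequency(f, fs):
        return abs(f - fs * round(f / fs))

    # A 1800 Hz resonance sampled at 1 kHz shows up at 200 Hz, well inside
    # a typical control bandwidth for such a system.
    print(aliased_frequency(f=1800.0, fs=1000.0))   # -> 200.0
    ```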

  3. Acquiring Peak Samples from Phytoplankton Thin Layers and Intermediate Nepheloid Layers by an Autonomous Underwater Vehicle with Adaptive Triggering

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; McEwen, R.; Ryan, J. P.; Bellingham, J. G.; Harvey, J.; Vrijenhoek, R.

    2010-12-01

    Phytoplankton thin layers (PTLs) affect many fundamental aspects of coastal ocean ecology including primary productivity, development of harmful algal blooms (HABs) and the survival and growth of zooplankton and fish larvae. Intermediate nepheloid layers (INLs) that contain suspended particulate matter transported from the bottom boundary layer of continental shelves and slopes also affect biogeochemistry and ecology of ocean margins. To better understand the impacts of these types of layers, we have developed an adaptive sampling method for an autonomous underwater vehicle (AUV) to detect a layer (adjusting detection parameters in situ), acquire water samples from peaks in the layer, and acquire control samples outside the layer. We have used the method in a number of field experiments with the AUV Dorado, which is equipped with ten water samplers (called "gulpers"). In real time, the algorithm tracks background levels of fluorescence and optical backscatter and the peaks' baseline to ensure that detection is tuned to the ambient conditions. The algorithm cross-checks fluorescence and backscatter signals to differentiate PTLs from INLs. To capture peak water samples with minimal delay, the algorithm exploits the AUV's sawtooth (i.e., yo-yo) trajectory: the vehicle crosses the detected layer twice in one yo-yo cycle. At the first crossing, it detects the layer's peak and saves its signal height. Sampling is triggered at the second crossing when the signal reaches the saved peak height, provided additional timing and depth conditions are met. The algorithm is also capable of triggering gulpers to acquire control samples outside the layer for comparison with ambient water. The sequence of peak and control samples can be set based on need. In recent AUV Dorado missions, the algorithm triggered the gulpers to acquire peak and control samples from INLs and PTLs in Monterey Bay. Zooplankton analysis of some peak samples showed very high concentrations of mussel and barnacle
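
    Stripped of background tracking and the timing/depth checks, the two-crossing trigger reduces to remembering a peak and re-matching it. A minimal sketch:

    ```python
    # Two-crossing trigger: on the first (e.g. descending) layer crossing of a
    # yo-yo cycle, record the peak signal; on the second crossing, fire the
    # sampler once the signal climbs back to the saved height. Simplified from
    # the mission algorithm described above.
    class PeakTrigger:
        def __init__(self):
            self.saved_peak = None

        def first_crossing(self, samples):
            """samples: signal values recorded while traversing the layer."""
            self.saved_peak = max(samples)

        def second_crossing(self, value):
            """Return True when the gulper should be triggered."""
            return self.saved_peak is not None and value >= self.saved_peak
    ```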

  4. Sample preparation with solid phase microextraction and exhaustive extraction approaches: Comparison for challenging cases.

    PubMed

    Boyacı, Ezel; Rodríguez-Lafuente, Ángel; Gorynski, Krzysztof; Mirnaghi, Fatemeh; Souza-Silva, Érica A; Hein, Dietmar; Pawliszyn, Janusz

    2015-05-11

    In chemical analysis, sample preparation is frequently considered the bottleneck of the entire analytical method. The success of the final method strongly depends on understanding the entire process of analysis of a particular type of analyte in a sample, namely: the physicochemical properties of the analytes (solubility, volatility, polarity, etc.), the environmental conditions, and the matrix components of the sample. Various sample preparation strategies have been developed based on exhaustive or non-exhaustive extraction of analytes from matrices. Undoubtedly, amongst all sample preparation approaches, liquid-liquid extraction (LLE) and solid phase extraction (SPE) are the most well-known, widely used, and commonly accepted methods by many international organizations and accredited laboratories. Both methods are well documented and there are many well-defined procedures, which make them, at first sight, the methods of choice. However, many challenging tasks, such as complex matrix applications, on-site and in vivo applications, and determination of matrix-bound and free concentrations of analytes, are not easily attainable with these classical approaches to sample preparation. In the last two decades, the introduction of solid phase microextraction (SPME) has brought significant progress in the sample preparation area by facilitating on-site and in vivo applications, time weighted average (TWA) and instantaneous concentration determinations. Recently introduced matrix-compatible coatings for SPME facilitate direct extraction from complex matrices and fill the gap in direct sampling from challenging matrices. Following the introduction of SPME, numerous other microextraction approaches evolved to address limitations of the above mentioned techniques. There is not a single method that can be considered a universal solution for sample preparation. This review aims to show the main advantages and limitations of the above mentioned sample

  5. Reliability and Validity of the Spanish Adaptation of EOSS, Comparing Normal and Clinical Samples

    ERIC Educational Resources Information Center

    Valero-Aguayo, Luis; Ferro-Garcia, Rafael; Lopez-Bermudez, Miguel Angel; de Huralde, Ma. Angeles Selva-Lopez

    2012-01-01

    The Experiencing of Self Scale (EOSS) was created for the evaluation of Functional Analytic Psychotherapy (Kohlenberg & Tsai, 1991, 2001, 2008) in relation to the concept of the experience of personal self as socially and verbally constructed. This paper presents a reliability and validity study of the EOSS with a Spanish sample (582…

  6. An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.

    PubMed

    Fout, N; Ma, Kwan-Liu

    2012-12-01

    In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
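
    The switched-prediction idea is easy to sketch: each sample is guessed by every predictor in a preset bank, and only the winning predictor's index and its residual need to be encoded. The three-predictor bank below is illustrative and much simpler than the APE/ACE polynomial families:

    ```python
    import numpy as np

    # Illustrative predictor bank over the last three samples.
    predictors = [
        lambda p: p[-1],                      # order-0: repeat previous value
        lambda p: 2*p[-1] - p[-2],            # order-1 linear extrapolation
        lambda p: 3*p[-1] - 3*p[-2] + p[-3],  # order-2 polynomial extrapolation
    ]

    def encode(x):
        """Return (predictor id, residual) pairs; a decoder with the same bank
        can reverse this exactly, which is what makes the scheme lossless."""
        out = []
        for i in range(3, len(x)):
            guesses = [f(x[i-3:i]) for f in predictors]
            k = int(np.argmin([abs(x[i] - g) for g in guesses]))
            out.append((k, x[i] - guesses[k]))
        return out
    ```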

  7. Developing integrated approaches to climate change adaptation in rural communities of the Peruvian Andes

    NASA Astrophysics Data System (ADS)

    Huggel, Christian

    2010-05-01

    Over centuries, Andean communities have developed strategies to cope with climate variability and extremes, such as cold waves or droughts, which can have severe impacts on their welfare. Nevertheless, the rural population, living at altitudes of 3000 to 4000 m asl or even higher, remains highly vulnerable to external stresses, partly because of the extreme living conditions and partly as a consequence of high poverty. Moreover, recent studies indicate that climatic extreme events have increased in frequency in recent years. A Peruvian-Swiss Climate Change Adaptation Programme in Peru (PACC) is currently undertaking strong efforts to understand the links between climatic conditions and local livelihood assets. The goal is to propose viable strategies for adaptation in collaboration with the local population and governments. The programme considers three main areas of action: (i) water resource management; (ii) disaster risk reduction; and (iii) food security. The scientific studies carried out within the programme follow a highly transdisciplinary approach, spanning the whole range from the natural to the social sciences. Moreover, the scientific Peruvian-Swiss collaboration is closely connected to people and institutions operating at the implementation and political level. In this contribution we report on first results of the thematic studies, address critical questions, and outline the potential of integrative research for climate change adaptation in mountain regions in the context of a developing country.

  8. Adaptive Filter-bank Approach to Restoration and Spectral Analysis of Gapped Data

    NASA Astrophysics Data System (ADS)

    Stoica, Petre; Larsson, Erik G.; Li, Jian

    2000-10-01

    The main topic of this paper is the nonparametric estimation of complex (both amplitude and phase) spectra from gapped data, as well as the restoration of such data. The focus is on the extension of the APES (amplitude and phase estimation) approach to data sequences with gaps. APES, which is one of the most successful existing nonparametric approaches to the spectral analysis of full data sequences, uses a bank of narrowband adaptive (both frequency and data dependent) filters to estimate the spectrum. A recent interpretation of this approach showed that the filterbank used by APES and the resulting spectrum minimize a least-squares (LS) fitting criterion between the filtered sequence and its spectral decomposition. The extended approach, which is called GAPES for somewhat obvious reasons, capitalizes on the aforementioned interpretation: it minimizes the APES-LS fitting criterion with respect to the missing data as well. This should be a sensible thing to do whenever the full data sequence is stationary, and hence the missing data have the same spectral content as the available data. We use both simulated and real data examples to show that GAPES estimated spectra and interpolated data sequences have excellent accuracy. We also show the performance gain achieved by GAPES over two of the most commonly used approaches for gapped-data spectral analysis, viz., the periodogram and the parametric CLEAN method. This work was partly supported by the Swedish Foundation for Strategic Research.

  9. Social Daydreaming and Adjustment: An Experience-Sampling Study of Socio-Emotional Adaptation During a Life Transition

    PubMed Central

    Poerio, Giulia L.; Totterdell, Peter; Emerson, Lisa-Marie; Miles, Eleanor

    2016-01-01

    Estimates suggest that up to half of waking life is spent daydreaming; that is, engaged in thought that is independent of, and unrelated to, one's current task. Emerging research indicates that daydreams are predominantly social, suggesting that daydreams may serve socio-emotional functions. Here we explore the functional role of social daydreaming for socio-emotional adjustment during an important and stressful life transition (the transition to university) using experience-sampling with 103 participants over 28 days. Over time, social daydreams increased in their positive characteristics and positive emotional outcomes; specifically, participants reported that their daydreams made them feel more socially connected and less lonely, and that the content of their daydreams became less fanciful and involved higher quality relationships. These characteristics then predicted less loneliness at the end of the study, which, in turn, was associated with greater social adaptation to university. Feelings of connection resulting from social daydreams were also associated with less emotional inertia in participants who reported being less socially adapted to university. Findings indicate that social daydreaming is functional for promoting socio-emotional adjustment to an important life event. We highlight the need to consider the social content of stimulus-independent cognitions, their characteristics, and patterns of change, to specify how social thoughts enable socio-emotional adaptation. PMID:26834685

  10. Where do adaptive shifts occur during invasion A multidisciplinary approach to unravel cold adaptation in a tropical ant species invading the Mediterranean zone

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Although evolution is now recognized as improving the invasive success of populations, where and when key adaptation event(s) occur often remains unclear. Here we used a multidisciplinary approach to disentangle the eco-evolutionary scenario of invasion of a Mediterranean zone (i.e. Israel) by the t...

  11. Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach

    PubMed Central

    Kim, Sungwook

    2014-01-01

    A mobile ad hoc network is a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in hostile, dynamic, real-world network situations. The proposed routing scheme is therefore a powerful method for finding an effective solution to the conflicting objectives of mobile ad hoc network routing. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, over the existing schemes. PMID:25032241
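
    A generic simulated-annealing skeleton of the kind underlying such a routing search, with hypothetical cost and perturb callbacks standing in for the route-quality metric and the multipath perturbation:

    ```python
    import math, random

    # Generic simulated annealing: accept worse candidates with probability
    # exp(-delta/T) so the search can escape local optima, cooling T each step.
    def anneal(route, cost, perturb, T=1.0, alpha=0.95, steps=1000):
        best = current = route
        for _ in range(steps):
            candidate = perturb(current)            # e.g. swap a relay node
            delta = cost(candidate) - cost(current) # e.g. energy/latency mix
            if delta < 0 or random.random() < math.exp(-delta / T):
                current = candidate
                if cost(current) < cost(best):
                    best = current
            T *= alpha                              # cooling schedule
        return best
    ```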

  12. Adaptive modelling of gene regulatory network using Bayesian information criterion-guided sparse regression approach.

    PubMed

    Shi, Ming; Shen, Weiming; Wang, Hong-Qiang; Chong, Yanwen

    2016-12-01

    Inferring gene regulatory networks (GRNs) from microarray expression data is an important but challenging problem in systems biology. In this study, the authors propose a Bayesian information criterion (BIC)-guided sparse regression approach for GRN reconstruction. This approach can adaptively model GRNs by optimising the l1-norm regularisation of sparse regression based on a modified version of the BIC. The regularisation strategy ensures that the inferred GRNs are as sparse as their natural counterparts, while the modified BIC allows incorporating prior knowledge on expression regulation and thus avoids the usual overestimation of expression regulators. In particular, the proposed method provides a clear interpretation of combinatorial regulation of gene expression by optimally extracting regulation coordination for a given target gene. Experimental results on both simulated data and real-world microarray data demonstrate the competent performance of the approach in discovering regulatory relationships for GRN reconstruction.
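
    The stock version of BIC-guided l1 regularisation (without the paper's modified criterion or prior knowledge) is available off the shelf; for one target gene, the regulator set falls out of a single fit:

    ```python
    import numpy as np
    from sklearn.linear_model import LassoLarsIC

    # Regress one target gene's expression on all candidate regulators and let
    # the BIC choose the l1 strength, keeping the regulator set sparse.
    # Synthetic data: the true regulators are columns 2 and 7.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 30))          # 60 samples x 30 candidate regulators
    y = 1.5*X[:, 2] - 0.8*X[:, 7] + 0.1*rng.standard_normal(60)

    model = LassoLarsIC(criterion="bic").fit(X, y)
    regulators = np.flatnonzero(model.coef_)   # inferred regulator indices
    print(regulators)                          # expected: [2 7]
    ```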

  13. Adaptive life simulator: A novel approach to modeling the cardiovascular system

    SciTech Connect

    Kangas, L.J.; Keller, P.E.; Hashem, S.

    1995-06-01

    In this paper, an adaptive life simulator (ALS) is introduced. The ALS models a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. These models are developed for use in applications that require simulations of cardiovascular systems, such as medical mannequins, and in medical diagnostic systems. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the actual variables of an individual can subsequently be used for diagnosis. The approach also applies sensor fusion to biomedical sensors, optimizing their utilization; the advantages of sensor fusion have been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  14. Whole genome resequencing of a laboratory-adapted Drosophila melanogaster population sample.

    PubMed

    Gilks, William P; Pennell, Tanya M; Flis, Ilona; Webster, Matthew T; Morrow, Edward H

    2016-01-01

    As part of a study into the molecular genetics of sexually dimorphic complex traits, we used high-throughput sequencing to obtain data on genomic variation in an outbred laboratory-adapted fruit fly (Drosophila melanogaster) population. We successfully resequenced the whole genome of 220 hemiclonal females that were heterozygous for the same Berkeley reference line genome (BDGP6/dm6), and a unique haplotype from the outbred base population (LHM). The use of a static and known genetic background enabled us to obtain sequences from whole-genome phased haplotypes. We used a BWA-Picard-GATK pipeline for mapping sequence reads to the dm6 reference genome assembly, at a median depth of coverage of 31X, and have made the resulting data publicly available in the NCBI Short Read Archive (Accession number SRP058502). We used HaplotypeCaller to discover and genotype 1,726,931 small genomic variants (SNPs and indels, <200bp). Additionally we detected and genotyped 167 large structural variants (1-100Kb in size) using GenomeStrip/2.0. Sequence and genotype data are publicly available at the corresponding NCBI databases: Short Read Archive, dbSNP and dbVar (BioProject PRJNA282591). We have also released the unfiltered genotype data, and the code and logs for data processing and summary statistics (https://zenodo.org/communities/sussex_drosophila_sequencing/).

  16. Psychometric properties of the Schedule for Nonadaptive and Adaptive Personality in a PTSD sample.

    PubMed

    Wolf, Erika J; Harrington, Kelly M; Miller, Mark W

    2011-12-01

    This study evaluated the psychometric characteristics of the Schedule for Nonadaptive and Adaptive Personality (SNAP; Clark, 1996) in 280 individuals who screened positive for posttraumatic stress disorder (PTSD). The SNAP validity, trait, temperament, and personality disorder (PD) scales were compared with scales on the Brief Form of the Multidimensional Personality Questionnaire (Patrick, Curtin, & Tellegen, 2002). In a subsample of 86 veterans, the SNAP PD, trait, and temperament scales were also evaluated against the International Personality Disorder Examination (IPDE; Loranger, 1999), a semistructured diagnostic interview. Results revealed that the SNAP scales have good convergent validity, as evidenced by their pattern of associations with related measures of personality and PD. However, evidence for their discriminant validity with respect to other measures of personality and PD was more mixed, and scores on the SNAP trait and temperament scales left much of the variance in IPDE-assessed PDs unexplained. The diagnostic scoring of the SNAP PD scales greatly inflated prevalence estimates of PDs relative to the IPDE and showed poor agreement with the IPDE. In contrast, the dimensional SNAP scores yielded far stronger associations with continuous scores on the IPDE. The SNAP scales also largely evidenced expected patterns of association with a measure of PTSD severity. Overall, findings support the use of this measure in this population and contribute to our conceptualization of the association between temperament, PTSD, and Axis II psychopathology.

  17. Using adaptive sampling and triangular meshes for the processing and inversion of potential field data

    NASA Astrophysics Data System (ADS)

    Foks, Nathan Leon

    The interpretation of geophysical data plays an important role in the analysis of potential field data in resource exploration industries. Two categories of interpretation techniques are discussed in this thesis: boundary detection and geophysical inversion. Fault or boundary detection is a method to interpret the locations of subsurface boundaries from measured data, while inversion is a computationally intensive method that provides 3D information about subsurface structure. My research focuses on these two aspects of interpretation techniques. First, I develop a method to aid in the interpretation of faults and boundaries from magnetic data. These processes are traditionally carried out using raster grids and image processing techniques. Instead, I use unstructured meshes of triangular facets that can extract inferred boundaries using mesh edges. Next, to address the computational issues of geophysical inversion, I develop an approach to reduce the number of data in a data set. The approach selects data points according to a user-specified proxy for their signal content. It is performed in the data domain and requires no modification to existing inversion codes, adding to the existing suite of compressive inversion algorithms. Finally, I develop an algorithm to invert gravity data for an interfacing surface using an unstructured mesh of triangular facets. A pertinent property of unstructured meshes is their flexibility at representing oblique, or arbitrarily oriented, structures. This flexibility makes unstructured meshes an ideal candidate for geometry-based interface inversions. The approaches I have developed provide a suite of algorithms geared towards large-scale interpretation of potential field data, using an unstructured representation of both the data and model parameters.

  18. Compact Ocean Models Enable Onboard AUV Autonomy and Decentralized Adaptive Sampling

    DTIC Science & Technology

    2013-09-30

    synoptic information on-board a mobile platform. 2. To benefit from additional information provided by synoptic models, we developed a combination...properties (chlorophyll-a and absorption due to phytoplankton), the model was able to reproduce intensity and tendencies in surface and subsurface chlorophyll distributions observed at water sample locations in Monterey Bay, CA (Figure 3).

  19. Adaptive use of bubble wrap for storing liquid samples and performing analytical assays.

    PubMed

    Bwambok, David K; Christodouleas, Dionysios C; Morin, Stephen A; Lange, Heiko; Phillips, Scott T; Whitesides, George M

    2014-08-05

    This paper demonstrates that the gas-filled compartments in the packing material commonly called "bubble wrap" can be repurposed in resource-limited regions as containers to store liquid samples, and to perform bioanalyses. The bubbles of bubble wrap are easily filled by injecting the samples into them using a syringe with a needle or a pipet tip, and then sealing the hole with nail hardener. The bubbles are transparent in the visible range of the spectrum, and can be used as "cuvettes" for absorbance and fluorescence measurements. The interiors of these bubbles are sterile and allow storage of samples without the need for expensive sterilization equipment. The bubbles are also permeable to gases, and can be used to culture and store micro-organisms. By incorporating carbon electrodes, these bubbles can be used as electrochemical cells. This paper demonstrates the capabilities of the bubbles by culturing E. coli, growing C. elegans, measuring glucose and hemoglobin spectrophotometrically, and measuring ferrocyanide electrochemically, all within the bubbles.

  20. A goal-oriented adaptive finite-element approach for plane wave 3-D electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Kalscheuer, Thomas; Greenhalgh, Stewart; Maurer, Hansruedi

    2013-08-01

    We have developed a novel goal-oriented adaptive mesh refinement approach for finite-element methods to model plane wave electromagnetic (EM) fields in 3-D earth models based on the electric field differential equation. To handle complicated models of arbitrary conductivity, magnetic permeability and dielectric permittivity involving curved boundaries and surface topography, we employ an unstructured grid approach. The electric field is approximated by linear curl-conforming shape functions which guarantee the divergence-free condition of the electric field within each tetrahedron and continuity of the tangential component of the electric field across the interior boundaries. Based on the non-zero residuals of the approximated electric field and the yet to be satisfied boundary conditions of continuity of both the normal component of the total current density and the tangential component of the magnetic field strength across the interior interfaces, three a posteriori error estimators are proposed as a means to drive the goal-oriented adaptive refinement procedure. The first a posteriori error estimator relies on a combination of the residual of the electric field, the discontinuity of the normal component of the total current density and the discontinuity of the tangential component of the magnetic field strength across the interior faces shared by tetrahedra. The second a posteriori error estimator is expressed in terms of the discontinuity of the normal component of the total current density (conduction plus displacement current). The discontinuity of the tangential component of the magnetic field forms the third a posteriori error estimator. Analytical solutions for magnetotelluric (MT) and radiomagnetotelluric (RMT) fields impinging on a homogeneous half-space model are used to test the performance of the newly developed goal-oriented algorithms using the above three a posteriori error estimators. A trapezoidal topographical model, using normally incident EM waves
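
    Whatever the estimator, the driver is the classic solve-estimate-mark-refine cycle. A skeleton with hypothetical solve, estimate_error and refine hooks standing in for the curl-conforming FE solver and the a posteriori indicators:

    ```python
    # Goal-oriented adaptive refinement loop in outline; all three callbacks
    # are hypothetical stand-ins for the solver and estimators described above.
    def adapt(mesh, solve, estimate_error, refine, tol=1e-3, max_iters=20):
        field = None
        for _ in range(max_iters):
            field = solve(mesh)                     # E-field on the current mesh
            eta = estimate_error(mesh, field)       # per-tetrahedron indicator (array)
            if eta.sum() < tol:
                break
            marked = eta.argsort()[-max(1, len(eta)//10):]  # worst ~10% of cells
            mesh = refine(mesh, marked)
        return mesh, field
    ```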

  1. Approaches to Adaptive Active Acoustic Noise Control at a Point Using Feedforward Techniques.

    NASA Astrophysics Data System (ADS)

    Zulch, Peter A.

    Active acoustic noise control systems have been of interest since their birth in the 1930s. The principle is to superimpose onto an unwanted noise waveform its inverse, with the intention of destructive interference. This work presents two approaches to this idea. The first approach uses a direct design method to develop a controller using an auto-regressive moving-average (ARMA) model that conditions the primary noise to produce the required anti-noise for cancellation. The development of this approach has shown that the stability of the controller relies heavily on a non-minimum phase model of the secondary noise path. For this reason, a second approach, using a controller consisting of two parts, was developed. The first part of the controller is designed to cancel broadband noise and the second part is an adaptive controller designed to cancel periodic noise. A simple technique for identifying the parameters of the broadband controller is developed. An ARMA model is used, and it is shown that its stability is improved by prefiltering the test signal with a minimum-phase inverse of the secondary noise channel. The periodic controller uses an estimate of the fundamental frequency to cancel the first few harmonics of periodic noise. A computationally efficient adaptive technique based on least squares is developed for updating the harmonic controller gains at each time step. Experimental results are included for the broadband controller, the harmonic controller, and the combination of the two algorithms. The advantages of using both techniques in conjunction are shown using test cases involving both broadband noise and periodic noise.
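
    For orientation, the adaptive core most commonly used in feedforward ANC is the filtered-x LMS loop. The sketch below is that generic algorithm, with an assumed-known secondary-path FIR model, not the ARMA-based designs developed in this work:

    ```python
    import numpy as np

    def fxlms(ref, disturbance, s_hat, L=32, mu=0.005):
        """Filtered-x LMS: adapt FIR taps w so the anti-noise, after passing
        through the secondary path, cancels the disturbance at the error mic."""
        M = len(s_hat)
        fx = np.convolve(ref, s_hat)[:len(ref)]      # reference filtered by s_hat
        w, y, e = np.zeros(L), np.zeros(len(ref)), np.zeros(len(ref))
        for n in range(max(L, M), len(ref)):
            y[n] = w @ ref[n-L+1:n+1][::-1]          # controller output
            anti = s_hat @ y[n-M+1:n+1][::-1]        # anti-noise at the error mic
            e[n] = disturbance[n] - anti             # residual error
            w += mu * e[n] * fx[n-L+1:n+1][::-1]     # LMS update on filtered ref
        return e

    # Example with fabricated primary/secondary paths.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(20000)                      # reference noise
    s = np.array([0.0, 0.6, 0.3, 0.1])                  # secondary-path model
    d = np.convolve(x, [0.0, 0.0, 0.8, 0.4])[:len(x)]   # primary-path disturbance
    residual = fxlms(x, d, s)                           # power decays as w adapts
    ```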

  2. Virtual-system-coupled adaptive umbrella sampling to compute free-energy landscape for flexible molecular docking.

    PubMed

    Higo, Junichi; Dasgupta, Bhaskar; Mashimo, Tadaaki; Kasahara, Kota; Fukunishi, Yoshifumi; Nakamura, Haruki

    2015-07-30

    A novel enhanced conformational sampling method, virtual-system-coupled adaptive umbrella sampling (V-AUS), is proposed to compute the 300-K free-energy landscape for flexible molecular docking, where a virtual degree of freedom is introduced to control the sampling. This degree of freedom interacts with the biomolecular system. V-AUS was applied to complex formation of two disordered amyloid-β (Aβ30-35) peptides in a periodic box filled with an explicit solvent. An interpeptide distance was defined as the reaction coordinate, along which sampling was enhanced. A uniform conformational distribution was obtained covering a wide interpeptide distance range from the bound to unbound states. The 300-K free-energy landscape was characterized by thermodynamically stable basins of antiparallel and parallel β-sheet complexes and some other complex forms. Helices were frequently observed when the two peptides contacted loosely or fluctuated freely without interpeptide contacts. We observed that V-AUS converged to the uniform distribution more effectively than conventional AUS sampling did.

  3. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed including agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  4. A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.

    PubMed

    Gui, Jinsong; Zhou, Kai; Xiong, Naixue

    2016-09-25

    Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve the network scalability, which is an effective topology control approach. The existing virtual MIMO-based clustering schemes either do not fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to extend the lifetime of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head functions and the optimization of cluster head selection criteria and the information exchange process, the ICV-MIMO scheme effectively reduces the network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain of the same order of magnitude.

  6. Adaptive speed/position control of induction motor based on SPR approach

    NASA Astrophysics Data System (ADS)

    Lee, Hou-Tsan

    2014-11-01

    A sensorless speed/position tracking control scheme for induction motors subject to unknown load torque is proposed, based on an adaptive strictly positive real (SPR) design. A special nonlinear coordinate transform is first provided to reform the dynamical model of the induction motor. The information on rotor fluxes can thus be derived from the dynamical model to decide the proportion of input voltage in the d-q frame under the constraint of the maximum power transfer property of induction motors. Based on the SPR approach, the speed and position control objectives can be achieved. The proposed control scheme provides speed/position control of induction motors without knowledge of some mechanical system parameters, such as the motor inertia, motor damping coefficient, and the unknown payload. The adaptive control technique is thus involved in the field-oriented control scheme to deal with the unknown parameters. A thorough proof is derived to guarantee the stability of the speed and position control systems of induction motors. Numerical simulation and experimental results are also provided to validate the effectiveness of the proposed control scheme.

  7. Wavefront sensorless approaches to adaptive optics for in vivo fluorescence imaging of mouse retina

    NASA Astrophysics Data System (ADS)

    Wahl, Daniel J.; Bonora, Stefano; Mata, Oscar S.; Haunerland, Bengt K.; Zawadzki, Robert J.; Sarunic, Marinko V.; Jian, Yifan

    2016-03-01

    Adaptive optics (AO) is necessary to correct aberrations when imaging the mouse eye at high numerical aperture. In order to obtain cellular resolution, we have implemented wavefront sensorless adaptive optics for in vivo fluorescence imaging of mouse retina. Our approach includes a lens-based system and a MEMS deformable mirror for aberration correction. The AO system was constructed with a reflectance channel for structural images and a fluorescence channel for functional images. The structural imaging was used in real time for navigation on the retina using landmarks such as blood vessels. We have also implemented a tunable liquid lens to select the retinal layer of interest at which to perform the optimization. At the desired location on the mouse retina, the optimization algorithm used the fluorescence image data to drive a modal hill-climbing algorithm with an intensity or sharpness image quality metric. The optimization requires ~30 seconds to complete a search up to the 20th Zernike mode. In this report, we demonstrate the AO performance for high-resolution imaging of the capillaries in fluorescence angiography. We have also made progress on an approach to AO with pupil segmentation, a possible sensorless technique suitable for small animal retinal imaging. Pupil segmentation AO was implemented on the same ophthalmic system and imaging performance was demonstrated on fluorescent beads with induced aberrations.
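
    A modal hill-climb of this kind has a simple skeleton: perturb one Zernike coefficient at a time and keep the value that maximises the image-quality metric. apply_modes and sharpness below are hypothetical hardware and image hooks:

    ```python
    import numpy as np

    # Modal hill-climbing over Zernike coefficients; 'apply_modes' shapes the
    # deformable mirror and 'sharpness' acquires and scores a frame (both are
    # stand-ins for the instrument interfaces).
    def hill_climb(apply_modes, sharpness, n_modes=20, step=0.05, iters=3):
        coeffs = np.zeros(n_modes)
        for _ in range(iters):
            for m in range(n_modes):
                scores = {}
                for delta in (-step, 0.0, +step):
                    trial = coeffs.copy()
                    trial[m] += delta
                    apply_modes(trial)            # shape the mirror
                    scores[delta] = sharpness()   # score the resulting frame
                coeffs[m] += max(scores, key=scores.get)
        apply_modes(coeffs)                       # leave the best shape applied
        return coeffs
    ```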

  8. Behavior Change Interventions to Improve the Health of Racial and Ethnic Minority Populations: A Tool Kit of Adaptation Approaches

    PubMed Central

    Davidson, Emma M; Liu, Jing Jing; Bhopal, Raj; White, Martin; Johnson, Mark RD; Netto, Gina; Wabnitz, Cecile; Sheikh, Aziz

    2013-01-01

    Context Adapting behavior change interventions to meet the needs of racial and ethnic minority populations has the potential to enhance their effectiveness in the target populations. But because there is little guidance on how best to undertake these adaptations, work in this field has proceeded without any firm foundations. In this article, we present our Tool Kit of Adaptation Approaches as a framework for policymakers, practitioners, and researchers interested in delivering behavior change interventions to ethnically diverse, underserved populations in the United Kingdom. Methods We undertook a mixed-method program of research on interventions for smoking cessation, increasing physical activity, and promoting healthy eating that had been adapted to improve salience and acceptability for African-, Chinese-, and South Asian–origin minority populations. This program included a systematic review (reported using PRISMA criteria), qualitative interviews, and a realist synthesis of data. Findings We compiled a richly informative data set of 161 publications and twenty-six interviews detailing the adaptation of behavior change interventions and the contexts in which they were undertaken. On the basis of these data, we developed our Tool Kit of Adaptation Approaches, which contains (1) a forty-six-item Typology of Adaptation Approaches; (2) a Pathway to Adaptation, which shows how to use the Typology to create a generic behavior change intervention; and (3) RESET, a decision tool that provides practical guidance on which adaptations to use in different contexts. Conclusions Our Tool Kit of Adaptation Approaches provides the first evidence-derived suite of materials to support the development, design, implementation, and reporting of health behavior change interventions for minority groups. The Tool Kit now needs prospective, empirical evaluation in a range of intervention and population settings. PMID:24320170

  9. Massively parallel sampling of lattice proteins reveals foundations of thermal adaptation

    NASA Astrophysics Data System (ADS)

    Venev, Sergey V.; Zeldovich, Konstantin B.

    2015-08-01

    Evolution of proteins in bacteria and archaea living in different conditions leads to significant correlations between amino acid usage and environmental temperature. The origins of these correlations are poorly understood, and an important question of protein theory, physics-based prediction of the types of amino acids overrepresented in highly thermostable proteins, remains largely unsolved. Here, we extend the random energy model of protein folding by weighting the interaction energies of amino acids by their frequencies in protein sequences and predict the energy gap of proteins designed to fold well at elevated temperatures. To test the model, we present a novel scalable algorithm for simultaneous energy calculation for many sequences in many structures, targeting massively parallel computing architectures such as graphics processing units (GPUs). The energy calculation is performed by multiplying two matrices, one representing the complete set of sequences, and the other describing the contact maps of all structural templates. An implementation of the algorithm for the CUDA platform is available at http://www.github.com/kzeldovich/galeprot and calculates protein folding energies over 250 times faster than a single central processing unit. Analysis of amino acid usage in 64-mer cubic lattice proteins designed to fold well at different temperatures demonstrates an excellent agreement between theoretical and simulated values of the energy gap. The theoretical predictions of temperature trends of amino acid frequencies are significantly correlated with bioinformatics data on 191 bacteria and archaea, and highlight protein folding constraints as a fundamental selection pressure during thermal adaptation in biological evolution.
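
    The matrix-product formulation can be reproduced in miniature: expand each sequence into a vector of pairwise contact energies over all residue pairs, each structure into a 0/1 vector of contacting pairs, and all folding energies drop out of one product. This is a toy CPU analogue of the paper's GPU kernel, with a random contact potential and random contact maps standing in for real ones:

    ```python
    import numpy as np

    n_res, n_seq, n_struct = 16, 100, 50
    pairs = [(i, j) for i in range(n_res) for j in range(i + 1, n_res)]

    rng = np.random.default_rng(1)
    e = rng.normal(size=(20, 20)); e = (e + e.T) / 2    # toy symmetric contact potential
    seqs = rng.integers(0, 20, size=(n_seq, n_res))     # random amino acid sequences

    # Row per sequence: contact energy e(s_i, s_j) for every residue pair.
    S = np.array([[e[s[i], s[j]] for (i, j) in pairs] for s in seqs])
    # Row per structure: 1 where pair (i, j) is in contact, else 0.
    C = rng.integers(0, 2, size=(n_struct, len(pairs)))

    E = S @ C.T          # (n_seq x n_struct) folding energies in one product
    ```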

  10. Adaption of egg and larvae sampling techniques for lake sturgeon and broadcast spawning fishes in a deep river

    USGS Publications Warehouse

    Roseman, E.F.; Boase, J.; Kennedy, G.; Craig, J.; Soper, K.

    2011-01-01

    In this report we describe how we adapted two techniques for sampling lake sturgeon (Acipenser fulvescens) and other fish early life history stages to meet our research needs in the Detroit River, a deep, flowing Great Lakes connecting channel. First, we developed a buoy-less method for sampling fish eggs and spawning activity using egg mats deployed on the river bottom. The buoy-less method allowed us to fish gear in areas frequented by boaters and recreational anglers, thus eliminating surface obstructions that interfered with recreational and boating activities. The buoy-less method also reduced gear loss due to drift when masses of floating aquatic vegetation would accumulate on buoys and lines, increasing the drag on the gear and pulling it downstream. Second, we adapted a D-frame drift net system formerly employed in shallow streams to assess larval lake sturgeon dispersal for use in the deeper (>8m) Detroit River using an anchor and buoy system. © 2011 Blackwell Verlag, Berlin.

  11. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) through analyses of sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach over LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). Flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
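
    For reference, the LHS baseline within GLUE is straightforward to sketch. The following toy example, with a hypothetical two-parameter model standing in for XAJ and a Nash-Sutcliffe behavioral threshold, shows the sampling, filtering, and likelihood-weighted uncertainty band steps; all names and numbers are illustrative.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)

def model(theta, t):                      # hypothetical stand-in for the XAJ model
    a, b = theta
    return a * np.exp(-b * t)

t = np.linspace(0.0, 10.0, 50)
obs = model((2.0, 0.3), t) + rng.normal(0, 0.05, t.size)

# Latin hypercube sample of the parameter space (the LHS baseline)
lhs = qmc.LatinHypercube(d=2, seed=1)
theta = qmc.scale(lhs.random(5000), [0.5, 0.05], [4.0, 1.0])

sim = np.array([model(th, t) for th in theta])
nse = 1 - ((sim - obs) ** 2).sum(1) / ((obs - obs.mean()) ** 2).sum()

behavioral = nse > 0.7                        # GLUE behavioral threshold
w = nse[behavioral] / nse[behavioral].sum()   # likelihood weights

def wquantile(x, w, q):
    """Weighted quantile via the cumulative likelihood of sorted values."""
    i = np.argsort(x)
    cw = np.cumsum(w[i])
    return np.interp(q, cw / cw[-1], x[i])

# Likelihood-weighted 90% uncertainty band on the forecast at each time step
band = np.array([[wquantile(sim[behavioral][:, k], w, q) for q in (0.05, 0.95)]
                 for k in range(t.size)])
print(int(behavioral.sum()), "behavioral sets; band shape", band.shape)
```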

  12. Integrated approaches to natural resources management in practice: the catalyzing role of National Adaptation Programmes for Action.

    PubMed

    Stucki, Virpi; Smith, Mark

    2011-06-01

    The relationship between forests and water quantity and quality has been debated in recent years. At the same time, the focus on climate change has increased interest in ecosystem restoration as a means of adaptation. Climate change might become one of the key drivers pushing integrated approaches to natural resources management into practice. The National Adaptation Programme of Action (NAPA) is an initiative agreed under the UN Framework Convention on Climate Change. An analysis was done to find out how widely ecosystem restoration and integrated approaches have been incorporated into NAPA priority adaptation projects. The data show that the NAPAs can be seen as a potentially important channel for operationalizing various integrated concepts. The key challenge is to implement the NAPA projects. The amount needed to implement the NAPA projects aiming at ecosystem restoration using integrated approaches represents only 0.7% of the money pledged in Copenhagen for climate change adaptation.

  13. A margin based approach to determining sample sizes via tolerance bounds.

    SciTech Connect

    Newcomer, Justin T.; Freeland, Katherine Elizabeth

    2013-09-01

    This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
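
    A minimal sketch of the underlying calculation, assuming a one-sided normal tolerance bound whose factor k comes from the noncentral t distribution; the coverage, confidence, margin, and sigma values below are illustrative, not the paper's requirements.

```python
import numpy as np
from scipy.stats import nct, norm

def k_factor(n, p=0.95, conf=0.90):
    """One-sided normal tolerance-bound factor (95% coverage, 90% confidence)."""
    return nct.ppf(conf, df=n - 1, nc=norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)

# Smallest n whose estimated uncertainty (k * s) stays below the estimated
# margin, i.e. the tolerance ratio margin / (k * s) exceeds one
s_est, margin = 1.0, 2.5        # illustrative sigma estimate and margin
for n in range(3, 50):
    if k_factor(n) * s_est < margin:
        print("required sample size:", n, " k =", round(k_factor(n), 3))
        break
```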

  14. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  15. Improving satellite-retrieved surface radiative fluxes in polar regions using a smart sampling approach

    NASA Astrophysics Data System (ADS)

    Van Tricht, Kristof; Lhermitte, Stef; Gorodetskaya, Irina V.; van Lipzig, Nicole P. M.

    2016-10-01

    The surface energy budget (SEB) of polar regions is key to understanding the polar amplification of global climate change and its worldwide consequences. However, despite a growing network of ground-based automatic weather stations that measure the radiative components of the SEB, extensive areas remain where no ground-based observations are available. Satellite remote sensing has emerged as a potential solution to retrieve components of the SEB over remote areas, with radar and lidar aboard the CloudSat and CALIPSO satellites among the first to enable estimates of surface radiative long-wave (LW) and short-wave (SW) fluxes based on active cloud observations. However, due to the small swath footprints, combined with a return cycle of 16 days, questions arise as to how CloudSat and CALIPSO observations should be optimally sampled in order to retrieve representative fluxes for a given location. Here we present a smart sampling approach to retrieve downwelling surface radiative fluxes from CloudSat and CALIPSO observations for any given land-based point-of-interest (POI) in polar regions. The method comprises a spatial correction that allows the distance between the satellite footprint and the POI to be increased in order to raise the satellite sampling frequency. Sampling frequency is enhanced on average from only two unique satellite overpasses each month for limited-distance sampling < 10 km from the POI, to 35 satellite overpasses for the smart sampling approach. This reduces the root-mean-square errors on monthly mean flux estimates compared to ground-based measurements from 23 to 10 W m-2 (LW) and from 43 to 14 W m-2 (SW). The added value of the smart sampling approach is shown to be largest on finer temporal resolutions, where limited-distance sampling suffers from severely limited sampling frequencies. Finally, the methodology is illustrated for Pine Island Glacier (Antarctica) and the Greenland northern interior. Although few ground-based observations are
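
    The trade-off behind the smart sampling approach can be sketched as follows: widening the allowed footprint-to-POI distance multiplies the number of usable overpasses, and a correction maps fluxes observed at a distance back to the POI. The abstract does not give the exact form of the spatial correction, so the linear distance regression below is a hedged stand-in on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic overpass footprints: distance to the POI (km) and retrieved LW flux
dist = rng.uniform(0, 100, 500)
flux = 250 - 0.2 * dist + rng.normal(0, 8, 500)   # mild spatial gradient + noise

for radius in (10, 50, 100):                      # km
    sel = dist < radius
    # Stand-in spatial correction: regress flux on distance inside the radius
    # and evaluate the fit at distance zero (the POI)
    slope, intercept = np.polyfit(dist[sel], flux[sel], 1)
    print(f"radius {radius:3d} km: {sel.sum():3d} overpasses, "
          f"corrected flux at POI = {intercept:.1f} W m-2")
```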

  16. Focussing over the edge: adaptive subsurface laser fabrication up to the sample face.

    PubMed

    Salter, P S; Booth, M J

    2012-08-27

    Direct laser writing is widely used for fabrication of subsurface, three dimensional structures in transparent media. However, the accessible volume is limited by distortion of the focussed beam at the sample edge. We determine the aberrated focal intensity distribution for light focused close to the edge of the substrate. Aberrations are modelled by dividing the pupil into two regions, each corresponding to light passing through the top and side facets. Aberration correction is demonstrated experimentally using a liquid crystal spatial light modulator for femtosecond microfabrication in fused silica. This technique allows controlled subsurface fabrication right up to the edge of the substrate. This can benefit a wide range of applications using direct laser writing, including the manufacture of waveguides and photonic crystals.

  17. An Adaptive Intelligent Integrated Lighting Control Approach for High-Performance Office Buildings

    NASA Astrophysics Data System (ADS)

    Karizi, Nasim

    An acute and crucial societal problem is the energy consumed in existing commercial buildings. There are 1.5 million commercial buildings in the U.S., with only about 3% being built each year. Hence, existing buildings need to be properly operated and maintained for several decades. Application of integrated centralized control systems in buildings could lead to more than 50% energy savings. This research work demonstrates an innovative adaptive integrated lighting control approach which could achieve significant energy savings and increase indoor comfort in high-performance office buildings. In the first phase of the study, a predictive algorithm was developed and validated through experiments in an actual test room. The objective was to regulate daylight on a specified work plane by controlling the blind slat angles. Furthermore, a sensor-based integrated adaptive lighting controller was designed in Simulink, which included an innovative sensor optimization approach based on a genetic algorithm to minimize the number of sensors and efficiently place them in the office. The controller was designed around simple integral controllers. The objective of the developed control algorithm was to improve the illuminance conditions in the office by controlling the daylight and electrical lighting. To evaluate the performance of the system, the controller was applied to the experimental office model from Lee et al.'s 1998 research study. The results of the developed control approach indicate a significant improvement in lighting conditions, with monthly electrical energy savings of 1-23% and 50-78% in the office model compared with two static strategies in which the blinds were left open or closed, respectively, throughout the year.

  18. A quantitative proteomic approach to highlight Phragmites sp. adaptation mechanisms to chemical stress induced by a textile dyeing pollutant.

    PubMed

    Ferreira, R A; Roma-Rodrigues, C; Davies, L C; Sá-Correia, I; Martins-Dias, S

    2016-12-15

    Phragmites sp. is present worldwide in treatment wetlands, though the mechanisms involved in phytoremediation remain unclear. In this study a quantitative proteomic approach was used to study the prompt response and adaptation of Phragmites to the textile dyeing pollutant Acid Orange 7 (AO7). Previously, it was demonstrated that AO7 could be successfully removed from wastewater and mineralized in a constructed wetland planted with Phragmites sp. This azo dye is readily taken up by roots and transported to the plant aerial part by the xylem. Phragmites leaf samples were collected from a pilot-scale vertical flow constructed wetland after 0.25, 3.25 and 24.25 h of exposure to AO7 (400 mg L(-1)); samples taken immediately after a watering cycle were used as controls. Leaf soluble protein extraction yielded an average of 1560 proteins in a broad pI range (pH 3-10) by two-dimensional gel electrophoresis. A time-course comparative analysis of the leaf proteome revealed that 40 proteins had a differential abundance compared to the control (p<0.05) within a 3.25 h period. After 24.25 h in contact with AO7, the leaf proteome was similar to the control. Adaptation to AO7 involved proteins related to cellular signalling (calreticulin, Ras-related protein Rab11D and 20S proteasome), energy production and conversion (adenosine triphosphate synthase beta subunit), carbohydrate transport and metabolism (phosphoglucose isomerase, fructose-bisphosphate aldolase, monodehydroascorbate reductase, fructokinase-1, the hypothetical protein POPTR_0003s12000g and the uncharacterized protein LOC100272772) and photosynthesis (sedoheptulose-1,7-bisphosphatase and ferredoxin-NADP(+) reductase). Therefore, the quantitative proteomic approach used in this work indicates that mechanisms associated with stress cell signalling, energy production, carbohydrate transport and metabolism, as well as proteins related to photosynthesis, are key players in the initial chemical stress response during the phytoremediation of AO7.

  19. A User-Driven and Data-Driven Approach for Supporting Teachers in Reflection and Adaptation of Adaptive Tutorials

    ERIC Educational Resources Information Center

    Ben-Naim, Dror; Bain, Michael; Marcus, Nadine

    2009-01-01

    It has been recognized that in order to drive Intelligent Tutoring Systems (ITSs) into mainstream use by the teaching community, it is essential to support teachers through the entire ITS process: Design, Development, Deployment, Reflection and Adaptation. Although research has been done on supporting teachers through design to deployment of ITSs,…

  20. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  1. Bioagent Sample Matching using Elemental Composition Data: an Approach to Validation

    SciTech Connect

    Velsko, S P

    2006-04-21

    Sample matching is a fundamental capability that can have high probative value in a forensic context if proper validation studies are performed. In this report we discuss the potential utility of using the elemental composition of two bioagent samples to decide if they were produced in the same batch, or by the same process. Using guidance from the recent NRC study of bullet lead analysis and other sources, we develop a basic likelihood ratio framework for evaluating the evidentiary weight of elemental analysis data for sample matching. We define an objective metric for comparing two samples, and propose a method for constructing an unbiased population of test samples. We illustrate the basic methodology with some existing data on dry Bacillus thuringiensis preparations, and outline a comprehensive plan for experimental validation of this approach.

  2. A design and analysis approach for drag reduction on aircraft with adaptive lifting surfaces

    NASA Astrophysics Data System (ADS)

    Cusher, Aaron Anthony

    Adaptive lifting surfaces, which can be tailored for different flight conditions, have been shown to be beneficial for drag reduction when compared with conventional non-adaptive surfaces. Applying multiple trailing-edge flaps along the wing span allows for the redistribution of lift to suit different flight conditions. The current approach uses the trailing-edge flap distribution to reduce both induced and profile components of drag with a trim constraint. Induced drag is reduced by optimally redistributing the lift between the lifting surfaces and along the span of each surface. Profile drag is reduced through the use of natural laminar flow airfoils, which maintain distinct low-drag ranges (drag buckets) surrounding design lift values. The low-drag ranges can be extended to include off-design values through small flap deflections, similar to cruise flaps. Trim is constrained for a given static margin by considering longitudinal pitching moment contributions from changes in airfoil section due to individual flap deflections, and from the redistribution of fore-and-aft lift due to combinations of flap deflections. The approach uses the concept of basic and additional lift to linearize the problem, which allows standard constrained-minimization theory to be employed for determining optimal flap-angle solutions. The resulting expressions for optimal flap-angle solutions are presented as simple matrix equations. This work presents a design and analysis approach which is used to produce flap-angle solutions that independently reduce induced, profile, and total drag. Total drag is defined to be the sum of the induced and profile components of drag. The general drag reduction approach is adapted for each specific situation to develop specific drag reduction schemes that are applied to single- and multiple-surface configurations. Successful results show that, for the application of the induced drag reduction schemes on a tailless aircraft, near-elliptical lift

  3. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
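
    The comparison is easy to reproduce in miniature. In the sketch below, the adjustment is assumed to be a simple rescaling of the full-sample statistic by the ratio of sample sizes, while the random sample approach recomputes the statistic on actual subsamples; the data-generating numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)

# Population with a small true association, so the null model never fits exactly
n = 21000
x = rng.integers(0, 2, n)
y = (rng.random(n) < 0.5 + 0.03 * x).astype(int)

def chi2_of(idx):
    table = np.array([[np.sum((x[idx] == i) & (y[idx] == j)) for j in (0, 1)]
                      for i in (0, 1)])
    return chi2_contingency(table, correction=False)[0]

full = chi2_of(np.arange(n))
for n_sub in (10000, 5000, 1000):
    adjusted = full * n_sub / n                      # rescaling adjustment (assumed form)
    random_mean = np.mean([chi2_of(rng.choice(n, n_sub, replace=False))
                           for _ in range(200)])     # random sample approach
    print(f"n={n_sub}: adjusted {adjusted:.1f} vs random-sample mean {random_mean:.1f}")
```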

  4. Multivariate Multi-Objective Allocation in Stratified Random Sampling: A Game Theoretic Approach

    PubMed Central

    Hussain, Ijaz; Shoukry, Alaa Mohamd

    2016-01-01

    We consider the problem of multivariate multi-objective allocation where no or limited information is available within the stratum variance. Results show that a game theoretic approach (based on weighted goal programming) can be applied to sample size allocation problems. We use a simulation technique to determine the payoff matrix and to solve a minimax game. PMID:27936039
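
    Once a payoff matrix is in hand, the minimax game reduces to a standard linear program. A minimal SciPy sketch follows; the payoff matrix here is made up (the paper's comes from simulation), and the decision variables are the row player's mixed strategy plus the game value.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative payoff matrix: rows = allocation strategies, columns = states
A = np.array([[3.0, 1.0, 2.0],
              [1.5, 2.5, 1.0],
              [2.0, 2.0, 2.2]])
m, k = A.shape

# Variables z = (p_1..p_m, v); maximize v s.t. A.T @ p >= v, sum(p) = 1, p >= 0
c = np.r_[np.zeros(m), -1.0]                 # linprog minimizes, so minimize -v
A_ub = np.c_[-A.T, np.ones((k, 1))]          # v - (A.T @ p)_j <= 0 for each column j
b_ub = np.zeros(k)
A_eq = [np.r_[np.ones(m), 0.0]]              # mixed strategy sums to one
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])
print("mixed strategy:", res.x[:m].round(3), " game value:", round(-res.fun, 3))
```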

  5. Stressful Situations at Work and in Private Life among Young Workers: An Event Sampling Approach

    ERIC Educational Resources Information Center

    Grebner, Simone; Elfering, Achim; Semmer, Norbert K.; Kaiser-Probst, Claudia; Schlapbach, Marie-Louise

    2004-01-01

    Most studies on occupational stress concentrate on chronic conditions, whereas research on stressful situations is rather sparse. Using an event-sampling approach, 80 young workers reported stressful events over 7 days (409 work-related and 127 private events). Content analysis showed the newcomers' work experiences to be similar to what is…

  6. Evaluation of PCR Approaches for Detection of Bartonella bacilliformis in Blood Samples

    PubMed Central

    Gomes, Cláudia; Martinez-Puchol, Sandra; Pons, Maria J.; Bazán, Jorge; Tinco, Carmen; del Valle, Juana; Ruiz, Joaquim

    2016-01-01

    Background The lack of an effective diagnostic tool for Carrion’s disease leads to misdiagnosis, wrong treatments and perpetuation of asymptomatic carriers living in endemic areas. Conventional PCR approaches have been reported as a diagnostic technique. However, the detection limits of these techniques are not clear, nor is their usefulness in low-bacteremia cases. The aim of this study was to evaluate the detection limits of 3 PCR approaches. Methodology/Principal Findings We determined the detection limits of 3 different PCR approaches: the Bartonella-specific 16S rRNA, fla and ITS genes. We also evaluated the viability of dry blood spots as a sample transport system. Our results show that the 16S rRNA PCR is the approach with the lowest detection limit, 5 CFU/μL, and thus the best diagnostic PCR tool studied. Dry blood spots diminish the sensitivity of the assay. Conclusions/Significance Of the tested PCRs, the 16S rRNA PCR approach is the best for direct blood detection of acute cases of Carrion’s disease. Although the use of dry blood spots makes the transport of samples from rural areas easier, a slight decrease in sensitivity was observed. The usefulness of PCR for detecting low-bacteremia or asymptomatic carriers is doubtful, showing the need to search for new, more sensitive techniques. PMID:26959642

  7. An adaptive finite element approach to modelling sediment laden density currents

    NASA Astrophysics Data System (ADS)

    Parkinson, S.; Hill, J.; Allison, P. A.; Piggott, M. D.

    2012-04-01

    , and with significantly shorter run times, using a dynamic adaptive mesh approach.

  8. Efficient pulse compression for LPI waveforms based on a nonparametric iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Li, Zhengzheng; Nepal, Ramesh; Zhang, Yan; Blake, WIlliam

    2015-05-01

    In order to achieve low probability of intercept (LPI), radar waveforms are usually long and randomly generated. Due to this randomized nature, matched filter responses (autocorrelations) of those waveforms can have high sidelobes, which would mask weaker targets near a strong target, limiting the radar's ability to distinguish close-by targets. To improve resolution and reduce sidelobe contamination, a waveform-independent pulse compression filter is desired. Furthermore, the pulse compression filter needs to be able to adapt to the received signal to achieve optimized performance. As many existing pulse compression techniques require intensive computation, real-time implementation is infeasible. This paper introduces a new adaptive pulse compression technique for LPI waveforms that is based on a nonparametric iterative adaptive approach (IAA). Due to its nonparametric nature, no parameter tuning is required for different waveforms. IAA can achieve super-resolution and sidelobe suppression in both range and Doppler domains. It can also be extended to directly handle the matched filter (MF) output (called MF-IAA), which further reduces the computational load. The practical impact of LPI waveform operations on IAA and MF-IAA has not been carefully studied in previous work. Herein, typical LPI waveforms such as random phase coding, as well as non-LPI waveforms, are tested with both single-pulse and multi-pulse IAA processing. A realistic airborne radar simulator as well as actual measured radar data are used for the validations. It is validated that, in spite of noticeable differences between test waveforms, the IAA algorithms and their improvements can effectively achieve range-Doppler super-resolution on realistic data.
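
    A minimal range-only IAA sketch (single pulse, no Doppler) is shown below: initialize the range-cell powers with the matched filter, then iterate the weighted-least-squares amplitude and power updates. The waveform, grid size, targets, and regularization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

N, K = 64, 128                             # code length, range grid size
code = np.exp(2j * np.pi * rng.random(N))  # random-phase LPI waveform

# Dictionary: column k is the waveform delayed to range cell k (zero padded)
A = np.zeros((N + K - 1, K), complex)
for k in range(K):
    A[k:k + N, k] = code

truth = np.zeros(K, complex)
truth[40], truth[43] = 1.0, 0.05           # strong target masking a weak one
noise = rng.normal(size=N + K - 1) + 1j * rng.normal(size=N + K - 1)
y = A @ truth + 0.01 * noise

# Matched-filter initialization of the range-cell powers
p = np.abs(A.conj().T @ y) ** 2 / (np.abs(A) ** 2).sum(0) ** 2
for _ in range(10):                        # IAA iterations
    R = (A * p) @ A.conj().T + 1e-6 * np.eye(A.shape[0])
    Riy = np.linalg.solve(R, y)
    RiA = np.linalg.solve(R, A)
    s = (A.conj() * Riy[:, None]).sum(0) / (A.conj() * RiA).sum(0)
    p = np.abs(s) ** 2

print("two largest peaks at range cells:", np.argsort(p)[-2:])
```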

  9. Unbiased parallel detection of viral pathogens in clinical samples by use of a metagenomic approach.

    PubMed

    Yang, Jian; Yang, Fan; Ren, Lili; Xiong, Zhaohui; Wu, Zhiqiang; Dong, Jie; Sun, Lilian; Zhang, Ting; Hu, Yongfeng; Du, Jiang; Wang, Jianwei; Jin, Qi

    2011-10-01

    Viral infectious diseases represent a major threat to public health and are among the greatest disease burdens worldwide. Rapid and accurate identification of viral agents is crucial for both outbreak control and estimating regional disease burdens. Recently developed metagenomic methods have proven to be powerful tools for simultaneous pathogen detection. Here, we performed a systematic study of the capability of the short-read-based metagenomic approach in the molecular detection of viral pathogens in nasopharyngeal aspirate samples from patients with acute lower respiratory tract infections (n = 16). Using the high-throughput capacity of ultradeep sequencing and a dedicated data interpretation method, we successfully identified seven species of known respiratory viral agents from 15 samples, a result that was consistent with results of conventional PCR assays. We also detected a coinfected case that was missed by regular PCR testing. Using the metagenomic data, 11 draft genomes of the abundantly detected viruses in the samples were reconstructed with 21.84% to 98.53% coverage. Our results show the power of the short-read-based metagenomic approach for accurate and parallel screening of viral pathogens. Although there are some inherent difficulties in applying this approach to clinical samples, including a lack of controls, limited specimen quantity, and high contamination rate, our work will facilitate further application of this unprecedented high-throughput method to clinical samples.

  10. Stable direct adaptive control of linear infinite-dimensional systems using a command generator tracker approach

    NASA Technical Reports Server (NTRS)

    Balas, Mark; Kaufman, Howard; Wen, John

    1984-01-01

    The topics are presented in view graph form and include the following: an adaptive model following control; adaptive control of a distributed parameter system (DPS) with a finite-dimensional controller; a direct adaptive controller; a closed-loop adaptively controlled DPS; Lyapunov stability; the asymptotic stability of the closed loop; and model control of a simply supported beam.

  11. Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Huff, Daniel W.

    computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.

  12. Seeking mathematics success for college students: a randomized field trial of an adapted approach

    NASA Astrophysics Data System (ADS)

    Gula, Taras; Hoessler, Carolyn; Maciejewski, Wes

    2015-11-01

    Many students enter the Canadian college system with insufficient mathematical ability and leave the system with little improvement. Those students who enter with poor mathematics ability typically take a developmental mathematics course as their first and possibly only mathematics course. The educational experiences that comprise a developmental mathematics course vary widely and are, too often, ineffective at improving students' ability. This trend is concerning, since low mathematics ability is known to be related to lower rates of success in subsequent courses. To date, little attention has been paid to the selection of an instructional approach to consistently apply across developmental mathematics courses. Prior research suggests that an appropriate instructional method would involve explicit instruction and practising mathematical procedures linked to a mathematical concept. This study reports on a randomized field trial of a developmental mathematics approach at a college in Ontario, Canada. The new approach is an adaptation of the JUMP Math program, an explicit instruction method designed for primary and secondary school curricula, to the college learning environment. In this study, a subset of courses was assigned to JUMP Math and the remainder was taught in the same style as in previous years. We found consistent, modest improvement in the JUMP Math sections compared to the non-JUMP sections, after accounting for potential covariates. The findings from this randomized field trial, along with prior research on effective education for developmental mathematics students, suggest that JUMP Math is a promising way to improve college student outcomes.

  13. An adaptive toolbox approach to the route to expertise in sport.

    PubMed

    de Oliveira, Rita F; Lobinger, Babett H; Raab, Markus

    2014-01-01

    Expertise is characterized by fast decision-making which is highly adaptive to new situations. Here we propose that athletes use a toolbox of heuristics which they develop on their route to expertise. The development of heuristics occurs within the context of the athletes' natural abilities, past experiences, developed skills, and situational context, but does not pertain to any of these factors separately. The novelty of this approach lies in the integration of these separate factors determining expertise into a comprehensive heuristic description. It is our contention that talent identification methods and talent development models should therefore be geared toward the assessment and development of specific heuristics. Specifically, in addition to identifying and developing separate natural abilities and skills as per usual, heuristics should be identified and developed. The application of heuristics to talent and expertise models can bring the field one step away from dichotomized models of nature and nurture toward a comprehensive approach to the route to expertise.

  14. Adaptation policies to increase terrestrial ecosystem resilience. Potential utility of a multicriteria approach

    SciTech Connect

    de Bremond, Ariane; Engle, Nathan L.

    2014-01-30

    Climate change is rapidly undermining terrestrial ecosystem resilience and the capacity of ecosystems to continue providing their services to the benefit of humanity and nature. Because of the importance of terrestrial ecosystems to human well-being and supporting services, decision makers throughout the world are busy creating policy responses that secure multiple development and conservation objectives, including that of supporting terrestrial ecosystem resilience in the context of climate change. This article aims to advance analyses of climate policy evaluation and planning in the area of terrestrial ecosystem resilience by discussing adaptation policy options within the ecology-economy-society nexus. The paper evaluates these decisions in the realm of terrestrial ecosystem resilience and assesses the utility of a set of criteria, indicators, and assessment methods proposed by a new conceptual multi-criteria framework for pro-development climate policy and planning developed by the United Nations Environment Programme. Potential applications of a multicriteria approach to climate policy vis-à-vis terrestrial ecosystems are then explored through two hypothetical case study examples. The paper closes with a brief discussion of the utility of the multi-criteria approach in the context of other climate policy evaluation approaches, considers lessons learned as a result of efforts to evaluate climate policy in the realm of terrestrial ecosystems, and reiterates the role of ecosystem resilience in creating sound policies and actions that support the integration of climate change and development goals.

  15. The adaptive approach for storage assignment by mining data of warehouse management system for distribution centres

    NASA Astrophysics Data System (ADS)

    Ming-Huang Chiang, David; Lin, Chia-Ping; Chen, Mu-Chen

    2011-05-01

    Among distribution centre operations, order picking has been reported to be the most labour-intensive activity. Sophisticated storage assignment policies adopted to reduce the travel distance of order picking have been explored in the literature. Unfortunately, previous research has been devoted to locating the entire set of products from scratch. Instead, this study proposes an adaptive approach, a Data Mining-based Storage Assignment approach (DMSA), to find the optimal storage assignment for newly delivered products that need to be put away when there is vacant shelf space in a distribution centre. In the DMSA, a new association index (AIX) is developed to evaluate the fitness between the put-away products and the unassigned storage locations by applying association rule mining. With the AIX, the storage location assignment problem (SLAP) can be formulated and solved as a binary integer program. To evaluate the performance of the DMSA, a real-world order database of a distribution centre is obtained and used to compare the results from the DMSA with a random assignment approach. It turns out that the DMSA outperforms random assignment as the number of put-away products and the proportion of put-away products with high turnover rates increase.
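
    The abstract does not define the AIX formula, so the sketch below substitutes a generic support-based association score between the put-away product and the products neighbouring each vacant location; the order data, location names, and scoring rule are all hypothetical, and in the full method such scores would feed the binary integer program rather than a direct argmax.

```python
from collections import Counter
from itertools import combinations

# Historical picking orders (illustrative) and the current shelf layout
orders = [{"A", "B"}, {"A", "C"}, {"B", "C"}, {"A", "B", "C"}, {"D"}]
support = Counter()
for o in orders:
    for item in o:
        support[item] += 1                    # single-item support
    for pair in combinations(sorted(o), 2):
        support[pair] += 1                    # co-occurrence support

def assoc(p, q):
    """Support-based association between two products (stand-in for AIX)."""
    pair = tuple(sorted((p, q)))
    return support[pair] / max(support[p], 1)

# Score each vacant location by association with its neighbouring products
vacant = {"loc7": ["B", "C"], "loc9": ["D"]}  # neighbours per vacant location
new_product = "A"
scores = {loc: sum(assoc(new_product, q) for q in nbrs)
          for loc, nbrs in vacant.items()}
print(max(scores, key=scores.get), scores)
```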

  16. Detection of synchronization between chaotic signals: An adaptive similarity-based approach

    NASA Astrophysics Data System (ADS)

    Chen, Shyan-Shiou; Chen, Li-Fen; Wu, Yu-Te; Wu, Yu-Zu; Lee, Po-Lei; Yeh, Tzu-Chen; Hsieh, Jen-Chuen

    2007-12-01

    We present an adaptive similarity-based approach to detect generalized synchronization (GS) with n:m phase synchronization (PS), where n and m are integers and one of them is 1. This approach is based on the similarity index (SI) and Gaussian mixture model with the minimum description length criterion. The clustering method, which is shown to be superior to the closeness and connectivity of a continuous function, is employed in this study to detect the existence of GS with n:m PS. We conducted a computer simulation and a finger-lifting experiment to illustrate the effectiveness of the proposed method. In the simulation of a Rössler-Lorenz system, our method outperformed the conventional SI, and GS with 2:1 PS within the coupled system was found. In the experiment of self-paced finger-lifting movement, cortico-muscular GS with 1:2 and 1:3 PS was found between the surface electromyogram signals on the first dorsal interossei muscle and the magnetoencephalographic data in the motor area. The GS with n:m PS ( n or m=1 ) has been simultaneously resolved from both simulation and experiment. The proposed approach thereby provides a promising means for advancing research into both nonlinear dynamics and brain science.

  17. An adaptive neural swarm approach for intrusion defense in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Cannady, James

    2011-06-01

    Wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are being increasingly deployed in critical applications due to the flexibility and extensibility of the technology. While these networks possess numerous advantages over traditional wireless systems in dynamic environments they are still vulnerable to many of the same types of host-based and distributed attacks common to those systems. Unfortunately, the limited power and bandwidth available in WSNs and MANETs, combined with the dynamic connectivity that is a defining characteristic of the technology, makes it extremely difficult to utilize traditional intrusion detection techniques. This paper describes an approach to accurately and efficiently detect potentially damaging activity in WSNs and MANETs. It enables the network as a whole to recognize attacks, anomalies, and potential vulnerabilities in a distributive manner that reflects the autonomic processes of biological systems. Each component of the network recognizes activity in its local environment and then contributes to the overall situational awareness of the entire system. The approach utilizes agent-based swarm intelligence to adaptively identify potential data sources on each node and on adjacent nodes throughout the network. The swarm agents then self-organize into modular neural networks that utilize a reinforcement learning algorithm to identify relevant behavior patterns in the data without supervision. Once the modular neural networks have established interconnectivity both locally and with neighboring nodes the analysis of events within the network can be conducted collectively in real-time. The approach has been shown to be extremely effective in identifying distributed network attacks.

  18. Signatures of local adaptation in lowland and highland teosintes from whole-genome sequencing of pooled samples.

    PubMed

    Fustier, M-A; Brandenburg, J-T; Boitard, S; Lapeyronnie, J; Eguiarte, L E; Vigouroux, Y; Manicacci, D; Tenaillon, M I

    2017-03-03

    Spatially varying selection triggers differential adaptation of local populations. Here, we mined the determinants of local adaptation at the genome-wide scale in the two closest maize wild relatives, the teosintes Zea mays ssp. parviglumis and ssp. mexicana. We sequenced 120 individuals from six populations: two lowland, two intermediate and two highland populations sampled along two altitudinal gradients. We detected 8 479 581 single nucleotide polymorphisms (SNPs) covered in the six populations, with an average sequencing depth per site per population ranging from 17.0× to 32.2×. Population diversity varied from 0.10 to 0.15, and linkage disequilibrium decayed very rapidly. We combined two differentiation-based methods with correlation of allele frequencies to environmental variables to detect outlier SNPs. Outlier SNPs displayed significant clustering. From clusters, we identified 47 candidate regions. We further modified a haplotype-based method to incorporate genotype uncertainties in haplotype calling, and applied it to the candidate regions. We retrieved evidence for selection at the haplotype level in 53% of our candidate regions, and in 70% of the cases the same haplotype was selected in the two lowland or the two highland populations. We recovered a candidate region located within a previously characterized inversion on chromosome 1. We found evidence of a soft sweep at a locus involved in leaf macrohair variation. Finally, our results revealed frequent colocalization between our candidate regions and loci involved in the variation of traits associated with plant-soil interactions such as root morphology, aluminium and low-phosphorus tolerance. Soil therefore appears to be a major driver of local adaptation in teosintes.

  19. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
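
    A compact sketch of the main loop follows: K linear sub-models updated by RLS, the M best on a sliding window selected, and their predictions combined with sum-to-one weights obtained in closed form from the KKT system of the constrained least-squares problem. The toy data, forgetting factor, and window length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

T, K, M, W = 300, 8, 3, 25           # samples, sub-models, selected, window
lam = 0.98                           # RLS forgetting factor
x = rng.normal(size=(T, 2))
y = np.sin(x[:, 0]) + 0.5 * x[:, 1] + 0.05 * rng.normal(size=T)

theta = 0.1 * rng.normal(size=(K, 2))        # K linear sub-models
Pinv = np.stack([1e3 * np.eye(2)] * K)       # RLS inverse-covariance states
preds = np.zeros((T, K))

for t in range(T):
    preds[t] = theta @ x[t]
    if t < W:
        best = np.arange(M)
    else:
        # Select the M best sub-models on the most recent data window
        err = ((preds[t - W:t] - y[t - W:t, None]) ** 2).mean(0)
        best = np.argsort(err)[:M]
        # Closed-form sum-to-one combination: KKT system of the constrained LS
        P = preds[t - W:t][:, best]
        G = np.block([[2 * P.T @ P + 1e-8 * np.eye(M), np.ones((M, 1))],
                      [np.ones((1, M)), np.zeros((1, 1))]])
        w = np.linalg.solve(G, np.r_[2 * P.T @ y[t - W:t], 1.0])[:M]
        y_hat = preds[t, best] @ w           # multi-model output
    # RLS update of the selected sub-models only; the rest stay frozen
    for k in best:
        g = Pinv[k] @ x[t] / (lam + x[t] @ Pinv[k] @ x[t])
        theta[k] += g * (y[t] - theta[k] @ x[t])
        Pinv[k] = (Pinv[k] - np.outer(g, x[t] @ Pinv[k])) / lam

print("final combination weights:", w.round(3), " sum:", round(float(w.sum()), 3))
```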

  20. Wind profiling for a coherent wind Doppler lidar by an auto-adaptive background subtraction approach.

    PubMed

    Wu, Yanwei; Guo, Pan; Chen, Siying; Chen, He; Zhang, Yinchao

    2017-04-01

    Auto-adaptive background subtraction (AABS) is proposed as a denoising method for data processing in coherent Doppler lidar (CDL). The method is designed specifically for the low-signal-to-noise-ratio regime, in which drifting of the power spectral density of CDL data occurs. Unlike the periodogram maximum (PM) and adaptive iteratively reweighted penalized least squares (airPLS) methods, the proposed method presents reliable peaks and is thus advantageous in identifying peak locations. According to the analysis of simulated and actually measured data, the proposed method outperforms the airPLS method and the PM algorithm in the furthest detectable range. The proposed method improves the detection range by up to approximately 16.7% and 40% compared with the airPLS and PM methods, respectively. It also yields smaller mean wind velocity errors and standard errors than the airPLS and PM methods. The AABS approach improves the quality of Doppler shift estimates and can be applied to obtain full wind profiles with CDL.

  1. Guiding heat in laser ablation of metals on ultrafast timescales: an adaptive modeling approach on aluminum

    NASA Astrophysics Data System (ADS)

    Colombier, J. P.; Combis, P.; Audouard, E.; Stoian, R.

    2012-01-01

    Using an optimal control hydrodynamic modeling approach and irradiation adaptive time-design, we indicate excitation channels maximizing heat load in laser ablated aluminum at low energy costs. The primary relaxation paths leading to an emerging plasma are particularly affected. With impulsive pulses on ps pedestals, thermodynamic trajectories are preferentially guided in ionized domains where variations in ionization degree occur. This impinges on the gas-transformation mechanisms and triggers a positive bremsstrahlung absorption feedback. The highest temperatures are thus obtained in the expanding ionized matter after a final impulsive excitation, as the electronic energy relaxes recombinatively. The drive relies on transitions to weakly coupled front plasmas at the critical optical density, favoring energy confinement with low mechanical work. Alternatively, robust collisional heating occurs in denser regions above the critical point. This impacts the nature, the excitation degree and the energy content of the ablated matter. Adaptive modeling can therefore provide optimal strategies with information on physical variables not readily accessible and, as experimentally confirmed, databases for pulse shapes with interest in remote spectroscopy, laser-induced matter transfer, laser material processing and development of secondary sources.

  2. Formation tracker design of multiple mobile robots with wheel perturbations: adaptive output-feedback approach

    NASA Astrophysics Data System (ADS)

    Yoo, Sung Jin

    2016-11-01

    This paper presents a theoretical design approach for output-feedback formation tracking of multiple mobile robots under wheel perturbations. It is assumed that these perturbations are unknown and the linear and angular velocities of the robots are unmeasurable. First, adaptive state observers for estimating unmeasurable velocities of the robots are developed under the robots' kinematics and dynamics including wheel perturbation effects. Then, we derive a virtual-structure-based formation tracker scheme according to the observer dynamic surface design procedure. The main difficulty of the output-feedback control design is to manage the coupling problems between unmeasurable velocities and unknown wheel perturbation effects. These problems are avoided by using the adaptive technique and the function approximation property based on fuzzy logic systems. From the Lyapunov stability analysis, it is shown that point tracking errors of each robot and synchronisation errors for the desired formation converge to an adjustable neighbourhood of the origin, while all signals in the controlled closed-loop system are semiglobally uniformly ultimately bounded.

  3. Force-induced bone growth and adaptation: A system theoretical approach to understanding bone mechanotransduction

    NASA Astrophysics Data System (ADS)

    Maldonado, Solvey; Findeisen, Rolf

    2010-06-01

    The modeling, analysis, and design of treatment therapies for bone disorders based on the paradigm of force-induced bone growth and adaptation is a challenging task. Mathematical models provide, in comparison to clinical, medical and biological approaches, a structured alternative framework to understand the concurrent effects of the multiple factors involved in bone remodeling. To date, there are few mathematical models describing the complex interactions that arise. Moreover, the resulting models are complex and difficult to analyze, due to the strong nonlinearities appearing in the equations, the wide range of variability of the states, and the uncertainties in parameters. In this work, we focus on analyzing the effects of changes in model structure and parameter/input variations on the overall steady-state behavior using system theoretical methods. Based on a briefly reviewed existing model that describes force-induced bone adaptation, the main objective of this work is to analyze the stationary behavior and to identify plausible treatment targets for remodeling-related bone disorders. Identifying plausible targets can help in the development of optimal treatments combining both physical activity and drug medication. Such treatments help to improve/maintain/restore bone strength, which deteriorates under bone disorder conditions, such as estrogen deficiency.

  4. A single weighting approach to analyze respondent-driven sampling data

    PubMed Central

    Selvaraj, Vadivoo; Boopathi, Kangusamy; Paranjape, Ramesh; Mehendale, Sanjay

    2016-01-01

    Background and objectives: Respondent-driven sampling (RDS) is widely used to sample hidden populations, and RDS data are analyzed using the specially designed RDS analysis tool (RDSAT). RDSAT estimates parameters such as proportions. Analysis with RDSAT requires a separate weight assignment for each individual variable, even within a single individual; hence, regression analysis is a problem. RDS Analyst is another advanced software package that can perform three methods of estimation, namely the successive sampling method, RDS-I and RDS-II. All of these are in the process of refinement and need special skills to perform the analysis. We propose a simple approach to analyze RDS data for comprehensive statistical analysis using any standard statistical software. Methods: We proposed an approach (RDS-MOD: respondent-driven sampling, modified) that determines a single normalized weight (similar to the RDS-II weight of Volz-Heckathorn) for each participant. This approach converts the RDS data into clustered data to account for the pre-existing relationship between recruits and their recruiters. Further, Taylor's linearization method was proposed for calculating confidence intervals for the estimates. A generalized estimating equation (GEE) approach was used for regression analysis, and parameter estimates from different software packages were compared. Results: The parameter estimates, such as proportions, obtained by our approach matched those from currently available special-purpose software for RDS data. Interpretation & conclusions: The proposed weight was comparable to the different weights generated by RDSAT, and the estimates were comparable to those of the RDS-II approach. RDS-MOD provided an efficient and easy-to-use method of estimation and regression accounting for the dependence between recruiters and recruits. PMID:28139544
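
    The single weight itself is simple to compute. The sketch below, with made-up degrees and a made-up binary outcome, forms an inverse-degree weight normalized to the sample size (in the spirit of RDS-II) and uses it for a weighted prevalence estimate; in practice the same weight, plus recruiter identifiers as cluster variables, would feed the GEE regression in any standard package.

```python
import numpy as np

# Illustrative RDS recruits: self-reported network degree and a binary outcome
degree = np.array([10, 4, 8, 20, 5, 12, 6, 3])
outcome = np.array([0, 1, 0, 0, 1, 0, 1, 1])

# Single RDS-II-style weight: inverse degree, normalized to sum to the sample
# size, so the same weight serves every variable and any weighted regression
w = (1 / degree) / (1 / degree).sum() * degree.size

p_hat = np.average(outcome, weights=w)     # weighted prevalence estimate
print("weights:", w.round(2), " estimate:", round(float(p_hat), 3))
```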

  5. Building the framework for climate change adaptation in the urban areas using participatory approach: the Czech Republic experience

    NASA Astrophysics Data System (ADS)

    Emmer, Adam; Hubatová, Marie; Lupač, Miroslav; Pondělíček, Michael; Šafařík, Miroslav; Šilhánková, Vladimíra; Vačkář, David

    2016-04-01

    The Czech Republic has experienced numerous extreme hydrometeorological/climatological events such as floods (significant ones in 1997, 2002, 2010, 2013), droughts (2013, 2015), heat waves (2015) and windstorms (2007) during past decades. These events are generally attributed to ongoing climate change and have caused loss of lives and significant material damage (up to several % of GDP in some years), especially in urban areas. To initiate the adaptation process in urban areas, the main objective was to prepare a framework for creating climate change adaptation strategies for individual cities, reflecting the physical-geographical and socioeconomic conditions of the Czech Republic. Three pilot cities (Hradec Králové, Žďár nad Sázavou, Dobruška) were used to optimize the entire procedure. Two sets of participatory seminars were organised in order to involve all key stakeholders (the city council, department of the environment, department of crisis management, hydrometeorological institute, local experts, ...) in the process of creating the adaptation strategy from its early stages. Lessons learned for the framework related especially to its applicability at the local level, which is largely a matter of the understandability of the concept. Finally, this illustrative and widely applicable framework (the so-called 'road map to adaptation strategy') includes five steps: (i) analysis of existing strategies and plans at national, regional and local levels; (ii) analysis of climate-change-related hazards and key vulnerabilities; (iii) identification of adaptation needs, evaluation of existing adaptation capacity and formulation of future adaptation priorities; (iv) identification of limits and barriers to adaptation (economic, environmental, ...); and (v) selection of specific types of adaptation measures reflecting the identified adaptation needs and formulated adaptation priorities. Keywords: climate change adaptation (CCA); urban areas; participatory approach

  6. A normative inference approach for optimal sample sizes in decisions from experience.

    PubMed

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    "Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.

  7. Optimal control based on adaptive model reduction approach to control transfer phenomena

    NASA Astrophysics Data System (ADS)

    Oulghelou, Mourad; Allery, Cyrille

    2017-01-01

    The purpose of optimal control is to act on a set of parameters characterizing a dynamical system in order to achieve a target dynamics. To reduce the CPU time and memory storage needed to perform control on evolution systems, it is possible to use reduced order models (ROMs). The most widely used is the Proper Orthogonal Decomposition (POD). However, the bases constructed in this way are sensitive to the configuration of the dynamical system. Consequently, the need for full simulations to build a basis for each configuration is time-consuming and keeps that approach relatively expensive. In this paper, to overcome this difficulty, we suggest using an adequate basis interpolation method. It consists of computing the bases associated with a distribution of control parameters; these bases are afterwards called upon in the control algorithm to build a reduced basis adapted to a given control parameter. The interpolation method builds on the calculus of geodesics on the Grassmann manifold.
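
    A minimal two-basis version of such an interpolation, assuming the standard Grassmann log/exp-map construction (the paper's exact scheme over a whole parameter distribution is not spelled out in the abstract): map the second basis to the tangent space at the first, scale, and map back. The sizes and bases here are random placeholders for POD bases at two parameter values.

```python
import numpy as np

rng = np.random.default_rng(7)

def orth(A):
    return np.linalg.qr(A)[0]

n, r = 200, 5
Y0 = orth(rng.normal(size=(n, r)))   # POD basis at control parameter mu0
Y1 = orth(rng.normal(size=(n, r)))   # POD basis at control parameter mu1

# Log map: tangent vector at Y0 pointing toward Y1 on the Grassmann manifold
B = (np.eye(n) - Y0 @ Y0.T) @ Y1
M = np.linalg.solve((Y0.T @ Y1).T, B.T).T      # B @ inv(Y0.T @ Y1)
U, S, Vt = np.linalg.svd(M, full_matrices=False)
theta = np.arctan(S)

def interpolate(t):
    """Exp map: orthonormal basis at fraction t along the geodesic Y0 -> Y1."""
    return Y0 @ Vt.T @ np.diag(np.cos(t * theta)) + U @ np.diag(np.sin(t * theta))

Yt = interpolate(0.5)
print(np.allclose(Yt.T @ Yt, np.eye(r)))       # interpolated basis stays orthonormal
```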

  8. An adaptive learning approach for 3-D surface reconstruction from point clouds.

    PubMed

    Junior, Agostinho de Medeiros Brito; Neto, Adrião Duarte Dória; de Melo, Jorge Dantas; Goncalves, Luiz Marcos Garcia

    2008-06-01

    In this paper, we propose a multiresolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3-D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map (SOM). Basically, a self-adaptive scheme is used to iteratively move the vertices of an initial simple mesh in the direction of the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface, in a multiresolution, iterative scheme. Reconstruction was tested with several point sets, including different shapes and sizes. Results show generated meshes that are very close to the final object shapes. We include measures of performance and discuss robustness.
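
    The vertex-motion step is essentially the SOM update rule applied to mesh vertices. The 2-D toy below adapts a closed polyline (ring topology) to a noisy circle of points; the refinement operators, the 3-D mesh structure, and all constants are omitted or invented for brevity.

```python
import numpy as np

rng = np.random.default_rng(8)

# Target point cloud: a noisy circle standing in for an object boundary
ang = rng.uniform(0, 2 * np.pi, 2000)
cloud = np.c_[np.cos(ang), np.sin(ang)] + 0.02 * rng.normal(size=(2000, 2))

# Initial simple mesh: a small ring of vertices with ring topology
n_v = 16
phi = np.linspace(0, 2 * np.pi, n_v, endpoint=False)
verts = 0.3 * np.c_[np.cos(phi), np.sin(phi)]

epochs = 30
for epoch in range(epochs):
    lr = 0.2 * (1 - epoch / epochs)                      # decaying learning rate
    for p in cloud[rng.permutation(2000)[:500]]:
        k = int(np.argmin(((verts - p) ** 2).sum(1)))    # best-matching vertex
        for dk, h in ((0, 1.0), (-1, 0.4), (1, 0.4)):    # neighbourhood kernel
            verts[(k + dk) % n_v] += lr * h * (p - verts[(k + dk) % n_v])

print(np.linalg.norm(verts, axis=1).round(2))            # radii approach 1.0
```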

  9. Approach for Structurally Clearing an Adaptive Compliant Trailing Edge Flap for Flight

    NASA Technical Reports Server (NTRS)

    Miller, Eric J.; Lokos, William A.; Cruz, Josue; Crampton, Glen; Stephens, Craig A.; Kota, Sridhar; Ervin, Gregory; Flick, Pete

    2015-01-01

    The Adaptive Compliant Trailing Edge (ACTE) flap was flown on the National Aeronautics and Space Administration (NASA) Gulfstream GIII testbed at the NASA Armstrong Flight Research Center. This smoothly curving flap replaced the existing Fowler flaps, creating a seamless control surface. This compliant structure, developed by FlexSys Inc. in partnership with the Air Force Research Laboratory, supported NASA objectives for airframe structural noise reduction, aerodynamic efficiency, and wing weight reduction through gust load alleviation. A thorough structural airworthiness approach was developed to move this project safely to flight. A combination of industry and NASA standard practices requires various structural analyses, ground testing, and health monitoring techniques for showing an airworthy structure. This paper provides an overview of the compliant structure design, the structural ground testing leading up to flight, and the flight envelope expansion and monitoring strategy. Flight data will be presented, and lessons learned along the way will be highlighted.

  10. A disturbance observer-based adaptive control approach for flexure beam nano manipulators.

    PubMed

    Zhang, Yangming; Yan, Peng; Zhang, Zhen

    2016-01-01

    This paper presents a systematic modeling and control methodology for a two-dimensional flexure beam-based servo stage supporting micro/nano manipulations. Compared with conventional mechatronic systems, such systems face major control challenges including cross-axis coupling, dynamical uncertainties, as well as input saturations, which may have adverse effects on system performance unless effectively eliminated. A novel disturbance observer-based adaptive backstepping-like control approach is developed for high precision servo manipulation purposes, which effectively accommodates model uncertainties and coupling dynamics. An auxiliary system is also introduced, on top of the proposed control scheme, to compensate for the input saturations. The proposed control architecture is deployed on a custom-designed nano manipulating system featuring a flexure beam structure and voice coil actuators (VCA). Real-time experiments on various manipulating tasks, such as trajectory/contour tracking, demonstrate precision errors of less than 1%.

  11. Speed-up of Markov Chain Monte Carlo Simulation Using Self-Adaptive Differential Evolution with Subspace Sampling

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2007-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used in fields ranging from physics and chemistry to finance, economics and statistical inference for estimating the average properties of complex systems. The convergence rate of MCMC schemes is often observed, however, to be disturbingly low, limiting their practical use in many applications. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves. Here we show that significant improvements to the efficiency of MCMC algorithms can be made by using a self-adaptive Differential Evolution search strategy within a population-based evolutionary framework. This scheme differs fundamentally from existing MCMC algorithms in that trial jumps are simply a fixed multiple of the difference of randomly chosen members of the population, using various genetic operators that are adaptively updated during the search. In addition, the algorithm includes randomized subspace sampling to further improve convergence and acceptance rate. Detailed balance and ergodicity of the algorithm are proved, and hydrologic examples show that the proposed method significantly enhances the efficiency and applicability of MCMC simulations to complex, multi-modal search problems.
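    The jump rule is easy to state concretely. Below is a minimal differential-evolution Markov chain on a toy bimodal target (the self-adaptive genetic operators and randomized subspace sampling of the full algorithm are omitted; all settings are illustrative):

      import numpy as np

      rng = np.random.default_rng(2)

      def log_post(x):   # toy bimodal target, stand-in for a real posterior
          return np.logaddexp(-0.5 * np.sum((x - 2.0) ** 2),
                              -0.5 * np.sum((x + 2.0) ** 2))

      d, n_chains, n_steps = 2, 10, 5000
      X = rng.normal(size=(n_chains, d))            # population of chains
      logp = np.array([log_post(x) for x in X])
      gamma = 2.38 / np.sqrt(2 * d)                 # standard DE jump scale

      samples = []
      for _ in range(n_steps):
          for i in range(n_chains):
              others = [j for j in range(n_chains) if j != i]
              a, b = rng.choice(others, size=2, replace=False)
              prop = X[i] + gamma * (X[a] - X[b]) + 1e-6 * rng.normal(size=d)
              lp = log_post(prop)
              if np.log(rng.uniform()) < lp - logp[i]:  # Metropolis acceptance
                  X[i], logp[i] = prop, lp
          samples.append(X.copy())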

  12. Particle System Based Adaptive Sampling on Spherical Parameter Space to Improve the MDL Method for Construction of Statistical Shape Models

    PubMed Central

    Zhou, Xiangrong; Hirano, Yasushi; Tachibana, Rie; Hara, Takeshi; Kido, Shoji; Fujita, Hiroshi

    2013-01-01

    Minimum description length (MDL) based group-wise registration was a state-of-the-art method to determine the corresponding points of 3D shapes for the construction of statistical shape models (SSMs). However, it suffered from the problem that the determined corresponding points did not spread uniformly on the original shapes, since corresponding points were obtained by uniformly sampling the aligned shape on the parameterized space of the unit sphere. We proposed a particle-system based method to obtain adaptive sampling positions on the unit sphere to resolve this problem. Here, a set of particles was placed on the unit sphere to construct a particle system whose energy was related to the distortions of the parameterized meshes. By minimizing this energy, each particle was moved on the unit sphere. When the system became steady, particles were treated as vertices to build a spherical mesh, which was then relaxed to slightly adjust vertices to obtain optimal sampling positions. We used 47 cases of (left and right) lungs and 50 cases of livers, (left and right) kidneys, and spleens for evaluations. Experiments showed that the proposed method was able to resolve the problem of the original MDL method, and the proposed method performed better in the generalization and specificity tests. PMID:23861721

  13. Technical note: An improved approach to determining background aerosol concentrations with PILS sampling on aircraft

    NASA Astrophysics Data System (ADS)

    Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.

    2016-07-01

    Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight, with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate not only that multiple background filter samples are essential to attain a representative background, but that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an over-estimation of the background - and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m-3 for sub-micron aerosol nitrate (NO3-), nitrite (NO2-), ammonium (NH4+), sulfate (SO42-), potassium (K+) and calcium (Ca2+), respectively. The difference in backgrounds calculated from assuming a Gaussian distribution versus a log-normal distribution was most extreme for NH4+, resulting in a background that was 1.58× that determined from fitting a log-normal distribution.
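    The key computation, taking the mode of a log-normal fit rather than the arithmetic mean of the blanks, can be sketched as follows (the blank data here are simulated stand-ins for the 137 filter-blank signals):

      import numpy as np
      from scipy import stats

      blanks = np.random.default_rng(3).lognormal(mean=-1.0, sigma=0.6, size=137)

      shape, loc, scale = stats.lognorm.fit(blanks, floc=0)  # location fixed at 0
      mu, sigma = np.log(scale), shape

      lognormal_bg = np.exp(mu - sigma ** 2)   # mode (peak) of the fitted PDF
      gaussian_bg = blanks.mean()              # conventional Gaussian assumption
      print(f"log-normal mode: {lognormal_bg:.3f}  Gaussian mean: {gaussian_bg:.3f}")

    Because the mean of a log-normal always exceeds its mode, averaging the blanks systematically inflates the background, which is exactly the overestimation the abstract describes.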

  14. Approaches to adaptive digital control focusing on the second order modal descriptions of large, flexible spacecraft dynamics

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.

    1979-01-01

    The widespread modal analysis of flexible spacecraft and recognition of the poor a priori parameterization achievable for the modal descriptions of individual structures have prompted the consideration of adaptive modal control strategies for distributed parameter systems. The current major approaches to computationally efficient adaptive digital control useful in these endeavors are explained in an original, lucid manner using modal second order structure dynamics for algorithm explication. Difficulties in extending these lumped-parameter techniques to distributed-parameter system expansion control are cited.

  15. Adaptation of a speciation sampling cartridge for measuring ammonia flux from cattle feedlots using relaxed eddy accumulation

    NASA Astrophysics Data System (ADS)

    Baum, K. A.; Ham, J. M.

    Improved measurements of ammonia losses from cattle feedlots are needed to quantify the national NH3 emissions inventory and evaluate management techniques for reducing emissions. Speciation cartridges composed of glass honeycomb denuders and filter packs were adapted to measure gaseous NH3 and aerosol NH4+ fluxes using relaxed eddy accumulation (REA). Laboratory testing showed that a cartridge equipped with four honeycomb denuders had a total capture capacity of 1800 μg of NH3. In the field, a pair of cartridges was deployed adjacent to a sonic anemometer and an open-path gas analyzer on a mobile tower. High-speed valves were attached to the inlets of the cartridges and controlled by a datalogger so that up- and down-moving eddies were independently sampled based on the direction of the vertical wind speed and a user-defined deadband. Air flowed continuously through the cartridges even when not sampling by means of a recirculating air handling system. Eddy covariance measurements of CO2 and H2O, as measured by the sonic anemometer and open-path gas analyzer, were used to determine the relaxation factor needed to compute REA-based fluxes. The REA system was field tested at the Beef Research Unit at Kansas State University in the summer and fall of 2007. Daytime NH3 emissions ranged between 68 and 127 μg m-2 s-1; fluxes tended to follow a diurnal pattern correlated with latent heat flux. Daily fluxes of NH3 were between 2.5 and 4.7 g m-2 d-1 and on average represented 38% of fed nitrogen. Aerosol NH4+ fluxes were negligible compared with NH3 emissions. An REA system designed around the high-capacity speciation cartridges can be used to measure NH3 fluxes from cattle feedlots and other strong sources. The system could be adapted to measure fluxes of other gases and aerosols.
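    For reference, the standard REA flux relation that such a system implements (a generic statement; the authors' treatment of the deadband and relaxation factor may differ in detail) is:

      \[
        F_{\mathrm{NH_3}} = \beta\,\sigma_w\,\bigl(\overline{C}_{\uparrow} - \overline{C}_{\downarrow}\bigr),
        \qquad
        \beta = \frac{\overline{w'c'}}{\sigma_w\,\bigl(\overline{c}_{\uparrow} - \overline{c}_{\downarrow}\bigr)},
      \]

    where σw is the standard deviation of the vertical wind speed, C↑ and C↓ are the mean NH3 concentrations collected in the up- and down-draft cartridges, and β is calibrated from a scalar c (here CO2 or H2O) whose eddy covariance flux is measured directly by the sonic anemometer and open-path gas analyzer.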

  16. The importance of training strategy adaptation: a learner-oriented approach for improving older adults' memory and transfer.

    PubMed

    Bottiroli, Sara; Cavallini, Elena; Dunlosky, John; Vecchi, Tomaso; Hertzog, Christopher

    2013-09-01

    We investigated the benefits of strategy-adaptation training for promoting transfer effects. This learner-oriented approach--which directly encourages the learner to generalize strategic behavior to new tasks--helps older adults appraise new tasks and adapt trained strategies to them. In Experiment 1, older adults in a strategy-adaptation training group used 2 strategies (imagery and sentence generation) while practicing 2 tasks (list and associative learning); they were then instructed on how to do a simple task analysis to help them adapt the trained strategies for 2 different unpracticed tasks (place learning and text learning) that were discussed during training. Two additional criterion tasks (name-face associative learning and grocery-list learning) were never mentioned during training. Two other groups were included: A strategy training group (who received strategy training and transfer instructions but not strategy-adaptation training) and a waiting-list control group. Both training procedures enhanced older adults' performance on the trained tasks and those tasks that were discussed during training, but transfer was greatest after strategy-adaptation training. Experiment 2 found that strategy-adaptation training conducted via a manual that older adults used at home also promoted transfer. These findings demonstrate the importance of adopting a learner-oriented approach to promote transfer of strategy training.

  17. A chemodynamic approach for estimating losses of target organic chemicals from water during sample holding time

    USGS Publications Warehouse

    Capel, P.D.; Larson, S.J.

    1995-01-01

    Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimating the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.

  18. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
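    A minimal sketch of the normal-approximation building block for a two-group comparison with unequal variances and an allocation ratio (a generic formula, not the authors' exact modification of Hsieh et al.'s method; a Schouten-style small-sample correction would be added on top):

      from math import ceil
      from scipy.stats import norm

      def two_group_n(delta, sd1, sd2, ratio=1.0, alpha=0.05, power=0.80):
          """n1 for group 1; group 2 receives ratio * n1 subjects."""
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          n1 = (z_a + z_b) ** 2 * (sd1 ** 2 + sd2 ** 2 / ratio) / delta ** 2
          return ceil(n1), ceil(ratio * n1)

      print(two_group_n(delta=0.5, sd1=1.0, sd2=1.3, ratio=2.0))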

  19. Combining in silico and in cerebro approaches for virtual screening and pose prediction in SAMPL4.

    PubMed

    Voet, Arnout R D; Kumar, Ashutosh; Berenger, Francois; Zhang, Kam Y J

    2014-04-01

    The SAMPL challenges provide an ideal opportunity for unbiased evaluation and comparison of different approaches used in computational drug design. During the fourth round of this SAMPL challenge, we participated in the virtual screening and binding pose prediction on inhibitors targeting the HIV-1 integrase enzyme. For virtual screening, we used well known and widely used in silico methods combined with personal in cerebro insights and experience. Regular docking only performed slightly better than random selection, but the performance was significantly improved upon incorporation of additional filters based on pharmacophore queries and electrostatic similarities. The best performance was achieved when logical selection was added. For the pose prediction, we utilized a similar consensus approach that amalgamated the results of the Glide-XP docking with structural knowledge and rescoring. The pose prediction results revealed that docking displayed reasonable performance in predicting the binding poses. However, prediction performance can be improved utilizing scientific experience and rescoring approaches. In both the virtual screening and pose prediction challenges, the top performance was achieved by our approaches. Here we describe the methods and strategies used in our approaches and discuss the rationale of their performances.

  20. Combining in silico and in cerebro approaches for virtual screening and pose prediction in SAMPL4

    NASA Astrophysics Data System (ADS)

    Voet, Arnout R. D.; Kumar, Ashutosh; Berenger, Francois; Zhang, Kam Y. J.

    2014-04-01

    The SAMPL challenges provide an ideal opportunity for unbiased evaluation and comparison of different approaches used in computational drug design. During the fourth round of this SAMPL challenge, we participated in the virtual screening and binding pose prediction on inhibitors targeting the HIV-1 integrase enzyme. For virtual screening, we used well known and widely used in silico methods combined with personal in cerebro insights and experience. Regular docking only performed slightly better than random selection, but the performance was significantly improved upon incorporation of additional filters based on pharmacophore queries and electrostatic similarities. The best performance was achieved when logical selection was added. For the pose prediction, we utilized a similar consensus approach that amalgamated the results of the Glide-XP docking with structural knowledge and rescoring. The pose prediction results revealed that docking displayed reasonable performance in predicting the binding poses. However, prediction performance can be improved utilizing scientific experience and rescoring approaches. In both the virtual screening and pose prediction challenges, the top performance was achieved by our approaches. Here we describe the methods and strategies used in our approaches and discuss the rationale of their performances.

  1. Implementing an Accurate and Rapid Sparse Sampling Approach for Low-Dose Atomic Resolution STEM Imaging

    SciTech Connect

    Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.; Browning, Nigel D.

    2016-10-17

    Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is the sample stability rather than the microscope that often limits the practical resolution of STEM images. To extract physical information from images of beam-sensitive materials it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate to the sample during imaging. By characterizing the induction-limited rise time and hysteresis in the scan coils, we show that a sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for the sparse sampling is shown to be effectively decreased by a factor of 5 relative to conventional acquisition, permitting images of beam-sensitive materials to be obtained without changing the microscope operating parameters. The use of sparse line-hopping scans to acquire STEM images is demonstrated with atomic resolution aberration-corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.
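    The dose arithmetic is direct: visiting roughly 20% of the pixels yields the reported factor-of-5 reduction. A toy sketch of the sample-and-in-paint idea (a synthetic lattice image and plain linear interpolation stand in for real detector data and the paper's in-painting reconstruction):

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(4)
      x = np.arange(256)
      image = np.sin(0.5 * x)[:, None] * np.sin(0.5 * x)[None, :]  # fake lattice

      mask = rng.random(image.shape) < 0.2   # visit ~20% of pixels (~5x less dose)
      known = np.argwhere(mask)              # (row, col) of measured pixels
      grid_r, grid_c = np.mgrid[0:256, 0:256]
      recon = griddata(known, image[mask], (grid_r, grid_c), method="linear")
      # Pixels outside the convex hull of measurements come back as NaN and
      # would be filled by nearest-neighbor or dictionary-based in-painting.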

  2. Parallel genetic algorithm with population-based sampling approach to discrete optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Subramanian, Nithya

    Optimization under uncertainty accounts for design variables and external parameters or factors with probabilistic distributions instead of fixed deterministic values; it enables problem formulations that might maximize or minimize an expected value while satisfying constraints using probabilities. For discrete optimization under uncertainty, a Monte Carlo Sampling (MCS) approach enables high-accuracy estimation of expectations but it also results in high computational expense. The Genetic Algorithm (GA) with a Population-Based Sampling (PBS) technique enables optimization under uncertainty with discrete variables at a lower computational expense than using Monte Carlo sampling for every fitness evaluation. Population-Based Sampling uses fewer samples in the exploratory phase of the GA and a larger number of samples when `good designs' start emerging over the generations. This sampling technique therefore reduces the computational effort spent on `poor designs' found in the initial phase of the algorithm. Parallel computation evaluates the expected value of the objective and constraints in parallel to facilitate reduced wall-clock time. A customized stopping criterion is also developed for the GA with Population-Based Sampling. The stopping criterion requires the design with the minimum expected fitness value to have at least 99% constraint satisfaction and to have accumulated at least 10,000 samples. The average change in expected fitness values in the last ten consecutive generations is also monitored. The optimization of composite laminates using ply orientation angle as a discrete variable provides an example to demonstrate further developments of the GA with Population-Based Sampling for discrete optimization under uncertainty. The focus problem aims to reduce the expected weight of the composite laminate while treating the laminate's fiber volume fraction and externally applied loads as uncertain quantities following normal distributions. Construction of
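    The sampling schedule at the heart of PBS can be sketched compactly: spend few Monte Carlo samples per design early on and accumulate statistics on designs that keep surviving (the toy objective, schedule, and GA operators below are illustrative assumptions, not the dissertation's implementation):

      import numpy as np

      rng = np.random.default_rng(5)

      def noisy_fitness(x):
          """Expensive stochastic objective (toy stand-in)."""
          return float(np.sum(x ** 2)) + rng.normal(scale=0.5)

      def evaluate(pop, stats, n_samples):
          """Accumulate samples so surviving designs build accurate estimates."""
          for x in pop:
              s, c = stats.get(tuple(x), (0.0, 0))
              for _ in range(n_samples):
                  s, c = s + noisy_fitness(x), c + 1
              stats[tuple(x)] = (s, c)
          return np.array([stats[tuple(x)][0] / stats[tuple(x)][1] for x in pop])

      stats = {}
      pop = rng.integers(-5, 6, size=(20, 4)).astype(float)  # discrete designs
      for gen in range(30):
          n = 2 if gen < 10 else 10               # cheap early, thorough late
          fitness = evaluate(pop, stats, n)
          elite = pop[np.argsort(fitness)[:10]]   # minimize expected fitness
          children = elite + rng.integers(-1, 2, size=elite.shape)
          pop = np.vstack([elite, children])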

  3. Approaches to sampling and case selection in qualitative research: examples in the geography of health.

    PubMed

    Curtis, S; Gesler, W; Smith, G; Washburn, S

    2000-04-01

    This paper focuses on the question of sampling (or selection of cases) in qualitative research. Although the literature includes some very useful discussions of qualitative sampling strategies, the question of sampling often seems to receive less attention in methodological discussion than questions of how data are collected or analysed. Decisions about sampling are likely to be important in many qualitative studies (although it may not be an issue in some research). There are varying accounts of the principles applicable to sampling or case selection. Those who espouse 'theoretical sampling', based on a 'grounded theory' approach, are in some ways opposed to those who promote forms of 'purposive sampling' suitable for research informed by an existing body of social theory. Diversity also results from the many different methods for drawing purposive samples which are applicable to qualitative research. We explore the value of a framework suggested by Miles and Huberman [Miles, M., Huberman, A., 1994. Qualitative Data Analysis. Sage, London.] to evaluate the sampling strategies employed in three examples of research by the authors. Our examples comprise three studies which respectively involve selection of: 'healing places'; rural places which incorporated national anti-malarial policies; young male interviewees, identified as either chronically ill or disabled. The examples are used to show how in these three studies the (sometimes conflicting) requirements of the different criteria were resolved, as well as the potential and constraints placed on the research by the selection decisions which were made. We also consider how far the criteria Miles and Huberman suggest seem helpful for planning 'sample' selection in qualitative research.

  4. An adaptive level set approach for incompressible two-phase flows

    SciTech Connect

    Sussman, M.; Almgren, A.S.; Bell, J.B.

    1997-04-01

    In Sussman, Smereka and Osher, a numerical method using the level set approach was formulated for solving incompressible two-phase flow with surface tension. In the level set approach, the interface is represented as the zero level set of a smooth function; this has the effect of replacing the advection of density, which has steep gradients at the interface, with the advection of the level set function, which is smooth. In addition, the interface can merge or break up with no special treatment. The authors maintain the level set function as the signed distance from the interface in order to robustly compute flows with high density ratios and stiff surface tension effects. In this work, they couple the level set scheme to an adaptive projection method for the incompressible Navier-Stokes equations, in order to achieve higher resolution of the interface with a minimum of additional expense. They present two-dimensional axisymmetric and fully three-dimensional results of air bubble and water drop computations.
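    For orientation, the generic level-set equations consistent with this description (a sketch, not the paper's discretization) advect φ with the flow and periodically reinitialize it to a signed distance function:

      \[
        \phi_t + \mathbf{u}\cdot\nabla\phi = 0,
        \qquad
        \phi_\tau = \operatorname{sign}(\phi_0)\,\bigl(1 - |\nabla\phi|\bigr),
      \]

    where the second (pseudo-time) equation restores |∇φ| = 1 without moving the zero level set; maintaining the signed distance property is what keeps high density ratios and stiff surface tension terms well behaved.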

  5. An Adaptive Community-Based Participatory Approach to Formative Assessment With High Schools for Obesity Intervention*

    PubMed Central

    Kong, Alberta S.; Farnsworth, Seth; Canaca, Jose A.; Harris, Amanda; Palley, Gabriel; Sussman, Andrew L.

    2013-01-01

    BACKGROUND In the emerging debate around obesity intervention in schools, recent calls have been made for researchers to include local community opinions in the design of interventions. Community-based participatory research (CBPR) is an effective approach for forming community partnerships and integrating local opinions. We used CBPR principles to conduct formative research in identifying acceptable and potentially sustainable obesity intervention strategies in 8 New Mexico school communities. METHODS We collected formative data from 8 high schools on areas of community interest for school health improvement through collaboration with local School Health Advisory Councils (SHACs) and interviews with students and parents. A survey based on formative results was created to assess acceptability of specific intervention strategies and was provided to SHACs. Quantitative data were analyzed using descriptive statistics while qualitative data were evaluated using an iterative analytic process for thematic identification. RESULTS Key themes identified through the formative process included lack of healthy food options, infrequent curricular/extracurricular physical activity opportunities, and inadequate exposure to health/nutritional information. Key strategies identified as most acceptable by SHAC members included healthier food options and preparation, a healthy foods marketing campaign, yearly taste tests, an after-school noncompetitive physical activity program, and community linkages to physical activity opportunities. CONCLUSION An adaptive CBPR approach for formative assessment can be used to identify obesity intervention strategies that address community school health concerns. Eight high school SHACs identified 6 school-based strategies to address parental and student concerns related to obesity. PMID:22320339

  6. An adaptive remaining energy prediction approach for lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Zhang, Chenbin; Chen, Zonghai

    2016-02-01

    With the growing number of electric vehicle (EV) applications, the function of the battery management system (BMS) becomes more sophisticated. The accuracy of remaining energy estimation is critical for energy optimization and management in EVs. Therefore the state-of-energy (SoE) is defined to indicate the remaining available energy of the batteries. Considering that there are inevitable accumulated errors caused by the current and voltage integral method, an adaptive SoE estimator is first established in this paper. In order to establish a reasonable battery equivalent model, based on the experimental data of the LiFePO4 battery, a data-driven model is established to describe the relationship between the open-circuit voltage (OCV) and the SoE. Furthermore, the forgetting-factor recursive least-squares (RLS) method is used for parameter identification to obtain accurate model parameters. Finally, in order to analyze the robustness and the accuracy of the proposed approach, different types of dynamic current profiles are applied to the lithium-ion batteries and the performances are calculated and compared. The results indicate that the proposed approach gives robust and accurate SoE estimation results under dynamic working conditions.
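    A minimal sketch of forgetting-factor recursive least squares, the identification step named above (the regressor layout is illustrative; the authors' battery model determines the actual entries of phi):

      import numpy as np

      class ForgettingRLS:
          def __init__(self, n_params, lam=0.98, p0=1e3):
              self.theta = np.zeros(n_params)   # parameter estimate
              self.P = p0 * np.eye(n_params)    # covariance-like matrix
              self.lam = lam                    # forgetting factor, 0 << lam < 1

          def update(self, phi, y):
              """One step: phi is the regressor vector, y the measured output."""
              Pphi = self.P @ phi
              k = Pphi / (self.lam + phi @ Pphi)        # gain vector
              self.theta = self.theta + k * (y - phi @ self.theta)
              self.P = (self.P - np.outer(k, Pphi)) / self.lam
              return self.theta

    For a first-order equivalent-circuit model, phi might collect the present and past current and the past voltage residual, with y the measured terminal voltage minus the OCV(SoE) prediction; a smaller lam forgets old data faster and therefore tracks parameter drift more aggressively.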

  7. Steady-State Fluorescence of Highly Absorbing Samples in Transmission Geometry: A Simplified Quantitative Approach Considering Reabsorption Events.

    PubMed

    Krimer, Nicolás I; Rodrigues, Darío; Rodríguez, Hernán B; Mirenda, Martín

    2017-01-03

    A simplified methodology to acquire steady-state emission spectra and quantum yields of highly absorbing samples is presented. The experimental setup consists of a commercial spectrofluorometer adapted to transmission geometry, allowing the detection of the emitted light at 180° with respect to the excitation beam. The procedure includes two different mathematical approaches to describe and reproduce the distortions caused by reabsorption on emission spectra and quantum yields. Toluene solutions of 9,10-diphenylanthracene (DPA), with concentrations ranging between 1.12 × 10(-5) and 1.30 × 10(-2) M, were used to validate the proposed methodology. This dye has a significant probability of reabsorption and re-emission in concentrated solutions without showing self-quenching or aggregation phenomena. The results indicate that the reabsorption corrections, applied to the molecular emission spectra and quantum yields of the samples, accurately reproduce the experimental data. We further discuss why the re-emitted radiation is not detected in the experiments, even at the highest DPA concentrations.

  8. Comprehensive multiphase NMR spectroscopy: Basic experimental approaches to differentiate phases in heterogeneous samples

    NASA Astrophysics Data System (ADS)

    Courtier-Murias, Denis; Farooq, Hashim; Masoom, Hussain; Botana, Adolfo; Soong, Ronald; Longstaffe, James G.; Simpson, Myrna J.; Maas, Werner E.; Fey, Michael; Andrew, Brian; Struppe, Jochem; Hutchins, Howard; Krishnamurthy, Sridevi; Kumar, Rajeev; Monette, Martine; Stronks, Henry J.; Hume, Alan; Simpson, André J.

    2012-04-01

    Heterogeneous samples, such as soils, sediments, plants, tissues, foods and organisms, often contain liquid-, gel- and solid-like phases, and it is the synergism between these phases that determines their environmental and biological properties. Studying each phase separately can perturb the sample, removing important structural information such as chemical interactions at the gel-solid interface, kinetics across boundaries and conformation in the natural state. In order to overcome these limitations a Comprehensive Multiphase-Nuclear Magnetic Resonance (CMP-NMR) probe has been developed, and is introduced here, that permits all bonds in all phases to be studied and differentiated in whole unaltered natural samples. The CMP-NMR probe is built with high power circuitry, Magic Angle Spinning (MAS), is fitted with a lock channel and pulse field gradients, and is fully susceptibility matched. Consequently, this novel NMR probe has to cover all HR-MAS aspects without compromising power handling, to permit the full range of solution-, gel- and solid-state experiments available today. Using this technology, both structures and interactions can be studied independently in each phase, as well as transfer/interactions between phases within a heterogeneous sample. This paper outlines some basic experimental approaches using a model heterogeneous multiphase sample containing liquid-, gel- and solid-like components in water, yielding separate 1H and 13C spectra for the different phases. In addition, 19F performance is also addressed. To illustrate the capability of 19F NMR, soil samples containing two different contaminants are used, demonstrating a preliminary, but real-world application of this technology. This novel NMR approach possesses great potential for the in situ study of natural samples in their native state.

  9. ADAPTIVE ANNEALED IMPORTANCE SAMPLING FOR MULTIMODAL POSTERIOR EXPLORATION AND MODEL SELECTION WITH APPLICATION TO EXTRASOLAR PLANET DETECTION

    SciTech Connect

    Liu, Bin

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
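    The ESS criterion has a standard closed form for self-normalized importance weights; a minimal sketch (generic, outside the paper's adaptive delete/merge/add loop):

      import numpy as np

      def effective_sample_size(log_target, log_proposal, draws):
          """ESS of importance weights for draws taken from the proposal."""
          logw = np.array([log_target(x) - log_proposal(x) for x in draws])
          logw -= logw.max()                 # stabilize before exponentiating
          w = np.exp(logw)
          return w.sum() ** 2 / np.sum(w ** 2)   # (sum w)^2 / sum w^2

    An ESS close to the number of draws signals that the mixture proposal resembles the posterior; a small ESS tells the adaptive loop to keep tailoring the mixture.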

  10. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.

  11. An Efficient Approach for Mars Sample Return Using Emerging Commercial Capabilities

    NASA Technical Reports Server (NTRS)

    Gonzales, Andrew A.; Stoker, Carol R.

    2016-01-01

    Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science. This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation of our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a desirable, simpler and lower cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed either with direct techniques or by using parametric mass estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: A SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth Orbit samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle; a fueled Earth Return Vehicle, support equipment, and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using Supersonic Retropropulsion. After collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the Earth Return

  12. An Efficient Approach for Mars Sample Return Using Emerging Commercial Capabilities.

    PubMed

    Gonzales, Andrew A; Stoker, Carol R

    2016-06-01

    Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science [1]. This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation of our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a desirable, simpler and lower cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed either with direct techniques or by using parametric mass estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: A SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth Orbit samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle; a fueled Earth Return Vehicle, support equipment, and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using Supersonic Retropropulsion. After collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the Earth Return

  13. An efficient approach for Mars Sample Return using emerging commercial capabilities

    NASA Astrophysics Data System (ADS)

    Gonzales, Andrew A.; Stoker, Carol R.

    2016-06-01

    Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science (Squyres, 2011 [1]). This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation of our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a desirable, simpler and lower cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed either with direct techniques or by using parametric mass estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: A SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth Orbit samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle; a fueled Earth Return Vehicle, support equipment, and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using Supersonic Retropropulsion. After collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the

  14. Comparing Stream DOC Fluxes from Sensor- and Sample-Based Approaches

    NASA Astrophysics Data System (ADS)

    Shanley, J. B.; Saraceno, J.; Aulenbach, B. T.; Mast, A.; Clow, D. W.; Hood, K.; Walker, J. F.; Murphy, S. F.; Torres-Sanchez, A.; Aiken, G.; McDowell, W. H.

    2015-12-01

    DOC transport by streamwater is a significant flux that does not consistently show up in ecosystem carbon budgets. In an effort to quantify stream DOC flux, we analyzed three to four years of high-frequency in situ fluorescing dissolved organic matter (FDOM) concentrations and turbidity measured by optical sensors at the five diverse forested and/or alpine headwater sites of the U.S. Geological Survey (USGS) Water, Energy, and Biogeochemical Budgets (WEBB) program. FDOM serves as a proxy for DOC. We also took discrete samples over a range of hydrologic conditions, using both manual weekly and automated event-based sampling. After compensating FDOM for temperature effects and turbidity interference (which was successful even at the high-turbidity Luquillo, PR site), we evaluated the DOC-FDOM relation based on discrete sample DOC analyses matched to corrected FDOM at the time of sampling. FDOM was a moderately robust predictor of DOC, with r2 from 0.60 to more than 0.95 among sites. We then formed continuous DOC time series by two independent approaches: (1) DOC predicted from FDOM; and (2) the composite method, based on modeled DOC from regression on stream discharge, season, air temperature, and time, forcing the model to observations and adjusting modeled concentrations between observations by linearly-interpolated model residuals. DOC flux from each approach was then computed directly as concentration times discharge. DOC fluxes based on the sensor approach were consistently greater than those from the sample-based approach. At Loch Vale, CO (2.5 years) and Panola Mountain, GA (1 year), the difference was 5-17%. At Sleepers River, VT (3 years), preliminary differences were greater than 20%. The difference is driven by the highest events, but we are investigating these results further. We will also present comparisons from Luquillo, PR, and Allequash Creek, WI. The higher sensor-based DOC fluxes could result from their accuracy during hysteresis, which is difficult to model.

  15. Image segmentation for uranium isotopic analysis by SIMS: Combined adaptive thresholding and marker controlled watershed approach

    SciTech Connect

    Willingham, David G.; Naes, Benjamin E.; Heasler, Patrick G.; Zimmer, Mindy M.; Barrett, Christopher A.; Addleman, Raymond S.

    2016-05-31

    A novel approach to particle identification and particle isotope ratio determination has been developed for nuclear safeguard applications. This particle search approach combines an adaptive thresholding algorithm and marker-controlled watershed segmentation (MCWS) transform, which improves the secondary ion mass spectrometry (SIMS) isotopic analysis of uranium containing particle populations for nuclear safeguards applications. The Niblack assisted MCWS approach (a.k.a. SEEKER) developed for this work has improved the identification of isotopically unique uranium particles under conditions that have historically presented significant challenges for SIMS image data processing techniques. Particles obtained from five NIST uranium certified reference materials (CRM U129A, U015, U150, U500 and U850) were successfully identified in regions of SIMS image data 1) where a high variability in image intensity existed, 2) where particles were touching or were in close proximity to one another and/or 3) where the magnitude of ion signal for a given region was count limited. Analysis of the isotopic distributions of uranium containing particles identified by SEEKER showed four distinct, accurately identified 235U enrichment distributions, corresponding to the NIST certified 235U/238U isotope ratios for CRM U129A/U015 (not statistically differentiated), U150, U500 and U850. Additionally, comparison of the minor uranium isotope (234U, 235U and 236U) atom percent values verified that, even in the absence of high precision isotope ratio measurements, SEEKER could be used to segment isotopically unique uranium particles from SIMS image data. Although demonstrated specifically for SIMS analysis of uranium containing particles for nuclear safeguards, SEEKER has application in addressing a broad set of image processing challenges.
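    The two-stage idea combines cleanly with standard image-processing libraries. A minimal scikit-image sketch (the window size, k, and marker heuristic are illustrative choices, not SEEKER's tuned parameters):

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import threshold_niblack
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      def segment_particles(image, window=25, k=-0.2):
          # Local Niblack threshold copes with high variability in intensity.
          mask = image > threshold_niblack(image, window_size=window, k=k)
          # Distance-transform peaks give markers that split touching particles.
          dist = ndi.distance_transform_edt(mask)
          peaks = peak_local_max(dist, min_distance=5, labels=mask.astype(int))
          markers = np.zeros(image.shape, dtype=int)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          # Marker-controlled watershed on the inverted distance map.
          return watershed(-dist, markers, mask=mask)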

  16. Quantitative proteomic approach to understand metabolic adaptation in non-small cell lung cancer.

    PubMed

    Martín-Bernabé, Alfonso; Cortés, Roldán; Lehmann, Sylvia G; Seve, Michel; Cascante, Marta; Bourgoin-Voillard, Sandrine

    2014-11-07

    KRAS mutations in non-small cell lung cancer (NSCLC) are a predictor of resistance to EGFR-targeted therapies. Because approaches to target RAS signaling have been unsuccessful, targeting lung cancer metabolism might help to develop a new strategy that could overcome drug resistance in such cancer. In this study, we applied a large-scale screening quantitative proteomic analysis to identify key enzymes involved in metabolic adaptations in lung cancer. We carried out the proteomic analysis of two KRAS-mutated NSCLC cell lines (A549 and NCI-H460) and a non-tumoral bronchial cell line (BEAS-2B) using an iTRAQ (isobaric tags for relative and absolute quantitation) approach combined with two-dimensional fractionation (OFFGEL/RP nanoLC) and MALDI-TOF/TOF mass spectrometry analysis. Protein targets identified by our iTRAQ approach were validated by Western blotting analysis. Among 1038 proteins identified and 834 proteins quantified, 49 and 82 proteins were respectively found differently expressed in A549 and NCI-H460 cells compared to the BEAS-2B non-tumoral cell line. Regarding the metabolic pathways, enzymes involved in glycolysis (GAPDH/PKM2/LDH-A/LDH-B) and the pentose phosphate pathway (PPP) (G6PD/TKT/6PGD) were up-regulated. The up-regulation of enzyme expression in the PPP is correlated with their enzyme activity and will be further investigated to confirm those enzymes as promising metabolic targets for the development of new therapeutic treatments or biomarker assays for NSCLC.

  17. Adapting Rational Unified Process (RUP) approach in designing a secure e-Tendering model

    NASA Astrophysics Data System (ADS)

    Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman

    2016-08-01

    e-Tendering is the electronic processing of tender documents via the internet, allowing tenderers to publish, communicate, access, receive and submit all tender-related information and documentation online. This study aims to design an e-Tendering system using the Rational Unified Process (RUP) approach. RUP provides a disciplined approach to assigning tasks and responsibilities within the software development process. RUP has four phases that can assist researchers in adjusting the requirements of various projects with different scopes, problems and sizes. RUP is characterized as a use case driven, architecture centered, iterative and incremental process model. However, the scope of this study focuses only on the Inception and Elaboration phases as steps to develop the model, and performs only three of the nine workflows (business modeling, requirements, and analysis and design). RUP has a strong focus on documents, and the activities in the inception and elaboration phases mainly concern the creation of diagrams and the writing of textual descriptions. The UML notation and the software program Star UML are used to support the design of e-Tendering. The e-Tendering design based on the RUP approach can contribute to e-Tendering developers and researchers in the e-Tendering domain. In addition, this study also shows that RUP is one of the best system development methodologies that can be used as a research methodology in the Software Engineering domain related to the secure design of any observed application. This methodology has been tested in various studies in certain domains, such as Simulation-based Decision Support, Security Requirement Engineering, Business Modeling and Secure System Requirement, and so forth. In conclusion, these studies showed that RUP is a good research methodology that can be adapted in any Software Engineering (SE) research domain that requires a few artifacts to be generated, such as use case modeling, misuse case modeling, activity

  18. Comparing catchment sediment fingerprinting procedures using an auto-evaluation approach with virtual sample mixtures.

    PubMed

    Palazón, Leticia; Latorre, Borja; Gaspar, Leticia; Blake, William H; Smith, Hugh G; Navas, Ana

    2015-11-01

    Information on sediment sources in river catchments is required for effective sediment control strategies, to understand sediment, nutrient and pollutant transport, and for developing soil erosion management plans. Sediment fingerprinting procedures are employed to quantify sediment source contributions and have become a widely used tool. As fingerprinting procedures are naturally variable and locally dependent, there are different applications of the procedure. Here, the auto-evaluation of different fingerprinting procedures using virtual sample mixtures is proposed to support the selection of the fingerprinting procedure with the best capacity for source discrimination and apportionment. Surface samples from four land uses from a Central Spanish Pyrenean catchment were used i) as sources to generate the virtual sample mixtures and ii) to characterise the sources for the fingerprinting procedures. The auto-evaluation approach involved comparing fingerprinting procedures based on four optimum composite fingerprints selected by three statistical tests, three source characterisations (mean, median and corrected mean) and two types of objective functions for the mixing model. A total of 24 fingerprinting procedures were assessed by this new approach; they were solved by Monte Carlo simulations and compared using the root mean squared error (RMSE) between known and assessed source ascriptions for the virtual sample mixtures. It was found that the source ascriptions with the highest accuracy were achieved using the corrected mean source characterisations for the composite fingerprints selected by the Kruskal-Wallis H-test and principal components analysis. Based on the RMSE results, high goodness of fit (GOF) values were not always indicative of accurate source apportionment results, and care should be taken when using GOF to assess mixing model performance. The proposed approach to test different fingerprinting procedures using virtual sample mixtures provides an
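    The auto-evaluation loop is straightforward to sketch: build mixtures with known proportions from the measured sources, un-mix them, and score each procedure by RMSE (synthetic tracer values and a brute-force Monte Carlo un-mixing stand in for the real data and mixing model):

      import numpy as np

      rng = np.random.default_rng(6)
      n_sources, n_tracers = 4, 8
      source_means = rng.normal(size=(n_sources, n_tracers))  # stand-in tracers

      def unmix(mixture, n_draws=20000):
          """Monte Carlo search over the simplex for best-fitting proportions."""
          props = rng.dirichlet(np.ones(n_sources), size=n_draws)
          errors = np.sum((props @ source_means - mixture) ** 2, axis=1)
          return props[np.argmin(errors)]

      true_props = rng.dirichlet(np.ones(n_sources), size=50)  # virtual mixtures
      estimates = np.array([unmix(p @ source_means) for p in true_props])
      rmse = np.sqrt(np.mean((estimates - true_props) ** 2))
      print(f"RMSE of source apportionment: {rmse:.3f}")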

  19. Salivary Samples for the Diagnosis of Pemphigus vulgaris Using the BIOCHIP Approach: a Pilot Study

    PubMed Central

    Russo, Irene; Saponeri, Andrea; Michelotto, Anna; Alaibac, Mauro

    2017-01-01

    Pemphigus vulgaris (PV) is a rare autoimmune intraepithelial blistering skin disease characterized by the presence of circulating autoantibodies against desmoglein 3 (DSG3) and desmoglein 1 (DSG1), resulting in loss of the normal epithelial cell-to-cell adhesion through a process called acantholysis. In recent years, a BIOCHIP-based indirect immunofluorescence technique for the determination of anti-DSG3 and anti-DSG1 autoantibodies has been described. Although the use of salivary anti-DSG3 and anti-DSG1 ELISA for the diagnosis of PV has already been reported, there are no studies concerning the use of saliva with the BIOCHIP approach. In the present pilot study, ELISA and BIOCHIP were performed using salivary and serum samples from the same patients to investigate whether the detection of anti-desmoglein autoantibodies in salivary samples by BIOCHIP could be used as a test for the diagnosis of PV. There was a strong correlation between ELISA and BIOCHIP results both for anti-DSG3 and anti-DSG1 serum autoantibodies. Autoantibodies to DSG3 were detected in 8 out of 8 salivary samples by ELISA and in 6 out of 8 salivary samples by the BIOCHIP approach. Autoantibodies to DSG1 were negative in all salivary samples using both ELISA and BIOCHIP. There were no positive results in the negative control group. In conclusion, the results of this pilot study indicate a lack of correlation between serum and salivary results using both ELISA and BIOCHIP, suggesting that saliva may not be the ideal substrate for the laboratory diagnosis of PV using these approaches. PMID:28064226

  20. Overcoming the matched-sample bottleneck: an orthogonal approach to integrate omic data.

    PubMed

    Nguyen, Tin; Diaz, Diana; Tagett, Rebecca; Draghici, Sorin

    2016-07-12

    MicroRNAs (miRNAs) are small non-coding RNA molecules whose primary function is to regulate the expression of gene products via hybridization to mRNA transcripts, resulting in suppression of translation or mRNA degradation. Although miRNAs have been implicated in complex diseases, including cancer, their impact on distinct biological pathways and phenotypes is largely unknown. Current integration approaches require sample-matched miRNA/mRNA datasets, resulting in limited applicability in practice. Since these approaches cannot integrate heterogeneous information available across independent experiments, they neither account for bias inherent in individual studies, nor do they benefit from increased sample size. Here we present a novel framework able to integrate miRNA and mRNA data (vertical data integration) available in independent studies (horizontal meta-analysis) allowing for a comprehensive analysis of the given phenotypes. To demonstrate the utility of our method, we conducted a meta-analysis of pancreatic and colorectal cancer, using 1,471 samples from 15 mRNA and 14 miRNA expression datasets. Our two-dimensional data integration approach greatly increases the power of statistical analysis and correctly identifies pathways known to be implicated in the phenotypes. The proposed framework is sufficiently general to integrate other types of data obtained from high-throughput assays.

  1. Adaptive Function in Preschoolers in Relation to Developmental Delay and Diagnosis of Autism Spectrum Disorders: Insights from a Clinical Sample

    ERIC Educational Resources Information Center

    Milne, Susan L.; McDonald, Jenny L.; Comino, Elizabeth J.

    2013-01-01

    This study aims to explore the relationship between developmental ability, autism and adaptive skills in preschoolers. Adaptive function was assessed in 152 preschoolers with autism, with and without developmental delay, and without autism, with and without developmental delay. Their overall adaptive function, measured by the general adaptive…

  2. Systematic approach to conformational sampling for assigning absolute configuration using vibrational circular dichroism.

    PubMed

    Sherer, Edward C; Lee, Claire H; Shpungin, Joseph; Cuff, James F; Da, Chenxiao; Ball, Richard; Bach, Richard; Crespo, Alejandro; Gong, Xiaoyi; Welch, Christopher J

    2014-01-23

    Systematic methods that speed up the assignment of absolute configuration using vibrational circular dichroism (VCD) and simplify its usage will advance this technique into a robust platform technology. Applying VCD to pharmaceutically relevant compounds has been handled in an ad hoc fashion, relying on fragment analysis and technical shortcuts to reduce the computational time required. We leverage a large computational infrastructure to provide adequate conformational exploration, which enables an accurate assignment of absolute configuration. We describe a systematic approach for rapid calculation of VCD/IR spectra and comparison with corresponding measured spectra, and apply this approach to assign the correct stereochemistry of nine test cases. We suggest moving away from the fragment approach when making VCD assignments. In addition to enabling faster and more reliable VCD assignments of absolute configuration, the ability to rapidly explore conformational space and sample conformations of complex molecules will have applicability in other areas of drug discovery.

  3. The model adaptive space shrinkage (MASS) approach: a new method for simultaneous variable selection and outlier detection based on model population analysis.

    PubMed

    Wen, Ming; Deng, Bai-Chuan; Cao, Dong-Sheng; Yun, Yong-Huan; Yang, Rui-Han; Lu, Hong-Mei; Liang, Yi-Zeng

    2016-10-07

    Variable selection and outlier detection are important processes in chemical modeling. They usually affect each other, and the order in which they are performed strongly affects the modeling results. Currently, many studies perform these processes separately and in different orders. In this study, we examined the interaction between outliers and variables and compared modeling procedures performed with different orders of variable selection and outlier detection. Because the order of outlier detection and variable selection can affect the interpretation of the model, it is difficult to decide which order is preferable when the predictabilities (prediction errors) of the different orders are relatively close. To address this problem, a simultaneous variable selection and outlier detection approach called Model Adaptive Space Shrinkage (MASS) was developed. The proposed approach is based on model population analysis (MPA). Through weighted binary matrix sampling (WBMS) from the model space, a large number of partial least squares (PLS) regression models were built, and the elite part of the models was selected to statistically reassign the weight of each variable and sample. The whole process was then repeated until the weights of the variables and samples converged. Finally, MASS adaptively found a high-performance model consisting of the optimized variable subset and sample subset. The combination of these two subsets can be considered the cleaned dataset used for chemical modeling. In the proposed approach, the problem of the order of variable selection and outlier detection is avoided. One near-infrared spectroscopy (NIR) dataset and one quantitative structure-activity relationship (QSAR) dataset were used to test this approach. The results demonstrated that MASS is a useful method for data cleaning before building a predictive model.
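
    A minimal sketch of the weighted binary sampling idea is given below, restricted to variable selection on synthetic data; the actual MASS algorithm also reweights samples and uses its own convergence criteria, so this is illustrative only.

```python
# Illustrative weighted-binary-sampling loop (variables only; synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)  # only 5 variables matter

w = np.full(X.shape[1], 0.5)                  # inclusion weight per variable
for _ in range(5):
    masks, scores = [], []
    for _ in range(100):                      # draw binary variable subsets
        m = rng.random(X.shape[1]) < w
        if m.sum() < 2:
            continue
        s = cross_val_score(PLSRegression(n_components=2), X[:, m], y,
                            cv=5, scoring="neg_mean_squared_error").mean()
        masks.append(m)
        scores.append(s)
    elite = np.argsort(scores)[-10:]          # keep the elite models
    w = np.mean([masks[i] for i in elite], axis=0)  # reassign variable weights

print(np.round(w, 2))  # informative variables should drift toward weight 1
```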

  4. Enhanced contrast separation in scanning electron microscopes via a suspended-thin sample approach.

    PubMed

    Ji, Yuan; Wang, Li; Guo, Zhenxi; Wei, Bin; Zhao, Jie; Wang, Xiaodong; Zhang, Yinqi; Sui, Manling; Han, Xiaodong

    2014-11-01

    A suspended-thin-sample (STS) approach for signal selection and contrast separation is developed in scanning electron microscopes with commonly used primary beam energies and traditional detectors. Topography contrast, electron channeling contrast and composition contrast are separated and greatly enhanced in suspended thin samples several hundred nanometers in thickness, which is less than the escape depth of backscattered electrons. This imaging technique enables the detection of relatively pure secondary electron and elastic backscattered electron signals, while suppressing multiple inelastic scattering effects. The resulting contrast features differ from those of bulk samples, which are largely mixed with inelastic scattering effects. The STS imaging concept and method can be expected to find further applications in distinguishing materials in nanostructures, multilayers, compounds and composites, as well as in SEM-based electron backscatter diffraction, cathodoluminescence, and x-ray microanalysis.

  5. Bioassessment Tools for Stony Corals: Monitoring Approaches and Proposed Sampling Plan for the U.S. Virgin Islands

    EPA Science Inventory

    This document describes three general approaches to the design of a sampling plan for biological monitoring of coral reefs. Status assessment, trend detection and targeted monitoring each require a different approach to site selection and statistical analysis. For status assessm...

  6. A comprehensive approach to the determination of two benzimidazoles in environmental samples.

    PubMed

    Wagil, Marta; Maszkowska, Joanna; Białk-Bielińska, Anna; Stepnowski, Piotr; Kumirska, Jolanta

    2015-01-01

    Among the various pharmaceuticals regarded as emerging pollutants, benzimidazoles--represented by flubendazole and fenbendazole--are of particular concern because of their large-scale use in veterinary medicine and their health effects on aquatic organisms. For this reason, it is essential to have reliable analytical methods which can be used to simultaneously monitor their appearance in environmental matrices such as water, sediment and tissue samples. To date, however, such methods covering these three matrices have not been available. In this paper we present a comprehensive approach to the determination of both drugs in the above-mentioned matrices using liquid chromatography-ion trap mass spectrometry (LC-MS/MS). Special attention was paid to the sample preparation step. The optimal extraction methods were further validated by experiments with spiked water, sediment and fish tissue samples. Matrix effects were established. The following absolute recoveries of flubendazole and fenbendazole were achieved: 96.2% and 95.4% from waters, 103.4% and 98.3% from sediments, and 98.3% and 97.6% from fish tissue samples, respectively. Validation of the LC-MS/MS methods enables flubendazole and fenbendazole to be determined with method detection limits of 1.6 ng L(-1) and 1.7 ng L(-1) in water samples, 0.3 ng g(-1) for both compounds in sediment samples, and 3.3 ng g(-1) and 3.5 ng g(-1) in tissue samples, respectively. The proposed methods were successfully used for analysing selected pharmaceuticals in real samples collected in northern Poland. These are the first data on environmental concentrations of the target compounds in Poland.

  7. An Adaptive Management Approach for Summer Water Level Reductions on the Upper Mississippi River System

    USGS Publications Warehouse

    Johnson, B.L.; Barko, J.W.; Clevenstine, R.; Davis, M.; Galat, D.L.; Lubinski, S.J.; Nestler, J.M.

    2010-01-01

    The primary purpose of this report is to provide an adaptive management approach for learning more about summer water level reductions (drawdowns) as a management tool, including where and how drawdowns can be applied most effectively within the Upper Mississippi River System. The report reviews previous drawdowns conducted within the system and provides specific recommendations for learning more about the lesser known effects of drawdowns and how the outcomes can be influenced by different implementation strategies and local conditions. The knowledge gained can be used by managers to determine how best to implement drawdowns in different parts of the UMRS to help achieve management goals. The information and recommendations contained in the report are derived from results of previous drawdown projects, insights from regional disciplinary experts, and the experience of the authors in experimental design, modeling, and monitoring. Modeling is a critical part of adaptive management and can involve conceptual models, simulation models, and empirical models. In this report we present conceptual models that express current understanding regarding functioning of the UMRS as related to drawdowns and highlight interactions among key ecological components of the system. The models were developed within the constraints of drawdown timing, magnitude (depth), and spatial differences in effects (longitudinal and lateral) with attention to ecological processes affected by drawdowns. With input from regional experts we focused on the responses of vegetation, fish, mussels, other invertebrates, and birds. The conceptual models reflect current understanding about relations and interactions among system components, the expected strength of those interactions, potential responses of system components to drawdowns, likelihood of the response occurring, and key uncertainties that limit our ability to make accurate predictions of effects (Table 1, Fig. 4-10). Based on this current

  8. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Mixed-methods research currently lacks techniques for more fully integrating qualitative and quantitative analyses. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
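
    The first study's comparison can be reproduced in miniature, assuming simulated binary code profiles (latent class analysis is omitted because it is not available in scikit-learn):

```python
# Cluster simulated binary "code" profiles with two of the three methods
# compared above and score recovery of the true grouping.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)
true = np.repeat([0, 1], 25)                     # 50 participants, 2 profiles
probs = np.where(true[:, None] == 0, 0.2, 0.7)   # code-endorsement probabilities
X = (rng.random((50, 12)) < probs).astype(int)   # 12 binary qualitative codes

for name, model in [("hierarchical", AgglomerativeClustering(n_clusters=2)),
                    ("k-means", KMeans(n_clusters=2, n_init=10, random_state=0))]:
    print(name, round(adjusted_rand_score(true, model.fit_predict(X)), 2))
```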

  9. A Comparison of an Expert Systems Approach to Computerized Adaptive Testing and an Item Response Theory Model.

    ERIC Educational Resources Information Center

    Frick, Theodore W.

    Expert systems can be used to aid decision making. A computerized adaptive test is one kind of expert system, although not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. Two versions of EXSPRT were developed, one with random…

  10. Evaluation of the field-adapted ADMA approach: absolute and relative energies of crambin and derivatives.

    PubMed

    Exner, Thomas E; Mezey, Paul G

    2005-12-21

    A large number of conformations and chemically modified variants of the protein crambin were used to extensively test the field-adapted adjustable density matrix assembler (FA-ADMA) method developed for ab initio quality quantum chemistry computations of proteins and other macromolecules, introduced in an earlier publication. In this method, the fuzzy density matrix fragmentation scheme of the original adjustable density matrix assembler (ADMA) method has been made more efficient by combining it with an approach of using point charges to approximate the effects of additional, distant parts of a given macromolecule in the quantum chemical calculation of each fragment. In this way, smaller parent molecules can be used for fragment generation, while achieving accuracy that can be obtained only with large parent molecules in the original ADMA method. Whereas in both methods the error relative to the Hartree-Fock result can be reduced below any threshold by choosing large enough parent molecules, this can be done more efficiently with the new method. In order to obtain reliable test results for the accuracy obtainable by the new method when compared to conventional Hartree-Fock calculations, we performed a large number of energy calculations for the protein crambin using various conformations available in the Protein Data Bank, various protonation states, and side chain mutations. Additionally, in order to test the performance of the method for protein-solvent interaction studies, the energy changes due to the formation of complexes with ethanol and single and multiple water molecules were investigated.

  11. An adaptable XML based approach for scientific data management and integration

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

    2008-03-01

    Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We developed SciPort, a Web-based platform for scientific data management and integration built on a central-server-based distributed architecture, with which researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make quick browsing convenient. For generality, schemas and hierarchies are customizable with XML-based definitions, so the system can be quickly adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data among scientific research communities, and has been successfully used in both biomedical research and clinical trials.

  12. The role of idiotypic interactions in the adaptive immune system: a belief-propagation approach

    NASA Astrophysics Data System (ADS)

    Bartolucci, Silvia; Mozeika, Alexander; Annibale, Alessia

    2016-08-01

    In this work we use belief-propagation techniques to study the equilibrium behaviour of a minimal model for the immune system comprising interacting T and B clones. We investigate the effect of the so-called idiotypic interactions among complementary B clones on the system’s activation. Our results show that B-B interactions increase the system’s resilience to noise, making clonal activation more stable, while increasing the cross-talk between different clones. We derive analytically the noise level at which a B clone gets activated, in the absence of cross-talk, and find that this increases with the strength of idiotypic interactions and with the number of T cells sending signals to the B clones. We also derive, analytically and numerically, via population dynamics, the critical line where clonal cross-talk arises. Our approach allows us to derive the B clone size distribution, which can be experimentally measured and gives important information about the adaptive immune system response to antigens and vaccination.

  13. An adaptive approach to facilitating research productivity in a primary care clinical department.

    PubMed

    Weber-Main, Anne Marie; Finstad, Deborah A; Center, Bruce A; Bland, Carole J

    2013-07-01

    Efforts to foster the growth of a department's or school's research mission can be informed by known correlates of research productivity, but the specific strategies to be adopted will be highly context-dependent, influenced by local, national, and discipline-specific needs and resources. The authors describe a multifaceted approach-informed by a working model of organizational research productivity-by which the University of Minnesota Department of Family Medicine and Community Health (Twin Cities campus) successfully increased its collective research productivity during a 10-year period (1997-2007) and maintained these increases over time.Facing barriers to recruitment of faculty investigators, the department focused instead on nurturing high-potential investigators among their current faculty via a new, centrally coordinated research program, with provision of training, protected time, technical resources, mentoring, and a scholarly culture to support faculty research productivity. Success of these initiatives is documented by the following: substantial increases in the department's external research funding, rise to a sustained top-five ranking based on National Institutes of Health funding to U.S. family medicine departments, later-stage growth in the faculty's publishing record, increased research capacity among the faculty, and a definitive maturation of the department's research mission. The authors offer their perspectives on three apparent drivers of success with broad applicability-namely, effective leadership, systemic culture change, and the self-awareness to adapt to changes in the local, institutional, and national research environment.

  14. Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach.

    PubMed

    Welfer, Daniel; Scharcanski, Jacob; Kitamura, Cleyson M; Dal Pizzol, Melissa M; Ludwig, Laura W B; Marinho, Diane Ruschel

    2010-02-01

    The identification of some important retinal anatomical regions is a prerequisite for the computer-aided diagnosis of several retinal diseases. In this paper, we propose a new adaptive method for the automatic segmentation of the optic disk in digital color fundus images, using mathematical morphology. The proposed method has been designed to be robust under the varying illumination and image acquisition conditions common in eye fundus imaging. Our experimental results based on two publicly available eye fundus image databases are encouraging, and indicate that our approach can potentially achieve better performance than other known methods proposed in the literature. Using the DRIVE database (which consists of 40 retinal images), our method achieves a success rate of 100% in the correct location of the optic disk, with 41.47% mean overlap. In the DIARETDB1 database (which consists of 89 retinal images), the optic disk is correctly located in 97.75% of the images, with a mean overlap of 43.65%.

  15. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies.
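
    The bias the authors describe is easy to reproduce; the simulation below (not their code; all parameters invented) shows a naive "any positive replicate" occupancy estimate drifting away from the truth under a small false-positive rate.

```python
# Simulate eDNA sampling with false positives and compute the naive estimate.
import numpy as np

rng = np.random.default_rng(2)
psi, p_detect, p_false = 0.3, 0.5, 0.03   # occupancy, detection, false-positive rates
sites, reps = 500, 4

occupied = rng.random(sites) < psi
detections = np.where(occupied[:, None],
                      rng.random((sites, reps)) < p_detect,
                      rng.random((sites, reps)) < p_false)

naive_psi = detections.any(axis=1).mean()
print(f"true occupancy {psi}, naive estimate {naive_psi:.3f}")
# With p_false = 0.03 and 4 replicates, about 11% of unoccupied sites still
# register at least one positive, pushing the naive estimate upward.
```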

  16. A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications

    NASA Technical Reports Server (NTRS)

    Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco

    2015-01-01

    This paper presents a method for safe spacecraft autonomous maneuvering that applies robotic motion-planning techniques to spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion-planning algorithm named Fast Marching Trees (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples, for which there exists a one-burn collision-avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault-tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.

  17. Reducing Uncertainty In Ecosystem Structure Inventories From Spaceborne Lidar Using Alternate Spatial Sampling Approaches

    NASA Astrophysics Data System (ADS)

    Lefsky, M. A.; Ramond, T.; Weimer, C. S.

    2010-12-01

    Current and proposed spaceborne lidar sensors sample the land surface using observations along transects in which consecutive observations in the along-track dimension are either contiguous (e.g. VCL, DESDynI, Livex) or spaced (ICESat). These sampling patterns are inefficient because multiple observations are made of a spatially autocorrelated phenomenon (i.e. vegetation patches) while large areas of the landscape are left unsampled. This results in higher uncertainty in estimates of average ecosystem structure than would be obtained using either random sampling or sampling in regular grids. We compared three sampling scenarios for spaceborne lidar: five transects spaced every 850 m across-track with contiguous 25 m footprints along-track, the same number of footprints distributed randomly, and a hybrid approach that retains the central transect of contiguous 25 m footprints and distributes the remainder of the footprints into a grid with 178 m spacing. We used simulated ground tracks at four latitudes for a realistic spaceborne lidar mission and calculated the amount of time required to achieve 150 m spacing between transects and the number of near-coincident observations for each scenario. We used four lidar height datasets collected using the Laser Vegetation Imaging Sensor (La Selva, Costa Rica; Sierra Nevada, California; Duke Forest, North Carolina; and Harvard Forest, Massachusetts) to calculate the standard error of estimates of landscape height for each scenario. We found that the hybrid sampling approach reduced the amount of time required to reach a transect spacing of 150 m by a factor of three at all four latitudes, and that the number of near-coincident observations was greater by a factor of five at the equator and at least equal throughout the range of latitudes sampled. The standard error of landscape height was between 2 and 2.5 times smaller using either hybrid or random sampling than using transect sampling. As the pulses generated by a spaceborne
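
    The design effect reported above can be illustrated on a synthetic autocorrelated surface; this is a sketch under assumed parameters, not the study's data.

```python
# Compare the standard error of mean "height" from contiguous transect
# footprints versus the same number of randomly placed footprints.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
field = gaussian_filter(rng.normal(size=(400, 400)), sigma=15)  # autocorrelated field
n = 200

def transect():                       # contiguous along-track samples
    r, c = rng.integers(400), rng.integers(400 - n)
    return field[r, c:c + n]

def random_points():                  # spatially random samples
    return field[rng.integers(400, size=n), rng.integers(400, size=n)]

for name, draw in [("transect", transect), ("random", random_points)]:
    means = [draw().mean() for _ in range(2000)]
    print(name, "standard error:", round(float(np.std(means)), 4))
# Random placement should show a clearly smaller standard error, mirroring
# the 2-2.5x reduction reported above.
```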

  18. Micro-TLC Approach for Fast Screening of Environmental Samples Derived from Surface and Sewage Waters.

    PubMed

    Zarzycki, Paweł K; Slączka, Magdalena M; Włodarczyk, Elżbieta; Baran, Michał J

    2013-01-01

    In this work we demonstrate the analytical capability of the micro-planar (micro-TLC) technique, comprising one-dimensional and two-dimensional (2D) separation modes, to generate fingerprints of environmental samples originating from sewage and ecosystem waters. We show that the elaborated separation and detection protocols are complementary to a previously developed HPLC method based on temperature-dependent inclusion chromatography and UV-DAD detection. The presented 1D and 2D micro-TLC chromatograms of SPE (solid-phase extraction) extracts were optimized for fast and low-cost screening of water samples collected from lakes and rivers located in the area of Middle Pomerania in the northern part of Poland. Moreover, we studied highly organic compounds loaded in the treated and untreated sewage waters obtained from the municipal wastewater treatment plant "Jamno" near Koszalin City (Poland). The analyzed environmental samples contained a number of substances spanning a polarity range from estetrol to progesterone, as well as chlorophyll-related dyes previously isolated and pre-purified by a simple SPE protocol involving C18 cartridges. Optimization of the micro-TLC separation and quantification protocols for such samples is discussed from a practical point of view using simple separation efficiency criteria, including total peak number, log(product ΔhRF), signal intensity and peak asymmetry. Outcomes of the presented analytical approach, especially detection involving direct fluorescence (UV366/Vis) and phosphomolybdic acid (PMA) visualization, are compared with the UV-DAD HPLC-generated data reported previously. Chemometric investigation based on principal components analysis revealed that SPE extracts separated by micro-TLC and detected under fluorescence and PMA visualization modes can be used for robust sample fingerprinting even after long-term storage of the extracts (up to 4 years) at subambient temperature (-20 °C). Such an approach allows characterization of a wide range of sample components that

  19. Computational approach for deriving cancer progression roadmaps from static sample data.

    PubMed

    Sun, Yijun; Yao, Jin; Yang, Le; Chen, Runpu; Nowak, Norma J; Goodison, Steve

    2017-01-20

    As with any biological process, cancer development is inherently dynamic. While major efforts continue to catalog the genomic events associated with human cancer, it remains difficult to interpret and extrapolate the accumulating data to provide insights into the dynamic aspects of the disease. Here, we present a computational strategy that enables the construction of a cancer progression model using static tumor sample data. The developed approach overcame many technical limitations of existing methods. Application of the approach to breast cancer data revealed a linear, branching model with two distinct trajectories for malignant progression. The validity of the constructed model was demonstrated in 27 independent breast cancer data sets, and through visualization of the data in the context of disease progression we were able to identify a number of potentially key molecular events in the advance of breast cancer to malignancy.

  20. A Hierarchical Distance Sampling Approach to Estimating Mortality Rates from Opportunistic Carcass Surveillance Data.

    PubMed

    Bellan, Steve E; Gimenez, Olivier; Choquet, Rémi; Getz, Wayne M

    2013-04-01

    Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of transects during survey design. Because mortality events are rare, however, it is often not possible to obtain precise estimates in this way without infeasible levels of effort. A great deal of wildlife data, including mortality data, is available via road-based surveys. Interpreting these data in a distance sampling framework requires accounting for the non-uniform sampling. Additionally, analyses of opportunistic mortality data must account for the decline in carcass detectability through time. We develop several extensions to distance sampling theory to address these problems. We build mortality estimators in a hierarchical framework that integrates animal movement data, surveillance effort data, and motion-sensor camera trap data, respectively, to relax the uniformity assumption, account for spatiotemporal variation in surveillance effort, and explicitly model carcass detection and disappearance as competing ongoing processes. Analysis of simulated data showed that our estimators were unbiased and that their confidence intervals had good coverage. We also illustrate our approach on opportunistic carcass surveillance data acquired in 2010 during an anthrax outbreak in the plains zebra of Etosha National Park, Namibia. The methods developed here will allow researchers and managers to infer mortality rates from opportunistic surveillance data.
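
    For orientation, the conventional line-transect estimator that this work extends can be sketched as follows (half-normal detection, uniform animal distribution; all numbers assumed):

```python
# Conventional distance-sampling estimate with a half-normal detection function.
import numpy as np

rng = np.random.default_rng(4)
sigma, w, N = 20.0, 100.0, 4000     # detection scale (m), strip half-width, animals

x = rng.uniform(0, w, N)                                  # true perpendicular distances
detected = rng.random(N) < np.exp(-x**2 / (2 * sigma**2))
d = x[detected]

sigma_hat = np.sqrt(np.mean(d**2))         # half-normal MLE (truncation negligible)
esw = sigma_hat * np.sqrt(np.pi / 2)       # effective strip half-width
N_hat = len(d) * w / esw                   # Horvitz-Thompson-style abundance estimate
print(f"true N = {N}, estimated N = {N_hat:.0f}")
```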

  2. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach

    PubMed Central

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-01-01

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in the geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are common operational situations in which marine scientists may prefer not to minimize the overall uncertainty of a certain area, but may instead be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed, and the resulting minimization problem is solved using a simulated annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download ocean field data from the MyOcean repository and also provides graphical user interfaces to ease the input of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So
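
    As a toy stand-in for the optimizer described (the real objective is the Aη uncertainty criterion under navigation constraints), the sketch below uses simulated annealing to shorten a glider's visiting order over hypothetical sampling stations:

```python
# Simulated annealing over a station visiting order with 2-opt moves.
import numpy as np

rng = np.random.default_rng(5)
pts = rng.random((12, 2)) * 50                       # station coordinates (km)

def length(order):
    return np.linalg.norm(np.diff(pts[order], axis=0), axis=1).sum()

order, T = np.arange(len(pts)), 10.0
for _ in range(20000):
    i, j = sorted(rng.integers(len(pts), size=2))
    cand = order.copy()
    cand[i:j + 1] = cand[i:j + 1][::-1]              # 2-opt segment reversal
    delta = length(cand) - length(order)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        order = cand
    T *= 0.9995                                      # geometric cooling schedule
print("path length (km):", round(float(length(order)), 1))
```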

  3. Development and climate change: a mainstreaming approach for assessing economic, social, and environmental impacts of adaptation measures.

    PubMed

    Halsnaes, Kirsten; Traerup, Sara

    2009-05-01

    The paper introduces the so-called climate change mainstreaming approach, in which vulnerability and adaptation measures are assessed in the context of general development policy objectives. The approach is based on the application of a limited set of indicators. These indicators are selected as representatives of focal development policy objectives, and a stepwise approach for addressing climate change impacts, development linkages, and the economic, social and environmental dimensions of vulnerability and adaptation is introduced. Within this context, three case studies illustrate how development policy indicators can in practice be used to assess climate change impacts and adaptation measures: a road project in flood-prone areas of Mozambique, rainwater harvesting in the agricultural sector in Tanzania, and malaria protection in Tanzania. The conclusions of the paper confirm that climate risks can be reduced at relatively low cost, but uncertainty still remains about some of the wider development impacts of implementing climate change adaptation measures.

  4. Prescribed performance synchronization controller design of fractional-order chaotic systems: An adaptive neural network control approach

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Lv, Hui; Jiao, Dongxiu

    2017-03-01

    In this study, an adaptive neural network synchronization (NNS) approach, capable of guaranteeing prescribed performance (PP), is designed for non-identical fractional-order chaotic systems (FOCSs). By PP synchronization, we mean that the synchronization error converges to an arbitrarily small region of the origin with a convergence rate greater than some function given in advance. Neural networks are utilized to estimate unknown nonlinear functions in the closed-loop system. Based on the integer-order Lyapunov stability theorem, a fractional-order adaptive NNS controller is designed, and the PP can be guaranteed. Finally, simulation results are presented to confirm our results.

  5. Adaptive Link Generation for Multiperspective Thinking on the Web: An Approach to Motivate Learners to Think

    ERIC Educational Resources Information Center

    Mitsuhara, Hiroyuki; Kanenishi, Kazuhide; Yano, Yoneo

    2006-01-01

    To increase the efficiency of exploratory learning on the Web, we previously developed a free-hyperlink environment that allows adaptive link generation. In this environment, learners can make new hyperlinks independent of static hyperlinks and share them on the Web. To reduce hyperlink overflow, the adaptive link generation filters out sharable…

  6. An Open IMS-Based User Modelling Approach for Developing Adaptive Learning Management Systems

    ERIC Educational Resources Information Center

    Boticario, Jesus G.; Santos, Olga C.

    2007-01-01

    Adaptive LMS have not yet reached the eLearning marketplace due to open methodological, technological and management issues. At the aDeNu group, we have been working on two key challenges for the last five years in related research projects. Firstly, develop the general framework and a running architecture to support the adaptive life cycle (i.e.,…

  7. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  8. Developing an Instructional Material Using a Concept Cartoon Adapted to the 5E Model: A Sample of Teaching Erosion

    ERIC Educational Resources Information Center

    Birisci, Salih; Metin, Mustafa

    2010-01-01

    Using different instructional materials adapted within the constructivist learning theory will enhance students' conceptual understanding. From this point of view, an instructional instrument using a concept cartoon adapted to the 5E model has been developed and introduced in this study. The study has some deficiencies in investigating students'…

  9. Diagnosing Intellectual Disability in a Forensic Sample: Gender and Age Effects on the Relationship between Cognitive and Adaptive Functioning

    ERIC Educational Resources Information Center

    Hayes, Susan C.

    2005-01-01

    Background: The relationship between adaptive behaviour and cognitive functioning in offenders with intellectual disabilities is not well researched. This study aims to examine gender and age effects on the relationship between these two areas of functioning. Method: The "Vineland Adaptive Behavior Scales" (VABS) and the "Kaufman…

  10. Adaptive evolution of chloroplast genome structure inferred using a parametric bootstrap approach

    PubMed Central

    Cui, Liying; Leebens-Mack, Jim; Wang, Li-San; Tang, Jijun; Rymarquis, Linda; Stern, David B; dePamphilis, Claude W

    2006-01-01

    Background Genome rearrangements influence gene order and configuration of gene clusters in all genomes. Most land plant chloroplast DNAs (cpDNAs) share a highly conserved gene content and, with notable exceptions, a largely co-linear gene order. Conserved gene orders may reflect a slow intrinsic rate of neutral chromosomal rearrangements, or selective constraint. It is unknown to what extent observed changes in gene order are random or adaptive. We investigate the influence of natural selection on gene order in association with an increased rate of chromosomal rearrangement. We use a novel parametric bootstrap approach to test if directional selection is responsible for the clustering of functionally related genes observed in the highly rearranged chloroplast genome of the unicellular green alga Chlamydomonas reinhardtii, relative to ancestral chloroplast genomes. Results Ancestral gene orders were inferred and then subjected to simulated rearrangement events under the random breakage model with varying ratios of inversions and transpositions. We found that adjacent chloroplast genes in C. reinhardtii were located on the same strand much more frequently than in simulated genomes that were generated under a random rearrangement processes (increased sidedness; p < 0.0001). In addition, functionally related genes were found to be more clustered than those evolved under random rearrangements (p < 0.0001). We report evidence of co-transcription of neighboring genes, which may be responsible for the observed gene clusters in C. reinhardtii cpDNA. Conclusion Simulations and experimental evidence suggest that both selective maintenance and directional selection for gene clusters are determinants of chloroplast gene order. PMID:16469102
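
    The parametric-bootstrap logic can be sketched in a few lines: simulate random inversions on a signed gene order and compare an observed clustering statistic against the simulated null. All parameters below are assumptions for illustration, not the paper's values.

```python
# Null distribution of "sidedness" (fraction of adjacent genes on the same
# strand) under random inversions of a signed gene order.
import numpy as np

rng = np.random.default_rng(6)

def sidedness(g):
    return np.mean(np.sign(g[:-1]) == np.sign(g[1:]))

def random_inversions(genome, k):
    g = genome.copy()
    for _ in range(k):
        i, j = sorted(rng.integers(len(g), size=2))
        g[i:j + 1] = -g[i:j + 1][::-1]   # reverse the segment and flip strands
    return g

ancestor = np.arange(1, 101)             # 100 genes, all initially on one strand
null = np.array([sidedness(random_inversions(ancestor, 30)) for _ in range(1000)])
observed = 0.85                          # hypothetical observed sidedness
print("p-value:", float(np.mean(null >= observed)))
```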

  11. A fuzzy locally adaptive Bayesian segmentation approach for volume determination in PET.

    PubMed

    Hatt, Mathieu; Cheze le Rest, Catherine; Turzo, Alexandre; Roux, Christian; Visvikis, Dimitris

    2009-06-01

    Accurate volume estimation in positron emission tomography (PET) is crucial for different oncology applications. The objective of our study was to develop a new fuzzy locally adaptive Bayesian (FLAB) segmentation for automatic lesion volume delineation. FLAB was compared with a threshold approach as well as the previously proposed fuzzy hidden Markov chains (FHMC) and the fuzzy C-Means (FCM) algorithms. The performance of the algorithms was assessed on acquired datasets of the IEC phantom, covering a range of spherical lesion sizes (10-37 mm), contrast ratios (4:1 and 8:1), noise levels (1, 2, and 5 min acquisitions), and voxel sizes (8 and 64 mm(3)). In addition, the performance of the FLAB model was assessed on realistic nonuniform and nonspherical volumes simulated from patient lesions. Results show that FLAB performs better than the other methodologies, particularly for smaller objects. The volume error was 5%-15% for the different sphere sizes (down to 13 mm), contrast and image qualities considered, with a high reproducibility (variation < 4%). By comparison, the thresholding results were greatly dependent on image contrast and noise, whereas FCM results were less dependent on noise but consistently failed to segment lesions < 2 cm. In addition, FLAB performed consistently better for lesions < 2 cm in comparison to the FHMC algorithm. Finally the FLAB model provided errors less than 10% for nonspherical lesions with inhomogeneous activity distributions. Future developments will concentrate on an extension of FLAB in order to allow the segmentation of separate activity distribution regions within the same functional volume as well as a robustness study with respect to different scanners and reconstruction algorithms.
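
    For context, the FCM baseline that FLAB is compared against can be written in a few lines; the sketch below runs fuzzy C-means on a synthetic 1-D uptake distribution (not the authors' code or data):

```python
# Fuzzy C-means on synthetic "voxel" intensities: background vs lesion.
import numpy as np

rng = np.random.default_rng(7)
voxels = np.concatenate([rng.normal(1.0, 0.3, 900),    # background activity
                         rng.normal(4.0, 0.5, 100)])   # lesion activity
X = voxels[:, None]

c, m = 2, 2.0
centers = X[rng.choice(len(X), size=c, replace=False)]
for _ in range(100):
    d = np.abs(X - centers.ravel()) + 1e-12            # (n, c) distance matrix
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    centers = (u ** m).T @ X / (u ** m).sum(axis=0)[:, None]

labels = u.argmax(axis=1)
print("estimated lesion fraction:", round(float((labels == labels[-1]).mean()), 3))
```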

  12. Analytical bias among different gas chromatographic approaches using standard BTX gases and exhaust samples.

    PubMed

    Kim, Ki-Hyun; Pandey, Sudhir Kumar; Pal, Raktim

    2009-02-01

    In this study, the analytical compatibility of gas chromatographic (GC) approaches was evaluated through a cross-calibration exercise. To this end, three aromatic volatile organic compounds (VOCs: benzene, toluene, and p-xylene (BTX)) were simultaneously analyzed with four individual instrumental setups (type I = GC with MS plus the solid phase microextraction (SPME) method, II = GC with flame ionization detection (FID) plus SPME, III = fast GC-FID plus SPME, and IV = GC-FID plus the air server/thermal desorption (AS/TD) method). A comparison of basic quality assurance (QA) data revealed considerable differences in detection limit (DL) values among the methods, with moderate variability in intercompound sensitivity. In light of the differences in detection properties, the analytical bias involved in each methodological approach was assessed via the relative relationship between analytes and basic operating conditions. The results suggest that the analysis of environmental samples at ultra-low concentration levels (at or below the ppb level) can be subject to diverse sources of bias. Although the detection properties of the target compounds appear to be affected by the combined effects of various factors, changes in sample concentration levels were the most consistent factor under the experimental setups analyzed in this study.

  13. Culturally adapted pictorial screening tool for autism spectrum disorder: A new approach

    PubMed Central

    Perera, Hemamali; Jeewandara, Kamal Chandima; Seneviratne, Sudarshi; Guruge, Chandima

    2017-01-01

    AIM To assess the performance of a newly designed, culturally adapted screening tool for autism spectrum disorder (ASD). METHODS Items for the screening tool were modeled on already documented checklists and diagnostic criteria for ASD. Each item in text was paired with a photograph illustrating the written content, which was in the 2 main local languages. The final product had 21 items and was named the pictorial autism assessment schedule (PAAS). Performance of PAAS was tested on a clinical sample of diagnosis-naïve children aged 18-48 mo presenting with developmental deficits. Mothers completed the PAAS checklist. Based on clinical diagnosis, which was taken as the gold standard, children were later grouped into ASD (Group 1) and non-ASD developmental disorders (Group 2). Mothers of a control sample of typically developing children also completed PAAS (Group 3). RESULTS A total of 105 children (Group 1: 45, Group 2: 30, Group 3: 30) participated in the study. The mean ages of Group 1 and Group 2 were 36 and 40 mo, respectively. The majority were male in all 3 groups. Performance of PAAS in discriminating between ASD and non-ASD developmental disorders was sensitivity 88.8%, specificity 60.7%, positive predictive value (PPV) 78.4%, negative predictive value (NPV) 77.2%, positive likelihood ratio (LR+) 2.26, and LR- 0.18. Performance of PAAS in discriminating between ASD and typical development was sensitivity 88.0%, specificity 93.3%, PPV 95.2%, NPV 84.0%, LR+ 13.3 and LR- 0.12. The results indicated that a positive result from PAAS was 2.26 times more likely to be found in a child with ASD than in a child with a non-ASD developmental disorder, and 13.3 times more likely to be found in a child with ASD than in a child with typical development. CONCLUSION PAAS is an effective tool in screening for ASD. Further study is indicated to evaluate the feasibility of using this instrument for community screening for ASD. PMID:28224095
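
    The reported likelihood ratios follow directly from sensitivity and specificity; a quick check using only the figures quoted above:

```python
# Recompute LR+ and LR- for the Group 1 vs Group 2 comparison.
sens, spec = 0.888, 0.607
lr_pos = sens / (1 - spec)        # positive likelihood ratio
lr_neg = (1 - sens) / spec        # negative likelihood ratio
print(round(lr_pos, 2), round(lr_neg, 2))   # 2.26 and 0.18, matching the text
```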

  14. INTEGRATING EVOLUTIONARY AND FUNCTIONAL APPROACHES TO INFER ADAPTATION AT SPECIFIC LOCI

    PubMed Central

    Storz, Jay F.; Wheat, Christopher W.

    2010-01-01

    Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation. PMID:20500215

  15. An adaptive Lagrangian boundary element approach for three-dimensional transient free-surface Stokes flow as applied to extrusion, thermoforming, and rheometry

    NASA Astrophysics Data System (ADS)

    Khayat, Roger E.; Genouvrier, Delphine

    2001-05-01

    An adaptive (Lagrangian) boundary element approach is proposed for the general three-dimensional simulation of confined free-surface Stokes flow. The method is stable as it includes remeshing capabilities of the deforming free surface and thus can handle large deformations. A simple algorithm is developed for mesh refinement of the deforming free-surface mesh. Smooth transition between large and small elements is achieved without significant degradation of the aspect ratio of the elements in the mesh. Several flow problems are presented to illustrate the utility of the approach, particularly as encountered in polymer processing and rheology. These problems illustrate the transient nature of the flow during the processes of extrusion and thermoforming, the elongation of a fluid sample in an extensional rheometer, and the coating of a sphere. Surface tension effects are also explored.

  16. Ball-and-Stick Local Elevation Umbrella Sampling: Molecular Simulations Involving Enhanced Sampling within Conformational or Alchemical Subspaces of Low Internal Dimensionalities, Minimal Irrelevant Volumes, and Problem-Adapted Geometries.

    PubMed

    Hansen, Halvor S; Hünenberger, Philippe H

    2010-09-14

    A new method, ball-and-stick local elevation umbrella sampling (B&S-LEUS), is proposed to enhance the sampling in computer simulations of (bio)molecular systems. It enables the calculation of conformational free-energy differences between states (or alchemical free-energy differences between molecules), even in situations where the definition of these states relies on a conformational subspace involving more than a few degrees of freedom. The B&S-LEUS method consists of the following steps: (A) choice of a reduced conformational subspace; (B) representation of the relevant states by means of spheres ("balls"), each associated with a biasing potential involving a one-dimensional radial memory-based term and a radial confinement term; (C) definition of a set of lines ("sticks") connecting these spheres, each associated with a biasing potential involving a one-dimensional longitudinal memory-based term and a transverse confinement term; (D) unification of the biasing potentials corresponding to the union of all of the spheres and lines (active subspace) into a single biasing potential according to the enveloping distribution sampling (EDS) scheme; (E) build-up of the memory using the local elevation (LE) procedure, leading to a biasing potential enabling a nearly uniform sampling (radially within the spheres, longitudinally within the lines) of the active subspace; (F) generation of a biased ensemble of configurations using this preoptimized biasing potential, following an umbrella sampling (US) approach; and (G) calculation of the relative free energies of the states via reweighting and state assignment. The main characteristics of this approach are: (i) a low internal dimensionality, that is, the memory only involves one-dimensional grids (acceptable memory requirements); (ii) a minimal irrelevant volume, that is, the conformational volume opened to sampling includes a minimal fraction of irrelevant regions in terms of the free energy of the physical system or of
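
    A generic illustration of the memory build-up in step (E) is sketched below on a 1-D double well, using a one-dimensional memory grid in the spirit of characteristic (i); this is a metadynamics-style toy, not the B&S-LEUS implementation.

```python
# Local-elevation-style biasing: Gaussian penalties accumulate on a 1-D grid
# wherever the walker visits, eventually flattening the double well.
import numpy as np

rng = np.random.default_rng(8)
U = lambda x: (x**2 - 1.0)**2                 # double-well "physical" potential

grid = np.linspace(-2.0, 2.0, 81)             # one-dimensional memory grid
memory = np.zeros_like(grid)
k_le, width, beta = 0.002, 0.1, 4.0

def bias(x):                                  # memory-based biasing potential
    return np.sum(memory * np.exp(-(x - grid)**2 / (2 * width**2)))

x, right = -1.0, 0
for _ in range(20000):
    x_new = float(np.clip(x + rng.normal(0.0, 0.15), -2.0, 2.0))
    dE = U(x_new) + bias(x_new) - U(x) - bias(x)
    if dE < 0 or rng.random() < np.exp(-beta * dE):
        x = x_new
    memory[np.abs(grid - x).argmin()] += k_le  # elevate where we currently are
    right += x > 0

# As the memory fills, sampling should approach uniformity across both wells
print("fraction of steps in right well:", round(right / 20000, 2))
```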

  17. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    PubMed

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or treatment, the incidence of an adverse event (AE) is always a concern. When sample sizes are small, large-sample inferential approaches to an AE incidence proportion in a given time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small-sample studies and the rare-AE case, (1) the confidence level that the incidence proportion of a particular AE p is above or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
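
    All three quantities have closed forms under a Beta prior; a minimal sketch assuming a uniform Beta(1, 1) prior (the paper's exact prior choice is not assumed here):

```python
# Beta-Binomial posterior summaries for an AE incidence proportion p.
from scipy import stats

n, x, threshold = 30, 1, 0.10          # patients, patients with the AE, threshold
posterior = stats.beta(1 + x, 1 + n - x)

print("P(p < 0.10 | data) =", round(posterior.cdf(threshold), 3))
print("95% upper bound on p =", round(posterior.ppf(0.95), 3))
```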

  18. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
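
    As background (this is not the patented parameter-dependent scheme itself), solving a single continuous algebraic Riccati equation for a feedback gain looks like this:

```python
# Solve A'P + PA - P B R^-1 B' P + Q = 0 and form the state-feedback gain.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # K = R^-1 B' P
print(np.round(K, 3))
```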

  19. Adaptive Modulation Approach for Robust MPEG-4 AAC Encoded Audio Transmission

    DTIC Science & Technology

    2011-11-01

    to switch to a higher source rate at a given channel bandwidth, which is not possible using a single (non-adaptive) modulation such as 4-QAM for all... case of QPSK/4-QAM, again at high Eb/No (negligible BER), the source rate can be switched to 128 kbps (ignoring other transmission overhead), thus... the non-adaptive scheme uses 4-QAM modulation, whereas the adaptive modulation scheme employs 4, 8, and 16 QAM for ESC1, ESC2 and ESC3, respectively

  20. Sampling variability and estimates of density dependence: a composite-likelihood approach.

    PubMed

    Lele, Subhash R

    2006-01-01

    It is well known that sampling variability, if not properly taken into account, affects various ecologically important analyses. Statistical inference for stochastic population dynamics models is difficult when, in addition to the process error, there is also sampling error. The standard maximum-likelihood approach suffers from large computational burden. In this paper, I discuss an application of the composite-likelihood method for estimation of the parameters of the Gompertz model in the presence of sampling variability. The main advantage of the method of composite likelihood is that it reduces the computational burden substantially with little loss of statistical efficiency. Missing observations are a common problem with many ecological time series. The method of composite likelihood can accommodate missing observations in a straightforward fashion. Environmental conditions also affect the parameters of stochastic population dynamics models. This method is shown to handle such nonstationary population dynamics processes as well. Many ecological time series are short, and statistical inferences based on such short time series tend to be less precise. However, spatial replications of short time series provide an opportunity to increase the effective sample size. Application of likelihood-based methods for spatial time-series data for population dynamics models is computationally prohibitive. The method of composite likelihood is shown to have significantly less computational burden, making it possible to analyze large spatial time-series data. After discussing the methodology in general terms, I illustrate its use by analyzing a time series of counts of American Redstart (Setophaga ruticilla) from the Breeding Bird Survey data, San Joaquin kit fox (Vulpes macrotis mutica) population abundance data, and spatial time series of Bull trout (Salvelinus confluentus) redds count data.
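
    A sketch of a pairwise composite likelihood for the stochastic Gompertz model with sampling error is given below (log-scale state X, observation Y = X + error; pairs at two lags are used so that all four parameters are identifiable). This is an illustration under stationarity assumptions, not the paper's code.

```python
# Pairwise composite likelihood for the Gompertz state-space model.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(9)
a, c, s_proc, s_obs, T = 1.0, 0.7, 0.3, 0.2, 100
x = np.empty(T)
x[0] = a / (1 - c)                              # start at the stationary mean
for t in range(T - 1):
    x[t + 1] = a + c * x[t] + rng.normal(0, s_proc)
y = x + rng.normal(0, s_obs, T)                 # observed log abundances

def neg_cl(theta):
    a, c, sp, so = theta
    if not (abs(c) < 1 and sp > 0 and so > 0):
        return np.inf
    mu = a / (1 - c)
    v = sp**2 / (1 - c**2)                      # stationary process variance
    total = 0.0
    for lag in (1, 2):                          # two lags identify all 4 params
        cov = [[v + so**2, c**lag * v], [c**lag * v, v + so**2]]
        pairs = np.column_stack([y[:-lag], y[lag:]])
        total -= stats.multivariate_normal([mu, mu], cov).logpdf(pairs).sum()
    return total

fit = optimize.minimize(neg_cl, [0.5, 0.5, 0.5, 0.5], method="Nelder-Mead")
print(np.round(fit.x, 2))                       # compare with (1.0, 0.7, 0.3, 0.2)
```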

  1. Defining adaptation measures collaboratively: A participatory approach in the Doñana socio-ecological system, Spain.

    PubMed

    De Stefano, Lucia; Hernández-Mora, Nuria; Iglesias, Ana; Sánchez, Berta

    2016-11-08

    The uncertainty associated with the definition of strategies for climate change adaptation poses a challenge that cannot be faced by science alone. We present a participatory experience where, instead of having science defining solutions and eliciting stakeholders' feedback, local actors actually drove the process. While principles and methods of the approach are easily adaptable to different local contexts, this paper shows the contribution of participatory dynamics to the design of adaptation measures in the biodiversity-rich socio-ecological region surrounding the Doñana wetlands (Southern Spain). During the process, stakeholders and scientists collaboratively designed a common scenario for the future in which to define and assess a portfolio of potential adaptation measures, and found a safe, informal space for open dialogue and information exchange. Through this dialogue, points of connection among local actors emerged around the need for more integrated, transparent design of adaptation measures; for strengthening local capacity; and for strategies to diversify economic activities in order to increase the resilience of the region.

  2. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating the time delay margin of model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in the form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound on the time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide reasonably accurate, yet not overly conservative, time delay margin estimates.
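
    For orientation, a small illustration of the matrix measure (logarithmic norm) that such bounding arguments rely on, together with the classical exact delay margin for the scalar test system xdot(t) = -a x(t - tau), for which tau* = pi/(2a); the matrices and numbers are invented examples, not from the paper.

```python
# Matrix measure (2-norm logarithmic norm) plus a classical scalar delay
# margin, for orientation only; the numbers are invented examples.
import numpy as np

def matrix_measure_2(A):
    """mu_2(A): largest eigenvalue of the symmetric part (A + A^T)/2."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
print("mu_2(A) =", matrix_measure_2(A))   # negative => 2-norm contraction

a = 2.0                                    # xdot(t) = -a * x(t - tau)
print("exact delay margin pi/(2a) =", np.pi / (2 * a))
```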

  3. Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems

    SciTech Connect

    Omitaomu, Olufemi A; Ganguly, Auroop R; Jiao, Yu

    2009-01-01

    In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information-theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of the information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents in a desktop environment using aggregate data; the second AB model, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, using fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on three leadership theories - legitimacy, coercion, and representative - and two social mobilization theories - social influence and repression. The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required, not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.
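
    A hedged sketch of the two information-theoretic measures named above, computed from per-agent behavioral modes; the label arrays are synthetic stand-ins, not output from either ABM.

```python
# Entropy of one model's behavioral-mode distribution and mutual information
# between two models; the label arrays are synthetic stand-ins.
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete label distribution."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), via joined label pairs."""
    joint = np.char.add(np.char.add(x.astype(str), "|"), y.astype(str))
    return entropy(x) + entropy(y) - entropy(joint)

rng = np.random.default_rng(4)
modes = np.array(["pro-insurgent", "neutral", "pro-government"])
m1 = rng.choice(modes, size=10_000, p=[0.2, 0.5, 0.3])
m2 = np.where(rng.random(10_000) < 0.8, m1, rng.choice(modes, 10_000))
print("H(ABM-1) =", entropy(m1))
print("I(ABM-1; ABM-2) =", mutual_information(m1, m2))
```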

  4. Ethics and law in research with human biological samples: a new approach.

    PubMed

    Petrini, Carlo

    2014-01-01

    During the last century a large number of documents (regulations, ethical codes, treatises, declarations, conventions) were published on the subject of ethics and clinical trials, many of them focusing on the protection of research participants. More recently, various proposals have been put forward to relax some of the constraints imposed on research by these documents and regulations. It is important to distinguish between risks deriving from direct interventions on human subjects and other types of risk. In Italy, the Data Protection Authority has acted on the question of research using previously collected health data and biological samples to simplify the procedures regarding informed consent. The new approach may be of help to other researchers working outside Italy.

  5. Determination of avermectins: a QuEChERS approach to the analysis of food samples.

    PubMed

    Rúbies, A; Antkowiak, S; Granados, M; Companyó, R; Centrich, F

    2015-08-15

    We present a simple method for extracting avermectins from meat, based on a QuEChERS approach followed by liquid chromatography (LC) coupled to triple quadrupole (QqQ) tandem mass spectrometry (MS/MS). The compounds considered are ivermectin, abamectin, emamectin, eprinomectin, doramectin and moxidectin. The new method has been fully validated according to the requirements of European Decision 657/2002/CE (EU, 2002). The method is suitable for the analysis of avermectins at concentrations as low as 2.5 μg kg(-1), and allows high sample throughput. In addition, the detection of avermectins by high-resolution mass spectrometry using a quadrupole-Orbitrap (Q-Orbitrap) hybrid instrument has been explored, and the targeted Selected Ion Monitoring data-dependent MS/MS (t-SIM-dd MS/MS) mode has been found to provide excellent performance for residue determination of the target analytes.

  6. Developing Coastal Adaptation to Climate Change in the New York City Infrastructure-Shed: Process, Approach, Tools, and Strategies

    NASA Technical Reports Server (NTRS)

    Rosenzweig, Cynthia; Solecki, William D.; Blake, Reginald; Bowman, Malcolm; Faris, Craig; Gornitz, Vivien; Horton, Radley; Jacob, Klaus; LeBlanc, Alice; Leichenko, Robin; Linkin, Megan; Major, David; O'Grady, Megan; Patrick, Lesley; Sussman, Edna; Yohe, Gary; Zimmerman, Rae

    2010-01-01

    While current rates of sea level rise and associated coastal flooding in the New York City region appear to be manageable by stakeholders responsible for communications, energy, transportation, and water infrastructure, projections for sea level rise and associated flooding in the future, especially those associated with rapid ice melt of the Greenland and West Antarctic ice sheets, may be beyond the range of current capacity, because an extreme event might cause flooding and inundation beyond the planning and preparedness regimes. This paper describes the comprehensive process, approach, and tools developed by the New York City Panel on Climate Change (NPCC) in conjunction with the region's stakeholders who manage its critical infrastructure, much of which lies near the coast. It presents the adaptation approach and the sea-level rise and storm projections related to coastal risks developed through the stakeholder process. Climate change adaptation planning in New York City is characterized by a multi-jurisdictional stakeholder-scientist process, state-of-the-art scientific projections and mapping, and development of adaptation strategies based on a risk-management approach.

  7. A novel four-dimensional analytical approach for analysis of complex samples.

    PubMed

    Stephan, Susanne; Jakob, Cornelia; Hippler, Jörg; Schmitz, Oliver J

    2016-05-01

    A two-dimensional LC (2D-LC) method, based on the work of Erni and Frei in 1978, was developed and coupled to an ion mobility-high-resolution mass spectrometer (IM-MS), which enabled the separation of complex samples in four dimensions (2D-LC, ion mobility spectrometry (IMS), and mass spectrometry (MS)). This approach works as a continuous multi-heart-cutting LC system, using a long modulation time of 4 min, which allows the complete transfer of most of the first-dimension peaks to the second-dimension column without fractionation, in contrast to comprehensive two-dimensional liquid chromatography. Hence, each compound delivers only one peak in the second dimension, which simplifies the data handling even when ion mobility spectrometry as a third and mass spectrometry as a fourth dimension are introduced. The analysis of a plant extract from Ginkgo biloba shows the separation power of this four-dimensional separation method, with a calculated total peak capacity of more than 8700. Furthermore, the advantage of ion mobility for characterizing unknown compounds by their collision cross section (CCS) and accurate mass in a non-target approach is shown for different matrices like plant extracts and coffee. Graphical abstract: principle of the four-dimensional separation.
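
    As a back-of-envelope illustration of how a multidimensional peak capacity multiplies out; the per-dimension capacities below are invented numbers chosen only to reproduce the order of magnitude quoted above, not values from the paper.

```python
# Multiplicative peak capacity across separation dimensions (illustrative).
n_lc1, n_lc2, n_ims = 25, 50, 7     # hypothetical per-dimension capacities
print("total peak capacity ~", n_lc1 * n_lc2 * n_ims)   # 8750, same order
```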

  8. Approaching Ultimate Intrinsic SNR in a Uniform Spherical Sample with Finite Arrays of Loop Coils

    PubMed Central

    Vaidya, Manushka V.; Sodickson, Daniel K.; Lattanzi, Riccardo

    2015-01-01

    We investigated to what degree and at what rate the ultimate intrinsic (UI) signal-to-noise ratio (SNR) may be approached using finite radiofrequency detector arrays. We used full-wave electromagnetic field simulations based on dyadic Green’s functions to compare the SNR of arrays of loops surrounding a uniform sphere with the ultimate intrinsic SNR (UISNR), for increasing numbers of elements over a range of magnetic field strengths, voxel positions, sphere sizes, and acceleration factors. We evaluated the effect of coil conductor losses and the performance of a variety of distinct geometrical arrangements such as “helmet” and “open-pole” configurations in multiple imaging planes. Our results indicate that UISNR at the center is rapidly approached with encircling arrays and performance is substantially lower near the surface, where a quadrature detection configuration tailored to voxel position is optimal. Coil noise is negligible at high field, where sample noise dominates. Central SNR for practical array configurations such as the helmet is similar to that of close-packed arrangements. The observed trends can provide physical insights to improve coil design. PMID:26097442

  9. Assessment of Sampling Approaches for Remote Sensing Image Classification in the Iranian Playa Margins

    NASA Astrophysics Data System (ADS)

    Kazem Alavipanah, Seyed

    There are some problems in soil salinity studies based upon remotely sensed data: 1) the spectral world is full of ambiguity, and therefore soil reflectance cannot be attributed to a single soil property such as salinity; 2) soil surface condition as a function of time and space is a complex phenomenon; 3) vegetation, with its dynamic biological nature, may create some problems in the study of soil salinity. Given these problems, the first question which may arise is how to overcome or minimise them. In this study we hypothesised that different sources of data, a well-established sampling plan and an optimum approach could be useful. In order to choose representative training sites in the Iranian playa margins, to define the spectral and informational classes and to overcome some problems encountered in the variation within the field, the following attempts were made: 1) Principal Component Analysis (PCA), in order a) to determine the most important variables, and b) to understand the Landsat satellite images and the most informative components; 2) photomorphic unit (PMU) consideration and interpretation; 3) study of salt accumulation and salt distribution in the soil profile; 4) use of several forms of field data, such as geologic, geomorphologic and soil information; 5) confirmation of field data and land cover types with farmers and the members of the team. The results led us to suitable approaches with high, acceptable image classification accuracy and image interpretation. KEY WORDS: Photomorphic Unit, Principal Component Analysis, Soil Salinity, Field Work, Remote Sensing
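
    A sketch of the PCA step mentioned in point 1, ranking components of a multispectral band stack by explained variance; the six-band pixel matrix is random stand-in data, not Landsat imagery.

```python
# PCA of a pixels-by-bands matrix: rank components by explained variance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 6)) @ rng.standard_normal((6, 6))  # fake bands
Xc = X - X.mean(axis=0)                       # center each band
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained per component:", np.round(explained, 3))
```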

  10. TRACER. A new approach to comparative modeling that combines threading with free-space conformational sampling.

    PubMed

    Trojanowski, Sebastian; Rutkowska, Aleksandra; Kolinski, Andrzej

    2010-01-01

    A new approach to comparative modeling of proteins, TRACER, is described and benchmarked against classical modeling procedures. The new method unifies true three-dimensional threading with coarse-grained sampling of the query protein's conformational space. An initial sequence alignment of the query protein with a template is not required, although a template needs to be identified in some way. The template is used as a multi-featured fuzzy three-dimensional scaffold. The conformational search for the query protein is guided by the intrinsic force field of the coarse-grained modeling engine CABS and by compatibility with the template scaffold. During Replica Exchange Monte Carlo simulations, the model chain representing the query protein finds the best possible structural alignment with the template chain while also optimizing the intra-protein interactions as approximated by the knowledge-based force field of CABS. A benchmark on a representative set of query/template pairs of varying degrees of sequence similarity showed that the new method allows meaningful comparative modeling even in the regime of marginal or non-existent sequence similarity. Thus, the new approach significantly extends the applicability of comparative modeling.
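
    A sketch of the replica-exchange (parallel tempering) swap rule that drives the kind of conformational search described above; the temperatures and energies are random placeholders rather than CABS force-field values.

```python
# Replica-exchange (parallel tempering) swap move; energies and temperatures
# are random placeholders, not CABS force-field values.
import numpy as np

rng = np.random.default_rng(1)
temps = np.array([1.0, 1.3, 1.7, 2.2])            # replica temperatures
energies = rng.normal(-100.0, 5.0, temps.size)    # current replica energies

for i in range(temps.size - 1):                   # attempt neighbor swaps
    delta = (1 / temps[i] - 1 / temps[i + 1]) * (energies[i] - energies[i + 1])
    if rng.random() < np.exp(min(0.0, delta)):    # Metropolis criterion
        energies[[i, i + 1]] = energies[[i + 1, i]]
        print(f"swapped replicas {i} and {i + 1}")
```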

  11. A Quantitative Proteomics Approach to Clinical Research with Non-Traditional Samples

    PubMed Central

    Licier, Rígel; Miranda, Eric; Serrano, Horacio

    2016-01-01

    The proper handling of samples to be analyzed by mass spectrometry (MS) can guarantee excellent results and a greater depth of analysis when working in quantitative proteomics. This is critical when trying to assess non-traditional sources such as ear wax, saliva, vitreous humor, aqueous humor, tears, nipple aspirate fluid, breast milk/colostrum, cervical-vaginal fluid, nasal secretions, broncho-alveolar lavage fluid, and stools. We intend to provide the investigator with relevant aspects of quantitative proteomics and to recognize the most recent clinical research work conducted with atypical samples and analyzed by quantitative proteomics. Having as reference the most recent and different approaches used with non-traditional sources allows us to compare new strategies in the development of novel experimental models. On the other hand, these references help us to contribute significantly to the understanding of the proportions of proteins in different proteomes of clinical interest and may lead to potential advances in the emerging field of precision medicine. PMID:28248241

  12. A linear sampling approach to inverse elastic scattering in piecewise-homogeneous domains

    NASA Astrophysics Data System (ADS)

    Guzina, Bojan B.; Madyarov, Andrew I.

    2007-08-01

    The focus of this study is a 3D inverse scattering problem underlying non-invasive reconstruction of piecewise-homogeneous (PH) defects in a layered semi-infinite solid from near-field, surface elastic waveforms. The solution approach revolves around the use of the Green's function for the layered reference domain and a generalization of the linear sampling method to deal with the featured class of PH configurations. For a rigorous treatment of the full-waveform integral equation that is used as a basis for obstacle reconstruction, the developments include an extension of Holmgren's uniqueness theorem to piecewise-homogeneous domains and an in-depth analysis, employing the method of topological sensitivity, of the situation when the sampling point is outside the support of the obstacle. Owing to the ill-posed nature of the featured integral equation, a stable approximate solution is sought via Tikhonov regularization. A set of numerical examples is included to demonstrate the feasibility of 3D obstacle reconstruction when the defects are buried in a multi-layered elastic solid.
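
    A generic sketch of the Tikhonov-regularized solve at the core of linear sampling methods, where a small ||g_z|| flags sampling points z inside the obstacle; the operator and right-hand side are random placeholders, not the elastodynamic Green's function of the paper.

```python
# Tikhonov-regularized solve used by linear sampling: small ||g|| marks
# points inside the scatterer. Operator and data are random placeholders.
import numpy as np

def tikhonov_solve(F, rhs, alpha):
    """Stable approximate solution of F g = rhs."""
    n = F.shape[1]
    return np.linalg.solve(F.conj().T @ F + alpha * np.eye(n),
                           F.conj().T @ rhs)

rng = np.random.default_rng(5)
F = rng.standard_normal((40, 40)) + 1j * rng.standard_normal((40, 40))
rhs = rng.standard_normal(40)                 # stand-in test-function values
g = tikhonov_solve(F, rhs, alpha=1e-3)
print("indicator 1/||g|| =", 1.0 / np.linalg.norm(g))
```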

  13. Matrix compatible solid phase microextraction coating, a greener approach to sample preparation in vegetable matrices.

    PubMed

    Naccarato, Attilio; Pawliszyn, Janusz

    2016-09-01

    This work proposes the novel PDMS/DVB/PDMS fiber as a greener strategy for analysis by direct immersion solid phase microextraction (SPME) in vegetables. SPME is an established sample preparation approach that has not yet been adequately explored for food analysis in direct immersion mode due to the limitations of the available commercial coatings. The robustness and endurance of this new coating were investigated by direct immersion extractions in raw blended vegetables without any further sample preparation steps. The PDMS/DVB/PDMS coating exhibited superior features related to the capability of the external PDMS layer to protect the commercial coating, and showed improvements in terms of extraction capability and in the cleanability of the coating surface. In addition to having contributed to the recognition of the superior features of this new fiber concept before commercialization, the outcomes of this work serve to confirm advancements in the matrix compatibility of the PDMS-modified fiber, and open new prospects for the development of greener high-throughput analytical methods in food analysis using solid phase microextraction in the near future.

  14. A Quantitative Proteomics Approach to Clinical Research with Non-Traditional Samples.

    PubMed

    Licier, Rígel; Miranda, Eric; Serrano, Horacio

    2016-10-17

    The proper handling of samples to be analyzed by mass spectrometry (MS) can guarantee excellent results and a greater depth of analysis when working in quantitative proteomics. This is critical when trying to assess non-traditional sources such as ear wax, saliva, vitreous humor, aqueous humor, tears, nipple aspirate fluid, breast milk/colostrum, cervical-vaginal fluid, nasal secretions, broncho-alveolar lavage fluid, and stools. We intend to provide the investigator with relevant aspects of quantitative proteomics and to recognize the most recent clinical research work conducted with atypical samples and analyzed by quantitative proteomics. Having as reference the most recent and different approaches used with non-traditional sources allows us to compare new strategies in the development of novel experimental models. On the other hand, these references help us to contribute significantly to the understanding of the proportions of proteins in different proteomes of clinical interest and may lead to potential advances in the emerging field of precision medicine.

  15. A new insert sample approach to paper spray mass spectrometry: a paper substrate with paraffin barriers.

    PubMed

    Colletes, T C; Garcia, P T; Campanha, R B; Abdelnur, P V; Romão, W; Coltro, W K T; Vaz, B G

    2016-03-07

    The analytical performance of paper spray (PS) using a new insert sample approach based on paper with paraffin barriers (PS-PB) is presented. The paraffin barrier is made using a simple, fast and cheap method based on the stamping of paraffin onto a paper surface. Typical operating conditions of paper spray, such as the solvent volume applied to the paper surface and the paper substrate type, are evaluated. A paper substrate with paraffin barriers shows better performance in the analysis of a range of typical analytes when compared to conventional PS-MS using normal paper (PS-NP) and PS-MS using paper with two rounded corners (PS-RC). PS-PB was applied to detect sugars and their inhibitors in sugarcane bagasse liquors from a second-generation ethanol process. Moreover, PS-PB performed excellently for the quantification of glucose in hydrolysis liquors, with good linearity (R(2) = 0.99) and limits of detection (2.77 mmol L(-1)) and quantification (9.27 mmol L(-1)). The results are better than for PS-NP and PS-RC. PS-PB also performed well compared with the HPLC-UV method for glucose quantification in hydrolysis liquor samples.

  16. Sample multiplexing with cysteine-selective approaches: cysDML and cPILOT.

    PubMed

    Gu, Liqing; Evans, Adam R; Robinson, Renã A S

    2015-04-01

    Cysteine-selective proteomics approaches simplify complex protein mixtures and improve the chance of detecting low-abundance proteins. It is possible that cysteinyl-peptide/protein enrichment methods could be coupled to isotopic labeling and isobaric tagging methods for quantitative proteomics analyses in as few as two or up to 10 samples, respectively. Here we present two novel cysteine-selective proteomics approaches: cysteine-selective dimethyl labeling (cysDML) and cysteine-selective combined precursor isotopic labeling and isobaric tagging (cPILOT). CysDML is a duplex precursor quantification technique that couples cysteinyl-peptide enrichment with on-resin stable-isotope dimethyl labeling. Cysteine-selective cPILOT is a novel 12-plex workflow based on cysteinyl-peptide enrichment, on-resin stable-isotope dimethyl labeling, and iodoTMT tagging on cysteine residues. To demonstrate the broad applicability of the approaches, we applied cysDML and cPILOT methods to liver tissues from an Alzheimer's disease (AD) mouse model and wild-type (WT) controls. From the cysDML experiments, an average of 850 proteins were identified and 594 were quantified, whereas from the cPILOT experiment, 330 and 151 proteins were identified and quantified, respectively. Overall, 2259 unique total proteins were detected from both cysDML and cPILOT experiments. There is tremendous overlap in the proteins identified and quantified between both experiments, and many proteins have AD/WT fold-change values that are within ~20% error. A total of 65 statistically significant proteins are differentially expressed in the liver proteome of AD mice relative to WT. The performance of cysDML and cPILOT is demonstrated, and the advantages and limitations of using multiple duplex experiments versus a single 12-plex experiment are highlighted.

  17. A Markov chain Monte Carlo with Gibbs sampling approach to anisotropic receiver function forward modeling

    NASA Astrophysics Data System (ADS)

    Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.

    2016-10-01

    Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach to the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ~20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe tradeoffs - an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.

  18. A Markov chain Monte Carlo with Gibbs sampling approach to anisotropic receiver function forward modeling

    NASA Astrophysics Data System (ADS)

    Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.

    2017-01-01

    Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper-mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach for the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ~20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs - an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.
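
    A generic Metropolis-within-Gibbs sketch of the kind of sampler described above, scanning one model parameter at a time; the log-posterior below is an invented stand-in for the receiver-function misfit, not the authors' forward model.

```python
# Metropolis-within-Gibbs sketch: update one parameter at a time. The
# log_posterior is an invented stand-in for the receiver-function misfit.
import numpy as np

rng = np.random.default_rng(2)

def log_posterior(theta):
    # placeholder: peaked at assumed "true" values (Vs, depth, anisotropy)
    return -0.5 * np.sum((theta - np.array([4.0, 30.0, 0.05])) ** 2)

theta = np.array([3.0, 20.0, 0.0])       # starting model
steps = np.array([0.2, 1.0, 0.01])       # per-parameter proposal widths
chain = []
for _ in range(5000):
    for k in range(theta.size):          # Gibbs scan over parameters
        prop = theta.copy()
        prop[k] += steps[k] * rng.standard_normal()
        if np.log(rng.random()) < log_posterior(prop) - log_posterior(theta):
            theta = prop
    chain.append(theta.copy())
print("posterior means:", np.mean(chain, axis=0))
```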

  19. Sample Multiplexing with Cysteine-Selective Approaches: cysDML and cPILOT

    NASA Astrophysics Data System (ADS)

    Gu, Liqing; Evans, Adam R.; Robinson, Renã A. S.

    2015-04-01

    Cysteine-selective proteomics approaches simplify complex protein mixtures and improve the chance of detecting low-abundance proteins. It is possible that cysteinyl-peptide/protein enrichment methods could be coupled to isotopic labeling and isobaric tagging methods for quantitative proteomics analyses in as few as two or up to 10 samples, respectively. Here we present two novel cysteine-selective proteomics approaches: cysteine-selective dimethyl labeling (cysDML) and cysteine-selective combined precursor isotopic labeling and isobaric tagging (cPILOT). CysDML is a duplex precursor quantification technique that couples cysteinyl-peptide enrichment with on-resin stable-isotope dimethyl labeling. Cysteine-selective cPILOT is a novel 12-plex workflow based on cysteinyl-peptide enrichment, on-resin stable-isotope dimethyl labeling, and iodoTMT tagging on cysteine residues. To demonstrate the broad applicability of the approaches, we applied cysDML and cPILOT methods to liver tissues from an Alzheimer's disease (AD) mouse model and wild-type (WT) controls. From the cysDML experiments, an average of 850 proteins were identified and 594 were quantified, whereas from the cPILOT experiment, 330 and 151 proteins were identified and quantified, respectively. Overall, 2259 unique total proteins were detected from both cysDML and cPILOT experiments. There is tremendous overlap in the proteins identified and quantified between both experiments, and many proteins have AD/WT fold-change values that are within ~20% error. A total of 65 statistically significant proteins are differentially expressed in the liver proteome of AD mice relative to WT. The performance of cysDML and cPILOT is demonstrated, and the advantages and limitations of using multiple duplex experiments versus a single 12-plex experiment are highlighted.

  20. The Development of the Alpha-Omega Completed Sentence Form (AOCSF): An Instrument to Aid in the Measurement, Identification, and Assessment of an Individual's "Adaptational Approach(es)" to the Stressful Event of Death and Related Issues.

    ERIC Educational Resources Information Center

    Klein, Ronald; And Others

    The Alpha Omega Completed Sentence Form (AOCSF) was developed to identify and measure a person's adaptational approaches to information concerning their own death or the possible death of a significant other. In contrast to the Kubler-Ross stage theory, the adaptational approach recognizes a person's capacity to assimilate new information which…

  1. Scheduled oil sampling: A proactive approach towards pollution prevention and waste minimization

    SciTech Connect

    Reece, C.; Zirker, L.

    1995-11-01

    The Waste Reduction Operations Complex (WROC) at the Idaho National Engineering Laboratory (INEL) maintains an emergency fire protection system which provides fire water during emergency conditions. The diesel engine driving this system receives regular preventative maintenance (PM) and servicing. The Waste Minimization Plan for WROC requires that all systems and processes be given a regular assessment to identify any Pollution Prevention (P2) or Waste Minimization (Waste Min.) activities. The WROC Maintenance group has implemented a proactive best management practice (BMP) that reflects this P2/Waste Min. awareness. The diesel engine is operated for 30 minutes each week to maintain its readiness. A typical owner's manual for industrial engines requires that the oil be changed every 100 hours of operation or every 6 months; only 13 hours of operation occur during the 6 months before the required oil change, which alone would not warrant changing the oil. The WROC proactive approach to this problem is to perform an annual Scheduled Oil Sampling (SOS). An 8-ounce sample of oil is obtained and sent to an SOS lab. The SOS lab analyzes the condition (breakdown) of the oil, provides a detailed analysis of metal particulates (from engine wear), and checks for impurities in the system, such as sulphur, water, coolant, and fuel. The oil is changed only when the sampling results indicate that an oil change is necessary. The actual costs of the oil, filters, and labor far exceed the cost of performing the SOS. The projected cost savings after 8 years is about $12,000 in labor, oil changing costs, and hazardous waste analysis.

  2. A Bayesian cost-benefit approach to the determination of sample size in clinical trials.

    PubMed

    Kikuchi, Takashi; Pezeshk, Hamid; Gittins, John

    2008-01-15

    Current practice for sample size computations in clinical trials is largely based on frequentist or classical methods. These methods have the drawback of requiring a point estimate of the variance of the treatment effect and are based on arbitrary settings of type I and II errors. They also do not directly address the question of achieving the best balance between the cost of the trial and the possible benefits from using the new treatment, and fail to consider the important fact that the number of users depends on the evidence for improvement compared with the current treatment. Our approach, Behavioural Bayes (or BeBay for short), assumes that the number of patients switching to the new medical treatment depends on the strength of the evidence provided by clinical trials, and takes a value between zero and the number of potential patients. The better a new treatment, the more patients will want to switch to it, and the greater the benefit obtained. We define the optimal sample size to be the sample size that maximizes the expected net benefit resulting from a clinical trial. Gittins and Pezeshk (Drug Inf. Control 2000; 34:355-363; The Statistician 2000; 49(2):177-187) used a simple form of benefit function, assumed paired comparisons between two medical treatments, and took the variance of the treatment effect to be known. We generalize this setting by introducing a logistic benefit function, and by extending to the more usual unpaired case without assuming the variance to be known.
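
    A highly simplified sketch of choosing the sample size that maximizes expected net benefit with a logistic uptake curve, in the spirit of the BeBay idea; the prior, costs, market size, and uptake curve are all invented for illustration and are not the authors' specification.

```python
# Toy expected-net-benefit maximization over n (all numbers invented).
import numpy as np

rng = np.random.default_rng(0)

def expected_net_benefit(n, sims=20_000, cost_per_patient=1e3,
                         market=1e6, benefit_per_user=10.0):
    delta = rng.normal(0.3, 0.2, sims)        # prior on treatment effect
    se = np.sqrt(2.0 / n)                     # s.e. of two-arm estimate
    z = (delta + se * rng.standard_normal(sims)) / se
    switchers = market / (1 + np.exp(-(z - 1.96)))  # logistic uptake
    return (benefit_per_user * switchers).mean() - cost_per_patient * 2 * n

ns = np.arange(10, 500, 10)
enb = [expected_net_benefit(int(n)) for n in ns]
print("optimal n per arm:", ns[int(np.argmax(enb))])
```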

  3. Methodologies for the extraction of phenolic compounds from environmental samples: new approaches.

    PubMed

    Mahugo Santana, Cristina; Sosa Ferrera, Zoraida; Esther Torres Padrón, M; Juan Santana Rodríguez, José

    2009-01-09

    Phenolic derivatives are among the most important contaminants present in the environment. These compounds are used in several industrial processes to manufacture chemicals such as pesticides, explosives, drugs and dyes. They are also used in the bleaching process of paper manufacturing. Apart from these sources, phenolic compounds have substantial applications in agriculture as herbicides, insecticides and fungicides. However, phenolic compounds are not only generated by human activity; they are also formed naturally, e.g., during the decomposition of leaves or wood. As a result of these applications, they are found in soils and sediments, and this often leads to wastewater and ground water contamination. Owing to their high toxicity and persistence in the environment, both the US Environmental Protection Agency (EPA) and the European Union have included some of them in their lists of priority pollutants. Current standard methods for the analysis of phenolic compounds in water samples are based on liquid-liquid extraction (LLE), while Soxhlet extraction is the most used technique for isolating phenols from solid matrices. However, these techniques require extensive cleanup procedures that are time-intensive and involve expensive and hazardous organic solvents, which are undesirable for health and disposal reasons. In recent years, the use of new methodologies such as solid-phase extraction (SPE) and solid-phase microextraction (SPME) has increased for the extraction of phenolic compounds from liquid samples. In the case of solid samples, microwave-assisted extraction (MAE) has been demonstrated to be an efficient technique for the extraction of these compounds. In this work we review the methods developed for the extraction and determination of phenolic derivatives in different types of environmental matrices such as water, sediments and soils. Moreover, we present a new approach in the use of micellar media coupled with the SPME process for the extraction of phenolic compounds.

  4. Staphylococcus aureus surface proteins involved in adaptation to oxacillin identified using a novel cell shaving approach.

    PubMed

    Solis, Nestor; Parker, Benjamin L; Kwong, Stephen M; Robinson, Gareth; Firth, Neville; Cordwell, Stuart J

    2014-06-06

    Staphylococcus aureus is a Gram-positive pathogen responsible for a variety of infections, and some strains are resistant to virtually all classes of antibiotics. Cell shaving proteomics, using a novel probability scoring algorithm, was used to compare the surfaceomes of the methicillin-resistant, laboratory-adapted S. aureus COL strain with a COL strain adapted in vitro to high levels of oxacillin (APT). APT displayed altered cell morphology compared with COL and increased aggregation in biofilm assays. Increased resistance to β-lactam antibiotics was observed, but adaptation to oxacillin did not confer multidrug resistance. Analysis of the S. aureus COL and APT surfaceomes identified 150 proteins at a threshold determined by the scoring algorithm. Proteins unique to APT included the LytR-CpsA-Psr (LCP) domain-containing MsrR and SACOL2302. Quantitative RT-PCR showed increased expression of sacol2302 in APT grown with oxacillin (>6-fold compared with COL). Overexpression of sacol2302 in COL to levels consistent with APT (+ oxacillin) did not influence biofilm formation or β-lactam resistance. Proteomics using iTRAQ and LC-MS/MS identified 1323 proteins (~50% of the theoretical S. aureus proteome), and cluster analysis demonstrated elevated APT abundances of LCP proteins, capsule and peptidoglycan biosynthesis proteins, and proteins involved in wall remodelling. Adaptation to oxacillin also induced urease proteins, which maintained culture pH compared with COL. These results show that S. aureus modifies surface architecture in response to antibiotic adaptation.

  5. Optimal Unified Approach for Rare-Variant Association Testing with Application to Small-Sample Case-Control Whole-Exome Sequencing Studies

    PubMed Central

    Lee, Seunggeun; Emond, Mary J.; Bamshad, Michael J.; Barnes, Kathleen C.; Rieder, Mark J.; Nickerson, Deborah A.; Christiani, David C.; Wurfel, Mark M.; Lin, Xihong

    2012-01-01

    We propose in this paper a unified approach for testing the association between rare variants and phenotypes in sequencing association studies. This approach maximizes power by adaptively using the data to optimally combine the burden test and the nonburden sequence kernel association test (SKAT). Burden tests are more powerful when most variants in a region are causal and the effects are in the same direction, whereas SKAT is more powerful when a large fraction of the variants in a region are noncausal or the effects of causal variants are in different directions. The proposed unified test maintains the power in both scenarios. We show that the unified test corresponds to the optimal test in an extended family of SKAT tests, which we refer to as SKAT-O. The second goal of this paper is to develop a small-sample adjustment procedure for the proposed methods for the correction of conservative type I error rates of SKAT family tests when the trait of interest is dichotomous and the sample size is small. Both small-sample-adjusted SKAT and the optimal unified test (SKAT-O) are computationally efficient and can easily be applied to genome-wide sequencing association studies. We evaluate the finite sample performance of the proposed methods using extensive simulation studies and illustrate their application using the acute-lung-injury exome-sequencing data of the National Heart, Lung, and Blood Institute Exome Sequencing Project. PMID:22863193
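
    A schematic of the family of statistics SKAT-O searches over, a convex combination of the SKAT and burden statistics on a grid of rho values; genotypes, weights, and null-model residuals below are simulated placeholders, and the mixture-of-chi-square p-value machinery of the real method is omitted.

```python
# Schematic SKAT-O statistic family Q_rho = (1-rho)*Q_SKAT + rho*Q_burden;
# genotypes, weights, and residuals are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
n, m = 500, 20                               # subjects, rare variants
G = rng.binomial(2, 0.01, size=(n, m)).astype(float)
w = np.ones(m)                               # variant weights (assumed flat)
resid = rng.standard_normal(n)               # null-model residuals y - yhat

score = w * (G.T @ resid)                    # per-variant score statistics
q_skat = np.sum(score ** 2)                  # SKAT: sum of squared scores
q_burden = np.sum(score) ** 2                # burden: squared sum of scores

for rho in [0.0, 0.25, 0.5, 0.75, 1.0]:      # grid searched by SKAT-O
    print(rho, (1 - rho) * q_skat + rho * q_burden)
```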

  6. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    NASA Astrophysics Data System (ADS)

    Péron, Stéphanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, with each octree leaf node defining a structured Cartesian block. This makes it possible to take into account the large-scale discrepancies in resolution between the different bodies involved in the simulation, with minimal memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first generates Adaptive Mesh Refinement (AMR) type grid systems, and the second generates an abutting or minimally overlapping Cartesian grid set. We also introduce an algorithm to control the number of points at each adaptation, which automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing-tip vortex computation assesses the capability of the method to accurately capture the flow features.
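
    A toy sketch of the octree-to-Cartesian idea: cubes are refined recursively near a "body" point and every leaf becomes one structured block; the refinement criterion, depth, and block size are invented for illustration, not the paper's algorithm.

```python
# Toy octree refinement near a "body" point; every leaf becomes one
# structured Cartesian block. Criterion and depth are invented.
import numpy as np

def build_leaves(origin, size, body, level, max_level):
    """Recursively refine cubes near `body`; return leaf (origin, size)."""
    near = np.linalg.norm(origin + size / 2 - body) < size
    if level < max_level and near:
        half = size / 2
        leaves = []
        for k in range(8):                          # the 8 child octants
            shift = np.array([(k >> i) & 1 for i in range(3)]) * half
            leaves += build_leaves(origin + shift, half, body,
                                   level + 1, max_level)
        return leaves
    return [(origin, size)]                         # leaf -> Cartesian block

leaves = build_leaves(np.zeros(3), 1.0, np.array([0.3, 0.3, 0.3]), 0, 4)
print(len(leaves), "leaf blocks; each would carry an n^3 structured grid")
```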

  7. Approaching sign language test construction: adaptation of the German sign language receptive skills test.

    PubMed

    Haug, Tobias

    2011-01-01

    There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired in preschool- and school-aged children (4-8 years old) is urgently needed. Using the British Sign Language Receptive Skills Test, that has been standardized and has sound psychometric properties, as a template for adaptation thus provides a starting point for tests of a sign language that is less documented, such as DGS. This article makes a novel contribution to the field by examining linguistic, cultural, and methodological issues in the process of adapting a test from the source language to the target language. The adapted DGS test has sound psychometric properties and provides the basis for revision prior to standardization.

  8. Evaluation of Online/Offline Image Guidance/Adaptation Approaches for Prostate Cancer Radiation Therapy

    SciTech Connect

    Qin, An; Sun, Ying; Liang, Jian; Yan, Di

    2015-04-01

    Purpose: To evaluate online/offline image-guided/adaptive treatment techniques for prostate cancer radiation therapy with daily cone-beam CT (CBCT) imaging. Methods and Materials: Three treatment techniques were evaluated retrospectively using daily pre- and posttreatment CBCT images on 22 prostate cancer patients. Prostate, seminal vesicles (SV), rectal wall, and bladder were delineated on all CBCT images. For each patient, a pretreatment intensity modulated radiation therapy plan with clinical target volume (CTV) = prostate + SV and planning target volume (PTV) = CTV + 3 mm was created. The 3 treatment techniques were as follows: (1) Daily Correction: The pretreatment intensity modulated radiation therapy plan was delivered after online CBCT imaging and position correction; (2) Online Planning: Daily online inverse plans with 3-mm CTV-to-PTV margin were created using online CBCT images, and delivered; and (3) Hybrid Adaptation: Daily Correction plus an offline adaptive inverse planning step performed after the first week of treatment. The adaptive plan was delivered for all remaining 15 fractions. Treatment dose for each technique was constructed using the daily posttreatment CBCT images via deformable image registration. Evaluation was performed using treatment dose distributions in the target and critical organs. Results: Treatment equivalent uniform dose (EUD) for the CTV was within [85.6%, 100.8%] of the pretreatment planned target EUD for Daily Correction; [98.7%, 103.0%] for Online Planning; and [99.2%, 103.4%] for Hybrid Adaptation. Eighteen percent of the 22 patients in Daily Correction had a target dose deficiency >5%. For rectal wall, the mean ± SD of the normalized EUD was 102.6% ± 2.7% for Daily Correction, 99.9% ± 2.5% for Online Planning, and 100.6% ± 2.1% for Hybrid Adaptation. The mean ± SD of the normalized bladder EUD was 108.7% ± 8.2% for Daily Correction, 92.7% ± 8.6% for Online Planning, and 89.4% ± 10.8% for Hybrid Adaptation.
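
    For context, a sketch of the generalized equivalent uniform dose that the evaluation relies on, EUD = (sum_i v_i * d_i^a)^(1/a); the dose bins, volume fractions, and the parameter a below are invented, not values from the study.

```python
# Generalized EUD, EUD = (sum_i v_i * d_i**a)**(1/a); all numbers invented.
import numpy as np

def gEUD(dose, volume_fractions, a):
    return np.sum(volume_fractions * dose ** a) ** (1.0 / a)

dose = np.array([60.0, 62.0, 58.0, 61.0])   # Gy, four dose bins (assumed)
v = np.array([0.4, 0.3, 0.2, 0.1])          # fractional volumes (assumed)
print("target gEUD (a = -10):", gEUD(dose, v, -10.0))
```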

  9. Testing Set-Point Theory in a Swiss National Sample: Reaction and Adaptation to Major Life Events

    PubMed Central

    Anusic, Ivana; Yap, Stevie C. Y.; Lucas, Richard E.

    2014-01-01

    Set-point theory posits that individuals react to the experience of major life events, but quickly adapt back to pre-event baseline levels of subjective well-being in the years following the event. A large, nationally representative panel study of Swiss households was used to examine set-point theory by investigating the extent of adaptation following the experience of marriage, childbirth, widowhood, unemployment, and disability. Our results demonstrate that major life events are associated with marked change in life satisfaction and, for some events (e.g., marriage, disability), these changes are relatively long lasting even when accounting for normative, age related change. PMID:25419036

  10. Visual Adaptation

    PubMed Central

    Webster, Michael A.

    2015-01-01

    Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985

  11. A Feedforward Adaptive Controller to Reduce the Imaging Time of Large-Sized Biological Samples with a SPM-Based Multiprobe Station

    PubMed Central

    Otero, Jorge; Guerrero, Hector; Gonzalez, Laura; Puig-Vidal, Manel

    2012-01-01

    The time required to image large samples is an important limiting factor in SPM-based systems. In multiprobe setups, especially when working with biological samples, this drawback can make it impossible to conduct certain experiments. In this work, we present a feedforward controller based on bang-bang and adaptive controls. The controls are based on the difference between the maximum speeds that can be used for imaging depending on the flatness of the sample zone. Topographic images of Escherichia coli bacteria samples were acquired using the implemented controllers. Results show that scanning faster in the flat zones, rather than using a constant scanning speed for the whole image, speeds up the imaging of large samples by up to a factor of 4. PMID:22368491
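
    A toy version of the speed-scheduling idea: scan fast over flat zones and slow over rough ones, bang-bang style; the height profile, speeds, and slope cutoff are invented placeholders, not the paper's controller.

```python
# Bang-bang speed scheduling over a synthetic 1-um scan line: fast where
# flat, slow where rough. Profile, speeds, and cutoff are placeholders.
import numpy as np

x = np.linspace(0.0, 1.0, 1000)                   # position (um)
z = 0.05 * np.sin(40 * np.pi * x) * (x > 0.5)     # flat, then corrugated
slope = np.abs(np.gradient(z, x))
v_fast, v_slow, cutoff = 40.0, 10.0, 1.0          # um/s speeds, slope cutoff
speed = np.where(slope < cutoff, v_fast, v_slow)  # bang-bang choice
dx = x[1] - x[0]
print(f"adaptive line time: {np.sum(dx / speed):.4f} s; "
      f"constant-slow: {1.0 / v_slow:.4f} s")
```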

  12. Peers as Resources for Learning: A Situated Learning Approach to Adapted Physical Activity in Rehabilitation

    ERIC Educational Resources Information Center

    Standal, Oyvind F.; Jespersen, Ejgil

    2008-01-01

    The purpose of this study was to investigate the learning that takes place when people with disabilities interact in a rehabilitation context. Data were generated through in-depth interviews and close observations in a two-and-one-half-week-long rehabilitation program, where the participants learned both wheelchair skills and adapted physical…

  13. Values and Subjective Mental Health in America: A Social Adaptation Approach.

    ERIC Educational Resources Information Center

    Kahle, Lynn R.; And Others

    Although surveys of mental health involve some controversy, a significant relationship between values and mental health appears to exist. To study the adaptation of individuals with alternative values to their psychological worlds, over 2,000 adults identified their most important values. Alcohol abuse, drug abuse, dizziness, anxiety, and general…

  14. Can Approaches to Research in Art and Design Be Beneficially Adapted for Research into Higher Education?

    ERIC Educational Resources Information Center

    Trowler, Paul

    2013-01-01

    This paper examines the research practices in Art and Design that are distinctively different from those common in research into higher education outside those fields. It considers whether and what benefit could be derived from their adaptation by the latter. The paper also examines the factors that are conducive and obstructive to adaptive…

  15. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    ERIC Educational Resources Information Center

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  16. A Multiple Objective Test Assembly Approach for Exposure Control Problems in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.; Verschoor, Angela J.; Eggen, Theo J. H. M.

    2010-01-01

    Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems might result in item compromise, or point to a waste of investment. The exposure control problem can be viewed as a test assembly problem with multiple objectives. Information in the test has…

  17. Difference, Adapted Physical Activity and Human Development: Potential Contribution of Capabilities Approach

    ERIC Educational Resources Information Center

    Silva, Carla Filomena; Howe, P. David

    2012-01-01

    This paper is a call to Adapted Physical Activity (APA) professionals to increase the reflexive nature of their practice. Drawing upon Foucault's concept of governmentality (1977), APA action may work against its own publicized goals of empowerment and self-determination. To highlight these inconsistencies, we will draw upon historical and social…

  18. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function

    PubMed Central

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A.; Lu, Zhong-Lin; Myung, Jay I.

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. Standard entropy-based Bayesian adaptive testing procedures address this issue by selecting the most informative stimulus in sequential experimental trials; noninformative, diffuse priors are commonly used in such tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inferences about contrast sensitivity, compared to the quick contrast sensitivity function method (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. The results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by